I have one query with multiple execution plans; the memory granted to one plan is huge compared to the second one.
The issue occurs when the outer table of the Nested Loop join has a
predicate that filters the result to a small input, but the batch sort
appears to be using an estimate for cardinality that is equivalent to
the entire outer table. This can result in a perceived excessive
memory grant which in a very concurrent server can have several
side-effects, like OOM conditions, memory pressure for plan cache
eviction, or unexpected RESOURCE_SEMAPHORE waits. We have seen how a
single query that matches this pattern can actually get several GBs of
granted memory on high-end machines (1TB+ RAM).
One option until now would be to disable this feature globally using
Trace Flag 2340, as described in KB 2801413. However, in SQL Server
2016 RC0 we have changed the behavior to maintain the advantage of the
optimization, but now the max grant limit is based on the available
memory grant space. This improvement also translates into better
scalability, in the sense more queries can be executed with a smaller
memory footprint. We are looking at back porting this behavior to an
upcoming have ported this behavior to SQL Server 2014 Service Pack 2,
and as usual deliver added value to in-market versions.
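For reference, Trace Flag 2340 (which disables the batch sort optimization described above) can be enabled either server-wide or for a single statement via QUERYTRACEON; the table names and predicate below are hypothetical placeholders, not objects from the actual workload:

```sql
-- Per-statement: disable the nested loop batch sort for this query only
-- (dbo.Orders, dbo.OrderLines, and the filter value are illustrative)
SELECT o.OrderID, l.LineTotal
FROM dbo.Orders AS o
INNER JOIN dbo.OrderLines AS l
    ON l.OrderID = o.OrderID
WHERE o.CustomerID = 42
OPTION (QUERYTRACEON 2340);

-- Server-wide alternative: affects all sessions until restart or TRACEOFF
DBCC TRACEON (2340, -1);
```

Note that OPTION (QUERYTRACEON ...) requires sysadmin permission, so in practice it is often applied through a plan guide rather than ad-hoc in application code.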
This is exactly what I am seeing; however, I am using SQL Server 2016 Enterprise.
These are the execution plans:
My questions are:
What is the reason for the two execution plans?
The optimizer is using the top execution plan. I forced it to use the lower plan, but after some time it switches back to the top one; is there any reason for that?
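Since SQL Server 2016 ships with Query Store, a plan can be forced persistently (surviving recompiles and plan cache evictions) rather than via a one-off hint; the `@query_id`/`@plan_id` values below are hypothetical and would need to be looked up first:

```sql
-- Locate the query and its plans in Query Store, with memory usage per plan
SELECT q.query_id, p.plan_id, rs.avg_query_max_used_memory
FROM sys.query_store_query AS q
JOIN sys.query_store_plan AS p
    ON p.query_id = q.query_id
JOIN sys.query_store_runtime_stats AS rs
    ON rs.plan_id = p.plan_id;

-- Force the cheaper plan (IDs here are placeholders)
EXEC sp_query_store_force_plan @query_id = 1, @plan_id = 1;
```

This assumes Query Store is enabled on the database; a forced plan sticks until it is unforced or becomes invalid (e.g. a referenced index is dropped).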
How do I fix this problem? This issue causes the application to crash (there were many
RESOURCE_SEMAPHORE waits and the application became unresponsive). Should I use
Trace Flag 2340?
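Two things worth checking while deciding: how large the grants actually are, and whether capping the grant per-query is enough. The DMV query below is standard; the MAX_GRANT_PERCENT hint is available in SQL Server 2016, and the placeholder query and the 10 percent cap are illustrative values, not a recommendation:

```sql
-- Inspect current memory grants and sessions waiting on RESOURCE_SEMAPHORE
-- (grant_time is NULL while a request is still waiting for its grant)
SELECT session_id, requested_memory_kb, granted_memory_kb,
       used_memory_kb, wait_time_ms, grant_time
FROM sys.dm_exec_query_memory_grants
ORDER BY requested_memory_kb DESC;

-- Per-query workaround: cap the grant instead of disabling the optimization
-- (dbo.Orders and the 10 percent figure are hypothetical)
SELECT o.OrderID
FROM dbo.Orders AS o
WHERE o.CustomerID = 42
OPTION (MAX_GRANT_PERCENT = 10);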
NOTE: I checked the XML and both plans have