I have a trigger (MS SQL Server) on TABLE_A that fires on update and looks basically like the following (I've changed the table/trigger names to simplify):
    CREATE TRIGGER [dbo].[TABLE_A_B_UPDATE] ON [dbo].[TABLE_A]
    FOR UPDATE
    AS
    IF UPDATE ( [ATTRIBUTES] )
    BEGIN
        BEGIN TRAN

        -- update the attributes of the TABLE_C records
        UPDATE C
        SET C.[ATTRIBUTES] = I.ATTRIBUTES
        FROM TABLE_C C
        INNER JOIN INSERTED I
            ON C.UNIQUE_ID = I.UNIQUE_ID
            AND C.USER_ID = I.USER_ID
        WHERE C.[ATTRIBUTES] & 4 = 0

        -- update the attributes of the matching TABLE_B records
        UPDATE B
        SET B.[ATTRIBUTES] = I.ATTRIBUTES
        FROM TABLE_C C
        INNER JOIN INSERTED I
            ON C.UNIQUE_ID = I.UNIQUE_ID
            AND C.USER_ID = I.USER_ID
        INNER JOIN [TABLE_B] B
            ON B.UNIQUE_ID = C.UNIQUE_ID
            AND B.USER_ID = C.USER_ID
        WHERE B.[ATTRIBUTES] & 4 = 0

        COMMIT TRAN
    END

(Note: the first UPDATE's WHERE clause originally referenced a stray alias `E`; it should be `C`, an artifact of renaming the tables for this post.)
Currently, by design, there is only ever ONE record in the INSERTED table (we update only one record at a time from a UI).
What we have discovered is that, as the number of records in TABLE_B increases, trigger performance degrades rapidly. For example, with around 12,000 records in TABLE_B, this update statement takes around 40 seconds (we established a timeout of 30 seconds). As I remove records from TABLE_B, performance gradually improves. Since this was unacceptable, I had to find a way to improve this update statement.
Through testing/profiling, I found that the problem was with the second update statement (update TABLE_B). The first update statement works without problem; if I change the second update statement to its equivalent SELECT statement, it also runs fast.
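To show what I mean by "its equivalent SELECT statement", here is roughly what I ran in place of the second UPDATE while profiling (this has to run inside the trigger, since INSERTED only exists there; the selected columns are just for illustration):

```sql
-- Rough SELECT equivalent of the slow second UPDATE, used only for profiling
SELECT B.UNIQUE_ID, B.USER_ID, I.ATTRIBUTES
FROM TABLE_C C
INNER JOIN INSERTED I
    ON C.UNIQUE_ID = I.UNIQUE_ID
    AND C.USER_ID = I.USER_ID
INNER JOIN [TABLE_B] B
    ON B.UNIQUE_ID = C.UNIQUE_ID
    AND B.USER_ID = C.USER_ID
WHERE B.[ATTRIBUTES] & 4 = 0
```

This SELECT returns immediately, while the UPDATE with the identical joins does not.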
The solution I found was to copy the single record from the INSERTED table into a #TEMP table and join on that instead. A table variable worked too, but its performance was terrible until I created an index on it. This immediately resolved the problem, and the update now runs almost instantaneously.
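For clarity, here is a sketch of that workaround as it sits inside the trigger (the temp-table name is mine; the columns are assumed from the trigger above):

```sql
-- Copy the single INSERTED row into a temp table...
SELECT UNIQUE_ID, USER_ID, ATTRIBUTES
INTO #UPDATED_ROW
FROM INSERTED

-- ...then run the same second UPDATE, joining on the temp table instead of INSERTED
UPDATE B
SET B.[ATTRIBUTES] = T.ATTRIBUTES
FROM TABLE_C C
INNER JOIN #UPDATED_ROW T
    ON C.UNIQUE_ID = T.UNIQUE_ID
    AND C.USER_ID = T.USER_ID
INNER JOIN [TABLE_B] B
    ON B.UNIQUE_ID = C.UNIQUE_ID
    AND B.USER_ID = C.USER_ID
WHERE B.[ATTRIBUTES] & 4 = 0

DROP TABLE #UPDATED_ROW
```

With the table-variable variant, the only difference was declaring `DECLARE @UPDATED_ROW TABLE (...)` with a primary key on (USER_ID, UNIQUE_ID) to get the index.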
My question is this: why did this solve the performance problem? Perhaps I am looking at this the wrong way, but I can't imagine why an index would be needed on a one-record table. I have read that the INSERTED table is created without any indexes, but it still seems odd to me that I should need one.
Thanks in advance!
EDIT: As pointed out, I forgot to mention some other relevant table structure tidbits.
TABLE_B indeed has a compound primary key/index created as follows:
    CONSTRAINT [TABLE_B] PRIMARY KEY CLUSTERED
    (
        [USER_ID] ASC,
        [UNIQUE_ID] ASC
    ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
            ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
TABLE_C has the same index as above. All indexes were rebuilt at the start of testing.
Both tables have additional triggers that fire as well; however, during testing I disabled these triggers to isolate where the performance hit was. Disabling all of the other triggers did not improve performance.
I'm not super savvy on execution plans for triggers, but as far as I can tell, you can view them from the profiler with the showplan option turned on. I believe this is the relevant plan:
I'm not sure what to make of this plan, other than noticing the 80% cost on that clustered index scan. Hopefully this is what was being asked for; if not, I can repost. Thanks again!