Should I use NOLOCK against #temp tables?
Back in September, I wrote about putting NOLOCK everywhere. I highlighted several of the things that can go wrong with its pervasive use, but admitted that it can have its place – as long as you understand the risks.
Recently, Kenneth Fisher (@sqlstudent144) blogged about NOLOCK as well, in a series starting with Tales of a DBA fed up with NOLOCK. Part 1. (While mostly in jest, Part 2 offers Policy-Based Management solutions in addition to the DDL trigger mentioned in Part 1 for unilaterally preventing NOLOCK or READ UNCOMMITTED from appearing in SQL Server modules.)
In Part 1, there was a comment thread that went basically like this:
Kenneth: […] while I agree there is no down side to using NOLOCK on a temp table, what's the up side?
Commenter: The reason to use it on temp tables, as always, is to reduce locking overhead.
This is not the first time I've heard this sentiment, but I'm not convinced there's enough overhead here to worry about, at least not in SQL Server 2014. NOLOCK has never been named accurately – it's not that no locks are taken at all, it's that the locks that would block other readers and writers are not taken (plenty of other locks still have to occur, including schema-stability locks). This effectively means that readers don't block other readers or writers and, while nobody can change the object's schema mid-read, it also means that some of the bad things mentioned in my previous post can happen.
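To see those schema-stability locks in action, here is a quick two-session sketch (dbo.SomeTable is a hypothetical name): even a NOLOCK reader waits behind a schema change, because its Sch-S lock is incompatible with the Sch-M lock the DDL takes.

```sql
-- Session 1: start a schema change and hold its Sch-M lock open
BEGIN TRANSACTION;
ALTER TABLE dbo.SomeTable ADD ExtraColumn int NULL; -- dbo.SomeTable is illustrative

-- Session 2: even WITH (NOLOCK), this SELECT waits, because the
-- schema-stability (Sch-S) lock it needs is incompatible with Sch-M
SELECT * FROM dbo.SomeTable WITH (NOLOCK);

-- Session 1: release the Sch-M lock, and session 2 can proceed
ROLLBACK;
```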
While NOLOCK can "help" queries against *permanent* tables, it does not quite have the same effect against *temporary* tables – SQL Server knows that it doesn't have to worry about blocking other readers or writers, because #temp tables are scoped to a single, specific session. So it takes far fewer locks in the first place, with or without a NOLOCK hint.
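The session scoping is easy to demonstrate for yourself (object names here are illustrative): a local #temp table created in one session simply doesn't exist in any other session, so there are no other readers or writers to protect against.

```sql
-- Session 1:
CREATE TABLE #t (id int);

-- Session 2:
SELECT * FROM #t;
-- Msg 208: Invalid object name '#t'.
```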
Let me illustrate with a quick example. We'll create a #temp table from sys.all_columns, then – inside a transaction – run a SELECT against that #temp table with trace flag 1200 enabled (plus 3604, to route its output to the client), and check sys.dm_tran_locks for any evidence of locking overhead. I'll do this both with and without a clustered index, so we can see whether a heap behaves any differently.
SET NOCOUNT ON;

SELECT * INTO #x FROM sys.all_columns;
--CREATE UNIQUE CLUSTERED INDEX x ON #x(object_id, column_id);

BEGIN TRANSACTION;

DBCC TRACEON(3604,1200,-1) WITH NO_INFOMSGS;

SELECT * FROM #x --WITH (NOLOCK);

DBCC TRACEOFF(3604,1200,-1) WITH NO_INFOMSGS;

SELECT resource_type, request_owner_type
  FROM sys.dm_tran_locks
  WHERE request_session_id = @@SPID;

ROLLBACK;
GO
DROP TABLE #x;
Now let's compare the differences in the TF1200 output. For the heap (no hints on the left, NOLOCK on the right):
Diff between TF1200 output for a heap with no hints (left) and NOLOCK (right)
And for the clustered index (again, no hints on the left, NOLOCK on the right):
Diff between TF1200 output for a clustered index with no hints (left) and NOLOCK (right)
I've highlighted the few, subtle differences in the nature of the locks taken (ignoring all the diffs due to different session and object IDs), and while I will concede that the locking patterns are not identical, there's nothing here that I would characterize as a significant difference in overhead. In both cases an intent shared lock is taken under read committed while it's a schema stability lock under read uncommitted; with the clustered index there's one additional shared lock.
Both with and without a clustered index, and both with and without the NOLOCK hint, the output from sys.dm_tran_locks is always the same in all four cases, showing no object-specific locks at the transaction level:
resource_type  request_owner_type
-------------  ----------------------------
DATABASE       SHARED_TRANSACTION_WORKSPACE
(If you compare the same output for the scripts using a permanent table instead of a #temp table, you will see a long series of IS locks in the version without NOLOCK, and you may spot differences in runtime and other metrics – especially under concurrency. But don't forget that, even though you got your results faster, they may be incomplete or inaccurate.)
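If you want to run that comparison yourself, a minimal variation of the script above – against a hypothetical permanent table named dbo.x – might look like this:

```sql
-- Permanent-table version of the test; dbo.x is an illustrative name.
SELECT * INTO dbo.x FROM sys.all_columns;
GO

BEGIN TRANSACTION;

SELECT * FROM dbo.x; -- second run: SELECT * FROM dbo.x WITH (NOLOCK);

-- Without NOLOCK you should see a long series of IS locks here;
-- with NOLOCK, they disappear.
SELECT resource_type, request_mode, request_owner_type
  FROM sys.dm_tran_locks
  WHERE request_session_id = @@SPID;

ROLLBACK;
GO
DROP TABLE dbo.x;
```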
Finally, we can also compare the plans and runtime metrics using SQL Sentry Plan Explorer, and see that we get the same plan and very similar runtime metrics such as duration and I/O. For the heap (no hints on the left, NOLOCK on the right):
Plans and runtime metrics for queries against a heap
And with a clustered index (again, no hints on the left, NOLOCK on the right):
Plans and runtime metrics for queries against a clustered index
You may be shocked to observe that the NOLOCK version of the query isn't any faster – because, deep down, aside from the minor differences highlighted above, the NOLOCK hint simply doesn't change anything.
So, I have to agree with Kenneth here: while there is no downside to using NOLOCK against a #temp table (since the data can't be changed underneath you by other sessions anyway), there is no tangible upside, either. To me they are just extra characters that provide a placebo effect, lulling the developer into believing the query will be faster. This is a substantial part of the problem with a blanket policy of applying NOLOCK everywhere: the assumption that it will make all queries faster, even where you haven't weighed the risks – and, in cases like #temp tables, where there are no benefits (or risks) at all.
To be fair to the commenter, they later said:
That, I can agree with. I just don't agree that there is any advantage to peppering all queries against #temp tables with NOLOCK, because I don't believe it significantly addresses any locking overhead at all, and you don't get any of the other benefits of NOLOCK, either.
(Again, these tests were performed against SQL Server 2014; I will revisit this topic for older versions as time allows.)