I went through the archive above, and was quite surprised to discover that I have never blogged explicitly about `NOLOCK`. Out on the forums, I typically refer to this hint as a "magic, pixie-dust turbo button." It may seem to make some queries faster, but at a significant cost that must be weighed. I'm not saying don't ever use the read uncommitted isolation level for any query; the "habit" I am talking about here is not that of using `NOLOCK` in an isolated scenario where the effects are known and deemed inconsequential, but rather that of slapping `NOLOCK` on every table mentioned in every query in the entire workload.
What are the effects you need to worry about? Well, others, like Andrew Kelly, have described this before, Paul White digs quite deep into it, these search results are quite telling, and Kendra Little even has a video about it… but I'll point them out anyway:
- "Dirty read" – this is the one most people are aware of; you can read data that has not been committed, and could be rolled back some time after you've read it – meaning you've read data that never technically existed.
- Missing rows – because of the way an allocation scan works, other transactions could move data you haven't read yet to an earlier location in the chain that you've already read, or add a new page behind the scan, meaning you won't see it at all.
- Reading rows twice – similarly, data that you've already read could be moved to a later location in the chain, meaning you will read it twice.
- Reading multiple versions of the same row – when using `READ UNCOMMITTED`, you can get a version of a row that never existed; for example, where you see some columns that have been changed by concurrent users, but you don't see their changes reflected in all columns. This can even happen within a single column (see a great example from Paul White).
- Index corruption – surely you are not using `NOLOCK` in `INSERT`/`UPDATE`/`DELETE` statements, but if you are, you should be aware that this syntax is deprecated and that it can cause index corruption, even in SQL Server 2014 RTM – see this tip for more information. Note that you should check for the hint in any views that you are trying to update, too.
- Read error – because the underlying data could be moved or deleted during your read, you could see this error:
Msg 601, Level 12, State 1
Could not continue scan with NOLOCK due to data movement.
Back in SQL Server 2000, this bug was supposedly fixed (see KB #815008), but only under a trace flag (9134) – and several customers have reported that it can still happen, even with the flag, and even in SQL Server 2014.
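The dirty-read effect at the top of that list is easy to reproduce yourself. Here is a minimal sketch, using a hypothetical `dbo.Accounts` table (the table and values are made up for illustration); run the two halves in two separate sessions:

```sql
-- Session 1: change a value but do not commit yet
BEGIN TRANSACTION;
UPDATE dbo.Accounts SET Balance = 0 WHERE AccountID = 1;
-- ...leave the transaction open...

-- Session 2: NOLOCK happily reads the uncommitted balance
SELECT Balance
FROM dbo.Accounts WITH (NOLOCK)
WHERE AccountID = 1;

-- Session 1: now roll back; session 2 has already reported
-- a balance that never officially existed
ROLLBACK TRANSACTION;
```

Without the hint, session 2 would have been blocked until session 1 committed or rolled back, and would never have seen the in-flight value.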
Maybe these effects are acceptable to you. Maybe you're using `NOLOCK` in scenarios where 100% accuracy isn't crucial (a rough ballpark of the number of messages posted to your forum today, a monitoring tool collecting aggregate metrics from DMVs) or where it can't really hurt (pulling data from a static auxiliary table like a numbers or calendar table). But maybe you aren't aware of the potential issues listed above, or don't believe they can happen to you. Let me assure you: they can. Not convinced? There's more! Please read on.
It really is too bad that they named this hint `NOLOCK`. The semantics of read uncommitted don't actually mean "take zero locks" – they mean "take no shared locks." I have seen many people surprised to see rows in `sys.dm_tran_locks` for a query that is running under read uncommitted. Here is a quick example, where we can see Sch-S (schema stability) and other locks taken out for an object, even though `NOLOCK` is used:

These aren't held for the duration of the transaction, but they are maintained while the query is running. What does this mean? Well, if you have lots of users running read-only queries under `NOLOCK`, anyone trying to "cut in line" and make changes to the table or its indexes can still be blocked by those Sch-S locks. It is a good thing to be conscious of this issue – `NOLOCK` in this case isn't exactly working as advertised.
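You can observe this for yourself while a long-running `NOLOCK` query executes in another session; a sketch (the session id 68 is just an example value – substitute the `@@SPID` of the session running the query):

```sql
-- While the NOLOCK query is still running in session 68,
-- inspect the locks that session currently holds:
SELECT resource_type, request_mode, request_status
FROM sys.dm_tran_locks
WHERE request_session_id = 68;

-- Expect an OBJECT-level Sch-S (schema stability) lock in the output;
-- DDL such as ALTER TABLE or index rebuilds on that object will be
-- blocked until the "lock-free" query finishes.
```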
There are other scenarios, too. The query writer doesn't always have control over whether the objects they are referencing will obey the current session's isolation level semantics. I gave an example recently regarding metadata helper functions like `OBJECT_NAME()` – a `NOLOCK` query can still be blocked because several of these functions enforce their own isolation level. You can also hit this if you reference your own functions or views with stronger hints, or only apply the hint to some of the tables involved in the query. To avoid the latter, and to make a later change to a better isolation level easier (more on this below), I always recommend using the session-level setting instead of the table-level hint:

```sql
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
```
If you insist on using the hint at the table level, you need to be very careful about the syntax. Compare the following, both of which are *allowed* syntax, but with markedly different behavior:

On the left, session 68 just used `FROM dbo.tablename NOLOCK` – as you can see from the results, `NOLOCK` was actually interpreted as a table alias, not a hint, and as a result the query took out shared and intent shared locks at the page and object level – not at all what the author likely expected. So please, always use the fully explicit form, `FROM dbo.tablename [AS alias] WITH (NOLOCK)`.
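To spell out the contrast (the table name is a placeholder):

```sql
-- Looks like a hint, but "NOLOCK" is parsed as a table alias;
-- the query runs at the session's default isolation level
-- (typically READ COMMITTED) and takes shared locks:
SELECT * FROM dbo.tablename NOLOCK;

-- Unambiguous forms: WITH (...) is always applied as a table hint:
SELECT * FROM dbo.tablename WITH (NOLOCK);
SELECT * FROM dbo.tablename AS t WITH (NOLOCK);
```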
You can use a different isolation level to avoid the issues listed at the beginning of this article, while still preventing readers from blocking writers and vice-versa. Once again, Kendra comes to the rescue with a great post explaining `READ COMMITTED SNAPSHOT`:
RCSI in particular is a great way to get the performance of `NOLOCK` without sacrificing accuracy. Don't just go turn this feature on, though; there is no such thing as a free lunch. You'll want to note the following, and make sure that you have thoroughly tested all of these scenarios before making the change:
- Sch-S locks still need to be taken even under RCSI.
- Snapshot isolation levels use row versioning in tempdb, so you really need to test the impact there.
- RCSI can't use efficient allocation order scans; you will see range scans instead.
- Paul White (@SQL_Kiwi) has some great posts you should read in his blog series on isolation levels.
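Once you have tested those scenarios, turning RCSI on is a single database-level setting; a sketch (`YourDatabase` is a placeholder, and `WITH ROLLBACK IMMEDIATE` will kick out any other active sessions, so schedule accordingly):

```sql
-- Requires briefly being the only active connection in the database;
-- ROLLBACK IMMEDIATE terminates other sessions' open transactions:
ALTER DATABASE YourDatabase
    SET READ_COMMITTED_SNAPSHOT ON
    WITH ROLLBACK IMMEDIATE;

-- Verify the setting took effect:
SELECT name, is_read_committed_snapshot_on
FROM sys.databases
WHERE name = N'YourDatabase';
```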
Since using the snapshot isolation levels is not, and should not be, a simple "switch it and forget it" operation, you may want to look into other alternatives to `NOLOCK`. Since this usually involves read-heavy queries, you could consider letting users read a copy of the data; your choices are limited only by the edition of SQL Server you're using and how stale the data is allowed to be. Some options off the top of my head:
| Solution | Granularity | Currency of data | Edition requirements | Effort / maintenance |
|---|---|---|---|---|
| Availability Group readable secondary | Database | Near real-time | Enterprise Edition | Medium |
| Database snapshot | Database | Snapshot frequency | Enterprise Edition | High |
| Replication | Object | Near real-time | Most editions | High |
| Log shipping | Database | Log backup frequency | Any edition | Low |
| Application-level caching | Object | Implementation dependent | Any edition | Medium |
In a previous life, I implemented something similar to the application-level data caching approach, but I replicated data, on a schedule, to multiple SQL Server Express instances, each sitting on an application server. This way the application had relatively current data, but it was static and local, so there were no issues with concurrent access, no conflicts between readers and writers, and no network round-trips. You can read more about this solution here and here.
I do have some work invested in a post about a poor man's secondary involving log shipping; I will update this post when that one is published.
I am not vehemently against `NOLOCK` – there are definitely use cases where you can "get away with it." I used it quite a bit earlier in my career but, looking back, mostly to avoid solving the real issue(s). Today I am just against it being used in every single query as an implicit rule. If you are using it everywhere, I hope that I have convinced you to reel that in a bit, and use it only in those scenarios where accuracy is not important – or at least where accuracy can be traded for other priorities. Better yet, use one of the alternatives I discussed above, though I do acknowledge that change takes time, and sometimes a lot more.