SentryOne Team Blog

#BackToBasics : CAST vs. CONVERT

There are several T-SQL language features where you have a choice of two or more approaches. I've talked before about old-style vs. explicit inner joins, how to express the "not equal to" operator, and whether you should use the newer FORMAT function.

Another area where I see a lot of inconsistency is the simple choice between CAST and CONVERT.

In many cases, there is no material difference. For example, it is true that these do the same thing:

SELECT
  [cast]    = CAST(CURRENT_TIMESTAMP AS date),
  [convert] = CONVERT(date, CURRENT_TIMESTAMP);

Neither is really any more typing than the other, and they both work the exact same way under the covers. So it would seem that the choice between CAST and CONVERT is merely a subjective style preference. If that were always true, however, you probably wouldn't be reading this post.

There are other cases where you need to use CONVERT in order to output correct data or perform the right comparison. Some examples:


Interpreting a date from a string

It is very common to accept freeform date strings as input, and then translate those to date or datetime within SQL Server. This is very easy to do with CONVERT – let's say people are passing in m/d/y strings, you can get the right date by using CONVERT with style 101:

SELECT CONVERT(date, '09/13/2016', 101);

Conversely, if you know the date is being entered as d/m/y, you can simply use style 103 instead:

SELECT CONVERT(date, '13/09/2016', 103);

You could even have cases where you interpret the string input based on the user's country or explicit preference; for example, you may have users from Canada, who will input d/m/y, and users from the USA, who will input m/d/y.

INSERT dbo.Users(UserName, Country, BirthDate)
SELECT @UserName, @Country, CONVERT(date, @BirthDateString, CASE @Country
  WHEN 'Canada' THEN 103
  WHEN 'USA'    THEN 101 END);

You can't do any of these things with CAST; it will simply assume that the string matches the current session's language and date format settings. If you pass in 13/09/2016 and the language is US_English, CAST will fail:

Msg 241, Level 16, State 1
Conversion failed when converting date and/or time from character string.
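To see for yourself that CAST's interpretation depends on the session's language, you can experiment with SET LANGUAGE (a quick sketch; the exact outcome depends on your session's default language and date format settings):

```sql
SET LANGUAGE us_english;
SELECT CAST('13/09/2016' AS date); -- fails: m/d/y expected

SET LANGUAGE British;
SELECT CAST('13/09/2016' AS date); -- succeeds: d/m/y expected
```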

Of course, ideally, you wouldn't be relying on any sort of string formats at all, and rather using calendar controls or drop-downs to accept date input from users. When the date entered is 13/09/2016, it's pretty easy to try to convert it one way, and when that fails, try the other way. But when the date entered is 06/09/2016, how will you know whether the user meant June 9th or September 6th? If you control the format that is ultimately sent to SQL Server, you don't have to guess.
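On SQL Server 2012 and above, the "try one way, and when that fails, try the other way" approach can be sketched with TRY_CONVERT, which returns NULL instead of raising an error when the conversion fails:

```sql
DECLARE @input varchar(10) = '13/09/2016';

SELECT COALESCE
       (
         TRY_CONVERT(date, @input, 101), -- try m/d/y first
         TRY_CONVERT(date, @input, 103)  -- then fall back to d/m/y
       );
```

This doesn't solve the ambiguity problem for inputs like 06/09/2016, of course; controlling the format at the source is still the better answer.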


Presenting a date/time with a regional string format

Similar to interpretation of date strings, you might want to conditionally display date values as specific regional strings. A few examples:

DECLARE @today datetime = '20161102 09:00:04'; -- sample value matching the outputs below

SELECT CONVERT(char(10), @today, 101), -- 11/02/2016
       CONVERT(char(10), @today, 103), -- 02/11/2016
       CONVERT(char(10), @today, 120), -- 2016-11-02
       CONVERT(char(8),  @today, 112), -- 20161102
       CONVERT(char(8),  @today, 108); -- 09:00:04

CAST can't really help you here; while you can simply say CAST(@today AS char(10)), you can't dictate the format. What you end up with in this specific case, at least with US_English, is:

Nov  2 201

You might suggest using FORMAT, since you don't need to memorize style numbers and it will lead to less code in some cases, but please don't forget that this function is expensive. So while it would be fine for onesy-twosy stuff, it would be unacceptable at scale. Arguably a better solution would be to return the date/time value to the application, and let the presentation layer handle the output format – after all, that's not really SQL Server's job.
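For comparison, here is a sketch of what the FORMAT alternative looks like for style 101 (FORMAT requires SQL Server 2012 or later, and takes .NET format strings rather than style numbers):

```sql
DECLARE @today datetime = GETDATE();

SELECT CONVERT(char(10), @today, 101), -- style number
       FORMAT(@today, 'MM/dd/yyyy');   -- .NET format string; much more expensive per row
```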


Swapping between varbinary and string

Converting dates to and from strings is pretty common, but a less common exercise is switching between strings and binary values. Let's say you want to encode some string as a varbinary value:

SELECT CONVERT(varbinary(32), 'some string');

This yields:

0x736F6D6520737472696E67
To get the value back, you can simply use CONVERT/CAST back to a varchar:

SELECT CAST(0x736F6D6520737472696E67 AS varchar(32)), 
       CONVERT(varchar(32), 0x736F6D6520737472696E67);

But what if you actually need a string representation of the 0x73... binary value? For example, to display the encoded value in a sentence or to build a comma-separated list of such values. CONVERT has your back, but once again CAST can't help you:

SELECT CONVERT(varchar(32), 0x736F6D6520737472696E67, 1);
------- style number, pretty important -------------^^^

The output is the same as above, but now it's a string:

0x736F6D6520737472696E67
I used this technique recently when I needed to construct automated e-mails for a customer, using Advisory Conditions, that included explicit plan_handle values in the message body. I didn't want to simply CAST a plan handle to a string:

SELECT CAST(0x05000500E914345FA01B4475AA020000010 AS nvarchar(max));

Because this is the result:

倀倀鄎䕃Ǻ䞴ꁚ က

Which wouldn't be very useful, since the customer wanted to be able to copy a plan handle from the message and paste it into a query window. Using CAST, they would end up with gibberish.
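With CONVERT and style 1, the same handle comes back as a copy/paste-friendly hex string. A minimal sketch against the plan cache (assuming you have VIEW SERVER STATE permission):

```sql
SELECT CONVERT(varchar(max), plan_handle, 1) -- style 1 preserves the 0x... representation
  FROM sys.dm_exec_cached_plans;
```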



CAST does not support style numbers in any way, so for many of these scenarios, CAST is simply not an option. And my point here isn't to point out where CAST and CONVERT differ, nor to provide an exhaustive resource of all the places where style numbers can be used (the documentation does a good job of that, and I did show how to self-document all the styles available for date/time).

I'm just asking the rhetorical question, "Why would you ever choose CAST?" I always use CONVERT, even when I don't have to, because there are so many cases where I *do* have to. Consistency does not get a lot of publicity, but it's pretty important in my books. Just like spelling out DATEPART(DAY, ... instead of DATEPART(D, ... makes sense, even though there is no chance for ambiguity there, as there is with DATEPART(W, .... Stay consistent, and don't take advantage of exceptions unless there is a tangible benefit to doing so (and saving three keystrokes is not a tangible benefit, IMHO).

Now, you might argue, I prefer CAST because it is ANSI standard, while CONVERT is not. Okay, valid argument, but like many other things you are probably using that are proprietary to SQL Server, you need to weigh the consistency and other benefits above against the odds that you'll ever actually need to port the code to a different platform. I'm all for following the standard, but not when doing so gets in my way, or when it is just to appease the Celkos of the world in their glass classrooms. :-)

9 replies on “#BackToBasics : CAST vs. CONVERT”

  1. >> Because in the real world we don't drop our chosen RDBMS wholesale and hop onto a different RDBMS tomorrow. <<

    I would say that in the real world you do not get to pick which databases you have. Your suppliers, business partners, data sources, and assorted platforms can all be different. I grant that back in the 1960s, we had single-vendor shops (IBM, DEC, Microsoft, etc.) in which your data pretty much never left your shop.

    I have been at this for a few decades now as a consultant and as an author. I noticed that I can stay in print for 20+ years because my material is not focused on one particular implementation of one particular SQL product on one particular platform.

    Much of my consulting work has been bringing existing SQL into standard conformance. One of my favorite war stories was writing a set of queries for SQL Server back when they used the old *= outer join syntax. The same basic queries also ran on an Oracle platform, which had a different implementation. I inserted comments which used the current standard LEFT OUTER JOIN syntax, and told them that when they upgraded to the next release of Oracle and SQL Server, they could simply uncomment my code on both platforms and drop the old proprietary syntax.

    Vendors tend to put improvements into core features of an SQL engine (this is also some of my consulting work), rather than spending the effort optimizing some obscure proprietary feature they have. This makes sense; which would buy the most overall improvement? A better SELECT or an obscure join feature?

    1. Well, I'm not as old as you, but I'm old too, and in 20+ years I have managed to make a decent living dealing with SQL Server only. These ports between platforms that you warn us about just don't happen as often as you'd like us to believe; and when they do, the data gets moved and a new app gets built around it. I highly doubt you can cite many examples of application that use 100% ANSI compliant queries that will run against all versions of all platforms without any effort at all, and in those cases I would argue that changing CONVERT to CAST will be the least of your worries (if you have code like this, you'll also have to worry about changing between varchar and varchar2, number and numeric, timestamp and datetime, etc).

      There are plenty of proprietary features that set certain database platforms apart and that simply can't be emulated elsewhere (same syntax or not), and that can't simply be swapped out for standard code that will perform as well. I'm not going to bill 10 hours to build some solution so I can avoid the IDENTITY property in case the code might ever be ported to Oracle or MySQL.

  2. Hi, good article. I have worked in an environment where the database may be spawned on different platforms; in that case CAST is better supported, as it makes stored procedure conversion a simpler process. As a general rule I tend to use CAST due to this being planted in my mind (unless I need CONVERT), but that's why. Interestingly, I have just had a similar conversation.

    1. Yes, there are certainly cases where you'll want to use CAST.

      However, I'd be surprised if you have a *lot* of code in SQL Server that will just port over to any other platform without issue.

    1. Joe Celko (see above) tries to impress upon people that you should only ever write fully portable, ANSI-standard code, and that it is a sin to use any proprietary extensions from any vendor. This tends to work in the classroom, but not in the real world. Because in the real world we don't drop our chosen RDBMS wholesale and hop onto a different RDBMS tomorrow.

      While I agree that adhering to the standards is a good idea when it's practical, and when you don't lose anything by doing so, but when there are vendor-specific features or syntax I can use that make my life easier or make my code perform better or more reliably, I'm going to use them. Because I'm not worried about switching to Oracle tomorrow and having to lose them.

  3. @Joe (Celko),

    You wrote:
    "In modern programming such things be done in a presentation layer and not in the database. Maybe we should not feel too bad. How many of you have a pair of blue jeans with a watch pocket, and have never owned pocket watch? You think somebody would make it big enough to hold a cell phone today, wouldn't you?"

    First, consider that "in modern programming", there frequently is NO presentation layer. Then, think outside the box a bit. To use your example, ALL jeans that have pockets typically have pockets large enough to hold a cell phone. They're just not labeled as such. As for the tiny pocket watch pocket, consider that it's also great for holding other tiny things such as your house key. ;-) Again, it's just not labeled as such.

    And, that's a part of the key to all of this. Labels vs. uses. You keep preaching that people are like "old Cobol programmers" (who weren't actually as dogmatic as you imply they were) and yet you insist that people be as dogmatic as you claim they were for what? The sake of writing the mythological and unattainable "transportable code"? You need to learn to think outside the proverbial box but first you must realize, you're in a box. ;-)

  4. Many of the features you see in SQL Server were inherited from Sybase. We also did not have the concept of a tiered architecture back in the dark ages. This is why we have CONVERT() and MONEY data types. They exist to keep COBOL programmers happy when they could not find their missing PICTURE clauses.

    Most of my consulting work these days is either in training or in repairs. When you look at SQL code, there are some smells that stick out and let me know what the programmer was probably thinking when he screwed up. The use of CONVERT() tells me he is probably an old COBOL programmer (even if he does not know it!) and did not get the message about tiered architecture decades ago. I know I can expect to see more "monolithic mindset" in the following code.

    When we added temporal datatypes to ANSI/ISO Standard SQL, we picked one and only one display format; ISO 8601 with "yyyy-mm-dd HH:MM:SS.sss.." option. This eliminated the need for conversions. This eliminated ambiguity. This matches other ISO standards. Notice the dashes and space between DATE and TIME fields (this is what the term "field" means in SQL; it is not anything like a column).

    In modern programming such things should be done in a presentation layer and not in the database. Maybe we should not feel too bad. How many of you have a pair of blue jeans with a watch pocket, and have never owned a pocket watch? You'd think somebody would make it big enough to hold a cell phone today, wouldn't you?