SentryOne Team Blog

T-SQL Tuesday #68 : Just Say No to Defaults


This month’s T-SQL Tuesday is hosted by Andy Yun (b|t) and is on default settings in SQL Server. I have a SQL Saturday session that covers the setup and configuration of SQL Server, so I talk about default values and why you may or may not want to change them. There are plenty to discuss, but I'm going to cover three of them here.

The first one is the default directory for data, log, and backup files. It's set during installation, and I know that the first several times I installed SQL Server, I just stuck with the defaults. They are the defaults, so they must be right, right? I have a feeling most SQL Server defaults were set in the first version of SQL Server and then never touched again. The default value for these is C:\Program Files\Microsoft SQL Server\%instance id%\MSSQL\. I think we can agree, for the most part, that we don't want to put those files on the root drive – especially the system databases, since that's where they'll be created. If you're not sure what your settings are, this StackOverflow post has a query to find out. Changing those defaults can definitely help save your bacon.
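On SQL Server 2012 and later, you can also read the instance-level default paths directly with SERVERPROPERTY. A minimal sketch (note that InstanceDefaultBackupPath was added in a later version, so it may return NULL on older builds):

```sql
-- Default file locations for this instance (SQL Server 2012+).
-- InstanceDefaultBackupPath arrived later and may return NULL
-- on older versions.
SELECT SERVERPROPERTY('InstanceDefaultDataPath')   AS DefaultDataPath,
       SERVERPROPERTY('InstanceDefaultLogPath')    AS DefaultLogPath,
       SERVERPROPERTY('InstanceDefaultBackupPath') AS DefaultBackupPath;
```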

Backup compression has been available in SQL Server since 2008, and it has been available in Standard Edition since 2008 R2. When I started as a DBA, we paid money to vendors to get backup compression. You don't have to do that any longer, but you do have to enable it because it is disabled by default. Keep in mind, nothing is free. There can be a CPU impact when making use of backup compression, so test and validate before pushing it into production. But I know that you, dear reader, always test and validate, so it goes without saying. You can enable backup compression via SSMS by going to Server->Properties->Database Settings and checking the backup compression box, or by running the following script:

EXEC sys.sp_configure N'backup compression default', 1;
RECONFIGURE;
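Even if you leave the server-wide default off, compression can be requested for an individual backup. A sketch, where the database name and backup path are placeholders:

```sql
-- Compress this one backup regardless of the server-wide default.
-- 'YourDB' and the backup path are illustrative placeholders.
BACKUP DATABASE YourDB
TO DISK = N'<backup path>\YourDB.bak'
WITH COMPRESSION, CHECKSUM, STATS = 10;
```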

The last default setting is one that can cause many performance issues, and it's also the setting that makes me sad. Well, sad is probably too strong a word for a SQL Server setting. It should be easy to reset this default, but it is not always so. I'm talking about the file autogrowth setting. I'm sure there was a time when growing a data file by 1MB and a log file by 10% was reasonable, but that was also probably when setting the Cost Threshold for Parallelism to 5 made sense (sorry, nerd humor). When I was an up-and-coming DBA, I was taught that you should change the default settings in the model database to something reasonable, and when you created new databases, they would use those values. Unfortunately, that's not always the case. If you create databases within SSMS and use the GUI entirely, the new database will have the settings from model. If, however, you click on the Script button or (god forbid) just use the CREATE DATABASE syntax, it will revert to the original model autogrowth values. Kenneth Fisher (b|t) wrote a blog post about it here. As mentioned in the post, there was a Connect item on this, but it has been closed. So, this is one case where you need to explicitly ensure that you're setting these values appropriately. It can be set within SSMS or by using the following syntax:

CREATE DATABASE NewDB
ON ( NAME = N'NewDBData',
     FILENAME = N'<path>\NewDBData.mdf',
     FILEGROWTH = <some reasonable value> );
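For databases that already exist with the old defaults, the growth increment can also be changed after the fact. A sketch, where the database name, logical file name, and 256MB increment are all illustrative:

```sql
-- Change the growth increment on an existing file.
-- Database name, logical file name, and 256MB are placeholders;
-- pick a value appropriate for your workload.
ALTER DATABASE NewDB
MODIFY FILE ( NAME = N'NewDBData', FILEGROWTH = 256MB );
```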

The other option, of course, is to edit the Create Database template, edit autogrowth settings there, and remember to use that template.
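If you want to hunt down files that are still sitting on the old defaults, something like the following works across the whole instance. Note that sys.master_files stores growth in 8KB pages when is_percent_growth is 0:

```sql
-- Find files using percentage growth or a tiny fixed growth.
-- growth is a page count (8KB pages) when is_percent_growth = 0.
SELECT DB_NAME(database_id) AS database_name,
       name                 AS logical_name,
       CASE WHEN is_percent_growth = 1
            THEN CONCAT(growth, '%')
            ELSE CONCAT(growth * 8 / 1024, ' MB')
       END AS growth_setting
FROM sys.master_files
WHERE is_percent_growth = 1
   OR (is_percent_growth = 0 AND growth * 8 / 1024 <= 1);
```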

There are plenty of settings within SQL Server. Even for those settings where the default value is typically fine, you should understand what that setting does and when/why you might want to change it. Take the time to read all of those posts for this T-SQL Tuesday. I'm sure that I will learn a few new things. Thanks, Andy, for this great topic.
