SentryOne Team Blog

Mining Performance Data from SQL Sentry : Part 3


Last time we took a quick look at a table that lets us query some information about the Operating System from the SQL Sentry repository database. In this post, we're going to kick things up a notch, and go over some tables that will give us a picture of our storage layout.

The disk system for a Windows server is a fairly complicated hierarchy of objects, so although I promised to only introduce one or two tables at a time, it would be very difficult to split this topic into separate posts. As a result, we're going to be covering several tables in this one.


The disk schema starts with the different types of controllers. Their names are kept in a small type table (dbo.PerformanceAnalysisDeviceDiskControllerType). While small, it is still important for our purposes here. For example, it would probably be useful for you to know the difference between a set of disks on a SCSI controller vs. those on a USB controller.


Rows in this table are currently static, and list out as follows:

ID Name
4 IEEE1394


The disk controller table (dbo.PerformanceAnalysisDeviceDiskController) lists out all the disk controllers discovered via WMI and/or VDS queries. There are a few bits of information that may be useful, like the manufacturer and the inbound and outbound maximum rates, but remember to take these values for what they are: whatever the WMI providers report back. The accuracy of the information is tied to how much care and feeding the manufacturer gave those WMI providers.


Note the foreign key back to the controller type table. This is important for our queries, so that we can narrow down the working set by entire types of controllers.
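As a quick sketch of how that foreign key can be used (the table and column names here match the final query later in this post; the 'USB' type name is an assumption about what the type table stores):

```sql
-- Sketch: list controllers with their type names, skipping USB-attached ones.
-- The join mirrors the one used in the free-space query at the end of this post.
select
      ctrl.ID
      ,ctrl.Name as ControllerName
      ,typ.Name as ControllerType
from
      dbo.PerformanceAnalysisDeviceDiskController ctrl
      join dbo.PerformanceAnalysisDeviceDiskControllerType typ
            on typ.ID = ctrl.PerformanceAnalysisDeviceDiskControllerType
where
      typ.Name <> 'USB';   -- assumes 'USB' is the stored type name
```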


The disk drive table (dbo.PerformanceAnalysisDeviceDiskDrive) represents the "physical" disk. We shouldn't take the term "physical" literally, as we're really just talking about a unit of space carved out for the server to use. It could be a physical disk, but it could also be a VHD, a mount point, a LUN carved out of a SAN, or whatever else you can think of.


We're finally starting to get a familiar piece of data, as this is where we can see the size of the disk. This isn't necessarily the value we want to use in a query, for reasons that we'll see in a bit, but it's still important to know the actual capacity of the disk itself.

We can't quite use the name yet; by querying the table, you'll see the name is basically the DeviceID from WMI (for a physical disk, something like \\.\PHYSICALDRIVE0).


We're getting closer though!
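A minimal sketch of querying the disk drive table (the Size column name on the drive table is an assumption; the join columns match the final query below):

```sql
-- Sketch: "physical" disks per controller. Name is the raw WMI DeviceID,
-- e.g. \\.\PHYSICALDRIVE0, so it isn't report-friendly yet.
select
      ctrl.Name as ControllerName
      ,diskdrive.Name as DriveName
      ,diskdrive.Size / 1048576 as SizeMB   -- Size column name assumed
from
      dbo.PerformanceAnalysisDeviceDiskDrive diskdrive
      join dbo.PerformanceAnalysisDeviceDiskController ctrl
            on diskdrive.PerformanceAnalysisDeviceDiskControllerID = ctrl.ID;
```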


Whenever we get a nice chunk of space, we can carve it up into partitions, right? The partition table (dbo.PerformanceAnalysisDeviceDiskPartition) lists those partitions, simple as that.


Again, the size here is somewhat useful, but the name isn't good for reporting just yet:

Disk #0, Partition #0
Disk #0, Partition #1
Disk #1, Partition #0

We store these items because they are used internally for displaying the various screens in SQL Sentry, but we probably won't see the "name" most folks are familiar with until the next table.
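A small sketch tying partitions back to their drives (column names as used in the final query in this post):

```sql
-- Sketch: partitions per "physical" disk. The partition Name is the internal
-- "Disk #n, Partition #n" form shown above.
select
      diskdrive.Name as DriveName
      ,part.Name as PartitionName
from
      dbo.PerformanceAnalysisDeviceDiskDrive diskdrive
      join dbo.PerformanceAnalysisDeviceDiskPartition part
            on part.PerformanceAnalysisDiskDriveID = diskdrive.ID;
```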


At last, the part we've been waiting for: the logical disk table (dbo.PerformanceAnalysisDeviceLogicalDisk) holds the volume names we're familiar with, along with their size, free space, status, and a few other goodies.


After everything else, this is the table I've been trying to get to, so that I can build a simple query that reports on low free space globally, by server.

There is one issue though. This maps back to disk, but not to partition. If I want to include partition data with this, then I need some sort of mapping.

Why is this? Think for a moment, and you probably already know the answer. We can have volumes that span multiple partitions. That's why partition to logical disk needs a mapping table.

Luckily, the last table we're going to look at today does just that.


As I stated above, logical disks can span multiple partitions, so we need a mapping table (dbo.PerformanceAnalysisDeviceLogicalDiskToPartition) in order to build an accurate picture of the disk system.

Once in a blue moon, we'll find that the system is reporting incorrect information from WMI about what logical disks map to what partitions, and this mapping table will be missing some rows. If that happens, you'll not be able to see these logical disks in the Disk Activity or Disk Space views of Performance Advisor. As such, this is one of the tables our support engineers look to if someone reports that something is missing in one of those views.

In addition, letter drives (C:, E:, Z:) will have a NULL PerformanceAnalysisDeviceDiskDriveID in the PerformanceAnalysisDeviceLogicalDisk table. This is because they map to partitions directly. Mount points have a value for PerformanceAnalysisDeviceDiskDriveID so we can easily tell what drive they are mounted on. Because of this, we need to join to this mapping table to make sure we're getting all of our logical disks in a query.
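That NULL-vs-not-NULL distinction can be seen directly with a quick sketch like this (column names as used in the final query):

```sql
-- Sketch: classify volumes by how they are stored. Lettered volumes map to
-- partitions via the mapping table, so their drive reference is NULL;
-- mount points reference the disk drive directly.
select
      logicaldisk.Name
      ,case when logicaldisk.PerformanceAnalysisDeviceDiskDriveID is null
            then 'Drive letter (maps via partition)'
            else 'Mount point (maps to drive directly)'
       end as VolumeKind
from
      dbo.PerformanceAnalysisDeviceLogicalDisk logicaldisk;
```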


Note that this is a standard "many to many" mapping type table with its own surrogate key, and foreign keys back to the logical disk and partition tables.
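To see the many-to-many nature in action, here is a sketch counting how many partitions back each volume (a spanned volume would show a count greater than 1; join columns match the final query):

```sql
-- Sketch: partitions per logical disk, via the mapping table.
select
      logicaldisk.Name
      ,count(*) as PartitionCount
from
      dbo.PerformanceAnalysisDeviceLogicalDisk logicaldisk
      join dbo.PerformanceAnalysisDeviceLogicalDiskToPartition disktopart
            on disktopart.PerformanceAnalysisDeviceLogicalDiskID = logicaldisk.ID
group by
      logicaldisk.Name;
```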


Using the tables we've discussed here, along with some information from Part 1 of this series, we might build the following query that shows us all volumes on every monitored server that have less than 10% free space:

select
      disks.HostName
      ,disks.ControllerType
      ,disks.ControllerName
      ,disks.LogicalDiskName
      ,disks.SizeMB
      ,disks.FreeSpaceMB
      ,disks.PctFreeSpace
from
      (select
            d.HostName
            ,typ.Name as ControllerType
            ,ctrl.Name as ControllerName
            ,logicaldisk.Name as LogicalDiskName
            ,logicaldisk.Size / 1048576 as SizeMB
            ,logicaldisk.FreeSpace / 1048576 as FreeSpaceMB
            ,(logicaldisk.FreeSpace * 100) / logicaldisk.Size as PctFreeSpace
      from
            dbo.Device d
            join dbo.PerformanceAnalysisDeviceDiskController ctrl
                  on ctrl.DeviceID = d.ID
            join dbo.PerformanceAnalysisDeviceDiskControllerType typ
                  on typ.ID = ctrl.PerformanceAnalysisDeviceDiskControllerType
            join dbo.PerformanceAnalysisDeviceDiskDrive diskdrive
                  on diskdrive.PerformanceAnalysisDeviceDiskControllerID = ctrl.ID
            join dbo.PerformanceAnalysisDeviceDiskPartition part
                  on part.PerformanceAnalysisDiskDriveID = diskdrive.ID
            join dbo.PerformanceAnalysisDeviceLogicalDiskToPartition disktopart
                  on disktopart.PerformanceAnalysisDeviceDiskPartitionID = part.ID
            join dbo.PerformanceAnalysisDeviceLogicalDisk logicaldisk
                  on logicaldisk.ID = disktopart.PerformanceAnalysisDeviceLogicalDiskID
      union
      select
            d.HostName
            ,typ.Name as ControllerType
            ,ctrl.Name as ControllerName
            ,logicaldisk.Name as LogicalDiskName
            ,logicaldisk.Size / 1048576 as SizeMB
            ,logicaldisk.FreeSpace / 1048576 as FreeSpaceMB
            ,(logicaldisk.FreeSpace * 100) / logicaldisk.Size as PctFreeSpace
      from
            dbo.Device d
            join dbo.PerformanceAnalysisDeviceDiskController ctrl
                  on ctrl.DeviceID = d.ID
            join dbo.PerformanceAnalysisDeviceDiskControllerType typ
                  on typ.ID = ctrl.PerformanceAnalysisDeviceDiskControllerType
            join dbo.PerformanceAnalysisDeviceDiskDrive diskdrive
                  on diskdrive.PerformanceAnalysisDeviceDiskControllerID = ctrl.ID
            join dbo.PerformanceAnalysisDeviceLogicalDisk logicaldisk
                  on logicaldisk.PerformanceAnalysisDeviceDiskDriveID = diskdrive.ID) disks
where
      (FreeSpaceMB * 100) / SizeMB <= 10
order by
      (FreeSpaceMB * 100) / SizeMB

Note that we needed two queries joined with a union because of the difference in how mount points are stored vs. regular volumes with drive letters.

On a test server at SQL Sentry HQ, I can see results like this:

[Sample results from a test server]

Looks like the QA team is chewing through storage like Pac-Man at a lemon drop factory! For shame…


In Part 4, we will build on what we've learned here, and take a look at data and log file details for SQL Server instances that are monitored with Performance Advisor.

6 Comments
  1. Hi Jason,

    Thank you so much for this series. How often does the data change in this query for SizeMB and FreeSpaceMB. For example, if I ran this query tomorrow and used additional space would I see a smaller number for FreeSpaceMB? Also, let's say I have a VM and grow the size of a drive would I see that in the next day as well?

    I am trying to see if there is an easy way to run this query daily to provide a high-level summary of disk usage, and change in disk usage, over a month and more without having to dig into the perfmon data.


    1. Hi John, the information on the Disk Space tab should be updated every 15 minutes, though it may take longer (or you might want to make a manual scan for configuration changes) if you've added/removed files.

  2. Thanks Kevin for the Q on historical info. I was wondering why the queries didn't need to pull just the most recent record. Looks like the ability to trend will rely heavily on counters. Cool…

    On a less interesting note… Internally I am always asking my developers to try to get away from the 8-byte datetime data type and start using datetime2(n), where we can turn down the precision from 3 ms to a full second. Using the smaller datetime2 could result in more records fitting on a page, which saves IO, which saves on the buffer pool.

    Is datetime still in play just to keep the application compatible with 2005 instances? If that's the case then it would also mean that other features such as filtered indexes, ROW and PAGE compression, and a ton of others are not in the app either. 2008 really did offer a bunch of cool new features that were not in 2005.

    1. Hi Dave,

      You've got some great points there. Yes, the datetime type is basically in play due to compatibility. One thing to note is that in many cases where time tracking is involved, SQL Sentry is actually using a numeric representation of time. We use our own implementation of a "timestamp" for performance counters, and we track duration in ticks for the most part.

      There are values stored as datetime, and we'll certainly be looking at that when we move to 2008 and higher for repository database support. I should note though that compatibility level for the database does not need to be set to 2005, so assuming that ROW and PAGE compression are available to you, they can certainly be used. In fact, I've used compression myself for some of our larger installs with great success. I believe that Melissa is actually working on some documentation on the subject of using data compression for SQL Sentry data that will be published soon.

  3. Ok, two questions –

    1. What if I know my organization doesn't use mount points? Can I safely simplify the query?

    2. I know the repository keeps history. What changes would I need to make to pull these values for specific dates?

    Awesome stuff. Thanks!


    1. Hey Kevin :)

      1. Yes, you can!
      2. History for these tables is not kept, so in order to view this information historically you need to get into the actual performance counter data. I'm going to get to that starting with Part 5 of this series, and it will span several posts.
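For anyone else in the "no mount points" situation from question 1 above, a minimal sketch of the simplified version: only the partition-mapping half of the union from the query in the post is needed (the HostName column on dbo.Device is assumed, as in the full query):

```sql
-- Sketch: low free space on lettered volumes only; the mount-point half
-- of the union (and its direct drive join) is dropped.
select
      d.HostName
      ,logicaldisk.Name as LogicalDiskName
      ,logicaldisk.FreeSpace / 1048576 as FreeSpaceMB
from
      dbo.Device d
      join dbo.PerformanceAnalysisDeviceDiskController ctrl
            on ctrl.DeviceID = d.ID
      join dbo.PerformanceAnalysisDeviceDiskDrive diskdrive
            on diskdrive.PerformanceAnalysisDeviceDiskControllerID = ctrl.ID
      join dbo.PerformanceAnalysisDeviceDiskPartition part
            on part.PerformanceAnalysisDiskDriveID = diskdrive.ID
      join dbo.PerformanceAnalysisDeviceLogicalDiskToPartition disktopart
            on disktopart.PerformanceAnalysisDeviceDiskPartitionID = part.ID
      join dbo.PerformanceAnalysisDeviceLogicalDisk logicaldisk
            on logicaldisk.ID = disktopart.PerformanceAnalysisDeviceLogicalDiskID
where
      (logicaldisk.FreeSpace * 100) / logicaldisk.Size <= 10;
```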