Mining Performance Data from SQL Sentry : Part 6

INTRODUCTION

We covered a lot in Part 5 of this series on mining performance data, and at the end we were able to start working with SQL Server wait statistics stored in the SQL Sentry repository database.

In this post, since we don’t need to go over a lot of table definitions, I want to focus on the query itself, and go into a bit more detail on some of the things we’re going to be doing in it.

BACKGROUND

Some time ago, I read an MSSQLTips article from Aaron Bertrand (b|t) on creating pivot columns dynamically in a query. Something that quickly struck me as a neat thing to use this for was on performance data in SQL Sentry.

I’ve used pivot queries quite a bit to pull data from the repository, and even though I find them very useful, they are generally sort of tedious to write, because I have to know ahead of time exactly what counters I want. If I can generate the pivot columns dynamically, I could build a query that would pivot on all the counters in a given category. That’s what we’re going to do here today.

DISK PERFORMANCE

One of the coolest features of SQL Sentry is the Disk Activity feature. Among the most important information we can get from Disk Activity is how long I/O is taking and how much I/O is happening. Several measurements go into providing this information, and each has value in its own way. It's not wise to single out one value and ignore the others, so the query we're going to look at today will show us the average over the last 30 minutes for the entire category of physical disk counters that SQL Sentry has collected.

Let’s break away from the format I’ve been using, and start with the query, then pick it apart to see what we’re doing.

DECLARE @cols NVARCHAR(MAX), @sql NVARCHAR(MAX);
SET @cols = N'';
SELECT @cols += N', p.' + QUOTENAME(CounterResourceName)
  FROM (SELECT p.CounterResourceName FROM PerformanceAnalysisCounter AS p
  INNER JOIN PerformanceAnalysisCounterCategory AS c
  ON p.PerformanceAnalysisCounterCategoryID = c.ID
  WHERE c.CategoryResourceName = 'PHYSICALDISK'
  AND p.PerformanceAnalysisSampleIntervalID <> 0
  GROUP BY p.CounterResourceName) AS x;
 
SET @sql = N'
SELECT FullyQualifiedDomainName,' + STUFF(@cols, 1, 2, '') + '
FROM
(
  SELECT p.CounterResourceName, Device.FullyQualifiedDomainName, d.Value
   FROM dbo.PerformanceAnalysisCounter AS p WITH (NOLOCK)
   JOIN dbo.PerformanceAnalysisDataDiskCounter AS d WITH (NOLOCK)
   ON p.ID = d.PerformanceAnalysisCounterID
   JOIN dbo.Device WITH (NOLOCK)
   ON d.DeviceID = dbo.Device.ID
   WHERE p.CounterResourceName IN ('
  + STUFF(REPLACE(REPLACE(@cols, ']', ''''), ', p.[', ','''), 1, 1, '')
  + ') AND dbo.fnConvertDateTimeToTimestamp(dateadd(MINUTE, -30, getdate())) < d.Timestamp
) AS j
PIVOT
(
  AVG(Value) FOR CounterResourceName IN ('
  + STUFF(REPLACE(@cols, ', p.[', ',['), 1, 1, '')
  + ')
) AS p;';
 
PRINT @sql;
EXEC sp_executesql @sql;

The first interesting part is this:

SELECT @cols += N', p.' + QUOTENAME(CounterResourceName)
  FROM (SELECT p.CounterResourceName FROM PerformanceAnalysisCounter AS p
  INNER JOIN PerformanceAnalysisCounterCategory AS c
  ON p.PerformanceAnalysisCounterCategoryID = c.ID
  WHERE c.CategoryResourceName = 'PHYSICALDISK'
  AND p.PerformanceAnalysisSampleIntervalID <> 0
  GROUP BY p.CounterResourceName) AS x;

In this statement we're querying the counter category table and joining to the counter table so that we can use the counter resource name for our column names. I could also involve the tables that provide clean display names for the counters, but I'm not too worried about that in this case, since I can tell what each counter is from the resource name.

The output is really just setting up the columns variable into the format Aaron used in his dynamic pivot article, so that I can use it later in the creation of the pivot query.
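To make the shape of @cols concrete, here is the same accumulation, and the STUFF() call that later trims the leading separator, sketched in Python. The counter names here are hypothetical stand-ins; the real ones come from CounterResourceName:

```python
# Emulate the T-SQL "SELECT @cols += N', p.' + QUOTENAME(...)" accumulation.
# Counter names below are made up for illustration only.
counters = ["PD_SEC_PER_READ", "PD_SEC_PER_WRITE"]

cols = ""
for name in counters:
    cols += ", p.[" + name + "]"   # QUOTENAME wraps each name in brackets

# STUFF(@cols, 1, 2, '') removes the leading ", " for the SELECT list.
select_list = cols[2:]
print(select_list)                 # p.[PD_SEC_PER_READ], p.[PD_SEC_PER_WRITE]
```

Every counter contributes a `, p.[Name]` fragment, so the string always starts with a stray separator that STUFF() strips off when the list is spliced into the outer query.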

Next, we build the dynamic SQL for the pivot query:

SET @sql = N'
SELECT FullyQualifiedDomainName,' + STUFF(@cols, 1, 2, '') + '
FROM
(
  SELECT p.CounterResourceName, Device.FullyQualifiedDomainName, d.Value
   FROM dbo.PerformanceAnalysisCounter AS p WITH (NOLOCK)
   JOIN dbo.PerformanceAnalysisDataDiskCounter AS d WITH (NOLOCK)
   ON p.ID = d.PerformanceAnalysisCounterID
   JOIN dbo.Device WITH (NOLOCK)
   ON d.DeviceID = dbo.Device.ID
   WHERE p.CounterResourceName IN ('
  + STUFF(REPLACE(REPLACE(@cols, ']', ''''), ', p.[', ','''), 1, 1, '')
  + ') AND dbo.fnConvertDateTimeToTimestamp(dateadd(MINUTE, -30, getdate())) < d.Timestamp
) AS j
PIVOT
(
  AVG(Value) FOR CounterResourceName IN ('
  + STUFF(REPLACE(@cols, ', p.[', ',['), 1, 1, '')
  + ')
) AS p;';

I won't recreate Aaron's article, so you should read that if you want to know the how and why of creating the dynamic SQL statement for this, but I will go over what we're pulling in, and why.

We're rolling this up by server, so we're getting the FullyQualifiedDomainName from the Device table. We need the CounterResourceName from the counter table, because that is actually what we're going to pivot for, and of course we need our values from PerformanceAnalysisDataDiskCounter to pivot on.

Note that we're not using a data rollup table for this. If you've been following this series, you might remember that I once mentioned disk-related detail data being stored in a different table than non-disk detail data. That's why, for this query, our data values come from the PerformanceAnalysisDataDiskCounter table.
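Before moving on, the nested STUFF(REPLACE(...)) calls are worth unpacking: they reshape the single @cols string for two different contexts, the WHERE ... IN list of quoted string literals, and the PIVOT ... IN list of bracketed column names. A quick Python sketch of both transformations, again with hypothetical counter names:

```python
# @cols as built earlier: ", p.[Name]" repeated per counter (names are made up).
cols = ", p.[PD_READS_PER_SEC], p.[PD_WRITES_PER_SEC]"

# WHERE ... IN ('...','...'): REPLACE "]" with "'", REPLACE ", p.[" with ",'",
# then STUFF(..., 1, 1, '') drops the leading comma.
in_list = cols.replace("]", "'").replace(", p.[", ",'")[1:]
print(in_list)     # 'PD_READS_PER_SEC','PD_WRITES_PER_SEC'

# PIVOT ... IN ([...],[...]): REPLACE ", p.[" with ",[" keeps the brackets,
# and the same leading-comma trim applies.
pivot_list = cols.replace(", p.[", ",[")[1:]
print(pivot_list)  # [PD_READS_PER_SEC],[PD_WRITES_PER_SEC]
```

Same source string, two output shapes, which is why the query only has to build @cols once.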

Something interesting to point out is this bit right here:

dbo.fnConvertDateTimeToTimestamp(dateadd(MINUTE, -30, getdate()))

This is using a function included in the SQL Sentry database that converts datetime values to Performance Advisor timestamps. There is another function that does the reverse as well. We’re basically just using this to make sure we’re only looking at the values collected within the last 30 minutes.
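The exact timestamp encoding is internal to SQL Sentry, so don't take this as its actual formula, but an epoch-seconds conversion illustrates the idea of what a function like fnConvertDateTimeToTimestamp does with the "30 minutes ago" cutoff. This is purely an illustrative assumption:

```python
from datetime import datetime, timedelta, timezone

def to_epoch_seconds(dt):
    # Hypothetical stand-in for fnConvertDateTimeToTimestamp: seconds since
    # the Unix epoch. SQL Sentry's real internal encoding may differ.
    epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
    return int((dt - epoch).total_seconds())

# "dateadd(MINUTE, -30, getdate())" converted to the comparable integer form;
# rows with d.Timestamp greater than this cutoff are within the last 30 minutes.
cutoff = to_epoch_seconds(datetime.now(timezone.utc) - timedelta(minutes=30))
```

The point is simply that the datetime arithmetic happens once, and the comparison against d.Timestamp then works on plain integers.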

Finally, I print the query so I can read it if I want, and then use sp_executesql to run it:

PRINT @sql;
EXEC sp_executesql @sql;

RESULTS

In my results, I’m looking for the overall average for the last 30 minutes for each counter in the PHYSICALDISK category. This category should give me sec/read, sec/write, read bytes/sec, reads/sec, write bytes/sec, writes/sec.

Results from my PIVOT query for PHYSICALDISK details (click to enlarge)

I realize the image is a bit small due to the width of the results, but it’s easy enough to run this for yourself, and see what you get.

You should be able to do this for any counter category that has a non-NULL, non-zero PerformanceAnalysisSampleIntervalID. Also, remember that if you are working with a counter category that is not related to disk, you will change the data table from PerformanceAnalysisDataDiskCounter to PerformanceAnalysisData, or use one of the data rollup tables, which hold disk and non-disk counters together and so don't require separate tables.

WHAT’S NEXT?

This is the last thing I already had in mind to show everyone from Performance Advisor, but there is certainly a lot more that we could do. I haven’t shown anything with Top SQL, or indexes, and I haven’t really touched on the Event Manager side at all.

For the next part of this series, I was hoping I might get some suggestions from folks that are reading these posts. What is something we haven’t gone over that you would like to know how to retrieve from the SQL Sentry repository? Just let me know in a comment (or e-mail us at community@sentryone.com), and I’ll see about getting that information into a post for you.

Comments (8)

        • Ludo Bernaerts says:

          Very useful and interesting nr of articles. Thx a lot. I have one small remark with the last query. Apparently, timestamp of performance data is stored in database using utc date/time. When I run the query like this, it doesn't show me any data, to solve this just change date condition like this " AND dbo.fnConvertDateTimeToTimestamp(dateadd(MINUTE, -30, getutcdate())) < d.Timestamp" , this produce correct result.

        • Cody Konior says:

          If others come across this and have performance issues, try moving the result of fnConvertDateTimeToTimestamp into a variable and using that in the WHERE clause instead. That's what worked for me.

        • Yaroslav says:

          Really interesting blog post series thanks Jason. I would like to know if there is a possibility to get this information straight from PerfMon app, be able to create baselines from those and compare, say different months, and check servers behavior on a 12 month period or any other period.

        • Jason Hall says:

          Baselines are actually a nice feature of Performance Advisor right out of the box.

          Kevin Kline wrote a nice article on the subject that includes details on how to create them and use them in different ways:
          https://sentryone.com/white-papers/performance-baselines-and-benchmarks-for-microsoft-sql-server

        • Melissa Connors says:

          An interesting and helpful series, Jason! Hope to see the next part soon.

        • Jason Hall says:

          Hey Melissa,

          Thanks for stopping by. I'll be getting back to blogging soon. I still owe everyone some info on how index data is stored. :)

        • Michael says:

          Great series!

          I would like to suggest Top SQL followed by Indexes. These are the areas I focus on the most, and I would certainly like to mine this data directly from SQL Sentry using my own filters and criteria.

          Thanks for some great posts!

        • Jason Hall says:

          Hi Michael,

          These are both great suggestions! While I'm adjusting to my new role at the office, Aaron has agreed to tackle some Top SQL queries, so look for something on that from him soon.

          I'll put something together on indexes for everyone on here after that.
