Many data professionals around the world are familiar with SQL Sentry software. We have focused our tools on SQL Server since the beginning, and we recently expanded to additional Microsoft database platforms with APS and Azure SQL DW support. Now, with SQL Sentry v10, we are introducing even more support for Azure cloud-based SQL platforms by including a new target type for Azure SQL Database!
Azure SQL Database is a Platform as a Service (PaaS) offering, meaning much of the heavy lifting of managing the database is handled for you by Microsoft. With cloud databases, there's no more checking disk arrays, pulling cables, or worrying about file allocation. This leaves you free to monitor your databases and concentrate on tuning your database and application code.
In fact, in Azure SQL Database, where you pay for an allotment of the resources your database uses, it is in your best interest to ensure your code runs as efficiently as possible. Well-tuned, high-performing database code means less resource overhead, which means fewer resources required, which can translate into money saved. Every IT manager with a budget likes to hear that.
To help customers visualize what resources their database is using, we have created a new Dashboard specifically for Azure SQL Database:
Database Throughput Units, or DTUs, represent the power of the database engine as a blended measure of CPU, memory, and read and write rates. This measurement helps you assess the relative power of the SQL Database performance levels. The more DTUs an Azure SQL Database is allocated, the more resources the database has to service the workload.
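For reference, Azure SQL Database exposes these same resource dimensions through the sys.dm_db_resource_stats DMV, which keeps roughly an hour of history at about 15-second granularity. Here is a minimal T-SQL sketch for sampling it yourself (this is independent of SQL Sentry and not necessarily how the dashboard collects its data):

-- Recent resource usage, expressed as a percentage of the limits for the
-- database's current service tier; the overall DTU percentage roughly tracks
-- the largest of the CPU, data I/O, and log write values.
SELECT TOP (20)
       end_time,
       avg_cpu_percent,
       avg_data_io_percent,
       avg_log_write_percent,
       avg_memory_usage_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;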
One of the charts you'll become very familiar with for Azure SQL Database is the DTU Usage chart, shown in the upper left-hand corner of the dashboard in the example above. At the start of the time frame, we are almost maxing out the CPU available to us, so our total DTU percentage is also maxing out. This means that for the workload running at that time, we were CPU-bound.
You'll also see that, shortly into the timeline, the dashboard shows DTU usage dropping quite a bit. This could be because some of the workload shifted and whatever was hammering the database stopped, but a closer look at the other charts tells us more. Allocated memory usage dropped at the same time, meaning we were using less memory.
Indeed, we can also see that the buffer portion of SQL Server memory took a dip at the same point. What really happened here was that we performed a scaling operation, moving from the Basic service tier (5 DTUs) up to a Standard S2 (50 DTUs), more than a 10x increase in available resources! The buffer size dropped because, behind the scenes, our physical database moved, so the new machine had to build the buffer back up, which you can see it does fairly quickly. All of this happened while our database stayed live with a constant workload applied. If only it were that easy to shift to bigger hardware on premises!
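For context, one way to perform that kind of scale operation is a single T-SQL statement (the database name below is just a placeholder):

-- Scale the database from Basic up to Standard S2; the operation runs
-- asynchronously while the database stays online, and connections may be
-- dropped briefly when the switchover to the new resources completes.
ALTER DATABASE [MyAppDb]
MODIFY (EDITION = 'Standard', SERVICE_OBJECTIVE = 'S2');

The same change can also be made through the Azure portal, which is what makes this such an easy lever to pull compared with a hardware migration on premises.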
The goal of the Azure SQL Dashboard is to give you deeper visibility into what's going on with your database. Correlating values on the dashboard lets you make educated decisions about not only what happened in the very recent past, but also performance over time. We also offer the ability to capture and overlay baselines on your dashboard. For example, you can easily check whether last night's rollout is affecting your memory usage by comparing against the previous day's baseline, or you can capture a baseline to compare performance in your production Azure SQL Database against your test Azure SQL Database.
Looking at the DTU usage graph in the example above, we can see that there is a lot of Data I/O going on. That could be just how our workload behaves, or it could be a sign that some improvements need to be made. With Azure SQL Database support we are also including our Top SQL feature for query and procedure stats.
This feature lets us capture which queries or procedures are the most resource-intensive, including their estimated plans. You can easily configure settings that filter down to the resource type you care about most: CPU, duration, reads, or writes.
In the example above, we can easily pick out the query that is consuming a lot of CPU and a high number of reads. This could be the culprit behind our spiking Data I/O! Since we are also collecting estimated plans, we can dig even deeper and jump straight into Plan Explorer to evaluate each statement with potential problems.
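If you want to cross-check this kind of information outside the tool, aggregated query stats and cached (estimated) plans are also available from the engine's DMVs. A rough sketch, ordered by logical reads, might look like this (again, this is not how Top SQL is implemented, just the underlying DMVs):

-- Top 10 cached query statements by logical reads, with execution count,
-- total CPU time (microseconds), and the cached estimated plan for each.
SELECT TOP (10)
       SUBSTRING(t.text,
                 (s.statement_start_offset / 2) + 1,
                 ((CASE s.statement_end_offset
                     WHEN -1 THEN DATALENGTH(t.text)
                     ELSE s.statement_end_offset
                   END - s.statement_start_offset) / 2) + 1) AS statement_text,
       s.execution_count,
       s.total_logical_reads,
       s.total_worker_time AS total_cpu_time_us,
       p.query_plan
FROM sys.dm_exec_query_stats AS s
CROSS APPLY sys.dm_exec_sql_text(s.sql_handle) AS t
CROSS APPLY sys.dm_exec_query_plan(s.plan_handle) AS p
ORDER BY s.total_logical_reads DESC;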
We are very excited to support Azure SQL Database with this initial release, and this is just the beginning. In future releases we plan to bring much more of our existing SQL Server feature set to Azure SQL Database monitoring, such as blocking analysis, deadlock detection, and index maintenance. We will also be incorporating our Custom Conditions feature so that you can define your own events to watch for, tailored to your workload and environment.
The Azure SQL Database platform is growing fast, and Microsoft is adding new features and ways of using the service at a rapid pace. You can expect SQL Sentry to take these new features and service models into account as we build out our support for the platform. Find out more about Performance Advisor for Azure SQL Database on the SQL Sentry web site.
But wait, there's more! Azure SQL Database support is just one of the feature sets we are introducing with version 10! You can pull down version 10 and give these features, and more, a try by using our Free Trial. If you have questions or want to see a demo, just head to geeksforgeeks.org and click the "Request Demo" button on the right-hand side of the screen.
Enjoy!
Mike (@mikewo) is the Site Reliability Engineering Manager for SentryOne, working on cloud-based products, services, and related technologies. Mike has over 20 years of experience in the industry, and for the last decade has focused on cloud technologies. He was one of the first Microsoft Azure MVPs, first recognized in 2010, and has been awarded the MVP each year since. Mike also blogs at mvwood.com.