SQL Sentry v7 Beta: First Look

It’s been a while since my last post. Yes, we’re still here (as you well know if you follow us on Facebook or Twitter), we’ve just been heads down since the PASS Summit getting v7 ready to ship. It’s been a long road, but we’re releasing the public beta today!

v7 represents the culmination of almost a year of effort, and ideas going back much, much further than that. We’ve completely redone several aspects of the software, such as alerting (condition and action) configuration, and we’ve added some awesome new features like automated defrag, computer groups, and CMS support to boot. Did I mention SQL Server 2012? 😉

Terminology Changes

We’ve made some long-overdue changes to the SQL Sentry lexicon in the interest of making things clearer, and since I’ll be using these new terms, I wanted to get this out of the way first:

  • A Device is now a Computer (pretty sure I just heard a collective HOORAY! – trust me, we had our reasons for devices, but we won’t get into that here) 😉
  • The former Global node is now the Shared Groups node
  • The SQL Sentry Console is now the SQL Sentry Client
  • The SQL Sentry Server Service is now the SQL Sentry Monitoring Service

Computer Groups

The first thing existing users will notice when they open the client is the new Shared Groups node at the very top of the Navigator. This node represents your entire SQL Sentry environment organized by Site. It is called “shared” because every SQL Sentry user sees exactly the same view here. The user-specific device registrations and groups (formerly Global) have been moved and renamed to Local Groups to better reflect what they actually are. You can still configure server-specific settings here and below, but not global settings – those are now set only at the Shared Groups root node.

Sites have always been there to enable logical partitioning of servers and monitoring services. For example, if your HQ is in Atlanta, but you have 100 SQL Servers in Miami and 200 SQL Servers in New York, you might install one monitoring service in Miami, and two in New York. You would create a site for each location, and place the monitoring services in the appropriate site so that they only monitor the servers in their location.

In v7, you can now easily apply special alerting rules to the servers in Miami and New York, rather than having to touch each server to override global alerting settings:

[Screenshot: Computer Groups]

In addition, you can create unlimited nested child groups in each site, and – you guessed it – apply specialized rules to those groups as well. The inheritance works exactly as it always has in SQL Sentry: you start at the highest level (Shared Groups), then override those global settings as needed at lower levels. Previously, alerting and settings configuration looked like this:

  • Global
    • Computer
      • SQL Server
        • Object (job, report, etc.)

Now it looks like this:

  • Shared Groups
    • Site
      • [Child Group] [,…n] 
        • Computer
          • SQL Server
            • Object

As you can see, the ability to group servers can dramatically reduce the alerting configuration required for many environments.

Custom Object Groups

Being able to click on a group node in the Navigator and easily change settings for a bunch of servers at once is great, but it’s inherently limited by the fact that a computer node can only exist in one group at a time in the navigator. What if you want to have another set of rules for servers that effectively “cuts across” navigator groups? For example, “All QA Servers” in both Miami and New York?

This is easy to do with custom groups. You simply create a new group by double-clicking the Object Groups node in the navigator, add the QA servers to it, then adjust the settings:

[Screenshot: Object Groups]

Similarly, if you wanted to disable Runtime Threshold alerts for all transaction log backup jobs, you can easily search for the jobs using a name pattern, use Shift + left-click to highlight and add several at once, add the Runtime Max condition, then select “Disable.”
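
The client handles that search for you, but for context, here is roughly the same name-pattern match expressed as a query against msdb. This is just a sketch, and the ‘%Log Backup%’ pattern is a hypothetical naming convention:

    -- Find candidate transaction log backup jobs by name pattern.
    -- '%Log Backup%' is a hypothetical convention; adjust to your job names.
    SELECT name, enabled
    FROM msdb.dbo.sysjobs
    WHERE name LIKE N'%Log Backup%'
    ORDER BY name;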

Automated Defragmentation

Your first thought here may be, “I already have scripts that perform automated defragmentation, why do I need a tool?” Good question! Here are three compelling reasons:

  • Manageability
  • Visibility
  • Defrag Speed

Manageability

There are several great scripts out there that many DBAs use to perform automated defrag. They can get the job done, but the main issue is that they are all, well, scripts. Configuring exactly which databases and indexes are defragmented, and when, can be a challenging and time-consuming task, especially if you are talking about tens or hundreds of SQL Servers. Manual script changes and multiple jobs on each server are typically required.
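
For comparison, here is a minimal sketch of the kind of hand-rolled script this replaces, built on sys.dm_db_index_physical_stats and ALTER INDEX. The 10% and 30% thresholds are common conventions rather than recommendations, and the loop covers the current database only (partitioned indexes are not handled):

    -- Minimal hand-rolled defrag loop for the current database.
    DECLARE @sql nvarchar(max);

    DECLARE frag_cur CURSOR FAST_FORWARD FOR
        SELECT N'ALTER INDEX ' + QUOTENAME(i.name) + N' ON '
             + QUOTENAME(SCHEMA_NAME(o.schema_id)) + N'.' + QUOTENAME(o.name)
             + CASE WHEN ps.avg_fragmentation_in_percent >= 30
                    THEN N' REBUILD;' ELSE N' REORGANIZE;' END
        FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
        JOIN sys.indexes AS i ON ps.object_id = i.object_id AND ps.index_id = i.index_id
        JOIN sys.objects AS o ON i.object_id = o.object_id
        WHERE ps.avg_fragmentation_in_percent >= 10  -- reorg at 10%, rebuild at 30%
          AND ps.index_id > 0                        -- skip heaps
          AND o.is_ms_shipped = 0;

    OPEN frag_cur;
    FETCH NEXT FROM frag_cur INTO @sql;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        EXEC sys.sp_executesql @sql;
        FETCH NEXT FROM frag_cur INTO @sql;
    END
    CLOSE frag_cur;
    DEALLOCATE frag_cur;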

With our new Fragmentation Manager module, just like everything else in SQL Sentry, you can start at the top and work your way down. For example, if you have 20 SQL Servers, you can set a default global defrag schedule of 2am for all servers at once by enabling it at the Shared Groups level:

[Screenshot: Global defrag schedule]

So in 30 seconds or less, you’ve configured enterprise-wide defrag!

DISCLAIMER: I am NOT recommending that you do this, as every environment is different, and you’d of course want to disable any existing defrag jobs first. I’m just letting you know what is possible. 😉

Typically you’ll want to enable Fragmentation Manager at the SQL Server instance level by right-clicking the instance in the navigator pane, or clicking the Enable button on the new Indexes tab inside Performance Advisor.

Once you’ve enabled one or more defrag schedules, if you view the “Defragmentation Schedule” sample event view, or the calendars for any of those servers, you’ll see defrag instances show up alongside other events:

[Screenshot: Defragmentation Schedule event view]

You can of course drag-and-drop to move them. But what if you have a 100GB index on one of the servers that really needs to be analyzed and defragged separately? You simply select the index and override the inherited schedule:

[Screenshot: Index-level schedule override]

It’s that easy. Everything is point-and-click, and since the SQL Sentry monitoring service manages all of the defrag tasks, there are no scripts or jobs required.

Visibility

Once you’ve enabled the Fragmentation Manager module on a SQL Server, you’ll see a new Fragmentation tab appear inside Performance Advisor:

[Screenshot: Fragmentation tab]

This tab has tons of good information about your indexes, including 6 charts showing disk and buffer space, used and wasted, both in total and at the index level. The purpose of this tab is not only to let you know the state of fragmentation on a server, but also to help you make good decisions about how and when to defrag your indexes, adjust fill factors, or even change index definitions. One of the coolest charts on this tab is Index Space Usage (center bottom) – it shows you exactly how much of an index is on disk and in buffer over time, and how much disk and buffer space is wasted due to non-full pages.
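
If you want to sanity-check those buffer numbers yourself, the raw data is visible through SQL Server’s buffer DMVs. Here is a minimal sketch for the current database (not necessarily how Performance Advisor collects it, and it requires the VIEW SERVER STATE permission):

    -- Buffer pool pages per index, plus free space on those pages ("wasted" buffer).
    SELECT
        OBJECT_NAME(p.object_id)                           AS table_name,
        i.name                                             AS index_name,
        COUNT(*)                                           AS buffered_pages,
        COUNT(*) * 8 / 1024                                AS buffered_mb,
        SUM(CAST(b.free_space_in_bytes AS bigint)) / 1048576 AS wasted_buffer_mb
    FROM sys.dm_os_buffer_descriptors AS b
    JOIN sys.allocation_units AS au
        ON b.allocation_unit_id = au.allocation_unit_id
    JOIN sys.partitions AS p
        ON (au.type IN (1, 3) AND au.container_id = p.hobt_id)
        OR (au.type = 2       AND au.container_id = p.partition_id)
    JOIN sys.indexes AS i
        ON p.object_id = i.object_id AND p.index_id = i.index_id
    WHERE b.database_id = DB_ID()
    GROUP BY p.object_id, i.name
    ORDER BY buffered_pages DESC;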

There are also 3 new alerting conditions: Defrag Started, Completed, and Failure, so you can be as informed as you want to be regarding the status of your SQL Sentry defrag operations.

Speed

No, we haven’t invented some magical new higher-performance technology for analyzing and defragging your indexes… however, we have come up with a unique approach that can dramatically speed up your regular defrag process and reduce the maintenance window required for defrag – by allowing more than one concurrent defrag operation:

[Screenshot: Max concurrent defrag operations]

If your disk system can handle it, why not run multiple analysis or defrag tasks in tandem? Most systems we’ve tested have no problem running 2 or 3 concurrent defrag ops, especially when indexes are split across multiple data files and disks. An op can be an analysis, a reorg, or a rebuild. Currently this setting is capped at 5 for safety. I recommend starting with 2 concurrent ops on a test server and seeing how it performs. With the Performance Advisor dashboard and Disk Activity views, you can easily assess the performance impact of increasing the number of concurrent defrag ops.

Alerting Enhancements

In addition to group-based alerting configuration, many other major improvements have been made in the area of alerting:

  • You can now configure multiple actions of the same type for the same condition! For example, you can have 3 different Send Email actions for the Job Failure condition, each with different alert targets (users or groups), different rulesets, and different alert windows.
  • What’s this, “windows”? Yes, that’s right, you can now set exactly when contacts should be alerted using configurable ranges of time, for example “Business Hours” or “Weekends.” You can even create compound windows that combine multiple windows together.
  • We no longer list all conditions by default, only those that are in effect. This can dramatically reduce the noise when viewing and configuring alerts.
  • Inherited conditions/actions are displayed in one pane, and conditions/actions set at the current level are in another (Explicit).
  • Since there can now be multiple levels of inheritance with groups, we show you exactly where the inherited settings are coming from via the Object column.
  • You can choose to Disable, Override, or Combine with an inherited condition action. Combine works just as it sounds – you can set the same action again at the current level, but leave the inherited action in effect.

Together, I think you’ll find that these changes make for the most flexible and robust alerting system we’ve ever had.

Performance Advisor Dashboard Enhancements

Aside from various cosmetic improvements, the two primary new features on the dashboard are NUMA support and mirroring queue monitoring. When monitoring a NUMA system, you’ll notice that both the Windows and SQL Server memory charts are now split to show exactly how much memory is allocated to and used by each NUMA node. In addition, page life expectancy history is also shown for each node. When monitoring a system acting as a mirroring principal, a mirror, or both, the Send and/or Redo Queues are shown on the same chart previously used to show backup/restore activity.
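
Those per-node and per-database numbers come from counters SQL Server itself exposes, so you can cross-check them outside the dashboard if you like. A quick sketch (not necessarily how the product collects them):

    -- Page life expectancy per NUMA node (instance_name is the node id), in seconds.
    SELECT instance_name AS numa_node, cntr_value AS ple_seconds
    FROM sys.dm_os_performance_counters
    WHERE [object_name] LIKE '%Buffer Node%'
      AND counter_name = 'Page life expectancy';

    -- Mirroring send/redo queue sizes per database, in KB.
    SELECT instance_name AS database_name, counter_name, cntr_value AS queue_kb
    FROM sys.dm_os_performance_counters
    WHERE [object_name] LIKE '%Database Mirroring%'
      AND counter_name IN ('Log Send Queue KB', 'Redo Queue KB');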

Beta Download

I’ve really only scratched the surface. Please take the beta for a spin, and let us know what you think – we want your feedback!

As always, upgrading your existing SQL Sentry environment to the beta, and from the beta to v7 RTM, is fully supported. Be sure to take a backup of your current SQL Sentry database first. Rolling back for any reason is easy: uninstall the beta, restore the database backup, then reinstall the previous version and point it at the database. No settings will be lost.
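
Here is a minimal backup and restore sketch for the repository. The database name SQLSentry and the backup path are assumptions, so substitute your own:

    -- Before upgrading: back up the SQL Sentry repository database.
    BACKUP DATABASE [SQLSentry]                       -- assumed name; use yours
    TO DISK = N'D:\Backups\SQLSentry_PreV7Beta.bak'   -- assumed path
    WITH INIT, CHECKSUM, STATS = 10;

    -- To roll back: uninstall the beta, restore this backup, then reinstall
    -- the previous version and point it at the restored database.
    RESTORE DATABASE [SQLSentry]
    FROM DISK = N'D:\Backups\SQLSentry_PreV7Beta.bak'
    WITH REPLACE, STATS = 10;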

Greg is founder and Chief Scientist at SentryOne. He is a Microsoft developer by background and has been working with SQL Server since the mid 90s. He is perpetually engaged in the design and development of SQL Sentry, Plan Explorer and other SentryOne solutions for optimizing performance on the Microsoft Data Platform.
