
An Overview of Intelligent Query Processing in SQL Server

When you issue a query to SQL Server or Azure SQL, the engine tries to build an optimal query plan by weighing choices such as whether to use an index. Many of SQL Server’s query plans are based on its best guess of what will happen at run time when your query executes. Even when SQL Server guesses right, as your data changes (especially as the volume of data grows), a once-optimal plan can end up performing so poorly that it drags your whole system’s performance down.

The suite of optimizations that make up Intelligent Query Processing (IQP) prevent many sub-optimal plans by looking at:

  • The amount of data being processed
  • The level of accuracy required
  • The query features used
  • The type of query being run

It’s all automatic. To have these optimizations applied in SQL Server 2019 or Azure SQL, all you have to do is set your database’s compatibility level to 150. Not all of these optimizations are available in earlier versions of SQL Server.
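If your database was restored or migrated from an older version, the change is a single ALTER DATABASE statement. Here’s a minimal sketch (the database name is a placeholder):

    -- Raise the database compatibility level to enable the IQP feature set
    -- (MyDatabase is a placeholder name).
    ALTER DATABASE MyDatabase SET COMPATIBILITY_LEVEL = 150;

    -- Confirm the change.
    SELECT name, compatibility_level
    FROM sys.databases
    WHERE name = 'MyDatabase';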

Figure 1 shows the main IQP features and where they’re applied during query planning. The rest of this article describes the purpose of each feature and how it does its job.

Figure 1: The three main features of Intelligent Query Processing

Integrating Runtime Information

Adaptive Joins is an example of an IQP tool that modifies your plan based on runtime feedback. Normally, SQL Server decides what type of join to use at planning time, based on a best guess of how many rows will be joined at run time. With Adaptive Joins, SQL Server makes the decision at run time, based on the actual number of rows involved. If it turns out there are a lot of rows in the record sets being joined, SQL Server dynamically chooses a hash join. If, on the other hand, 10 or fewer rows are being joined to a larger, indexed record set, SQL Server dynamically chooses a nested loop join.
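Nothing special is needed in the query itself; the Adaptive Join operator only shows up in the actual execution plan. As a rough sketch (the Sales tables below are placeholders, not the article’s example), an ordinary join query is enough, and a database-scoped configuration option can switch the behavior off for troubleshooting:

    -- Placeholder query: with an actual execution plan captured, the join may
    -- appear as an "Adaptive Join" operator that picks a hash join or a
    -- nested loop join at run time based on the real row count.
    SELECT o.CustomerID, SUM(o.OrderTotal) AS TotalSpent
    FROM Sales.Orders AS o
    JOIN Sales.Customers AS c
        ON c.CustomerID = o.CustomerID
    GROUP BY o.CustomerID;

    -- Adaptive joins can be disabled (or re-enabled) per database.
    ALTER DATABASE SCOPED CONFIGURATION SET BATCH_MODE_ADAPTIVE_JOINS = OFF;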

In Figure 2, a Plan Explorer diagram shows the analysis of a query using an Adaptive Join. Three tables are joined using an Adaptive Join before the result is delivered to the SELECT operator.

Figure 2: An Overview of Sentry One Plan Explorer Showing an Adaptive Join

Memory Grant Feedback (MGF) is another example of using runtime information to modify the plan. While Adaptive Joins works on the currently executing query, MGF modifies the plan so that the query runs faster the next time it executes.

The goal of MGF is to optimize the use of available memory across all concurrent queries. At planning time, SQL Server makes a best guess at how much memory to set aside to hold all of a query’s rows. If SQL Server guesses wrong and grants too much memory, other queries may not be allowed to execute concurrently; if it grants too little, SQL Server spills rows to disk and your query runs slower.

MGF looks at the memory a query actually required when it ran and uses that to adjust the plan’s memory grant so the query performs better the next time it runs. MGF is smart enough to recognize that the amount of memory a query needs may depend on the parameters passed to it, and it will disable itself for a query whose memory requirements can’t be predicted.
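A sketch of the knobs involved, assuming the defaults at compatibility level 150 (the query and table name below are illustrative only):

    -- Memory grant feedback is on by default at compatibility level 150;
    -- it can be toggled per database for row-mode and batch-mode plans.
    ALTER DATABASE SCOPED CONFIGURATION SET ROW_MODE_MEMORY_GRANT_FEEDBACK = ON;
    ALTER DATABASE SCOPED CONFIGURATION SET BATCH_MODE_MEMORY_GRANT_FEEDBACK = ON;

    -- An individual query can also opt out with a hint, for example when its
    -- memory needs swing widely with its parameters (placeholder table name).
    SELECT CustomerID, OrderDate, OrderTotal
    FROM Sales.Orders
    ORDER BY OrderTotal DESC
    OPTION (USE HINT('DISABLE_ROW_MODE_MEMORY_GRANT_FEEDBACK'));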

Adjusting Queries Based on Purpose

Batch Mode on Rowstore is used when an analytical query (a query that processes large numbers of rows to produce a summary result) is run on a table optimized for transactions (a table without columnstore indexes, for example).

IQP’s Batch Mode on Rowstore enables batch mode, which supports processing multiple values at once, for most rowstore indexes. It works on on-disk heaps and B-Tree indexes but not, apparently, on hash indexes on memory-optimized tables. The name of this tool reflects batch mode’s history: it was originally designed to process columnstore indexes and was later extended to handle rowstores.
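As a sketch (table names are placeholders), the summary query below is a typical batch mode on rowstore candidate; the database-scoped option controls the feature, and the hint forces the old row-mode behavior when you want to compare the two plans:

    -- Batch mode on rowstore can be turned on or off per database.
    ALTER DATABASE SCOPED CONFIGURATION SET BATCH_MODE_ON_ROWSTORE = ON;

    -- Analytical query over an ordinary rowstore table; adding the hint
    -- below forces row mode for comparison.
    SELECT ProductID, COUNT(*) AS OrderLines, SUM(LineTotal) AS Revenue
    FROM Sales.OrderLines
    GROUP BY ProductID
    OPTION (USE HINT('DISALLOW_BATCH_MODE'));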

IQP’s Approximate Query Processing is applied when a query operates against a huge number of rows and accuracy isn’t critical (e.g., when returning a result to a dashboard). If, for example, you’re counting distinct values over billions of rows, Approximate Query Processing will give you a faster answer that’s close to the exact one.
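For instance, swapping COUNT(DISTINCT …) for the APPROX_COUNT_DISTINCT function is the whole change (the table here is a placeholder); Microsoft documents the approximate result as typically within about 2% of the exact count:

    -- Exact count: must de-duplicate every value, which can be memory-hungry.
    SELECT COUNT(DISTINCT UserID) AS ExactUsers
    FROM dbo.PageViews;

    -- Approximate count: much cheaper on very large tables, typically within
    -- roughly 2% of the exact answer.
    SELECT APPROX_COUNT_DISTINCT(UserID) AS ApproxUsers
    FROM dbo.PageViews;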

Adjusting Queries Based on Query Components

Several IQP tools are designed to improve the performance of queries using functions or table variables.

Scalar UDF Inlining, for example, is applied to queries that call user-defined functions (UDFs) returning a single data value. By default, such a function is executed once for every row processed. Scalar UDF Inlining effectively treats the function like a subquery, substantially reducing the number of times the function is called. There are some limitations to be aware of, though, and Scalar UDF Inlining isn’t available in Azure SQL.
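As a sketch (the function and table are hypothetical), a simple scalar UDF like this is a typical inlining candidate, and sys.sql_modules reports whether SQL Server considers it inlineable:

    -- Hypothetical scalar UDF that would otherwise run once per row.
    CREATE OR ALTER FUNCTION dbo.DiscountedPrice
        (@Price money, @Pct decimal(5,2))
    RETURNS money
    AS
    BEGIN
        RETURN @Price * (1 - @Pct / 100);
    END;
    GO

    -- At compatibility level 150, eligible UDFs are folded into the calling
    -- query; is_inlineable shows whether this one qualifies.
    SELECT OBJECT_NAME(object_id) AS UdfName, is_inlineable
    FROM sys.sql_modules
    WHERE object_id = OBJECT_ID('dbo.DiscountedPrice');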

Interleaved Execution is invoked when a query uses a function that returns a table. Like Adaptive Joins, rather than restructuring the query, Interleaved Execution uses runtime information to improve query processing.

If a table function contains multiple statements, SQL Server can’t determine at planning time how many rows the function will return at run time. As a result, SQL Server assumes the function will return 100 rows. If, at run time, the actual number of rows returned is much higher or lower, that plan may not be optimal.

With Interleaved Execution, when the query is about to execute the table function, SQL Server pauses the main query, executes the function, and uses the actual number of rows the function returns to plan the rest of the main query’s processing.

Certain conditions restrict when Interleaved Execution will be applied. For example, the main query can’t be modifying data and the function must use runtime constants (SUSER_SNAME or GETDATE, for example).
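Here’s a sketch of the kind of multi-statement table-valued function involved (the objects are placeholders). At compatibility level 150, the function is executed first and its real row count feeds the rest of the plan; the hint mentioned in the comment restores the old fixed-guess behavior for comparison.

    -- Hypothetical multi-statement table-valued function; without interleaved
    -- execution, SQL Server would guess 100 rows for its result.
    CREATE OR ALTER FUNCTION dbo.OrdersSince (@Cutoff datetime2)
    RETURNS @Result TABLE (OrderID int, CustomerID int, OrderTotal money)
    AS
    BEGIN
        INSERT INTO @Result (OrderID, CustomerID, OrderTotal)
        SELECT OrderID, CustomerID, OrderTotal
        FROM Sales.Orders
        WHERE OrderDate >= @Cutoff;
        RETURN;
    END;
    GO

    -- The real row count from the function drives the rest of the plan;
    -- USE HINT('DISABLE_INTERLEAVED_EXECUTION_TVF') would bring back the
    -- fixed 100-row guess for comparison.
    SELECT r.CustomerID, SUM(r.OrderTotal) AS Total
    FROM dbo.OrdersSince('2019-01-01') AS r
    GROUP BY r.CustomerID;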

Table Variable Deferred Compilation uses a strategy similar to Interleaved Execution but is applied to queries that use a table variable. As with multi-statement functions that return a table, SQL Server can’t determine at plan time how many rows a table variable will hold at run time. Like Interleaved Execution and Adaptive Joins, Table Variable Deferred Compilation waits until run time to determine the actual number of rows in the table variable and then uses that number to plan the rest of the query.
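A minimal sketch, again with placeholder tables: before compatibility level 150 the optimizer assumed a table variable held a single row, while deferred compilation waits until the variable has been populated.

    -- Deferred compilation uses the actual number of rows in @Recent when the
    -- statements that reference it are compiled.
    DECLARE @Recent TABLE (OrderID int, CustomerID int, OrderTotal money);

    INSERT INTO @Recent (OrderID, CustomerID, OrderTotal)
    SELECT OrderID, CustomerID, OrderTotal
    FROM Sales.Orders
    WHERE OrderDate >= '2019-01-01';

    SELECT c.CustomerName, SUM(r.OrderTotal) AS Total
    FROM @Recent AS r
    JOIN Sales.Customers AS c
        ON c.CustomerID = r.CustomerID
    GROUP BY c.CustomerName;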

Optimize – Manually, Automatically, or Both

These automatic optimizations won’t eliminate the need for monitoring and diagnosis tools like SolarWinds SQL Sentry or SolarWinds Database Performance Monitor (DPM). They can, however, help eliminate some potential problems, so you can focus on the problems in need of your attention.

 

Peter Vogel is an ActualTech Media Contributor, expert system architect and principal in PH&V Information Services. PH&V provides full-stack consulting from UX design through object modeling to database design. Peter also writes courses and teaches for Learning Tree International.
