Why Design Impacts Performance on Microsoft SQL Server


Did you know that when creating a Microsoft SQL Server database application, certain design choices can hurt its performance?

In a recent SentryOne webinar, I shared common Architectural Choices That Impact Performance. If you’re a DBA, do you have scripts you can run to perform these checks? If you’re a developer, do you know which common design choices are actually pitfalls? If your answer is “no,” this webinar can help.

Webinar Recap

During the discussion, I covered a variety of topics, including:

  • What are the basics of tables and indexes? How do data type choices directly impact the size, I/O burden, and performance of applications?
  • What are the most common database design issues that surface in application code? How can you detect them?
  • What are the primary techniques database designers use to improve performance when optimizing a database design?

I also shared several demos showing the kinds of anti-patterns that are common in applications you buy on the open market, how to detect them, and, when possible, how to refactor them.

 If you missed the live webinar, click here to view it on-demand for free.


I received some great questions from attendees. I’ve compiled all the questions and shared the answers below in case they are helpful to you.

 

Webinar Q&A

Q: Are multiple data files on multiple disks still needed with the advent of SAN/SSD technology?

A: There are still reasons to create multiple data files, usually around user latch contention on schema locks, which is a non-I/O problem. For OLTP production databases, I usually go with 4 or 8 data files, assuming the server has a lot of CPU cores available. There are also some additional I/O-specific reasons to have multiple data files, but those usually only come into play at the very high end of performance issues and are highly dependent on the vendor of your SAN product.
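As a minimal sketch of how you might spread a user database across multiple data files: the database name, logical file names, paths, and sizes below are all hypothetical placeholders, not the webinar's recommendations.

```sql
-- Hypothetical example: adding a second data file to a user database.
-- Adjust names, paths, SIZE, and FILEGROWTH for your environment.
ALTER DATABASE SalesDB
ADD FILE (
    NAME = SalesDB_Data2,
    FILENAME = 'D:\Data\SalesDB_Data2.ndf',
    SIZE = 8GB,
    FILEGROWTH = 1GB
);
-- Repeat for files 3 and 4, keeping SIZE and FILEGROWTH identical so
-- SQL Server's proportional-fill algorithm spreads allocations evenly.
```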

 

Q: How can you have additional or different indexes on a Read Only Replica of an AG?

A: If you need different indexes, you’d need to keep the servers synchronized through other means, like transactional replication. A readable secondary CAN have different auto-created statistics, but not different indexes. For more information, check out this resource.

 

Q: Which function is best: ISNULL or COALESCE?

A: It depends. High-level details can be found in my colleague Aaron Bertrand’s blog post here. For a more in-depth discussion, check out this resource. Also, when you wrap a column in any function inside a search-argument predicate, for example WHERE ISNULL(emp_id, 0) = 283, the query optimizer will not be able to perform an index seek on that column.
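To illustrate the non-SARGable pattern above, here is a hedged sketch; the table dbo.Employees and its columns are hypothetical, with emp_id taken from the example in the answer.

```sql
-- Non-SARGable: wrapping the column in ISNULL forces a scan,
-- because the optimizer cannot seek on ISNULL(emp_id, 0).
SELECT emp_id, emp_name
FROM dbo.Employees
WHERE ISNULL(emp_id, 0) = 283;

-- SARGable rewrite: an index on emp_id can now be used for a seek.
-- Equivalent here because ISNULL(emp_id, 0) = 283 can only be true
-- when emp_id = 283 (283 is not 0, so the NULL branch never matches).
SELECT emp_id, emp_name
FROM dbo.Employees
WHERE emp_id = 283;
```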

 

Q: For tempdb, is it still applicable to create as many data files as you have CPU cores in a box? If a server has more than 12 or 16 cores, how many files should we create?

A: That recommendation originated in the early days of multi-core servers. The common recommendation today is to start with 4 data files, then measure. If performance still suffers, go to 8 and measure again. If your experience is like mine, performance improvements plateau at 8 data files; industry-wide, maybe 10-15% of workloads benefit from going above eight. AND REMEMBER: keep all of the files the same size and ensure that auto-growth expands them all by the same amount at the same time. See trace flag 1117, which ensures that data file growth happens across all data files in a filegroup simultaneously.
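The "start with 4 equally sized files" advice might look like the following sketch; the logical file names beyond the default tempdev, the drive path, and the sizes are assumptions to adapt to your server.

```sql
-- Illustrative only: resize the default tempdb file, then add files
-- with identical SIZE and FILEGROWTH so proportional fill works.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 4GB, FILEGROWTH = 512MB);

ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2,
          FILENAME = 'T:\TempDB\tempdev2.ndf',
          SIZE = 4GB, FILEGROWTH = 512MB);
-- ...repeat for tempdev3 and tempdev4, then measure before adding more.
```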

 

Q: In physical and virtual environments, do we need to disable hyper-threading where we have SQL Server?

A: It depends, but you will usually see more good than harm from enabling hyper-threading. Your mileage may vary significantly depending upon the brand and type of CPU in your hardware. As with all performance tips: test, Test, TEST!

 

Q: With solid-state drives, do you still suggest multiple database files?

A: Yes, definitely. You can get contention on a single file for non-I/O reasons, and having multiple files can mitigate that risk, especially in applications with large numbers of concurrent users and operations. For more detail, check out this resource. Note, however, that this type of contention is possible in any very busy database, not just tempdb. That’s why I recommend multiple data files for tempdb, plus all of your very busy user databases.
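One way to spot this kind of single-file contention while it is happening is to look for page-latch waits. A minimal sketch using the sys.dm_os_waiting_tasks DMV (the threshold and interpretation are up to you):

```sql
-- Sessions currently waiting on page latches (a non-I/O wait type).
-- resource_description has the form database_id:file_id:page_id;
-- database_id 2 is tempdb, and many waiters piling up on the same
-- page suggest allocation contention that extra files can relieve.
SELECT session_id,
       wait_type,
       wait_duration_ms,
       resource_description
FROM sys.dm_os_waiting_tasks
WHERE wait_type LIKE 'PAGELATCH%';
```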

 

Q: Do data compression and TDE work together? Or do they create poorer performance?

A: You can use them together, but you shouldn’t. TDE completely scrambles the repeating patterns you might see within the data in order to encrypt it. Data compression, on the other hand, works by leveraging repeating patterns, so you get no benefit at all from using it while TDE is also enabled.
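Before enabling compression, you can check whether TDE is already in play with a quick query against the sys.dm_database_encryption_keys DMV, sketched below:

```sql
-- Databases with an encryption key; encryption_state 3 means the
-- database is fully encrypted with TDE.
SELECT DB_NAME(database_id) AS database_name,
       encryption_state
FROM sys.dm_database_encryption_keys;
```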

Additional Resources

If you haven’t already, I hope you’ll view our Architectural Choices That Impact Performance webinar. Here are some additional resources for further best practices.

 

Connect with me online! Facebook | Twitter | LinkedIn!

Kevin (@kekline) serves as Principal Program Manager at SentryOne. He is a founder and former president of PASS and the author of popular IT books like SQL in a Nutshell.

Kevin is a renowned database expert, software industry veteran, Microsoft SQL Server MVP, and long-time blogger at SentryOne. As a noted leader in the SQL Server community, Kevin blogs about Microsoft Data Platform features and best practices, SQL Server trends, and professional development for data professionals.
