
5 Surprising Lessons Every SQL Server Generalist Has to Learn

Webinar Recap

Maybe you are an accidental DBA who has recently been assigned responsibility for a SQL Server, on-premises or in the cloud. Maybe you are a generalist who can keep SQL Server up and running but doesn’t know much about how it works. Or maybe you are just starting your career and want to learn more. Whatever situation applies to you, there are a handful of very important lessons you will have to learn if you administer a SQL Server.

I recently teamed up with Denny Cherry (@mrdenny), of Denny Cherry & Associates Consulting (DCAC, @dcacco), for a webinar on the 5 Surprising Lessons Every SQL Server Generalist Has to Learn. We shared five surprising situations where what you don’t know will hurt you. We covered:

  • How the default settings in Microsoft SQL Server can hurt you
  • Why security is so frequently overlooked but is more important than ever
  • Quick checks that can tell you if your indexes are working properly
  • How not planning for availability and recovery can hurt the organization
  • Why preventative maintenance requires extra attention from you

During the webinar, we discussed the native features within SQL Server that explain these concepts, as well as free community tools from SentryOne and other providers that can make your job easier.


If you missed the webinar, you can view it on-demand here.

During the webinar, we received a lot of great questions, which are answered below in case they are helpful to you, over and above the conversation in the webinar.

Webinar Q&A

Q: How many of these default settings apply to VMs which are used frequently?

A: The default settings should still be changed, whether SQL Server runs in a VM or on a physical server. There is no difference between the two where the default settings are concerned.

 

Q: If you have intense SQL Server Reporting Services (SSRS) and/or SQL Server Integration Services (SSIS) activities running on your SQL server, would you still lock pages in memory?

A: Only if I’d fully assessed their memory needs in advance, exempted that amount from SQL Server’s memory allocation, and then locked pages in memory. For example, you could track the memory consumption PerfMon counters for SSRS and/or SSIS to determine their peak memory needs. Then you can reduce the Max Server Memory setting in SQL Server by the combined amount of memory needed by SSRS and SSIS.
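
To make that concrete, here is a minimal sketch. The figures are hypothetical (a 64 GB server where SSRS and SSIS peak at roughly 8 GB combined); substitute the numbers from your own PerfMon capture:

    -- Hypothetical: 64 GB total, ~8 GB reserved for SSRS/SSIS, ~6 GB for the OS,
    -- leaving 50 GB (51200 MB) for SQL Server.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max server memory (MB)', 51200;
    RECONFIGURE;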

 

Q: Given how much they are coming to the forefront, what are your thoughts on the impact of legal regulations on data (such as GDPR) and information governance on SQL Server and the DBA role?

A: Massively important. There are several on-demand webinars on data governance, GDPR, and HIPAA in the SentryOne Resource Library. Be sure to check them out.

 

Q: Is WRITELOG wait type related to autogrowth settings?

A: WRITELOG is mostly about log flushing and writing transactions to the log. For autogrowth, the specific wait stat you want is PREEMPTIVE_OS_WRITEFILEGATHER. Be sure to check out the wait stat details at https://www.sqlskills.com/help/waits/writelog/ and refer to https://www.sqlskills.com/help/waits/ for deeper info on all of the wait stats.

 

Q: Is there a RAM size that’s too small to consider using Large Pages (or LPIM)?

A: The Lock Pages in Memory privilege is a prerequisite for large pages, not the other way around. In general, you probably wouldn’t use either with 4 GB or less of RAM on the Windows server.
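
If you want to verify whether locked and large pages are actually in use, one way (assuming you have VIEW SERVER STATE permission) is to query sys.dm_os_process_memory:

    -- Non-zero values indicate LPIM and/or large pages are in effect.
    SELECT locked_page_allocations_kb,
           large_page_allocations_kb
    FROM sys.dm_os_process_memory;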

 

Q: Are New Technology File System (NTFS) 64k clusters still important in AWS, Azure, etc.?

A: Typically, you cannot tune those parameters on the various cloud platforms.

 

Q: Is it best to configure the Windows OS Processor Scheduling for programs or for background services?

A: I would strongly recommend that you don’t tinker with that setting. Leave it at the default. I’ve never seen it help and have on occasion seen it harm performance. It’s a setting that was made for the very early days of Windows and SQL Server. I consider it obsolete and retained only for backward compatibility.

 

Q: We have certain databases whose log files are much bigger than the data files. Is that wrong in all cases?

A: No, that can be normal depending on the workload of the application and how often the log is backed up. If there are a lot of updates, for example, then the log will grow but the database won’t. Typically, you do not want multiple log files (although they don’t really hurt performance much). You can probably do some tuning to ensure that the log files don’t grow too large. For example, you can set the database to the SIMPLE recovery model (bearing in mind that this gives up point-in-time recovery) or take more frequent log backups, which should help keep the logs small. Also, you may have sloppy transactions that aren’t explicitly opened and closed, or that are closed long after they should be. That sort of sloppy coding, or perhaps poor transaction isolation level choices, can cause your log to grow significantly.
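
As a hedged illustration of those two options (SalesDB is a hypothetical database name):

    -- Option 1: give up point-in-time recovery if you don't need it.
    ALTER DATABASE SalesDB SET RECOVERY SIMPLE;

    -- Option 2: stay in FULL recovery, but run this via a SQL Server Agent
    -- job every 15 minutes or so, so log space can be reused after each backup.
    BACKUP LOG SalesDB TO DISK = N'D:\Backups\SalesDB_log.trn';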

 

Q: A query is using all the CPUs on the server. Does that mean other queries are waiting for CPU?

A: Yes, the other queries will be slower because the single query is using all the CPUs. However, don’t assume that single query is the whole story without proper research; other activities may be consuming a lot of that CPU. For example, some monitoring tools are notorious for consuming a lot of CPU.

 

Q: How can I check how many CPUs are being used by a query? I only see threads.

A: You can see which scheduler each thread is using. The DMVs can then be used to trace the scheduler number back to a physical CPU number; each scheduler maps to a single core. The sketch below shows one way to do it.
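
A minimal sketch, assuming session_id 53 is the query you’re investigating (a hypothetical value):

    -- Map a session's worker threads to schedulers and on to CPU IDs.
    SELECT t.session_id,
           t.scheduler_id,
           s.cpu_id,          -- the logical CPU the scheduler is bound to
           s.parent_node_id   -- the NUMA node
    FROM sys.dm_os_tasks AS t
    JOIN sys.dm_os_schedulers AS s
      ON s.scheduler_id = t.scheduler_id
    WHERE t.session_id = 53;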

 

Q: How much memory does Windows Server require? Say 2008R2 and 2016?

A: Generally speaking, 4-6 GB for Windows should be enough. If there are other things running on the server besides SQL, then that number will increase.

 

Q: What is the recommended Maximum Degree of Parallelism (MAXDOP) number?

A: For an OLTP system, the recommended MAXDOP is half the number of cores in one Non-uniform Memory Access (NUMA) node of the server. SentryOne SQL Sentry is the only monitoring product on the market that explicitly tracks your NUMA node usage.
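
For example, a minimal sketch for a hypothetical server with two NUMA nodes of 16 cores each (half of one node = 8):

    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max degree of parallelism', 8;
    RECONFIGURE;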

 

Q: My SQL database resides on Solid State Drives (SSDs). Should I reindex or not?

A: Fragmentation matters much less on SSDs than on hard disk drives (HDDs). However, fragmentation still wastes a lot of space in memory, because half-empty pages take up just as much room in the buffer pool as full ones. So, I still treat reindex/rebuild questions exactly the same for SSDs as for HDDs.

 

Q: How can I rebuild indexes on a very large table actively used at all times?

A: Use a supported version of SQL Server Enterprise Edition, such as SQL Server 2012 or later, and its online index rebuild feature.
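
A hedged example, with hypothetical table and index names (ONLINE = ON requires Enterprise Edition):

    ALTER INDEX IX_Orders_OrderDate ON dbo.Orders
    REBUILD WITH (ONLINE = ON, SORT_IN_TEMPDB = ON);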

 

Q: Please advise any best practices for SQL Server consolidation.

A: SentryOne just aired two webinars about SQL Server consolidation. Visit the SentryOne Resource Library to view the on-demand webinars at your convenience.

 

Q: How do we manage the Binary Large Objects (BLOB) objects as our database continues to grow?

A: In modern versions of SQL Server, you’d want to use FILESTREAM to store BLOB data, so that you can keep the data out of the database and out of the buffer pool while still enjoying the benefits of ACID transactions and normal SQL Server backups and recovery. With FILESTREAM, you don’t have to change the code that accesses the database; it only changes how the files are stored, without requiring application changes.

 

Q: I have also read that composite keys (two or more columns) are bad. But exactly why are keys of two integer columns bad?

A: Composite keys can be bad when they are the clustered index (CI). It’s a space problem and an I/O problem, but not necessarily a performance problem. To explain further, each key value of a clustered key must also be stored in every non-clustered index (NCI) key value. So, on a heavily queried table you might have another half-dozen or dozen NCIs, which means a bloated CI key is duplicated over and over in all the NCIs. You would be better served by using a surrogate, like an INT or BIGINT column, as your CI (and primary key), then creating an NCI on your old composite key. That way, each row in each of the NCIs has a much smaller pointer to the exact record in the CI.
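
A minimal sketch of that pattern, with hypothetical table and column names:

    -- A surrogate clustered primary key keeps every NCI's row pointer narrow.
    CREATE TABLE dbo.OrderDetail
    (
        OrderDetailID bigint IDENTITY(1,1) NOT NULL,
        OrderID       int    NOT NULL,
        ProductID     int    NOT NULL,
        Quantity      int    NOT NULL,
        CONSTRAINT PK_OrderDetail PRIMARY KEY CLUSTERED (OrderDetailID)
    );

    -- The old composite key still enforces uniqueness and supports lookups.
    CREATE UNIQUE NONCLUSTERED INDEX UX_OrderDetail_OrderID_ProductID
        ON dbo.OrderDetail (OrderID, ProductID);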

 

Q: How often should I defragment indexes?

A: The frequency for defragmenting indexes depends on how quickly they become fragmented. If they don’t become fragmented quickly, then you don’t need to reindex them.
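
One way to measure how fragmented your indexes actually are is sys.dm_db_index_physical_stats; the 1,000-page cutoff below is a hypothetical threshold, since very small indexes rarely matter:

    SELECT OBJECT_NAME(ips.object_id) AS table_name,
           i.name AS index_name,
           ips.avg_fragmentation_in_percent,
           ips.page_count
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
      ON i.object_id = ips.object_id
     AND i.index_id  = ips.index_id
    WHERE ips.page_count > 1000
    ORDER BY ips.avg_fragmentation_in_percent DESC;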

 

Q: Are both rebuild and reorg blocking operations?

A: Rebuild can be online or offline. Reorg is always online.

 

Q: Does online index rebuilding affect performance?

A: Performance will be impacted during the rebuild, because the rebuild drives I/O on the server as fast as it can. There shouldn’t be any blocking, however.

 

Q: When you split tempdb into multiple files, should each file be placed on a separate drive?

A: No, there’s no need to put the files on multiple disks; they can all be placed on one disk. Multiple files are needed because they fix a contention problem on allocation pages in memory, not a throughput problem on disk.
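
A hedged example of adding one more equally sized data file (the path and sizes are hypothetical; repeat for as many files as you need, keeping them all the same size):

    ALTER DATABASE tempdb
    ADD FILE (NAME = N'tempdev2',
              FILENAME = N'T:\TempDB\tempdev2.ndf',
              SIZE = 8GB,
              FILEGROWTH = 1GB);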

 

Q: Can you share resources to aid in learning the internals of indexes?

A: I suggest diving into the “Indexes from every angle” content by Kimberly Tripp at SQLSkills.com.

 

Q: Do included columns matter in an index?

A: Yes. Missing included columns can have a major performance impact on running queries. They can help a lot with a problematic query operator called a key lookup. Be sure to read more about included columns in the SQL Server documentation.
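
A minimal sketch of a covering index, with hypothetical table and column names. The key column serves the WHERE clause, while the INCLUDE columns ride along at the leaf level so the query never has to do a key lookup:

    CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
        ON dbo.Orders (CustomerID)
        INCLUDE (OrderDate, TotalDue);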

 

Q: Do I need another index on a Foreign Key (FK) column if I already have a composite index with the FK as the first column?

A: No, you shouldn’t need another index, but only if the FK column is the first column in the composite index.

 

Q: What should the optimal number of Virtual Log Files (VLFs) be?

A: As few VLFs as possible. Modern versions of SQL Server will throw a warning if you have 1,000 VLFs or more.

 

Q: How do I reduce or reset VLF count?

A: You’ll need to shrink the log file down to its smallest size, then regrow the log file in larger chunks of space. If a large log file is needed, then manually grow the log file 8 GB at a time.
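
A hedged sketch of both counting and resetting VLFs (SalesDB and its log file name are hypothetical; sys.dm_db_log_info requires SQL Server 2016 SP2 or later):

    -- How many VLFs does the current database have?
    SELECT COUNT(*) AS vlf_count
    FROM sys.dm_db_log_info(DB_ID());

    -- Shrink the log to its smallest size, then regrow it in 8 GB steps.
    DBCC SHRINKFILE (N'SalesDB_log', 0);
    ALTER DATABASE SalesDB MODIFY FILE (NAME = N'SalesDB_log', SIZE = 8GB);
    ALTER DATABASE SalesDB MODIFY FILE (NAME = N'SalesDB_log', SIZE = 16GB);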

 

Q: Is update stats with full scan necessary with a rebuild index job or is it only recommended to be run when indexes are reorganized? Or none of these?

A: A rebuild updates stats automatically; a reorg requires you to update stats yourself afterward. Update stats with full scan usually isn’t needed. Sometimes it is, but not usually.
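
For example, after a reorganize you might run something like this (dbo.Orders is a hypothetical table):

    UPDATE STATISTICS dbo.Orders;                 -- default sampled update
    UPDATE STATISTICS dbo.Orders WITH FULLSCAN;   -- only when sampling isn't enough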

 

Q: Do backups impact SQL Server performance?

A: For I/O performance, yes; for query processing performance, no. Backups can be very expensive in terms of I/O consumption, and we have seen them slow down SQL Servers that have a lot of other I/O activity going on at the same time.

 

Q: Does RESTORE VERIFYONLY detect corruption?

A: RESTORE VERIFYONLY only checks that the backup file is valid; it does not check whether the database itself is corrupt.
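
As a hedged illustration (the file path and database name are hypothetical):

    -- Confirms the backup file is complete and readable, nothing more.
    RESTORE VERIFYONLY FROM DISK = N'D:\Backups\SalesDB.bak';

    -- WITH CHECKSUM validates page checksums while backing up, but only
    -- DBCC CHECKDB actually proves the database itself is corruption-free.
    BACKUP DATABASE SalesDB TO DISK = N'D:\Backups\SalesDB.bak' WITH CHECKSUM;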

 

Q: I have log shipping configured for my database, which has a huge table. How do I reduce the fragmentation for that huge table while keeping the log size under control? Also, when doing incremental defrag (a little bit at any given time), I have noticed that when I start the defrag process, the fragmentation level actually grows at the beginning, and then at some point it starts decreasing. Is that expected behavior?

A: Basically, you don’t. Fixing fragmentation will cause the transaction log to grow, and that data movement is logged. It is normal for fragmentation to increase at the beginning as the rows are moved around. If you do not have users on the system at the same time as when you are defragmenting the table, then turn off log shipping until your defragmentation is completed. Once done, turn it back on.

 

Q: If a database is being changed while backups are running, will the backup have all the changes made to the database up until the backup is complete?

A: Yes. When a backup finishes, the backup is a copy of the database in the state that the database is in when the backup finishes.

 

Q: Do Availability Groups (AG) in synchronous mode allow for zero data loss?

A: An AG in synchronous mode will allow for zero data loss on failover. However, if someone drops a table or commits another logical error, such as issuing a DELETE statement without a WHERE clause, an AG isn’t going to help.

 

Q: Is point-in-time recovery possible?

A: Yes, as long as your database is in the full recovery model (or the bulk-logged recovery model, though there are some restrictions there).

 

Q: What are the advantages of log shipping vs. replication?

A: Log shipping copies the entire database: every object (tables, procedures, triggers), permissions, and so on; it does not allow selective copying of database objects. Replication, by contrast, lets you choose which objects to copy.

 

Q: When an Always On Availability Group (AG) fails over in synchronous mode, are only the uncommitted transactions lost? Will the application get an error that the transaction failed?

A: Correct. The application will get disconnected and it’ll throw an error as a result.

 

Q: In the full recovery model, the restore does not stop at a particular time. For example, if the transaction log backup runs every 15 minutes, I can’t recover to a point 5 minutes into the log. It does not take the STOPAT command.

A: The STOPAT command works when restoring transaction log backups only; it won’t work on full restores. Also, if you’re in the bulk-logged recovery model, you won’t be able to stop the restore in the middle of a bulk-logged operation. You probably want to read more about how to do point-in-time recovery in the Microsoft SQL Server documentation.
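
A hedged sketch of the sequence (the names and times are hypothetical):

    RESTORE DATABASE SalesDB
        FROM DISK = N'D:\Backups\SalesDB_full.bak'
        WITH NORECOVERY;

    RESTORE LOG SalesDB
        FROM DISK = N'D:\Backups\SalesDB_log_1315.trn'
        WITH STOPAT = '2024-11-17 13:08:00', RECOVERY;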

 

Q: If DBCC CheckDB is taking too long to run every day, which parts of the check can I remove and just run once a week?

A: There’s basically nothing that can be removed. You can run DBCC CHECKDB on a secondary to remove the production load from the system, then run DBCC CHECKDB WITH PHYSICAL_ONLY on the production system. If you change CHECKDB to run weekly, it will be important to keep two weeks of backups. As an alternative, you could use DBCC CHECKTABLE, DBCC CHECKALLOC, and DBCC CHECKIDENT (on tables with IDENTITY columns), and perform individual checks on half the tables one day and the other half the next. But you’ll still want to keep lots of backups, because there’s a greater chance of corruption going undetected for a while.
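
A hedged illustration of that split (SalesDB is a hypothetical name):

    -- On the secondary: the full logical and physical checks.
    DBCC CHECKDB (SalesDB) WITH NO_INFOMSGS;

    -- On the production system: the cheaper physical-only checks.
    DBCC CHECKDB (SalesDB) WITH PHYSICAL_ONLY, NO_INFOMSGS;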

 

Q: How would you approach SQL Injection if apps are hard-coded with T-SQL and no one changes or updates them?

A: There’s no good approach if you can’t fix the application. If you can fix the application, there are many steps you can take to make the application more secure. Drop us an email to discuss this topic further or Google for blogs on the topic.

 

Q: Regarding security, what is the best practice for the default port for SQL Server?

A: Changing the SQL Server port does nothing for security. If there’s an attacker that’s trying to breach your SQL Server, changing the port number will slow them down by a second or two at most.

 

Q: Are there any tools in the SQLCAT Tiger Toolbox that you’d recommend using?

A: Denny Cherry: At DCAC we regularly use SQL Nexus to analyze the results of pssdiag captures. They are all really great tools.

Kevin Kline: I also like and use the Tiger Toolbox. My favorite tool is the Best Practices Check.

 

Q: What was the VMware network card setting mentioned in the webinar for performance?

A: Generally speaking, the VMXNET 3 NIC will be your best bet for VMs.

 

Q: What kind of protection or prevention can be taken against accidentally deleting a table?

A: Proper SQL Server permissions can prevent a user from accidentally deleting data from a table. You can also deny users direct access to the base tables and force them to use views or procedures instead. If you want to stop someone with sysadmin rights, a trigger is the only option: a DML trigger can roll back DELETE statements, and if your question refers to dropping a table, a DDL trigger can roll back a DROP TABLE.
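
A hedged sketch of both approaches (the role, table, and trigger names are hypothetical):

    -- Block direct deletes for a sample role.
    DENY DELETE ON dbo.Orders TO ReportingUsers;

    -- Block DROP TABLE database-wide, even for highly privileged users.
    CREATE TRIGGER trg_BlockDropTable
    ON DATABASE
    FOR DROP_TABLE
    AS
    BEGIN
        RAISERROR (N'Dropping tables is not allowed in this database.', 16, 1);
        ROLLBACK;
    END;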

 

Q: Can I perform a point-in-time restore partway through a transaction log? Log backups run at 1:00 PM and at 1:15 PM, but I want to get to 1:08 PM.

A: Yes, you can, as long as you’re not in the bulk-logged recovery model with a bulk-logged operation happening at that time.

Additional Resources

For more information on the topics discussed today, be sure to check out DCAC’s Database Security white paper, which covers the variety of ways in which you can properly secure your database environment, as well as Denny Cherry’s blog and Kevin Kline’s blog.

SentryOne also has a number of products that can help you solve your toughest SQL Server performance problems. Contact us today to schedule a demo to see how these tools can meet your unique needs.

Kevin (@kekline) serves as Principal Program Manager at SentryOne. He is a founder and former president of PASS and the author of popular IT books like SQL in a Nutshell.

Kevin is a renowned database expert, software industry veteran, Microsoft SQL Server MVP, and long-time blogger at SentryOne. As a noted leader in the SQL Server community, Kevin blogs about Microsoft Data Platform features and best practices, SQL Server trends, and professional development for data professionals.
