Paul van Oordt SQL Services
Freelance SQL Server specialist and troubleshooter, Utrecht / Antwerp
+31 627400408 - WhatsApp - info@vanoordt.nl
What to know about SQL Server performance
Performance problems are well known in database systems. It is very common for an application to initially work well and then become slower, sometimes gradually, sometimes dramatically with many timeouts or deadlocks. This leads to dissatisfied users and rising costs for hardware, licenses or cloud contracts. Whether you experience the problem now, or you are trying to avoid it, the information on this page can help you make SQL Server run well.
Regarding my services: I have tuned and troubleshot hundreds of systems. I always work closely with the customer's employees, and my intention is to help you swiftly. Knowledge transfer is key; I have no interest in staying for a long time. Typically an assignment takes a single day, sometimes a bit more. I deliver a report with findings and recommendations that gets you started on performance improvement.
Below I will give some general findings about SQL Server performance, and then some possible actions to improve it. But first, some remarks on ...
SQL Server performance on a physical or virtual box and in the cloud (IaaS, PaaS)
Almost everything in the following sections applies to SQL Server on a physical or virtual machine, on premise or with an external party, or on infrastructure as a service in the cloud.
Most of it also applies to platform as a service Azure SQL Database.
A big difference between physical and virtual systems is of course the ease of scaling up, which is why a performance problem often presents itself differently. On a physical system there may be an interruption of service, whereas on virtual servers, and certainly in the cloud, this can often be prevented by adjusting the resources and paying the corresponding bills.
Cloud computing offers many benefits, but can be expensive. Based on my experience with SQL Server workloads and resources, most systems can scale down considerably after just a few days of tuning. A good performance scan is the first step and an excellent investment.
General findings about SQL Server performance
Poor performance can have all kinds of causes. I divide them into three categories, roughly according to decreasing impact.
- the way SQL Server is used, the data model and the code
- the indexing of tables
- the platform
With this I mean that by adding extra memory or CPUs (platform) you might make the system a few times faster. Properly indexing a problem query often produces a gain of 10 to 100 times. And it is no exception that rewriting bad code makes it a factor of 1000 faster. This is all very approximate of course, but it is what emerges from the hundreds of systems that I have worked on. And yes, there are always multiple factors involved, interacting in all sorts of ways.
The use of SQL Server
So, the biggest performance problems are usually caused by incorrect programming (querying) of SQL Server. There are many ways to do it wrong.
See my recommendations for SQL Server development for good programming of SQL Server.
If you follow those recommendations you will get code that is understandable and easy to maintain, and that usually performs well too, and keeps doing so.
Good code is scalable code.
Some of the ways to do it wrong are:
- ✗ thinking and programming procedurally instead of set-based, most visible through the use of while loops, with or without cursors
- ✗ performing work in multiple queries that could be done in a single query, for example selecting data into variables and subsequently modifying and inserting them, where a single insert ... select would do (see the sketch after this list)
- ✗ excessive use of temporary tables to store intermediate results
- ✗ use of scalar functions where simple expressions of built-in functions are sufficient, or where an inline table valued function can be used
- ✗ use of scalar functions where a persisted computed column is possible
- ✗ use of a poor data model that does not adequately reflect the regularities of the domain
- ✗ retrieving too much data to the client
- ✗ performing work on the client that could better be done by SQL Server, or vice-versa, doing work in SQL Server that should be done by the client
- ✗ use of unparametrized code, from client code or using dynamic sql, which increases the number of compilations
- ✗ keeping transactions open too long
- ✗ use of a needlessly high transaction isolation level, for example serializable where read committed is sufficient
- ✗ wrapping indexed columns in expressions so that an index cannot be used
- ✗ careless use of data types that prevent statistics from being used
- ✗ overruling the optimizer with hints
- ✗ sending a string of comma-separated values from the client that SQL Server then has to transform into a table, instead of using table-valued parameters
Again, to see how it's done well: ✓ recommendations for SQL Server development.
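To illustrate the first two points, here is a minimal sketch, with hypothetical tables dbo.Orders and dbo.OrdersArchive, of a row-by-row pattern and the single set-based statement that replaces it:

-- Procedural pattern: a cursor copies qualifying rows one at a time
DECLARE @id int, @amount decimal(10,2);
DECLARE cur CURSOR FOR
    SELECT OrderId, Amount FROM dbo.Orders WHERE OrderDate < '20220101';
OPEN cur;
FETCH NEXT FROM cur INTO @id, @amount;
WHILE @@FETCH_STATUS = 0
BEGIN
    INSERT INTO dbo.OrdersArchive (OrderId, Amount) VALUES (@id, @amount);
    FETCH NEXT FROM cur INTO @id, @amount;
END
CLOSE cur;
DEALLOCATE cur;

-- Set-based equivalent: one INSERT ... SELECT that the optimizer can process far more efficiently
INSERT INTO dbo.OrdersArchive (OrderId, Amount)
SELECT OrderId, Amount
FROM dbo.Orders
WHERE OrderDate < '20220101';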
Indexing of tables
Individual queries that are well written can still be slow, especially when querying large tables. These queries can often be tuned by creating the right indexes. With good indexing the size of a table doesn't really matter, and vice versa: the larger the table, the more important good indexing becomes, and the bigger the performance penalty if it is lacking.
Indexing tables by hand requires a thorough understanding of how the SQL Server engine works. This gives the best results, because all index features can be included: clustered and non-clustered, filtered, included columns, the fill factor, columnstore indexes and indexed views. (There are also XML and spatial indexes.)
SQL Server also makes index suggestions. You will find these in the query plan, in the missing index statistics and via the Database Engine Tuning Advisor. Never just create an index without looking at what is already there. SQL Server does not prevent you from creating redundant or even identical indexes. It is often the case that an existing index can be adjusted by adding key or included columns. Also know that sometimes dubious indexes are suggested, such as on non-selective columns, that some missing indexes are never found, for instance when a trivial plan is used, and that the estimated performance gain is sometimes very much off.
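Before creating a suggested index, first look at what already exists on the table. A minimal sketch, assuming a hypothetical table dbo.Orders, that lists each index with its key and included columns:

SELECT i.name AS index_name,
       i.type_desc,
       c.name AS column_name,
       ic.is_included_column
FROM sys.indexes AS i
JOIN sys.index_columns AS ic
  ON ic.object_id = i.object_id AND ic.index_id = i.index_id
JOIN sys.columns AS c
  ON c.object_id = ic.object_id AND c.column_id = ic.column_id
WHERE i.object_id = OBJECT_ID('dbo.Orders')
ORDER BY i.index_id, ic.key_ordinal;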
The platform
You will want to have some basic information about the platform, at least about the following three resources: CPU, memory and I/O.
If the platform suffers from a memory shortage, you can of course add extra memory, but if you have not looked carefully at the way in which SQL Server is used and at the indexing of tables, it will only give limited results. There is also a chance that you will purchase additional core licenses, or use a more expensive cloud tier, while rewriting or tuning some queries solves the problem better, cheaper and more sustainably.
Note that it is possible for a system to have abundant resources and still be slow. This happens in case of excessive blocking, but also if parallelism is not used properly.
Addressing SQL Server performance issues
Performance problems can be so complicated, and the toolkit is so extensive, that I will not try to give a general recipe. Instead I discuss a couple of indicators and actions that may be part of the tuning process.
The workload
It is important to know which workload (queries) causes a slow system, and whether the slowness is mainly the result of high use of resources such as CPU, or of blocking.
You can find a snapshot of current activity most easily by using the sp_who2 procedure. It shows the resources used per connection, such as CPU and I/O, and also any blocking.
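A similar snapshot, including the statement text and the blocking session, can be taken directly from the dynamic management views. A minimal sketch (the filter on session_id is only a rough way to exclude system sessions):

SELECT r.session_id,
       r.status,
       r.blocking_session_id,
       r.wait_type,
       r.cpu_time,
       r.logical_reads,
       t.text AS statement_text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id > 50;   -- roughly: user sessions only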
An overall view can be obtained through the dynamic management view sys.dm_os_wait_stats. It shows, for example, whether there are CPU queues (SOS_SCHEDULER_YIELD and a high signal_wait_time), whether there is a lot of blocking (look at the number, total duration and maximum duration of all waits of type LCK_M_x), and whether there are memory or I/O issues (PAGEIOLATCH_SH, PAGEIOLATCH_EX). This gives a very global picture of the state of the SQL Server since the last restart. Note that these numbers can be strongly influenced by, for example, (maintenance) tasks that run outside of production hours. If in doubt, use a script that returns the values for a single hour, or even better, for a single query.
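A minimal sketch of a query over this view, listing the largest waits since the last restart; the exclusion list of benign wait types is far from complete:

SELECT TOP (20)
       wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms,
       max_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN ('SLEEP_TASK', 'LAZYWRITER_SLEEP',
                        'SQLTRACE_BUFFER_FLUSH', 'XE_TIMER_EVENT',
                        'BROKER_TASK_STOP', 'CHECKPOINT_QUEUE')   -- some benign waits
ORDER BY wait_time_ms DESC;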
Query tuning
Queries that take a long time to complete or consume a lot of resources can be found in the Activity Monitor, in the procedure cache or in the Query Store (which you must first enable for this). You can replay those queries in SQL Server Management Studio and tune them by rewriting, adding indexes, etcetera. It may be important that the connection settings are equal to those in the production environment.
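A minimal sketch for retrieving the most CPU-intensive statements from the procedure cache; the Query Store reports in Management Studio offer similar information:

SELECT TOP (20)
       qs.total_worker_time / qs.execution_count AS avg_cpu_time,
       qs.execution_count,
       qs.total_logical_reads,
       qs.total_elapsed_time,
       SUBSTRING(t.text, qs.statement_start_offset / 2 + 1,
                 (CASE WHEN qs.statement_end_offset = -1
                       THEN DATALENGTH(t.text)
                       ELSE qs.statement_end_offset END
                  - qs.statement_start_offset) / 2 + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS t
ORDER BY qs.total_worker_time DESC;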
Analyze and tune the most expensive queries. See if the code is written according to the principles in recommendations for SQL Server development. If that is the case, or there is (currently) no possibility to rewrite the code, view the query plan. Pay particular attention to:
- Indications of missing indexes, or the occurrence of spool operators. You could create the missing index, but be aware of the following:
- Indexes speed up some queries, but also need to be updated and therefore delay modifications.
- Adding missing indexes indiscriminately will lead to redundant indexes, or indexes that could easily be merged, and therefore unnecessary overhead.
- In certain situations, an index can also significantly delay a select query. Filtered indexes can then be the solution. Sometimes the code needs to be adjusted for this.
- Warnings concerning hash and sort spills, or large differences between the estimated and actual number of rows.
- Regularly updating statistics may prevent this.
- Most of the time, however, the problem is that the code is written in such a way that the optimizer cannot use the statistics. Rewriting the code is then necessary.
- Warnings concerning implicit type conversions. This usually requires simple rewrites of the code. For example, a variable is defined as nvarchar, while the column being compared is of varchar type. Changing the variable can have a major effect on performance (see the sketch after this list).
- Heavy nested loops. This is often caused by bad cardinality estimates by the optimizer, but can also be the result of the way in which the join was written.
- Unordered scans of large indexes or tables. This may be solved with an index, but often it is an inevitable consequence of the query, sometimes simply because of the desired functionality, for example if there is a clause col like '%searchterm%'.
- If the plan is serial and takes a lot of CPU, look at what prevents a parallel plan from being chosen. That could be the use of scalar functions, or settings or query hints. It can also be that the optimizer sees no profit. And perhaps the optimizer is wrong there.
- If inefficient plans are created due to very uneven distribution of cardinality for different values (parameter sniffing), consider the optimize for unknown or recompile hints.
- If an inefficient plan is used and there is no control over the query, consider 'plan guides' (or Query Store hints in SQL Server 2022). Using a plan guide you can force the optimizer to use hints, for example optimize for, parameterization or maxdop, or even an entire plan.
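To illustrate the implicit conversion warning mentioned above, a minimal sketch with a hypothetical dbo.Customers table whose CustomerCode column is of type varchar(20):

DECLARE @code nvarchar(20) = N'ABC123';
-- nvarchar variable versus varchar column: the column is implicitly converted,
-- which can turn an index seek into a scan
SELECT CustomerId FROM dbo.Customers WHERE CustomerCode = @code;

DECLARE @code2 varchar(20) = 'ABC123';
-- matching data types: an index on CustomerCode can be used for a seek
SELECT CustomerId FROM dbo.Customers WHERE CustomerCode = @code2;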
Tuning the platform
You will want to do a couple of things at the platform level, even though the result is usually less dramatic. I give some quick checks and actions here.
- Check the platform:
- Do not run interactive applications on the server, especially do not keep Activity Monitor in SQL Server Management Studio open.
- Look at the other services and whether they need to run on this system.
- Limit the memory usage of SQL Server to prevent other processes from running out of memory, which in the end is not good for SQL Server either.
- Do not use anti-virus software on the server (protect the server in another way), or else exclude the SQL Server data files.
- Exclude SQL Server data files from text indexing.
- Check that the processor works in high performance mode.
- Check that you are using recent versions of Windows and SQL Server and an adequate database compatibility level.
- Check buffer cache memory pressure via the Page Life Expectancy. This is a value that can drop quickly, but cannot rise quickly. A value below 300 is usually seen as problematic, but you actually want the value to be consistently much higher. You can monitor this figure using Windows Performance Monitor or through the dynamic management view sys.dm_os_performance_counters (a query sketch follows after this checklist). When adding memory, do not forget to adjust the max server memory setting.
- Check I/O contention per database file (a query sketch follows after this checklist). The standard is a maximum of 20 ms per read/write; files with a lot of I/O should meet this standard. Note that these averages can be strongly influenced by, for example, a checkdb or reindex, especially if there is little other I/O. When in doubt, use a script that gives the results for an hour. Possible actions:
- If there are poor read times and Page Life Expectancy is not particularly high (for example, regularly below 1000), add more memory, or consider using data compression to achieve better memory use and I/O.
- Place the most important database files on faster disks, for example SSD.
- Divide the I/O load over more spindles.
- Check the I/O subsystem (or contact the SAN administrator).
- Check the possibilities of write caching.
- Check the processor load, in particular the processor queues and scheduler yields. Current CPU queues can be seen as runnable tasks through sys.dm_os_schedulers. This is a number that can change quickly.
Possible actions:
- Adjust the max degree of parallelism (maxdop) server setting so that it matches CPU count, hyperthreading, NUMA architecture and workload. For example, the use of parallelism is less appropriate for a transaction processing system than for a decision support system. Adjust the cost threshold for parallelism setting for the same reason.
- Check the time spent on compilation, and whether this might be reduced by adding more memory. Although memory for the procedure cache cannot be configured directly, memory pressure as described above has an adverse effect on it. If there are many plans in the plan cache that have only been used once, the optimize for ad hoc workloads server option or the parameterization forced database setting may help. It is also possible that the compilation workload can only be tackled by rewriting code.
- Adding processor cores is an option, but generally also requires additional licenses and is therefore expensive. Especially in a virtual environment it may be possible to temporarily use extra cores and licenses, until your workload is well written and tuned. The ability to scale up and down is an important consideration when choosing a platform, as it allows you to immediately convert the results of performance tuning into lower platform costs.
- Check the size and use of the tempdb and whether it is well equipped for its load. Possible actions:
- Check what tempdb is primarily used for, user objects, internal objects or the version store. Look for possibilities to reduce the load, for instance the use of non-durable memory-optimized tables.
- Make sure files are big enough so that they do not have to grow at runtime.
- Place the data file(s) on faster disks, such as SSD.
- Use multiple, equally sized data files, possibly on the same disk, if many temporary objects (user or internal) are created, altered or dropped.
- Check the database files, in particular the amount of free space and the autogrow settings. These are the recommendations for both data and log files:
- Make sure that the files are large enough to accommodate, for example, the growth of the coming year.
- Set the growth increment to 64MB. Up until SQL Server 2014, the defaults were 1MB and 10% for data and log files respectively.
- Regularly check whether the files still meet the first rule and otherwise grow them at a quiet moment.
- Do not shrink files except in exceptional cases, and make sure the auto_shrink database setting is off.
- Use Instant File Initialization, which by the way does not work for log files.
- Improve the concurrency:
- If you see high values in the wait statistics for shared locks (LCK_M_S or LCK_M_IS) and perhaps also for exclusive locks (LCK_M_X and especially LCK_M_IX), you may benefit from the read committed snapshot isolation database setting. This setting ensures that readers (read committed) and writers do not block each other. The tempdb is used to store versions of modified data, regardless of whether it is being read or not. This takes resources too, so consider and test it well.
- If this setting is already used and you still have wait times for shared locks, check the used isolation levels and whether these are really needed.
- In most cases, deadlocks are difficult to solve directly. The best way to deal with these is to reduce blocking, which is realized by proper use of transactions and good query performance.
- Check the indexing. Indexes can greatly improve query performance, but they delay data modifications.
- Remove duplicate or otherwise redundant indexes.
- Remove unused indexes, or indexes that add little to other existing indexes.
- Check the fill factor. For historical reasons, this is sometimes set at 80%, making memory use and I/O 20% less efficient. 100% is fine for most indexes.
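A minimal query sketch for the buffer cache check above, reading the Page Life Expectancy from sys.dm_os_performance_counters:

SELECT object_name, counter_name, cntr_value AS page_life_expectancy
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Page life expectancy'
  AND object_name LIKE '%Buffer Manager%';   -- on NUMA hardware there are also per-node values under Buffer Node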
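A minimal sketch for the I/O contention check, showing per-file average latencies; these are averages since the last restart, so the caveat about maintenance jobs applies here too:

SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.name AS file_name,
       vfs.num_of_reads,
       vfs.io_stall_read_ms / NULLIF(vfs.num_of_reads, 0) AS avg_read_ms,
       vfs.num_of_writes,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id AND mf.file_id = vfs.file_id
ORDER BY avg_write_ms DESC;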
(c) Paul van Oordt, www.vanoordt.nl
Last modified: 28 December 2022