Paul van Oordt SQL Services
Freelance SQL Server specialist and troubleshooter, Utrecht / Antwerp
+31 627400408 - WhatsApp - info@vanoordt.nl
Here is a brief list of recommendations for SQL Server administration. Some applications will require much more configuration and management than this list indicates. Other applications, including large and mission-critical ones, violate many of these best practices and still function just fine (although there may be unrecognized risks). This page will help you navigate your way through SQL Server administration. It is also the disclaimer: there is no single recipe that is good for all applications. (If there were, it would have been implemented out of the box.) Only make changes if you know what you are doing and always test them carefully. There is no guarantee that the recommendations on this page are correct in your case.
Determine which edition of SQL Server is required. In general, the Enterprise Edition is only needed for very large and critical 24/7 applications. Sometimes the free Express Edition suffices. The Web Edition is only available to third-party hosting service providers. See the SQL Server Feature Comparison.
Determine the licensing model: per server + CAL or per core. See SQL Server pricing.
Determine whether you are installing on a physical or virtual machine. Note that in the first case you have to license all physical cores for SQL Server. This means that you will not want to have too many cores in your server, which makes it difficult to scale up later. Also consider Azure SQL Managed Instance.
Determine whether software assurance is required. This entitles you to new software versions and to the use of a standby server.
Research whether Azure SQL Database is an option. It can be a good and worry-free choice, especially for new, not too heavy database applications. For existing applications, investigate whether migration is practical.
Schedule upgrades; it is generally a good idea to be on the latest or at least a recent version. There are all kinds of features that bring improvements, even without software changes.
An important choice is whether your SQL Server will run on a physical machine, on a virtual machine managed by you, or, for example, as an Azure virtual machine in the cloud. In all cases, sufficient resources are of course important for a smoothly functioning system. Scaling the server up or down is easier in Azure, and usually also with other virtual machines, than with physical hardware. Make sure that there is always the possibility to scale down; this way you can immediately monetize improvements to the performance of your software. Of course, the possibility to scale up is also important.
There is no simple formula for what sufficient hardware is. It depends on many factors such as the size of the (active) data, the number of concurrent sessions and the quality of the data model, queries and indexing. Once the server is operational, it is fairly easy to determine whether there are sufficient resources. See recommendations for SQL Server performance.
Sufficient memory is a simple and economical way to ensure a smooth running database server.
Disk IO is important for SQL Server performance, and more so if there is a shortage of memory. For all databases except tempdb, the read performance of the data files is more important than the write speed. For databases with many modifications, the write performance of the log file is also important. Disk IO is easy to measure, both from SQL Server and with specific tools such as diskspd. Based on the results, improvements can be made if necessary.
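As an illustration, a query along these lines, using the sys.dm_io_virtual_file_stats DMV, shows cumulative IO and average latency per database file since the last restart; what counts as slow depends on the workload and the storage:

    SELECT DB_NAME(vfs.database_id) AS database_name,
           mf.name AS logical_file_name,
           mf.type_desc,
           vfs.num_of_reads,
           vfs.num_of_writes,
           vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_latency_ms,
           vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_latency_ms
    FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    JOIN sys.master_files AS mf
      ON mf.database_id = vfs.database_id AND mf.file_id = vfs.file_id
    ORDER BY vfs.io_stall_read_ms + vfs.io_stall_write_ms DESC;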
Determining the number of CPU cores needed is not easy. A complication is that with a per-core license, all physical cores must be licensed for SQL Server. With a virtual machine it is easier to start with a few cores and add and license extra cores when necessary.
Explore the possibility of running your Windows servers with SQL Server as Azure virtual machines in the cloud. In case of virtualization, provide dedicated resources if possible.
Installing SQL Server is not too complicated, but remember that it is an incredibly scalable product that works well for small and very large deployments. Some things therefore have to be configured manually. It is impossible to install a good system out of the box for every application.
Install patches as needed. See the Latest updates.
Grant the account under which SQL Server runs the Perform Volume Maintenance Tasks privilege (instant file initialization). This speeds up the creation and growth of data files. If you want to change the service account, use the SQL Server Configuration Manager, which sets the required permissions; don't do this with services.msc.
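On SQL Server 2016 SP1 and later you can verify whether instant file initialization is active for the service account with a query like this:

    SELECT servicename, service_account, instant_file_initialization_enabled
    FROM sys.dm_server_services;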
Make sure the tempdb performs well. Place the files on fast disks, perhaps SSD, and avoid having to grow the files at runtime. It can be beneficial to create multiple tempdb data files with equal size and growth settings, even on the same physical disks.
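A sketch of resizing the existing tempdb data file and adding a second one with identical settings; the logical name tempdev is the default for the first data file, and the path, sizes and number of files are placeholders to adjust to your own system:

    ALTER DATABASE tempdb
    MODIFY FILE (NAME = N'tempdev', SIZE = 2048MB, FILEGROWTH = 512MB);

    ALTER DATABASE tempdb
    ADD FILE (NAME = N'tempdev2', FILENAME = N'T:\tempdb\tempdev2.ndf',
              SIZE = 2048MB, FILEGROWTH = 512MB);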
After migrating to a higher SQL Server version, the compatibility level of the databases remains unchanged, unless it was so low that it is no longer supported by the new version. Plan to increase the compatibility level; this can have unexpected performance consequences. The compatibility level can easily be reset at any time.
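Checking and raising the compatibility level is a one-liner; the database name is a placeholder and level 160 corresponds to SQL Server 2022:

    SELECT name, compatibility_level FROM sys.databases;
    ALTER DATABASE [YourDatabase] SET COMPATIBILITY_LEVEL = 160;  -- 160 = SQL Server 2022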
Change some of the server configuration settings where appropriate; a few commonly adjusted ones are shown below.
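A sketch of commonly adjusted settings; the values shown are placeholders, and the right numbers depend entirely on the hardware and the workload:

    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max server memory (MB)', 28672;           -- leave headroom for the OS
    EXEC sp_configure 'cost threshold for parallelism', 50;
    EXEC sp_configure 'max degree of parallelism', 8;
    RECONFIGURE;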
Plan carefully how permissions will be granted. There are several options and it quickly becomes confusing. It is a good idea to group users in Windows Active Directory if possible and create logins for these groups in SQL Server.
If possible, use only Windows authentication. Avoid granting access to individual Windows accounts; only grant access to Windows groups. The logged-in user is still known to SQL Server for auditing purposes.
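A minimal sketch, with a hypothetical domain group and database, of granting read access via a Windows group rather than individual accounts:

    CREATE LOGIN [CORP\Sales_Readers] FROM WINDOWS;

    USE [YourDatabase];
    CREATE USER [CORP\Sales_Readers] FOR LOGIN [CORP\Sales_Readers];
    ALTER ROLE db_datareader ADD MEMBER [CORP\Sales_Readers];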
If you use mixed authentication, make sure to use strong passwords for SQL logins and optionally rename the sa account. Avoid applications that use the sa account.
Limit the number of logins with sysadmin or control server privileges. In general, use the principle of minimum required permissions.
Disable the guest account in your production databases.
Use row-level security, for example if data from multiple customers is stored in one database (multi-tenant).
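A minimal row-level security sketch for a multi-tenant table, assuming a hypothetical dbo.Orders table with a TenantId column and an application that sets the tenant id in the session context (sp_set_session_context) after connecting:

    -- predicate function: a session only sees rows of its own tenant
    CREATE FUNCTION dbo.fn_tenant_predicate (@TenantId int)
    RETURNS TABLE
    WITH SCHEMABINDING
    AS
    RETURN SELECT 1 AS allowed
           WHERE @TenantId = CAST(SESSION_CONTEXT(N'TenantId') AS int);
    GO
    CREATE SECURITY POLICY dbo.TenantFilter
    ADD FILTER PREDICATE dbo.fn_tenant_predicate(TenantId) ON dbo.Orders
    WITH (STATE = ON);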
If necessary, change the TCP port on which SQL Server listens.
Consider the different encryption forms offered by SQL Server and Windows.
Implement some form of auditing if desired.
Change the default growth settings for database files to avoid file growth at runtime. Databases created before SQL Server 2016 may still have the old settings. This is especially important for log files, because they are always initialized with zeros and because growth in small increments can lead to too many virtual log files. It is generally not a good idea to regularly shrink database files.
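Growth settings are changed per file with ALTER DATABASE; the database name, logical file name, size and growth increment below are placeholders:

    ALTER DATABASE [YourDatabase]
    MODIFY FILE (NAME = N'YourDatabase_log', SIZE = 8192MB, FILEGROWTH = 1024MB);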
Don't put development, test, acceptance, or demo databases on production servers; administration will be complicated in all sorts of ways if you do. No SQL Server license is needed for non-production databases: just run them on the freely downloadable SQL Server Developer Edition. For non-production databases, the simple recovery model will usually be appropriate. Remember to change this when a production database is restored as a test database.
Take databases that are not in use offline, so that they are not included in backup and other management routines, and so that you know for sure that they are not being used. Other databases may be put in read-only mode.
Determine the recovery point objective (RPO), the acceptable data loss, and the recovery time objective (RTO), the acceptable downtime, both for emergencies and for planned maintenance.
In most cases a daily full database backup is practical. In addition, differential backups can be made for large databases.
Databases for which little data loss is acceptable should have the full or bulk-logged recovery model, in which the transaction log is preserved for a possible restore. The transaction log must then be backed up frequently, for example every 10 minutes.
Backup files must of course be stored in a safe place and also cleaned up again after some time. Adequate backup and cleanup routines can be created with a SQL Server Maintenance Plan. There are also all kinds of other tools available, ranging from external backup software to open source scripts such as those from Ola Hallengren. Also consider backup via URL to Microsoft Azure.
Volume Shadow Copy Service (VSS) backups are also possible, but they do not provide point-in-time restore. If backup software is used, check whether SQL native backup or VSS is used.
Make sure you also have up-to-date backups of the master, msdb and model databases. The tempdb need not and cannot be backed up.
In most cases backup compression is a good choice.
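A sketch of a native full and log backup with compression and checksums; the database name and paths are placeholders, and in practice the file names would contain a timestamp:

    BACKUP DATABASE [YourDatabase]
    TO DISK = N'D:\Backup\YourDatabase_full.bak'
    WITH COMPRESSION, CHECKSUM, INIT;

    BACKUP LOG [YourDatabase]
    TO DISK = N'D:\Backup\YourDatabase_log.trn'
    WITH COMPRESSION, CHECKSUM;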
Do a restore test regularly, including tail-log backups and application connectivity. Make sure you have a plan; a lot can be prepared for a restore to another server. If the acceptable downtime would be exceeded, provide a high availability solution such as a failover cluster or a (Basic) Availability Group.
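The outline of such a restore, with placeholder names and paths: first secure the tail of the log (if the log file is still accessible), then restore the full backup and the log backups in sequence, recovering only with the last one:

    -- on the damaged server: capture the tail of the log (leaves the database in RESTORING state)
    BACKUP LOG [YourDatabase]
    TO DISK = N'D:\Backup\YourDatabase_tail.trn'
    WITH NORECOVERY, NO_TRUNCATE;

    -- on the target server: restore the full backup and the logs, recover at the end
    RESTORE DATABASE [YourDatabase]
    FROM DISK = N'D:\Backup\YourDatabase_full.bak' WITH NORECOVERY, REPLACE;
    RESTORE LOG [YourDatabase]
    FROM DISK = N'D:\Backup\YourDatabase_log.trn' WITH NORECOVERY;
    RESTORE LOG [YourDatabase]
    FROM DISK = N'D:\Backup\YourDatabase_tail.trn' WITH RECOVERY;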
Make sure dbcc checkdb is run regularly. This can also be done with a Maintenance Plan.
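For example (the database name is a placeholder):

    DBCC CHECKDB (N'YourDatabase') WITH NO_INFOMSGS, ALL_ERRORMSGS;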
Index fragmentation is usually not as big a problem as it is thought to be. Still, it can be beneficial to perform reindexing every now and then, also to make free space in an index available. Ideally, use a script that takes action based on established fragmentation. The alternative is a Rebuild Index or Reorganize Index step in a maintenance plan, which can also be fine if there are sufficient resources. Determine, possibly per index, whether you do a rebuild or a reorganize. The first is more thorough, but takes a long exclusive lock on the index that prevents other use (except in the Enterprise Edition).
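A sketch of establishing fragmentation with sys.dm_db_index_physical_stats for the current database; the 30 percent and 1000-page thresholds are common rules of thumb rather than hard limits, and the index and table names in the commented maintenance statements are placeholders:

    SELECT OBJECT_NAME(ips.object_id) AS table_name,
           i.name AS index_name,
           ips.avg_fragmentation_in_percent,
           ips.page_count
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
      ON i.object_id = ips.object_id AND i.index_id = ips.index_id
    WHERE ips.avg_fragmentation_in_percent > 30
      AND ips.page_count > 1000;

    -- ALTER INDEX [IX_YourIndex] ON [dbo].[YourTable] REORGANIZE;  -- lighter, always online
    -- ALTER INDEX [IX_YourIndex] ON [dbo].[YourTable] REBUILD;     -- thorough; ONLINE = ON requires Enterprise Edition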
Statistics (approximate information about the size of tables and the distribution of values in columns, used when creating query plans) are updated automatically. Nevertheless, it is often wise to also update them as part of the maintenance tasks, for example at night or during the weekend.
Both index defragmentation and statistics updates can be performed with a Maintenance Plan.
Regularly monitor important values such as memory pressure (page life expectancy), CPU usage, IO performance and blocking.
Make sure that you are notified of job failures. Configure a mail profile for this. Create a job 'testnotify' with which you can test notification very easily. Also make sure that you are notified when errors of, for example, severity 19 and higher occur.
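A sketch of an operator plus an alert for severity 19, assuming Database Mail and the mail profile are already configured; the operator name and address are placeholders, and similar alerts can be added for severities 20 through 25:

    EXEC msdb.dbo.sp_add_operator
         @name = N'DBA team', @enabled = 1, @email_address = N'dba@example.com';

    EXEC msdb.dbo.sp_add_alert
         @name = N'Severity 19', @severity = 19, @include_event_description_in = 1;

    EXEC msdb.dbo.sp_add_notification
         @alert_name = N'Severity 19', @operator_name = N'DBA team',
         @notification_method = 1;  -- 1 = e-mail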
Check the missing index statistics every now and then.
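For example via the missing index DMVs; the weighting in the ORDER BY is just one common heuristic:

    SELECT TOP (20)
           mid.statement AS table_name,
           mid.equality_columns, mid.inequality_columns, mid.included_columns,
           migs.user_seeks, migs.avg_total_user_cost, migs.avg_user_impact
    FROM sys.dm_db_missing_index_details AS mid
    JOIN sys.dm_db_missing_index_groups AS mig
      ON mig.index_handle = mid.index_handle
    JOIN sys.dm_db_missing_index_group_stats AS migs
      ON migs.group_handle = mig.index_group_handle
    ORDER BY migs.user_seeks * migs.avg_total_user_cost * migs.avg_user_impact DESC;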
Check deadlocks in the system_health extended events session every now and then.
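The deadlock reports can be pulled from the system_health ring buffer with a query along these lines; events that have already aged out of the ring buffer are only available in the session's .xel files:

    SELECT xed.value('(@timestamp)[1]', 'datetime2') AS deadlock_time_utc,
           xed.query('.') AS deadlock_graph
    FROM (
        SELECT CAST(st.target_data AS xml) AS target_data
        FROM sys.dm_xe_session_targets AS st
        JOIN sys.dm_xe_sessions AS s
          ON s.address = st.event_session_address
        WHERE s.name = N'system_health'
          AND st.target_name = N'ring_buffer'
    ) AS t
    CROSS APPLY t.target_data.nodes('RingBufferTarget/event[@name="xml_deadlock_report"]') AS x(xed);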
Inspect the SQL Server error logs regularly. Optionally create and schedule a job that performs a weekly log rollover. Successful backup operations can be excluded from the error log by using trace flag 3226.
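The rollover itself is a single call, and trace flag 3226 can be enabled at runtime (add -T3226 as a startup parameter to make it survive a restart):

    EXEC sp_cycle_errorlog;      -- start a new error log file
    DBCC TRACEON (3226, -1);     -- suppress successful-backup messages instance-wide (until restart)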
The msdb database contains historical information about, among other things, the execution of jobs and backups. Schedule a cleanup of this information.
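A sketch of such a cleanup, keeping for example three months of history:

    DECLARE @cutoff datetime = DATEADD(MONTH, -3, GETDATE());
    EXEC msdb.dbo.sp_delete_backuphistory @oldest_date = @cutoff;
    EXEC msdb.dbo.sp_purge_jobhistory @oldest_date = @cutoff;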
Monitor disk space and file growth. Sudden filling up or growth of log files is often an indication that something is wrong, for example a long open transaction, or an error in replication or high-availability.
Enable the Query Store for the most important databases, giving you an overview of the workload, including the most expensive and suddenly deteriorating queries.
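Enabling it per database is straightforward; the database name is a placeholder, and the defaults for capture mode and retention are usually a reasonable starting point:

    ALTER DATABASE [YourDatabase] SET QUERY_STORE = ON;
    ALTER DATABASE [YourDatabase] SET QUERY_STORE
        (OPERATION_MODE = READ_WRITE, QUERY_CAPTURE_MODE = AUTO);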
Regularly run the open source script sp_blitz, which gives a picture of the state of the SQL Server instance, including many of the recommendations on this page.
Be careful with scaling out in the form of, for example, replication. Use it when you really need it. It introduces a lot of complexity.
Consider stretch databases for archiving old data.
(c) Paul van Oordt, www.vanoordt.nl