Monday, May 27, 2013

How to Size the SQL Server Memory & Why is it important?

The following blog post has 2 objectives.

 1) How to size the memory correctly ( or at least come close to correct )
 2) Why it is important to size the memory ( & not leave it at the default after installation )



First let me address item 2.

Setting an upper limit on the memory that SQL Server can use is termed sizing memory on SQL Server. One of the common mistakes made on a SQL Server database server is not sizing the memory. By default, after a typical installation, SQL Server has no upper limit set on the amount of memory it can use. In other words, after a default installation, SQL Server is configured to use any amount of memory that is available on the server. For example, if you set up SQL Server on a server with 32 GB of RAM, the default settings allow SQL Server to use the entire 32 GB. After a default installation, the memory setting of a server looks like the picture below.

The number "2147483647" just implies SQL Server has no upper limit and will use all the memory on the server.

      As I explained previously over here, SQL Server by design uses all the memory that is allocated to it. For example, if SQL Server is allocated 26 GB of RAM ( out of, say, 32 GB on the server ), it utilizes the entire 26 GB. SQL Server is designed this way so that it reduces the need to read from disk frequently and leverages the memory allocated to it as much as possible.

    So, why is it bad not to configure an upper limit? The reason is that as SQL Server starts using memory, its memory utilization keeps increasing day by day ( by design, as explained above ) and at some point leaves very little memory for the Windows operating system to function. Windows requires adequate memory for the healthy functioning of the server. When SQL Server has no upper limit, it consumes all the memory on the server and starves the operating system, slowing down the entire server. Usually under such circumstances, overall memory utilization is over 97 or 98%, with almost 90% used by SQL Server.

   When the operating system experiences memory pressure, even though SQL Server may be functioning as usual, physically performing any operation on the server ( logging in to the server, copying files, opening SQL Server client tools, etc. ) can be extremely slow. Prolonged persistence of the problem ( for a few days to a week ) can result in an unexpected shutdown / restart of the server. Like many Windows problems, the symptoms vanish for a while after a restart and come back once memory utilization climbs again.
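If you suspect this is happening, one hedged way to confirm it from inside SQL Server ( using sys.dm_os_sys_memory, available from SQL Server 2008 onwards ) is to check how much physical memory the operating system has left:

-- Rough check of OS-level memory state from within SQL Server.
SELECT total_physical_memory_kb / 1024     AS total_physical_memory_mb,
       available_physical_memory_kb / 1024 AS available_physical_memory_mb,
       system_memory_state_desc
FROM   sys.dm_os_sys_memory;

A very low available_physical_memory_mb, together with the symptoms above, points to the operating system starving for memory.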

  Now for the second part of the post. How to Size the memory?

    Though sizing the memory accurately is a task to be done meticulously, taking quite a few parameters into account, I stumbled upon a simple guide to configuring the memory needed for SQL Server and the OS to function smoothly. The article, written by Jonathan Kehayias, an industry expert, provides a simple formula in layman's terms with which you can ensure you have configured enough memory for the OS & SQL Server to run smoothly. Please refer to the article here.

   A brief explanation of the technique to size the memory is given below. The amount of memory to be reserved for the operating system is:

  1.  1 GB of memory reserved for the operating system as a base
  2.  1 GB for every 4 GB of RAM between 4 and 16 GB
  3.  1 GB for every 8 GB of RAM above 16 GB.
i.e., if you have a database server with 32 GB of RAM, then the memory to be reserved for the operating system would be

  1. 1 GB, the minimum allocation, plus
  2. 3 GB, as the 4 to 16 GB range contains 12 GB, and 12 divided by 4 ( 1 GB for each 4 GB ) is 3 GB, plus
  3. 2 GB, as the 16 to 32 GB range contains 16 GB, and 16 divided by 8 ( 1 GB for each 8 GB above 16 GB ) is 2 GB
So in total, for a 32 GB server, 7 GB will be reserved for the operating system. This implies the maximum memory allocated to SQL Server should be 25 GB. Similarly, for a 64 GB server, 10 GB should be reserved for the operating system & 54 GB allocated to SQL Server.
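As a minimal sketch, the 25 GB cap from the 32 GB example above could be applied with sp_configure ( the value is in MB, so 25 GB is specified as 25600, and 'show advanced options' must be enabled first ):

-- Enable advanced options so that 'max server memory (MB)' is visible.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Cap SQL Server at 25 GB (25600 MB), as per the 32 GB example above.
EXEC sp_configure 'max server memory (MB)', 25600;
RECONFIGURE;

The change takes effect immediately and does not require a service restart.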

The above method ensures the operating system has enough memory to function smoothly and doesn't starve for memory. One may attempt to increase the max memory allocation to SQL Server beyond what is recommended by the technique above when SQL Server itself is facing internal memory pressure. However, by doing so, one needs to ensure the operating system doesn't starve for memory. The above technique of sizing the memory is sufficient for both the OS & SQL Server in most cases.

To conclude, as a DBA, it is extremely important to ensure an upper limit for memory is configured for SQL Server, to prevent sudden performance issues.

Thursday, May 16, 2013

Xp_readerrorlog in SQL Server 2012


I have always been a big fan of xp_readerrorlog. The simple reason is that it is much faster to read the error log using a script than using SSMS. xp_readerrorlog also gives more options to filter the log, for example for specific dates or for entries containing a specific string.

In SQL Server 2012, xp_readerrorlog has undergone a minor change. If you use xp_readerrorlog the way you did before, you may encounter the following error.

For example:
 
EXEC master.dbo.Xp_readerrorlog
  0,
  1,
  '',
  '',
  '20130502 00:00',
  '20130503' 


throws an error saying

Msg 22004, Level 12, State 1, Line 0
Error executing extended stored procedure: Invalid Parameter Type


If you are like me and like to capture the results of the error log into a table, then you may get a different error.

For example:
 
INSERT INTO #read_error_log
EXEC master.dbo.Xp_readerrorlog
  0,
  1,
  '',
  '',
  '20130502 00:00',
  '20130503' 



Msg 0, Level 11, State 0, Line 0
A severe error occurred on the current command. The results, if any, should be discarded.


The reason for the error is that the third and the fourth parameters, which are passed as search strings, strictly accept the nvarchar datatype. Prefixing the literal with the letter 'N' makes the parameter nvarchar, as shown below.
 
EXEC master.dbo.Xp_readerrorlog
  0,
  1,
  N'',
  N'',
  '20130502 00:00',
  '20130503' 


The script provided below will help one read the error log quickly and effectively. The script is similar to the one shared on this blog previously over here. Instead of using sp_readerrorlog, the script below uses xp_readerrorlog with additional parameters, which makes it even more effective. sp_readerrorlog doesn't provide the time-based filters (the 5th & 6th parameters), which are the most important filters while reading the error log.
 
CREATE TABLE #read_error_log
  (
     logdate      DATETIME,
     processinfo  VARCHAR(200),
     errorlogtext VARCHAR(max)
  )

INSERT INTO #read_error_log
EXEC master.dbo.Xp_readerrorlog
  0,                -- log file number (0 = current error log)
  1,                -- log type (1 = SQL Server error log, 2 = SQL Agent log)
  N'',              -- first search string (nvarchar; empty = no filter)
  N'',              -- second search string (nvarchar; empty = no filter)
  '20130502 00:00', -- start time
  '20130503'        -- end time

SELECT *
FROM   #read_error_log
ORDER  BY logdate DESC

TRUNCATE TABLE #read_error_log

DROP TABLE #read_error_log 
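As a quick illustration of the two search-string parameters ( the search terms below are just examples ), the same call can be narrowed down further; when both strings are supplied, only entries containing both are returned:

-- Return only entries from the current error log that contain
-- both 'Backup' and 'master' within the given time window.
EXEC master.dbo.Xp_readerrorlog
  0,
  1,
  N'Backup',
  N'master',
  '20130502 00:00',
  '20130503'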

Friday, March 1, 2013

When was the last successful backup?

What is the fastest and perhaps easiest way to find the last successful database backup?
Just right-click on the database in Management Studio and pick Properties.


 



At the top of the window, the completion times of the last full backup and the last log backup are shown.
This is perhaps the simplest way of checking the success of a daily backup without even typing a command.
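If you do prefer a query, a minimal sketch against msdb's backup history ( the standard backupset table; type 'D' is a full backup, 'L' a log backup ) would be:

-- Last full ('D') and log ('L') backup completion time per database.
SELECT database_name,
       type,
       MAX(backup_finish_date) AS last_backup_finish_date
FROM   msdb.dbo.backupset
WHERE  type IN ('D', 'L')
GROUP  BY database_name, type
ORDER  BY database_name, type;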

Sunday, May 20, 2012

SQL Server Memory Usage - Task Manager


Another question I have answered hundreds of times for non-SQL folks: "SQLServr.exe is consuming 80% of memory. Can you please help reduce the memory utilization to less than 50% and check what's the problem with SQL?"

If you are a DBA, I guess you would have faced the question many times. Let me explain the response to such a question in the post below.

              Usually, SQL Server utilizes as much RAM as is available on the server. This behavior is not a bad thing and has been specifically designed to be so. Initially, when SQLServr.exe starts ( i.e., when the SQL Server service is started ), SQL Server consumes just a few MB of RAM. As SQL Server starts getting utilized, with queries being fired and result sets returned, memory utilization starts increasing. When a query is executed, SQL Server retrieves data pages from disk, loads them into memory ( RAM ), performs the processing and returns the result set. As the data pages are loaded into memory, memory utilization increases. After the query execution completes, SQL Server doesn't release the memory by flushing the data pages it loaded. SQL Server retains them in memory so that the next time the same or a similar query is executed, SQL Server need not fetch the data pages from disk and can instead fetch them directly from RAM, which makes query execution much faster.

           So a production server in use for a few days or weeks ( depending upon its usage and the RAM available ) is expected to use most of the memory available on the server. On a 64-bit database server, SQL Server is free to use the entire memory available, while on a 32-bit database server, SQL Server can use only up to 50% of RAM if AWE is not enabled. Usually, the Windows operating system consumes at least 1 GB of RAM. So normally on a 64-bit server, one can allocate 75 to 80% of RAM to SQL Server, 1 GB to the operating system and the rest to other processes on the server. Even if there are no other processes on the server, it is better to have a few GB of RAM left so that there is no tussle for memory between SQL Server and Windows.

        One can allocate memory to SQL Server from Management Studio. Right-click on the registered server in Management Studio, open Properties and pick the Memory page. Set the amount of memory in MB you would like to allocate to the SQL Server process. Click OK and the maximum limit for memory utilization is set. The above operation doesn't require a service restart.




             Coming back to the question, the above details explain why SQLServr.exe in Task Manager shows high memory utilization and why it is absolutely normal to have that. If SQL Server's memory allocation is reduced to below 75%, it can hurt SQL Server's performance, as queries will more frequently need to fetch data from disk instead of memory. So it is good to have 75 to 80% of memory allocated to SQL Server, and normal to have 85 to 90% overall memory utilization on a database server.
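If you want to see the same numbers from inside SQL Server rather than from Task Manager, a hedged sketch ( SQL Server 2008 and later ) using sys.dm_os_process_memory is:

-- Physical memory currently used by the SQL Server process.
SELECT physical_memory_in_use_kb / 1024 AS physical_memory_in_use_mb,
       memory_utilization_percentage,
       process_physical_memory_low,
       process_virtual_memory_low
FROM   sys.dm_os_process_memory;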

Monday, April 23, 2012

Sysprocesses CPU and CPU Percentage - utilization

I am not sure how many of you have faced this situation; I have faced it many, many times. Somebody asks you to provide the currently running expensive queries or processes. You quickly look up sys.dm_exec_requests or sys.sysprocesses, export the result to an Excel sheet and send it to the requestor. Mostly you would have included columns like CPU, memory and IO to indicate the utilization. Immediately the question people shoot at you is, "Can you convert the CPU column to CPU %? How much CPU % did my query consume?"

Eh... I have always found it a bit hard to make people understand that it doesn't indicate CPU %. So what does it indicate?

The cpu_time column in sys.dm_exec_requests ( and the cpu column in sys.sysprocesses ) indicates the amount of time, in milliseconds, the process spent using the CPU. CPU time is similar to the "man hours" calculation used in the corporate world: if two men work for 1 hour each, then the man hours spent are 2 hours. Similarly, CPU time is the amount of time all the processors spent executing your query.

So does that show the query running time? No. It is "related" to the query running time. Why doesn't it reflect the exact running time? Because of the following factors.

* A query can be waiting for some other resource as well. If the wait doesn't involve any CPU processing, then cpu_time doesn't increase even though the query may be running for a long time. For example, if the query is waiting for a lock to be released, it doesn't consume any cpu_time. Other such examples are latch waits, network waits, etc.

* cpu_time is the time spent by all processors combined, just like "man hours". If the query uses parallel processing across multiple processors, the CPU time returned by sys.sysprocesses / sys.dm_exec_requests will be much higher than the actual running time of the query. For example, if the query runs on both processors p1 and p2, then cpu_time is equal to the sum of the time consumed on p1 and p2. So we can say cpu_time relates to the actual execution time but is not exactly the same.

So what can we infer from the cpu_time field? The queries with the highest cpu_time have made the processors work the hardest and are likely to have caused stress on the server.
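For completeness, a hedged sketch of the kind of query one might send to the requestor, listing currently running requests ordered by cpu_time ( standard DMVs, SQL Server 2005 and later ):

-- Currently running requests, most CPU-hungry first.
-- cpu_time and total_elapsed_time are in milliseconds.
SELECT r.session_id,
       r.status,
       r.cpu_time,
       r.total_elapsed_time,
       r.logical_reads,
       t.text AS query_text
FROM   sys.dm_exec_requests r
       CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
WHERE  r.session_id <> @@SPID
ORDER  BY r.cpu_time DESC;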

Sunday, April 8, 2012

Calling Function / UDF using linked server


If one needs to call a UDF or a system function ( e.g., serverproperty, getdate(), etc. ) on a remote server, then one cannot use the usual four-part linked server style query. That is, assume you want to find the server name, instance name or the current time on a remote SQL Server; querying [linked_server].master.dbo.getdate() or [linked_server].master.dbo.serverproperty('Servername') doesn't work.
      In such a case, the OPENQUERY function comes to our rescue. How does that help? Assume we want to get the server name, instance name, version and service pack level of a remote server. Then use the following query.

SELECT *
FROM   Openquery([10.13.221.55], 'Select Serverproperty(''Servername'') as srvname,
Serverproperty(''instancename'') as instance_name,
Serverproperty(''productLevel'') as Service_pack,
Serverproperty(''productversion'') as Patch_Version,
Serverproperty(''Edition'') as Remote_Server_Edition
'
) 




As you can see, the first parameter is the linked server name. The second parameter is the actual query we want to execute. Within the quotation marks we can run any query and obtain the results. OPENQUERY sends the query to the remote server, executes it on the remote server and presents the result for processing on the local server. Note that this allows one to perform joins / filters against local tables after the results have been returned by the remote server using the OPENQUERY function. Ex:


SELECT *
FROM   Openquery([10.13.221.55], 'Select Serverproperty(''Servername'') as srvname,
Serverproperty(''instancename'') as instance_name,
Serverproperty(''productLevel'') as Service_pack,
Serverproperty(''productversion'') as Patch_Version,
Serverproperty(''Edition'') as Remote_Server_Edition
'
)
WHERE  remote_server_edition = Serverproperty('Edition') 


The above query returns a result if and only if the local and remote servers' SQL editions are the same. In the queries above we have doubled the quotation marks around the serverproperty arguments, as we have to specify a quotation within a quotation.

Also, one can store the result in a temp / physical table or in a variable. Sample query for storing the result in a temp table ( assuming #temp has already been created with matching columns ):

INSERT INTO #temp
SELECT *
FROM   Openquery([10.13.221.55], 'Select Serverproperty(''Servername'') as srvname,
Serverproperty(''instancename'') as instance_name,
Serverproperty(''productLevel'') as Service_pack,
Serverproperty(''productversion'') as Patch_Version,
Serverproperty(''Edition'') as Remote_Server_Edition
'
) 


Sample query for storing the remote server's date in a variable

DECLARE @dt DATETIME

SELECT @dt = dt
FROM   Openquery([10.13.221.55], 'Select getdate() as dt ')

PRINT @dt 

So, OPENQUERY provides an easy way to access functions, especially system ones like getdate(), db_name(), serverproperty and DMFs, which can't be accessed directly using linked servers.
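As an illustration of the DMF case ( a hedged sketch; the linked server name and database are just the ones used in the examples above ), a parameterized DMF such as sys.dm_db_index_physical_stats can be wrapped the same way:

-- Run a DMF on the remote server and return the result locally.
SELECT *
FROM   Openquery([10.13.221.55],
       'SELECT database_id, index_id, avg_fragmentation_in_percent
        FROM   sys.dm_db_index_physical_stats(DB_ID(''master''), NULL, NULL, NULL, ''LIMITED'')')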

Tuesday, March 27, 2012

Inserting UniCode / Special characters in tables


Consider the situation where one needs to store multilingual data / special characters in a table, for example Chinese or Tamil characters. Most folks would be aware that one should use an nvarchar column instead of a varchar column, as an nvarchar column can store Unicode characters. This post explains the problem one faces while inserting special characters from a query. Consider the following script to insert some special-character data into the database.


CREATE TABLE #sample
  (
     id       INT,
     spl_char NVARCHAR(500)
  )
GO
INSERT INTO #sample
SELECT 1,
       '我的妻子塞尔伽'
GO
INSERT INTO #sample
SELECT 2,
       'மறத்தமிழன் ' 


The script executes successfully. Let us see the results. Refer to the picture below.



We are surprised to see that the special characters are not inserted correctly. We have set the column as nvarchar, but the special characters still appear corrupted. Why?

The reason is that when one is explicitly specifying a special character within quotes, one needs to prefix it with the letter N. For example, while specifying 'மறத்தமிழன்', one needs to specify it as N'மறத்தமிழன்'. When a string is enclosed in single quotes without the prefix, it is automatically treated as a non-Unicode data type, i.e., varchar/char. Specifying the letter N before the quotes informs SQL Server that the following string contains Unicode characters and should be treated as nvarchar.

Let us modify the script and try inserting the special / Unicode characters again.


CREATE TABLE #sample
  (
     id       INT,
     spl_char NVARCHAR(500)
  )
GO
INSERT INTO #sample
SELECT 1,
       N'我的妻子塞尔伽'
GO
INSERT INTO #sample
SELECT 2,
       N'மறத்தமிழன் '
GO
SELECT *
FROM   #sample; 


The result shows that the multilingual characters are now correctly displayed.



So one shouldn't forget to include the letter N while explicitly specifying nvarchar or special-character literals.