Dear All,
Welcome to the second interview of Data Channel. We are very happy and proud to have Mr. Hamid Fard, MVP and MCM from Malaysia, on Data Channel. Hamid discusses SQL Server Memory Capacity Planning. The discussion covers topics such as SQL Server memory sizing, AWE, the plan cache, In-Memory OLTP capacity planning and Buffer Pool Extension.
The interview video is provided below:
An excerpt from the interview is provided below.
What is Capacity Planning? What does one need to know?
While managing SQL Server, DBAs need to estimate and come up with the memory required for a database instance. There have been many cases where a database server was set up and released to production, and upon analysis it was found that the memory available on the server was insufficient, resulting in poor performance. So DBAs should plan and carefully estimate the right amount of memory required for SQL Server.
There is a common complaint that SQL Server uses all the memory on the server. How does SQL Server use memory?
SQL Server is memory hungry, and one shouldn't be scared if it uses a lot of memory! SQL Server loads the data needed by its queries into memory and then returns the results to the client. If we have a lot of memory, more data can be kept in memory, and that improves the performance of queries.
Does SQL Server ever release the memory to Windows?
Yes. The lazy writer process scans the buffer pool when SQL Server or the Windows operating system is under memory pressure and releases unused pages from the buffer pool. The least recently used pages are released by the lazy writer. If a page is dirty (not yet written to disk), it is written to disk before being removed from memory.
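As a rough, hedged way to see where the instance's memory currently sits (buffer pool versus other consumers), the memory clerk DMV can be queried; the clerk names and sizes will vary by SQL Server version and workload.

-- Approximate breakdown of SQL Server memory by clerk; the buffer pool shows up as MEMORYCLERK_SQLBUFFERPOOL (SQL Server 2012 and later)
SELECT TOP (10)
       [type]               AS memory_clerk,
       SUM(pages_kb) / 1024 AS size_mb
FROM sys.dm_os_memory_clerks
GROUP BY [type]
ORDER BY size_mb DESC;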
Is there any guideline for setting the maximum memory?
One needs to come up with a formula based on the Windows Server version and edition and the SQL Server version and edition. The formula, which I will explain, helps in arriving at the maximum memory setting.
The maximum memory setting allocates memory only for the buffer pool. One needs to estimate the memory required for SQL MGR, the memory used by workers and threads, the columnstore cache and the plan cache, and subtract these from the available physical memory to derive the maximum memory setting for SQL Server.
Can you please share the formula used? Has it been documented anywhere by you? It would be useful for many.
Yes, I have blogged about it. Please refer to my blog post below, where I have written about it in detail: http://fard-solutions.com/sql-server-memory-capacity-planning/
Is there a recommended method to set the maximum memory?
Please refer to the blog post provided below: http://fard-solutions.com/sql-server-memory-capacity-planning/
The blog post explains how to size the memory for the following items:
1) Reserved memory for Windows
2) SQL Server DLLs
3) MTL - SQL Server Memory To Leave area
4) Plan cache
5) Buffer pool
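As an illustrative sketch only (the figures below are assumptions, not the formula from the blog post), on a hypothetical 64 GB server one might reserve a few GB for Windows, the DLLs, the MTL area and the plan cache, and set the remainder as the instance maximum:

-- Illustrative only: hypothetical 64 GB server where ~8 GB is assumed for Windows, SQL Server DLLs, MTL and the plan cache
EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure 'max server memory (MB)', 57344;  -- remaining 56 GB left for SQL Server
RECONFIGURE;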
Is there a need to size differently for OLTP and OLAP systems? Why?
OLAP systems involve more SELECT queries, and hence one needs a bigger buffer pool to cache more data pages. Otherwise, OLAP systems will generate a lot of disk IO.
OLTP systems normally comprise insert statements, random I/O and smaller transactions, and don't have huge select statements. As the types of workloads are different, a slightly different formula applies to each kind of system. Some OLTP systems also run huge reports with massive select statements, but the formula can serve as a general guideline.
Buffer Pool Extension (BPE) was introduced in SQL Server 2014. How do you think it helps in memory capacity planning?
Buffer Pool Extension is good for companies running SQL Server 2014 on old hardware, where one is unable to upgrade the memory. Buffer Pool Extension is normally used with SSD disks, but SSD response times are in microseconds, not nanoseconds as with regular memory. So BPE doesn't give a drastic performance gain.
My suggestion is to use it as a last resort; ideally, one should look to upgrade the RAM. Even if one decides to use BPE, the suggestion is to use it with faster devices such as PCIe RAM disks.
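For reference, enabling BPE is a single server-level configuration statement; the file path and size below are placeholders, not recommendations:

-- Enable Buffer Pool Extension on a fast SSD volume (file path and size are placeholders)
ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION ON (FILENAME = N'S:\BPE\SqlBpe.bpe', SIZE = 64 GB);
-- Disable it again if it does not help:
-- ALTER SERVER CONFIGURATION SET BUFFER POOL EXTENSION OFF;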
Another major technology released in SQL Server 2014 is In-Memory OLTP. How does one size memory if the database uses In-Memory OLTP tables?
"In Memory OLTP" is one of the good features to have released in 2014 and improved further in SQL Server 2016. However, one needs to carefully size them to achieve the performance gain.
Have there been scenarios where In-Memory OLTP actually resulted in negative performance?
Yes. If one doesn't size the memory carefully, it can even result in negative performance. Refer to the formula provided below.
One needs to calculate four different areas (a worked calculation is sketched after the list):
1) Table & Row Data - Table data size is the sum of timestamp size, In memory Index Pointers size and actual data size ( row size )
Begin & End Timestamp - 24 bytes
In memory index pointers - 8 + 8 + 8 + 8 ( because 8 bytes per index and there are 4 indexes )
Data Size - 200 bytes in this example
So the per row table data for this example is 256 bytes.
2) Hash Index - The size of a hash index depends on the bucket count. It is calculated as follows:
(bucket count rounded up to the nearest power of 2) * (index pointer size) * (number of hash indexes)
In this case, 5,000,000 rounded up to the nearest power of 2 is 2^23. So the hash index memory is 2^23 * 8 * 3 bytes.
3) Memory for Non-Clustered Index - Derived from the indexed (key) column size and the number of unique values in the column:
(index pointer size + sum(key column data sizes)) * number of unique rows
4) Memory for Row Versioning - In-Memory OLTP uses row versioning, which means every update creates a new version of the row while the older versions remain in memory until garbage collection. So the formula is:
longest transaction duration in seconds * peak rows updated or deleted per second * row size
So, for the above example, a table that has at most 30% of its rows changing at a time would require about 3.6 GB of memory even though the table size is 1.2 GB.
Ideally, In-Memory OLTP tables are best suited for insert-mostly workloads with fewer changes (deletes/updates).
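The following is a minimal sketch of the first two parts of that calculation for the 5,000,000-row example above; the row layout and index counts are taken from the example, and the script is illustrative rather than the interviewee's exact method.

-- Rough sizing for the example above: 5,000,000 rows, 200-byte payload, 4 indexes of which 3 are hash indexes
DECLARE @rows         BIGINT = 5000000;
DECLARE @row_bytes    INT    = 24 + (4 * 8) + 200;  -- timestamps + four index pointers + data = 256 bytes
DECLARE @hash_buckets BIGINT = 8388608;             -- 5,000,000 rounded up to the nearest power of 2 (2^23)
DECLARE @hash_indexes INT    = 3;

SELECT CAST(@rows * @row_bytes / 1024.0 / 1024 / 1024 AS DECIMAL(10, 2))         AS table_data_gb,  -- ~1.19 GB
       CAST(@hash_buckets * 8 * @hash_indexes / 1024.0 / 1024 AS DECIMAL(10, 2)) AS hash_index_mb;  -- ~192 MB
-- Non-clustered index and row-versioning memory must still be added per the formulas above.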
The SQL Server 2016 SP1 announcement saw quite a few changes to SQL Server feature availability across editions. How do you see the changes related to memory?
I was pleasantly shocked to see the changes. The big change was making columnstore and the In-Memory OLTP feature available on Standard and lower editions. The memory used by columnstore is capped at 32 GB and is outside the buffer pool. Hence, one needs to calculate the memory for columnstore and then subtract it from physical memory to come up with the maximum memory available for the buffer pool. In-Memory OLTP tables have a maximum of 32 GB of memory per database, which is also outside the buffer pool. One needs to take that into account as well while sizing the memory.
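One hedged way to keep an eye on how close a database is to such a cap is the In-Memory OLTP memory DMV, available in the editions and versions that support the feature:

-- Memory consumed by memory-optimized tables and their indexes in the current database
SELECT OBJECT_NAME(object_id)                                         AS table_name,
       (memory_used_by_table_kb + memory_used_by_indexes_kb) / 1024.0 AS used_mb
FROM sys.dm_db_xtp_table_memory_stats
ORDER BY used_mb DESC;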
What are the warning signs if one fails to configure memory correctly or downsizes it?
Internal memory pressure is experienced if insufficient memory is configured. It can be observed via:
1) Stolen pages
2) Plan cache issues
3) Not enough buffer pool memory to keep all the data
Another thing one may want to check is unnecessary or unused non-clustered indexes. As the indexes are updated, they are loaded into memory as well. So unused indexes not only add to disk usage but also waste buffer pool space.
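As a rough starting point for spotting those symptoms, the memory-related performance counters can be read straight from a DMV; thresholds are workload dependent, so treat the output as a trend rather than a verdict.

-- Common indicators of buffer pool pressure: stolen memory, page life expectancy and pending memory grants
SELECT RTRIM(counter_name) AS counter_name,
       cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('Stolen Server Memory (KB)', 'Page life expectancy', 'Memory Grants Pending');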
Edit from Nagaraj: Some of the content is debatable, as indicated in the comments. Viewer discretion is advised.
Comments:
Half of this stuff is either incomplete or plain wrong (e.g. the AWE behaviour). Please don't make things up and claim them as fact. It really hurts the community, especially newer DBAs and developers.
Thanks for the comment. The excerpt is a short overview of the interview, while the video is the actual content. We try our best to present a complete view of the content within a brief period of time. Will try to do better in the next interview.
If you find a statement made by the interviewee outright incorrect, please state exactly where the objection is.
Will pass the feedback to the Interviewee.
There are so many that I haven't the time to write up a detailed response, so I'll just call out the high-level points here.
SQL Server is not memory hungry. It has a tiny memory footprint to run the binaries and handle user connections. Workloads may or may not be memory hungry; it's a big difference. Oracle and DB2 used to be very memory hungry. In the distant past, an idle Oracle server required about 4x more memory than SQL Server did at the time, and each user connection took about 30x more memory. THAT is memory hungry. Both are MUCH better now.
Having lots of memory is likely to help but is no guarantee of better query performance. A broad, sweeping statement we often hear from database consultants is that adding more memory improves performance. I've worked on a number of systems in the past where consultants added more memory and tanked query performance - this is a very real problem with 32-bit systems. On modern 64-bit systems, I've also seen 0 to <10% query perf improvement after RAM was doubled.
The lazy writer is not how SQL Server determines whether there is memory pressure on the host and releases memory. This is too long to elaborate on here. Google it.
The max memory setting covers more than the buffer pool; this has been the case for at least 2 releases, if not more.
SQL Server does not allocate min memory at startup.
The AWE and PAE description is a complete mix of random bits incorrectly assembled. Page size changes are affected by PAE, not /3GB, plus they have no impact on SQL Server as it's abstracted by the OS. SQL Server does not have direct memory access; it always goes via VirtualAlloc, which calls the appropriate underlying Windows memory allocator API, including AWE. The AWE API is still used in 64-bit systems and has nothing to do with the AWE user setting. The /3GB startup flag tells Windows to reduce the kernel memory reservation to 1GB instead of the default 2GB and absolutely does not cause conflict. It can result in undesirable behaviour, but it is by design, and those who don't know what it does suffer the consequences. There is so much more detail I won't elaborate on here; google it.
Enabling AWE on 32-bit versions of SQL Server may or may not help. It is not a panacea for performance problems. It takes more memory to manage more memory - no, that's not a typo. You need more virtual memory to manage the added (extended) memory. If your memory pressure is anywhere except the data cache, taking up valuable, fixed-size VAS will make your perf problems worse.
The OLAP and OLTP descriptions make no sense. Any time you need to touch data, IO and memory are involved; it doesn't matter what you do with it. The guidance makes no sense. OLTP systems do read extensively - update/delete operations first need to at least read the index, then possibly the row(s), before they can update/delete.
The linked memory sizing blog post is both factually wrong (e.g. the MTL section) and logically broken (add up all the percentages and they total more than 100).
These subjects are very well documented in SQL Server BOL/MSDN/TechNet. You can also search the Paul Randal and Kimberly Tripp blogs; they usually provide test scripts you can use to prove things yourself. Slava Oks has some really old but very detailed blogs on how SQL Server uses and manages memory and on low-level memory structures, if you want to go really deep.
I love what you're doing for the community. Just find better candidates for your interviews so your readers don't learn the wrong things.
Thanks for the detailed comments on the interview. Will review the content again. Noted on improving the quality of the content delivered. Thanks.