Thursday, April 13, 2017

SQL Saturday Christchurch 2017

Dear All,

 Last week, I travelled to the beautiful city of Christchurch to speak at SQL Saturday South Island. This is by far the farthest I have travelled for a SQL Saturday, nearly 8,500 km from Singapore. I am so glad I made it, as it turned out to be an excellent event organized by my dear friend and brother Hamish Watson, Martin Catherall and the rest of the SQL Saturday South Island team.

My talk was on "Cardinality Estimator - A ride from SQL 7.0 to SQL Server 2016", going through the various changes in CE 120 and CE 130. Personally, I enjoyed delivering this one, as the audience was fairly well informed, which gave me the licence to go into a lot more detail on the subject. No one left the room after I started the talk, and it was a very interactive session, which gave me the feeling that the talk was well received too.

After my session, I managed to interview a couple of wonderful MVPs, Mr. Manohar Punna and Mr. Steve Knutson, for #DataChannel. Both interviews are on my editing table and will be out in a few weeks' time :)

The Oz/NZ SQL family is so warm, welcoming and caring that it is always a pleasure being part of their events. I always have a ball of a time in the company of Hamish, Martin Catherall, Manu, Warwick, Martin Cairney, Rob Douglas, Patrick Flynn, Reza and Leila (though they had to leave a little early this time around). Thanks for making me feel part of the family, as always.
One of the main reasons I decided to take part in this event was Mr. Hamish Watson, who invited me way back in August 2016. While he calls me his "SQL Brother", he took care of me like a real brother, taking me around the beautiful city of Christchurch, welcoming me into his home and showing me his farm as well. I got a chance to meet his wonderful family, his wife and four lovely kids, and I wish them all the happiness in this world. Hamish also cooked a delightful vegetarian dinner for the entire family, making the overall stay at his home memorable. I really don't think I could be as good a host as him, and I had lots of nice things to learn from him and his family.

Finally, I also did the tourist thing of sightseeing and shopping in Christchurch. After travelling around a few countries and a number of cities, I can certainly say Christchurch is one of the most beautiful cities I have ever seen. The mountains, the ocean, the beach and the countryside are so scenic. Sadly, multiple earthquakes in the last few years have caused the loss of several lives and properties, but nothing stops the New Zealanders from being happy and keeping the city neat, clean and beautiful.


 I managed to ride the Gondola up Mount Cavendish, about a thousand feet above sea level, from where one gets an aerial view of the city and harbour on one side and the calm, endless Pacific Ocean on the other. Needless to say, the views were simply stunning, and I was awestruck to see how nature has blessed this place with so much beauty. The Gondola is one of the two most stunning locations I have ever been to (the other being the Great Ocean Road / Twelve Apostles in Australia).


 Overall, a memorable trip both professionally and personally. Thanks again to Hamish Watson for giving me the opportunity to be part of this excellent event!!!

If interested, please check out more photos here.



Wednesday, April 12, 2017

Data Channel Interview - 05 - Edwin Sarmiento on Availability Groups & High Availability

Dear all,

On the 5th interview of #DataChannel, Mr. Edwin Sarmiento, MVP, MCM and a popular high availability expert, discusses SQL Server Always On Availability Groups and the high availability options in SQL Server. In this interview, Edwin gives a totally new dimension to common perceptions of high availability.

Sharing a few top lines from Edwin in the interview!!!

  1. Keeping it simple is the most difficult thing to do!!!
  2. Don't do something just because it's the latest. Always ensure it solves the business problem.
  3. Deprecated doesn't mean not supported.
Watch the full interview below and enjoy!!!

Monday, March 27, 2017

Data Channel - Interview 04 - Martin Catherall on In Memory OLTP

Welcome to the 4th Interview of Data Channel.
We have Mr. Martin Catherall, three-time Data Platform MVP from Australia / New Zealand, discussing "In Memory OLTP" tables. The interview was shot at his wonderful home in Melbourne. I would like to thank Martin for sharing his knowledge with the SQL community. Check out the interview below.


Sunday, January 22, 2017

MVP 2016!!!

Dear All,

Very happy to share the news that I have been awarded the Microsoft Most Valuable Professional award for my contributions to the SQL Server community in 2016!!! This is an honor I have dreamt of for long, and needless to say, the award motivates me to contribute even more to the SQL Server community.

In case you don't know what the MVP award is all about, you may read about it here and here.

No big achievement is possible without a team, and this one is no different. My family, the Singapore Microsoft Data Platform team, the MVPs who nominated me, the Microsoft DX team, my friends at the Singapore SQL Server User Group, colleagues at work and several experts from whom I have learnt the tricks of the trade - it is a long list of people who have helped me get here!!! I have just collected the award on behalf of everyone :)

I received the award a few days ago, shipped from Microsoft. Sharing pictures of the same below. Thanks to Microsoft again for giving me something I will cherish forever!!!!


Wednesday, January 18, 2017

Data Channel - Interview 03 - Hamish Watson on DevOps and Azure

Dear All,

After a short break, we are back with the 3rd interview on #DataChannel. Mr. Hamish Watson, community lead from Christchurch, New Zealand, discusses "DevOps Practices and Azure" with me. Had a great time as always talking technology with the ever awesome Hamish. The interview recording is provided below.

Interview transcript to be updated soon.


Saturday, December 31, 2016

Data Channel - Interview 02 - Hamid Fard on SQL Server Memory Capacity Planning

Dear All,

Welcome to the second interview of Data Channel. Very happy and proud to have Mr. Hamid Fard, MVP and MCM from Malaysia, on Data Channel. Hamid discusses SQL Server memory capacity planning. The discussion covers topics like SQL Server memory sizing, AWE, the plan cache, In-Memory OLTP capacity planning and Buffer Pool Extension.

Interview Video provided below:

Excerpt from the interview provided below.

What is capacity planning? What does one need to know?

While managing SQL Server, DBAs need to estimate and provision sufficient memory for the database instance. There have been many cases where a database server was set up and released to production, and upon analysis it was found that the memory available on the server was insufficient, resulting in poor performance. So, DBAs should plan and carefully estimate the right amount of memory required for SQL Server.

There is a common complaint that SQL Server uses all the memory in the server. How does SQL Server use the memory?

SQL Server is a memory-hungry application, and one shouldn't be scared when it uses a lot of memory!!! SQL Server loads data pages into memory for all its queries before the results are returned to the client. If we have a lot of memory, more data can be kept in memory, and that improves query performance.

Does SQL Server ever release the memory to Windows?

Yes. The lazy writer process scans the buffer pool when SQL Server or the Windows operating system is under memory pressure and releases unused pages from the buffer pool. The least recently used pages are released first. If the pages are dirty (not yet written to disk), they are written to disk before being removed from memory.
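As a quick way to see what currently occupies the buffer pool, one can query the sys.dm_os_buffer_descriptors DMV (not part of the interview; shown here as an illustration):

```sql
-- Approximate buffer pool usage per database, in 8 KB pages and MB
SELECT DB_NAME(database_id) AS database_name,
       COUNT(*)             AS buffered_pages,
       COUNT(*) * 8 / 1024  AS buffered_mb
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY buffered_pages DESC;
```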

Is there any guideline for setting the maximum memory?

One needs to come up with a formula based on the Windows version and edition and the SQL Server version and edition. A formula, which I will be explaining, will help in coming up with the maximum memory setting.

The maximum memory setting allocates memory only for the buffer pool. One needs to estimate the memory required for the SQL Server memory manager, the memory used by workers and threads, the columnstore cache and the plan cache, and subtract the same from the available physical memory to derive the maximum memory setting for SQL Server.
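Once such an estimate is derived, it is applied via sp_configure. A minimal sketch follows; the 12 GB value below is purely illustrative, not a recommendation:

```sql
-- Illustrative only: set 'max server memory' to an estimated 12 GB
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 12288;
RECONFIGURE;
```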

Can you please share the formula used? Has that been documented anywhere by you? Would be useful for many.

Yes, I have blogged about the same. Please refer to my blog post below, where I have written about it in detail.

Is there a recommended method to set the maximum memory?

Please refer to the blog post written by me, provided below.

The blog post clearly explains how to size the memory for the following items:

1) Reserve memory for windows
2) SQL Server DLLs
3) MTL - SQL Server Memory To Leave area
4) Plan Cache
5) Buffer Pool 

Is there a need to size differently for OLTP and OLAP systems? Why?

OLAP systems involve more "select" queries, and hence one needs a bigger buffer pool to cache more data pages; otherwise, OLAP systems will incur lots of disk I/O.
OLTP systems normally comprise insert statements, random I/Os and smaller transactions, and don't have huge select statements. As the types of workloads are different, a slightly different formula applies to each kind of system. Some OLTP systems also run huge reports with massive select statements, but the formula can serve as a general guideline.

Buffer Pool Extension (BPE) was introduced in SQL Server 2014. How do you think it helps in memory capacity planning?

Buffer Pool Extension is good for companies running SQL Server 2014 on old hardware, where one is unable to upgrade the memory. Buffer Pool Extension is normally used with SSD disks, but SSD response times are in microseconds, not nanoseconds as with regular memory. So, BPE doesn't give a drastic performance gain.

My suggestion is to use it as a last resort; ideally one should look to upgrade the RAM. Even if one decides to use BPE, the suggestion is to use it with faster disks like PCIe RAM disks.
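For reference, enabling BPE is a single server configuration change; the file path and size below are hypothetical:

```sql
-- Hypothetical path and size; the BPE file should sit on fast SSD storage
ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION ON
    (FILENAME = 'F:\BPE\bpe_file.bpe', SIZE = 64 GB);
```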

Another major technology released in SQL Server 2014 is "In-Memory OLTP". How does one size memory if the database uses In-Memory OLTP tables?

"In-Memory OLTP" is one of the good features released in 2014 and improved further in SQL Server 2016. However, one needs to size it carefully to achieve the performance gain.

Have there been scenarios where In-Memory OLTP actually resulted in negative performance?

Yes. If one doesn't size the memory carefully, it can even result in negative performance. Refer to the formula provided below.

One needs to calculate four different areas:

1) Table & row data - Table data size is the sum of the timestamp size, the in-memory index pointer size and the actual data size (row size):

   Begin & end timestamps - 24 bytes

   In-memory index pointers - 8 + 8 + 8 + 8 = 32 bytes (8 bytes per index, and there are 4 indexes in this example)

   Data size - 200 bytes in this example

So the per-row table data for this example is 24 + 32 + 200 = 256 bytes.

2) Hash index - The size of a hash index depends on the bucket count:

(bucket count rounded up to the nearest power of 2) * (index pointer size) * (number of hash indexes)

In this case, for a requested bucket count of 5,000,000 the nearest power of 2 is 2^23, so the size is 2^23 * 8 * 3 (three hash indexes).

3) Memory for non-clustered indexes - Derived using the indexed (key) column size and the number of unique values in the column:

(index pointer size + sum(key column data sizes)) * unique rows

4) Memory for row versioning - In-Memory OLTP uses row versioning, which means that for every operation the older rows are copied and the changes are applied to the latest version. So the formula is:

longest transaction duration in seconds * peak rows updated/deleted per second * row size

So, for the above example, a table that has at most 30% of its rows changing at a time would require about 3.6 GB of memory, even though the table data itself is only about 1.2 GB.
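The arithmetic above can be sketched in T-SQL; the row count of 5,000,000 and the byte sizes are the example's assumed values:

```sql
-- Sizing sketch: 5,000,000 rows, 200-byte payload,
-- 3 hash indexes and 1 non-clustered index (values assumed from this example)
DECLARE @rows      bigint = 5000000;
DECLARE @row_bytes bigint = 24 + (8 * 4) + 200;  -- timestamps + 4 index pointers + data = 256
DECLARE @hash_bytes bigint =
        POWER(CAST(2 AS bigint), 23) * 8 * 3;    -- buckets * pointer size * hash indexes

SELECT @rows * @row_bytes / 1048576.0 AS table_data_mb,  -- roughly 1.2 GB
       @hash_bytes / 1048576.0        AS hash_index_mb;  -- 192 MB
```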

Ideally, In-Memory OLTP tables are best suited for insert-mostly tables with fewer changes (deletes/updates).

The SQL Server 2016 SP1 announcement saw quite a few changes to SQL Server feature availability across editions. How do you see the changes related to memory?

I was pleasantly shocked to see the changes. The big change was making the columnstore and In-Memory OLTP features available on Standard and lower editions. The memory used by columnstore is capped at 32 GB and sits outside the buffer pool. Hence one needs to calculate the memory for columnstore and then subtract the same from physical memory to come up with the maximum memory setting for the buffer pool.

In-Memory OLTP tables have a maximum of 32 GB of memory per database, which is also outside the buffer pool. One needs to take that into account as well while sizing the memory.

What are the warning signs if one fails to configure memory correctly or undersizes it?

Internal memory pressure is experienced if insufficient memory is allocated. It can be observed via:

1) Stolen Pages
2) Plan cache issues
3) Not enough buffer pool memory to keep all the data
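A minimal sketch of checking such signs via the performance counter DMV (the counter names are the standard ones exposed by SQL Server):

```sql
-- Memory pressure indicators: stolen server memory and page life expectancy
SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN (N'Stolen Server Memory (KB)',
                       N'Page life expectancy');
```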

Another thing one may want to check is unnecessary or unused non-clustered indexes. As the indexes are updated, they are loaded into memory as well. So, unused indexes not only add to disk usage but also waste buffer pool space.
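One way to spot such indexes is the sys.dm_db_index_usage_stats DMV; an illustrative sketch (note that these statistics reset on instance restart, so judge over a representative period):

```sql
-- Non-clustered indexes in the current database that are updated but never read
SELECT OBJECT_NAME(s.object_id) AS table_name,
       i.name                   AS index_name,
       s.user_updates
FROM sys.dm_db_index_usage_stats s
JOIN sys.indexes i
  ON i.object_id = s.object_id AND i.index_id = s.index_id
WHERE s.database_id = DB_ID()
  AND i.type_desc = 'NONCLUSTERED'
  AND s.user_seeks + s.user_scans + s.user_lookups = 0;
```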

Edit from Nagaraj: Some of the content is debatable, as indicated in the comments. Viewer discretion is advised.

Monday, December 26, 2016

Query Store - Part 5: Checking forced plans and Query Store failures

Most of you would know that using Query Store one can force query plans. Is there a need to check which plans are forced? Why?

Fundamentally, there are two important reasons for checking forced plans:

1) To check if the query is yielding the desired performance. In most cases, the performance needs to be clearly better than the unforced query performance.

2) To look for Query Store failures while forcing plans.

This post will deal with point 2, Query Store failures while forcing plans.

What are query store failures?

Query Store failures are events where Query Store is unable to force the plan it is supposed to.

For example, let's say you have set Query Store to force a plan that uses the index "NCIX_dt_present" for a particular query, and assume you have then dropped the index. Query Store does the following:

1) Ignores the plan to be forced and picks the next best query plan of the optimizer's choice

2) Records the failure to force the plan in the "force_failure_count" column of "sys.query_store_plan"

For example,

Consider the following query

Select session_id,cpu_time,dt,logical_reads 
from [dbo].[Process_monitor]
WHERE dt between '20150715' and '20150715 00:15'

Refer to the screenshot from query store

Plan 99 uses "NCIX_dt_present" (a non-clustered index) for the query.

To ensure that the query always uses the index "NCIX_dt_present", let's force plan 99 on query 92 as shown below.
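The same forcing can be done in T-SQL rather than through the UI; the IDs 92 and 99 below are the ones from this example's Query Store:

```sql
-- Force plan 99 for query 92 (IDs taken from this example's Query Store)
EXEC sp_query_store_force_plan @query_id = 92, @plan_id = 99;
```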

Run the select query a few times and verify that the plan is being forced. Now let's drop the index:

DROP INDEX NCIX_dt_present ON dbo.Process_monitor

Re-run the select query a few times:

Select session_id,cpu_time,dt,logical_reads 
from [dbo].[Process_monitor]
WHERE dt between '20150715' and '20150715 00:15'

The list of Query Store failures can be found with the query below:

SELECT q.query_id,
       qp.plan_id,
       qt.query_sql_text,
       CAST(qp.query_plan AS XML) AS [Execution Plan],
       qp.force_failure_count,
       qp.last_force_failure_reason_desc,
       rs.first_execution_time,
       rs.last_execution_time
FROM sys.query_store_plan qp
INNER JOIN sys.query_store_query q
ON qp.query_id = q.query_id
INNER JOIN sys.query_store_query_text qt
ON q.query_text_id = qt.query_text_id
INNER JOIN sys.query_store_runtime_stats rs
ON qp.plan_id = rs.plan_id
WHERE qp.is_forced_plan = 1
AND qp.force_failure_count > 0
ORDER BY rs.last_execution_time

qp.force_failure_count indicates the number of times Query Store failed to force the plan. qp.last_force_failure_reason_desc gives the reason why the plan couldn't be forced.

It is strongly recommended to track Query Store failures and forced plans to avoid unexpected performance results.