Thursday, October 27, 2016

Technical Presentations - What Happens Behind the Scenes

A while ago I wrote a piece on public speaking assignments over here and promised to write about what it takes to do the same. While I started writing it, I realized it could be made into an article for SQL Server Central. Link to the article provided below.

Do let me know your thoughts over here or at SQL Server Central. Thanks!


Monday, October 24, 2016

Query Store – Part 2 - Configuration Settings

An introduction to Query Store was covered here.
Now for the second part: how to configure it.

Query Store is configured at the database level. Right-click on the database -> Properties -> Query Store

* Operation Mode: To turn Query Store on, set "Operation Mode" to "Read Write". In "Read Write" mode, Query Store starts recording query plans.

* Query Store Retention: One can set a maximum size for the data collected by Query Store. The size consumed includes the recorded query plans and all the other statistics recorded by Query Store. The size is specified in MB.

* Stale Query Threshold:
Purges plans older than the specified number of days. This helps Query Store stay within the storage limit set by "Query Store Retention".

* Size Based Cleanup:
Can be set to "Auto" or "Off"; "Auto" is recommended. When set to "Auto", Query Store automatically triggers a cleanup process when its storage usage gets close to the limit set by "Query Store Retention". If set to "Off", once Query Store's usage reaches its limit, it stops recording data and slips into "Read Only" operation mode.

* Statistics Collection Interval: Indicates the frequency at which Query Store runtime statistics are collected. The statistics collected are runtime statistics of query plans, such as number of executions, cost per execution, row count, etc. A shorter collection interval makes the collected statistics more granular, but at the cost of additional storage. Shorter collection intervals don't have an adverse impact on performance; they only cost additional storage.

* Data Flush Interval: Indicates the frequency at which Query Store statistics and plans are flushed to disk.

* Query Store Capture Mode: "All" captures all the query plans generated; "Auto" captures only relevant queries based on execution count and resource consumption; "None" stops capturing new queries.
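The same settings can also be applied with T-SQL instead of the GUI. A sketch, assuming a database named MyDB and illustrative values for each option:

```sql
-- Enable Query Store on MyDB (name is a placeholder) and map each
-- GUI setting above to its ALTER DATABASE option
ALTER DATABASE [MyDB]
SET QUERY_STORE = ON
    (
      OPERATION_MODE              = READ_WRITE,  -- Operation Mode
      MAX_STORAGE_SIZE_MB         = 100,         -- Query Store Retention
      CLEANUP_POLICY              = (STALE_QUERY_THRESHOLD_DAYS = 30), -- Stale Query Threshold
      SIZE_BASED_CLEANUP_MODE     = AUTO,        -- Size Based Cleanup
      INTERVAL_LENGTH_MINUTES     = 60,          -- Statistics Collection Interval
      DATA_FLUSH_INTERVAL_SECONDS = 900,         -- Data Flush Interval
      QUERY_CAPTURE_MODE          = ALL          -- Query Store Capture Mode
    );
```

The values shown are only examples; pick sizes and intervals that suit your workload.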

We will cover how Query Store collects and stores the data in the next post.

Monday, October 17, 2016

Query Plan's Compile Time Parameter Values

Most of you would know how to get a query plan from the cache via the "sys.dm_exec_cached_plans" or "sys.dm_exec_query_stats" DMVs, or via "sys.dm_exec_requests" with the plan_handle. But one interesting thing which many may not have noticed is the ability to check the "compile time" parameter values of a query or stored procedure via the query plan.

What is a compile-time parameter value?
Let’s say we have the following stored procedure
       EXEC usp_get_income @start_income, @end_income;

@start_income and @end_income are the parameters. For the first call, the query optimizer generates the query plan using the parameter values passed. For subsequent calls, the query plan is reused, and it is based on the values passed in the first call. Until the plan moves out of the cache, or until it is recompiled, the query plan generated by the initial call is reused. The parameter values from which the query plan was generated are referred to as the compile-time values.
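For reference, a hypothetical definition of usp_get_income used in the examples that follow might look like this (the People table and its columns are assumptions for illustration only):

```sql
-- Hypothetical procedure; table and column names are assumed
CREATE PROCEDURE usp_get_income
    @start_income INT,
    @end_income   INT
AS
BEGIN
    SELECT name, income
    FROM People
    WHERE income BETWEEN @start_income AND @end_income;
END;
```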
Consider the following example

EXEC usp_get_income 1000, 2000;

Open up another query window and pull out the plan of the
call "EXEC usp_get_income 1000, 2000" using the following

select cp.objtype, cp.usecounts, cp.size_in_bytes, qp.query_plan
from sys.dm_exec_cached_plans cp
outer apply sys.dm_exec_sql_text(cp.plan_handle) st
outer apply sys.dm_exec_query_plan(cp.plan_handle) qp
where st.text like '%usp_get_income%'
and st.text not like '%dm_exec_cached_plans%';

A use count value of 1 indicates it is the first execution. Click on the value in the query plan column. Right-click the right-most operator and click Properties, as shown below. Expand the "Parameter List" section and observe the "Parameter Compiled Value" entries, as shown below. "Parameter Compiled Value" indicates the inputs from which the plan was prepared, and shows 1000 and 2000 as the values.

Re-run the query with different values:

EXEC usp_get_income 5000, 6000;

Check the query plan's reuse count via the "sys.dm_exec_cached_plans" script provided earlier, then check the compile-time values again. They still indicate the old values, 1000 and 2000, which were used to generate the plan at first compilation.

The compile-time parameter values of a poorly performing query can be extremely useful in troubleshooting query regressions / parameter sniffing issues. More about the same in subsequent posts.
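Instead of clicking through the graphical plan, the compile-time values can also be pulled straight out of the plan XML. A sketch, filtered to our procedure (the column aliases are illustrative):

```sql
-- Extract "Parameter Compiled Value" entries from the cached plan XML
WITH XMLNAMESPACES
  (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT cp.usecounts,
       p.n.value('@Column', 'sysname')                       AS parameter_name,
       p.n.value('@ParameterCompiledValue', 'nvarchar(max)') AS compiled_value
FROM sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) qp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
CROSS APPLY qp.query_plan.nodes('//ParameterList/ColumnReference') p(n)
WHERE st.text like '%usp_get_income%'
  AND st.text not like '%dm_exec_cached_plans%';
```

This returns one row per parameter, so there is no need to open the plan and drill into operator properties by hand.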

Wednesday, October 12, 2016

Check Database Server Workload - Batches/Sec

Today's post is about a simple but very useful query.

One of the basic things one needs to check is the amount of workload on the database server. The amount of workload, or how busy the server is, can be found via the "Batch Requests/sec" performance counter (I guess most of my well-informed readers know that :) ). "Batch Requests/sec" tells us the number of batches (batches of query requests) that SQL Server is processing per second.

One can check the same by the query below.

SELECT object_name, counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name like 'Batch Requests/sec%'

However, the above query returns a huge number, which makes you wonder whether the server is really processing that many queries a second. The huge number is due to the fact that "sys.dm_os_performance_counters" actually returns a cumulative value of "Batch Requests/sec" since the last restart. So, the correct way of finding the average "batches/sec", or the average workload on the server, would be:

Average Workload Query:

SELECT object_name, counter_name,
       cntr_value / DATEDIFF(ss, (SELECT create_date
                                  FROM sys.databases
                                  WHERE name = 'tempdb'), GETDATE())
       AS Avg_Batches_Per_Sec
FROM sys.dm_os_performance_counters
WHERE counter_name like 'Batch Requests/sec%'

The above query gives us the average workload since the last restart (tempdb is recreated at every restart, so its create_date marks the restart time). If you are interested in just the current workload, instead of the average since restart, then:

Current Workload Query:

DECLARE @BatchRequests BIGINT;

SELECT @BatchRequests = cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name like 'Batch Requests/sec%';

WAITFOR DELAY '00:00:01';

SELECT object_name, counter_name, cntr_value - @BatchRequests AS [Batches/Sec]
FROM sys.dm_os_performance_counters
WHERE counter_name like 'Batch Requests/sec%';

Though there are no universally good or bad numbers for the Batches/sec counter, a busy server would be expected to handle a few thousand batches per second, with extremely busy ones reaching five-digit workloads.

Sunday, October 9, 2016

SQL Server 2016 Query Store - Gentle Introduction - 1

Query Store is perhaps one of the most groundbreaking features released in SQL Server 2016. This post gives a quick introduction to it.

What is this query store?

Short answer: it records/stores every query plan generated, along with runtime statistics like execution count, row count, execution cost, and IO and memory usage, in the user database.

What are the key benefits?

1) One can easily track query runtime performance at any point in time
2) Track changes to a query plan, or changes in the performance of a particular query, over a period of time
3) Track regression - find which queries started performing badly
4) Query plans become part of the user database backup
5) Ability to force plans via a graphical user interface (unlike plan guides, which demanded some scripting)

What else do we get?

4 wonderful reports

1) Regression report -> To track queries that have performed badly over a period of time. One gets to compare historical performance against recent performance

2) Top consuming queries -> To find the poorest-performing queries by CPU / duration / IO over the last hour, etc.

3) Overall performance report -> Reports average response times, total queries executed, etc.

4) Tracked queries -> Useful to see the trend in performance of a particular query

How to configure?

Right-click on the database -> Properties -> Query Store -> set Operation Mode to "Read Write".
The rest can be left at defaults to get started, or you may use the script below

USE [master]
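Filling in the rest, a minimal enabling script might look like this (the database name is a placeholder):

```sql
-- Enable Query Store with default settings, then ensure read-write mode;
-- [MyDB] is a placeholder for your database name
ALTER DATABASE [MyDB] SET QUERY_STORE = ON;
ALTER DATABASE [MyDB] SET QUERY_STORE (OPERATION_MODE = READ_WRITE);
```

All other settings keep their defaults, which is enough to start collecting plans and statistics.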

That's a decent start with query store. Upcoming posts will cover various bells and whistles in it.

Thursday, October 6, 2016

Sp_recompile doesn't recompile!!!!

Many of us use the stored procedure "sp_recompile" to mark a stored procedure for recompilation, so that the next time it runs we get a new plan. But does it actually recompile? Let's check.

Consider the stored procedure "usp_people_income"

Step 1: Let's mark the stored proc for recompilation

sp_recompile 'usp_people_income'

Step 2: Start an extended event trace for the event "sql_statement_recompile" to see if our call to the stored procedure actually results in a recompilation

Step 3: EXEC usp_people_income 5000,5010

You would notice that the extended event trace didn't return any results, indicating that the stored procedure didn't recompile.

So, does "sp_recompile" really work?

Yes. It does.


Though it doesn't actually force a recompilation, it achieves the same effect by removing the plan from the cache. On a subsequent call, a new plan is inserted into the cache. You may check this in the following way.

Step 1: Start an extended event trace for the events "sp_cache_insert" and "sp_cache_remove"

Step 2: Mark the stored proc for recompilation again

EXEC sp_recompile 'usp_people_income'

Step 3: Run EXEC usp_people_income 5000,5010

You may notice the following events fired in order

1) sp_cache_remove event immediately after "sp_recompile" is executed
2) sp_cache_insert event just before the stored procedure is executed.

Refer to screenshot provided below.
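The Extended Events traces used in the steps above can be set up with a script along these lines (the session name is illustrative):

```sql
-- Illustrative session capturing recompiles and plan cache
-- insert/remove events; session name is an assumption
CREATE EVENT SESSION [track_recompiles] ON SERVER
ADD EVENT sqlserver.sql_statement_recompile,
ADD EVENT sqlserver.sp_cache_insert,
ADD EVENT sqlserver.sp_cache_remove
ADD TARGET package0.ring_buffer;
GO
ALTER EVENT SESSION [track_recompiles] ON SERVER STATE = START;
GO
```

Watch the live data for this session (or query the ring buffer) while running the steps to see the remove and insert events fire in order.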