Tuesday, April 12, 2016

Cardinality Estimator - Part 2 - Out of Date Statistics

Continuing our series on the SQL Server 2014 Cardinality Estimator, let us look at the first difference between the two CE models: the way estimates are made when statistics are out of date (stale). A couple of prerequisites for this demo:

  1. Download the SQL Server 2012 AdventureWorks database from here
  2. Restore the downloaded database onto a 2014 instance and switch the compatibility level to 120 (SQL 2014), as shown in the sketch below
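
A minimal sketch of prerequisite 2, assuming the restored database is named AdventureWorks2012:

-- Switch the database to the SQL Server 2014 compatibility level
-- so queries use the new cardinality estimator by default
ALTER DATABASE [AdventureWorks2012] SET COMPATIBILITY_LEVEL = 120;
GO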

Let us check the row count of the table [Production].[TransactionHistory]

EXEC sp_spaceused '[Production].[TransactionHistory]'
GO


The table contains 113,443 rows. Let's insert 10,000 rows (a little less than 10% of the table) so that the statistics are not automatically updated and remain out of date. Note that roughly 20% of the table (500 rows plus 20%, to be precise) needs to change for statistics to be updated automatically.

The plan is to insert rows, run queries against both the Cardinality Estimator models and compare the estimates.



Step 1: Insert 10,000 rows.

Insert into [Production].[TransactionHistory]([ProductID]
      ,[ReferenceOrderID]
      ,[ReferenceOrderLineID]
      ,[TransactionDate]
      ,[TransactionType]
      ,[Quantity]
      ,[ActualCost]
      ,[ModifiedDate])
SELECT Top 10000
       [ProductID]
      ,[ReferenceOrderID]
      ,[ReferenceOrderLineID]
      ,[TransactionDate]
      ,[TransactionType]
      ,[Quantity]
      ,[ActualCost]
      ,[ModifiedDate]
  FROM [Production].[TransactionHistory]
  WHERE [ModifiedDate] BETWEEN '20070101' and '20080101'

 
Step 2: Run the query against both models


  SELECT *
  FROM [Production].[TransactionHistory]
  WHERE ProductID = 714
  OPTION (QUERYTRACEON 9481)



  SELECT *
  FROM [Production].[TransactionHistory]
  WHERE ProductID = 714



Note that the hint "OPTION (QUERYTRACEON 9481)" forces the query to use the old CE (model version 70) in SQL Server 2014. Observe the estimates for both.




CE 70:





CE 120:




They are the same. Let us observe the histogram below.
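
The histogram can be viewed with DBCC SHOW_STATISTICS; a minimal sketch, assuming the statistic on ProductID is the one backing the index IX_TransactionHistory_ProductID:

-- Show only the histogram steps of the ProductID statistic
DBCC SHOW_STATISTICS ('Production.TransactionHistory', 'IX_TransactionHistory_ProductID') WITH HISTOGRAM;
GO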




Notice that the histogram contains the value 714 as a RANGE_HI_KEY and estimates 645 rows for it. The estimate on the query plan is slightly higher. As SQL Server knows that approximately 10% of the rows have changed, it makes an adjustment (645 + ~10% of 645) in both Cardinality Estimator models.
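
How does SQL Server know that roughly 10% of the rows have changed? It tracks the modifications made against each statistic. A minimal sketch of checking this via sys.dm_db_stats_properties:

-- Rows in the table vs. rows modified since each statistic was last updated
SELECT s.name AS stats_name,
       sp.rows,
       sp.modification_counter
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE s.object_id = OBJECT_ID('Production.TransactionHistory');
GO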


Step 3:


Now let us observe the estimates for another value.


  SELECT *
  FROM [Production].[TransactionHistory]
  WHERE ProductID = 746
  OPTION (QUERYTRACEON 9481)



  SELECT *
  FROM [Production].[TransactionHistory]
  WHERE ProductID = 746


Let us compare the estimates now


CE 70:



CE 120:




Now we see a difference in the estimates. Let us check the histogram for the value.




The value doesn't exist as a RANGE_HI_KEY, so AVG_RANGE_ROWS is used for the estimate. CE 70 uses exactly the value estimated from the histogram, but CE 120 (the new CE) adjusts the histogram estimate even when that estimate is an approximate guess via AVG_RANGE_ROWS. So CE 70 estimates 206 rows, as shown above, while CE 120 estimates slightly higher, at 224.15.


So, the takeaway from this post: Cardinality Estimator 120 adjusts the estimate obtained from the histogram when statistics are out of date, even when the value has to be guessed via AVG_RANGE_ROWS.




Sunday, April 3, 2016

SQL Server 2014 - Cardinality Estimator - Part 1

It has been a while since I blogged. I moved to a few other mediums, like speaking at user group and SQL Saturday events, over the last few months, and that kept me away from blogging for a while. Now back to blogging again. Hope to do a better balancing act :)

Starting from where I left off, which was Statistics and Histograms, I now move on to the Cardinality Estimator, one of the major changes in SQL Server 2014.

What is Cardinality Estimator?

The Cardinality Estimator is responsible for estimating the number of rows returned by each operator. If you have observed a query plan before, the estimated row count is provided by the cardinality estimator. What happens behind the scenes is:

  1. The query optimizer asks the cardinality estimator (CE) to find the estimated rows for each operation (reading a table, a filter, a join, etc.)
  2. The cardinality estimator checks the statistics and histogram to estimate the number of rows returned
  3. The query optimizer, based on the inputs given by the CE, picks the right operators and subsequently the right plan

Obviously, the CE plays a key role in getting the right plan. Incorrect estimates lead to poor plans and spiral out into poor performance.

The CE had remained unchanged since SQL Server 7.0; SQL Server 2014 brought changes to the way estimates are made and to the algorithms used in making them.

Run the following query with the actual execution plan enabled (Query -> Include Actual Execution Plan) on AdventureWorks2012.

SELECT *
FROM [Production].[TransactionHistory]
WHERE ProductID < 100

Right-click on the select operator and click on Properties as shown below.


Observe the property "CardinalityEstimationModelVersion". A value of 70 indicates the query is using the old CE.




The query uses the old CE because the compatibility level of the AdventureWorks2012 database is 110 (SQL 2012). Switch the compatibility level to 2014 (120) or higher, rerun the query, and you will notice the "CardinalityEstimationModelVersion" is 120, which indicates it uses the new cardinality estimator.
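
If you prefer not to change the compatibility level, the new CE can also be requested for a single query with trace flag 2312 (the counterpart of 9481, which forces the old CE); a minimal sketch:

-- Force the new (2014) cardinality estimator for this query only
SELECT *
FROM [Production].[TransactionHistory]
WHERE ProductID < 100
OPTION (QUERYTRACEON 2312)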

In a series of posts (expect around 10), I will be covering the various changes in the new cardinality estimator.

Wednesday, January 13, 2016

SQL Server Statistics - Histograms

Most of us are aware of statistics in SQL Server and their purpose. But still, just to get started: statistics store information on the distribution of data in a column (or columns), which helps the query optimizer pick the right query plans. Effectively, they help answer questions like "how many rows have the value 25 in the column Age?" or "how many have a value greater than 15?".

This post will go a little in depth on how statistics are actually stored in the database. Statistics are stored as "histograms" (https://en.wikipedia.org/wiki/Histogram). One may take a look at the histogram used by a statistic by expanding a commonly queried table in your database, expanding Statistics -> Properties -> clicking on the "Details" tab.
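
The same information is also available through T-SQL via DBCC SHOW_STATISTICS, which returns the statistics header, the density vector and the histogram; a minimal sketch, where the table and statistic names are placeholders for your own:

-- Header, density vector and histogram steps of a statistic
DBCC SHOW_STATISTICS ('Production.TransactionHistory', 'IX_TransactionHistory_ProductID');
GO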

 
The interesting part of the image above is the histogram steps. One would notice that the histogram contains entries for selected values in the column. Basically, what the histogram steps intend to convey is the number of rows that exist for a particular value or range of values. For example, for a column containing marks, it attempts to say:
 
0 - 10   - there are 15 rows
10 - 30  - there are 45 rows
50 - 80  - there are 60 rows
80 - 100 - there are 20 rows, and so on...
 
Observe the RANGE_HI_KEY column. It indicates the highest value in the range. Each row represents the range starting from the end of the previous row (the previous row's RANGE_HI_KEY value) up to the value in its own RANGE_HI_KEY.
 
  1. RANGE_ROWS indicates the total number of rows in the range (previous RANGE_HI_KEY to current RANGE_HI_KEY, excluding the keys themselves)
  2. DISTINCT_RANGE_ROWS indicates how many of them are distinct values
  3. AVG_RANGE_ROWS indicates the average number of rows per distinct value (RANGE_ROWS / DISTINCT_RANGE_ROWS; for example, 45 range rows over 9 distinct values gives an AVG_RANGE_ROWS of 5)
  4. EQ_ROWS indicates the number of rows equal to the RANGE_HI_KEY value alone
 
The most important part: the number of steps in the histogram is capped at 200, meaning the histogram will have a maximum of 200 entries/rows irrespective of the number of rows in the table. Even if there are 1 million rows, at most 200 entries will describe the column's data distribution.
So, why this post on statistics? :) Armed with this knowledge, let us dive into deeper concepts like the cardinality estimator and how it has changed in SQL 2014 in upcoming posts.
 


Wednesday, November 18, 2015

In-Memory OLTP Table Checkpoint Processes Performance Comparison

Checkpoint processes and other internal processes have undergone major changes for In-Memory OLTP tables and transactions. The following article of mine compares the performance of checkpoint processes on disk-based (traditional) tables against in-memory tables. I understand that I have not written about the checkpoint processes of in-memory tables here; I will be doing so soon. Anyway, sharing the article below. Read and comment if interested.

http://www.sqlservercentral.com/articles/SQL+2014/131964/

Friday, November 6, 2015

Hekaton Part 11: Finding row count of an in-memory table

Finding the row count of an in-memory (Hekaton) table can be a tricky task, as the regular commands and DMVs don't give the desired result.

  • sp_spaceused - doesn't work and returns a blank result
  • sys.dm_db_partition_stats - doesn't track in-memory tables

So the option we are left with is the query below:

SELECT OBJECT_SCHEMA_NAME(Ind.object_id) AS [Schema Name],
       OBJECT_NAME(Ind.object_id) AS [Table Name],
       ISNULL([rows], 0) AS [Total Records]   -- [rows] comes from the statistics metadata
FROM sys.dm_db_xtp_index_stats AS Ind
CROSS APPLY sys.dm_db_stats_properties(Ind.object_id, Ind.index_id)
WHERE Ind.index_id = 2   -- one index per table, so each table is listed once
ORDER BY OBJECT_NAME(Ind.object_id)
 
However, please note that the above query will give correct row counts if and only if statistics are up to date for the table. For in-memory tables, statistics are not updated automatically and hence will have to be updated manually before one attempts to find the row count. So, please refer to the script below to update the statistics for in-memory tables alone.
 
 
DECLARE @id int, @rowcnt int
DECLARE @in_mem_tbl_name varchar(500)
SET @id = 1

-- Collect the names of all in-memory tables in the database
CREATE TABLE #in_memory_tbl (id int IDENTITY(1,1), in_mem_tbl_name varchar(500))

INSERT INTO #in_memory_tbl (in_mem_tbl_name)
SELECT OBJECT_NAME(object_id)
FROM sys.dm_db_xtp_table_memory_stats
WHERE object_id > 0   -- exclude system (negative object_id) entries
SET @rowcnt = @@ROWCOUNT

-- Update statistics for each in-memory table, one at a time
WHILE @id <= @rowcnt
BEGIN
    SELECT @in_mem_tbl_name = in_mem_tbl_name
    FROM #in_memory_tbl
    WHERE id = @id

    EXEC('UPDATE STATISTICS ' + @in_mem_tbl_name + ' WITH FULLSCAN, NORECOMPUTE');

    SET @id = @id + 1
END

DROP TABLE #in_memory_tbl
 

Please ensure you run the statistics update script above before attempting to find the row counts of in-memory tables.

Monday, November 2, 2015

Hekaton Part 10: Things Hash Indexes Can do

Hash indexes on in-memory tables are best suited for equality searches. The reason is obvious: since they are based on a hashing algorithm, which doesn't sort data, hash indexes aren't useful for sorting or non-equality searches (refer here). But hashing being a fast and effective technique, hash indexes are perhaps the best bet for handling equality searches (WHERE conditions using "="). Note that equality searches are by far the most common filter condition you would see in queries. So, it is one of those things Microsoft has done to address the obvious with the best possible solution, ignoring the not-so-common problems :) Besides equality filters, the other areas where hash indexes can be useful are:

Group by clause:

Consider the following query

SELECT count(*),data
FROM [HK_tbl]
GROUP BY Data

The query plan indicates an index scan but no additional sorting or grouping is required: even though the data is not sorted by the hash index, it is grouped/categorized into buckets, which makes the index very effective for processing "group by" queries.


Inner Join:

HK_tbl2 is another in-memory table with a hash index on the data column.

SELECT HK_tbl.data,HK_tbl2.dt
FROM HK_tbl
INNER JOIN HK_tbl2
ON HK_tbl.data = HK_tbl2.data

The outer table is scanned as there is no filtering condition, while the inner table makes use of the hash index. Query plan shown below.



 

Left Outer Join:

SELECT HK_tbl.data,HK_tbl2.dt
FROM HK_tbl
LEFT OUTER JOIN HK_tbl2
ON HK_tbl.data = HK_tbl2.data

A semi-equijoin condition like a left outer join also benefits from the hash index.

 

Sunday, October 25, 2015

Hekaton Part 9: Things Hash Indexes can't do

The last few posts have explored hash indexes in depth. This one will deal with the limitations of hash indexes. What are the things hash indexes can't do?
Hash indexes are based on hash functions, which don't sort the data at all, unlike conventional (B-tree) indexes. Hash indexes just categorize the data into hash buckets, which helps fetch the required row faster.

 "HK_tbl" is a in memory table, with column "data" having a hash index.

Order by:

Consider the following query using Order by:

SELECT TOP 1000 [ID]
,[Data]
,[dt]
FROM [HKDB].[dbo].[HK_tbl] WITH (SNAPSHOT)
ORDER BY data desc;

Query plan shown below:


Observe the "Sort" operator. Sort operator indicates that the data was sorted and index didn't avoid sort operation.

Pattern match using LIKE and '%':

SELECT TOP 1000 [ID]
,[Data]
,[dt]
FROM [HKDB].[dbo].[HK_tbl] WITH (SNAPSHOT)
WHERE data like 'C%'
 
 



Observe the Filter operator. It indicates that the predicate (the filtering condition) was applied after the index was scanned. The number of rows coming out of the Index Scan operator also confirms that the scan read the entire table.

Similarly, non-equality predicates wouldn't use hash indexes. For handling range scans and sort operations, the other index type, called a range index (also known as a memory-optimized nonclustered index), would help; a sketch follows.
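
On SQL Server 2014, indexes on in-memory tables must be declared when the table is created; a hypothetical sketch of a table carrying a range index instead (names are illustrative):

-- A range (memory-optimized nonclustered) index supports ordering and range scans
CREATE TABLE dbo.HK_tbl_range
(
    ID   int IDENTITY(1,1) NOT NULL
         PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1048576),
    Data varchar(100) COLLATE Latin1_General_100_BIN2 NOT NULL,
    dt   datetime NOT NULL,
    INDEX IX_HK_tbl_range_Data NONCLUSTERED (Data)
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)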

In the next post, we will explore the scenarios where hash indexes are most useful.