Monthly Archives: June 2013

SQL Down Under podcast 60 with SQL Server MVP Adam Machanic

I posted another podcast over the weekend. Late last week, I managed to get a show recorded with Adam Machanic. Adam’s always fascinating. In this show, he’s talking about what he’s found regarding increasing query performance using parallelism. Late in the show, he gives his thoughts on a number of topics related to the upcoming SQL Server 2014.

Enjoy!

The show is online now: http://www.sqldownunder.com/Podcasts

 

Comparing SQL Server and Oracle background processes

As I mentioned in a previous tip that compared datatypes between SQL Server and Oracle, having worked with both technologies I constantly find myself thinking in terms of one and comparing it to something in the other. This tip gives an overview of the background processes that make up both technologies and highlights the similarities and differences between them.

‘Wicked Fast’–Real World Experience with SQL Server 2014 In-Memory OLTP

Recently Quentin Clark announced SQL Server 2014 during TechEd North America. One of the key features announced was the in-memory online transaction processing (OLTP) engine, which complements the in-memory technologies already available in the data warehouse and analytics engines in SQL Server 2012 and integrated into PowerPivot in Excel 2013.

During the announcement, Quentin highlighted how Edgenet was using In-Memory OLTP to gain real-time access to retail product data. At TechEd Europe, it’s a timely reminder to look back at a story we first highlighted at PASS Summit 2012, in which bwin announced the benefits they were gaining from using In-Memory OLTP, at that time still under its codename ‘Project Hekaton’.

Bwin is the largest regulated online gaming company in the world, and their success depends on positive customer experiences. They had recently upgraded some of their systems to SQL Server 2012, a story you can read here. Bwin had already gained significant in-memory benefit in their data warehouse using the in-memory columnstore: for example, a large report that used to take 17 minutes to render now takes only three seconds.

Given the benefits they had seen with in-memory technologies, they were keen to trial the technology preview of In-Memory OLTP. Prior to using it, their online gaming systems were handling about 15,000 requests per second, a huge number for most companies. However, bwin needed to be agile and stay ahead of the competition, and so they wanted the speed of the latest technology.

Using In-Memory OLTP, bwin hoped to at least double the number of transactions. They were ‘pretty amazed’ to see the fastest tests so far scale to 250,000 transactions per second. So how fast is SQL Server 2014 In-Memory OLTP, or Project ‘Hekaton’ as it was then known? Just ask Rick Kutschera, the Database Engineering Manager at bwin; in his words it’s ‘Wicked Fast’!

Watch ‘bwin Wins with SQL Server 2014’ below and download SQL Server 2014 CTP1 today!

 

Microsoft Discusses Big Data at Hadoop Summit 2013

Hortonworks and Yahoo! kicked off the sixth annual Hadoop Summit yesterday in San Jose, the leading conference for the Apache Hadoop community. We’ve been on the ground discussing our big data strategy with attendees and showcasing HDInsight Service, our Hadoop-based distribution for Windows Azure, as well as our latest business intelligence (BI) tools. This morning, Microsoft Corporate Vice President Quentin Clark will deliver a presentation on “Reaching a Billion Users with Hadoop,” where he will discuss how Microsoft is simplifying data management for customers across all types of platforms. You can tune in live at 8:30 AM PT at www.hadoopsummit.org/sanjose.

Hortonworks also made an announcement this morning that aligns well with our goal to continue to simplify Hadoop for the enterprise. They announced that they will develop management packs for Microsoft System Center Operations Manager and Microsoft System Center Virtual Machine Manager that will manage and monitor the Hortonworks Data Platform (HDP). With these management packs, customers will be able to monitor and manage HDP from System Center Operations Manager alongside existing data center deployments, and manage HDP from System Center Virtual Machine Manager in virtual and cloud infrastructure deployments. For more information, visit www.hortonworks.com.

Another Microsoft partner, Simba Technologies, also announced yesterday that it will provide Open Database Connectivity (ODBC) access to Windows Azure HDInsight, Microsoft’s 100% Apache compatible Hadoop distribution. Simba’s Apache Hive ODBC Driver with SQL Connector provides customers easy access to their data for BI and analytics using the SQL-based application of their choice. For more information, see the full press release at http://www.simba.com/about-simba/in-the-news/simba-provides-hdinsight-big-data-connectivity. For more information on Hadoop, see http://hortonworks.com/hadoop/.

Mike Flasko, a senior program manager for SQL Server, will also deliver a session this afternoon at 4:25 PM PT focused on how 343 Industries, the studio behind the Halo franchise, is leveraging Windows Azure HDInsight Service to gain insight from millions of concurrent gamers, insight that drives weekly Halo 4 updates and supports email campaigns designed to increase player retention. If you are attending Hadoop Summit, be sure to sit in on Quentin’s keynote, stop by Mike’s session, and check out our booth in the Expo Hall!

Security for SQL Server User Defined Data Types

I have a group of developers that I support and they are reporting they cannot see columns within their tables. I have granted them db_datareader permissions, which is a standard at my company for QA environments. Why can’t they see their column definitions? Check out this tip to learn more.

AlwaysOn in SQL Server 2014 CTP1

AlwaysOn encompasses the SQL Server solutions for mission-critical high availability and disaster recovery. Two solutions were included under the AlwaysOn umbrella in SQL Server 2012: Availability Groups and Failover Cluster Instances. Each solution has different characteristics, making them appropriate for different scenarios, and both can be combined in the same deployment.

In SQL Server 2014, we are enhancing the availability, scalability, and flexibility of these solutions.

In this blog, we remind you of the AlwaysOn solutions and then describe the enhancements in SQL Server 2014 CTP1. In a future blog, we’ll describe the enhancements in CTP2.

Availability Groups

Availability Groups (AGs), introduced in SQL Server 2012, integrate and extend features from Database Mirroring and Log Shipping.

AGs provide high availability for a group of databases through redundant replicas hosted by up to 5 SQL Server instances (1 primary, 4 secondaries). Each SQL Server instance has its own copy of the databases (on its local disks), and AGs continuously synchronize transactions from the primary replica to the secondary replicas. Secondaries can be configured as synchronous or asynchronous, allowing you to trade data safety for performance. Similarly, they can be configured for automatic or manual failover to fit automated or manual operational processes. Secondary replicas maintain hot database copies (online state) so that failovers take only seconds.

AGs enable integrated high availability and disaster recovery configurations without the need for shared storage (e.g. SAN). In addition, secondary replicas can be used to offload read workloads and backups from the primary replica. A SQL Server instance can host replicas for multiple AGs, allowing the distribution of primary replicas across different SQL Server instances. AGs provide many other capabilities such as a Listener (virtual network name) for client connectivity, flexible failover policies, automatic page repair, full cross-feature support, compression, and encryption.
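As a rough sketch of what this looks like in T-SQL (the server names, endpoint URLs, and database name below are placeholders, not from any particular deployment), an AG mixing synchronous and asynchronous secondaries might be created like this:

-- hypothetical example: one synchronous secondary with automatic failover,
-- one asynchronous, readable secondary for disaster recovery and read offloading
CREATE AVAILABILITY GROUP [SalesAG]
FOR DATABASE [SalesDB]
REPLICA ON
    N'SQLNODE1' WITH (ENDPOINT_URL = N'TCP://sqlnode1.contoso.com:5022',
                      AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
                      FAILOVER_MODE = AUTOMATIC),
    N'SQLNODE2' WITH (ENDPOINT_URL = N'TCP://sqlnode2.contoso.com:5022',
                      AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
                      FAILOVER_MODE = AUTOMATIC),
    N'SQLNODE3' WITH (ENDPOINT_URL = N'TCP://sqlnode3.contoso.com:5022',
                      AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
                      FAILOVER_MODE = MANUAL,
                      SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY));
GO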

In SQL Server 2014 CTP1, AGs are enhanced with:

  • Increased availability of readable secondaries in multi-site environments
  • Increased number of secondaries
  • Enhanced Diagnostics
  • Integration with Windows Azure

Increased availability of readable secondaries in multi-site environments

Readable secondary replicas now allow read workloads to run without any disruption even in the presence of lasting network failures or loss of quorum on the underlying Windows Server Failover Cluster.

This is especially desirable in large-scale geo-distributed environments where network disconnections are not that unusual. As depicted in the picture below, reports running on readable secondaries far from the main data center will continue running despite a network disconnection. Similarly, new reports started during the network disconnection will run.

[Figure: reports on remote readable secondaries continue running during a network disconnection]

Using the AlwaysOn Dashboard or DMVs, you can detect that a secondary replica is disconnected from the primary and determine the last time and transaction LSN that the secondary committed.
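As a minimal sketch (the exact columns you pick will vary), a query against the AlwaysOn DMVs along these lines shows the connection state of each replica together with the last committed time and LSN:

-- per-database replica state, including connection state and last committed time/LSN
SELECT ar.replica_server_name,
       DB_NAME(drs.database_id) AS database_name,
       ars.connected_state_desc,
       drs.synchronization_state_desc,
       drs.last_commit_lsn,
       drs.last_commit_time
FROM sys.dm_hadr_database_replica_states AS drs
JOIN sys.availability_replicas AS ar
    ON drs.replica_id = ar.replica_id
JOIN sys.dm_hadr_availability_replica_states AS ars
    ON drs.replica_id = ars.replica_id;
GO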

Increased number of secondaries

Many customers use AGs to satisfy their high availability, disaster recovery, and reporting requirements in a single solution to configure and manage. The fact that AG log synchronization is many times faster than other technologies (e.g. Replication or Log Shipping) motivates the use of readable secondaries for reporting.

Thus, we have increased the maximum number of secondaries from 4 to 8.

This facilitates the following scenarios:

a) Distribution of replicas in a geo-distributed environment

Additional replicas can be deployed across the geo-distributed environment, allowing read workloads to run against a local replica.

b) Scaling-out of read workloads

Additional replicas can be used to load balance read workloads.

Load balancing can be implemented using simple DNS round-robin or specialized (hardware or software) load balancing solutions.

The picture below depicts the increased number of replicas in Object Explorer and the AlwaysOn Dashboard.

AlwaysOn Dashboard

Enhanced Diagnostics

We have enhanced the diagnostics information for troubleshooting issues.

This involves:

  1.    Simplifying error messages. For example, separating the generic error “replica can’t become primary” into multiple messages, each with a specific cause: “because replica is not synchronized”, “because the Windows cluster was started in forced quorum mode”, etc.
  2. Making information easier to find and correlate. For example, adding names (AG, replica, and database) to all AlwaysOn XEvents or making additional columns more discoverable in the AlwaysOn Dashboard.

There are many more improvements coming in CTP2: for example, the ability to view XEvents in UTC time, XEvents fired when replicas change synchronization state, and recording of the last committed time and transaction LSN when a replica goes into the resolving state.

Integration with Windows Azure Infrastructure Services

This year we started supporting AGs on Windows Azure Infrastructure Services (SQL Server running on virtual machines). We support SQL Server 2012 and now SQL Server 2014.

This enables two scenarios:

a) High Availability for SQL Server databases hosted in Windows Azure

Configure a synchronous secondary replica for automatic failover in case of failure (SQL Server or VM), guest patching (SQL Server or OS), or Windows Azure’s host upgrade.

b) Disaster Recovery for on-premise SQL Server databases using Windows Azure

Configure one or more asynchronous secondary replicas for your on-premise databases. This allows manually failing over to an Azure replica in case of a failure impacting the on-premise environment. The Azure replicas can be used to offload read workloads and maintain additional backups.

The picture below depicts the canonical scenario. The AG has 2 replicas on-premise for high availability and a 3rd replica on Windows Azure for disaster recovery.

[Figure: AG with two on-premise replicas for high availability and a third replica on Windows Azure for disaster recovery]

Implementing this scenario requires configuring a site-to-site VPN tunnel between your on-premise network and Windows Azure. You can use either a physical VPN device or Windows Server 2012 Routing and Remote Access Service (RRAS).
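As a rough sketch (the AG name, Azure server name, and endpoint URL below are placeholders), once the VPN is in place the Azure replica can be added from the primary with plain T-SQL:

-- hypothetical example: add an asynchronous DR replica hosted in a Windows Azure VM
ALTER AVAILABILITY GROUP [SalesAG]
ADD REPLICA ON N'AZURE-SQL1'
WITH (ENDPOINT_URL = N'TCP://azure-sql1.cloudapp.net:5022',
      AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
      FAILOVER_MODE = MANUAL);
GO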

In CTP2, we’ll release a wizard to greatly simplify adding a replica on Azure.

Failover Cluster Instances

Failover Cluster Instances (FCIs) enhance the traditional SQL Server Failover Clustering feature. FCIs provide high availability for a SQL Server instance through redundant servers that share the same storage (generally a SAN) and appear as a single logical SQL Server instance. Only one active server has access to the storage and can run workloads. When this instance becomes unavailable, the storage ownership moves to a different server where SQL Server is started. SQL Server startup, which includes database recovery, can take from seconds to a couple of minutes. The secondary servers are passive (they can’t run any workloads).

Given that FCI ensures availability for a full SQL Server instance, it is suited for servers with many instance-level dependencies (e.g. jobs or SSIS packages) or for consolidation, hosting hundreds to thousands of databases.

In SQL Server 2012, FCIs were enhanced to support multi-subnet clusters, faster and more predictable failover times, flexible failover policies, and tempDB support on local disk.

In SQL Server 2014 CTP1, FCIs are enhanced with:

  • Support for Cluster Shared Volumes
  • Enhanced Diagnostics

Support for Cluster Shared Volumes

Cluster Shared Volumes (CSVs) is a feature of Windows Server Failover Cluster. A CSV is a shared disk that is made accessible to all nodes in a Windows Server Failover Cluster. CSVs build a common global namespace across the cluster (accessible under the %SystemDrive%\ClusterStorage root directory). This allows creating as many shared volumes as desired in the shared storage, removing the 24 drive letter limitation. For FCI this means that you can create as many FCIs as desired, without having to manage mount points.

CSVs also increase the resiliency of the cluster by having I/O fault detection and recovery over alternate communication paths between the nodes in the cluster. This is built on top of the Server Message Block (SMB) protocol. For FCI this means that, if the active server is not able to read/write directly to the shared storage, the CSV will automatically re-route the request through another node that has access. In addition, if the shared storage has any transient issues, CSV will transparently cache and re-try the operation.

Finally, CSVs increase the reliability of failovers, as disks don’t need to be unmounted and mounted as with traditional cluster disks. On failover, the new primary SQL Server instance can simply access the CSV.

CSVs are supported on Windows Server 2012 and Windows Server 2012 R2.

The picture below depicts two CSVs (Disk 1 and Disk 4) registered in Windows Server Failover Cluster. These CSVs can be selected as the shared disks during the installation of a FCI.

Failover Cluster Manager

Enhanced Diagnostics

We made some errors easier to understand. In CTP2, we’ll display information about the underlying Windows Server Failover Cluster in AlwaysOn DMVs.

Try SQL14 CTP1 and give us feedback! Download it or try it on Windows Azure.

Let us know if you would like to join the Technology Adoption Program (TAP) and give us your feedback.

Getting Started with SQL Server 2014 In-Memory OLTP

SQL Server 2014 introduces memory-optimized database technology for optimizing the performance of OLTP workloads. In particular, it introduces memory-optimized tables for efficient, contention-free data access, and natively compiled stored procedures for efficient execution of business logic.    

With this post we are giving you a brief introduction to the new syntax for in-memory OLTP, and will show you how to get started with T-SQL and SQL Server Management Studio (SSMS).      

Before we start, let’s create a sample database. If you have an existing database you can skip this step.

-- optional: create database
CREATE DATABASE imoltp
GO

SSMS: To create a database,
1.    In Object Explorer, connect to an instance of the SQL Server Database Engine and then expand that instance.
2.    Right-click Databases, click New Database and then enter the value for the database name.

Step 1: enable your database for in-memory OLTP

We are going to add a filegroup for memory_optimized_data to our database, and add a container to this filegroup. This filegroup will be used to guarantee durability of memory-resident data in the event of a server crash or restart. During the crash recovery phase in server startup, the data is recovered from this filegroup and loaded back into memory.

When creating the container in the memory_optimized_data filegroup you must specify the storage location. In this example we picked the folder 'c:\data'. Make sure the folder exists before running the script.

-- enable for in-memory OLTP - change file path as needed
ALTER DATABASE imoltp ADD FILEGROUP imoltp_mod CONTAINS MEMORY_OPTIMIZED_DATA
ALTER DATABASE imoltp ADD FILE (name='imoltp_mod1', filename='c:\data\imoltp_mod1') TO FILEGROUP imoltp_mod
GO

SSMS: To add a memory_optimized_data filegroup and its container,
1.    In Object Explorer, expand the Databases node, right-click your database and then click Properties.
2.    To add a new memory optimized data filegroup, click the Filegroups page. Under MEMORY OPTIMIZED DATA, click Add filegroup and then enter the values for the filegroup.
3.    To add a file to the filegroup, click the General page. Under Database files, click Add and then enter the values for the file. Use file type FILESTREAM Data.


 
Step 2: create your first memory-optimized table

We are now ready to create our first memory-optimized tables. We have here two tables, ‘ShoppingCart’, and ‘UserSession’. ‘ShoppingCart’ is a durable table (the default), which means that its contents are persisted on disk and will not be lost on a server crash. ‘UserSession’ is a non-durable table (DURABILITY=SCHEMA_ONLY), which means that the contents of the table exist only in memory, and are lost on server restart.

Note that in CTP1 memory-optimized tables support only ‘nonclustered hash’ indexes. The bucket_count of the index should be roughly 1 to 2 times the number of unique index keys you expect to find in the table.

-- create memory optimized tables
USE imoltp
GO

-- durable table - contents of this table will not be lost on a server crash
CREATE TABLE dbo.ShoppingCart (
   ShoppingCartId int not null primary key nonclustered hash with (bucket_count=2000000),
   UserId int not null index ix_UserId nonclustered hash with (bucket_count=1000000),
   CreatedDate datetime2 not null,
   TotalPrice money
)
WITH (MEMORY_OPTIMIZED=ON)
GO

-- non-durable table - contents of this table are lost on a server restart
CREATE TABLE dbo.UserSession (
   SessionId int not null primary key nonclustered hash with (bucket_count=400000),
   UserId int not null,
   CreatedDate datetime2 not null,
   ShoppingCartId int,
   index ix_UserId nonclustered hash (UserId) with (bucket_count=400000)
)
WITH (MEMORY_OPTIMIZED=ON, DURABILITY=SCHEMA_ONLY)
GO

SSMS: To create a memory-optimized table,
1.    In Object Explorer, right-click the Tables node of your database, click New, and then click Memory Optimized Table. A template for creating a memory-optimized table is displayed.
2.    To replace the template parameters, click Specify Values for Template Parameters on the Query menu. The shortcut key is Ctrl-Shift-M.

Step 3: load your data

You can load data into the tables in various ways, including INSERT .. SELECT from an existing disk-based table and BCP. In this example we are using simple INSERT statements for loading the data.

-- Basic DML
-- insert a few rows
INSERT dbo.UserSession VALUES (1,342,GETUTCDATE(),4)
INSERT dbo.UserSession VALUES (2,65,GETUTCDATE(),NULL)
INSERT dbo.UserSession VALUES (3,8798,GETUTCDATE(),1)
INSERT dbo.UserSession VALUES (4,80,GETUTCDATE(),NULL)
INSERT dbo.UserSession VALUES (5,4321,GETUTCDATE(),NULL)
INSERT dbo.UserSession VALUES (6,8578,GETUTCDATE(),NULL)
INSERT dbo.ShoppingCart VALUES (1,8798,GETUTCDATE(),NULL)
INSERT dbo.ShoppingCart VALUES (2,23,GETUTCDATE(),45.4)
INSERT dbo.ShoppingCart VALUES (3,80,GETUTCDATE(),NULL)
INSERT dbo.ShoppingCart VALUES (4,342,GETUTCDATE(),65.4)
GO

-- verify table contents
SELECT * FROM dbo.UserSession
SELECT * FROM dbo.ShoppingCart
GO

SSMS: To view the contents of a memory-optimized table,
⦁    In Object Explorer, right-click on your memory-optimized table, click on Script Table as, click on SELECT To, click on New Query Editor Window and then execute the query that is displayed.

Step 4: update statistics

Memory-optimized tables do not support auto_update_statistics, thus statistics will need to be updated manually. You can use UPDATE STATISTICS to update statistics for individual tables, or sp_updatestats for all tables in the database.

-- update statistics on memory optimized tables
UPDATE STATISTICS dbo.UserSession WITH FULLSCAN, NORECOMPUTE
UPDATE STATISTICS dbo.ShoppingCart WITH FULLSCAN, NORECOMPUTE
GO

Step 5: run queries

You are now ready to run your queries. Because they access memory-optimized tables, these queries will benefit from the latch-free data structures and more efficient data access. Here are a few examples.

-- in an explicit transaction, assign a cart to a session and update the total price.
-- note that the isolation level hint is required for memory-optimized tables with
-- SELECT/UPDATE/DELETE statements in explicit transactions
BEGIN TRAN
  UPDATE dbo.UserSession WITH (SNAPSHOT) SET ShoppingCartId=3 WHERE SessionId=4
  UPDATE dbo.ShoppingCart WITH (SNAPSHOT) SET TotalPrice=65.84 WHERE ShoppingCartId=3
COMMIT
GO
-- verify table contents
SELECT *
FROM dbo.UserSession u JOIN dbo.ShoppingCart s on u.ShoppingCartId=s.ShoppingCartId
WHERE u.SessionId=4
GO

Step 6: create natively compiled stored procedures

To further optimize the access to memory-optimized tables, and to optimize execution of your business logic, you can create natively compiled stored procedures. While these procedures are written using Transact-SQL, they do not support the full Transact-SQL surface area. For details, see Books Online.

Here is an example of a natively compiled stored procedure that accesses the tables we created previously.

-- natively compiled stored procedure for assigning a shopping cart to a session
CREATE PROCEDURE dbo.usp_AssignCart @SessionId int
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC
WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')

  DECLARE @UserId int,
    @ShoppingCartId int

  SELECT @UserId=UserId, @ShoppingCartId=ShoppingCartId
  FROM dbo.UserSession WHERE SessionId=@SessionId

  IF @UserId IS NULL
    THROW 51000, 'The session or shopping cart does not exist.', 1

  UPDATE dbo.UserSession SET ShoppingCartId=@ShoppingCartId WHERE SessionId=@SessionId
END
GO

EXEC usp_AssignCart 1
GO

The following stored procedure showcases the performance of natively compiled stored procedures by inserting a large number of rows into a memory-optimized table. This script inserts 1,000,000 rows.

Note that if log IO becomes a bottleneck in the application, SQL Server allows you to use a non-durable table (DURABILITY=SCHEMA_ONLY), which removes the log IO completely.

-- natively compiled stored procedure for inserting a large number of rows
--   this demonstrates the performance of native procs
CREATE PROCEDURE dbo.usp_InsertSampleCarts @StartId int, @InsertCount int
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC
WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')

  DECLARE @ShoppingCartId int = @StartId

  WHILE @ShoppingCartId < @StartId + @InsertCount
  BEGIN
    INSERT INTO dbo.ShoppingCart VALUES
         (@ShoppingCartId, 1, '2013-01-01T00:00:00', NULL)
    SET @ShoppingCartId += 1
  END

END
GO

-- insert 1,000,000 rows
DECLARE @StartId int = (SELECT MAX(ShoppingCartId)+1 FROM dbo.ShoppingCart)
EXEC usp_InsertSampleCarts @StartId, 1000000
GO

-- verify the rows have been inserted
SELECT COUNT(*) FROM dbo.ShoppingCart
GO

SSMS: To create a natively compiled stored procedure,
1.    In Object Explorer, right-click the Stored Procedures node of your database, click New, and then click Natively Compiled Stored Procedure. A template for creating natively compiled stored procedures is displayed.
2.    To replace the template parameters, click Specify Values for Template Parameters on the Query menu. The shortcut key is Ctrl-Shift-M.

For more details about the concepts for in-memory OLTP, as well as a reference of the syntax, see Books Online or get started and download SQL Server 2014 CTP1 here.

That’s all for today, but stay tuned for further posts on this blog!    

Alert visualization recipe: Get out your blender, drop in some sp_send_dbmail, Google Charts API, add your favorite colors and sprinkle with html. Blend till it’s smooth and looks pretty enough to taste.

 

I really like database monitoring. My email inbox has a constant flow of different types of alerts coming from our production servers with all kinds of information, sometimes more useful and sometimes less so. Database alerts usually look really simple: a plain text email saying “Prod1 Database data file on Server X is 80% used. You’d better grow it manually before some query triggers the AutoGrowth process”.

Imagine you could receive an email like the one below. In addition to the alert description, it could also include the database file growth chart over the past 6 months. Wouldn’t that give you much more information about whether the data growth is natural or extreme? That’s truly what data visualization is for.

Believe it or not, I have sent the graph below from a SQL Server stored procedure without buying any additional data monitoring or visualization tool.

[Image: alert email showing the database file growth chart over the past 6 months]

 

Would you like to visualize your database alerts like I do? Then, like me, you’ll love Google Charts.

All you need is a little HTML and a mail profile configured on your SQL Server instance, regardless of the SQL Server version.
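If you are not sure whether a mail profile exists, a quick sanity check (assuming Database Mail is already enabled on the instance) is to list the configured profiles:

-- list the Database Mail profiles configured on this instance
EXEC msdb.dbo.sysmail_help_profile_sp;
GO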

First of all, I hope you know that the sp_send_dbmail procedure has a great parameter @body_format = ‘HTML’, which allows us to send rich and colorful messages instead of boring black and white ones. All that we need is to dynamically create HTML code.

This is how, for instance, you can create a table and populate it with some data:

DECLARE @html varchar(max)

SET @html = '<html>'
+ '<H3><font id="Text" style="color: Green;">Top Databases: </H3>'
+ '<table border="1" bordercolor="#3300FF" style="background-color:#DDF8CC" width="70%" cellpadding=3 cellspacing=3>'
+ '<tr><font color="Green"><th>Database Name</th><th>Size</th><th>Physical Name</th></tr>' +

CAST( (SELECT TOP 10
                td = name, '',
                td = size * 8/1024, '',
                td = physical_name
            FROM sys.master_files
            ORDER BY size DESC
            FOR XML PATH ('tr'), TYPE
) AS VARCHAR(MAX))
+
'</table>'

EXEC msdb.dbo.sp_send_dbmail
@recipients = 'PutYourEmailHere',
@subject = 'Top databases',
@body = @html,
@body_format = 'HTML'

This is the result:

[Image: the resulting email with an HTML table of the top databases]

 

If you want to add more visualization effects, you can use Google Charts Tools (https://google-developers.appspot.com/chart/interactive/docs/index), a free and rich library of data visualization charts that are also easy to populate and embed.

There are two versions of Google Charts:

Image-based charts: https://google-developers.appspot.com/chart/image/docs/gallery/chart_gall

This is the old version; it’s officially deprecated, although it will remain available for the next few years or so. I really enjoy using this one because it can be viewed within the email body. For mobile devices you need to change the “Load remote images” property in your email application configuration.

 

Charts based on JavaScript classes: https://google-developers.appspot.com/chart/interactive/docs/gallery

This API is newer, with rich and highly interactive charts, and it’s much easier to understand and configure. The only downside is that these charts cannot be viewed within the email body. Outlook, Gmail and many other email clients, as part of their security policy, do not run any JavaScript placed within the email body. However, you can still enjoy this API by sending the report as an email attachment.

Here is an example using the old version of the Google Charts API. It sends the same top databases report as in the previous example, but instead of a simple table this script builds a pie chart right from the T-SQL code:

DECLARE @html varchar(8000)

DECLARE @Series varchar(800), @Labels varchar(8000), @Legend varchar(8000);

SET @Series = '';
SET @Labels = '';
SET @Legend = '';

SELECT TOP 5 @Series = @Series + CAST(size * 8/1024 as varchar) + ',',
             @Labels = @Labels + CAST(size * 8/1024 as varchar) + 'MB' + '|',
             @Legend = @Legend + name + '|'
FROM sys.master_files
ORDER BY size DESC

SELECT @Series = SUBSTRING(@Series,1,LEN(@Series)-1),
       @Labels = SUBSTRING(@Labels,1,LEN(@Labels)-1),
       @Legend = SUBSTRING(@Legend,1,LEN(@Legend)-1)

SET @html =
  '<H3><font color="Green"> '+@@ServerName+' top 5 databases : </H3>'+
  '<br>'+
  '<img src="http://chart.apis.google.com/chart?'+
  'chf=bg,s,DDF8CC&'+
  'cht=p&'+
  'chs=400x200&'+
  'chco=3072F3|7777CC|FF9900|FF0000|4A8C26&'+
  'chd=t:'+@Series+'&'+
  'chl='+@Labels+'&'+
  'chma=0,0,0,0&'+
  'chdl='+@Legend+'&'+
  'chdlp=b"'+
  ' alt="'+@@ServerName+' top 5 databases" />'

EXEC msdb.dbo.sp_send_dbmail @recipients = 'PutYourEmailHere',
                             @subject = 'Top databases',
                             @body = @html,
                             @body_format = 'HTML'

This is what you get. Isn’t it great?

[Image: pie chart of the top 5 databases by size]

Chart parameters reference:

chf     background fill (bg - background; s - solid)
cht     chart type (p - pie)
chs     chart size (width x height)
chco    series colors
chd     chart data string (1,2,3,2)
chl     pie chart labels (a|b|c|d)
chma    chart margins
chdl    chart legend (a|b|c|d)
chdlp   chart legend position (b - bottom of chart)

 

Line graph implementation is also really easy and powerful:

DECLARE @html varchar(max)
DECLARE @Series varchar(max)
DECLARE @HourList varchar(max)

SET @Series = '';
SET @HourList = '';

SELECT @HourList = @HourList + SUBSTRING(CONVERT(varchar(13),last_execution_time,121), 12,2) + '|' ,
       @Series = @Series + CAST( COUNT(1) as varchar) + ','
FROM sys.dm_exec_query_stats s
   CROSS APPLY sys.dm_exec_sql_text(plan_handle) t
WHERE last_execution_time >= getdate()-1
GROUP BY CONVERT(varchar(13),last_execution_time,121)
ORDER BY CONVERT(varchar(13),last_execution_time,121)

SET @Series = SUBSTRING(@Series,1,LEN(@Series)-1)

SET @html =
'<img src="http://chart.apis.google.com/chart?'+
'chco=CA3D05,87CEEB&'+
'chd=t:'+@Series+'&'+
'chds=1,350&'+
'chdl= Proc executions from cache&'+
'chf=bg,s,1F1D1D|c,lg,0,363433,1.0,2E2B2A,0.0&'+
'chg=25.0,25.0,3,2&'+
'chls=3|3&'+
'chm=d,CA3D05,0,-1,12,0|d,FFFFFF,0,-1,8,0|d,87CEEB,1,-1,12,0|d,FFFFFF,1,-1,8,0&'+
'chs=600x450&'+
'cht=lc&'+
'chts=FFFFFF,14&'+
'chtt=Executions from '
+(SELECT CONVERT(varchar(16),min(last_execution_time),121)
         FROM sys.dm_exec_query_stats
         WHERE last_execution_time >= getdate()-1)
+' till '
+(SELECT CONVERT(varchar(16),max(last_execution_time),121)
    FROM sys.dm_exec_query_stats)
+'&'+
'chxp=1,50.0|4,50.0&'+
'chxs=0,FFFFFF,12,0|1,FFFFFF,12,0|2,FFFFFF,12,0|3,FFFFFF,12,0|4,FFFFFF,14,0&'+
'chxt=y,y,x,x,x&'+
'chxl=0:|1|350|1:|N|2:|'+@HourList+'3:|Hour&'+
'chma=55,120,0,0" alt="" />'

EXEC msdb.dbo.sp_send_dbmail
@recipients = 'PutYourEmailHere',
@subject = 'Daily number of executions',
@body = @html,
@body_format = 'HTML'

[Image: line graph of query executions per hour over the last day]

Chart parameters reference:

chco    series colors
chd        series data
chds    scale format
chdl    chart legend
chf        background fills
chg        grid line
chls    line style
chm        line fill
chs        chart size
cht        chart type
chts    chart style
chtt    chart title
chxp    axis label positions
chxs    axis label styles
chxt    axis tick mark styles
chxl    axis labels
chma    chart margins

If you don’t mind getting your charts as an email attachment, you can enjoy the JavaScript-based Google Charts, which are even easier to configure and have much more advanced graphics. In the example below, the sp_send_dbmail procedure uses the @query parameter, which is executed at the time sp_send_dbmail runs, and the HTML result of that execution is attached to the email.

DECLARE @html varchar(max),@query varchar(max)
DECLARE @SeriesDBusers varchar(800);

SET @SeriesDBusers = '';

SELECT @SeriesDBusers = @SeriesDBusers + ' ["'+DB_NAME(r.database_id)+'", '+cast(count(1) as varchar)+'],'
FROM sys.dm_exec_requests r
GROUP BY DB_NAME(r.database_id)
ORDER BY count(1) desc;

SET @SeriesDBusers = SUBSTRING(@SeriesDBusers,1,LEN(@SeriesDBusers)-1)

SET @query = '
PRINT ''
<html>
  <head>
    <script type="text/javascript" src="https://www.google.com/jsapi"></script>
    <script type="text/javascript">
      google.load("visualization", "1", {packages:["corechart"]});
      google.setOnLoadCallback(drawChart);
      function drawChart() {
        var data = google.visualization.arrayToDataTable([
          ["Database Name", "Active users"],
          ' + @SeriesDBusers + '
        ]);

        var options = {
          title: "Active users",
          pieSliceText: "value"
        };

        var chart = new google.visualization.PieChart(document.getElementById("chart_div"));
        chart.draw(data, options);
      };
    </script>
  </head>
  <body>
    <table>
    <tr><td>
        <div id="chart_div" style="width: 800px; height: 300px;"></div>
        </td></tr>
    </table>
  </body>
</html>
'''

EXEC msdb.dbo.sp_send_dbmail
   @recipients = 'PutYourEmailHere',
   @subject = 'Active users',
   @body = @html,
   @body_format = 'HTML',
   @query = @query,
   @attach_query_result_as_file = 1,
   @query_attachment_filename = 'Results.htm'

After opening the email attachment in the browser, you get this kind of report:

[Image: interactive Google Charts pie chart of active users per database]

In fact, the above is not only for database alerts. It can be used for application reports when you need a level of customization that you cannot achieve using standard methods like SSRS.

If you need more information on how to customize the charts, you can try the following:

Use the above examples as a starting point for your procedures and I’d be more than happy to hear of your implementations of the above techniques.

Yours,

Maria

SQL Server 2014 In-Memory Technologies: Blog Series Introduction

At the SQL PASS conference last November, we announced the In-memory OLTP (project code-named Hekaton) database technology built into the next release of SQL Server. Microsoft Technical Fellow Dave Campbell’s blog provides a broad overview of the motivation and design principles behind the project.

In a nutshell – In-memory OLTP is a new database engine optimized for memory resident data and OLTP workloads. In-memory OLTP is fully integrated into SQL Server – not a separate system. To take advantage of In-memory OLTP, a user defines a heavily accessed table as memory optimized. In-memory OLTP tables are fully transactional, durable, and accessed using T-SQL in the same way as regular SQL Server tables. A query can reference both In-memory OLTP tables and regular tables, and a transaction can update data in both types of tables. Expensive T-SQL stored procedures that reference only In-memory OLTP tables can be natively compiled into machine code for further performance improvements. The engine is designed for extremely high session concurrency for OLTP-type transactions driven from a highly scaled-out mid-tier. To achieve this, it uses latch-free data structures and a new optimistic, multi-version concurrency control technique. The end result is that a selective and incremental migration into In-memory OLTP can provide predictable, sub-millisecond latency and high throughput with linear scaling for database transactions. The actual performance gain depends on many factors, but we have typically seen 5X-20X gains in customer workloads.
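As a small illustration of that interop (the table names here are hypothetical), a regular T-SQL statement can join a memory-optimized table with a traditional disk-based table:

-- hypothetical sketch: dbo.UserSession is memory-optimized, dbo.Customers is a regular disk-based table
SELECT c.CustomerName, s.CreatedDate
FROM dbo.UserSession AS s
JOIN dbo.Customers AS c
    ON c.CustomerId = s.UserId;
GO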

In the SQL Server product group, we started investing many years ago in reinventing the architecture of the RDBMS engine to leverage modern hardware trends. This resulted in PowerPivot and the In-memory ColumnStore Index in SQL Server 2012, and In-memory OLTP is the new addition for OLTP workloads we are introducing in SQL Server 2014, together with the updatable clustered ColumnStore index and the (SSD) bufferpool extension. It has been a long and complex process to build this next-generation relational engine, especially with our explicit decision to seamlessly integrate it into the existing SQL Server instead of releasing a separate product, in the belief that this provides the best customer value and onboarding experience.

Now that we are releasing SQL2014 CTP1 as a public preview, it’s a great opportunity for you to get hands-on experience with this new technology, and we are eager to get your feedback and improve the product. In addition to BOL (Books Online) content, we will roll out a series of technical blogs on In-memory OLTP to help you understand and leverage this preview release effectively.

In the upcoming series of blogs, you will see the following in-depth topics on In-memory OLTP:

  • Getting started – to walk through a simple sample database application using In-memory OLTP so that you can start experimenting with the public CTP release.
  • Architecture – to understand at a high level how In-memory OLTP is designed and built into SQL Server, and how the different concepts like memory optimized tables, native compilation of SPs and query inter-op fit together under the hood.
  • Customer experiences so far – we have had many TAP customer engagements over the past two years, and their feedback has helped shape the product; we would like to share some of the learnings and customer experiences, such as typical application patterns and performance results.
  • Hardware guidance – it is apparent that memory size is a factor, but since most applications require full durability, In-memory OLTP still requires log and checkpointing IO, and with the much higher transactional throughput it can actually put even higher demand on the IO subsystem. We will also cover how Windows Azure VMs can be used with In-memory OLTP.
  • Application migration – how to get started with migrating to or building a new application with In-memory OLTP. You will see multiple blog posts covering the AMR tool, table and SP migrations, and pointers on how to work around some unsupported data types and T-SQL surface area, as well as the transactional model used. We will highlight the unique approach to SQL Server integration, which supports a partial database migration.
  • Managing In-memory OLTP – this will cover DBA considerations, and you will see multiple posts ranging from tooling support (SSMS) to more advanced topics such as how memory and storage are managed.
  • Limitations and what’s coming – explain what limitations exist in CTP1 and new capabilities expected to be coming in CTP2 and RTM, so that you can plan your roadmap with clarity.

In addition, we will also have blog coverage of what’s new with In-memory ColumnStore and an introduction to bufferpool extension.

SQL2014 CTP1 is available for download here.  Please stay tuned for more.