Monthly Archives: November 2012

The SQL Server Community Looks to Emerging Trends in BI and Database Technologies

At PASS Summit this year, Ted Kummert outlined his views on accelerating insights in the new world of data. He noted in his blog post that this is an incredible time for the industry, and that data has emerged as the new currency of business.

Given that it’s such an exciting time to be in the industry, we thought this would be an ideal opportunity to ask some of the SQL Server community members attending PASS about what issues from the past they are glad are behind them, and about what industry and technology trends they are looking forward to in the future.

The answers from community members on what future trends they are most interested in were extremely diverse, including topics such as big data, new data visualizations, in-memory technologies and cloud-based & hybrid architectures. Watch the full video below to hear what the SQL Server community had to say.

Incidentally, many of the people featured in the video have already worked on published SQL Server 2012 customer stories.  You can find a complete list of these case studies at

Crutchfield Turns to Microsoft and EMC to Help Transform SQL Server to the Private Cloud

When it comes to consumer electronics gear, audio and video enthusiasts rely on Crutchfield Corporation for excellent customer service and stellar product know-how. Crutchfield powers its information-based service with a wide range of tools for its website visitors, customers and internal customer advisors. To keep improving that customer service, Crutchfield develops most of its line-of-business applications in-house, many of them built on SQL Server. In recent years, an expanding set of applications led to rampant data and server growth in its data center.

To address this challenge, Crutchfield turned to EMC and Microsoft. Already a user of Microsoft technologies, Crutchfield was able to virtualize 75% of its EMC storage infrastructure using Microsoft Windows Server Hyper-V.

Using EMC and Microsoft technologies, Crutchfield was able to:

  • Save a total of $500,000 through virtualization with Windows Server Hyper-V
  • Bring applications to market 20 percent faster
  • Cut SQL Server disk read latency from 5–10 milliseconds to less than one millisecond
  • Improve storage utilization from 40 percent to 80 percent

In this video, watch Craig VanHuss, Crutchfield’s Information Systems Manager of Enterprise Storage, discuss how the company worked with EMC and Microsoft to transform its Microsoft applications, speeding performance and increasing efficiency.

Oracle Surprised by the Present

I’d like to clear up some confusion from a recent Oracle-sponsored blog. It seems we hit a nerve by announcing our planned In-Memory OLTP technology, aka Project ‘Hekaton’, to be shipped as part of the next major release of SQL Server. We’ve noticed the market has also been calling out Oracle on its use of the phrase ‘In-Memory’, so it wasn’t unexpected to see a rant on the topic from Bob Evans, Oracle’s SVP of Communications. [Editorial update: the Oracle rant was removed on 11/20; see Bing cached page 1 and page 2.]

Here on the Microsoft Server & Tools team that develops SQL Server, we’re working towards shipping products in a way that delivers maximum benefits to the customer. We don’t want to have dozens of add-ons to do something the product, in this case the database, should just do. In-Memory OLTP, aka ‘Hekaton’, is just one example of this.

It’s worth mentioning that we’ve been in the In-memory game for a couple of years now. We shipped the xVelocity Analytics Engine in SQL Server 2012 Analysis Services, and the xVelocity Columnstore index as part of SQL Server 2012. We’ve shown a 100x reduction in query processing times with this technology, and scan rates of 20 billion rows per second on industry-standard hardware, not some overpriced appliance. In 2010, we shipped the xVelocity in-memory engine as part of PowerPivot, allowing users to easily manipulate millions of rows of data in Excel on their desktops. Today, over 1.5 million customers are using Microsoft’s In-memory technology to accelerate their business. This is before ‘Hekaton’ even enters the conversation.
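For readers who haven’t yet tried it, the columnstore index in SQL Server 2012 is created with ordinary DDL. A minimal sketch, using a hypothetical fact table (note that in SQL Server 2012 the columnstore index is nonclustered, and the base table is read-only while the index exists):

```sql
-- Hypothetical fact table for illustration.
CREATE TABLE dbo.FactSales (
    SaleDate  DATE  NOT NULL,
    StoreId   INT   NOT NULL,
    ProductId INT   NOT NULL,
    Amount    MONEY NOT NULL
);

-- The xVelocity columnstore index: column-oriented, compressed storage
-- that enables batch-mode processing for large scans.
CREATE NONCLUSTERED COLUMNSTORE INDEX ix_FactSales_cs
ON dbo.FactSales (SaleDate, StoreId, ProductId, Amount);

-- A typical analytic query that benefits from the columnstore:
SELECT StoreId, SUM(Amount) AS TotalSales
FROM dbo.FactSales
GROUP BY StoreId;
```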

It was great to see Doug from InformationWeek also respond to Bob at Oracle, and highlight that in fact Oracle doesn’t yet ship In-Memory database technology in its Exadata appliances. Instead, Oracle requires customers to purchase yet another appliance, Exalytics, to make In-Memory happen.

We’re also realists here at Microsoft, and we know that customers want choices for their technology deployments. So we build our products that way: flexible, open to multiple deployment options, and cloud-ready. For those of you who have dealt with Oracle lately, I’m going to make my own prediction here: ask them to solve a problem for you, and the solution is going to be Exadata. Am I right? And as Doug points out in his first InformationWeek article, Oracle’s approach to In-memory in Exadata is “cache-centric”, in contrast to which “Hekaton will deliver true in-memory performance”.

So I have a challenge for Oracle, since our customers are increasingly looking to In-Memory technologies to accelerate their business: why don’t you stop shipping TimesTen as a separate product and simply build the technology into the next version of your flagship database? That’s what we’re going to do.

This shouldn’t be construed as a “knee-jerk” reaction to anything Oracle did. We’ve already got customers running ‘Hekaton’ today, including online gaming company Bwin, who have seen a 10x gain in performance just by enabling ‘Hekaton’ for an existing SQL Server application. As Rick Kutschera, IT Solutions Engineer at Bwin puts it, “If you know SQL Server, you know Hekaton”. This is what we mean by “built in”. Not bad for a “vaporware” project we just “invented”.

As for academic references, we’re glad to see that Oracle is reading from the Microsoft Research Database Group. But crowing triumphantly that there is “no mention of papers dealing with in-memory databases” [your emphasis] does not serve you well. A couple of suggestions for Oracle: switch to Bing, and how about this VLDB paper as a starting point.

Ultimately, it’s customers who will choose from among the multiple competing In-memory visions on offer. And given that we as enterprise vendors tend to share our customers, we would do well to spend more time listening to what they’re saying, helping them solve their problems, and less time firing off blog posts filled with ill-informed and self-serving conclusions.

Clearly, Oracle is fighting its own fight. An Exadata in every data center is not far off from Bill’s dream of a “computer on every desk.” But, as with Bill’s vision, the world is changing. There will always be a need for a computer on a desk or a big box in a data center, but now there is so much more to enterprise technology. Cloud, mobility, virtualization, and data everywhere. The question is, how can a company called “Oracle” be surprised by the trends we see developing all around us?

— Nick King, Senior Marketing Manager, Server & Tools

Watch to Win on November 28th! Enter the ‘Big Data Webcast’ Challenge

Mark your calendars for November 28th to get an inside track on how to make smarter business decisions. Join us for the new webcast “Driving Smarter Decisions with Microsoft Big Data”, presented by Mike Flasko, Principal Program Manager at Microsoft, and IDG Enterprise. Watch the webcast on November 28th and you can earn a chance to win one of three Executive Gift Packs (includes a SQL Server branded jacket, a SQL Server branded laptop case and a non-branded USB hub) through our Sweepstakes Drawing, or one of three Xbox/Kinect bundles by participating in our Skills Contest.

To enter the Sweepstakes portion of the Microsoft Big Data Webcast Challenge, you must:

  • Watch the Microsoft “Big Data” Webcast in its entirety on November 28th between 6 am and 5 pm PT. Limit one entry per person.

To enter the Skills contest portion of the Microsoft Big Data Webcast Challenge, you must:

  • Log in to your Twitter account.  If you do not have a Twitter account, you can register for a free account by visiting
  • Follow @SQLServer on Twitter to be eligible.
  • Watch the Big Data Webcast on November 28th between 6 am and 5 pm PT.
  • During the course of the day, three questions relating to the Big Data Webcast will be posted via @SQLServer on Twitter. Reply with the correct answer to @SQLServer and include the hashtag #bigdatawebcast in your reply.
  • You may only answer each question one time. If you submit more than one answer to a question, all of your responses (including your first) will be disqualified.

There is a limit of one Challenge prize per person. The six winners of the Big Data Webcast Challenge will be announced at 5 pm PT on November 30th via @SQLServer on Twitter.  This Challenge is open to all eligible participants worldwide.  If you are unable or choose not to accept the prize, the prize will be awarded to an alternate winner. See full contest rules below.



By participating in the “Big Data Webcast Challenge” (the “Challenge”), you understand that these Official Rules are binding and that the decisions of Microsoft Corporation (the “Sponsor”, who may also be referred to as “Microsoft”, “we”, “us”, or “our”) are final and binding on all matters pertaining to this Challenge. The Challenge includes a skills contest and a sweepstakes drawing, as described more fully below.

It is your responsibility to review and understand your employer’s policies regarding your eligibility to participate in trade promotions such as this one. If you are participating in violation of your employer’s policies, you may be disqualified from entering or receiving prizes.  Microsoft disclaims any and all liability or responsibility for disputes arising between employees and their employers related to this matter. Prizes will only be awarded in compliance with the employer’s policies.

ELIGIBILITY: You are eligible to enter this Challenge if you meet the following requirements at time of entry:

  • You are an IT Professional or a developing IT Professional and you are 18 years of age or older; and
  • You are NOT a resident of any of the following countries: Cuba, Iran, North Korea, Sudan, or Syria.
    • PLEASE NOTE: U.S. export regulations prohibit the export of goods and services to Cuba, Iran, North Korea, Sudan and Syria. Therefore, residents of these countries/regions are not eligible to participate; and
  • You are NOT an employee of Microsoft Corporation or an employee of a Microsoft subsidiary; and
  • You are NOT involved in any part of the administration and execution of this Challenge; and
  • You are NOT an immediate family (parent, sibling, spouse, child) or household member of a Microsoft employee, an employee of a Microsoft subsidiary, or a person involved in any part of the administration and execution of this Challenge.

ENTRY PERIOD: The Challenge begins at 6:00 a.m. Pacific Time (PT) on November 28, 2012, and ends at 5:00 p.m. PT on November 28, 2012 (“Entry Period”).


Sweepstakes Drawing:

To receive one entry into the sweepstakes drawing, watch the Microsoft “Big Data” Webcast on November 28th between 6 am PT and 5 pm PT. For finishing the webcast in its entirety, you will receive one entry into the Sweepstakes. Limit one entry per person.

Skills Contest:

To enter the skills contest, you must be logged in to your Twitter account, and you must be a follower of @SQLServer to be eligible. If you do not have a Twitter account, you can register for a free account. Then log in to the Microsoft “Big Data” Webcast on November 28 between 6 am and 5 pm PT. During the course of the day, three questions relating to the Microsoft “Big Data” Webcast will be posted via @SQLServer on Twitter. Reply to @SQLServer with the correct answer and include the hashtag #bigdatawebcast. Limit one entry per person per question.

We are not responsible for entries that we do not receive for any reason. We reserve the right to modify the Webcast schedule for any reason.


Sweepstakes: On or around November 30, we, or a company acting under our authorization, will randomly select three winners from among all eligible sweepstakes entries received to win a prize package consisting of the following items: a SQL Server branded hoodie, a branded laptop bag, and a non-branded USB hub. Approximate retail value (ARV): $150.

Contest: The first eligible entrant to reply to @SQLServer on Twitter with the correct answer to each question will win an Xbox 360 4GB with Kinect. ARV: $299 each. Three contest prizes will be awarded, one for each question. The Xbox/Kinect bundles are US versions.

Limit one Challenge prize per person. If you are a potential winner, we will notify you through your Twitter account, e-mail address, or the telephone number provided when you registered within 3 business days following the random drawing. If the notification that we send is returned as undeliverable, or you are otherwise unreachable for any reason, we may award the prize to an alternate, randomly selected winner. Winners will have seven (7) days to reply to the notification; otherwise an alternate, randomly selected winner will be determined.

Your odds of winning this Challenge depend on the number of eligible entries received.

If you are a winner:

  • You may not exchange your prize for cash; and
  • If you do not wish to or cannot accept the prize, it will be forfeited and we may, at our discretion, award it to a runner-up. We may, however, award a prize of comparable or greater value, at our discretion; and
  • You are responsible for all federal, state, provincial and local taxes (including income and withholding taxes) as well as any other costs and expenses associated with accepting and/or using the prize that are not included above as being part of the prize award; and
  • You understand you are accepting the prize “as is” with no warranty or guarantee, either express or implied by us; and
  • You understand that all prize details shall be determined by us.

WINNER NOTIFICATION: If you are determined to be the winner:

  • The prize will be awarded to you and you shall ensure it is used and/or distributed in accordance with your company’s policies (the Promotion Parties (as hereinafter defined) are not responsible for the re-distribution of prizes within your company); and
  • You will be notified by phone or by U.S. mail, overnight mail, or e-mail; and
  • You may be required to sign and return an Affidavit of Eligibility and Liability/Publicity release, unless prohibited by law, within ten (10) days of date of prize notification.

If you are the winner and you: (i) do not reply to such notification or the notification is undeliverable; (ii) do not return the Affidavit of Eligibility and Liability/Publicity release completed and signed within ten (10) days of date of prize notification; or, if you (iii) are not otherwise in compliance with these Official Rules, you will be disqualified and, we may, at our discretion, notify a runner-up. If you are a winner and accept the prize, you agree that we and our designees shall have the right to use your name, city and state of residence in any and all media now or hereafter devised worldwide in perpetuity, without additional compensation, notification or permission, unless prohibited by law.

LIMITATIONS OF LIABILITY: The Promotion Parties are not responsible for any liability, cost or injury incurred by Participants arising out of or in connection with the Challenge, including, without limitation, the following:

  • Lost, late, incomplete, inaccurate, stolen, fraudulent, misdirected, undelivered, interrupted, damaged, delayed or postage-due reports, entries, mail or other information;
  • Any incorrect or inaccurate information or error or defect of any kind regardless of who it is caused by, or the means by which it occurred; or
  • Equipment, software, network or systems that fail, have viruses or other problems, are breached or that cause injury or damage to participants or their property.

By entering this Challenge, you agree to, and hereby release and hold harmless Microsoft, its parent company, affiliates, subsidiaries, and advertising or promotion agencies, and anyone working directly on this program, product and promotion, and all of their respective officers, directors, employees and representatives (which for the purpose of these Official Rules will be referred to together as the “Promotion Parties”) from any and all liability or any injuries, loss or damage of any kind arising from or in connection with this Challenge or acceptance and use of any prize. No responsibility is assumed by Microsoft for lost, late, or misdirected entries or any computer, online telephone, or technical malfunctions that may occur. All entries become the property of Microsoft and will not be returned.

GENERAL CONDITIONS: This Challenge is governed by Washington law. You agree that the jurisdiction and venue for the handling of any disputes or actions arising out of this Challenge shall be in the courts of the State of Washington.

If the Challenge is not capable of running as planned for any reason, including, without limitation, the reasons set forth above, we reserve the right at our sole discretion to cancel, terminate, modify or suspend the Challenge. If a solution cannot be found to restore the integrity of the Challenge, we may, at our sole discretion, determine the winners of this Challenge using all non-suspect, eligible customer redemption reports and/or entries received (as applicable) before we had to cancel, terminate, modify or suspend the Challenge.

We may disqualify you from participating in the Challenge, or winning a prize (and void your participation in the other promotions we may offer) if, in our sole discretion, we determine you are attempting to undermine the legitimate operation of the Challenge by cheating, deception or other unfair playing practices, or intending to annoy, abuse, threaten or harass us, any other entrant or our representatives or if you are otherwise not in compliance with the terms of these Official Rules. CAUTION: ANY ATTEMPT BY YOU OR ANY OTHER INDIVIDUAL TO DELIBERATELY DAMAGE ANY WEBSITE OR UNDERMINE THE LEGITIMATE OPERATION OF THE CHALLENGE IS A VIOLATION OF CRIMINAL AND CIVIL LAWS AND SHOULD SUCH AN ATTEMPT BE MADE, WE RESERVE THE RIGHT TO SEEK DAMAGES FROM YOU TO THE FULLEST EXTENT PERMITTED BY LAW.


To find out if you won, requests can be emailed to for 30 days following the drawing.

SPONSOR: This Challenge is sponsored by Microsoft Corporation, One Microsoft Way, Redmond, WA 98052.

Updates: AdExplorer v1.44, Contig v1.7, Coreinfo v3.2, Procdump v5.1

AdExplorer v1.44: This release fixes a bug that caused AdExplorer to crash when it encountered corrupted extended rights schemas.

Contig v1.7: Contig is a command-line file defragmentation and fragmentation analysis utility. v1.7 has more detailed fragmentation analysis reporting, fixes a bug that prevented the creation of contiguous files larger than 8GB, and adds support for setting the valid data length on files to avoid zero-fill overhead.

Coreinfo v3.2: Coreinfo, a command-line utility that dumps processor topology and feature support, now reports the presence of many additional features, including SMAP, RDSEED, BMI1, ADX, HLE, RTM, and INVPCID.

Procdump v5.1: This major update to Procdump, a command-line utility for creating process crash dump files based on triggers or on-demand, adds support for Silverlight applications and the ability to register Procdump as the just-in-time (JIT) debugger for more advanced scenarios.

PASS Summit 2012 Recap & the Milestones of SQL Server 2012

Last week marked the completion of a great week at PASS Summit 2012, the world’s largest technical training conference for SQL Server professionals and BI experts alike. During this year’s 3-day conference, nearly 4,000 attendees heard firsthand about the great advances being made toward managing big data. Over the course of two keynote speeches by Microsoft Corporate Vice Presidents Ted Kummert (Data Platform Group) and Quentin Clark (SQL Program Management), Microsoft announced the following:

  • Project codename “Hekaton,” a new in-memory technology that will be built directly into the data platform, will ship in the next major version of SQL Server.  Currently in private technology preview with a small set of customers, Hekaton completes the company’s portfolio of in-memory technologies across analytics, transactions, streaming and caching workloads, enabling business acceleration by shrinking the time from raw data to insights.
  • SQL Server 2012 Parallel Data Warehouse (PDW), the next version of Microsoft’s enterprise-class appliance, will be available during the first half of 2013.  SQL Server 2012 PDW includes PolyBase, a fundamental breakthrough in data processing that will enable queries across relational data and non-relational Hadoop data.
  • SQL Server 2012 SP1, which supports Office 2013 by offering business users enhanced new capabilities for self-service business intelligence using familiar tools such as Excel and SharePoint, is now available for download here.

What’s more, on the final day of PASS Summit 2012, attendees were treated to the presentation, “Big Data Meets SQL Server 2012” by Microsoft Technical Fellow David DeWitt. 

All the while, conference participants attended a wide variety of technical sessions presented by industry experts, in addition to a host of other programs. From on-site certification testing to hands-on labs, attendees were able to boost their technical skills using these resources, as well as work through technical issues with top Microsoft Customer Service and Support (CSS) engineers and get architectural guidance from the SQL Server, Business Intelligence and Azure Customer Advisory Teams (CAT). Of course, the learning didn’t stop there; attendees were invited to the new “I Made That!” Developer Chalk Talks, which featured 30-minute casual talks with the Microsoft developers who worked on specific features and functionalities of SQL Server 2012. The topics appealed to many, ranging from AlwaysOn to Hekaton. You can see more great photos from PASS Summit 2012 on the SQL Server Facebook page or access the video interviews with Dave Campbell, Quentin Clark, and David DeWitt available at the SQL Server virtual press room.

And so, as we close on another year of PASS Summit, it’s the perfect time to look back and see how far we’ve come since the launch of SQL Server 2012.  Join us below, as we take a celebratory look at the milestones we’ve hit along the way, and let’s look together toward the bright future ahead!


CROSSMARK Uses SQL Server 2008 R2 Parallel Data Warehouse to Quickly Deliver Business Insights

With the growth of the consumer goods industry, sales and marketing campaigns have created large and complex databases that are hard to sift through without the right tools. As retailers approach the busy holiday shopping season, they need to have insights into the effectiveness of their campaigns. Retailers need to know what market trends are affecting their customers to maximize the reach of these campaigns and they need to be able to sort through all of this data quickly to find useful and actionable insights.

Every now and then, we like to highlight how our customers are using Microsoft’s database platform solutions to solve these types of needs in real time. One such customer is CROSSMARK, a provider of sales and marketing services for manufacturers and retail companies, which recently launched a new self-service data portal powered by SQL Server 2008 R2 Parallel Data Warehouse (PDW) to bring this data and these insights to its customers. SQL Server PDW’s on-demand data access will allow CROSSMARK’s customers to leverage shopper insights and data to inform strategies and tactics that create more effective sales and marketing campaigns to boost sales and profitability.

Before implementing SQL Server PDW, CROSSMARK’s legacy platform had a bottleneck: its data reports weren’t scalable, forcing employees to spend valuable time on data reporting instead of working with customers. Now, with SQL Server PDW, CROSSMARK can easily scale its resources to handle the millions of in-store activities processed each year, allowing its employees to spend more time with customers and less time with the data.

CROSSMARK is also on-track to implement SQL Server BI tools including Power View and PowerPivot to provide more business intelligence tools to its customers.

To read more about CROSSMARK, take a look at this Customer Spotlight feature on News Center.

Seamless insights on structured and unstructured data with SQL Server 2012 Parallel Data Warehouse

In the fast evolving new world of Big Data, you are being asked to answer a new set of questions that require immediate responses on data that has changed in volume, variety, complexity and velocity. A modern data platform must be able to answer these new questions without costing IT millions of dollars to deploy complex and time consuming systems.

On November 7, we unveiled details for SQL Server 2012 Parallel Data Warehouse (PDW), our scale-out Massively Parallel Processing (MPP) data warehouse appliance, which has evolved to fully embrace this new world. SQL Server 2012 PDW is built for big data and will provide a fundamental breakthrough in data processing using familiar tools to do seamless analysis on relational and Hadoop data at the lowest total cost of ownership.

  • Built for Big Data: SQL Server 2012 PDW is powered by PolyBase, a breakthrough in data processing that enables integrated queries across Hadoop and relational data. Without manual intervention, the PolyBase query processor can accept a standard SQL query and join tables from a relational source with data from a Hadoop source, returning a combined result seamlessly to the user. Going a step further, integration with Microsoft’s business intelligence tools allows users to join structured and unstructured data together in familiar tools like Excel to answer questions and make key business decisions quickly.
  • Next-generation Performance at Scale: The primary storage engine has been upgraded to a new updateable version of the xVelocity columnstore, giving users in-memory performance (up to 50x faster) on datasets that scale linearly from small deployments all the way up to 5 petabytes of structured data.
  • Engineered for Optimal Value: In SQL Server 2012 PDW, we optimized the hardware specifications required of an appliance through software innovations, delivering significantly greater value at roughly 2.5x lower cost per terabyte. Through features delivered in Windows Server 2012, SQL Server 2012 PDW has built-in performance, reliability, and scale for storage using economical high-density disks. Further, Windows Server 2012 Hyper-V virtualizes and streamlines an entire server rack of control functions down to a few nodes. Finally, the xVelocity columnstore provides both compression and the potential to eliminate the rowstore copy, reducing storage usage by up to 70%. As a result of these innovations, SQL Server 2012 PDW has a price per terabyte that is significantly lower than all offers in the market today.
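To make the PolyBase idea concrete, here is a rough sketch of what such a query could look like. The exact PDW DDL had not been published at the time of writing, so the syntax below is illustrative only, and the table names, columns, and Hadoop path are hypothetical:

```sql
-- Hypothetical: expose a file set in Hadoop as an external table.
-- (Actual PDW syntax may differ; this sketches the concept.)
CREATE EXTERNAL TABLE dbo.WebClicks (
    CustomerId INT,
    Url        NVARCHAR(400),
    ClickTime  DATETIME2
)
WITH (LOCATION = '/logs/clicks/');   -- path in the Hadoop cluster

-- A single standard SQL query joins relational and Hadoop data;
-- the PolyBase query processor handles the split execution.
SELECT c.CustomerName, COUNT(*) AS Clicks
FROM dbo.Customers AS c        -- relational table in PDW
JOIN dbo.WebClicks AS w        -- Hadoop-backed external table
    ON c.CustomerId = w.CustomerId
GROUP BY c.CustomerName;
```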

With SQL Server 2008 R2 Parallel Data Warehouse, Microsoft already demonstrated high performance at scale: customers like HyVee improved their performance 100-fold by moving from SQL Server 2008 R2 to SQL Server 2008 R2 Parallel Data Warehouse. SQL Server 2012 Parallel Data Warehouse takes a big leap forward in performance, scale, and the ability to do big data analysis while lowering costs. For the first time, customers of all shapes and sizes, from the most modest to the highest data capacity requirements, can get a data warehouse appliance within their reach.

We are very excited about SQL Server 2012 PDW, which will be released broadly in the first half of 2013, and we invite you to learn more through the following resources:

  • Watch the latest PASS Summit 2012 Keynote or sessions here
  • Microsoft Official Blog Post on PASS Summit 2012, authored by Ted Kummert here
  • Read customer examples of SQL Server 2008 R2 PDW (HyVee)
  • Visit HP’s Enterprise Data Warehouse for SQL Server 2008 R2 Parallel Data Warehouse site
  • Find out more about Dell’s SQL Server 2008 R2 Parallel Data Warehouse here

Breakthrough performance with in-memory technologies

In a blog post earlier this year on “The coming database in-memory tipping point”, I mentioned that Microsoft was working on several in-memory database technologies. At the SQL PASS conference this week, Microsoft unveiled a new in-memory database capability, code named “Hekaton”, which is slated to be released with the next major version of SQL Server. Hekaton dramatically improves the throughput and latency of SQL Server’s transaction processing (TP) capabilities. It is designed to meet the requirements of the most demanding TP applications, and we have worked closely with a number of companies to prove these gains. Hekaton’s technology adoption partners include financial services, online gaming and other companies with extremely demanding TP requirements. What is most impressive about Hekaton is that it achieves breakthrough improvements in TP capabilities without requiring a separate data management product or a new programming model. It’s still SQL Server!

As I mentioned in the “tipping point” post, much of the energy around in-memory data management systems thus far has centered on columnar storage and analytical workloads. As the previous blog post mentions, Microsoft already ships this form of technology in our xVelocity analytics engine and xVelocity columnstore index, and the columnstore index will be updated in SQL Server 2012 Parallel Data Warehouse (PDW v2) to support updatable clustered columnar indexes. Hekaton, in contrast, is a row-based technology squarely focused on transaction processing (TP) workloads. Note that these two approaches are not mutually exclusive: pairing Hekaton with SQL Server’s existing xVelocity columnstore index and xVelocity analytics engine will make for a powerful combination.

The fact that Hekaton and the xVelocity columnstore index are built in to SQL Server, rather than shipped as a separate data engine, is a conscious design choice. Other vendors are either introducing separate in-memory optimized caches or building a unification layer over a set of technologies and introducing it as a completely new product. This adds complexity, forcing customers to deploy and manage a completely new product or, worse yet, to manage both a “memory-optimized” product for the hot data and a “storage-optimized” product for the application data that is not cost-effective to keep primarily in memory.

Hekaton is designed around four architectural principles:

1) Optimize for main memory data access: Storage-optimized engines (such as SQL Server’s current OLTP engine) retain hot data in a main-memory buffer pool based upon access frequency. The data access and modification capabilities, however, are built around the assumption that data may be paged in or paged out to disk at any point. This assumption necessitates layers of indirection in buffer pools, extra code for sophisticated storage allocation and defragmentation, and logging of every minute operation that could affect storage. With Hekaton, you place the tables used in the extreme TP portion of an application in memory-optimized structures. The remaining application tables, such as reference data details or historical data, are left in traditional storage-optimized structures. This approach lets you memory-optimize hotspots without having to manage multiple data engines.

Hekaton’s main-memory structures do away with the overhead and indirection of the storage-optimized design while still providing the full ACID properties expected of a database system. For example, durability in Hekaton is achieved by streamlined logging and checkpointing that uses only efficient sequential IO.
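As a concrete illustration of where that durable, sequentially written data would live, a database can be given a filegroup dedicated to memory-optimized data. This is a hedged sketch only: the database and path names are hypothetical, and the syntax is subject to change before release.

```sql
-- Hypothetical setup: create a container for memory-optimized data
-- (names and paths are illustrative; syntax may change).
ALTER DATABASE SalesDB
    ADD FILEGROUP SalesDB_InMemory CONTAINS MEMORY_OPTIMIZED_DATA;

ALTER DATABASE SalesDB
    ADD FILE (NAME = 'SalesDB_InMemory_Container',
              FILENAME = 'C:\Data\SalesDB_InMemory')
    TO FILEGROUP SalesDB_InMemory;
```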

2) Accelerate business logic processing: Given that the free ride on CPU clock rate is over, Hekaton must be more efficient in how it utilizes each core. Today, SQL Server’s query processor compiles queries and stored procedures into a set of data structures that are then evaluated by an interpreter. With Hekaton, queries and procedural logic in T-SQL stored procedures are compiled directly into machine code, with aggressive optimizations applied at compilation time. This allows a stored procedure to execute at the speed of native code.

3) Provide frictionless scale-up: It’s common nowadays to find 16 to 32 logical cores even on a two-socket server. Storage-optimized engines rely on mechanisms such as locks and latches for concurrency control, and these mechanisms often become significant points of contention as core counts grow. Hekaton implements a highly scalable concurrency control mechanism and uses a series of lock-free data structures to eliminate traditional locks and latches while still guaranteeing the transactional semantics that ensure data consistency.

4) Built into SQL Server: As I mentioned earlier, Hekaton is a new capability of SQL Server. This lays the foundation for a powerful customer scenario that our customer testing has proven out. Many existing TP systems have certain transactions or algorithms that benefit from Hekaton’s extreme-TP capabilities: for example, the matching algorithm in financial trading, resource assignment or scheduling in manufacturing, or matchmaking in gaming scenarios. Hekaton enables optimizing these aspects of a TP system for in-memory processing while the cooler data and processing continue to be handled by the rest of SQL Server.

To make it easy to get started, we’ve built an analysis tool you can run to identify the hot tables and stored procedures in an existing transactional database application. As a first step, you can migrate the hot tables to Hekaton as in-memory tables. Doing this requires only a few T-SQL statements[2]:
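A hedged sketch of what such a statement might look like, declaring a memory-optimized table with a hash index (the table, column, and bucket-count values here are illustrative, and per the footnote the exact syntax is subject to change):

```sql
-- Hypothetical example of declaring a memory-optimized table;
-- names are illustrative and the syntax may change before release.
CREATE TABLE dbo.OrderEntries (
    OrderID     INT       NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1048576),
    CustomerID  INT       NOT NULL,
    OrderDate   DATETIME2 NOT NULL,
    Amount      MONEY     NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```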


While Hekaton’s memory-optimized tables must fit entirely in main memory, the database as a whole need not. These in-memory tables can be used in queries just like any regular table, while already providing optimized, contention-free data operations at this stage.
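For instance, a hypothetical query might join a memory-optimized table with a traditional disk-based one using ordinary T-SQL (the table and column names continue the illustrative example above):

```sql
-- In-memory tables participate in ordinary T-SQL
-- alongside traditional storage-optimized tables.
SELECT c.CustomerName, SUM(o.Amount) AS TotalAmount
FROM dbo.OrderEntries AS o          -- memory-optimized (hot)
JOIN dbo.Customers    AS c          -- traditional disk-based table
  ON c.CustomerID = o.CustomerID
GROUP BY c.CustomerName;
```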

After migrating to optimized in-memory storage, stored procedures that operate on these tables can be transformed into natively compiled stored procedures, dramatically increasing the processing speed of in-database logic. Recompiling these stored procedures is, again, done through T-SQL, as shown below:
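A hedged sketch of what declaring a natively compiled procedure might look like (the procedure and parameter names are illustrative, and the syntax is subject to change):

```sql
-- Hypothetical natively compiled stored procedure;
-- names are illustrative and the syntax may change before release.
CREATE PROCEDURE dbo.InsertOrderEntry
    @OrderID    INT,
    @CustomerID INT,
    @OrderDate  DATETIME2,
    @Amount     MONEY
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH
    (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')

    INSERT INTO dbo.OrderEntries (OrderID, CustomerID, OrderDate, Amount)
    VALUES (@OrderID, @CustomerID, @OrderDate, @Amount);
END;
```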


What performance gain can you expect from Hekaton? Customer testing has demonstrated throughput gains of 5X to 50X on the same hardware, delivering extreme TP performance on mid-range servers. The actual speedup depends on multiple factors, such as how much data processing can be migrated into Hekaton and directly sped up, and how much cross-transaction contention is removed as a result of that speedup and other Hekaton optimizations such as lock-free data structures.

Hekaton is now in private technology preview with a small set of customers. Keep following our product blogs for updates and a future public technology preview.

Dave Campbell
Technical Fellow
Microsoft SQL Server

[1] Hekaton comes from the Greek word ἑκατόν, meaning “hundred”. Our design goal for the original Hekaton proof-of-concept prototype was to achieve a 100x speedup for certain TP operations.

[2] The syntax for these operations will likely change. The examples demonstrate how easy it will be to take advantage of Hekaton’s capabilities.