SQL Server 2014 In-Memory Gives Dell the Boost it Needed to Turn Time into Money

There’s an old adage: time is money. Technology and the internet have changed the value of time and created a very speed-oriented culture. The pace at which you as a business deliver information, react to customers, enable online purchases, etc. directly correlates with your revenue. For example, reaction times and processing speeds can mean the difference between making a sale and a consumer losing interest. This is where the right data platform comes into play.

If you attended PASS Summit or watched the keynotes online, you saw us speak about Dell and the success they’ve had in using technology performance to drive their business. For Dell, providing its customers with the best possible online experience is paramount. That meant boosting its website performance so that each day its 10,000 concurrent shoppers (this number jumps to nearly 1 million concurrent shoppers during the holiday season) could enjoy faster, frustration-free shopping experiences. For Dell, time literally means money.

With a very specific need and goal in mind, Dell evaluated numerous in-memory tools and databases, but ultimately selected SQL Server 2014.

Dell turned to Microsoft’s in-memory OLTP (online transaction processing) technology because of its unique lock- and latch-free table architecture, which removed database contention while still guaranteeing 100 percent durability. By removing database contention, Dell could utilize far more parallel processors to not only improve transactional speed but also significantly increase the number of concurrent users. And choosing SQL Server 2014, with in-memory built in, meant Dell did not have to learn new APIs or tools: its developers could use familiar SQL Server tools and T-SQL to easily implement the new in-memory technologies.
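As a minimal sketch of what that looks like (the table and columns here are hypothetical, not Dell’s actual schema), a memory-optimized table in SQL Server 2014 is declared with ordinary T-SQL; the database must first have a MEMORY_OPTIMIZED_DATA filegroup:

CREATE TABLE [dbo].[ShoppingCart]
(
  [CartId] INT NOT NULL
    PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
  [ShopperId] INT NOT NULL,
  [CreatedUtc] DATETIME2 NOT NULL
)
WITH
(
  MEMORY_OPTIMIZED = ON,            -- rows live in memory, lock- and latch-free
  DURABILITY = SCHEMA_AND_DATA      -- fully durable; data survives a restart
)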

All of this meant Dell was able to double its application speeds and process transactions 9x faster. Like Dell, you too can take advantage of the workload-optimized in-memory technologies built into the SQL Server 2014 data platform for faster transactions, faster queries, and faster analytics. And you can do it all without expensive add-ons, using your existing hardware and existing development skills.

Learn more about SQL Server 2014 in-memory technology

Dell Doubles Application Speeds, Processes Transactions 9X Faster with In-Memory OLTP

As a global IT leader, Dell manufactures some of the world’s most innovative hardware and software solutions. It also manages one of the most successful e-commerce sites. In 2013, the company facilitated billions in online sales. On a typical day, 10,000 people are browsing Dell.com at the same time. During peak online shopping periods, the number of concurrent shoppers can increase 100 times, to as many as one million people.

To help facilitate fast, frustration-free shopping despite traffic spikes, Dell has distributed the website’s online transaction processing (OLTP) load between 2,000 virtual machines, which include 27 mission-critical databases that run on Microsoft SQL Server 2012 Enterprise software and the Windows Server 2012 operating system. These databases, along with hundreds of web applications, are supported by Dell PowerEdge servers, Dell Compellent storage, and Dell Networking switches.

When Dell learned about SQL Server 2014 and its in-memory capabilities, the company immediately signed up to be an early adopter. Not only are memory-optimized tables in SQL Server 2014 lock-free—making it possible for numerous applications to simultaneously access and write to the same database rows—but the solution is also based on technologies that IT staff already know how to use.

Initially, engineers set up the database tables to be fully durable, meaning that every committed change is written synchronously to the transaction log and survives a restart. However, developers can also opt into delayed durability, which means that log writes are batched and hardened slightly after commit to minimize any impact on performance.
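A sketch of the two options (database, table, and values here are hypothetical): full durability is declared on the table itself, while delayed durability is enabled at the database level and then requested per transaction:

-- Fully durable: committed changes are hardened to the log at commit
CREATE TABLE [dbo].[OrderStatus]
(
  [OrderId] INT NOT NULL
    PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 500000),
  [StatusCode] TINYINT NOT NULL
)
WITH
(
  MEMORY_OPTIMIZED = ON,
  DURABILITY = SCHEMA_AND_DATA
)

-- Delayed durability: allow it for the database, then ask for it at commit
ALTER DATABASE [SalesDb] SET DELAYED_DURABILITY = ALLOWED

BEGIN TRANSACTION
UPDATE [dbo].[OrderStatus] WITH (SNAPSHOT)   -- snapshot hint is required for
  SET [StatusCode] = 2                       -- memory-optimized tables inside
  WHERE [OrderId] = 42                       -- explicit transactions
COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON)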

By gaining the option to store tables in memory, Dell is achieving unprecedented OLTP speeds. “The performance increase we realize with In-Memory OLTP in SQL Server 2014 is astounding!” says Scott Hilleque, Design Architect at Dell. “After just a few hours of work, groups sped database performance by as much as nine times. And all aspects of our In-Memory OLTP experience have been seamless for our staff because it is so easy to adopt, and its implementation produces zero friction for architects, developers, database administrators, and operations staff.”

Although Dell is in the very early stages of adopting SQL Server 2014, IT workers are excited by the impact of In-Memory OLTP. The more the IT team can speed database performance, the faster web applications can get the information that they need to deliver a responsive and customized browsing experience for customers. Reinaldo Kibel, Database Strategist at Dell, summarizes: “In-Memory OLTP in SQL Server 2014 really signifies a new mindset in database development because with it, we no longer have to deal with the performance hits caused by database locks—and this is just one of the amazing benefits of this solution.”

You can read the full case study here and watch the video here.

Also, check out the website to learn more about SQL Server 2014 and start a free trial today.  

Architecture of the Microsoft Analytics Platform System

In today’s world of interconnected devices and broad access to more and more data, gleaning ambient insight from so many data sources has been made quite hard by the variety and speed with which data is delivered. Think about it for a minute: your servers continue to provide interesting data about the operations happening in your business, but now you also have data coming from the temperature sensors in the A/C units, the power supplies, and the networking equipment in the data center, which can be combined to show that spikes in temperature and traffic have a dramatic effect on the life of a server. This type of contextual data is growing to include larger and more detailed insights into the operations and management of your business. Looking to the future, Pew Research has released a report that predicts 50 billion connected devices by 2025. That is five devices for every person expected to be alive. With data coming from sources ranging from manufacturing equipment to jet airliners, from mobile phones to your bathroom scale, to things we haven’t even imagined yet, the question really becomes: how do you take advantage of all of these data sources to gain insight into the current and future trends of your business?

In April 2014, Microsoft announced the Analytics Platform System (APS) as Microsoft’s “Big Data in a Box” solution for addressing this question. APS is an appliance solution, with hardware and software purpose-built and pre-integrated to address the overwhelming variety of data while giving customers the opportunity to access this vast trove of data. The primary goal of APS is to enable the loading and querying of terabytes and even petabytes of data in a performant way, using a massively parallel processing version of Microsoft SQL Server (SQL Server PDW) and Microsoft’s Hadoop distribution, HDInsight, which is based on the Hortonworks Data Platform.

Basic Design

An APS solution consists of three basic components:

  1. The hardware – the servers, storage, networking and racks.
  2. The fabric – the base software layer for operations within the appliance.
  3. The workloads – the individual workload types offering structured and unstructured data warehousing.

The Hardware

Utilizing commodity servers, storage, drives, and networking devices from our three hardware partners (Dell, HP, and Quanta), Microsoft is able to offer a high-performance, scale-out data warehouse solution that can grow to very large data sets while providing redundancy of each component to ensure high availability. Starting with standard servers and JBOD (Just a Bunch Of Disks) storage arrays, APS can grow from a simple two-node-plus-storage solution to 60 nodes. At scale, that means a warehouse with 720 cores, 14 TB of RAM, 6 PB of raw storage, and ultra-high-speed networking using Ethernet and InfiniBand, while offering the lowest price per terabyte of any data warehouse appliance on the market (Value Prism Consulting).

Fabric

The fabric layer is built using technologies from the Microsoft portfolio that enable rock-solid reliability, management, and monitoring without having to learn anything new. Starting with Microsoft Windows Server 2012, the appliance builds a solid foundation for each workload by providing a virtual environment based on Hyper-V that also offers high availability via Failover Clustering, all managed by Active Directory. Combining this base technology with Cluster Shared Volumes (CSV) and Windows Storage Spaces, the appliance is able to offer a large and expandable base fabric for each of the workloads while reducing the cost of the appliance by not requiring specialized or proprietary hardware. Each of the components offers full redundancy to ensure high availability in failure cases.

Workloads

Building upon the fabric layer, the current release of APS offers two distinct workload types – structured data through SQL Server Parallel Data Warehouse (PDW) and unstructured data through HDInsight (Hadoop). These workloads can be mixed within a single appliance, offering customers the flexibility to tailor the appliance to the needs of their business.

SQL Server Parallel Data Warehouse is a massively parallel processing, shared-nothing, scale-out solution for Microsoft SQL Server that eliminates the need to ‘forklift’ additional very large and very expensive hardware into your datacenter as the volume of data flowing into your warehouse grows. Instead of having to expand from a large multi-processor and connected-storage system to a massive multi-processor and SAN-based solution, PDW uses the commodity hardware model with distributed execution to scale out to a wide footprint. This scale-wide model of execution has proven to be a very effective and economical way to grow a workload.

HDInsight is Microsoft’s offering of Hadoop for Windows, based on the Hortonworks Data Platform from Hortonworks. See the HDInsight portal for details on this technology. HDInsight is now offered as a workload on APS to allow for on-premises Hadoop that is optimized for data warehouse workloads. By offering HDInsight as a workload on the appliance, the burden of defining, constructing, and managing a Hadoop cluster is minimized. And by using PolyBase, Microsoft’s SQL Server-to-HDFS bridge technology, customers can not only manage and monitor Hadoop through tools they are familiar with, but also, for the first time, use Active Directory to manage security for the data stored within Hadoop – offering the same ease of user management offered in SQL Server.
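To give a feel for the PolyBase bridge, here is a hedged sketch in T-SQL (the table, HDFS path, and the pre-created data source and file format objects are all hypothetical, and the exact DDL options vary by APS version):

CREATE EXTERNAL TABLE [dbo].[WebClicks]
(
  [ClickTime] DATETIME2 NOT NULL,
  [Url] VARCHAR(200) NOT NULL,
  [UserIp] VARCHAR(45) NOT NULL
)
WITH
(
  LOCATION = '/data/webclicks/',     -- folder in HDFS
  DATA_SOURCE = HadoopRegion,        -- external data source for the cluster
  FILE_FORMAT = PipeDelimitedText    -- external file format definition
)

-- Hadoop-resident data can then be queried, or joined to PDW tables,
-- with ordinary T-SQL:
SELECT TOP 10 [Url], COUNT(*) AS [Clicks]
FROM [dbo].[WebClicks]
GROUP BY [Url]
ORDER BY [Clicks] DESC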

Massively Parallel Processing (MPP) in SQL Server

Now that we’ve laid the groundwork for APS, let’s dive into how we load and process data at such high performance and scale. The PDW region of APS is a scale-out version of SQL Server that enables parallel query execution across multiple nodes simultaneously. The effect is the ability to break what appears to be one very large operation into tasks that can be managed at a smaller scale. For example, a query against 100 billion rows in a SQL Server SMP environment would require processing all of the data in a single execution space. With MPP, the work is spread across many nodes, breaking the problem into smaller, more manageable tasks. In a four-node appliance, each node is only asked to process roughly 25 billion rows – a much quicker task.
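As a simple illustration (table and columns hypothetical), a query like the following against a hash-distributed table runs the same scan-and-aggregate step on every compute node at once, and only the small per-node partial results are brought together for the final answer:

SELECT [OrderDate], SUM([Amount]) AS [TotalSales]
FROM [dbo].[Orders]       -- hash-distributed across all compute nodes
GROUP BY [OrderDate]      -- each node aggregates only its own rows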

To accomplish such a feat, APS relies on a couple of key components to manage and move data within the appliance – a table distribution model and the Data Movement Service (DMS).

The first is the table distribution model, which allows a table either to be replicated to all nodes (used for smaller tables such as languages, countries, etc.) or to be distributed across the nodes (such as a large fact table for sales orders or web clicks). By replicating small tables to each node, the appliance is able to perform join operations very quickly on a single node without having to pull all of the data to the control node for processing. By distributing large tables across the appliance, each node can process a smaller set of data, returning only the relevant rows to the control node for aggregation.

To create a table in APS that is distributed across the appliance, the user simply needs to specify the key on which the table is distributed:

CREATE TABLE [dbo].[Orders]
(
  [OrderId] ...
)
WITH
(
  DISTRIBUTION = HASH([OrderId])
)

This allows the appliance to split the data, placing each incoming row onto the appropriate node in the appliance.
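The replicated case uses the same DDL shape; a small sketch (hypothetical lookup table) that places a full copy of the table on every node:

CREATE TABLE [dbo].[Country]
(
  [CountryCode] CHAR(2) NOT NULL,
  [CountryName] VARCHAR(100) NOT NULL
)
WITH
(
  DISTRIBUTION = REPLICATE
)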

The second component is the Data Movement Service (DMS), which manages the routing of data within the appliance. DMS works in partnership with the SQL Server query engine (which creates the execution plan) to distribute the plan to each node. DMS then aggregates the results back on the control node of the appliance, which can perform any final execution before returning the results to the caller. DMS is essentially the traffic cop within APS, enabling queries to be executed and data to be moved within the appliance across 2 to 60 nodes.

Performance

With the introduction of clustered columnstore indexes (CCI) in SQL Server, APS is able to take advantage of the performance gains to better process and store data within the appliance. In typical data warehouse workloads, we commonly see very wide table designs that eliminate the need to join tables at scale (to improve performance). A clustered columnstore index allows SQL Server to store data in columnar format rather than row format. This approach enables queries that don’t use all of a table’s columns to retrieve data from memory or disk more efficiently for processing – increasing performance.
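In PDW, the columnstore and distribution choices can be declared together when the table is created; a sketch with hypothetical names:

CREATE TABLE [dbo].[SalesFact]
(
  [OrderId] BIGINT NOT NULL,
  [OrderDate] DATE NOT NULL,
  [Amount] DECIMAL(18, 2) NOT NULL
)
WITH
(
  CLUSTERED COLUMNSTORE INDEX,      -- store the table in columnar format
  DISTRIBUTION = HASH([OrderId])
)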

By combining CCI tables with parallel processing and the fast processors and storage systems of the appliance, customers are able to improve overall query performance and data compression quite significantly versus a traditional single-server data warehouse. Oftentimes, this means reductions in query execution times from many hours to a few minutes or even seconds. The net result is that companies are able to take advantage of the exhaust of structured or unstructured data in real or near real time to empower better business decisions.

To learn more about the Microsoft Analytics Platform System, please visit us on the web at http://www.microsoft.com/aps.

What’s Your Favorite Feature of SQL Server 2012?

PASS Summit in November was a perfect opportunity to catch up with SQL Server community members to ask them about their favorite features of SQL Server 2012. We caught up with many of them at a local restaurant and captured their responses in this video to kick off Quentin Clark’s keynote.

Perhaps not surprisingly, the favorite features named were exceedingly diverse, but there were some commonalities in the outcomes people were looking for.  These benefits included:

  • Reductions in application downtime
  • Improvements in database and application performance
  • Improvements in productivity
  • Cost savings
  • Empowering end-users with BI tools to improve decision making

So if any of these outcomes are critical to your next project, watch the full video above and see what features of SQL Server 2012 can help you achieve these aims.  And for those that are interested in the Business Intelligence benefits for your next project, you may want to hear more by attending the PASS Business Analytics Conference on April 10-12 in Chicago.  That would be a great opportunity to catch up and hear more about your favorite feature of SQL Server 2012!

Many of the customers featured in the video have already worked with us on published SQL Server 2012 customer stories. You can find a complete list of these case studies at www.microsoft.com/sqlcustomers.

David Hobbs-Mallyon, Senior Product Marketing Manager