Category Archives: BI

On Agile Data Warehousing

In the last month, I’ve been working on the slide deck for my Agile Data Warehousing workshop. To give it additional value that goes beyond the purely technical aspects, and since the “Agile” approach is now (finally!) becoming more and more mainstream in BI as well, I did some research to see what one can find on the web on this topic. Much has happened since the very first time I presented at PASS 2010, where I first mentioned the need to be Agile (or “Adaptive,” as I prefer to say when talking about BI and agility). In 2011, at their BI Summit, Gartner stated through the voice of Andy Bitterer that

50% of requirements change in the first year of a BI project

and, as a result, the only possible way to succeed in a BI project is to be able to adapt quickly to new requirements and requests. The doors to Agile BI were opened.

Agile BI has grown from that point on, to the point that Forrester even started to evaluate Agile Business Intelligence platforms, naming Microsoft as one of the Leaders:

Microsoft named a Leader in Agile Business Intelligence by Forrester

I must say I’m not 100% on board with the definition of Agile BI that Forrester gives, since it lumps together too many things (Data Visualization, Automation, and Self-Service BI, just to name a few), but I understand that they see things from the end-user perspective: the user simply wants to “do everything, immediately, easily and nicely” with their data. There is also a definition on Wikipedia (page created in January 2013) that is better, more complete, and less marketing-oriented.

Beside those definitions, terms like Agile BI and Lean BI have become quite common. Of course, with them came also the ideas of Agile Project Management and Agile Modeling. This latter subject in particular seems to be very hot, and of course it is something close to my interests. Now, I don’t want to go into a deep discussion of the topic, telling you what is good and what is bad. There is already a lot on the web for or against every possible modeling solution. Data Vault, BEAM, Model-Storming… a simple search on the web and you’ll find thousands of articles. Which is the best? Should we go for Data Vault? Or for an Inmon-style DWH? Or Kimball? Or something else?

Well… I don’t really care. Or, to be honest, I care just a little bit.

Now, since “ideas without execution are hallucinations”, and models are ideas after all, it’s my strong opinion that you don’t model to be agile: you “engineer” to be agile. Why? It’s simple: all models are agile… since they are models, and nothing more. It’s not a problem to change a model, since it’s “just” a formal definition of a system (of course, I’m taking the idea to the extreme here). And, since we’re assuming that business requirements will keep changing, you know in advance that no model exists that will satisfy them all (immediately). Yes, you can try to model the “Universal Data Model”, but it’s going to be *very* complex. So, the main point is to be able to make changes quickly, with measurable quality, in a controlled and economical way.

We all know that the one and only model that should be presented to the end user is the Dimensional Model. This is how your Data Mart should look. But how you model your Data Warehouse is completely up to you. And it will change over time, for sure. So how you implement the process that extracts, transforms and loads the data is the key point. That implementation must be agile. What lies behind the scenes, following the information hiding principle, should be considered a simple “technical enabler” that can change at any time. So, if one prefers to use Data Vault, or Inmon, or just store everything on a few hundred Hadoop servers… I don’t see any problem with that, as long as you have defined an engineered approach, with naming conventions, design pattern automation, quality checks, metadata and all the rest, so that when you have to change something you can make the smallest change possible, measure its impact, and test the result.
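One way to picture the information hiding principle is a stable dimensional view layer over whatever internal model you chose. A minimal sketch (all schema, table and column names here are invented for illustration; the internals happen to be Data Vault-style, but could be anything):

```sql
-- The Data Mart exposes a stable dimensional shape to end users,
-- while the tables behind it can be remodeled at any time.
CREATE VIEW dm.DimCustomer
AS
SELECT
    h.CustomerKey,
    s.CustomerName,
    s.Segment
FROM dv.HubCustomer        AS h
JOIN dv.SatCustomerDetails AS s
  ON s.CustomerKey = h.CustomerKey
 AND s.IsCurrent   = 1;   -- expose only the current version of each attribute
```

If tomorrow the internals move from Data Vault to a normalized model or to Hadoop-sourced tables, only the view body changes; every report built on dm.DimCustomer keeps working.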

I’ve been trying to apply Agile principles to BI since 2003. I’ve been through every possible change you can imagine (even a complete change of the ERP that was the main source of data), and the most important thing I’ve learned is that the only model that works is one that is liquid and able to adapt quickly to changing requirements. I usually start modeling in the easiest way possible, so I apply the Dimensional Model, and then I make all the changes needed to keep:

  • All the data at the highest granularity
  • Optimal performance
  • Historical information (which may not be visible to the end user, but may be needed to correctly transform data)

For a complex DWH, this means that at the beginning the DWH and the Data Mart overlap, and that they diverge as the project goes on. In one project we even decided to go for a normalized model of the data, since the DWH became the source not only for reporting and analysis but also for other, more operational, duties.

Now, in order to be really agile, it’s mandatory to have an engineered approach that makes sure the project doesn’t slide from agility into anarchy, because that is the biggest risk. The line that separates the two realities is very thin, and crossing it is very easy. When you have a team of people, either they work as one or Agile BI is not for you; otherwise chaos will reign. And to make sure this does not happen, you have to have a streamlined build process, tools and methods (design patterns, frameworks and so on), so that everyone can technically do a good job and the technical quality of the outcome is not simply proportional to the experience of the person producing it.

It’s really important that everyone who wants to approach Agile BI understands the “engineering” part. I have always found it underestimated, and in all the posts I’ve found on the web I have never read anyone stressing the importance of that part. That’s why I felt the urge to write this post, and that’s why I’ll go very deep into this topic during my PASS workshop.

Now, before finishing the post, there is still one thing missing that is vital for the success of an Agile BI solution: testing. Agility cannot exist if you don’t have an automated (or semi-automated) testing framework that assures you and your users that no errors will be introduced into the data as a result of a change made to satisfy new or changed requirements. This is mandatory, and I’m quite disappointed to see that almost no one underlines this point enough. Forrester didn’t even take it into consideration when evaluating the existing “Agile BI Platforms”. That’s a very big mistake in my opinion, since everyone takes data quality for granted, but it’s actually the most difficult thing to obtain and maintain.
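To make the idea concrete, here is one tiny example of what such an automated data test might look like, a reconciliation check run right after a load (table and column names are assumptions, not a prescribed framework):

```sql
-- After loading, verify the fact table reconciles with staging
-- and fail loudly if it doesn't, so the change that broke it
-- is caught before users ever see the data.
DECLARE @staged MONEY = (SELECT SUM(Amount)      FROM stg.Sales);
DECLARE @loaded MONEY = (SELECT SUM(SalesAmount) FROM dwh.FactSales);

IF @staged <> @loaded
    THROW 50001, N'Data test failed: staging and fact totals do not reconcile.', 1;
```

A real testing framework is a library of hundreds of checks like this, generated from metadata where possible and executed automatically after every load.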

Testing frameworks are quite common in development (even Visual Studio has an integrated testing engine), and they should become common in BI too. Something is starting to appear, but I wish that the big players (Microsoft above all) would start to take this subject more seriously. How cool and useful would a strong integration of testing in SSIS be? After DWH/DM/cube loading, one could launch all the tests (maybe created right from Excel by a power user, or even generated automatically if certain conditions are met, say the generation of a year-end balance) and make sure that the freshly produced data are of good quality.

Just like water. Because data IS water. I won’t drink it if not tested.

Driving Ground Breaking BI with APS

This blog post will detail how APS gives users the ability to:

  • Leverage Power Query, Power Pivot, and Power Map at massive scale
  • Iteratively query APS, adding BI on the fly
  • Combine data seamlessly from PDW, HDI, and Azure using PolyBase

The Microsoft Analytics Platform System (APS) is a powerful scale out data warehouse solution for aggregating data across a variety of platforms. In Architecture of the Microsoft Analytics Platform System and PolyBase in APS – Yet another SQL over Hadoop solution?, the base architecture of the platform was defined. Here we’ll build on this knowledge to see how APS becomes a key element of your BI story at massive scale.

Let’s first start with a business case. Penelope is a data analyst at a US-based restaurant chain with hundreds of locations across the world. She is looking to use the power of the Microsoft BI stack to get insight into the business – both in real time and in aggregate form for the last quarter. With the integration of APS with the Microsoft BI stack, she is able to extend her analysis beyond simple querying. Penelope is able to utilize the MOLAP data model in SQL Server Analysis Services (SSAS) as a front end to the massive querying capabilities of APS. Using the combined tools, she is able to:

  • Quickly access data in stored aggregations that are compressed and optimized for analysis
  • Easily update these aggregations based on structured and unstructured data sets
  • Transparently access data through Excel’s front-end

Using Excel, Penelope has quick access to all of the aggregations she has stored in SSAS with analysis tools like Power Query, Power Pivot, and Power Map. Using Power Map, Penelope is able to plot the growth of restaurants across America, and sees that lagging sales in two regions, the West Coast and Mid-Atlantic, are affecting the company as a whole.

After Penelope discovers that sales are disproportionately low on the West Coast and in the Mid-Atlantic regions, she can use the speed of APS’ Massively Parallel Processing (MPP) architecture to iteratively query the database, create additional MOLAP cubes on the fly, and focus on the issues driving down sales with speed and precision using Microsoft’s BI stack. By isolating the regions in question, Penelope sees that sales are predominantly being affected by two states – California and Connecticut. Drilling down further, she uses Power Chart and Power Pivot to break down sales by menu item in the two states, and sees that the items with low sales in those regions are completely different.

While querying relational data stored in APS can get to the root of an issue, by leveraging PolyBase it becomes simple to also take advantage of the world of unstructured data, bringing additional insight from sources such as sensors or social media sites. In this way Penelope is able to incorporate the text of tweets relating to menu items into her analysis. She can use PolyBase’s predicate pushdown ability to filter tweets by geographic region and mentions of the low selling items in those regions, honing her analysis. In this way, she is able to discover that there are two separate issues at play. In California she sees customers complaining about the lack of gluten free options at restaurants, and in Connecticut she sees that many diners find the food to be too spicy.

Iterative Analytics

So how did Penelope use the power of APS to pull structured data such as Point of Sale (POS), inventory and ordering history, website traffic, and social sentiment into a cohesive, actionable model? By using a stack that combines the might of APS with the low time-to-insight of Excel. Let’s break down the major components:

  • Microsoft Analytics Platform System (APS)
  • Microsoft HDInsight
  • Microsoft SQL Server Analysis Services (SSAS)
  • Microsoft Excel with Power Query, Power Pivot and Power Map

Loading Data in APS and Hadoop

Any analytics team can quickly load data into APS from many relational data sources using SSIS. By synchronizing the data flow between its production inventory and POS systems, APS is able to accurately capture and store trillions of transactional rows from within the company. Thanks to the massive scale of APS (up to 6 PB of storage), Penelope doesn’t have to create the data aggregates up front; she can define them later.

Concurrently, her team uses an HDInsight Hadoop cluster running in Microsoft Azure to aggregate all of the individual tweets and posts about the company alongside its menus, locations, public accounts, customer comments, and sentiment. By storing this data in HDInsight, the company is able to utilize the elastic scale of the Azure cloud and continually update records with real-time sentiment from many social media sites. With PolyBase, Penelope is able to join transactional data with the external tables containing social sentiment data using standard T-SQL constructs.

Creating the External Tables

Using the power of PolyBase, the development team can create external tables in APS connected to the HDInsight instance running in Azure. In two such tables, Tweets and WordCloud, Twitter data is easily collected and aggregated in HDFS. Here, the Tweets table holds the raw data with an additional sentiment value, and the WordCloud table is an aggregate of all words used in posts about the company.
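A hedged sketch of what that DDL might look like (the exact syntax varies by APS version, and all names, locations and columns below are illustrative assumptions rather than the team's actual objects):

```sql
-- Point APS at the Hadoop storage behind the HDInsight cluster.
CREATE EXTERNAL DATA SOURCE AzureHDI
WITH (TYPE = HADOOP,
      LOCATION = 'wasbs://social@contosostorage.blob.core.windows.net');

CREATE EXTERNAL FILE FORMAT PipeDelimited
WITH (FORMAT_TYPE = DELIMITEDTEXT,
      FORMAT_OPTIONS (FIELD_TERMINATOR = '|'));

-- Raw tweets, with a sentiment score added during processing.
CREATE EXTERNAL TABLE dbo.Tweets
(
    TweetId    BIGINT,
    CreatedAt  DATETIME2,
    StateCode  CHAR(2),
    TweetText  NVARCHAR(400),
    Sentiment  FLOAT
)
WITH (LOCATION = '/social/tweets/',
      DATA_SOURCE = AzureHDI,
      FILE_FORMAT = PipeDelimited);

-- Standard T-SQL then joins relational sales with the external data;
-- the geographic filter is a candidate for predicate pushdown to Hadoop.
SELECT o.MenuItem, AVG(t.Sentiment) AS AvgSentiment
FROM   dbo.Orders AS o
JOIN   dbo.Tweets AS t ON t.TweetText LIKE '%' + o.MenuItem + '%'
WHERE  t.StateCode IN ('CA', 'CT')
GROUP BY o.MenuItem;
```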

Connecting APS and SSAS to Excel

Within Excel, Penelope has the ability to choose how she would like to access the data. At first she uses the aggregations that are available to her via SSAS – typical sales aggregates like menu item purchases, inventory, etc. – through Power Query.

But how does Penelope access the social sentiment data directly from APS? Simple: using the same data connection tab, Penelope can directly connect to APS and pull in the sentiment data using PolyBase.

Once the process is complete, tables pulled into Excel, as well as their relationships, are shown as data connections.

Once the data connection is created, Penelope is able to create a report using Power Pivot with structured data from the Orders table and the unstructured social sentiment data from HDInsight in Azure.

With both data sets combined in Excel, Penelope is able to then create a Power Map of the sales data layered with the social sentiment. By diving into the details, she can clearly see issues with sentiment from customers in Connecticut and California.

To learn more about APS, please visit

Drew DiPalma – Program Manager – Microsoft APS
Drew is a Program Manager working on Microsoft Analytics Platform System.  His work on the team has covered many areas, including MPP architecture, analytics, and telemetry.  Prior to starting with Microsoft, he studied Computer Science and Mathematics at Pomona College in Claremont, CA. 

SQL Server 2014 Columnstore Indexes: The Big Deck

The History

Though Columnstore indexes were introduced in SQL Server 2012, they’re still largely unknown.  In 2012, some adoption blockers remained, yet Columnstore was nonetheless a game changer for many apps.  In SQL Server 2014, the potential blockers have largely been removed & Columnstore is going to profoundly change the way we interact with our data.

I’ve been working with Columnstore indexes since the Denali alpha bits were available.  As SQL CAT Customer Lab PM, I hosted over a half-dozen customers in my lab, proving out our builds, finding & entering bugs, & working directly with the product group & our customers to fix them.

The Why

Why Columnstore?  If we’re looking for a subset of columns from one or a few rows,  given the right indexes, SQL Server has long been able to do a superlative job of providing an answer.  But if we’re asking a question which by design needs to hit lots of rows—reporting, aggregations, grouping, scans, DW workloads, etc., SQL Server has never had a good mechanism—until Columnstore.  Columnstore was a competitive necessity—our Sybase & Oracle customers needed a solution to satisfy what was heretofore a significant feature & performance deficit in SQL Server.  Our leadership & product team stepped up & provided a superb response.
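By way of illustration, the SQL Server 2014 form is a single statement (the table name is an assumption; this is a minimal sketch, not one of the deck's demos):

```sql
-- 2014's updatable clustered columnstore: the index IS the table,
-- stored column-by-column with high compression.
CREATE CLUSTERED COLUMNSTORE INDEX ccsi_FactSales
ON dbo.FactSales;

-- Scan-heavy aggregations like this are exactly where columnstore
-- storage and batch mode processing pay off.
SELECT ProductKey, SUM(SalesAmount) AS TotalSales
FROM   dbo.FactSales
GROUP BY ProductKey;
```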

The Presentation

I’ve delivered my Columnstore presentation over 20 times to audiences internal & external, small & large, remote & in-person, including the 2013 PASS Summit, two major Microsoft conferences (TechReady 17 & TechReady 18), & several PASS user groups (BI Virtual chapter, IndyPASS, Olympia, PNWSQL, Salt Lake City, Utah County, Denver, & Northern Colorado).

The deck has evolved significantly & includes a broad overview, architecture, best practices, & an amalgam of exciting success stories.  The purpose is to educate you & convince you that Columnstore is a compelling feature, to encourage you to experiment, & to help you determine whether Columnstore could justify upgrading to SQL Server 2014.

The Table of Contents

Here’s my deck’s ToC:

  • Overview
  • Architecture
  • SQL Server 2012 vs. new! improved! 2014
  • Building Columnstore Indexes
  • DDL
  • Resource Governor
  • Data Loading
  • Table Partitioning
  • Scenarios & Successes
    • Motricity
    • MSIT Sonar
    • DevCon Security
    • Windows Watson
    • MSIT Problem Management
  • Room for Improvement
  • Learnings & Best Practices
  • More Info

The Demos

I’ve included several demos, all of which are exceedingly simple & include step-by-step walkthroughs.

  • Conventional Indexes vs. Columnstore Perf
  • DDL
  • Resource Governor
  • Table Partitioning

Let me know if you have any questions.  In the meantime, enjoy!

BI Solution on SQL Server Gives Boxer Loyal Customers

Competition in the TV market is constantly increasing. In the last two years alone, it has been intensified by new film and streaming services on the internet. Winning new customers is important, but keeping the ones you already have is perhaps even more important. To meet the competition, Boxer TV-Access is improving its customer communication with the help of artificial intelligence and a decision-support solution built by Random Forest on Microsoft SQL Server Enterprise 2012.


“This solution has been a great success in the company. It helps us bring the right offers to the right customers at the right time. As a result, we have become considerably better at retaining our customers,” says Martin Carlsson, CIO at Boxer.


The technique of applying artificial intelligence to business information is called data mining. By statistically combining and analyzing different kinds of customer information, the probability of various customer behaviors can be calculated. This can, for example, be based on where a customer lives, what subscription the customer has, and how the customer has acted and is acting right now, combined with historical churn statistics.
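As a rough illustration, a churn probability like this could be asked of an SSAS mining model with a DMX prediction join (the model, data source, and column names below are all invented for illustration; they are not Boxer's actual solution):

```sql
-- Score current customers against a trained churn model:
-- for each customer, return the predicted probability of churning.
SELECT
    t.CustomerID,
    PredictProbability([Churn]) AS ChurnProbability
FROM
    [ChurnModel]
PREDICTION JOIN
    OPENQUERY([BoxerDW],
        'SELECT CustomerID, Region, SubscriptionType, TenureMonths
         FROM dbo.CustomerFeatures') AS t
ON  [ChurnModel].[Region]           = t.Region
AND [ChurnModel].[SubscriptionType] = t.SubscriptionType
AND [ChurnModel].[TenureMonths]     = t.TenureMonths;
```

Customers with a high predicted probability can then be targeted with retention offers before they actually leave.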


This type of solution learns and improves itself automatically over time, and it can often be built on a platform that most companies already have.


“Random Forest has been creative and innovative in building Boxer’s new BI solution, making use of predictive analytics. This is the kind of thing that really makes a difference for the customers,” says Tommy Flink, Marketing Manager for Decision Support at Microsoft Sweden.


“We chose to invest in a Microsoft environment a long time ago, so it was natural for us to use Microsoft products when we created this solution,” says Martin Carlsson.


“For companies that have access to a BI solution based on Microsoft’s tools, this becomes considerably simpler and much less costly than using external analysis tools. The technology needed to create a solution like Boxer’s is already in place,” says Gustav Rengby, head of customer analytics at Random Forest, the consulting firm helping Boxer build the solution.


Photo: Tommy Flink, Marketing Manager for Decision Support at Microsoft Sweden.

Microsoft Updates Power BI for Office 365 Preview with New Natural Language Search, Mapping Capabilities

Today we’re pleased to announce the addition of significant new features to the Power BI for Office 365 preview, including natural language search with Q&A and improved experiences in two preview add-ins for Excel: 3D mapping visualizations through Power Map and improved data search in Power Query.

Introduced in July and currently in preview, Power BI for Office 365 is a self-service business intelligence (BI) service delivered through Office 365. Complementing Excel, it arms information workers with data analysis and visualization capabilities, enabling them to identify deeper business insights either on premises or within a trusted cloud environment. With Power BI for Office 365, customers can connect to data in the cloud or extend their existing on premises data sources and systems to quickly build and deploy self-service BI solutions hosted in Microsoft’s enterprise cloud. You can sign up to register for the preview here.

We’ve had the preview open to an initial wave of customers for the past month and are encouraged by the enthusiastic response we’ve received. Today we’re excited to share some of the new features we’ve added recently to both Excel and the Power BI for Office 365 service.

Search-driven data visualization with Q&A

One of the Power BI features users have been most interested in is Q&A, which takes enterprise data search and exploration to a whole new level. With Q&A, we looked at how consumers experienced Bing search and used that knowledge to enable customers to query their enterprise data and generate stunning visual results. The search experience is instantaneous and uses natural language query – Q&A interprets the question the user is asking and serves up the correct interactive chart or graph. We’ve received great responses from customers who have tested this capability and look forward to hearing what you think. To see Q&A in action, check out this video:

Storytelling through 3D mapping with Power Map

First previewed a few months ago, Power Map (formerly GeoFlow) is an add-in for Excel which gives users the ability to plot geographic and temporal data visually on Bing Maps, analyze that data in 3D, and create interactive tours to share with others. This month, we made some significant updates to Power Map on the Download Center, including immediate geo-coding of geospatial elements of data, coupled with new region-based visualization that color-codes these geo-political areas: zip code, county, state, country/region. Users can also take the interactive tours designed in Power Map to create videos optimized for mobile, tablets/computers, and HD displays. These videos can be shared anywhere, including social media, PowerPoint slides, and Office 365. To read more about the new features of Power Map, check out the Excel blog.

Power Map

Simplifying data discovery with Power Query

We’ve also updated Power Query, an add-in for Excel that helps customers easily discover, combine, and transform their data. We have improved the online search experience and expanded the number of available datasets, including popular datasets from and the Windows Azure Marketplace, in addition to Wikipedia. We’ve also improved external data import for SQL Server/Windows Azure SQL Database, as well as the overall filter capabilities across all supported data sources. Additionally, Power Query now supports different merge options for more flexibility in building your queries. We’re offering better integration with Excel so users can share queries with others in their organizations.

Power Query

These new features complement the capabilities already included in the Power BI preview, such as:

  • Power BI Sites – Quickly create collaborative BI sites in Office 365 for teams to share reports and data views. Larger workbook viewing is now supported (up to 250MB) so users can view and interact with larger workbooks through the browser.
  • Data Stewardship – Users can now not only share their workbooks but also the data queries they create in Excel.
  • Data Catalog – IT departments now have a new way to provision users by enabling data search. IT departments can register corporate data with the Data Catalog so that users can discover this data with the new online search feature introduced with Power Query for Excel.
  • Mobile Access – Mobile BI access to reports in Office 365 is provided through new HTML 5 support and a native mobile application for Windows 8 tablets available in the Windows Store.

Bringing big data to a billion users

Power BI for Office 365 is just one way we are delivering on our vision to enable the broadest set of people to gain actionable insights from big data, at any time and from anywhere. With Power BI we are providing access to powerful business analytics tools, built into our existing products including Excel and Office 365 to make data analysis engaging and impactful.

To learn more and register for the preview visit You can also download Power Map and Power Query along with sample datasets on the Power BI add-in Getting Started page. To see Power BI for Office 365 in action, check out this demonstration. Tell us what you think by posting in the comments below or tweeting us at @SQLServer #MSBI #PowerBI. And check out the Power BI blog for more detailed information on the features and functionality in Power BI.

Power BI for Office 365

Self-Service BI with the Microsoft Data Explorer Preview for Excel 2013: 5 Reasons to Get Started Right Away

Using a variety of data sources has been a daunting task for mere mortals.  Now, with the Data Explorer add-in for Excel, you don’t need to be a super hero to unlock data gems.  The Microsoft Data Explorer Preview for Excel is now available for download.

Here are the Top 5 Reasons to investigate this new add-in:

  1. Discover the World’s Data
  2. Connect to a wide variety of Data Sources
  3. Combine data from multiple data sources
  4. Reshape and transform your data effortlessly
  5. Refresh your data anytime

Sound interesting?  Head on over to the Microsoft Business Intelligence blog post, “5 Things You Need to Know about Microsoft Data Explorer Preview for Excel” for more details and become a super hero in your organization!

What’s Your Favorite Feature of SQL Server 2012?

PASS Summit in November was a perfect opportunity to catch up with SQL Server community members to ask them about their favorite features of SQL Server 2012. We caught up with many of them at a local restaurant and captured their responses in this video to kick off Quentin Clark’s keynote.

Perhaps not surprisingly, the favorite features named were exceedingly diverse, but there were some commonalities in the outcomes people were looking for.  These benefits included:

  • Reductions in application downtime
  • Improvements in database and application performance
  • Improvements in productivity
  • Cost savings
  • Empowering end-users with BI tools to improve decision making

So if any of these outcomes are critical to your next project, watch the full video above and see what features of SQL Server 2012 can help you achieve these aims.  And for those who are interested in the Business Intelligence benefits for their next project, you may want to hear more by attending the PASS Business Analytics Conference on April 10-12 in Chicago.  That would be a great opportunity to catch up and hear more about your favorite feature of SQL Server 2012!

Many of the customers featured in the video have already worked on published SQL Server 2012 customer stories.  You can find a complete list of these case studies at

David Hobbs-Mallyon, Senior Product Marketing Manager


The SQL Server Community Looks to Emerging Trends in BI and Database Technologies

At PASS Summit this year, Ted Kummert outlined his views on accelerating insights in the new world of data.  He mentioned in his blog post that this is an incredible time for the industry, and that data has emerged as the new currency of business.

Given that it’s such an exciting time to be in the industry, we thought this would be an ideal opportunity to ask some of the SQL Server community members attending PASS about what issues from the past they are glad are behind them, and about what industry and technology trends they are looking forward to in the future.

The answers from community members on what future trends they are most interested in were extremely diverse, including topics such as big data, new data visualizations, in-memory technologies and cloud-based & hybrid architectures. Watch the full video below to hear what the SQL Server community had to say.

Incidentally, many of the people featured in the video have already worked on published SQL Server 2012 customer stories.  You can find a complete list of these case studies at

Watch to Win on November 28th! Enter the ‘Big Data Webcast’ Challenge

Mark your calendars for November 28th to get an inside track on how to make smarter business decisions. Join us for the new webcast “Driving Smarter Decisions with Microsoft Big Data”, presented by Mike Flasko, Principal Program Manager at Microsoft, and IDG Enterprise.  Watch the webcast on November 28th and you can earn a chance to win one of three Executive Gift Packs (each includes a SQL Server branded jacket, a SQL Server branded laptop case and a non-branded USB hub) through our Sweepstakes Drawing, or one of three Xbox/Kinect bundles by participating in our Skills Contest.

To enter the Sweepstakes portion of the Microsoft Big Data Webcast Challenge, you must:

To enter the Skills contest portion of the Microsoft Big Data Webcast Challenge, you must:

  • Log in to your Twitter account.  If you do not have a Twitter account, you can register for a free account by visiting
  • Follow @SQLServer on Twitter to be eligible.
  • Watch the Big Data Webcast on November 28th between 6 am and 5 pm PT.
  • During the course of the day, three questions relating to the Big Data Webcast will be posted via @SQLServer on Twitter. Reply with the correct answer to @SQLServer and include the hashtag #bigdatawebcast in your reply.
  • You may only answer each question one time. If you submit more than one answer to a question, all of your responses (including your first) will be disqualified.

There is a limit of one Challenge prize per person. The six winners of the Big Data Webcast Challenge will be announced at 5 pm PT on November 30th via @SQLServer on Twitter.  This Challenge is open to all eligible participants worldwide.  If you are unable or choose not to accept the prize, the prize will be awarded to an alternate winner. See full contest rules below.



By participating in the “Big Data Webcast Challenge” (the “Challenge”), you understand that these Official Rules are binding and that the decisions of Microsoft Corporation (the “Sponsor”, who may also be referred to as “Microsoft”, “we”, “us”, or “our”) are final and binding on all matters pertaining to this Challenge. The Challenge includes a skills contest and a sweepstakes drawing, as described more fully below.

It is your responsibility to review and understand your employer’s policies regarding your eligibility to participate in trade promotions such as this one. If you are participating in violation of your employer’s policies, you may be disqualified from entering or receiving prizes.  Microsoft disclaims any and all liability or responsibility for disputes arising between employees and their employers related to this matter. Prizes will only be awarded in compliance with the employer’s policies.

ELIGIBILITY: You are eligible to enter this Challenge if you meet the following requirements at time of entry:

  • You are an IT Professional or a developing IT Professional and you are 18 years of age or older; and
  • You are NOT a resident of any of the following countries: Cuba, Iran, North Korea, Sudan, or Syria.
    • PLEASE NOTE: U.S. export regulations prohibit the export of goods and services to Cuba, Iran, North Korea, Sudan and Syria. Therefore residents of these countries/regions are not eligible to participate; and
  • You are NOT an employee of Microsoft Corporation or an employee of a Microsoft subsidiary; and
  • You are NOT involved in any part of the administration and execution of this Challenge; and
  • You are NOT an immediate family (parent, sibling, spouse, child) or household member of a Microsoft employee, an employee of a Microsoft subsidiary, or a person involved in any part of the administration and execution of this Challenge.

ENTRY PERIOD: The Challenge begins at 6:00 a.m. Pacific Time (PT) on November 28, 2012, and ends at 5:00 p.m. PT on November 28, 2012 (“Entry Period”).


Sweepstakes Drawing:

To receive one entry into the sweepstakes drawing, watch the Microsoft “Big Data” Webcast on November 28th between 6 am PT and 5 pm PT. For watching the webcast in its entirety, you will receive one entry into the Sweepstakes. Limit one entry per person.

Skills Contest:

To enter the skills contest, you must be logged into your Twitter account and must be a follower of @SQLServer to be eligible. If you do not have a Twitter account, you can register for one free of charge. Then log in to the Microsoft “Big Data” Webcast on November 28 from 6 am-5 pm PT. During the course of the day on November 28th between 6 am PT and 5 pm PT, three questions relating to the Microsoft “Big Data” Webcast will be posted via @SQLServer on Twitter. Reply to @SQLServer with the correct answer and include the hashtag #bigdatawebcast. Limit one entry per person per question.

We are not responsible for entries that we do not receive for any reason. We reserve the right to modify the Webcast schedule for any reason.


Sweepstakes: On or around November 30, we, or a company acting under our authorization, will randomly select three winners from among all eligible sweepstakes entries received to win a prize package consisting of the following items: a SQL Server-branded hoodie, a branded laptop bag, and a non-branded USB hub. Approximate retail value (ARV): $150.

Contest: The first eligible entrant to reply to @SQLServer on Twitter with the correct answer to each question will win an Xbox 360 4GB with Kinect. ARV: $299 each. Three contest prizes will be awarded, one for each question. The Xbox/Kinect bundles are US versions.

Limit one Challenge prize per person. If you are a potential winner, we will notify you through your Twitter account, e-mail address, or the telephone number provided when you registered within 3 business days following the random drawing. If the notification that we send is returned as undeliverable, or you are otherwise unreachable for any reason, we may award the prize to an alternate, randomly selected winner. Winners will have seven (7) days to reply to the notification; otherwise an alternate, randomly selected winner will be determined.

Your odds of winning this Challenge depend on the number of eligible entries received.

If you are a winner:

  • You may not exchange your prize for cash; and
  • If you do not wish to or cannot accept the prize, it will be forfeited and we may, at our discretion, award it to a runner-up. We may, however, award a prize of comparable or greater value, at our discretion; and
  • You are responsible for all federal, state, provincial and local taxes (including income and withholding taxes) as well as any other costs and expenses associated with accepting and/or using the prize that are not included above as being part of the prize award; and
  • You understand you are accepting the prize “as is” with no warranty or guarantee, either express or implied by us; and
  • You understand that all prize details shall be determined by us.

WINNER NOTIFICATION: If you are determined to be the winner:

  • The prize will be awarded to you and you shall ensure it is used and/or distributed in accordance with your company’s policies (the Promotion Parties (as hereinafter defined) are not responsible for the re-distribution of prizes within your company); and
  • You will be notified by phone or by U.S. mail, overnight mail, or e-mail; and
  • You may be required to sign and return an Affidavit of Eligibility and Liability/Publicity release, unless prohibited by law, within ten (10) days of date of prize notification.

If you are the winner and you: (i) do not reply to such notification or the notification is undeliverable; (ii) do not return the Affidavit of Eligibility and Liability/Publicity release completed and signed within ten (10) days of date of prize notification; or, if you (iii) are not otherwise in compliance with these Official Rules, you will be disqualified and, we may, at our discretion, notify a runner-up. If you are a winner and accept the prize, you agree that we and our designees shall have the right to use your name, city and state of residence in any and all media now or hereafter devised worldwide in perpetuity, without additional compensation, notification or permission, unless prohibited by law.

LIMITATIONS OF LIABILITY: The Promotion Parties are not responsible for any liability, cost or injury incurred by Participants arising out of or in connection with the Challenge, including, without limitation, the following:

  • Lost, late, incomplete, inaccurate, stolen, fraudulent, misdirected, undelivered, interrupted, damaged, delayed or postage-due reports, entries, mail or other information;
  • Any incorrect or inaccurate information or error or defect of any kind, regardless of by whom it was caused or the means by which it occurred; or
  • Equipment, software, network or systems that fail, have viruses or other problems, are breached or that cause injury or damage to participants or their property.

By entering this Challenge, you agree to, and hereby release and hold harmless Microsoft, its parent company, affiliates, subsidiaries, and advertising or promotion agencies, and anyone working directly on this program, product and promotion, and all of their respective officers, directors, employees and representatives (which for the purpose of these Official Rules will be referred to together as the “Promotion Parties”) from any and all liability or any injuries, loss or damage of any kind arising from or in connection with this Challenge or acceptance and use of any prize. No responsibility is assumed by Microsoft for lost, late, or misdirected entries or any computer, online telephone, or technical malfunctions that may occur. All entries become the property of Microsoft and will not be returned.

GENERAL CONDITIONS: This Challenge is governed by Washington law. You agree that the jurisdiction and venue for the handling of any disputes or actions arising out of this Challenge shall be in the courts of the State of Washington.

If, for any reason, the Challenge is not capable of running as planned, including, without limitation, the reasons set forth above, we reserve the right at our sole discretion to cancel, terminate, modify or suspend the Challenge. If a solution cannot be found to restore the integrity of the Challenge, we may, at our sole discretion, determine the winners of this Challenge, using all non-suspect, eligible customer redemption reports and/or entries received (as applicable) before we had to cancel, terminate, modify or suspend the Challenge.

We may disqualify you from participating in the Challenge, or winning a prize (and void your participation in the other promotions we may offer) if, in our sole discretion, we determine you are attempting to undermine the legitimate operation of the Challenge by cheating, deception or other unfair playing practices, or intending to annoy, abuse, threaten or harass us, any other entrant or our representatives or if you are otherwise not in compliance with the terms of these Official Rules. CAUTION: ANY ATTEMPT BY YOU OR ANY OTHER INDIVIDUAL TO DELIBERATELY DAMAGE ANY WEBSITE OR UNDERMINE THE LEGITIMATE OPERATION OF THE CHALLENGE IS A VIOLATION OF CRIMINAL AND CIVIL LAWS AND SHOULD SUCH AN ATTEMPT BE MADE, WE RESERVE THE RIGHT TO SEEK DAMAGES FROM YOU TO THE FULLEST EXTENT PERMITTED BY LAW.


To find out if you won, requests can be emailed to for 30 days following the drawing.

SPONSOR: This Challenge is sponsored by Microsoft Corporation, One Microsoft Way, Redmond, WA 98052.