Monthly Archives: March 2014

Hurry up! Buy one book and get one for free on Packt Publishing!

Packt Publishing is celebrating the release of its 2,000th title with a great “Buy One, Get One Free” offer across its comprehensive catalog.



If you buy any book of your choice, you will get another one for free. In addition, you can take advantage of this offer as many times as you like during the promotion period.


I suggest you take a look at the available titles: you have the chance to bring home two books for the price of one 🙂 In particular, I suggest looking at these categories:



Enjoy the books!

Power BI for Office 365: Making It Real

In my recent post, Simplifying Business Intelligence through Power BI for Office 365, I described how the Power BI cloud service has changed the way I personally work. The ability to do “self-service BI” without any IT involvement—and to have the infrastructure transparently provisioned in a cloud computing environment—has enabled me to drop any dependency on specific hardware, and I can use just about any Web browser. We have heard from many business users that this self-service approach is extremely empowering, and it enables rapid progress and insights.

One potential downside of self-service BI is that the reports, charts, and graphs that users develop with such tools can be cut off from other data and reports. Self-service BI solutions can also become out-of-date and, as a result, unreliable. These are serious limitations of self-service BI tools. For self-service tools to be convenient, as well as reliable and trustworthy, there needs to be a way to easily include user-created reports as part of a larger solution, and also to operationalize these solutions. Without such capabilities, self-service tools are, at best, good for prototyping.

Power BI is a complete system, tying together the empowering aspects of self-service with the operational and collaborative capabilities that are critical for an enterprise-class solution. Power BI also provides governance and data stewardship capabilities, as needed, to enable management oversight. The result is a managed self-service system.

In this post, I want to talk about how Power BI enables me to create complete business insight solutions that are based on fresh, reliable data. I will not cover data governance capabilities, keeping that as a topic for another day.

Pulling It Together

For me, a Power BI site is the place where I can pull together a full solution that gives an “at a glance” view of visualizations and insights that are available for a set—or sets—of data. It also is the place to enable operations such as refreshing the data for the solution. I use the “Featured Reports” row at the top of a Power BI sites application to highlight the reports and visualizations that may be most important and that I want to call out visually.

Power BI Q&A is a good way to ask ad-hoc questions using natural language, but to be really effective, it is important to provide starting points from which users can explore a dataset. For this, I use the “Featured Questions” capability. Featured Questions enables me to come up with starter questions, which can be used in two ways:

1. To lead users into the Q&A experience from the Power BI site app main page through the Featured Questions row on Power BI sites.

2. To guide users to what they can ask on the Q&A page itself. The “About this data” slide out panel on the right side of the Q&A page lists several types of questions that users can ask.

Adding specific featured questions can be done from the Featured Questions view (accessible from the icon next to the Q&A text box and in the “About this data” slide out panel).

Keeping It Fresh

Power BI leverages Excel workbooks to create data models and compelling interactive visuals. Data is mashed up in Excel and can come from a variety of data sources. Typically, once I mash up data and have some interesting visuals to share, I upload the Excel workbook to a site on the Power BI service, and as simple as that, the solution is available in the cloud. So far so good, right—except that the data in the solution is “as of” the upload time. It can quickly get stale and might not make sense later, if the data is time-sensitive. Of course, it is possible to re-upload the workbook with refreshed data, but that is a manual and potentially unreliable solution. What is required is a way to automatically refresh the data. Power BI enables this via the “Scheduled Data Refresh” feature. Scheduled Data Refresh enables automatic updates on a regular schedule, keeping the data fresh and the reports reliable, which is critical for real-world solutions.

Like everything in Power BI, the process for setting up the Scheduled Data Refresh is self-service in nature. The ellipsis (…) menu for each workbook on the Power BI site is where a data refresh is configured and monitored.

Setting up the actual schedule is straightforward, with simple options. Of course, the flip side to simplicity is that the options are somewhat limited. Over time, we will add more customizations, and the overall goal is to enable business users to set up the refresh without requiring help from someone in IT.

Once this is set up properly, the refresh job runs as defined in the schedule, and switching to the “history” tab provides a quick view of jobs that have already run and the results of each run (including any errors).

In the example, my data source is an Azure SQL Database, and as such, it is simple to establish the connection. The connection I defined in my Excel workbook just works in the cloud after uploading to the Power BI service, and the refresh operation also just works. However, in many cases, data comes from an on-premises source, and in these cases, the refresh is not as simple. To refresh data from an on-premises source, the first step is to install and configure the Data Gateway; then, add the data source to the Data Gateway. I will not cover this in detail here; it deserves a write-up of its own.

A Complete BI System

With this up and running, the overall solution is now a workable and practical implementation. The Power BI site provides the “face” of the solution and allows the user to get productive quickly. Scheduled Data Refresh ensures the data freshness and reliability that are so critical to business insights. Power BI converts the Excel workbook from an island of data into a full participant in a BI solution, with fresh data that the business can rely on. Add this to the rest of the capabilities in Power BI related to data management, and we have a complete BI system that enables the business user to get powerful insights, and at the same time, still enables administrators to have a degree of oversight and control.

Read more about using the Power BI sites application with your SharePoint Online site.

Kamal Hathi, Director of PM, Data Platform Group

 

Check out the website to learn more about Power BI for Office 365 and start a free trial today. For those who want access to the upcoming SQL Server 2014 release as soon as possible, please sign up to be notified once the release is available. Also, please join us on April 15 for the Accelerate Your Insights event to learn about our data platform strategy and how more customers are gaining significant value with SQL Server 2014. There will also be additional launch events worldwide, so check with your local Microsoft representatives or your local PASS chapter for more information on SQL Server readiness opportunities.

Collect Database and Table Index Growth Statistics for All SQL Servers Using PowerShell

You want to collect growth statistics on your databases and report on them. Management would like to see how fast databases are growing. Users would like to know how big their main tables are becoming. You have over a hundred servers with thousands of databases and you don’t have $500 per server to spend. In this tip we cover a simple way to collect this data for all servers using PowerShell.
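The tip drives the collection from PowerShell, looping over a list of servers and writing the results to a central table. As a minimal sketch of the per-database query such a collector might run (the column aliases and layout below are illustrative, not taken from the tip itself), the standard catalog views provide row counts and reserved space per table:

    -- Per-database sizing query a growth collector might run on each instance.
    -- Aliases and output shape are illustrative only.
    SELECT
        @@SERVERNAME AS ServerName,
        DB_NAME()    AS DatabaseName,
        s.name       AS SchemaName,
        t.name       AS TableName,
        SUM(CASE WHEN ps.index_id IN (0, 1) THEN ps.row_count ELSE 0 END) AS [RowCount],
        SUM(ps.reserved_page_count) * 8 / 1024 AS ReservedMB,   -- includes all indexes
        SYSDATETIME() AS CollectedAt
    FROM sys.dm_db_partition_stats AS ps
    JOIN sys.tables  AS t ON t.object_id  = ps.object_id
    JOIN sys.schemas AS s ON s.schema_id = t.schema_id
    GROUP BY s.name, t.name
    ORDER BY ReservedMB DESC;

Running something like this on a schedule and saving the output to a history table is what turns point-in-time sizes into growth statistics you can report on.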

Visit Microsoft at the Gartner Business Intelligence and Analytics Summit 2014 in Las Vegas

Microsoft will be a premier sponsor at the Gartner Business Intelligence and Analytics Summit, held in Las Vegas from March 30th to April 2nd. We’re looking forward to sharing our vision of how we’re making big data real through the familiar tools you already use – SharePoint, Excel and SQL Server – as well as new ones such as Power BI for Office 365 and Windows Azure HDInsight.

Over the last few years, we’ve all had to deal with an explosion in data types and the velocity at which we need to react to that data. In this world of big data, Microsoft has refined our data toolkit – adding performance and scaling capabilities on commodity hardware in SQL Server 2014, as well as the ability to store, process and analyze large volumes of data through Windows Azure HDInsight, our 100% compatible implementation of Apache Hadoop.

Just as we’ve added capabilities to our data platform, we’ve continued to focus on making it as easy as possible to get rich insights from your stored data – whether it’s in SQL Server, Windows Azure HDInsight or a 3rd party data provider such as a LOB system. We’ve built powerful visualizations right into Excel with Power View and have added geospatial mapping capabilities through Power Map. It’s also now possible to query your data with natural language through Q&A in Power BI.

Our focus at Gartner will be on showcasing how all of these innovations are coming together to enable all of your users to find, analyze and use the information they need quickly and easily.

We’d love to speak to you if you’ll be there. Stop by our booth; attend our session on April 2nd from 10:45 AM – 11:45 AM; or schedule an individual meeting with Microsoft through the Gartner concierge. We’re also co-hosting a learning lab on the show floor with our partner SAP where you can learn about how Power BI connects to SAP BusinessObjects BI Universes, both through small group sessions and hands-on demonstrations.

We hope to see you. If you haven’t yet registered, you may use code BISP7 to get a $300 discount on registration.


Amway sees 100 percent availability with hybrid data-recovery solution

With the release-to-manufacturing announcement of SQL Server 2014, you will start to see more customer stories showcasing the use of hybrid features that span the cloud and on-premises environments.

One such customer is the global direct seller Amway. Its data centers support about 34 terabytes of information spread across 100 instances of Microsoft SQL Server software, with the data load growing at an annual rate of about 15 percent. The company faces the same challenges as almost any sizable organization in maximizing data availability and ensuring disaster recovery. As Amway has grown, it has created additional secondary data centers as disaster-recovery sites. All this additional infrastructure has, inevitably, introduced more complexity and cost into the Amway data environment.

Previously, Amway had been concerned about cloud configurations that were not under its control. But with Windows Azure Infrastructure as a Service, the company could create its own virtual machine configuration image and install it on Windows Azure, addressing that concern. Amway uses the same virtual-machine media for instances on Windows Azure and in its own data centers, ensuring that installations are consistent across both environments.

Amway conducted a pilot test of a prerelease version of Microsoft SQL Server 2014 software, focusing on the software’s AlwaysOn Availability Groups for high availability and disaster recovery. That feature is based on multisite data clustering with failover to databases hosted both on-premises and in Windows Azure. The pilot test focused on a CRM application. The test architecture consisted of three nodes in a hybrid on-premises/cloud configuration: a primary replica and a secondary replica, both located on-premises and operating synchronously to support high availability through automatic failover, and a secondary replica located in Windows Azure, operating in asynchronous mode to provide disaster recovery through manual failover.
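The topology described above maps directly onto availability group DDL. As a rough illustration only – the server names, database name, and endpoint URLs below are placeholders rather than Amway’s actual configuration, and prerequisites such as the underlying Windows Server Failover Cluster and the database mirroring endpoints are omitted – the three replicas could be declared along these lines:

    -- Illustrative three-replica availability group matching the topology above.
    -- All names and endpoint URLs are placeholders; run on the primary after the
    -- cluster, endpoints, and joined Azure VM node are already in place.
    CREATE AVAILABILITY GROUP [CrmAG]
    FOR DATABASE [CrmDB]
    REPLICA ON
        N'ONPREM-SQL1' WITH (              -- primary replica, on-premises
            ENDPOINT_URL      = N'TCP://onprem-sql1.corp.local:5022',
            AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
            FAILOVER_MODE     = AUTOMATIC),
        N'ONPREM-SQL2' WITH (              -- secondary replica, on-premises, HA
            ENDPOINT_URL      = N'TCP://onprem-sql2.corp.local:5022',
            AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
            FAILOVER_MODE     = AUTOMATIC),
        N'AZURE-SQL1' WITH (               -- secondary replica in Windows Azure, DR
            ENDPOINT_URL      = N'TCP://azure-sql1.cloudapp.net:5022',
            AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
            FAILOVER_MODE     = MANUAL);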

Amway found that the test of SQL Server AlwaysOn Availability Groups with Windows Azure replicas delivered 100 percent uptime and failover took place in 10 seconds or less, compared to the 45 seconds Amway experienced with traditional SQL Server Failover Clusters. Amway is looking forward to an even bigger reduction in the time required to recover from a complete data center failure. Instead of the two-hour, three-person process required with database mirroring, Amway will be able to restore a data center with just 30 seconds of one DBA’s time.

You can learn more about the Amway solution by reading the more detailed case study here.  


SQL Server 2014 Columnstore Indexes: The Big Deck

The History

Though Columnstore indexes were introduced in SQL Server 2012, they’re still largely unknown.  In 2012, some adoption blockers remained; yet Columnstore was nonetheless a game changer for many apps.  In SQL Server 2014, potential blockers have been largely removed & Columnstore is going to profoundly change the way we interact with our data.

I’ve been working with Columnstore Indexes since Denali alpha bits were available.  As SQL CAT Customer Lab PM, I hosted over a half-dozen customers in my lab proving out our builds, finding & entering bugs, & working directly with the product group & our customers to fix them. 

The Why

Why Columnstore?  If we’re looking for a subset of columns from one or a few rows,  given the right indexes, SQL Server has long been able to do a superlative job of providing an answer.  But if we’re asking a question which by design needs to hit lots of rows—reporting, aggregations, grouping, scans, DW workloads, etc., SQL Server has never had a good mechanism—until Columnstore.  Columnstore was a competitive necessity—our Sybase & Oracle customers needed a solution to satisfy what was heretofore a significant feature & performance deficit in SQL Server.  Our leadership & product team stepped up & provided a superb response.
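To make the contrast concrete, here’s a minimal, hypothetical sketch (the table & column names are made up for illustration, not taken from my deck): a DW-style fact table with a SQL Server 2014 clustered columnstore index, & the kind of scan-and-aggregate query such an index is designed to accelerate.

    -- Illustrative fact table; names are made up for the example.
    CREATE TABLE dbo.FactSales
    (
        SalesDateKey  int            NOT NULL,
        ProductKey    int            NOT NULL,
        StoreKey      int            NOT NULL,
        Quantity      int            NOT NULL,
        SalesAmount   decimal(18, 2) NOT NULL
    );

    -- SQL Server 2014 allows a clustered (updatable) columnstore index;
    -- 2012 offered only a nonclustered, read-only columnstore index.
    CREATE CLUSTERED COLUMNSTORE INDEX ccsi_FactSales ON dbo.FactSales;

    -- The "hit lots of rows" question columnstore is built for:
    -- an aggregation over the whole fact table.
    SELECT ProductKey,
           SUM(SalesAmount) AS TotalSales,
           SUM(Quantity)    AS TotalQuantity
    FROM dbo.FactSales
    GROUP BY ProductKey;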

The Presentation

I’ve delivered my Columnstore presentation over 20 times to audiences internal & external, small & large, remote & in-person, including the 2013 PASS Summit, two major Microsoft conferences (TechReady 17 & TechReady 18), & several PASS user groups (BI Virtual chapter, IndyPASS, Olympia, PNWSQL, Salt Lake City, Utah County, Denver, & Northern Colorado).

The deck has evolved significantly & includes a broad overview, architecture, best practices, & an amalgam of exciting success stories.  The purpose is to educate you & convince you that Columnstore is a compelling feature, to encourage you to experiment, & to help you determine whether Columnstore could justify upgrading to SQL Server 2014.

The Table of Contents

Here’s my deck’s ToC:

  • Overview
  • Architecture
  • SQL Server 2012 vs. new! improved! 2014
  • Building Columnstore Indexes
  • DDL
  • Resource Governor
  • Data Loading
  • Table Partitioning
  • Scenarios & Successes
    • Motricity
    • MSIT Sonar
    • DevCon Security
    • Windows Watson
    • MSIT Problem Management
  • Room for Improvement
  • Learnings & Best Practices
  • More Info

The Demos

I’ve included several demos, all of which are exceedingly simple & include step-by-step walkthroughs.

  • Conventional Indexes vs. Columnstore Perf
  • DDL
  • Resource Governor
  • Table Partitioning

Let me know if you have any questions.  In the meantime, enjoy!

Optimize DISTINCTCOUNT in #dax with SQL Server 2012 SP1 CU 9 #ssas #tabular

If you use DISTINCTCOUNT measures in DAX, you know performance is usually great, but you might also have observed that performance slows down when the resulting number is high (depending on other conditions, it starts degrading when the result is between 1 and 2 million).

If you have seen that, there is good news. Microsoft fixed this issue (KB2927844) in SQL Server 2012 SP1 Cumulative Update 9. The performance improvement is amazing. With this fix, I have queries previously running in 15 seconds (cold cache) now running in less than 5 seconds. So if you have Tabular databases with a column containing more than 1 million distinct values, it’s probably worth testing this update. It’s also available for Power Pivot for Excel 2010, but not for Excel 2013 (as far as I know – Power Pivot for Excel 2013 updates are included in Excel updates). You can request SP1 CU9 here: http://support.microsoft.com/kb/2931078.

Please consider that the build of Analysis Services that fixes this issue is 11.0.3412 (so any later build should not require this hotfix – a useful note for readers coming here in the future, when newer builds are available).