Category Archives: Windows Azure

Review of the book Learning Windows Azure Mobile Services for Windows 8 and Windows Phone 8

Recently I had the opportunity to read the book Learning Windows Azure Mobile Services for Windows 8 and Windows Phone 8 written by Geoff Webber-Cross (@webbercross) and published by Packt Publishing.

In the last year, Windows Azure has expanded its offering of cloud-based services hosted on the Windows Azure platform. One of these new services is Windows Azure Mobile Services, which allows developers to build web-connected applications easily.

Before reading this book, my knowledge of Windows Azure covered other topics such as SQL Azure Database, Storage, and Windows Azure Virtual Machines. When I heard about the opportunity to read and review this book, I thought it was a great chance to learn something new about the services Windows Azure offers for mobile applications.

The book covers all the features of Windows Azure Mobile Services, from preparing the Windows Azure Mobile Services Portal through to best practices for developing web-connected applications. When you start developing an app for Windows 8 or Windows Phone 8 with Windows Azure Mobile Services, you may want to know what software and hardware are needed; this topic is covered in the second chapter. Security, customization, notifications, and scalability are covered in chapters 3, 4, 5, and 6.

Another thing I appreciated about this book is its attention to the cost of services; many times while reading I came across sentences like this: “At this point, if we choose the … we will start incurring costs”. As confirmation, the topic “Choosing a pricing plan for services you wish to implement” is covered at the beginning of the first chapter.

There are lots of pictures in the book, which make it practical and easy to read. If you want to look inside the book you can download a sample chapter here; this is the table of contents:

  • Chapter 1: Preparing the Windows Azure Mobile Services Portal
  • Chapter 2: Start Developing with Windows Azure Mobile Services
  • Chapter 3: Securing Data and Protecting the User
  • Chapter 4: Service Customization with Scripts
  • Chapter 5: Implementing Push Notifications
  • Chapter 6: Scaling Up with the Notifications Hub
  • Chapter 7: Best Practices for Web-connected Apps

This book should not be missing from your digital or physical library. Enjoy!

SQL Server 2014 brings on-premises and cloud database together to improve data availability and disaster recovery

With the recently disclosed general availability of SQL Server 2014, Microsoft brings to market new hybrid scenarios, enabling customers to take advantage of Microsoft Azure in conjunction with on-premises SQL Server.

SQL Server 2014 helps customers protect their data and make it more highly available using Azure. SQL Server Backup to Microsoft Azure builds on functionality first introduced in SQL Server 2012, adding a UI for easily configuring backup to Azure from SQL Server Management Studio (SSMS). Backups are encrypted and compressed, enabling fast and secure cloud backup storage. Setup requires only Azure credentials and an Azure storage account. For help getting started, this step-by-step guide will get you going with the easy, three-step process.

Storing backup data in Azure is cost-effective, secure, and inherently offsite, making it a useful component in business continuity planning. A March 2014 study on Cloud Backup and Disaster Recovery, commissioned by Microsoft and conducted by Forrester Consulting, found that saving money on storage is the top benefit of cloud database backup, cited by 61%, followed closely by the 50% who said savings on administrative cost was a top reason for backing up to the cloud. Backups stored in Azure also benefit from Azure's built-in geo-redundancy and high service levels, and can be restored to an Azure VM for fast recovery from onsite outages.

In addition to the SQL Server 2014 functionality for backing up to Azure, we have now made generally available a free standalone SQL Server Backup to Microsoft Azure Tool that can encrypt and compress backup files for all supported versions of SQL Server, and store them in Azure—enabling a consistent backup-to-cloud strategy across your SQL Server environments. This fast, easy-to-configure tool enables you to quickly create rules that direct a set of backups to Azure rather than local storage, as well as select encryption and compression settings.
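To make the “rules” idea concrete, here is a small sketch of how such a redirection rule might be modeled. The class and field names are my own illustration, not the tool's actual configuration format:

```python
import fnmatch
from dataclasses import dataclass

@dataclass
class BackupRule:
    # Hypothetical model of a redirection rule: match backup files by
    # pattern and route them to Azure rather than local storage.
    pattern: str            # e.g. "*.bak"
    to_azure: bool          # send matches to an Azure storage account?
    compress: bool = True   # the tool also lets you pick compression...
    encrypt: bool = True    # ...and encryption settings per rule

def destination(filename: str, rules: list) -> str:
    """Return where a backup file would be routed under the given rules."""
    for rule in rules:
        if fnmatch.fnmatch(filename, rule.pattern):
            return "azure" if rule.to_azure else "local"
    return "local"  # no rule matched: leave the backup on local storage

rules = [BackupRule(pattern="*.bak", to_azure=True)]
print(destination("nightly_full.bak", rules))  # -> azure
print(destination("trace.log", rules))         # -> local
```

The real tool makes the same kind of decision per backup file; this just shows the pattern-to-destination mapping in miniature.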

Another new business continuity planning scenario enabled by SQL Server 2014 is disaster recovery (DR) in the cloud. Customers can now set up an asynchronous replica in Azure as part of an AlwaysOn high availability solution. A new SSMS wizard simplifies the deployment of replicas on-premises and to Azure. As soon as a transaction is committed on-premises, it is sent asynchronously to the cloud replica. We still recommend you keep your synchronous replica on-premises, but by having the additional replicas in Azure you gain improved DR and can reduce the CAPEX and OPEX costs of physically maintaining additional hardware in additional data centers.

Another benefit of keeping an asynchronous replica in Azure is that the replica can be used efficiently for read workloads such as BI reporting, or for taking backups, speeding up the backup-to-Azure process since the secondary is already in Azure.

But the greatest value to customers of an AlwaysOn replica in Azure is the speed to recovery. Customers are finding that their recovery point objectives (RPO) can be reduced to limit data loss, and their recovery time objectives (RTO) can be measured in seconds:

  • Lufthansa Systems is a full-spectrum IT consulting and services organization that serves airlines, financial services firms, healthcare systems, and many more businesses. To better anticipate customer needs for high-availability and disaster-recovery solutions, Lufthansa Systems piloted a solution on SQL Server 2014 and Azure that led to faster and more robust data recovery, reduced costs, and the potential for a vastly increased focus on customer service and solutions. They expect to deploy the solution on a rolling basis starting in 2014.
  • Amway is a global direct seller. Amway conducted a pilot test of AlwaysOn Availability Groups for high availability and disaster recovery. With multisite data clustering and failover to databases hosted both on-premises and in Azure, Amway found that the test of SQL Server AlwaysOn with Azure replicas delivered 100 percent uptime, and failover took place in 10 seconds or less. The company is now planning how best to deploy the solution.

Finally, SQL Server 2014 enables you to move your database files to Azure while keeping your applications on-premises for bottomless storage in the cloud and greater availability. The SQL Server Data Files in Microsoft Azure configuration also provides an alternative storage location for archival data, with cost effective storage and easy access.

If you're ready to evaluate how SQL Server 2014 can benefit your database environment, download a trial here. For greater flexibility deploying SQL Server on-premises and in the cloud, sign up for a free Azure evaluation. And, to get started backing up older versions of SQL Server to Azure, try our free standalone backup tool. Also, don't forget to save the date for the live stream of our April 15 Accelerate Your Insights event to hear more about our data platform strategy from CEO Satya Nadella, COO Kevin Turner and CVP of Data Platform Quentin Clark.

Thoughts on Office 365, Windows Azure Active Directory, Yammer & Power BI

This week a SharePoint conference took place somewhere, and I took more than a passing interest because it clearly wasn’t a SharePoint conference; it was an Office 365/Yammer conference, and as far as I can discern the big takeaways were:

It was interesting to me because Power BI is something that is on my radar and which is delivered via Office 365. This got me thinking about scenarios where Power BI & Yammer could play together more effectively.

The BI delivery team that I currently work for is trying to find ways to make the information we produce more discoverable and more accessible, and to promote the use of that information throughout the company. The company is an Office 365 customer; however, they pretty much use it only as an email & IM provider – none of the SharePoint-y stuff is used. The company is also a Yammer customer.

The confluence of Yammer and Power BI might make an interesting story here. Imagine, for example, the ability to build a Power View report using Power BI and then share that throughout the organisation using Yammer, perhaps via a Yammer group. Anyone viewing their Yammer feed would be able to view and interact with that Power View report without leaving Yammer. I’m not talking about simply viewing an image of a report either – I’d want to be able to slice’n’dice that report right within my Yammer feed.

I’ve long thought that we need to think of new ways of delivering BI to the masses and I believe social collaboration tools present a great opportunity to do that. I’m excited about what Yammer + Power BI could bring, let’s hope Microsoft don’t royally screw it up.

I still believe that Microsoft’s Master Data Services (MDS) should be offered through Power BI and again the opportunity to collaboratively compile and discuss data that resides in MDS is compelling. I see no reason why people wouldn’t want to change MDS data from within their Yammer feed – why would we force them to go elsewhere? Again I opine, bring the data to wherever your users are, don’t make them go somewhere else.

Hidden away behind all of the announcements was the implicit assertion that Windows Azure Active Directory is critical to Microsoft’s cloud efforts. Office 365 sits on top of Windows Azure Active Directory and I don’t think many people realise the significance of that. Whoever manages your company’s employees’ identities has a huge opportunity for selling new stuff to you, and that’s why Windows Azure Active Directory is free. This is not a new play for Microsoft; over the past 20 years or so they’ve become a huge player in the corporate landscape, and that’s in no small way down to Active Directory – own the identity and you can sell them other stuff like SharePoint, Windows, SQL Server etc… By allowing you to extend your Active Directory into the cloud and have pervasive groups, it’s not far off being a no-brainer for companies to use Windows Azure & Office 365.

Active Directory in the cloud, public and private groups, identity management, developer APIs … those are the big plays here, and this is very much like what I described in my blog post Windows Live Groups predictions and “Active directory in the cloud”. The names and players have changed but the concepts I outlined there are now happening. Back then I said:

[This] gives rise to the idea of Groups becoming something analogous to an "active directory in the cloud". This is a disruptive idea partly because it could become the mechanism by which Microsoft grant access to their online properties in the future.

Even more powerful is the idea that 3rd party websites that authenticate visitors … could use Groups to determine what each user can do on that site. Groups will become part of an authentication infrastructure that anyone in the world can leverage.

This "active directory in the cloud" idea relies on a robust API that allows a 3rd party site to add and remove people from groups.

Believe it or not, that was six years ago. I don’t want to say I told you so, but…


Data-intensive Applications in the Cloud Computing World

Building data-intensive applications in emerging cloud computing environments is fundamentally different and more exciting.  The levels of scale, reliability, and performance are as challenging as anything we have previously seen.  Databases are still prevalent in design, but new patterns and storage options need to be considered, as well.

To provide a little context, I have developed and supported database software for over 30 years.  I started with IMS/DLI and CICS/VSAM, then quickly moved to DB2 while it was still in beta (System R).  I became a pretty hard core RDBMS expert with 11 years of DB2 experience and over 20 years of Microsoft SQL Server experience.  I have been involved with some of the largest RDBMS projects in the world. (Example: a reliable, large-scale application in Europe that is available 24/7 and is designed to process up to 500,000 batch requests/sec, which equates to greater than 4 million SQL statements/sec.)  Before the emerging era of cloud computing, my database thinking was all about scale-up computing with transaction latency measured in a few microseconds and IOPS measured in many GBs/sec.

For the past 18 months, my team has worked with customers to build applications on the Windows Azure platform.  We’ve learned a lot about scale-out distributed computing—composing applications and solutions using different sets of services and resources while exploiting cloud platform fundamentals such as scalability, availability, and manageability.  We’ve learned that developing data-intensive applications to a set of online services is very different than writing traditional client/server applications; for some specifics, see my previous post on “Designing Great Cloud Applications”.

In the remainder of this post, I take a high-level look at the role of databases in cloud-based applications.

The design pattern and use of a database for a cloud-based application is different and, generally, expanded.  You still have the need to store the persisted database transactions of the traditional RDBMS application.  And, due to the use of distributed computing resources in a cloud-based application with higher standards for reliability, performance, and manageability, you also need extensive telemetry data captured about the entire application—if you want to build a great cloud application.

When we first started working with customers writing cloud-based data-intensive applications, most would use a relational database like Windows Azure SQL Database for all data storage, including telemetry data.  This could be expected because developers often use the tool(s) they are most familiar with, TSQL provides a quick and well-known interface to get data in and out of the database, and relational databases generally take care of threading and concurrency for developers.  At the time, the default thinking was that most data belongs in a traditional relational database where data is always stateful and carries atomic transactional properties.  However, in the distributed cloud computing environment, scale will likely come from the implementation of stateless as well as stateful data properties.  The new paradigm shifts us away from the use of a traditional RDBMS for all data.

An Example

Let me show you an example of a cloud application where multiple data stores are used.  The architecture is for an online gaming experience.  This application is designed to manage several thousand concurrent users and can scale out at several points, as needed.  After the diagram, I will explain the different functions of the application, the type of data store used for each function, and why that particular type of data store is used for that function.  As I describe each function, I will refer to a number in the diagram as a reference.


Function: Login and Initialize Profile — Looking at the bottom of the diagram, you see three users; let’s start there.  These users log in and their sessions are assigned to a Windows Azure web role (#1).  The web role hosts them while they are active on the system.  The first step is to authenticate them and bring their profiles into Windows Azure Cache (#5).  Their profiles are stored in Windows Azure SQL Database (#2).  Complex queries retrieve profile data from the SQL Database by joining data (game history, scores, activity, etc.) from multiple relational tables to store in the Cache.  A relational database is best suited for this type of persistent storage and complex query activity.  The user profile also needs to be updated when information changes, so the same types of complex transactions are required to update the information back into the SQL Database.  This is a good example of where a traditional relational database is best utilized as your data store.
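The profile-loading flow described above – check the cache first, fall back to a database query on a miss, and refresh the cache on updates – is a classic cache-aside pattern. As a minimal sketch with in-memory stand-ins for Windows Azure Cache and SQL Database (all data and names are illustrative, not a real Azure API):

```python
# In-memory stand-ins for Windows Azure Cache and SQL Database.
cache = {}

# Pretend relational tables: profile data joined with game history.
profiles_table = {"alice": {"name": "Alice"}}
history_table  = {"alice": [{"game": "chess", "score": 1200}]}

def load_profile(user_id: str) -> dict:
    """Return the user's profile, populating the cache on a miss."""
    if user_id in cache:                      # cache hit: no database round trip
        return cache[user_id]
    profile = dict(profiles_table[user_id])   # the "complex query": join profile...
    profile["history"] = history_table.get(user_id, [])  # ...with history/scores
    cache[user_id] = profile                  # keep it hot for the session
    return profile

def update_profile(user_id: str, **changes) -> None:
    """Write changes back to the database and invalidate the cache."""
    profiles_table[user_id].update(changes)
    cache.pop(user_id, None)                  # next read reloads fresh data

p = load_profile("alice")
print(p["history"][0]["game"])  # -> chess
```

In the real application the dictionaries would be Windows Azure Cache and SQL Database calls, but the hit/miss/invalidate logic has the same shape.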

Function: Play Games and Perform Online Activities — After the users are logged in and their profiles are in Cache, they can start the gaming experience.  As you might expect, the gaming experience will be all in Cache (#5).  The Cache is a high performance data store and is the obvious place to store active game data.  Because Cache is non-durable, leaderboard, profile, and friend information is pushed out to other data stores for persistence.

Function: Documenting and Updating Activities — All active game activity is recorded while the users are playing, and this activity needs to be durable during play while constantly making changes to it.  Activity data is stored in a Queue (#4).  This is a durable Queue, so unlike the Cache, activity data is not lost if an outage takes place.  Data stored in the Queue is processed by “activity processors” (hosted as Windows Azure worker roles) that process the data, carry out application logic, and persist results and history.
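The queue-plus-worker arrangement can be sketched the same way; here a Python deque stands in for the durable Azure Queue, and a simple function plays the “activity processor” worker role (illustrative only, not the Azure Queue API):

```python
from collections import deque

# Stand-in for the durable Azure Queue that records in-game activity.
activity_queue = deque()

def record_activity(user: str, action: str) -> None:
    """Producer side: the game pushes each action onto the durable queue."""
    activity_queue.append({"user": user, "action": action})

def activity_processor(history_store: dict) -> int:
    """Consumer side: a worker-role stand-in that drains the queue,
    applies application logic, and persists results as history."""
    processed = 0
    while activity_queue:
        event = activity_queue.popleft()
        history_store.setdefault(event["user"], []).append(event["action"])
        processed += 1
    return processed

history = {}
record_activity("alice", "move:e4")
record_activity("alice", "move:Nf3")
print(activity_processor(history))  # -> 2
print(history["alice"])             # -> ['move:e4', 'move:Nf3']
```

Decoupling producers from consumers like this is what lets the game keep running at cache speed while durable processing catches up behind it.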

Function:  Activity History — For each user, all activity is stored and kept as history.  Activity history is persisted on a periodic basis from the active Queue (#4) to a NoSQL store (#3).  This NoSQL store is a table, using the Windows Azure Table service.  The table is used because the data is mostly write-only, with a requirement to easily grow in place and little need for complex query activity against it.  So, the Table service is the best store for this type of data activity.

Function: Friend Interaction and Leaderboard — While users are playing, they can communicate and interact with friends (other users) in the system.  They also might want to keep tabs of the leaderboard.  Friend and leaderboard data changes often but not constantly, so this data is best stored in an Azure SQL Database (#7).  The relational database is updated often, and the “cache tasks” role continuously pulls the latest information and ensures the active cache (#5) is always updated with the latest leaderboard and friend information through a query to the SQL Database.  

Function: Data Warehouse — All user profile data and activity data is stored in a data warehouse (#6) for reporting purposes.  Unstructured data from the Azure Table service is stored in Hadoop (Windows Azure HDInsight), and structured data is stored in a relational data warehouse (Azure SQL Database).  


In summary, you can see that this single application uses five different data storage options:  Windows Azure Cache, Azure SQL Database, Azure Queue service, Azure Table service, and Azure HDInsight (Hadoop).  Each type of store was chosen because it represents the best option for the transactional needs of the operation being executed:

  • Windows Azure Cache is used for performance (non-durable).
  • Azure SQL Database is used for strong transactional consistency and for complex query needs.
  • Azure Queue service is used for performance with heavy activity (durable).
  • Azure Table service (NoSQL) is used for heavy inserts (write-only) and the need to grow easily and quickly.
  • Azure HDInsight (Hadoop) is used for reporting against unstructured data.

If this were an on-premises application, you could have used multiple data stores too, but the overhead of procuring, installing, and configuring all of these sources adds time and money to your solution.  As the diagram below suggests, with only a couple of clicks in the Windows Azure portal, you can have any of these data sources installed, configured, and up and running.

My world has changed.  I am no longer just a relational database developer.  The Windows Azure platform and Microsoft cloud services make it easy to use the best data store for whatever task I am trying to accomplish—and, in many cases, this means using several different types of data stores. For more information about our data platform vision and the future of data-intensive applications on the Windows Azure platform, see Quentin Clark’s blog, “What Drives Microsoft’s Data Platform Vision?”

Mark Souza
General Manager
Windows Azure Customer Advisory Team

Microsoft Cloud OS Network launches today

Today Microsoft launches the Cloud OS Network, a global consortium of more than 25 cloud service providers delivering services built on the Microsoft Cloud Platform: Windows Server with Hyper-V, System Center, and the Windows Azure Pack.


The companies joining the network stand behind Microsoft's Cloud OS vision of a unified platform spanning customer datacenters, Windows Azure, and service-provider clouds. Members of the Cloud OS Network offer Microsoft-validated, cloud-based infrastructure and application solutions designed to meet customer needs.


Read more about the Cloud OS Network on the international Microsoft blog. If you are curious to hear a provider's perspective, Telecomputing describes its involvement here.


Clone an Azure VM using Powershell

In a few months’ time I will, in association with Technitrain, be running a training course entitled Introduction to SQL Server Data Tools. I am currently putting together hands-on lab material for the course delegates and have decided that, to save time asking people to install software during the course, I am simply going to prepare a virtual machine (VM) containing all the software and lab material for each delegate to use. As an MSDN subscriber, it makes sense to use Windows Azure to host those VMs given that it will be close to, if not completely, free to do so.

What I don’t want to do, however, is separately build a VM for each delegate; I would much rather build one VM and clone it for each delegate. I’ve spent a bit of time figuring out how to do this using PowerShell and in this blog post I am sharing a script that will:

  1. Prompt for some information (Azure credentials, Azure subscription name, VM name, username & password, etc…)
  2. Create a VM on Azure using that information
  3. Prompt you to sysprep the VM and image it (this part can’t be done with Powershell so has to be done manually, a link to instructions is provided in the script output)
  4. Create three new VMs based on the image
  5. Remove those three VMs
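The script itself is PowerShell and is linked below; purely to illustrate the shape of steps 4 and 5 (clone from an image, then remove the clones), here is a language-neutral sketch in Python in which the Azure operations are hypothetical stubs, not real SDK calls:

```python
def create_vm_from_image(name: str, image: str) -> str:
    """Hypothetical stand-in for provisioning a VM from a captured image.
    A real implementation would call the Azure management API."""
    return name

def remove_vm(name: str) -> None:
    """Hypothetical stand-in for deleting a VM."""
    pass

base_name, image = "ssdt-lab", "ssdt-lab-image"

# Step 4: create three clones from the sysprepped image,
# suffixed 001, 002, 003 as the script does.
clones = [create_vm_from_image(f"{base_name}{i:03d}", image)
          for i in range(1, 4)]
print(clones)  # -> ['ssdt-lab001', 'ssdt-lab002', 'ssdt-lab003']

# Step 5: remove them again.
for vm in clones:
    remove_vm(vm)
```

The point is simply that once an image exists, cloning is a loop over names; the PowerShell script does the real provisioning work.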


The script has one prerequisite that you will need to install: Windows Azure PowerShell. You also need to be a Windows Azure subscriber which, if you’re reading this blog post, I’m assuming you already are.

Simply download the script and execute it within PowerShell. Assuming you have an Azure account, it should take about 20 minutes to execute (spinning up VMs and shutting them down isn’t instantaneous). If you experience any issues please do let me know.

There are additional notes below.

Hope this is useful!



  • Obviously there isn’t a lot of point in creating some new VMs and then instantly deleting them. However, this demo script does provide everything you need should you want to do any of these operations in isolation.
  • The names of the three VMs that get created will be suffixed with 001, 002, 003 but you can edit the script to call them whatever you like.
  • The script doesn’t totally clean up after itself. If you specify a service name & storage account name that don’t already exist then it will create them; however, it won’t remove them when everything is complete. The created image file will also not be deleted. Removing these items can be done by visiting
  • When creating the image, ensure you use the correct name (the script output tells you what name to use):


  • Here are some screenshots taken from running the script:



  • When the third and final VM gets removed you are asked to confirm via this dialog:


Select ‘Yes’

Data Science and the Cloud

More than perhaps any other computing discipline, Data Science lends itself to Cloud Computing in general, and Windows Azure in particular. That’s a big claim, but before I offer some evidence, I need to explain what I mean by “Data Science”. I’ve written before on Data Science, but since it’s an evolving field, here’s what I’ve observed as the areas that a Data Scientist focuses on:

  • Research – Standard researching techniques such as domain knowledge, data sources and impact analysis
  • Statistics – Probability and descriptive statistics-focused
  • Programming – At least one functional or object-oriented language, often Python, F#, Lisp, Haskell, Java, or JavaScript
  • Sources of data – Internal organizational data as well as external sources such as weather, economics, spatial, geo-political sources and more
  • Data movement – Traditional Extract, Transform and Load (ETL), along with ingress or referencing external data sources
  • Complex Event Processing (CEP) – Analyzing or triggering computing as data moves through a source
  • Data storage – Storage systems including distributed storage and remote storage
  • Data processing – Both single-node and distributed processing systems: RDBMS, NoSQL (Hadoop, key/value pair, document store, graph databases, etc.)
  • Machine learning – Data-instructive programming as well as Artificial Intelligence and Natural Language Processing
  • Decision analysis – Interpreting the processing of data to identify a pattern, make a prediction, and data mining
  • Business Intelligence – Design of exploratory data, visualizations, business and organization impacts and communication to the stakeholders of the use of data and visualization tools

There are of course other aspects of data science, but I believe this list covers the majority of skills I’ve seen in individuals with the Data Scientist title. And it is normally an individual, or at least a very limited group of people. As you examine the list above, you can see this person requires a fairly extensive technical background, and in the domain-knowledge area in particular there’s a pretty large time element. That isn’t to say a very bright person couldn’t ramp up on these areas, just that having all of that in your portfolio takes time.

Given that these are the skillsets, why is cloud computing well suited to assisting in the data science function?

It’s obvious that a researcher needs good Internet skills, beyond simply referencing a Wikipedia article – although that’s certainly a good thing to include from time to time. While searching isn’t specific to Windows Azure, there are platform components that allow the programming function to call out to the web for data access. Windows Azure includes a platform that allows languages from Python to F#, JavaScript (Including NodeJS), Java and more.

Cloud computing allows the data scientist to access data stored in Windows Azure (Blobs, Tables, Queues, RDBMSs as a service such as SQL Server and MySQL) as well as IaaS systems that can run full RDBMS systems such as SQL Server, Oracle, PostgreSQL and others. In addition, the Windows Azure Marketplace contains “Data as a Service” offerings, with both free and fee-based data to include in a single application.

The Windows Azure Service Bus allows you to architect a CEP system, and SQL Server provides the StreamInsight feature; these can communicate across on-premises systems, Windows Azure IaaS and PaaS, and other data sources.

For data storage and computing, Windows Azure allows everything from traditional RDBMSs as described to any NoSQL system in IaaS, on both Windows and Linux operating systems. Statistical packages such as “R” are also supported. The elasticity allows the data scientist to spin up huge clusters, such as Hadoop or other NoSQL offerings, perform some analysis, and then stop the process when complete, saving cost and bypassing the internal IT systems (which may have its own dangers, to be sure).  Windows Azure also offers the High Performance Computing (HPC) version of Windows Server on Windows Azure, for large-scale massively parallel data processing, in constant and “burst” modes.

In addition, Windows Azure has many services, such as the HDInsight Service (Hadoop on demand) and other analysis offerings, that don’t even require the data scientist to stand up and manage a Virtual Machine in IaaS. For visualization, Microsoft has included the ability to use Excel with the HDInsight Service, and of course that works with all Microsoft Business Intelligence functions; there are several other data visualization tools as well, such as Power View. You can enter the tools you have in the Microsoft stack into this tool for more on the visualization options you have. The data scientist can also build visualizations in web pages, on iPhone, Android or Windows mobile devices, or in full client-code installations.

Because of the need for elasticity, multiple operating systems, and changing landscapes for data and processing, data science is well served by cloud computing – and by Windows Azure in particular because of the services and features offered, not only on Microsoft Windows but also Open Source.


Successful Cloud Projects Start With The Plumbing

(Note – I’ll add to this post as new information is updated – latest post date is August 8th, 2013)

I’ve been working on cloud projects of all types for over three years now. Along the way, I’ve learned some basic patterns that make for a successful project – and also the things to avoid. The general steps depend a great deal on whether the project is an Infrastructure, Platform or Service deployment, and also if it is a hybrid or completely cloud architecture. In all cases, what you do before deploying the system – the “plumbing”, if you will – turns out to be the key for a successful deployment.

If this is your first cloud deployment, I recommend working with your local Microsoft team or a partner you trust that has Windows Azure experience. They can help you through the process, and then you can take over from there. It’s a far faster, more successful route to a good deployment.

Accounts and Billing

Probably the most non-technical part of the project, and the one that causes the most issues, is setting up an account with the cloud provider and deciding how you will pay for it. But this needs to be done first. You have three progressions: no account (everything local), optionally an MSDN account for Dev and Test, and then on to the production account.

NOTE: The order here is not required. It’s simply a guide I use to progress from on-premises to Windows Azure. You can start directly in Windows Azure, and that’s completely OK. For a complete overview of accounts, check this resource:

Step One – Local Dev and Test

There are a couple of dependencies here. If you’re looking at an Infrastructure as a Service (IaaS) deployment of Virtual Machines, you’ll use Hyper-V to create images, whether that’s on-premises or through the portal on Windows Azure. For a local system, simply create the VMs using Hyper-V with the sizes and requirements shown here:

If you’re deploying a Platform as a Service (PaaS) application, it’s also quite simple. Download the Software Development Kit (SDK) from here: and then write your code. When you run the code, a Windows Azure emulator will fire up right on your laptop.

For Software as a Service (SaaS) offerings such as Windows Azure Media Services or HDInsight (Hadoop), there is no local testing other than the code or scripts you want to run. You’ll simply skip to step three.

Step Two – Dev and Test on Azure

You have two options for your development and testing environment. The first is to use your Microsoft Developer Network (MSDN) account, if you have one. If you do, you have “free” Windows Azure time built right in. It’s not a separate Windows Azure or any different from production Windows Azure – it’s the same data centers, servers, services and so on; it’s just billed differently.

There are some restrictions here – this isn't for production use, and you can't "bundle up" the hours or anything like that. It's the same as the software you use with your MSDN account. You can learn more about this here: There is a step-by-step activation guide here:

If you don’t have an MSDN subscription, you’ll need to create a regular, billable account for Dev and Test. This is the same process I’ll describe below.

Step Three – Deploy to Production

To set up an account, you’ll need to figure out how your company wants to pay for it. Remember, this is a “pay as you go” model, so the two routes you have are to pay a monthly bill (using a corporate credit card or a purchase order) or to pay ahead of time and draw down the money throughout the year under an “Enterprise Agreement” (EA). Get with your local Microsoft team to work out the best route and price. The general process is detailed here:

Figure out who will control these accounts right from the start. In general, one person should control Dev and Test, and another should control production. In any case, determine this before you start – I’ve seen projects fail not for technical reasons, but because no one checked on whether they could pay for the service.

Speaking of pricing, there are a couple of simple calculators you can use. If you followed the process above, you already have an idea of which resources you’re using and how much of each you used in testing. From there you can plug in the usage numbers from Dev and Test to get a prediction of how much production will cost.
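As a rough illustration of what those calculators do, here is a minimal sketch. The rates below are hypothetical placeholders, not real Windows Azure prices – always use the current published pricing for real planning:

```python
# Hypothetical per-unit rates - NOT real Windows Azure prices.
HYPOTHETICAL_RATES = {
    "compute_hours": 0.12,  # $ per small-instance hour (assumed)
    "storage_gb": 0.07,     # $ per GB-month (assumed)
    "egress_gb": 0.12,      # $ per GB transferred out (assumed)
}

def estimate_monthly_cost(usage):
    """Multiply each metered resource by its rate and total the result."""
    return sum(HYPOTHETICAL_RATES[name] * qty for name, qty in usage.items())

# Usage numbers observed during Dev and Test, scaled for production:
dev_test_usage = {"compute_hours": 720, "storage_gb": 100, "egress_gb": 50}
print(round(estimate_monthly_cost(dev_test_usage), 2))
```

The real calculators work the same way: multiply metered usage by published rates, then sum.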


 General info on pricing and billing:
 “Slider” calculator:

Planning and Education

In all cases, you need to start with a good plan. It’s true that you don’t know what you don’t know, so you’ll need to allow for some amount of adjustment. You still need to start with a good plan, however, even before you know what Windows Azure is or how it works. Your plan should start with what the project does when it is successful. That allows you to use the right technology to accomplish the goal.

I can’t overemphasize this step. It sounds simple – surely you know what you want the system to do, right? Yet so often I have seen teams start with how the system should work before they consider the hard-and-fast requirements of the system. And sometimes teams are unwilling to try another technology to solve the problem, instead clinging to the technology they know or like best.

After you have a solid understanding of the success metrics, it’s time to start learning. The route I recommend is an overview of the platform’s capabilities, and then a focus on the components you can use in your solution.


General overview of Windows Azure:

IaaS Hybrid Deployment

The “plumbing” for an IaaS Hybrid deployment needs the most preparation. You need to think about the connectivity, security, and DevOps before you deploy the first Virtual Machine.

To begin, follow the steps here to set up the Storage Accounts, Virtual Networks, and then the Availability Sets where the VMs will be deployed:


With Windows Azure, you can set up three types of connectivity from your on-premises network to Windows Azure VMs. The first is to use a public-facing TCP/IP address. While this isn’t the most secure route, it does have specific use-cases, such as a public-facing web application that you want to access from your internal systems. The Portal will show the public IP your system is assigned, and you control whether any endpoints are exposed – from there you can map them to your internal endpoints on the Virtual Machine, or even load-balance them if you like. More on that here:

The second method of connectivity is to set up a site-to-site VPN. In this option the Virtual Network you created in Azure (along with the VMs you put in that Virtual Network) is connected directly to your internal TCP/IP network – making a secure, transparent connection. The process for this connection is here:

Your next option is to set up a point-to-site VPN. This allows a single computer on your network to connect securely and directly to your Virtual Network (and the VMs you put in it) using only software – no hardware needed. Here is the process to do that: Read that entire page for context.


If you want single sign-on from your local Active Directory, you have two choices. One is to follow the process above to create the VPN connection, then deploy a Virtual Machine in Windows Azure and run dcpromo on it. From there it’s similar to an on-premises AD server.

Your second option is to use Windows Azure Active Directory – it’s a service that acts as an ADFS provider. You can learn all the details starting here:

PaaS Deployment

For a PaaS deployment, the primary plumbing considerations are the accounts and billing decisions I described above, security, and DevOps.

Accounting and billing can be more challenging in a PaaS environment, since you aren’t always sure how much the service will be used and when. To get more accurate predictions, you need to build monitoring and metrics right into your code. Your primary knobs and controls fall under Windows Azure Diagnostics – more on that here: Start with the main topic and follow *all* the links on the left-hand side of that page.

For security, the plumbing involves deciding on what type of authentication and access you want to use. The best place for references on that is here:

DevOps is becoming a huge concern for cloud deployments, and you need to think about how you’ll manage and monitor the PaaS application up front. Start with this reference and follow the links for more:


 General Guidelines are here:
 “Real World” guidance is here:

SaaS Deployment 

For a SaaS deployment, you’ll need to consider the accounts and billing, security, and DevOps links I mentioned above. In addition, you’ll need to consider data movement strategies from the outset. More information here:

Videos and Training on Windows Azure IaaS from TechEd New Orleans

I’m catching up on a bunch of features, functions, updates, and more from the recent TechEd event in New Orleans. In fact, videos, Windows Azure documentation, and of course blogs are the new way to keep up – books are just too slow to produce to handle the pace. I thought I’d share the links I’m using:

General IaaS

Best Practices from Real Customers: Deploying to Windows Azure Infrastructure Services (IaaS)
Building Your Lab, Dev, and Test Scenarios in Windows Azure Infrastructure Services (IaaS)
Infrastructure Services on Windows Azure: Virtual Machines and Virtual Networks with Mark Russinovich

Windows Azure Internals

Getting the Most out of Windows Azure Storage

Network and Connectivity, Hybrid and DR

Hybrid Networking Offerings in Windows Azure

Achieve High Availability with Microsoft SQL Server on Windows Azure Virtual Machines 
Designing and Building Disaster Recovery Enabled Solutions in Windows Azure

Specific Applications (SQL Server and SharePoint)

IaaS: Hosting a Microsoft SharePoint 2013 Farm on Windows Azure
Performance Tuning Microsoft SQL Server in Windows Azure Virtual Machines

Manage SQL Server Connectivity through Windows Azure Virtual Machines Remote PowerShell

Manage SQL Server Connectivity through Windows Azure Virtual Machines Remote PowerShell Blog

This blog post comes from Khalid Mouss, Senior Program Manager in Microsoft SQL Server.


The goal of this blog is to demonstrate how we can use PowerShell to automate connecting to multiple SQL Server deployments in Windows Azure Virtual Machines. We will configure a TCP port and open (and close) it through the Windows firewall from a remote PowerShell session to the Virtual Machine (VM). This demonstrates how to take advantage of the remote PowerShell support in Windows Azure Virtual Machines to automate the steps required to connect to SQL Server in the same cloud service and in different cloud services.

Scenario 1: VMs connected through the same Cloud Service

Two virtual machines are configured in the same cloud service, each running a different SQL Server instance.

Both VMs are configured with remote PowerShell enabled, so we can run PowerShell and other commands against them remotely in order to reconfigure them to allow incoming SQL connections from a remote VM or on-premises machine(s).

Note: RDP (Remote Desktop Protocol) is left enabled on both VMs by default so we can connect to them and check the connections to the SQL instances; this is for demo purposes only and is not actually required.

Step 1 – Provision VMs and Configure Ports


Provision VM1, named DemoVM1, as follows (see example screenshots below if using the portal):


Provision VM2 (DemoVM2) with PowerShell Remoting enabled and connected to DemoVM1 above (see example screenshots below if using the portal):

After provisioning the two VMs above, here are the default port configurations:

Step 2 – Verify/Confirm the TCP port used by the Database Engine

By default, the port will be configured as 1433 – this can be changed to a different port number if desired.


1. RDP to each of the VMs created above – this also ensures the VMs have completed Sysprep and finished their configuration

2. Go to SQL Server Configuration Manager -> SQL Server Network Configuration -> Protocols for <SQL instance> -> TCP/IP – > IP Addresses


3. Confirm the port number used by SQL Server Engine; in this case 1433

4. Change the server authentication from Windows Authentication to Mixed Mode


5. Restart the SQL Server service for the change to take effect

6. Repeat steps 3, 4, and 5 for the second VM: DemoVM2
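Once the instances are listening, a quick way to sanity-check the port configuration above is a plain TCP connection attempt. This is a minimal sketch using only the Python standard library (Python chosen for illustration); the host name and port in the commented call are placeholders for your own VM:

```python
import socket

def port_is_open(host, port, timeout=3.0):
    """Attempt a TCP connection; returns True if something is listening."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical check against the Database Engine's default port:
# print(port_is_open("DemoVM1", 1433))
```

A successful connection only proves the port is reachable; SQL authentication is still verified separately with SSMS or sqlcmd.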

Step 3 – Remote PowerShell to DemoVM1

Enter-PSSession -ComputerName -Port 61503 -Credential <username> -UseSSL -SessionOption (New-PSSessionOption -SkipCACheck -SkipCNCheck)

You will then be prompted to enter the password.

Step 4 – Open 1433 port in the Windows firewall

netsh advfirewall firewall add rule name="DemoVM1Port" dir=in localport=1433 protocol=TCP action=allow


netsh advfirewall firewall show rule name=DemoVM1Port

Rule Name:                            DemoVM1Port
Enabled:                              Yes
Direction:                            In
Profiles:                             Domain,Private,Public
LocalIP:                              Any
RemoteIP:                             Any
Protocol:                             TCP
LocalPort:                            1433
RemotePort:                           Any
Edge traversal:                       No
Action:                               Allow


Step 5 – Now connect from DemoVM2 to DB instance in DemoVM1

Step 6 – Close port 1433 in the Windows firewall

netsh advfirewall firewall delete rule name=DemoVM1Port


Deleted 1 rule(s).


netsh advfirewall firewall show rule name=DemoVM1Port

No rules match the specified criteria.


Step 7 – Try to connect from DemoVM2 to DB Instance in DemoVM1

Because port 1433 was closed in the Windows Firewall on VM1 (in step 6), we can no longer connect remotely from VM2 to VM1.
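The open/show/close firewall steps above lend themselves to a small helper that builds the netsh command strings. Here is a minimal sketch (Python chosen for illustration; in practice you would run the resulting strings inside the remote PowerShell session, e.g. via Invoke-Command):

```python
def build_firewall_commands(rule_name, port, protocol="TCP"):
    """Return the netsh commands used above to open, inspect, and close a port."""
    add = (f'netsh advfirewall firewall add rule name="{rule_name}" '
           f"dir=in localport={port} protocol={protocol} action=allow")
    show = f"netsh advfirewall firewall show rule name={rule_name}"
    delete = f"netsh advfirewall firewall delete rule name={rule_name}"
    return add, show, delete

add_cmd, show_cmd, delete_cmd = build_firewall_commands("DemoVM1Port", 1433)
print(add_cmd)
```

Generating the strings in one place keeps the rule name, port, and protocol consistent across the open and close steps.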

Scenario 2: VMs provisioned in different Cloud Services

Two virtual machines are configured in different cloud services, each running a different SQL Server instance. Both VMs are configured with remote PowerShell enabled, so we can run PowerShell and other commands against them remotely in order to reconfigure them to allow incoming SQL connections from a remote VM or on-premises machine(s).

Note: RDP (Remote Desktop Protocol) is left enabled on both VMs by default so we can connect to them and check the connections to the SQL instances; this is for demo purposes only and is not actually needed.

Step 1 – Provision new VM3

Provision VM3, named DemoVM3, as follows (see example screenshots below if using the portal):

After provisioning is complete, here are the default port configurations:

Step 2 – Add a public port to VM1 to connect to its DB instance from VM3

Since VM3 and VM1 are not in the same cloud service, we will need to specify the full DNS address, including the public port, when connecting between the machines. We will add a public port (57000 in this case) linked to private port 1433, which will be used later to connect to the DB instance.
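The resulting server name can be illustrated with a tiny sketch. The cloud service name below is hypothetical; the comma separator is the standard SQL Server "host,port" convention, and the cloudapp.net suffix is the standard DNS suffix for a cloud service:

```python
def full_server_name(cloud_service, public_port):
    """Build the SSMS-style server name: the cloud service's DNS address
    plus the public port, separated by a comma."""
    return f"{cloud_service}.cloudapp.net,{public_port}"

# Hypothetical cloud service name for DemoVM1:
print(full_server_name("demovm1service", 57000))
```

This is the value you would paste into the Server name box in SSMS in Step 5 below, instead of a plain machine name.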

Step 3 – Remote PowerShell to DemoVM1

Enter-PSSession -ComputerName -Port 61503 -Credential <UserName> -UseSSL -SessionOption (New-PSSessionOption -SkipCACheck -SkipCNCheck)

You will then be prompted to enter the password.


Step 4 – Open 1433 port in the Windows firewall

netsh advfirewall firewall add rule name="DemoVM1Port" dir=in localport=1433 protocol=TCP action=allow



netsh advfirewall firewall show rule name=DemoVM1Port

Rule Name:                            DemoVM1Port
Enabled:                              Yes
Direction:                            In
Profiles:                             Domain,Private,Public
LocalIP:                              Any
RemoteIP:                             Any
Protocol:                             TCP
LocalPort:                            1433
RemotePort:                           Any
Edge traversal:                       No
Action:                               Allow



Step 5 – Now connect from DemoVM3 to DB instance in DemoVM1

RDP into VM3, launch SSMS (SQL Server Management Studio), and connect to VM1’s DB instance as follows. You must specify the full server name using the DNS address and the public port number configured above.

Step 6 – Close port 1433 in the Windows firewall

netsh advfirewall firewall delete rule name=DemoVM1Port



Deleted 1 rule(s).


netsh advfirewall firewall show rule name=DemoVM1Port

No rules match the specified criteria. 

Step 7 – Try to connect from DemoVM3 to the DB Instance in DemoVM1

Because port 1433 was closed in the Windows Firewall on VM1 (in step 6), we can no longer connect remotely from VM3 to VM1.


Through the new support for remote PowerShell in Windows Azure Virtual Machines, one can script and automate many Virtual Machine and SQL Server management tasks. In this blog, we have demonstrated how to start a remote PowerShell session and reconfigure the Virtual Machine firewall to allow (or disallow) SQL Server connections.


SQL Server in Windows Azure Virtual Machines


Originally posted at