Category Archives: SQLServer

SQL Server 2014: A Closer Look

Microsoft SQL Server 2014 was announced by Quentin Clark during the Microsoft TechEd 2013 keynote. Designed and developed with our cloud-first principles in mind, SQL Server 2014 builds on the momentum of SQL Server 2012, released just 14 months ago. We are excited to share a closer look at some of the capabilities included in SQL Server 2014 that will help you unlock real-time insights with mission-critical and cloud performance.

SQL Server 2014 helps organizations by delivering:

  • Mission Critical Performance across all database workloads, with in-memory capabilities for online transaction processing (OLTP), data warehousing and business intelligence built in, as well as greater scale and availability
  • Platform for Hybrid Cloud enabling organizations to more easily build, deploy and manage database solutions that span on-premises and cloud
  • Faster Insights from Any Data with a complete BI solution using familiar tools like Excel

Mission Critical Performance with SQL Server 2014

SQL Server 2014 delivers new in-memory capabilities built into the core database for OLTP and data warehousing, which complement existing in-memory data warehousing and business intelligence capabilities for a comprehensive in-memory database solution. In addition to in-memory, there are new capabilities to improve the performance and scalability for your mission critical applications.

In-Memory Built In

  • New In-Memory OLTP – built into the core SQL Server database and uniquely flexible in working with traditional SQL Server tables, allowing you to improve the performance of your database applications without having to refresh your existing hardware. We are seeing customers such as EdgeNet and bwin achieve significant performance gains to scale and accelerate their business.
  • Enhanced In-Memory ColumnStore for Data Warehousing – now updatable with even faster query speeds and with greater data compression for more real-time analytics support.
  • New buffer pool extension support to non-volatile memory such as solid state drives (SSDs) – Increase performance by extending SQL Server in-memory buffer pool to SSDs for faster paging.
  • New Enhanced Query Processing – speeds all SQL Server queries regardless of workload.
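As a rough sketch of what the new In-Memory OLTP feature looks like in Transact-SQL (the database, filegroup, and table names here are invented, and the syntax follows the SQL Server 2014 CTP documentation), a memory-optimized table is declared alongside a special filegroup:

```sql
-- Hypothetical example: the database needs a MEMORY_OPTIMIZED_DATA filegroup first.
ALTER DATABASE SalesDb
    ADD FILEGROUP SalesDb_mod CONTAINS MEMORY_OPTIMIZED_DATA;
ALTER DATABASE SalesDb
    ADD FILE (NAME = 'SalesDb_mod', FILENAME = 'C:\Data\SalesDb_mod')
    TO FILEGROUP SalesDb_mod;

-- A memory-optimized table; hash indexes require an explicit bucket count.
CREATE TABLE dbo.ShoppingCart
(
    CartId      INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    UserId      INT NOT NULL,
    CreatedDate DATETIME2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```

Because the table lives alongside traditional disk-based tables in the same database, existing queries can join against it without an application rewrite.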

Enhanced Availability, Security and Scalability

  • Enhanced AlwaysOn – Builds on the significant capabilities introduced with SQL Server 2012 to deliver mission-critical availability, with up to 8 readable secondaries and no downtime during online indexing operations.
  • Greater scalability of compute, networking and storage with Windows Server 2012 R2 –

– Increased scale – Continue to benefit from scale for up to 640 logical processors and 4TB of memory in a physical environment and up to 64 virtual processors and 1TB of memory per VM.
– Network Virtualization – Abstracts the networking layer so that you can easily migrate SQL Server from one datacenter to another.
– Storage Virtualization with Storage Spaces – Create pools of storage and storage tiers allowing your hot data to access the premium storage and cold data to access standard storage improving resilience, performance and predictability.

  • Enhanced Resource Governance – With Resource Governor, SQL Server today helps you achieve scalability and predictable performance; in SQL Server 2014, new capabilities allow you to manage IO, in addition to compute and memory, for more predictable performance.
  • Enhanced Separation of Duties – Achieve greater compliance with new capabilities for creating roles and sub-roles. For example, a database administrator can now manage the data without seeing sensitive data or personally identifiable information.
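To illustrate the new IO governance, here is a hedged sketch (pool and group names are invented; the per-volume IOPS settings follow the SQL Server 2014 CTP documentation) that caps the IO a reporting workload may issue:

```sql
-- Hypothetical: cap a reporting workload at 500 IOPS per volume.
CREATE RESOURCE POOL ReportingPool
    WITH (MAX_IOPS_PER_VOLUME = 500);

-- Route sessions classified into this group onto the capped pool.
CREATE WORKLOAD GROUP ReportingGroup
    USING ReportingPool;

-- Apply the changes (also enables Resource Governor if it is off).
ALTER RESOURCE GOVERNOR RECONFIGURE;
```

A classifier function would then assign reporting logins to ReportingGroup, leaving OLTP sessions in the default pool with uncapped IO.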

Platform for Hybrid Cloud

SQL Server 2014 creates a strong platform for hybrid cloud where cloud scale can be leveraged to extend the scalability and availability of on-premises database applications as well as reduce costs.

Simplified Cloud Backup and Disaster Recovery

  • Backup to Azure Storage – Reduce costs and achieve greater data protection by backing up your on-premises database to Azure Storage at an instance level. Optimize backup policy with intelligence built in to SQL Server that monitors and tracks backup usage patterns to provide optimal cloud backup. Backups can be automatic or manual, and in case of an on-premises failure, a backup can be restored to a Windows Azure Virtual Machine.
  • AlwaysOn integration with Windows Azure Infrastructure Services – Benefit from Microsoft’s global data centers by deploying a Windows Azure Virtual Machine as an AlwaysOn secondary for cost-effective global data protection. Increase performance and scale reporting for your global business units by running reporting off the readable secondaries in Windows Azure. Run backups on the secondaries in Windows Azure to increase data protection and performance.
  • SSMS Wizard for deploying AlwaysOn secondaries in Windows Azure – Easily deploy AlwaysOn secondaries to a Windows Azure Virtual Machine with a point-and-click experience within SQL Server Management Studio (SSMS).
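A minimal sketch of backup to Azure Storage in Transact-SQL (the storage account name, container URL, and credential name below are placeholders, not real endpoints):

```sql
-- Hypothetical: a credential holding the storage account name and access key.
CREATE CREDENTIAL AzureBackupCredential
    WITH IDENTITY = 'mystorageaccount',
    SECRET = '<storage access key>';

-- Back the database up directly to a blob in Azure Storage.
BACKUP DATABASE SalesDb
    TO URL = 'https://mystorageaccount.blob.core.windows.net/backups/SalesDb.bak'
    WITH CREDENTIAL = 'AzureBackupCredential', COMPRESSION;
```

The same URL can later be handed to RESTORE DATABASE, whether on premises or on a Windows Azure Virtual Machine.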

Easy Migration of On-Premises SQL Servers to Windows Azure Virtual Machines

  • SSMS Migration Wizard for Windows Azure Infrastructure Services – Easily migrate an on-premises SQL Server database to a Windows Azure Virtual Machine with a point-and-click experience in SSMS. The newly deployed database application can be managed through SSMS or System Center 2012 R2.

Faster Insights on Any Data

SQL Server 2014 is at the heart of our modern data platform, which delivers a comprehensive BI solution that simplifies access to all data types, big and small, with additional solutions like HDInsight, Microsoft’s 100% Apache-compatible Hadoop distribution, and the project code-named “Data Explorer”, which simplifies access to internal or external data. New data platform capabilities like PolyBase, included in Microsoft Parallel Data Warehouse, allow you to integrate queries across relational and non-relational data using your existing SQL Server skills.

With SQL Server 2014, you can accelerate insights with our new in-memory capabilities with faster performance across workloads. You can continue to refine and manage data using Data Quality Services and Analysis Services in SQL Server and finally analyze the data and unlock insights with powerful BI tools built into Excel and SharePoint.

Learn more and sign up for the Preview

SQL Server 2014 brings to market many new exciting capabilities that will deliver tremendous value to customers. SQL Server 2014 can help you unlock real-time insights with mission critical and cloud performance along with one of the most comprehensive BI solutions in the marketplace today.

If you are at TechEd North America and want to learn more about SQL Server 2014 you will not want to miss the following key sessions:

Learn more about SQL Server 2014 and download new datasheet and whitepapers here.

Also, if you haven’t already, sign up for the SQL Server 2014 CTP 1 bits coming in a few weeks!

Insight Through Integration: SQL Server 2012 Parallel Data Warehouse – PolyBase Demo

SQL Server 2012 PDW has a feature called PolyBase that enables you to integrate Hadoop data with PDW data. Using PDW with PolyBase, a user can:

  1. Use an external table to define a table structure for Hadoop data.
  2. Query Hadoop data by running SQL statements.
  3. Integrate Hadoop data with PDW data by running a PDW query that joins Hadoop data to a relational PDW table.
  4. Persist Hadoop data in PDW by querying Hadoop and saving the results to a PDW table.
  5. Use Hadoop as an online data archive by exporting PDW data to Hadoop. Since the data is stored online in Hadoop, users can still retrieve it by querying it from PDW.
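The steps above can be sketched in PDW SQL roughly as follows (table names, columns, the HDFS path, and the field terminator are all invented for illustration; the CREATE EXTERNAL TABLE and CTAS syntax follows the PDW 2012 PolyBase documentation):

```sql
-- Step 1: define a table structure over files already sitting in Hadoop.
CREATE EXTERNAL TABLE dbo.HdfsClickLog
(
    ClickDate DATE,
    UserId    INT,
    Url       VARCHAR(1000)
)
WITH (
    LOCATION = 'hdfs://hadoop-head-node:8020/logs/clicks/',
    FORMAT_OPTIONS (FIELD_TERMINATOR = '|')
);

-- Steps 2 and 3: query the Hadoop data with plain SQL and join it
-- to a relational PDW table.
SELECT c.CustomerName, COUNT(*) AS Clicks
FROM dbo.HdfsClickLog AS h
JOIN dbo.Customers    AS c ON c.CustomerId = h.UserId
GROUP BY c.CustomerName;

-- Step 4: persist Hadoop results in PDW with CREATE TABLE AS SELECT.
CREATE TABLE dbo.ClickArchive
WITH (DISTRIBUTION = HASH(UserId))
AS SELECT * FROM dbo.HdfsClickLog;
```

PolyBase handles the HDFS reads behind the scenes, so the external table behaves like any other table in the query.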

In the video below, which highlights a solution to a problem that involves sending help to evacuate potential victims of a hurricane, Microsoft SQLCAT Senior Program Manager Murshed Zaman demonstrates how to solve a customer question using relational data from SQL Server 2012 Parallel Data Warehouse (PDW 2012) and non-relational data stored inside Hadoop. The demo shows how you can analyze data by combining the capabilities of Power View and Power Pivot for Excel, Hadoop, and PDW. The video focuses on the PolyBase feature of PDW 2012; Power Pivot and Power View were added to the demonstration to help visualize the data results.

For step-by-step instructions on creating the PowerView report please visit Cindy Gross’ blog "Hurricane Sandy Mash-Up: Hive, SQL Server, PowerPivot & Power View."

For more information on SQL Server Parallel Data Warehouse Appliance visit

Bootstrapping SQL Server bloggers and blog readers with Twitter!

On 17th December 2009, Aaron Nelson (you may know him as @sqlvariant) had a great idea – he invented the #SQLHelp hashtag. With a little kickstart from Brent Ozar the idea grew, and #SQLHelp became a successful Q&A channel in the SQL Server community that is today going from strength to strength.

I’m a great advocate of SQLHelp, and not just because it builds bridges between those needing help and the people able to provide it. It is also a great exemplar of the power of Twitter and, more specifically, the power of coalescing open data around a shared interest. As I thought more about this I figured there must be a way the SQL Server community could further leverage what I think is a nascent opportunity around hashtags. As my mind wandered I thought of Steve Gillmor’s post from 5th May 2009, Rest in Peace, RSS, in which he opined that RSS (the syndication technology that bootstrapped the blogging craze in the first decade of this century) should be replaced by Twitter feeds. Here’s a choice quote:

It’s time to get completely off RSS and switch to Twitter. RSS just doesn’t cut it anymore.

Steve isn’t averse to putting the cat among the pigeons with his blog posts, and in this case I think he has a salient point. Whilst RSS isn’t a consumer technology (i.e. none of my non-techie friends have a clue what it is), Twitter most definitely is. One downside of RSS (in my opinion) is that most blog authors simply publish their outpourings then hope they get some Google juice and catch people’s attention. On the other hand, there are still a lot of people that use RSS readers, and those people have a problem too – where do they find good bloggers and good blog material?

So, consider this:

  • Lots of people are blogging great stuff but don’t have a way of telling people about it
  • Lots of people want to learn from great bloggers but might not know where to go and find that material

Is there an opportunity to use Twitter to build bridges between bloggers and blog readers in a similar manner to how #SQLHelp has done between questioners and answerers? I think there is and that’s when I hit upon an idea – perhaps we as a community could (as expertly put it) put the internet to work for us.

Here is my suggestion. If you as a blog author tweet a link to a newly published SQL Server related blog post and use the hashtags




and also a hashtag to indicate the language then that tweet (and the all important link) will be available at One can then use Twitter’s ability to make search results available as an RSS feed and subscribe to that RSS feed in one’s RSS reader of choice.

Is that a good idea? I think it is, but then again it’s my idea so I would, wouldn’t I? I hope a few people out there will get on board with this initiative (perhaps even blog and tweet about it), and hopefully it can become a fraction as successful as SQLHelp.

Call to action for bloggers

If you as a blogger want to get involved with this initiative then it’s really very simple. Tweet a link to your SQL Server related blog posts along with a title and the following three hashtags:

  • #sqlserver
  • #blogged
  • ISO 639-1 code indicating the language that the blog post is written in

*ISO 639-1 is a standard for two-letter language codes. You can view the complete list on the International Organization for Standardization (ISO) website at although here are a few to get you on your way:

  • en – English
  • de – German
  • fr – French
  • es – Spanish
  • zh – Chinese

I would also encourage you to use other hashtags to more specifically define the subject matter as this might make for some interesting analysis later.

As an example, here is a tweet that I just tweeted for my blog post Obtaining rowcounts when using Composable DML [T-SQL]


Also, please blog about this yourselves (at the very least that gives you an opportunity to add your first tweet to the SQL Server twitter RSS stream).

Call to action for blog readers

If you are someone who enjoys reading SQL Server related blog posts and wants to get involved in this initiative, simply subscribe to the appropriate RSS feed in your RSS reader of choice and watch as (hopefully) great content flows into your RSS reader without you having to lift a finger. Here are a few such URLs:

Thanks to Dan English for pointing out in the comments that the search URL can be amended to remove retweets.

That’s all there is to it. Fingers crossed that this initiative catches on because there is a fantastic knowledge sharing opportunity here – let’s put the internet to work for us to make it happen.

I have one more thing to say, a line that I stole from my ex-colleague Howard van Rooijen, one which I am a great believer in and which I believe is very pertinent here:

Work smarter, not harder.


Disk and File Layout for SQL Server

Guest blog post by Paul Galjan, Microsoft Applications specialist, EMC Corporation. To learn more from Paul and other EMC experts, please visit – and join our Everything Microsoft Community.

The RAID group is dead – long live the storage pool!   Pools fulfill the real promise of centralized storage – the elimination of storage silos.  Prior to pool technology, when you deployed centralized storage you simply moved the storage silo from the host to within the array.  You gained some efficiencies, but it wasn’t complete.  Pools are now common across the storage industry, and you can even create them within Windows Server 2012, where they are called “Storage Spaces.”  This post is about how to allocate logical disks (LUNs) from pools so that you can maintain visibility into performance.  The methods described can be used with any underlying pool technology.

To give some context, here’s how storage arrays were typically laid out 10 years ago.

Layout of storage arrays ten years ago

A single array could host multiple workloads (in this case, Exchange, SQL, and Oracle), but usually it stopped there – spindles (disks) would be dedicated to a workload.  There were all sorts of goodies like virtual LUN migration that allowed you to seamlessly move workloads between the silos (RAID groups) within the array, but those silos were still there.  If you ran out of resources for Exchange, and had some spare resources assigned to SQL Server, then you’d have to go through gyrations to move those resources.  For contrast, this is how pool technology works:

Pool Technology

All the workloads are sharing the same physical resources.  When you run out of resources (either performance or capacity) you just add more. The method is really enabled by automatic tiering and extended cache techniques.  So the popularity of pool technology is understandable. Increasingly I see VNX and VMAX customers happily running with just one or two pools per array.

The question here is this: if you’re not segregating the workload at the physical resource level, is there any need to segregate the workloads at the logical level?  For example, if tempdb and my user databases are in a single pool of disks on the array, should I bother having them on multiple LUNs (logical disks) on the host?

If the database is performance sensitive, then the answer is “Yes.” If you don’t segregate them, you may have a difficult time troubleshooting problems down the road.  Take the example of a query that results in an extraordinarily large number of IOs.  If your tempdb is on the same LUN as your user databases, then you really don’t know where those IOs are destined.  Mixing workloads on one LUN also reduces your ability to deal with problems.  Pools may be the default storage option, but they’re not perfect, and not all workloads are appropriate for pools.  Segregating workloads into separate LUNs allows me to move them between pools, and in and out of RAID groups, without interrupting the database.

So here’s my default starting layout for any performance sensitive SQL Server environment:

  • Disk 1: OS/SQL Binaries
  • Disk 2: System databases (aside from tempdb)
  • Disk 3: tempdb
  • Disk 4: User databases
  • Disk 5: User DB transaction logs

This allows me to get a good view of things just from perfmon.  I can tell generally where the IO is going (user DBs, master, tempdb, logs, etc.), and if I need to move things around, I can do so pretty easily.
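The same per-file visibility is available from inside SQL Server. As a hedged sketch (standard DMVs, no invented objects), this query breaks IO and IO stalls down by database file, which maps cleanly to LUNs when the layout above is followed:

```sql
-- Per-file IO statistics: with one workload per LUN, these numbers
-- tell you directly which disk is doing the work or stalling.
SELECT
    DB_NAME(vfs.database_id)  AS database_name,
    mf.physical_name,
    vfs.num_of_reads,
    vfs.num_of_writes,
    vfs.io_stall_read_ms,
    vfs.io_stall_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
    ON  mf.database_id = vfs.database_id
    AND mf.file_id     = vfs.file_id
ORDER BY vfs.io_stall_read_ms + vfs.io_stall_write_ms DESC;
```

If tempdb shares a LUN with user databases, these counters (like perfmon’s) can still tell the files apart, but the underlying physical contention cannot be separated.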

PASS Summit 2012 Recap & the Milestones of SQL Server 2012

Last week marked the completion of a great week at PASS Summit 2012, the world’s largest technical training conference for SQL Server professionals and BI experts alike. During this year’s 3-day conference, nearly 4,000 attendees heard firsthand about the great advances being made toward managing big data. Over the course of two keynote speeches by Microsoft Corporate Vice Presidents Ted Kummert (Data Platform Group) and Quentin Clark (SQL Program Management), Microsoft announced the following:

  • Project codename “Hekaton,” a new in-memory technology that will be built directly into the data platform, will ship in the next major version of SQL Server.  Currently in private technology preview with a small set of customers, Hekaton completes the company’s portfolio of in-memory technologies across analytics, transactions, streaming and caching workloads, enabling business acceleration by shrinking the time from raw data to insights.
  • SQL Server 2012 Parallel Data Warehouse (PDW), the next version of Microsoft’s enterprise-class appliance, will be available during the first half of 2013.  SQL Server 2012 PDW includes PolyBase, a fundamental breakthrough in data processing that will enable queries across relational data and non-relational Hadoop data.
  • SQL Server 2012 SP1, which supports Office 2013 by offering business users enhanced, new capabilities for self-service business intelligence using familiar tools such as Excel and SharePoint, is now available for download here.

What’s more, on the final day of PASS Summit 2012, attendees were treated to the presentation, “Big Data Meets SQL Server 2012” by Microsoft Technical Fellow David DeWitt. 

All the while, conference participants attended a wide variety of technical sessions presented by industry experts in addition to a host of other programs. From on-site certification testing to hands-on labs, attendees were able to boost their technical skills using these resources, as well as work through technical issues with top Microsoft Customer Service and Support (CSS) engineers and get architectural guidance from the SQL Server, Business Intelligence and Azure Customer Advisory Teams (CAT). Of course, the learning didn’t stop there; attendees were invited to new “I Made That!” Developer Chalk Talks, which featured 30-minute casual talks with the Microsoft developers who worked on specific features and functionalities of SQL Server 2012. The topics appealed to many, ranging from AlwaysOn to Hekaton. You can see more great photos from PASS Summit 2012 on the SQL Server Facebook page or access the video interviews with Dave Campbell, Quentin Clark, and David DeWitt available at the SQL Server virtual press room.

And so, as we close on another year of PASS Summit, it’s the perfect time to look back and see how far we’ve come since the launch of SQL Server 2012.  Join us below, as we take a celebratory look at the milestones we’ve hit along the way, and let’s look together toward the bright future ahead!


CROSSMARK Uses SQL Server 2008 R2 Parallel Data Warehouse to Quickly Deliver Business Insights

With the growth of the consumer goods industry, sales and marketing campaigns have created large and complex databases that are hard to sift through without the right tools. As retailers approach the busy holiday shopping season, they need to have insights into the effectiveness of their campaigns. Retailers need to know what market trends are affecting their customers to maximize the reach of these campaigns and they need to be able to sort through all of this data quickly to find useful and actionable insights.

Every now and then, we like to highlight how our customers are using Microsoft’s database platform solutions to solve these types of needs in real time. One such customer is CROSSMARK, a provider of sales and marketing services for manufacturers and retail companies, which recently launched a new self-service data portal powered by SQL Server 2008 R2 Parallel Data Warehouse (PDW) to bring this data and these insights to its customers. SQL Server PDW’s on-demand data access will allow CROSSMARK’s customers to leverage shopper insights and data to inform strategies and tactics to create more effective sales and marketing campaigns to boost sales and profitability.

Before implementing SQL Server PDW, CROSSMARK had a reporting bottleneck in its legacy platform that wasn’t scalable, forcing employees to spend valuable time on data reporting instead of working with customers. Now, with SQL Server PDW, CROSSMARK can easily scale its resources to handle the millions of in-store activities processed each year, allowing CROSSMARK employees to spend more time with customers and less time with the data.

CROSSMARK is also on-track to implement SQL Server BI tools including Power View and PowerPivot to provide more business intelligence tools to its customers.

To read more about CROSSMARK, take a look at this Customer Spotlight feature on News Center.

Countdown to PASS Summit Series: Make it Happen at The Summit

We’re getting closer and closer to the official start of PASS Summit 2012, and our celebratory countdown continues! Today, we have more advice on what you can get out of PASS Summit from SQL Server community members Rob Farley and John Sansom.



Rob Farley runs LobsterPot Solutions, a Gold Partner SQL Server and Business Intelligence consultancy in Adelaide, Australia. He presents regularly at PASS chapters and conferences such as TechEd Australia, SQL PASS, and SQLBits (UK), and heads up the Adelaide SQL Server User Group. He is an MCT and has been a SQL Server MVP since 2006. Rob has helped create several of the MCP exams, wrote two chapters for the book SQL Server MVP Deep Dives (Manning, 2009) and one for SQL Server MVP Deep Dives, Volume 2 (Manning, 2011). He is currently a Director of PASS.

I’m probably not the typical PASS attendee, but the thing that I’m looking forward to the most from this year’s PASS Summit is people realising where the real benefit of the PASS Summit lies.

I should point out – I’ve only been to two PASS Summits. My first was as recent as 2010. I gave two presentations plus a lightning talk that year, only to take the next step and deliver a precon in 2011 (oh, and I sang during a keynote). I was on the board as an advisor during last year’s Summit, and this year I will be attending my first PASS Summit as a full director.

So, you see, I’ve never been a real first-timer at PASS. My experience of getting into the SQL Server community came through local channels.

I went to TechEd Australia as a regular delegate in 1999 – that’s when I was a proper first-timer. A colleague and I flew up to Brisbane for the event, where we worked out which sessions we’d each go to, based on what we wanted to learn from the event. There was a lot of information, and I made a lot of notes. I see a lot of people doing the same at conferences today – people who haven’t realised yet.

In 2000, I moved back to the UK for a couple of years, and didn’t go to many local events. I didn’t even consider myself a SQL guy back then, but the company I worked for was involved in some of the other local communities – back then it was things like Commerce Server and Content Management Server. I didn’t attend the meetings that were going on, and I definitely didn’t see it as valuable. I would’ve jumped at the chance of going to the larger conferences, though, and I did attend an event that Bill Gates was speaking at in early 2002 (I think). It wasn’t anything like the PASS Summit though.

A few years later I found myself living in Adelaide, attending an event at the Hilton Hotel (the one in Adelaide, not the local pub in Hilton), where someone invited me to the .Net user group. I asked the guy I reported to about going along, and found I could use company time for it. Like all drugs, the first hits are often free.

Fairly soon I realised that the benefit of the local community wasn’t in the presentations that were being given, but in the network of people that were there. The presentations didn’t actually thrill me that much, and it wasn’t long before I offered to give presentations myself, ones that even got noticed by people at Microsoft. This was both in the .Net world and the SQL world – and I was also starting to appreciate that I had a lot more to offer the SQL community than the .Net one.

When I was just attending events for the content, I really wasn’t benefitting much from the whole experience. I could probably skip the talk, look at its title, do some research through the blogosphere, and learn just as much, at my own pace, with lots of varying perspectives – even seeing demos of things on YouTube and the like. Well, maybe I couldn’t do that so much back nearly ten years ago, but certainly these days. (Of course, there’s the time aspect – if you go to an event, you’re actually setting time aside to learn. That’s great. But for me, it’s not quite enough.)

But when I shifted my focus, I started getting so much more out of them. In doing this, I saw three differences.

1. By being more interested in the people that are at events more than the technical aspects of the presentations, you start to hear what’s actually important about the technology, rather than basic technical details.

2. By becoming a presenter, you value presentations differently, and even learn more from them. You start to consider factors such as what made the presenter choose a particular demo over another, and how you would present that point yourself. And you don’t see the other presenters as being so aloof.

3. By understanding that the technical details of a talk are all things you can pick up later through your own research, your energy at an event can go into building relationships. You meet the presenters, get to know them, and they become part of your network.

In 2005, I attended my second major event – another TechEd Australia. I was mainly just a delegate, but I had also been told by the guy who ran the SQL Server user group to catch up with a few particular people there (a few weeks later he asked me to take over the group, but that’s a different story). The conversations I had at 2005 were therefore very different to the ones from 1999.

By 2006 I had become an MCT, and proctored Hands-On Labs at TechEd Australia, which I also did in 2007 and 2008. I presented sessions in 2007, 2008 and 2009, and felt like part of the establishment at the event. I was definitely getting a lot more out of it than I had done in the past.

2006 had also seen me get the MVP award. This meant I could go to the MVP Summit, which I did in both 2007 and 2008. There I was just a delegate, but in hindsight, I found that the focus of the events was on building relationships – both with other MVPs and with Microsoft staff. It was at these events that I met many people in the global SQL community for the first time.

In 2009 and 2010 I travelled to the UK for two SQLBits conferences, where I gave precon seminars and regular presentations. I had met some of the MVPs at the MVP Summits I’d been to, but got the chance to meet many other people as well.

You see, in today’s world, you can find out technical information very easily. What you can’t do so easily is form the kinds of relationships that give you allies in solving problems that aren’t so straightforward.

So what I want “first-timers” to realise (regardless of how many PASS Summits they’ve been to) is this:

The thing you need to do at these things is to get to know the people behind the profiles. Find out what interests them outside the data world (and potentially within it), and you’ll come away with so much more than if you wanted to learn about the technology.

And the thing that I personally enjoy about events like the Summit is seeing people wake up to this tip and making it all happen.


John Sansom (@SQLBrit) is a Technology Lead with the database team at Expedia, Inc., providing consulting services and support for one of the world’s largest SQL Server environments. Awarded the Microsoft Community Contributor Award (MCC), John can be found regularly blogging about being a DBA and professional development over at

Jadba is a diligent, hard-working chap with a passion for technology. A typical DBA, he’s all about ensuring the availability and performance of the environments in his care. He enjoys working with a variety of data technologies, but his favoured weapon of choice is SQL Server. A studious and ambitious fellow, he taps into the vibrant SQL community to learn and grow as best he can, regularly reading the latest blogs and white papers, attending webinars, and even the odd local User Group event. Jadba has invested in his own professional development to become quite the proficient DBA.

Like many data professionals, it has always been an ambition of his to attend the most prestigious of all SQL Server community events, the PASS Summit: the opportunity to learn from the very best, to finally meet in person international community peers of many years’ standing, and to feel like a true member of the #sqlfamily. To attend the Summit seemed a distant dream to Jadba, so far out of reach both physically and financially.

At least that was the case until recently. Frustrated and no longer content to accept missing out from the sidelines, it was time to take matters into his own hands. You see, earlier in the year he made a commitment to himself that this time there were to be no excuses, no matter what. He was going to attend PASS Summit 2012!

I am of course talking from my own personal experience, and not about just another DBA (Jadba). Having worked as a data professional here in London for over ten years, travelling to the PASS Summit has long been an ambition of mine. The desire to attend has been on my radar for a long time, but there’s always been an obstacle to contend with, be it financial or logistical, that’s kept me from my goal. If I’m being honest, perhaps I’ve been too easily swayed and should have acted sooner. No matter; armed with some sage advice, I made a public commitment to finally get to PASS. Don’t make the same mistake that I did by procrastinating. I encourage you to act without hesitation, to take control and make the investment in your own career.

Don’t settle for being simply “just another DBA.” You can read heaps more about professional development for the DBA over on my blog. Make the investment in yourself and I’ll see you at the Summit.