Monthly Archives: July 2013

Are you prepared for the BlueHat Challenge?

Today we are kicking off a new challenge so you can showcase your security prowess and, if we can, help you build some more. Our BlueHat Challenge is a series of computer security questions, which increase in difficulty as you progress. Only the rare and talented engineer will be able to finish the Challenge on the first attempt. It’s not a contest, so there’s no cash involved here, but there will be some great answers we’ll recognize publicly and you could win yourself a big chunk of bragging rights. You can find complete details about this new program over on the Security Research & Defense blog.

Good luck, and I look forward to seeing your submissions. Show me what you’ve got!

Dustin Childs
Group Manager, Response Communications
Microsoft Trustworthy Computing 

July, the 31 Days of SQL Server DMO’s – Day 31 (sys.dm_server_services)

The last DMV for this month-long blog series is the sys.dm_server_services DMV. This DMV returns information about your SQL Server, Full-Text, and SQL Server Agent related services. To further illustrate the information this DMV contains, let's run it against the Training instance we have been using for this blog series.

SELECT * FROM sys.dm_server_services

The first column returned by this DMV is servicename, the actual service name. The next columns, startup_type and startup_type_desc, display your chosen method for how a particular service should be started. The status and status_desc columns display the current status of each service on the instance. The process_id column represents the server process ID. The last_startup_time column gives you the last time that a particular service was started. The service_account column provides you with the name of the account that is used to run the service. The filename column gives you the full path to the executable for the service. Lastly we have the is_clustered column, which indicates whether or not a particular service is clustered and part of a resource cluster group, and the cluster_nodename column, which identifies the cluster node that the service is installed on.

This is a good DMV to provide you with a quick snapshot view of the current SQL Server services you have on your instance.
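Rather than SELECT *, you can pull just the columns discussed above. A small sketch, using only the columns this DMV documents:

```sql
-- Quick health check: service account, startup type, and current
-- status for every SQL Server-related service on the instance
SELECT servicename,
       service_account,
       startup_type_desc,
       status_desc,
       last_startup_time
FROM sys.dm_server_services;
```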

For more information on this DMV, please see the below Books Online link:

http://msdn.microsoft.com/en-us/library/hh204542.aspx

Follow me on Twitter @PrimeTimeDBA

Create Dynamic T-SQL Code for Task Automation

Suppose you need to script out many individual T-SQL statements; for example, you need to rename a number of SQL Server database files. Rather than writing each statement by hand, you want to automate the creation of the T-SQL statements. How can you do so? Check out this tip to learn more.
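As a sketch of the general pattern such a tip describes: SELECT a string expression that concatenates the statement text around values from a catalog view, then copy the generated statements out and review them before running anything. The '_new' suffix below is purely a hypothetical rename target:

```sql
-- Generate (but do not execute) one ALTER DATABASE ... MODIFY FILE
-- statement per data file, renaming each logical file name by
-- appending a hypothetical '_new' suffix
SELECT 'ALTER DATABASE [' + DB_NAME(database_id) + '] '
     + 'MODIFY FILE (NAME = [' + name + '], '
     + 'NEWNAME = [' + name + '_new]);' AS rename_statement
FROM sys.master_files
WHERE type_desc = 'ROWS';
```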

24 Hours of PASS: Summit Preview Edition

The latest 24 Hours of PASS (24HOP) event will start at 12:00 GMT on July 31, with 24 back-to-back one-hour live virtual sessions, ending exactly 24 hours later. I have the honor of presenting the final session of the event, so if you are up and about at 11:00 GMT on August 1, I would love to have you listen in to my session!

I will be doing a preview of my day-long pre-con session, called Scaling SQL Server 2012. Here is the abstract:

Scaling SQL Server 2012

SQL Server implementations can rapidly evolve and become more complex, forcing DBAs and developers to think about how they can scale their solution quickly and effectively. Scaling up is relatively easy but can be expensive, while scaling out requires significant engineering time and effort. As your database server nears its load capacity, what can you do? This 24 Hours of PASS session, a preview of the full-day PASS Summit pre-conference session, will give you concrete, practical advice about how to scale SQL Server. We’ll explore how to find and alleviate bottlenecks and cover how you can decide whether you should scale up or scale out your data tier. We’ll also cover how to use middle-tier caching and other application techniques to increase your overall scalability.

I also wanted to point out that Paul Randal just announced a special promotion for the upcoming September IEHW: Immersion Event on SQL Server Hardware and IE0: Immersion Event for Accidental/Junior DBAs, both in Bellevue, WA.

Just to sweeten the deal even further, all attendees of the inaugural session of IEHW will get a free copy of two of my books, SQL Server Hardware, and Professional SQL Server 2012: Internals and Troubleshooting. I’ll even autograph them, if you want to turn them into collector’s items!

Filed under: PASS, Teaching Tagged: 24HOP

SQL Server 2014 In-Memory OLTP – bwin Migration and Production Experience

As part of our SQL Server 2014 In-Memory OLTP blog series, we asked early adopter Rick Kutschera from bwin.party to blog about his solution experience with SQL Server 2014 In-Memory OLTP. Read on below to hear more about the experience from Rick.

You can also learn more about bwin’s experience in the video ‘bwin Wins with SQL Server 2014’, or download SQL Server 2014 CTP1 to experience In-Memory OLTP for yourself.

The whole world right now revolves around the cloud, and one of its major architectural aspects is scale-out. However, there are some areas where scale-out is not an option, and even more unfortunately there are architectures where scale-out in one part of the design causes a massive need to scale up in other areas. One of these scenarios is the ASP.NET Session State Server.

When you take a website as a typical example of a scale-out model, you have a number of servers in the webserver farm. It is perfectly flexible, with one exception: if you need to carry certain data points from one request a user makes to the next (e.g. the user ID of a logged-in user in a webshop), this model naturally comes with a limitation. As subsequent requests can hit different servers in the farm, you need to coordinate this special "session data" in a central location. As you can see in the picture, no matter how big your farm is, there is always one database server to coordinate it.

bwin production and migration experience

Microsoft offers three models within its Session State architecture to meet the needs of this kind of application scenario: an InProc version, which is only good for very small farms; a version based on AppFabric, which in turn needs a database as its controlling instance; and a SQL Server-based approach, which is the implementation I will focus on from now on.

SQL Server is a good bet for a scenario like this. It is a proven technology that can handle large amounts of data, and it provides full ACID compliance if needed. The problem, though, is that SQL Server, like other relational databases, is designed in a way where heavily accessed data on a page or a small number of pages can be a barrier to scale and performance. Looking at the Session State database, what you typically find aligns with this situation: a relatively small number of entries in the table (<100,000) that are updated quite heavily. The reason relational databases have a problem with this is a concept called latching, which helps SQL Server ensure the physical consistency of modified data pages. Without delving too deeply into the details, let me just say that, due to latching, the solution does not scale much beyond 10,000 requests/sec (15,000 at most, if you use a lot of tricks). That number might sound like a lot, but it really isn't in today's high-speed world. So what to do about it?

When Microsoft first introduced me to the In-Memory OLTP engine (project codename "Hekaton") as part of SQL Server 2014, I immediately saw the potential for a real fix to my dilemma: a highly scalable, latch-free, memory-optimized, processor-optimized SQL Server. Since the In-Memory OLTP engine is integrated into SQL Server, I was quickly amazed at the relative ease of working with it. Due to this integration, the syntax was very close to what we were used to, and most importantly the client access layer was identical, meaning that we were able to swap out the "old" SQL Server Session State for an In-Memory OLTP version without the IIS webserver ever noticing the change.

Despite this ease of migration, there are some details to consider. The In-Memory OLTP engine is not only latch-free but also lock-free. This means that some of the normal rules of pessimistic databases just don't apply. In traditional SQL Server, when two transactions want to update the same record, one waits until the other is finished, and eventually both succeed. This is not true with the In-Memory OLTP engine, where the waiting (caused by locks and latches) has been removed for pure speed, which leads to one of the transactions failing with a write/write conflict. When implementing it, handling this feels somewhat like deadlock handling. Additionally, In-Memory OLTP is limited in some areas when it comes to surface area. The most crucial of these was the fact that memory-optimized tables do not support BLOBs, which are an essential part of the Session State DB. Overcoming those areas does pose a certain challenge, but with a little ingenuity it is a workable solution. (Stay tuned for a detailed post about our way to store BLOBs in the In-Memory OLTP engine.)
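To illustrate the deadlock-like handling (a generic sketch, not bwin's actual code): a write/write conflict surfaces to the caller as error 41302, which you can catch and retry much as you would a deadlock. dbo.UpdateSession and its parameters below are hypothetical:

```sql
-- Retry wrapper for In-Memory OLTP write/write conflicts (error 41302).
-- dbo.UpdateSession, @SessionId, and @Payload are hypothetical names.
DECLARE @SessionId UNIQUEIDENTIFIER = NEWID(),
        @Payload   VARBINARY(MAX)   = 0x00,
        @retries   INT              = 3;

WHILE @retries > 0
BEGIN
    BEGIN TRY
        EXEC dbo.UpdateSession @SessionId = @SessionId, @Payload = @Payload;
        SET @retries = 0;       -- success: leave the loop
    END TRY
    BEGIN CATCH
        IF ERROR_NUMBER() = 41302 AND @retries > 1
            SET @retries -= 1;  -- write/write conflict: try again
        ELSE
            THROW;              -- other error, or retries exhausted
    END CATCH
END;
```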

So how does the "new" version of our Session State implementation look? Well… exactly as the old one did, and that's the beauty of it! The only difference when using the In-Memory OLTP engine is that you no longer max out at 10,000 requests/sec. With current hardware you can easily scale up to 250,000 requests/sec on a box that costs less than 30,000 USD, and I have a hunch that with the upcoming Brickland chipset we will be far in excess of the 1 million request mark. Additionally, this ability to handle much more load enables us to consolidate Session State machines. In the past every web farm had its own server, which was necessary due to the limitations described above. With In-Memory OLTP, one server can handle all web farms' traffic by itself, which reduces the TCO significantly (less maintenance overhead, less hardware, less power consumption, etc.).

Will this be enough for the future? I don't know… but I am convinced that almost all webpages out there will survive for a long time before the Session State Server becomes a bottleneck again. (And if you should be the exception to that, give me a call. I am sure we will find a solution.)

Thanks,
Rick Kutschera, Manager of Database Engineering
BWin.party

Want to learn even more about In-Memory OLTP? Download SQL Server 2014 CTP1 and get started today or see more blogs in the series introduction and index here!

July, the 31 Days of SQL Server DMO’s – Day 30 (sys.dm_server_registry)

The sys.dm_server_registry DMV provides SQL Server configuration and installation information that is currently stored in your Windows Registry. It is a very simple DMV that returns only three columns. The first column returned is registry_key. The second column is value_name, which is the name of the actual registry key value. The third and final column is value_data, which is the value of the registry key data. Let's have a look at the information this DMV returns, as well as some key values from the Windows Registry.

SELECT * FROM sys.dm_server_registry

The same values can be viewed in the Windows Registry using RegEdit.

This DMV provides you with a quick and easy way to view SQL Server Instance registry values.
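One common practical use, as a sketch (the exact registry path under the instance key can vary), is finding the TCP port the instance is configured to listen on:

```sql
-- Find the TCP port(s) this instance is configured to listen on
SELECT registry_key, value_name, value_data
FROM sys.dm_server_registry
WHERE registry_key LIKE N'%SuperSocketNetLib\Tcp%'
  AND value_name IN (N'TcpPort', N'TcpDynamicPorts');
```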

For more information about this DMV, please see the below Books Online link:

http://msdn.microsoft.com/en-us/library/hh204561.aspx

Follow me on Twitter @PrimeTimeDBA

July, the 31 Days of SQL Server DMO’s – Day 29 (sys.dm_os_buffer_descriptors)

The sys.dm_os_buffer_descriptors Dynamic Management View gives you a look into the data pages that are currently in your SQL Server buffer pool. In case you are not familiar with the internals of SQL Server and how the engine works, SQL Server only works with objects that are in memory (the buffer pool). When an object such as a table needs to be read and its pages do not exist in the buffer pool, SQL Server reads (copies) the necessary data page(s) from disk into the buffer pool and caches them. Caching takes place so that pages can be reused, preventing the need for expensive physical reads. To better illustrate this DMV, let's query it against our AdventureWorks2012 database and view the result set.

SELECT * FROM sys.dm_os_buffer_descriptors
WHERE database_id = db_id('AdventureWorks2012')

The first column returned in this result set is database_id, which identifies the specific database for a given row. The file_id column represents the file that a particular buffer descriptor belongs to. The page_id column is the ID of the data page within the file. The page_level column represents the index level of the data page. Next we have the allocation_unit_id column, which identifies a unique allocation unit; an allocation unit is basically a set of data pages. The page_type column tells us exactly what type of page is in the buffer pool. In my buffer pool there were three distinct types of pages: Index, Data, and IAM pages. Index pages are used to build the root and intermediate levels of a B-tree. A Data page represents the actual leaf pages of a clustered index, which contain the data for the table. Without getting into too much detail, an IAM page is an Index Allocation Map page, which tracks GAM (Global Allocation Map) pages, which in turn track extents on your system. The row_count column details how many data rows are present on a given page. The free_space_in_bytes column tells you how much of a given data page is still available; remember, pages are 8 KB in size. The is_modified column signifies whether or not a page has been changed since it was read into memory, i.e. a dirty page. The numa_node column represents the non-uniform memory access (NUMA) node for the buffer. Lastly is the read_microsec column, which tells you how many microseconds it took for a data page to be read (copied) into the buffer pool.

This is a great DMV for use when you are tracking down a memory issue or if you just want to have a look at what type of pages are currently in your buffer pool.
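For example, rather than scanning the raw rows, you can aggregate the DMV to see roughly how much of the buffer pool each database occupies. A sketch (since each page is 8 KB, 128 pages make 1 MB):

```sql
-- Approximate buffer pool usage per database, largest consumers first
SELECT DB_NAME(database_id) AS database_name,
       COUNT(*)             AS cached_pages,
       COUNT(*) / 128       AS cached_mb    -- 8 KB pages: 128 pages = 1 MB
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY cached_pages DESC;
```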

For more information about this DMV, please see the below Books Online link:

http://msdn.microsoft.com/en-us/library/ms173442.aspx

Follow me on Twitter @PrimeTimeDBA