You have successfully entered the In-Memory OLTP world with SQL Server 2014. After a while, you notice that your In-Memory objects are using more memory than you estimated. In this tip I will explain what is happening and how to address the issue.
There was a question this morning on the SQL Down Under mailing list about how to determine the Windows groups for a given login.
That’s actually easy for a sysadmin login, as sysadmins have IMPERSONATE permission on other logins and users.
Here is an example procedure:
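The original code listing isn’t shown here, but a sketch of such a procedure might look like the following. The procedure name and the reliance on sys.login_token are my assumptions; the original may differ:

```sql
-- Hypothetical sketch: list the Windows groups that make up a login's token.
-- Assumes the caller is a sysadmin (needed to EXECUTE AS an arbitrary login).
CREATE PROCEDURE dbo.ListWindowsGroupsForLogin
    @LoginName sysname
AS
BEGIN
    SET NOCOUNT ON;

    -- Impersonate the login so sys.login_token reflects *its* token, not ours
    EXECUTE AS LOGIN = @LoginName;

    SELECT name  AS GroupName,
           usage AS Usage
    FROM sys.login_token
    WHERE type = 'WINDOWS GROUP'
    ORDER BY name;

    REVERT;
END;
```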
When I execute it on my system this way:
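Assuming a procedure named dbo.ListWindowsGroupsForLogin (a name chosen for illustration) and a placeholder domain login, the call might look like:

```sql
EXEC dbo.ListWindowsGroupsForLogin @LoginName = N'MyDomain\Jane';
```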
It returns the following:
Note that the Usage column could also return “DENY ONLY” or “AUTHENTICATOR”.
What is Azure Data Factory?
Azure Data Factory is a fully managed service that orchestrates data movement and processing services into managed data pipelines. A pipeline connects diverse data sources (such as on-premises SQL Server, or cloud data in Azure SQL Database, Blobs, Tables, and SQL Server in Azure Virtual Machines) with diverse processing techniques (such as Azure HDInsight (Hive and Pig) and custom C# activities). This allows the data developer to transform and shape the data (join, aggregate, cleanse, enrich) so that it becomes authoritative and trustworthy enough to be consumed by BI tools. These pipelines are all managed within a single pane of glass, where rich health and lineage information is available to diagnose issues or perform impact analysis across all data and processing assets. Some unique points about Data Factory are:
- Ability to process data from diverse locations and data types. Data Factory can pull data from relational, on-premises sources like SQL Server and join with non-relational, cloud sources like Azure Blobs.
- A holistic view of the entire IT infrastructure that includes both commercial and open source technologies. Data Factory can orchestrate Hive and Pig on Hadoop while also bringing commercial products and services like SQL Server and Azure SQL Database into a single view.
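As a rough illustration of how a pipeline can be defined, here is a JSON sketch of a single copy step. The activity and dataset names are hypothetical, and the exact schema in the public preview may differ from this sketch:

```json
{
  "name": "CopyLogsPipeline",
  "properties": {
    "description": "Illustrative only: copy raw blob data into an Azure SQL table",
    "activities": [
      {
        "name": "CopyFromBlobToSql",
        "type": "Copy",
        "inputs": [ { "name": "RawLogsBlob" } ],
        "outputs": [ { "name": "CleanLogsSqlTable" } ],
        "typeProperties": {
          "source": { "type": "BlobSource" },
          "sink": { "type": "SqlSink" }
        }
      }
    ],
    "start": "2014-11-01T00:00:00Z",
    "end": "2014-11-02T00:00:00Z"
  }
}
```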
What can it do?
With the ability to manage and orchestrate the collection, movement and transformation of semi-structured and structured data together, Data Factory provides customers with a central place to manage their processing of web log analytics, click stream analysis, social sentiment, sensor data analysis, geo-location analysis, and more. In public preview, Microsoft views Data Factory as a key tool for customers who are looking to have a hybrid story with SQL Server or who currently use Azure HDInsight, Azure SQL Database, Azure Blobs, and Power BI for Office 365. In the future, we’ll bring more data sources and processing capabilities to the Data Factory.
How do I get started?
For Microsoft customers, we are offering Azure Data Factory as a public preview. To get started, customers will need an Azure subscription or a free trial of Azure. With this in hand, you should be able to get Azure Data Factory up and running in minutes. Start by reading this getting started guide.
For more information on Azure Data Factory:
- Introduction to Data Factory
- Frequently asked questions about Data Factory
- Data Factory developer reference
- Data Factory cmdlet reference
- Copy data with Data Factory
- Use Pig and Hive with Data Factory
- Move and process log files using Data Factory
- Enable your pipelines to work with on-premises data
- Use custom activities in a Data Factory pipeline
- Monitor and manage Data Factory using the Azure portal
- Monitor and manage Data Factory using Azure PowerShell
- Troubleshoot Data Factory
What is Azure Stream Analytics?
Azure Stream Analytics is a cost effective event processing engine that helps uncover real-time insights from devices, sensors, infrastructure, applications, and data. Deployed in the Azure cloud, Stream Analytics has elastic scale where resources are efficiently allocated and paid for as requested. Developers are given a rapid development experience where they describe their desired transformations in SQL-like syntax. Some unique aspects about Stream Analytics are:
- Low cost: Stream Analytics is architected for multi-tenancy, meaning you pay only for what you use, not for idle resources. Unlike other solutions, small streaming jobs will be cost effective.
- Faster developer productivity: Stream Analytics allows developers to use a SQL-like syntax that can shrink development from thousands of lines of code down to a few lines. The system abstracts the complexities of parallelization, distributed computing, and error handling away from the developers.
- Elasticity of the cloud: Stream Analytics is built as a managed service in Azure. This means customers can spin up or down any number of resources on demand. Customers will not have to set up costly hardware or install and maintain software.
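To give a feel for the SQL-like syntax, here is a hedged example of a streaming query that averages sensor readings over 30-second tumbling windows. The input name and column names are assumptions for illustration:

```sql
-- Hypothetical Stream Analytics query: average temperature per device
-- over 30-second tumbling windows. 'input' is the name of the job's
-- configured input; DeviceId, Temperature, and EventTime are assumed columns.
SELECT
    DeviceId,
    System.Timestamp AS WindowEnd,
    AVG(Temperature) AS AvgTemperature
FROM input TIMESTAMP BY EventTime
GROUP BY DeviceId, TumblingWindow(second, 30)
```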
Similar to the recent announcement Microsoft made in making Apache Storm available in Azure HDInsight, Stream Analytics is a stream processing engine that is integrated with a scalable event queuing system like Azure Event Hubs. By making both Storm and Stream Analytics available, Microsoft is giving customers options to deploy their real-time event processing engine of choice.
What can it do?
Stream Analytics will enable various scenarios, including Internet of Things (IoT) scenarios such as real-time fleet management or gaining insights from devices like mobile phones and connected cars. Specific scenarios that customers are addressing with real-time event processing include:
- Real-time ingestion, processing and archiving of data: Customers will use Stream Analytics to ingest a continuous stream of data and do in-flight processing like scrubbing personally identifiable information (PII), adding geo-tags, and performing IP lookups before the data is sent to a data store.
- Real-time analytics: Customers will use Stream Analytics to provide real-time dashboards so they can see trends the moment they occur.
- Connected devices (Internet of Things): Customers will use Stream Analytics to get real-time information from their connected devices like machines, buildings, or cars so that relevant action can be taken. This can include scheduling a repair technician, pushing down software updates, or performing a specific automated action.
How do I get started?
For Microsoft customers, we are offering Azure Stream Analytics as a public preview. To get started, customers will need an Azure subscription or a free trial of Azure. With this in hand, you should be able to get Azure Stream Analytics up and running in minutes. Start by reading this getting started guide.
For more information on Azure Stream Analytics:
- Introduction to Azure Stream Analytics
- Limits in the Stream Analytics preview release
- Developer guide for Stream Analytics
- Query language reference for Stream Analytics
- REST API reference for Stream Analytics
- Scale Stream Analytics jobs
If you read my last tip you can see the performance benefit that can be gained by using the new In-Memory OLTP feature available in SQL Server 2014. This tip will extend that test by using some additional options available with the In-Memory OLTP feature to see if we can realize any further improvement.
The Business Intelligence Development Studio (BIDS) does not ship with SQL Server 2014 as it did in previous versions of SQL Server. It has also been renamed SQL Server Data Tools – Business Intelligence. If you have Visual Studio 2012, download this. If you have Visual Studio 2013, download this. I recommend that you download and save the file and run the saved copy instead of running it directly from the link. A few individuals have reported seeing a “Same architecture installation” error…
As most websites do, we collect analytics on the people visiting our site at http://www.sqldownunder.com.
I thought it might be interesting to share the breakdown of visitors to our site. Keep in mind that we have a primarily Microsoft-oriented audience. Enjoy!
No surprise on the native languages:
Country breakdown reflects the amount of local traffic we have for instructor-led courses. Most others are podcast listeners:
We first noticed Chrome slightly outstripping IE a while back, but recently the gap has widened considerably. I suspect that IE11 has been an issue here:
No surprises on the operating systems, but Linux continues to decline among our visitors. It used to be higher:
The big change has been in mobile operating systems. It’s the first time that iOS has only managed 50%. It used to be 82% for us:
We’re also seeing a shift in screen resolutions:
And this is the mix of where our site visitors come from:
I had the pleasure of recording another SQL Down Under show today.
Show 64 features a member of the Microsoft Azure DocumentDB product group discussing Azure DocumentDB and what SQL Server DBAs and developers need to know about it.
JSON-based storage has been one of the highest rated requests for enhancements to SQL Server. While we haven’t got those enhancements yet, DocumentDB nicely fills a gap between NoSQL databases (I use the term loosely) and relational databases.
You’ll find the show here: http://www.sqldownunder.com/Podcasts
This ebook provides the fundamental knowledge and resources you need to make intelligent decisions about the choice, and the optimal installation and configuration, of your SQL Server hardware, operating system, and the SQL Server RDBMS.
The full editorial review is available here.
This book is a must-have for your digital library! And of course, you can buy the paperback on Amazon.
Over the last year, I’ve delivered a number of partner enablement events for Microsoft. These events are low cost training sessions that run for three days. Days 1 and 2 cover SQL Server 2014 content, mostly regarding in-memory OLTP, clustered columnstore indexes, and Azure integration with hybrid systems. Day 3 covers the full Power BI stack.
We’re pleased to be running another set of these around the country:
Melbourne: November 5th to 7th
Perth: November 24th to 26th
Sydney: December 8th to 10th
I’d love to see many of you there. I’m looking forward to delivering them. To find out more, follow these links: