
Microsoft Changes Feature Support Between Editions in SQL Server 2016 Service Pack 1

In an unprecedented move at CONNECT(); //2016 in New York City today, Microsoft announced that a number of Enterprise Edition-only features will now be available in all editions of SQL Server. That’s right, all editions of SQL Server!

The delivery vehicle for these changes will be SQL Server 2016 Service Pack 1, which is available immediately!

This is a first for Microsoft, which to date has only ever moved features from Enterprise Edition to Standard Edition between different versions of SQL Server.

Specifically, the following features will be available in all SQL Server editions:

  • Always Encrypted (AE)
  • Change Data Capture (CDC)
  • Columnstore indexes
  • Data Compression
  • Database Snapshots
  • Dynamic Data Masking (DDM)
  • Fine Grained Auditing
  • In-Memory OLTP (IMOLTP)
  • Multiple FILESTREAM Containers
  • Partitioning
  • Polybase
  • Row Level Security (RLS)

That is quite an impressive list of functionality which will now be available in all editions. And by all editions I mean all editions: from LocalDB all the way through to Enterprise Edition, a developer will have the same, consistent programmability surface area. Because of differences in how some of these editions work, there will be some limitations:

  • Change Data Capture will not work on the LocalDB and Express editions due to various dependencies.
  • In-Memory OLTP will not be available in LocalDB due to technical limitations.
  • For Columnstore Indexes, Microsoft will limit the degree of parallelism for queries used by the Database Engine: Standard Edition queries will be limited to 2 cores per query, while Web Edition and Express Edition will be limited to 1 core per query.
  • Polybase will not be available in LocalDB. Furthermore, on the lower editions Polybase will only be able to be installed as a Compute Node; for scale-out solutions the Head Node will need to be installed on an Enterprise Edition instance. This could dramatically reduce the licensing cost for enterprise customers.
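To make this concrete, here is a minimal sketch of what this change unlocks: creating a clustered columnstore index, previously an Enterprise Edition-only operation, using the standard syntax on any edition (the table and index names are illustrative, not from any real demo):

```sql
-- Illustrative only: table and index names are made up.
CREATE TABLE dbo.SalesFact
(
    SaleDate  DATE  NOT NULL,
    ProductId INT   NOT NULL,
    Quantity  INT   NOT NULL,
    Amount    MONEY NOT NULL
);

-- As of SQL Server 2016 SP1 this no longer requires Enterprise Edition,
-- although queries against it will be limited to 2 cores per query on
-- Standard Edition (1 core on Web and Express).
CREATE CLUSTERED COLUMNSTORE INDEX CCI_SalesFact
    ON dbo.SalesFact;
```

The same statements run unchanged from LocalDB through to Enterprise Edition, which is precisely the "consistent programmability surface area" being promised.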

In most cases these “limitations” will not dramatically affect customers, as I expect the majority of them to take advantage of this new functionality using Standard Edition.

I fully expect the feature set to evolve as Microsoft embraces this “brave new world” and gets feedback on these changes from the industry.

You might recall that SQL Server Standard Edition is currently limited to 128GB of memory, with Express Edition supporting 1GB of memory. So how will features such as In-Memory OLTP tables and Columnstore Indexes, which are memory-hungry technologies, play within these memory limits? Instead of allocating a subset of these hard memory limits to these features, which would potentially lead to Buffer Pool pressure and out-of-memory problems, Microsoft has decided that each of these “hero” features, In-Memory OLTP and Columnstore Indexes, will be able to use up to 25% of the hard memory limit above the Buffer Pool allocation. That means that on an Express Edition instance In-Memory OLTP tables and Columnstore Indexes could each consume around 256MB of memory above the Buffer Pool allocation, and on a Standard Edition instance both technologies would be able to take advantage of 32GB of memory each. I expect the primary reason for this approach has been to reduce the potential impact on existing database solutions.
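As a sketch of what that additional memory allowance buys you, here is a minimal In-Memory OLTP table that, as of SP1, can be created on Standard or Express Edition (the database, filegroup, path and table names are all illustrative):

```sql
-- Illustrative only: database, filegroup, path and table names are made up.
ALTER DATABASE SalesDb
    ADD FILEGROUP SalesDb_MemOpt CONTAINS MEMORY_OPTIMIZED_DATA;

ALTER DATABASE SalesDb
    ADD FILE (NAME = 'SalesDb_MemOpt_1',
              FILENAME = 'C:\Data\SalesDb_MemOpt_1')
    TO FILEGROUP SalesDb_MemOpt;

-- The memory consumed by this table counts against the additional 25%
-- allowance above the edition's Buffer Pool limit, not against the
-- Buffer Pool itself.
CREATE TABLE dbo.SessionState
(
    SessionId UNIQUEIDENTIFIER NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 100000),
    Payload   VARBINARY(MAX)   NULL,
    ExpiresAt DATETIME2        NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```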

So why the change in “policy”?

I see the primary driver for Microsoft as reducing the friction developers face when they potentially have to develop multiple versions of what is fundamentally the same solution: a consistent programmability surface area, or feature parity for developers if you like, between all editions of SQL Server. As Enterprise Edition is not deployed everywhere, developers tend to code for the “lowest common denominator” with respect to features.

On the flip side of the same “driver” coin it will also help drive adoption of the “hero features” by both developers and in particular Independent Software Vendors (ISVs), who have for years complained about the lack of certain features in lower editions of SQL Server. I believe Microsoft’s promise/vision of being able to deploy a database solution on LocalDB with all of these programmability features and then being able to run the same solution on all higher editions represents a “game changer” in the Developer space.

This can be evidenced from the CONNECT(); //2016 fact sheet:

SQL Server 2016 SP1 makes leading innovation available to any developer.
Microsoft is making it easier for developers to benefit from the industry-leading innovations in SQL Server for more of their applications. With SQL Server 2016 SP1, Microsoft is making key innovations more accessible to customers across editions. Developers and partners can now build to a single application programming surface to create or upgrade new intelligent applications and use the edition which scales to the application’s needs. SQL Server Enterprise continues to offer the highest levels of scale, performance and availability for enterprise workloads.

So where does that leave Enterprise Edition?

As the fact sheet hints at, Microsoft still sees SQL Server Enterprise Edition as the edition that will provide unlimited scale, unlimited availability and unlimited virtualization. So being able to scale out your database solution using Readable Secondaries in Availability Groups, for example, will remain in the realm of Enterprise Edition. Likewise there are no changes to Basic Availability Groups in Standard Edition.

In that context Online Index Rebuilds will remain in Enterprise Edition. For now…

Is Online Index Rebuilds a “programmability” feature? Or an “availability” feature? I am sure there have been a number of interesting and heated discussions in Building 16 for the last 6 months on what constituted a “programmability” feature in preparation for the release and announcement of SQL Server 2016 Service Pack 1.

And before I forget, and before you ask, these changes will not be backported to SQL Server 2014.

Both tactically and strategically this is an interesting “play” by Microsoft.

In the Enterprise space Microsoft has proven to be very competitive against Oracle, with a substantially “cheaper price”, an extremely compelling and comparable feature set, and a number of Migration Programs where you can get SQL Server Enterprise licenses for free. So I don’t think too much will change in this space.

Tactically this should encourage existing Microsoft customers who only have Standard Edition to upgrade to SQL Server 2016 Service Pack 1. In this use case, being able to take advantage of Data Compression transparently will potentially have huge benefits, including substantially improved performance and substantially reduced backup times and sizes. Likewise, Dynamic Data Masking (another transparent feature) might prove to be very compelling to customers who do not want production data to be so easily exposed in DEV and UAT environments.
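Both features are straightforward to adopt. A minimal sketch (the table and column names are illustrative):

```sql
-- Illustrative only: object names are made up.

-- Data Compression: rebuild an existing table with PAGE compression,
-- now available in Standard Edition with SQL Server 2016 SP1.
ALTER TABLE dbo.OrderHistory
    REBUILD WITH (DATA_COMPRESSION = PAGE);

-- Dynamic Data Masking: mask an email column so that non-privileged
-- users (for example in DEV/UAT) see obfuscated values.
ALTER TABLE dbo.Customers
    ALTER COLUMN Email ADD MASKED WITH (FUNCTION = 'email()');
```

Neither change requires any modification to the application code querying these tables, which is what makes them “transparent” features.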

More importantly, SQL Server 2016 Service Pack 1 will allow SQL Server’s entire customer base to take advantage of the “hero features” that Microsoft has invested in for the last 5+ years, such as In-Memory OLTP and Columnstore Indexes.

As I have said this is completely unprecedented in the history of SQL Server releases. With Microsoft’s new philosophy of ensuring a “consistent programming surface area”, or feature parity if you like, developers will be able to build the best possible solutions that they can on the SQL Server platform. That is a huge change of mindset by Microsoft, and highlights their maturity and confidence in their Data Platform.

For me the more interesting “play” is how this is going to strategically impact the Open Source database ecosystem, and the community in particular. A quick search on the web will reveal how popular the MySQL and in particular PostgreSQL platforms are. We have a new generation of developers in the IT industry who are developing on these Open Source platforms. Yes, as has been well documented, Microsoft will be releasing SQL Server on Linux in 2017. But that alone is not enough to encourage Open Source developers to adopt and embrace the SQL Server platform. Microsoft also needs SQL Server to be a compelling platform in its own right, in terms of functionality and features. And for the “right price”.

Think back to the mid-nineties in the relational database ecosystem, with SQL Server, Oracle, Access, Clipper and dBase adoption. There were no Open Source database platforms like there are today. Both corporations and developers / IT professionals embraced Microsoft SQL Server over other platforms for a number of reasons, with functionality being a primary consideration. Now think of today and the kind of database-centric solutions that can now be easily and affordably built in the cloud. Times have changed. And it’s important to recognise that the Open Source movement is not in competition with Microsoft. Microsoft has embraced Open Source in their cloud like no one could have anticipated or even imagined.

With the release of SQL Server 2016 Service Pack 1’s “consistent programming surface area” / feature parity across all editions, in conjunction with a phenomenal feature set and an extremely mature relational database engine, Microsoft has effectively invited the Open Source community to develop and deploy database-centric solutions on their Enterprise-grade database platform.

It’s certainly interesting to see how this will play out in the next couple of years.

Free SQL Server 2012 Seminar

Microsoft SQL Server 2012: What’s New in the Data Platform

 

The release of Microsoft SQL Server 2012 revolutionises how organisations will deploy highly-available and scalable database solutions on premise, and in the cloud. Microsoft has also taken the opportunity with this release to redesign their certification programme “for the cloud”.

In this seminar Victor Isakov (Microsoft Certified Architect, Microsoft Certified Master and Microsoft Most Valuable Professional (MVP)) will explore the key new features for the Data Platform in SQL Server 2012.

This seminar will enable both IT professionals and decision makers to evaluate the functionality and business value of SQL Server and help mitigate the risks inherent in an upgrade scenario.

Agenda

  • New SQL Server certification
  • SQL Server 2012 courses
  • Licensing and Edition changes
  • SQL Server AlwaysOn technology for Mission Critical environments
  • Performance and Scalability improvements
  • What’s new for database administrators
  • What’s new for developers
  • Upgrade process
  • Business benefits of upgrading to SQL Server 2012

Date

Wednesday, 14 November 2012

5:00 – 5:30pm Registration (light food and beverage will be provided)

5:30 – 6:30pm Presentation

Location

Saxons Training Facilities, Level 10, 10 Barrack Street, Sydney

Sydney SQL Server User Group: 13 November 2012

What DBAs Should Know About Windows Server 2012

 

Thank you to all who attended my session on Windows Server 2012. I hope everyone was as excited as I was with the potential for SMB 3.0 in the design of SQL Server infrastructure solutions.

Session:

What DBAs Should Know About Windows Server 2012

The release of Microsoft SQL Server 2012 revolutionises how organisations will deploy highly-available and scalable database solutions on premise, and in the cloud. Microsoft has also taken the opportunity with this release to redesign their certification programme “for the cloud”.

In this seminar Victor Isakov (Microsoft Certified Architect, Microsoft Certified Master and Microsoft Most Valuable Professional (MVP)) will explore the key new features for the Data Platform in SQL Server 2012.

This seminar will enable both IT professionals and decision makers to evaluate the functionality and business value of SQL Server and help mitigate the risks inherent in an upgrade scenario.

Transact-SQL Demo Scripts


SQL PASS Summit 2011

Important Trace Flags that Every DBA Should Know

 

Again, thank you very much to all of the attendees of my session on trace flags.

Likewise thanks to all the people that have thanked me at GameWorks for presenting on this topic.

As I said at the presentation, I was pleasantly surprised by the turn-out.

Finally a big thank you to Julie Koesmarno for co-presenting with me. I hope to do a session with her again soon. It was great fun!

Session:

Important Trace Flags That Every DBA Should Know [DBA-309]

There are a number of important trace flags that should be implemented in most production environments. Yet many DBAs think that trace flags should only be used for testing purposes or to "show off" at conferences, or that they are not supported. Nothing could be further from the truth. A good example is trace flag 4199. (Yes, it’s probably worth your while reading KB974006 to understand what trace flag 4199 does!). In this session, Victor Isakov (MCA, MCM, MCT, MVP) and Julie Koesmarno (MCITP) will present the important trace flags that all DBAs should know, what they do, when to implement them and how best to implement them in a production environment. This will enable you to maximize the performance and reliability of your SQL Server infrastructure.
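As a sketch of the mechanics, a trace flag such as 4199 (which enables query optimizer hotfixes, per KB974006) can be enabled globally at runtime with DBCC, or persisted as a startup parameter:

```sql
-- Enable trace flag 4199 globally for the running instance
-- (does not persist across a service restart).
DBCC TRACEON (4199, -1);

-- Verify which trace flags are currently active.
DBCC TRACESTATUS (4199);

-- To persist across restarts, add -T4199 as a SQL Server startup
-- parameter instead (for example via SQL Server Configuration Manager).
```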

Transact-SQL Demo Scripts

I hope to get invited to speak at SQL PASS Summit in 2012.

 

Safe travels and happy journeys,

Victor

SQL PASS Summit 2011

The Ultimate Session on Indexing

 

First and foremost, thank you very much to all of the attendees to my session on indexing.

And thanks to all the people that have thanked me for it throughout the week at the conference. I am glad that you both enjoyed it and found it useful.

Session:

The Ultimate Session on Indexing [AD-302-HD]

You can have the best hardware possible: SSDs, TBs of RAM and 100s of processors. (Don’t we all wish we had such budgets.) But if you don’t have the correct indexing strategy in place your database can still perform poorly. With the next version of SQL Server there are even more options available. Confused about what indexing strategy to use? Then this is the session for you!
In this session Victor Isakov (MCA, MCM, MCT, MVP) will cover all of the indexing options that are available in SQL Server and when best to use them. Victor will cover "practical" internals that will help you understand how indexes work and why you need to maintain them. Finally he will cover what indexing strategies to use given the most important considerations: your users, their query patterns and your data.
This should prove to be a most informative and practical session that will enable you to optimize your database performance.

Transact-SQL Demo Scripts

I hope to get invited to speak at SQL PASS Summit in 2012.

 

Safe travels and happy journeys,

Victor

Australian SharePoint Conference 2011

Planning Your SQL Server Infrastructure for a SharePoint 2010 Solution

Thank you to all who attended my session and for your positive feedback.

Again, apologies for cutting into your lunch time!

As discussed, the rationale for the slide deck was to have a reference for your DBAs to use when designing your SQL Server infrastructure. So hopefully it will prove to be a handy print-out or “cheat-sheet” when liaising with your SharePoint developers.

As always, I would love to hear about your experiences in the field with SharePoint 2010.

Planning Your SQL Server Infrastructure for a SharePoint 2010 Solution 

SharePoint is becoming more ubiquitous in the market as a critical core business application. Consequently, it is more important than ever for DBAs to be cognisant of SharePoint’s architecture and how best to provision, configure and manage the SQL Server infrastructure.

In this session Victor Isakov (MCA, MCM, MCT, MVP) will explore the SharePoint 2010 architecture and considerations that every DBA should know, including capacity planning, performance management, configuration, disaster recovery and high availability.

Sydney SQL Server Enterprise User Group: 2 March 2011

DBD: Delivering Self-Service Managed BI with PowerPivot

 

First of all, thank you to Peter Myers for delivering an informative presentation.

A big thank you to Tony Finnemore for “minding the store” while I was abroad at the MVP Summit.

Congratulations to Ade from NPG for winning the major door prize at the last meeting, a license of the Idera SQL Admin Toolset (valued at AUD $432) that was generously provided by Idera (www.idera.com) and Tony Finnemore from SQL Tools (www.sqltools.com.au).

News
DBD: Delivering Self-Service Managed BI with PowerPivot

One of the pillars of the SQL Server 2008 R2 release is Managed Self-Service BI. The SQL Server PowerPivot for Excel add-in is a key offering in this pillar, and delivers an entirely new analytic experience to Excel 2010. The add-in allows analysts to load and prepare large volumes of data from various sources to create a multidimensional model. The model can be enriched with sophisticated calculations, and can then be used as the source for PivotTable and PivotChart reports. And yet that’s not the end of the story. With the SQL Server PowerPivot for SharePoint add-in, the Excel workbooks that host the PowerPivot model can be catalogued in SharePoint and exposed as a data source for other Excel and Reporting Services reports. These SharePoint-hosted models can then be managed by IT, with scheduled data refreshes from the originating data stores.

In this session, you will be introduced to the two add-ins. An end-to-end series of theory, demonstrations and discussion will describe how you can effectively leverage the add-ins with your own analysis and reporting requirements.
One of the pillars of the SQL Server 2008 R2 release is Managed Self-Service BI. The SQL Server PowerPivot for Excel add-in is a key offering in this pillar, and delivers an entirely new analytic experience to Excel 2010. The add-in allows analysts to load and prepare large volumes of data from various sources to create a multidimensional model.The model can be enriched with sophisticated calculations. Then the model can then be used as the source for PivotTable and PivotChart reports. And yet that’s not the end of the story. With the SQL Server PowerPivot for SharePoint add-in, the Excel workbooks that host the PowerPivot model can be catalogued in SharePoint and exposed as a data source for other Excel and Reporting Services reports. These SharePoint hosted models can then be managed by IT with scheduled data refreshes from the originating data stores.In this session, you will be introduced to the two add-ins. An end-to-end series of theory, demonstrations and discussion will describe how you can effectively leverage the add-ins with your own analysis and reporting requirements.