
SQL Server vNext: Code Name "Denali"



One of the key things that I planned on doing at TechEd this year was to learn as much as I could about the vNext of SQL Server, code-named “Denali”. From all of the rumors I had heard and things I had read, SQL Server vNext promises to be Microsoft’s most exciting offering in the database space to date (which is no surprise; they don’t set out to ship worse versions of things… insert Vista jokes here).

What comes next is a series of observations that I made from the numerous sessions that I attended on the topic:

Contained Databases

While not fully baked in CTP1, we do get partially contained databases as a teaser of what is still to come once this feature is completed and ready for RTM. Contained Databases (CDB) will allow for 3 types of containment: uncontained (none), partially contained, and fully contained, which will follow one of two models: Application and Management.

My main interest as an IT Pro in CDBs is in the area of security. A contained database moves the user information out of the master database and into the database itself, carrying the SQL user and password information and the Active Directory login information along with it so the database can be reused anywhere. When the database is moved to a new server, any user account that was granted access to it on the old server now has access to the instance on the new server, and can reach any additional databases that the guest user on that instance can reach. This new feature makes moving databases from environment to environment much simpler from a code perspective (if you use the same SQL accounts or AD accounts in all environments, which you shouldn’t), however it does open up a potential threat vector if the guest user account is enabled (which it shouldn’t be).
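The containment model above can be sketched in T-SQL. This is a sketch against the Denali CTP syntax; the database, user, and domain names are made up for illustration:

```sql
-- Allow contained database authentication on the instance (off by default).
EXEC sp_configure 'contained database authentication', 1;
RECONFIGURE;
GO

-- Create a partially contained database.
CREATE DATABASE ContainedApp
    CONTAINMENT = PARTIAL;
GO

USE ContainedApp;
GO

-- A contained SQL user: the password lives in the database itself,
-- not as a login in master, so it travels with the database.
CREATE USER AppUser WITH PASSWORD = 'Str0ng!Passw0rd';

-- A contained Windows user works the same way -- no server login needed.
CREATE USER [CONTOSO\jsmith];
```

Authentication for these users happens at the database boundary rather than the instance boundary, which is exactly why the database becomes portable.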

Currently, Agent Jobs are not stored in the CDBs. They must be handled manually.

Valuable reference:

User Defined Server Roles

FINALLY! I literally stood up and applauded when this was announced. This feature has been missing from the product forever, and I have been begging for it.

Do you have a junior DBA you wish you could delegate read privileges to for all of your databases, no matter what new DBs show up over time? Have an auditor whose technical ability you trust just as far as you can throw your server? How about a manager who keeps asking for separation of duties in SQL Server?

Now you can simply create roles for all of these scenarios and not worry about the impact these activities will have on your databases! Ok, so I wouldn’t go that far… even under a read-only role, a “SELECT * FROM [database_containing_3BillionRecords]” T-SQL statement will still crash you faster than you can say foo, but you are going to do some education of the people you give access to anyway, right? Speaking of which, you also get the ability to create triggers that fire when logins are added to or removed from server roles, so you can make sure you are educating the right team members.
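As a sketch of the junior DBA scenario: the role name, the member account, and the specific permissions granted below are illustrative choices, not a prescription.

```sql
-- A user-defined server role with a modest set of server-level permissions.
CREATE SERVER ROLE JuniorDBA;
GRANT VIEW ANY DATABASE TO JuniorDBA;
GRANT VIEW SERVER STATE TO JuniorDBA;

-- Membership is managed like the fixed server roles.
ALTER SERVER ROLE JuniorDBA ADD MEMBER [CONTOSO\jdoe];
GO

-- A server-level DDL trigger that fires whenever someone is added to
-- or removed from a server role, so you know whom to educate.
CREATE TRIGGER trg_AuditServerRoleMembership
ON ALL SERVER
FOR ADD_SERVER_ROLE_MEMBER, DROP_SERVER_ROLE_MEMBER
AS
BEGIN
    -- EVENTDATA() describes who changed what; log it somewhere durable
    -- in a real deployment rather than just printing it.
    PRINT CONVERT(nvarchar(max), EVENTDATA());
END;
```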

Valuable reference:

Availability Groups (AG)

First, let’s start with a caveat: don’t plan on using Availability Groups for SharePoint. I have not heard an official word from Microsoft on this, but based upon the information available at this early stage, I do not believe this is going to be your best-case scenario for SharePoint. I expect AlwaysOn Failover Cluster Instances, which we haven’t heard much about yet, will be the SharePoint solution of choice. With that said, if you are doing any application development backed by SQL Server, you will be very interested in what comes next.

This new feature of Denali offers database-centric HADR capabilities that couple the application and the database and allow the two to move together when a failure is detected. AlwaysOn Availability Groups require Windows Server Failover Clustering (WSFC) but do not require SQL Server to be clustered. A virtual name for fast application failover has been incorporated: both the application and the database get their own virtual name, with the application getting a Client Access Point in WSFC and the database getting a virtual network name (VNN) as part of the AG.

An AG can consist of multiple databases which fail over as a single entity. A single AG can have up to 4 secondaries: 2 synchronous secondaries and 1 automatic failover pair. The addition of Active Secondaries (read-only secondaries) puts the additional hardware investment to use. Backups and reporting can now be run against your secondaries instead of simply having them sit idle waiting for a failure to occur. The performance improvements that enable these features are a new form of compression and enhanced encryption algorithms.
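A minimal sketch of the multi-database, synchronous, automatic-failover setup described above; all server, database, listener, and network names here are placeholders:

```sql
-- An Availability Group spanning two databases with a synchronous,
-- automatic-failover secondary.
CREATE AVAILABILITY GROUP SalesAG
FOR DATABASE SalesDB, OrdersDB
REPLICA ON
    'SQL1' WITH (
        ENDPOINT_URL      = 'TCP://sql1.contoso.com:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE     = AUTOMATIC),
    'SQL2' WITH (
        ENDPOINT_URL      = 'TCP://sql2.contoso.com:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE     = AUTOMATIC);
GO

-- The listener is the virtual network name applications connect to,
-- so client connection strings survive a failover.
ALTER AVAILABILITY GROUP SalesAG
    ADD LISTENER 'SalesAGLsnr' (
        WITH IP ((N'10.0.0.50', N'255.255.255.0')),
        PORT = 1433);
```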

Caveat: this is what Microsoft has said will come by RTM; the current CTP supports only a single, asynchronous secondary replica. Valuable reference:

Project “Apollo”

This as-yet-unreleased feature of Denali promises to deliver ~100x faster performance for star-join and similar queries, using the new columnstore index. “Apollo” is powered by the same VertiPaq engine that gives PowerPivot its abilities, which compresses the index data and stores the data from each column together, so each data page contains data from only a single column.

Once a columnstore index has been added to a table, the table itself becomes read-only, so no inserts, updates, or deletes can be done. If the need to insert new rows or update existing ones arises, the index must be disabled, the data modifications made, and then the columnstore index rebuilt. As a result, unless this limitation is removed before RTM, it is likely that “Apollo” will only be used in data warehouse situations.
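The disable/modify/rebuild cycle looks like this in T-SQL; the fact table, columns, and index name are hypothetical:

```sql
-- A columnstore index over the columns a star-join query would touch.
CREATE NONCLUSTERED COLUMNSTORE INDEX ix_FactSales_cs
    ON dbo.FactSales (DateKey, ProductKey, StoreKey, SalesAmount);
GO

-- The table is now read-only; disable the index before loading data.
ALTER INDEX ix_FactSales_cs ON dbo.FactSales DISABLE;

INSERT INTO dbo.FactSales (DateKey, ProductKey, StoreKey, SalesAmount)
VALUES (20110620, 42, 7, 199.99);

-- Re-enable by rebuilding the columnstore index.
ALTER INDEX ix_FactSales_cs ON dbo.FactSales REBUILD;
```

Since the rebuild reprocesses the whole index, this pattern fits periodic batch loads far better than trickle inserts, which is part of why the feature points at data warehouses.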

Some of the benefits of using “Apollo” flow from the VertiPaq engine: the column data in the index is compressed, so it takes up less space and can more easily be held in active memory. And because of this new structure, unnecessary reads no longer take place, increasing overall efficiency. While the limits may keep “Apollo” from being widely adopted, wherever it is implemented is sure to see screamingly fast performance.

In my next article, I will share some perspective on some of the new business intelligence features coming from the SQL Server team.

Categories: SQL; SharePoint; Search and Indexing; Performance and Optimization; Management

