FRAMINGHAM, MASS. — While SQL Server 2008 was little more than a service-pack-level upgrade, the 2012 version of Microsoft’s database has a boatload of new features and delivers solid performance improvements.
Specifically, SQL Server 2012 offers Business Intelligence features to help companies analyze business data, an AlwaysOn availability and uptime enhancement, Contained Databases that make databases easier to move between servers, and a quick-query feature called ColumnStore Indexes.
On the flip side, Microsoft’s new licensing model will probably cost enterprises more money. And database administrators should be aware that taking full advantage of these new features will require additional network bandwidth and will impose extra burdens on IT.
SQL Server 2012 comes in three editions: Standard, Business Intelligence and Enterprise, with most of the new features reserved for the Enterprise edition. And Microsoft has replaced its per-CPU licensing model with a per-core model. (See how we conducted our test.)
For earlier SQL Server versions, you bought one license per physical processor regardless of how many CPU cores it had. If you chose your server hardware smartly, you could buy eight CPU cores for the cost of one SQL Server license and save enough in licensing fees to pay for the new server. To license SQL Server 2012 for that same server, you’ll need eight core licenses. The new core license fees are less than the previous per-CPU fees, but, if you do the math, Microsoft has conspicuously increased SQL Server’s price.
Here’s a rundown of the new features:
Business Intelligence
SQL Server 2012’s Business Intelligence improvements essentially let users view a database as a spreadsheet. Users can program sophisticated spreadsheet formulas and reports that operate directly on database contents.
A user can, for example, program a new database report via these spreadsheet operations and then take a notebook computer running the new report (and connected wirelessly to the database server) into a meeting. The attendees can watch the report update in real time as database contents change.
Business Intelligence is a godsend for companies whose corporate policies allow (or encourage) users to program their own spreadsheets. However, BI is anathema for companies that want to control ad hoc manipulation of databases – and the decisions that ensue from such manipulation.
In companies that embrace Business Intelligence, network and database administrators will see their workloads blossom. As we tested Business Intelligence in the lab, we saw this effect firsthand. Extrapolating our results across a large company, we estimate that the unbridled use of SQL Server 2012’s Business Intelligence feature will likely increase administrator workloads by 10% to 25%.
AlwaysOn
Think of AlwaysOn as database mirroring in which the secondary (substitute) server can be an active, already-in-use SQL Server 2012 instance. The secondary server takes up the slack when a primary instance fails. Because the substitute server may not have the horsepower of the primary server and because it’s also doing other work, response times may slow dramatically. But the application blithely carries on without suffering an outage. The mirror doesn’t have to be a standby server that sits idle until failover time.
Earlier SQL Server versions offered essentially two approaches to high availability. You could configure log shipping, which copies transaction-log backups from the primary server to a standby server that continually restores them, or you could use failover clustering, in which a standby server assumes the role of the primary server upon failover.
Both approaches have their limitations. Failing over an individual database can take time, during which the database is unavailable. And cluster-based failover is costly, because the extra server (or servers) sits idle, doing no work, until the primary fails.
SQL Server 2012’s AlwaysOn feature borrows the concept of Database Availability Groups from Exchange Server 2010. AlwaysOn, however, implements the concept with a somewhat different architecture.
Unfortunately, AlwaysOn uses a great deal of bandwidth. In tests involving 50 clients feeding an Online Transaction Processing (OLTP) SQL Server 2012 database with an average 20 transactions per second, AlwaysOn’s data replication and inter-server coordination more than doubled network utilization, from 22% to 47%.
SQL Server 2012 has other high availability enhancements. For the many applications that access multiple databases concurrently, SQL Server 2012 offers Availability Groups. You assign multiple databases to an Availability Group and, when a server dies, all the databases fail over as a cohesive unit.
Availability Groups are particularly useful for transferring database access from a primary site to a remote site if the primary site suffers a catastrophic disaster. You can also set up multiple Availability Groups on a single SQL Server 2012 instance.
If disaster strikes, AlwaysOn will divide up the database retrievals and updates across the multiple servers you’ve designated in your disaster plan. A single database superserver can thus fail over to several lesser-horsepower machines. Your standby servers don’t have to be expensive, idle-most-of-the-time copies of the primary.
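For a sense of what the setup looks like, here is a minimal T-SQL sketch of an Availability Group that groups two databases and adds a readable secondary replica. The server names, endpoint URLs and database names are placeholders, and the sketch assumes the Windows failover cluster and database mirroring endpoints are already in place.

    -- Hypothetical names throughout; assumes the failover cluster and mirroring
    -- endpoints already exist and the databases are in full recovery mode.
    CREATE AVAILABILITY GROUP [OrdersAG]
    FOR DATABASE [OrdersDB], [InventoryDB]      -- these fail over as one unit
    REPLICA ON
        N'SQLNODE1' WITH (
            ENDPOINT_URL      = N'TCP://sqlnode1.example.com:5022',
            AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
            FAILOVER_MODE     = AUTOMATIC),
        N'SQLNODE2' WITH (
            ENDPOINT_URL      = N'TCP://sqlnode2.example.com:5022',
            AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
            FAILOVER_MODE     = AUTOMATIC,
            SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY));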
The Availability Group concept worked well in the lab. When we “pulled the plug” on a database server, our simulated online transaction processing application kept running normally, completely unaware that it was accessing a different server.
Note that you’ll have to make separate arrangements for the application itself and for any other system components and data files that the application relies on. In that vein, be aware that there are other high availability mechanisms that protect more than just the database server. For example, CA’s ARCserve High Availability can perform sophisticated failovers for all of an application’s computing resources. It can restart a crashed background process (i.e., Windows Service), if that’s the cause of the problem. And it offers push-button failover and failback for the highest possible level of availability, plus bandwidth tuning/throttling and data compression to use the network more frugally.
Another convenient, impressive and practical new SQL Server 2012 feature is replication to a read-only secondary. By copying database changes to the read-only secondary in a way that preserves the integrity of related database contents, SQL Server 2012 makes backing up an active, in-use database painless and quick: you simply make periodic backup copies of the read-only secondary database, not the primary.
If the read-only secondary is on a separate server, you even avoid using database server CPU and memory during the backup process. Furthermore, read-only secondaries become excellent candidates as the basis for data analysis and reporting, even while the primary database is actively in use. We liked read-only secondaries a lot.
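As a rough illustration (with hypothetical database and path names), offloading backups amounts to running ordinary backup commands on the secondary replica; note that in SQL Server 2012 a full backup taken on a secondary must be copy-only.

    -- Run on the readable secondary; names and paths are hypothetical.
    BACKUP DATABASE [OrdersDB]
        TO DISK = N'E:\Backups\OrdersDB_from_secondary.bak'
        WITH COPY_ONLY, COMPRESSION;   -- full backups on a secondary must be copy-only

    BACKUP LOG [OrdersDB]
        TO DISK = N'E:\Backups\OrdersDB_from_secondary.trn'
        WITH COMPRESSION;              -- log backups need no special option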
SQL Server 2012’s new FileTable concept was somewhat less impressive, but only because we couldn’t think of a good, practical use for it. FileTable associates an NTFS file system directory with a database table. Any file you put in the directory appears in the database, and SQL Server 2012 reflects in the database any changes you make to a file.
Backing up the database also backs up files in the associated directory. If you have ancillary data files that bear a critical relationship to the contents of a database and you want to back up the database plus the ancillary files as a consistent single unit, FileTable may be for you.
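Setting one up is short, at least in sketch form. The following assumes FILESTREAM is already enabled on the instance and that the database has a FILESTREAM filegroup; all of the names are hypothetical.

    -- Allow Windows applications non-transactional file access to the database.
    ALTER DATABASE [DocsDB]
        SET FILESTREAM (NON_TRANSACTED_ACCESS = FULL, DIRECTORY_NAME = N'DocsDB');

    -- Create a FileTable backed by a directory named DocumentStore.
    CREATE TABLE dbo.DocumentStore AS FILETABLE
        WITH (FILETABLE_DIRECTORY = 'DocumentStore',
              FILETABLE_COLLATE_FILENAME = database_default);

    -- Files copied into the DocsDB\DocumentStore share now appear as rows in
    -- dbo.DocumentStore, and changes to those files show up in the table.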
Contained Databases
Before SQL Server 2012, migrating a database meant much more than just copying database files. You also had to set up or at least synchronize database login user IDs, ensure that collation (i.e., the sort order to be used for each character set as well as the code page used to store non-Unicode character data) was configured the same for the two databases, verify compatibility levels, migrate scheduled jobs and do other tasks to manage database-related data not stored directly in the database files.
SQL Server 2012’s Contained Databases feature makes database migration a bit easier by storing the collation setting and the database login user IDs within the database. You no longer have to synchronize database login IDs between the old server and the new one. However, you still have to worry about other database-related configuration steps, such as setting up scheduled jobs on the new server.
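In sketch form (with hypothetical names), creating a contained database and a user whose credentials live inside it looks like this:

    -- Allow contained databases on the instance.
    EXEC sp_configure 'contained database authentication', 1;
    RECONFIGURE;
    GO

    -- Create a partially contained database and a user that authenticates
    -- inside the database, so there is no server-level login to migrate.
    CREATE DATABASE [PayrollDB] CONTAINMENT = PARTIAL;
    GO
    USE [PayrollDB];
    CREATE USER PayrollApp WITH PASSWORD = N'Str0ng!Passw0rd';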
ColumnStore Indexes
SQL Server 2012’s ColumnStore Index stores the data for the columns you designate column by column rather than row by row, compressing each column separately to give you a read-only, column-oriented index into the data (traditional indexes are row-oriented, storing and retrieving entire rows at a time).
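Creating one is a single statement; in this release the index is nonclustered, and building it makes the underlying table read-only. The table and column names below are hypothetical.

    -- Hypothetical fact table; once the index exists, dbo.FactSales is read-only.
    CREATE NONCLUSTERED COLUMNSTORE INDEX ix_FactSales_cs
        ON dbo.FactSales (SaleDate, StoreID, ProductID, Quantity, SalesAmount);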
Microsoft claims ColumnStore Indexes speed up data retrieval by a factor of 10. Our tests confirmed the performance gain, with retrieval speedups of at least 10x and sometimes considerably more (12x, 15x and even 20x).
The big drawback to ColumnStore Indexes is their read-only status, which makes them useful only for queries in data warehouses with huge databases. OLTP databases and ColumnStore Indexes are, by their nature and almost by definition, mutually exclusive.
Even in a data warehouse milieu, frequently loading new data into read-only tables can be quite a hassle. Microsoft describes a workaround for the read-only problem by having you switch out table partitions in your data warehouse tables. If you are desperate for better performance, the workaround might be acceptable. Alternatively, you might opt to use SQL Server 2012’s read-only secondary feature to manage the database copies you use for analysis and reporting.
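A rough sketch of that partition-switching workaround, with hypothetical table names, looks like the following; it assumes the fact table is partitioned and that the staging table matches it exactly, including a check constraint limiting the staged rows to the target partition’s date range.

    -- 1. Load the new rows into an empty staging table that mirrors dbo.FactSales.
    INSERT INTO dbo.FactSales_Staging (SaleDate, StoreID, ProductID, Quantity, SalesAmount)
    SELECT SaleDate, StoreID, ProductID, Quantity, SalesAmount
    FROM   dbo.NewSalesFeed;

    -- 2. Build the matching columnstore index on the staging table.
    CREATE NONCLUSTERED COLUMNSTORE INDEX ix_FactSales_Staging_cs
        ON dbo.FactSales_Staging (SaleDate, StoreID, ProductID, Quantity, SalesAmount);

    -- 3. Switch the staging table into an empty partition of the fact table;
    --    the switch is a metadata-only operation, so the read-only index never
    --    has to be dropped or rebuilt on the big table.
    ALTER TABLE dbo.FactSales_Staging SWITCH TO dbo.FactSales PARTITION 42;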
Speaking of indexes, SQL Server 2012’s improvements in online re-indexing are a welcome relief to administrators who from time to time have to re-index a database. SQL Server 2005 touted an online re-indexing feature, but the fine print mentioned that it didn’t work for all data types (the problem types were varchar(max), nvarchar(max), varbinary(max) and XML). SQL Server 2012 removes the restriction, so administrators get true online index maintenance for applications that are supposed to be online and available 24/7.
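For example (with hypothetical table and index names), a rebuild that SQL Server 2005 would have forced offline because the index involves large-object columns can now run online:

    -- Online rebuild now works even when the index includes (max) or XML columns.
    ALTER INDEX ix_Orders_CustomerNotes
        ON dbo.Orders
        REBUILD WITH (ONLINE = ON);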
We don’t want to appear excessively greedy, but next we’d like to see in SQL Server the ability to re-index individual table partitions online. We have a few other issues, as well. Missing from SQL Server 2012 is any significant use of PowerShell, which helps customers automate tasks through cmdlets. Other than a few cmdlets for AlwaysOn and some backup/restore functions, SQL Server 2012 makes little use of PowerShell. With the emphasis Microsoft is putting on PowerShell, we found the omission disappointing.
Ironically, the SQL Server 2012 installation process uses PowerShell. As with virtually every other current version of a Microsoft server product, Windows PowerShell 2.0 is a requirement for deploying SQL Server 2012.
We were also disappointed by the lack of improvements to SQL Server Management Studio (SSMS). Yes, Microsoft has given SSMS a Visual Studio 2010 makeover, which means you get better snippet management as well as integration with Team Foundation Server, but SQL Server 2012 offers no new DBA management tools. For instance, we would have liked to see better multi-server management and reporting features, as well as some use of PowerShell in SSMS.
Conclusion
SQL Server 2012’s many new features (some of which, like programming language enhancements, we haven’t even touched on) are a good reason to upgrade. There’s something to like for nearly everyone. Just be aware that the new version costs more, will likely increase administrator workloads and might use quite a bit more bandwidth than earlier SQL Server versions.