SQL 2014 Storage Solutions – Azure Blob Storage

SQL Server 2014 supports database files both on-prem and in the cloud.  This opens up a number of new possibilities which will assuredly spark debate.  Here we will dig into a few of the new scenarios and potential use cases.

SQL Server running in Azure – Files in Azure Blob Storage

The scenario where you have your SQL Server running in Azure is likely the most common, or at least the first scenario that comes to mind for this feature.  Running your database with files in Azure Blob Storage provides the kind of dynamic disk allocation that suits agile development and unknown growth patterns.  Handing storage maintenance off to Azure frees IT and development to focus on their applications, which is where most developers would choose to spend their effort anyway.

SQL Server running On-Prem – Files in Azure Blob Storage

Running a database on-prem with files in Azure Blob Storage has the same managed-storage benefits, but also enables additional HA capabilities.  For example, if the server running your database happens to crash, the database files in Azure are accessible from anywhere (potentially) and the database can be attached to a new instance – on-prem or in Azure.  Additionally, keeping all of the data and log files in Azure Blob Storage enables AlwaysOn Availability Groups to transition between on-prem and Azure.

SQL Server running On-Prem – Hybrid (Data in Azure, Log On-Prem)

This scenario is about tiered data storage.  The setup here is an on-prem database, with an on-prem log, and data in Azure Blob Storage.  The tiered storage piece comes into play because there would more than likely also be data on-prem.  The primary reason to put files in Azure is that they are accessed infrequently and can tolerate higher latency.  An example of this is a partitioned table, where old partitions that are not written to and rarely read are moved into a filegroup in Azure.  The active partition and log remain on-prem to maintain read and write performance.
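As a rough sketch of how that tiering might be set up (the database name, filegroup name, and storage URL below are hypothetical, and the credential from later in this post would already need to exist), an Azure-backed filegroup can be added to an existing database, after which old partitions can be rebuilt onto it:

```sql
-- Hypothetical example: add a cold-data filegroup backed by Azure Blob Storage
-- to an existing on-prem database.
ALTER DATABASE [Sales] ADD FILEGROUP [ColdData];
GO
ALTER DATABASE [Sales]
ADD FILE
(
	NAME = N'Sales_Cold',
	FILENAME = N'https://blogdemostorage.blob.core.windows.net/data/Sales_Cold.ndf',
	SIZE = 4096KB
)
TO FILEGROUP [ColdData];
GO
```

From there, moving an old partition is a matter of rebuilding it onto the [ColdData] filegroup.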

One problem with this scenario is that loss of connectivity to the files in Azure will leave the database In Recovery. 

Creating Database Files in Azure Blob Storage

The setup here is similar to steps 1–3 of backing up a SQL Server database to Azure Blob Storage.

Setting up your storage

Step 1: Create account

Creating an Azure account is quick and easy, and there is currently a one-month trial available at http://www.windowsazure.com.  Once you request your account, it may take up to 24 hours to be provisioned, so keep in mind that you will not be able to begin testing immediately.

Step 2: Add storage

Now that you have your account, you will need to add your storage.  This is quick and easy, with the only steps being to name your storage, pick a location, and choose whether your storage should be geo-redundant or locally redundant.  While backup files can be geo-redundant, data and log files do not currently support this.

[Screenshot: Create Storage Account]

Step 3: Create container

In order to save any files into your storage, you will need to add a container.  To do this, click on the name of your newly created storage account.  From here, you can monitor and change the configuration of your storage.  Click on the “Containers” navigation.  In the “New Container” dialog, pick a name that describes the files the container will hold.  When we set up our container for backups previously, we called it “sqlbackups”.  For our purposes here, we can just call this one “data”.  You also have a choice of access types – generally speaking you will leave this at the default of “Private”.

Now that you have a container created, you will see that the container has a URL.  This is the URL you will use to point your database files to this container.  The format of this URL is [storage name].blob.core.windows.net/[container].  While the URL shows http://, https:// will also work.

[Screenshot: Storage Container - Data]

Creating the database with files in the cloud

Step 1:  Creating a credential

As with backing up a database to Azure Blob Storage, a credential needs to be in place for accessing your cloud resources.  However, this credential uses different inputs, which was not very intuitive to me.  Notes on this credential:

1.      The name of the credential must be the URL of the container (you would need two credentials if you had a data container and a log container)

2.      The IDENTITY must be set to “SHARED ACCESS SIGNATURE”

3.      The SECRET is not one of the two automatically generated access keys in the storage account, as it would be for creating a backup credential.  Rather, it is generated outside of the Azure portal using the Azure API.  For this, you will either need to write some C# to generate a secret, or use one of the Storage Explorers to generate one.  If you want to write the code, head over here to see how.  Otherwise, here is how to use a storage explorer:

First, you need to get your hands on a storage explorer.  From your storage account quick start guide, click on Storage Explorers.  In this example I am using the Azure Storage Explorer, which is out on CodePlex.

[Screenshot: Storage Explorer - Add Account]

Click on the container you will put your files in (“data” in this case), and then click “Security”.  On the “Shared Access Signature” tab, we will generate a secret to use in the credential.

[Screenshot: Generate Shared Access Signature]

Make sure that you check the box for “write”, otherwise your credential will not have access to create the files in your container.  When you click “Generate Signature”, you will get a string that looks like this:

https://blogdemostorage.blob.core.windows.net/data?st=2014-01-13T03%3A16%3A31Z&se=2020-01-13T04%3A16%3A31Z&sr=c&sp=rw&sig=H6c7exHNUpVMY63bBh4nE5Epve5g6oP1xOkW0ybjQTQ%3D

The highlighted section – everything from “sr=” to the end of the string – is what we will use for the SECRET in the credential.  You could use everything from “st=” to the end instead, but that would put a time-based restriction on the access.

Now that we have generated a new SECRET, let’s create that credential!

USE MASTER
GO
CREATE CREDENTIAL [https://blogdemostorage.blob.core.windows.net/data]
WITH
	IDENTITY = 'SHARED ACCESS SIGNATURE',
	SECRET = 'sr=c&sp=rw&sig=H6c7exHNUpVMY63bBh4nE5Epve5g6oP1xOkW0ybjQTQ%3D'
GO
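As a quick sanity check (this step is optional, but it saved me some head-scratching), you can confirm the credential was created and that its name matches the container URL exactly by querying the sys.credentials catalog view:

```sql
-- The name column must match the container URL character for character,
-- and credential_identity should read 'SHARED ACCESS SIGNATURE'.
SELECT name, credential_identity, create_date
FROM sys.credentials;
```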

Step 2:  Create the database with files in Azure Blob Storage

All Cloud files

CREATE DATABASE [Test_CloudFiles]
	CONTAINMENT = NONE
ON PRIMARY
(
	NAME = N'Test_CloudFiles',
	FILENAME = N'https://blogdemostorage.blob.core.windows.net/data/Test_CloudFiles.mdf',
	SIZE = 4096KB,
	FILEGROWTH = 1024KB
)
LOG ON
(
	NAME = N'Test_CloudFiles_log',
	FILENAME = N'https://blogdemostorage.blob.core.windows.net/data/Test_CloudFiles_log.ldf',
	SIZE = 1024KB,
	FILEGROWTH = 10%
)
GO

On-Prem log, cloud data

CREATE DATABASE [Test_HybridFiles]
	CONTAINMENT = NONE
ON PRIMARY
(
	NAME = N'Test_HybridFiles',
	FILENAME = N'https://blogdemostorage.blob.core.windows.net/data/Test_HybridFiles.mdf',
	SIZE = 4096KB,
	FILEGROWTH = 1024KB
)
LOG ON
(
	NAME = N'Test_HybridFiles_log',
	FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\DATA\Test_HybridFiles_log.ldf',
	SIZE = 1024KB,
	FILEGROWTH = 10%
)
GO
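Once the databases are created, it is worth confirming where each file actually lives.  One way to do this is through sys.master_files; for the hybrid database, the data file's physical_name should show the blob URL while the log file shows the local path:

```sql
-- List the physical locations of the hybrid database's files.
SELECT name, physical_name, type_desc
FROM sys.master_files
WHERE database_id = DB_ID(N'Test_HybridFiles');
```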

Testing the performance of the databases

For a quick performance test, I threw together a script that sequentially inserts 250 integers into a table. 

	create table dbo.test
		(
		id int primary key clustered
		)

	declare @x int = 0

	while (@x < 250)
	begin
		insert into dbo.test(id)
		values(@x)

		set @x = @x + 1
	end

I then ran this script against each database.

On-Prem Data / On-Prem Log  – 0 seconds

Cloud Data / On-Prem Log – 0 seconds

Cloud Data / Cloud Log – 18 seconds

Even on my internet connection, I expected this to be pretty snappy in each scenario.  When working with Azure Blob Storage, it does take some usage to get the disks hot.  It appears that a local log brings the performance back in line, even if your only data file is in Azure Blob Storage.  Presumably this is because the log buffer flush to the transaction log occurs locally.  When I have a bit more time, perhaps I will research whether Log Writer waits occur when the log is in Blob Storage.
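A quick way to check that theory, should you want to repeat the test, is to snapshot the WRITELOG wait type before and after the insert loop on each database.  This is only a sketch of the approach; for a clean comparison you would want an otherwise idle instance:

```sql
-- WRITELOG accumulates while sessions wait on log flushes.  Capture these
-- numbers before and after the insert loop and compare the deltas between
-- the local-log and cloud-log databases.
SELECT wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type = N'WRITELOG';
```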
