Greatly Increase the Performance of Azure Storage CloudBlobClient

Windows Azure Storage boasts some very impressive transactions-per-second and general throughput numbers. But in your own applications you may find that blob storage, tables and queues all perform much slower than you’d like. This post will teach you one simple trick that increased the throughput of my application more than 50-fold.

The fix is very simple, and only a few lines of code – but I’m not just going to give it away so easily. You need to understand why this is a “fix”. You need to understand what is happening under the hood when you are using anything to do with the Windows Azure API calls. And finally, you need to suffer a little pain like I did – so that’s the primary reason why I’m making you wait. :)

The Problem – Windows Azure uses a REST based API

At first glance, this may not seem like a throughput problem. In fact, if you’re a purist, you have likely already judged me a fool for the statement above. But hear me out on this one. If someone builds a REST-based API, it is very likely that a web browser will be one of the client applications consuming that service. Now, what is one issue that web browsers have by default when it comes to consuming web services from a single domain?

“Ah!” If you are a strong web developer – or you architect many web-based solutions – you have probably just figured out the issue and are no longer reading this blog post. However, for the sake of completeness, I will continue.

Seeing that uploading a stream to a blob is a semi-lengthy and IO-bound procedure, I thought to just bump up the number of threads. The performance increased only a little, and that led me to my next question.

Why is the CloudBlobClient slow even if I increase threads?

At first I assumed that I had simply hit the limit of throughput on an Azure Blob Container. I was getting about 10 blobs per second, and thought that I probably just need to create more containers – “perhaps it’s a partitioning issue.”

This didn’t feel right because Azure Blobs are supposed to partition based on “container + blob-name”, and not just on container alone… but I was desperate. So, I created 10 containers and ran the test again. This time more threads, more containers… the result? Zero improvement. The throughput was exactly the same.

Then it hit me. I decided to run a test that “shouldn’t” make a difference – the kind I’ve done before to prove that I’m not crazy (or in some cases, to prove that I am). I ran several copies of my console application at once. The results were strange. One instance was getting about 10 inserts per second – but 3 instances were getting about 10 each, for roughly 30 total. This meant that my computer, my network and the Azure Storage service could all process far more than any one instance of my application was pushing!

This proved my hunch that “something” was throttling my application. But what could it be? My code was super simple:

while (true)
{
	// Create a random blob name.
	string blobName = string.Format("test-{0}.txt", Guid.NewGuid());
 
	// Get a reference to the blob in the storage system.
	var blobReference = blobContainer.GetBlockBlobReference(blobName);
 
	// Rewind the stream – UploadFromStream reads from the current
	// position, so without this every upload after the first is empty.
	testStream.Position = 0;
 
	// Upload the word "hello" from a MemoryStream.
	blobReference.UploadFromStream(testStream);
 
	// Increment my stat-counter.
	Interlocked.Increment(ref count);
}
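For what it’s worth, the “bump up the number of threads” experiment from earlier was just this loop body running on several workers at once. A rough sketch – `blobContainer` and `count` are assumed to be declared exactly as in the snippet above, and each thread gets its own stream:

```csharp
using System;
using System.IO;
using System.Text;
using System.Threading;

// Sketch only: blobContainer and count come from the single-threaded
// example above; 16 is an arbitrary worker count for illustration.
for (int i = 0; i < 16; i++)
{
	new Thread(() =>
	{
		while (true)
		{
			string blobName = string.Format("test-{0}.txt", Guid.NewGuid());
			var blobReference = blobContainer.GetBlockBlobReference(blobName);

			// Each thread builds its own stream so they don't share a Position.
			using (var stream = new MemoryStream(Encoding.UTF8.GetBytes("hello")))
			{
				blobReference.UploadFromStream(stream);
			}

			Interlocked.Increment(ref count);
		}
	}).Start();
}
```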

That’s when it hit me! My code is simple because I’m relying on other people who wrote code, in this case the Windows Azure Storage team! They, in turn, are relying on other people who wrote code… in their case the .Net Framework team!

So you might ask, “What functionality are they using that is so significant to the performance of their API?” That question leads us to our final segment.

Putting it All Together – Getting More Throughput in Azure Storage

As was mentioned before, the Azure Storage system uses a REST (HTTP-based) API. As was also mentioned, the developers on the storage team used functionality that already existed in the .Net Framework to create web requests to call their API. That class – WebRequest (or, more specifically, HttpWebRequest) – was where our performance throttling was happening.

By default, a web browser – or in this case any .Net application that uses the System.Net.WebRequest class – will only allow up to 2 simultaneous connections per host.
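You can see this default for yourself: ServicePoint.ConnectionLimit reports the cap that applies to a given host (the storage-account URL below is just a placeholder):

```csharp
using System;
using System.Net;

class Program
{
	static void Main()
	{
		// Ask the connection manager which limit applies to this host.
		var servicePoint = ServicePointManager.FindServicePoint(
			new Uri("http://myaccount.blob.core.windows.net/"));

		// On a default-configured .Net console application this prints 2.
		Console.WriteLine(servicePoint.ConnectionLimit);
	}
}
```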

So no matter how many threads I added in my application code, ultimately I was being funneled back into a 2-connection bottleneck. Once I proved that out, all I had to do was add this simple configuration bit to my App.config file:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
	<system.net>
		<connectionManagement>
			<add address="*" maxconnection="1000" />
		</connectionManagement>
	</system.net>
</configuration>
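If you’d rather not touch the config file, the same knob can be turned in code via ServicePointManager. This is the programmatic equivalent of the XML above (1000 is the same illustrative value, not a magic number):

```csharp
using System.Net;

// Must run at startup, before the first request to the storage host –
// a ServicePoint created earlier keeps the limit it was born with.
ServicePointManager.DefaultConnectionLimit = 1000;
```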

Now my application performs over 50 times as many inserts per second as it used to.

Windows Azure, Licensing and Saving Money

Chances are, if I asked you why you wanted to “go to the cloud”… one of your answers would be “to save money”. True, at first glance this would seem counterintuitive, because the price to rent cloud hardware is far more expensive than buying it yourself. This argument is presented to me by IT Ops folks quite often. But the beauty of the cloud – and in particular I’m talking about Windows Azure here – is that you pay “for what you use”. Herein lies the ability to save tons of money. I’m not going to go in depth into the whole rent-vs-buy argument in general… but instead what I’m going to tell you is a key to saving even more money once you decide to go Azure.

Quick Overview of How Azure Saves Money in General

I mentioned that I don’t want to get into this too much, but I feel that if you are relatively new to the idea of going to a hosted (or “public”) cloud environment – you may have done some bad math and decided that it would cost you too much money. So I feel compelled to give this brief explanation of a couple of scenarios where the cloud actually costs far less.

Scenario 1: Daily Report Generator

Suppose you need to run a process every morning that takes 2 hours and chews up all of your massive business data to provide some excellent reports to your customers. As your company grows, you need to now buy new hardware just to run this very important task.

This scenario is a perfect fit for the Azure “burst-up” model. Because your process only runs for 2 hours a day, you can “rent” a VM (or 1,000 VMs) for just those 2 hours – 2 of every 24 hours – so you pay only 1/12th of the monthly cost. Don’t forget you are also not paying for hardware upgrades, electricity, etc.
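To make the arithmetic concrete – the hourly rate below is a made-up placeholder, not a real Azure price – the burst-up savings work out like this:

```csharp
using System;

class Program
{
	static void Main()
	{
		// Hypothetical rate for illustration only – not a real Azure price.
		double hourlyRate = 0.50;
		int hoursPerDay = 2;
		int daysPerMonth = 30;

		double burstCost = hourlyRate * hoursPerDay * daysPerMonth; // pay only for hours used
		double alwaysOnCost = hourlyRate * 24 * daysPerMonth;       // VM running all month

		Console.WriteLine(burstCost);                // 30
		Console.WriteLine(alwaysOnCost);             // 360
		Console.WriteLine(burstCost / alwaysOnCost); // 1/12th of the always-on cost
	}
}
```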

Scenario 2: Seasonal Business

Suppose your company does the vast majority of its business in just a few months of the year. This makes sense if you are, let’s say, the Honey Baked Ham company, or a tax preparation company such as Jackson Hewitt (where I work). In this case, the cost of purchasing hardware that sits idle for many months out of the year is clearly a waste.

Now, How To Save Even More Money (with Licensing)

As is often the case, technology advances faster than legal entities can keep up. In this example, public clouds (Amazon EC2, Windows Azure, etc.) allow you to quickly burst up to 1,000 virtual machines for just a day if you need to. But then, what if you need to run licensed software such as Microsoft SQL Server? Are you going to buy 1,000 licenses just to use SQL Server for that short time? No way! Well, in this case the legal team at Microsoft has thought of that. So… you will be able to “rent” SQL Server licenses by the hour, just like you can rent the actual virtual machine.

By the way – I’ve not posted this until now because I’ve been under NDA… also, there is a whole lot more I’d like to say, but can’t yet. Here is the publicly available Microsoft site where I’m quoting from: https://www.windowsazure.com/en-us/pricing/details/#header-3. And here is a screenshot in case the link above changes in the future :)

OK, so now that my paranoid “I’m not spilling any beans” disclaimer is out of the way, I’ll continue.

You might be asking, “What is the point here?” No, I’m not simply restating the fact that you can rent SQL licenses by the hour in Azure. The point I’m about to make can save you thousands of dollars a month. And here it is…

Test your payload before choosing a VM size! The reason I stress this is as follows. If you look at the chart above, the SQL licensing for an XL VM is twice that of a Large VM. This may seem normal to you, because an XL VM gives you twice as much horsepower in terms of CPUs and RAM. And typically, you pay per CPU when it comes to SQL licensing.

The “gotcha” here is that an XL VM may not necessarily give you 2x the performance of a Large. The reason for this (in the case of Windows Azure) is that while you get 2x the CPU, RAM and bandwidth… you do not get any more performance in your most likely bottleneck – disk speed. I’ve brought this out several times in the last few blog posts, particularly this one: “Windows Azure IaaS Performance – SQL and IOps”.

In my most recent testing, I’ve proven out that a write-heavy workload on an Extra Large VM has the same throughput as on a Large VM in Azure. For my test, CPU usage was less than 5%, RAM usage was less than 5% and network usage was nearly 0%. So cutting back to a Large VM cut the price in half but didn’t hinder the throughput at all.

Take these savings and multiply them by the number of machines in your workload and you can save a lot of money. In the real-world case that prompted this test, we’re saving tens of thousands of dollars every month.