The Daily Parker

Politics, Weather, Photography, and the Dog

Lowest electricity bill ever

Regular blog readers know that since moving to my current apartment in February 2008, the Inner Drive Technology International Data Center has occupied a couple of square meters of my home office. I've also mentioned how my energy use has dropped since I started moving everything out of the IDTIDC and into Microsoft Azure.

Something else has happened to my electricity bill. In November, we citizens of Chicago voted to pool our electricity buying to get the lowest cost possible. The new regime kicked in last month: the 660 kWh I used in February cost 25% less than the 610 kWh I used in January, which was itself the lowest usage ever for this place. (Since February's usage was about 8% higher while the bill was 25% lower, the per-kilowatt-hour rate dropped roughly 30%, ignoring fixed charges.)

It helps, also, that since moving my email to the cloud in June, I've used an average of 224 kWh less electricity each month, year over year.

I can't wait to see my bill for March. They read my meter on the 7th or 8th to prepare the bill I just got; the IDTIDC shut down on the 10th.

Azure training...?

I'm paying 90% of my attention right now to a Windows Azure online training class. I already knew a lot of the material presented so far, but not all of it. It's like re-taking a class you took as an undergraduate; the 10% you didn't know is actually really helpful.

Like next week's class, which will go over Infrastructure as a Service: a lot has changed in the last year, so it should be valuable.

Apparently, though, my homework is to build an Azure web site this week. Not a multi-tier application with a worker role. Just a web site. How adorable.

Weather Now 4.0 in Production

The Inner Drive Technology International Data Center is no more.

This morning around 8:15 CDT I updated the master DNS records for Weather Now, and shut down the World Wide Web service on my Web server an hour later. All the databases are backed up and copied; all the logs are archived.

More to the point, all the servers (except my domain controller, which also acts as a storage device) are off. Not just off, but unplugged: even switched off, the little vampires continue to draw tens of watts of power.

The timing works out, too. My electric meter got read Thursday or Friday, and my Azure billing month starts today. That means I have a clean break between running the IDTIDC and not running it,* and by the beginning of May I'll have more or less the exact figures on how much I saved by moving everything to the Cloud.

Meanwhile, my apartment is the quietest it's ever been.** The domain controller is a small, 1U server with only one cooling fan. Without the two monster 2U units and their four cooling fans (plus their 12 hard drives), I can suddenly hear the PDC...and now I want to shut it down as well.

* Except for the DSL and land-line, which should be down in a couple of weeks. I'll still have all the expense data by May.

** Except for the two blackouts. Now, of course, I never need worry about a blackout again—unless it hits the entire country at once, which would create new problems for me.

That's all he wrote

Weather Now is fully deployed to the Cloud. As soon as the Worker Role finishes parsing the last few hours of weather, I'll cut over the DNS, and it will be live.

Actually, that's not entirely true; I'm going to cut over the DNS in the morning, after I know I fixed the bugs I found during this past week's shake-down cruise.* So if you want to see what a weather site looks like while it's back-filling its database, you can go to its alias, http://wx-now.cloudapp.net. (Because of how Azure works, this will remain its alias forever.)

Time to meet my friends, who are wondering where I am, no doubt.

* Bugs fixed: 13. Total time: 6.9 hours (including 2.4 to import and migrate the Gazetteer).

While the data uploads...

The final deployment of Weather Now encountered a hitch after loading exactly 3 million of its 7.2 million place names. I've now kludged together a workaround for the remaining 4.2 million rows, and made a contingency plan should that upload fail too.
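The workaround's details matter less than two properties any bulk upload like this should have: idempotent writes and the ability to resume mid-stream, so a crash just means running it again. Here's a minimal sketch of that idea, assuming the Gazetteer lands in Azure table storage (the post doesn't actually say where it lives); the GazetteerEntity class, the pre-sorted input, and all the names are my illustrations, not Weather Now's actual code.

// Minimal sketch, not Weather Now's actual uploader. Assumes the
// Azure Storage SDK 2.x and rows arriving pre-sorted by PartitionKey.
using System.Collections.Generic;
using Microsoft.WindowsAzure.Storage.Table;

public class GazetteerEntity : TableEntity
{
    public string PlaceName { get; set; } // hypothetical property
}

public static class GazetteerUploader
{
    public static void Upload(IEnumerable<GazetteerEntity> rows, CloudTable table)
    {
        var batch = new TableBatchOperation();
        string partition = null;

        foreach (var row in rows)
        {
            // An entity-group transaction takes at most 100 entities, all
            // sharing one partition key, so flush on either boundary.
            if (batch.Count == 100 ||
                (partition != null && partition != row.PartitionKey))
            {
                table.ExecuteBatch(batch);
                batch = new TableBatchOperation();
            }
            partition = row.PartitionKey;
            // InsertOrReplace makes a re-run harmless: rows that made it
            // up before a crash simply get written again.
            batch.InsertOrReplace(row);
        }

        if (batch.Count > 0) table.ExecuteBatch(batch);
    }
}

The InsertOrReplace is the contingency plan in miniature: if the connection dies at row 3,000,001, you start the stream over (or skip ahead to a checkpoint) and nothing already uploaded gets hurt.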

Meanwhile, I have a saturated Internet connection. So rather than sit here and watch paint dry, so to speak, I'm bringing back some of the bugs that I decided to postpone fixing. The end result, I hope, will be a better-quality application than I'd planned to release—and a rainy Saturday made useful.

The Inner Drive Technology International Data Center's last day

Tomorrow morning, shortly after I have my coffee, I will finally turn off the last two production servers in my apartment, and with them the IDTIDC. The two servers in question, Cook and Kendall, have run more or less continuously since November 2006*, gobbling up power and making noise the whole time.

As I write this, I'm uploading the production Weather Now deployment along with the complete Inner Drive Gazetteer, a 7.2-million row catalog of place names that the site uses for finding people's local weather. It takes a while to upload 7.2 million of anything, of course; and it's only 35% done after two hours. Trying to deploy the Cloud package at the same time may not have made the most sense, but I need the weather downloader to start running now so that when I cut over to the new site, it has actual weather to show people.

I started this project on November 3rd, and I've logged almost exactly 100 hours on it through today. I'm through the tunnel and almost done climbing up the embankment. One more night of whirring fans and then...quiet.

Update: Crap. The Gazetteer upload crashed after 3 million rows. Now Plan B...

* Yes, I did just link to the Wayback Machine there. The original Inner Drive blog is offline for the time being. I have a task to restore it, but since I haven't updated it since 2008, it's not a priority.

Another update: the original link at (*) pointed to the Wayback Machine, but after reconstituting the old blog I corrected the link. That's why the footnote above no longer makes much sense.

The very bad week of Microsoft Windows Azure

Microsoft has suffered some unfortunate outages this week, first affecting SQL databases on Monday, and then Storage yesterday:

On Friday, February 22 at 12:44 PM PST, Storage experienced a worldwide outage impacting HTTPS traffic due to an expired SSL certificate. This did not impact HTTP traffic. We have executed repair steps to update SSL certificate on the impacted clusters and have recovered to over 99% availability across all sub-regions. We will continue monitoring the health of the Storage service and SSL traffic for the next 24 hrs. Customers may experience intermittent failures during this period. We apologize for any inconvenience this causes our customers.

The outage caused problems throughout the Azure universe, because SSL-based storage underpins just about everything. Without Storage, for example, any VM that goes offline can't restart, because its VHD is kept in Storage. Web sites and Service Bus were also hosed. My customers were annoyed.

These problems can affect any computing system; what set the Azure Storage outage apart was its scope: millions of applications. Even the largest colocation data center has only tens of thousands of computers. With so many people affected, the outage looks like a disaster.

I'll be watching Microsoft closely over the next few days to see what more they can tell us about the outage. But if this was all due to expiring certificates, wow.
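It's also a reminder to watch one's own certificates. As an aside, here's a hedged sketch of the kind of check that catches a certificate before it expires; the host name is a made-up example, not an actual monitoring target:

// Illustrative certificate-expiry check, not anything Microsoft runs.
using System;
using System.Net.Security;
using System.Net.Sockets;
using System.Security.Cryptography.X509Certificates;

class CertificateCheck
{
    static void Main()
    {
        const string host = "myaccount.blob.core.windows.net"; // hypothetical endpoint
        using (var client = new TcpClient(host, 443))
        using (var ssl = new SslStream(client.GetStream()))
        {
            ssl.AuthenticateAsClient(host);
            var cert = new X509Certificate2(ssl.RemoteCertificate);
            var daysLeft = (cert.NotAfter - DateTime.Now).TotalDays;
            Console.WriteLine("{0} expires {1:d} ({2:N0} days away)",
                host, cert.NotAfter, daysLeft);
        }
    }
}

Run something like that from a scheduled task and page someone when the days remaining dip below 30, and an expired certificate becomes a calendar item instead of a worldwide outage.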

Two Microsoft Azure outages in 24 hours

Over the past two days, Microsoft Azure had two outages they're still investigating. The first, from 18:26 CST through 20:00 CST Monday (0026 to 0200 UTC Tuesday), and the second, from 13:50 to 15:27 CST (1950-2127 UTC) yesterday, affected SQL Database and related services in the Azure datacenter outside Washington, D.C.

I noticed the Monday evening outage as it happened, because when a database goes down, a number of applications start sending me emails. A couple of people had minor inconveniences, but as it happened on a holiday evening, the damage wasn't too severe.

I did not notice the Tuesday afternoon outage, which did affect a lot of people and made some of my clients very angry, because I was on an airplane. When I landed and turned on my phone, I had 300 emails from various applications and mercifully only 4 from angry clients. (Welcome home!)

Microsoft hasn't reported the cause yet, but given the maintenance they had planned, started, and then backed out on Sunday night, they may have a clue. They have a second round of maintenance planned for tonight at midnight CST (0600 UTC). I'll be watching carefully tomorrow morning.

When the Azure emulator is more forgiving than real life

Last night I made the mistake of testing a deployment to Azure right before going to bed. Everything had worked beautifully in development, I'd fixed all the bugs, and I had a virgin Windows Azure affinity group complete with a pre-populated test database ready for the Weather Now worker role's first trip up to the Big Time.

I should have predicted the worker role's first complete and total failure. Just as I do in the brick-and-mortar development world, I create low-privilege SQL accounts for applications to use, so immediately I had a bunch of SQL exceptions, which I resolved with a few GRANT EXEC commands. No big deal.
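For anyone who hasn't done this dance, the statements themselves are one-liners. A hedged example, run through plain ADO.NET, with a hypothetical stored procedure and login (the real Weather Now schema isn't shown in this post):

// Illustrative only; the procedure and login names are made up.
using System.Data.SqlClient;

class GrantPermissions
{
    static void Main()
    {
        const string connectionString = "..."; // an admin connection to the database
        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            using (var command = new SqlCommand(
                "GRANT EXECUTE ON OBJECT::dbo.GetDownloaderSettings TO WeatherNowApp;",
                connection))
            {
                command.ExecuteNonQuery();
            }
        }
    }
}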

Once I restarted the worker role, it connected to the database, loaded its settings, downloaded a file from NOAA and...crashed:

Inner Drive Weather threw System.Data.Services.Client.DataServiceRequestException
...
OutOfRangeInput

One of the request inputs is out of range.
RequestId:572bcfee-9e0b-4a02-9163-1c6163798d60
Time:2013-02-10T06:05:41.5664525Z

at System.Data.Services.Client.DataServiceContext.SaveResult.d__1e.MoveNext()

Oh no. The dreaded Azure Storage exception that tells you absolutely nothing.

Flash forward fifteen minutes (now past midnight; and for context, I'm writing this on the 9am flight to Los Angeles), with Fiddler running against a local instance connected to production Azure storage, and I found the XML block on which real Azure Storage barfed but which the storage emulator passed without a second thought. The offending table entity is metadata that the NOAA downloader worker task stores to let the weather-parsing worker task know it has work to do:

<?xml version="1.0" encoding="utf-8" standalone="yes"?>
   <entry xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices" 
   xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata" 
   xmlns="http://www.w3.org/2005/Atom">
  <title />
  <author>
    <name />
  </author>
  <updated>2013-02-10T05:55:49.3316301Z</updated>
  <id />
  <content type="application/xml">
    <m:properties>
      <d:BlobName>20130209-0535-sn.0034.txt</d:BlobName>
      <d:FileName>sn.0034.txt</d:FileName>
      <d:FileTime m:type="Edm.DateTime">2013-02-09T05:35:00Z</d:FileTime>
      <d:IsParsed m:type="Edm.Boolean">false</d:IsParsed>
      <d:ParseTime m:type="Edm.DateTime">0001-01-01T00:00:00</d:ParseTime>
      <d:PartitionKey>201302</d:PartitionKey>
      <d:RetrieveTime m:type="Edm.DateTime">2013-02-10T05:55:29.1084794Z</d:RetrieveTime>
      <d:RowKey>20130209-0535-41d536ff-2e70-4564-84bd-7559a0a71d4d</d:RowKey>
      <d:Size m:type="Edm.Int32">68202</d:Size>
      <d:Timestamp m:type="Edm.DateTime">0001-01-01T00:00:00</d:Timestamp>
    </m:properties>
  </content>
</entry>
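For context, here is roughly what the entity class behind that XML might look like. This is my reconstruction from the serialized properties above, not the actual Weather Now source; the class name is invented, though the post does say the entity inherits from TableServiceEntity.

// Reconstructed from the XML above; only the property names and the
// base class are grounded in the post. Uses the 2013-era StorageClient
// library.
using System;
using Microsoft.WindowsAzure.StorageClient;

public class WeatherFileEntry : TableServiceEntity // supplies PartitionKey, RowKey, Timestamp
{
    public string BlobName { get; set; }
    public string FileName { get; set; }
    public DateTime FileTime { get; set; }
    public bool IsParsed { get; set; }
    public DateTime ParseTime { get; set; }  // defaults to DateTime.MinValue: the culprit
    public DateTime RetrieveTime { get; set; }
    public int Size { get; set; }
}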

Notice that the ParseTime and Timestamp values are equal to System.DateTimeOffset.MinValue, which, it turns out, is not a legal Azure table value. (Table storage won't accept dates before 1601-01-01 UTC.) Wow, would it have helped me if the emulator had horked on those values during development.

The fix was simply to make sure that neither System.DateTimeOffset.MinValue nor System.DateTime.MinValue ever got into an outbound table entity, which took me about five minutes to implement. Also, it turned out that even though my table entity inherited from TableServiceEntity, I still had to set the Timestamp property when using real Azure storage. (The emulator sets it for you.)
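In code, the five-minute fix amounts to clamping the dates and stamping the entity before it goes out the door. A minimal sketch, under the same assumptions as the reconstructed class above; the entity set name is illustrative:

// Sketch of the fix, not the actual Weather Now code.
using System;
using Microsoft.WindowsAzure.StorageClient;

public static class EntitySanitizer
{
    // The earliest DateTime Azure table storage will accept.
    static readonly DateTime AzureMinDateTime =
        new DateTime(1601, 1, 1, 0, 0, 0, DateTimeKind.Utc);

    static DateTime Clamp(DateTime value)
    {
        return value < AzureMinDateTime ? AzureMinDateTime : value;
    }

    public static void Save(TableServiceContext context, WeatherFileEntry entry)
    {
        entry.ParseTime = Clamp(entry.ParseTime);
        entry.FileTime = Clamp(entry.FileTime);
        // The emulator sets Timestamp for you; real table storage does not.
        entry.Timestamp = DateTime.UtcNow;
        context.AddObject("WeatherFiles", entry);
        context.SaveChangesWithRetries();
    }
}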

By this point it was 12:30, however, and I needed to get some sleep. So my plan to run an overnight test will have to wait until this evening at my hotel. Then I'll find the other bits of code that work fine against the emulator but that, for reasons that pass understanding, real Azure storage gets completely wrong.