The Daily Parker

Politics, Weather, Photography, and the Dog

Thinking about vacation...where to go?

Unfortunately, that's not going to happen for a while. I'm going to spend a lot of time in airplanes over the next 11 days, including a long weekend with the folks. Good thing wifi is ubiquitous, even on airplanes, because it also looks like I'm going to burn at over 120% of utilization again this month. (Last month I was 118% billable, but if you add non-billable time I actually worked 134% of full time.)

The madness ends soon. We're hiring, projects are gelling, other projects are winding down, and at some point I'll just get on a plane for four days without taking my laptop.

I did take three hours yesterday to play pub trivia with my droogs, owing to the start of a four-week trivia tournament. We're in second place—by one point. I sincerely hope to make the next three Thursdays.

Quick link round-up

I'll be a lot less busy in March, they tell me. Meanwhile, here are some things I want to read:

I will get to them...soon...

Under the hood of Weather Now

My most recent post mentioned finishing the GetWeather component of Weather Now, my demo project that provides near-real-time aviation weather for most of the world. I thought some readers might be interested to know how it works.

The GetWeather component has three principal tasks:

  • retrieving raw weather data files from NOAA;
  • parsing the data in those files; and
  • storing the parsed results.

In the Inner Drive Technology world, an Azure worker process uses an arbitrary collection of objects that implement the IWorkerTask interface. The interface defines Interval and LastRun properties and an Execute method, which is all the worker process needs to know. The tasks are responsible for their own lifespans, reentry prevention, etc. (That's another discussion.)
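
Here's a minimal sketch of what that contract and its driver might look like; the member names Interval, LastRun, and Execute come from the description above, while the types and the loop itself are my assumptions:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Sketch of the IWorkerTask contract described above. Only the member
// names come from the post; the types are assumptions.
public interface IWorkerTask
{
    // How often the worker process should consider running this task.
    TimeSpan Interval { get; }

    // When the task last ran; combined with Interval, this tells the
    // worker process whether the task is due.
    DateTimeOffset LastRun { get; }

    // Does the actual work. Tasks handle their own lifespans and
    // re-entry prevention internally.
    void Execute();
}

// The worker process can then drive an arbitrary collection of tasks
// without knowing anything else about them:
public static class WorkerLoop
{
    public static void RunDueTasks(IEnumerable<IWorkerTask> tasks)
    {
        foreach (var task in tasks
            .Where(t => DateTimeOffset.UtcNow - t.LastRun >= t.Interval))
        {
            task.Execute();
        }
    }
}
```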

In order to decouple the data source (NOAA now, other sources in the future) from the application, I split the three tasks into two IWorkerTask classes:

  • The NoaaFileDownloadingWorkerTask opens an FTP connection to the NOAA public weather servers, retrieves the files it hasn't already retrieved, and stores the contents in Azure Blob Storage; and
  • The NoaaFileParsingWorkerTask pulls the files out of Azure Storage, parses them, and stores the results in an Azure SQL Database and Azure table storage.
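
In sketch form, the downloading half might look something like the following. The FTP host, container name, and duplicate-detection logic are placeholders; only the overall flow (FTP to blob storage, then a clean exit) comes from the description above:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Net;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.Storage;

public class NoaaFileDownloadingWorkerTask : IWorkerTask
{
    public TimeSpan Interval
    {
        get { return TimeSpan.FromMinutes(5); }
    }

    public DateTimeOffset LastRun { get; private set; }

    public void Execute()
    {
        var account = CloudStorageAccount.Parse(
            CloudConfigurationManager.GetSetting("StorageConnectionString"));
        var container = account.CreateCloudBlobClient()
            .GetContainerReference("noaa-raw"); // container name is a placeholder
        container.CreateIfNotExists();

        foreach (var fileName in GetNewFileNames())
        {
            // The host and path are placeholders for NOAA's public FTP servers.
            var request = (FtpWebRequest)WebRequest.Create(
                "ftp://ftp.example.noaa.gov/cycles/" + fileName);
            request.Method = WebRequestMethods.Ftp.DownloadFile;

            using (var response = request.GetResponse())
            using (var stream = response.GetResponseStream())
            {
                // Store the raw file contents in blob storage for the
                // parsing task to pick up later.
                container.GetBlockBlobReference(fileName).UploadFromStream(stream);
            }
        }

        LastRun = DateTimeOffset.UtcNow;
    }

    private IEnumerable<string> GetNewFileNames()
    {
        // Placeholder: a real implementation would compare the FTP
        // directory listing against what's already in blob storage.
        return Enumerable.Empty<string>();
    }
}
```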

I'm using Azure storage as an intermediary between the two sides of the process because my analysis led me to conclude that they're really independent of each other. Coupling the two tasks in the current (2002) version of GetWeather causes all kinds of problems, not least that a failure in one task can stop the whole thing. If, as happens given the nature of the Internet, the FTP side hits an unrecoverable problem, the application has to restart. In actual practice it simply kills itself and waits for the next time it runs, which can be a while, because it runs as a Windows Server 2008 scheduled task every 30 minutes.

The new architecture will allow the parser to run every minute or two, see if it has anything to do by looking at some metadata, and do its job if needed. I can change a system setting to stop it from running (for example, because I need to do some database maintenance), while letting the downloader continue to work separately.
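
As a sketch, the parser's Execute method might gate itself like this; SystemSettings, the "ParserEnabled" setting, and the two helper methods are hypothetical names, not anything from the actual application:

```csharp
public void Execute()
{
    // A system setting lets me switch the parser off (say, for database
    // maintenance) without touching the downloader. "ParserEnabled" and
    // SystemSettings are hypothetical names.
    if (!SystemSettings.GetBoolean("ParserEnabled")) return;

    // Look at the metadata to see whether any unparsed files are
    // waiting; if not, this run is a no-op.
    var pending = GetUnparsedFileNames();
    if (!pending.Any()) return;

    foreach (var fileName in pending)
    {
        ParseAndStore(fileName);
    }
}
```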

On the other side, the downloader can run every 5 minutes, snatch the one or two files it needs from NOAA, and shut down cleanly without waiting for the parser. NOAA likes this because the connection is only open for a few seconds, instead of the 27 minutes it stays open right now. And if the NOAA server isn't available, so what? It's a clean shutdown and a clean start a few minutes later.

This design also allows me to do something else: manually upload files for parsing and storage. This helps with testing, migration, service interruptions—all things that the current architecture has made nearly impossible.

I'm not entirely done with the application (and while writing this I just thought of an improvement I'll need to make to prevent infinite retries), but it's close. And I'm really pleased with it so far. Stay tuned; I can now set a tentative public launch date of March 31st.

Resolving the oldest case

Five years ago, on 6 January 2008, I opened a FogBugz case (#528) to "Create NOAA Downloader". The NOAA downloader goes out to the National Weather Service, retrieves raw weather data files, and stores the files and some metadata in Windows Azure storage.

Marking this work item "resolved"

Well, I just finished it, which means I've now finished every piece of the GetWeather application, and with it the last significant piece of the Weather Now 4.0 rewrite. Total time to rewrite GetWeather: 42 hours. Total time for the rewrite so far: 66 hours.

Now all I have to do is...let's see...create worker role tasks to run the various pieces of the application (getting the weather, parsing the weather, storing the weather, and cleaning up the database), upgrade the Web site to a full Cloud Services application, deploy it to Azure, and deploy its gazetteer. That should be about 5 more hours of work. Then, after a couple of weeks of mostly-passive testing, I can finally turn off the Inner Drive Technology Worldwide Data Center.

How to build software

Via Fallows, a software designer explains how a simple feature isn't:

This isn’t off the shelf, but that’s OK — we’ll just build it, it’s not rocket science. And it’s a feature that’s nice, not one that’s essential. Lots of people won’t use these tabs.

So, what do we need to think about when adding a bar of tabs like this?

  • The whole point is to have a view state that summarizes what you’re looking at and how it’s presented. You want to switch between view states. So we need a new object that encapsulates the View State, methods for updating the view state when the view changes or you switch tabs, methods for allocating memory for the view state and cleaning up afterward.
  • You need a bar in which the tabs live. That bar needs to have something drawn on it, which means choosing a suitable gradient or texture.
  • The tab needs a suitable shape. That shape is tricky enough to draw that we define an auxiliary object to frame and draw it.
  • Whoops! It gets drawn upside down! Slap head, fix that.

...and on for another 16 steps. He concludes, among other things:

This is a hell of a lot of design and implementation for $0.99. But that’s increasingly what people expect to pay for software. OK: maybe $19.95 for something really terrific. But can you sell an extra 100 copies of the program because it’s got draggable tabs? If you can’t, don’t you have better things to do with your time?

He's developing a commercial application that he sells, so he may not figure the cost of development the same way I do. Since clients pay us for software development, it's a reflex for me to translate development time into dollars. I don't know how much the tab feature cost him to develop, but I do know that migrating Weather Now to Azure (discussed often enough on this blog) would have cost a commercial client about $9,000 so far, with another $3,000 or so to go. And the Inner Drive Extensible Architecture? That's close to $150,000 of development time—if someone else were paying for it.

And all you wanted was a little tab on your word processor...

Azure table partition schemes

I'm sitting at my remote office working on a conundrum: how to balance human usability against good software design.

The problem: how do I create an Azure table partitioning scheme that uses Azure efficiently but still lets the user (me) troubleshoot problems with the feature in question? This is a direct consequence of the issues I worked on this morning.

The feature is the component of the Weather Now parsing system that stores raw weather data from NOAA temporarily. By "temporarily" I mean until I delete it. Keeping the raw data will let me figure out why problems occur and will let the application apply new features to old data in the future.

NOAA publishes "cycle files" about every 3-6 minutes. The cycle uses a predictable sequence of 750 file names that repeats about every 4 days: the names run from file000 through file749, then start over at file000. Sometimes, however, NOAA restarts the sequence at 0, skips files, or just crashes entirely, so the feature has to treat the file names as effectively random. That said, the files have definite publication times, and they generally contain weather data gathered within a short time before publication, a pattern predictable enough that Weather Now can optimize itself around it.
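
For what it's worth, the nominal sequence is trivial to generate (the fileNNN format is just shorthand for the naming described above); the real work is coping with the exceptions:

```csharp
// The nominal cycle: file000 through file749, then around again.
// In practice NOAA skips and restarts, so the application treats
// these names as arbitrary keys rather than a reliable ordering.
// (Requires using System.Linq.)
var cycleNames = Enumerable.Range(0, 750)
    .Select(i => string.Format("file{0:D3}", i))
    .ToList();
```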

You can have practically unlimited Azure tables in a storage account; I would imagine the number is close to the Int32 maximum value of 2.1 billion. Each table can have billions of partition keys as well. Searching on a combination of Azure table name and partition key takes the same length of time no matter how many tables are in the storage account or how many partition keys each table has. Under the hood, Azure manages the indexing so efficiently that network latency will be the bigger problem in all but a few edge cases.

For Weather Now, my first thought was to create a new table for each month's NOAA files and partition each table by day. So the weather-parsing process would put the metadata for a file downloaded right now in the table "noaa201301" under the partition key "20130127". That would give me about 5,700 rows in each table and about 190 rows in each partition.

I'm reconsidering. Given it's taken 11 years to change the way that Weather Now retrieves and stores weather data, using that scheme would give me 132 tables and 4,017 partitions, each of them kind of small. Azure wouldn't care, but it would over time clutter up the application's storage account. (The account will be cluttered enough as it is, with the millions of individual weather reports tabled by station and partitioned by month.)

On reflection, then, I'm going to create a new table of metadata each year and partition it by month. An Azure table with 69,000 rows (the number of NOAA files produced each year) isn't appreciably less efficient than one with 69 rows or 69 million, as it turns out, and it will still partition the data exactly as the partition key dictates. But cutting the number of partitions 30-fold means that browsing a month of files touches one partition instead of thirty, which could make a real difference in efficiency.
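
In code, the chosen scheme boils down to two trivial mappings (the class and method names here are mine):

```csharp
using System;

public static class NoaaMetadataKeys
{
    // One table of metadata per year, e.g. "noaa2013".
    public static string TableNameFor(DateTime published)
    {
        return "noaa" + published.ToString("yyyy");
    }

    // One partition per month, e.g. "201301".
    public static string PartitionKeyFor(DateTime published)
    {
        return published.ToString("yyyyMM");
    }
}

// The rejected scheme (a table per month, "noaa201301", partitioned by
// day, "20130127") would have produced 132 tables and 4,017 partitions
// over 11 years; this one produces 11 tables and 132 partitions.
```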

I'm open to contrary evidence. In fact, I'd love to find some. But given the frequency of data reads (one every 5 minutes or so), and the thousands of tables already in the application's storage account, I think this is the best way to go.

Why this last Azure move is taking so long

The Inner Drive Technology International Data Center continues to whir away (and use electricity), despite my best efforts to shut it down by moving everything to Microsoft Windows Azure.

Most of the delay in finishing the move has nothing to do with its technology. Simply put, my real job has taken a lot of time this month as we've worked toward launching a new application tomorrow. Between the 145 hours spent on that project this month and the 38 hours spent helping with other projects, squeezing out the 22 hours I've managed to find for Weather Now has left me falling behind on the Oscar nominees.

For those just joining our story, Weather Now remains the last living application in the IDTIDC. This application shows real-time aviation weather for almost every airport in the world. I wrote the first version in 1998, moved it to its own domain in 2000, and published the last significant update in 2010.

The application benefited for most of its life by having practically unlimited hardware and system software to run on. As a Microsoft partner, I've gotten access to Windows Server, SQL Server, and other goodies for my entire professional life. Moving to Azure changes the calculus radically.

Weather Now runs on Microsoft SQL Server 2012 Enterprise with essentially limitless disk space. In the past 14 years, the application has quietly gathered 50 GB of data, merrily occupying a physical partition scheme that takes up a good bit of a RAID-5 volume. Creating a similar architecture in Azure exceeds my budget a bit: a single medium VM to run the application and its GetWeather component plus a 50 GB SQL Database would cost about $250 per month.

Fortunately, I don't have to do that. Most of the data, you see, hardly ever gets used.

Weather Now usually has around 4,500 current weather observations and 165,000 observations from the last 24 hours. Since each row is small, and the index is positively tiny (only the station ID and observation times are indexed), the current table uses about 1.5 MB of space and the last-24 table uses 43 MB. That doesn't even break a sweat on Azure SQL Database.

No, it's the Archive table that grows like the Beast from Below. That one has all of the past observations since the site started. In some cases I've pruned the table, but basically, it has one row per observation per station. For an average station like O'Hare, that means about 10,500 per year. For a chatty, automated station that spits out a report every 20 minutes, it stores about 27,000 per year. For 2012 alone, that works out to about 47 million* rows—growing at 4 million per month.

What to do? Well, stop storing it was my first thought. It hardly ever gets used, partially because the UI doesn't have a way to pull out historical data.

On the other hand, I've frequently wanted to illustrate blog entries with specific weather reports that have permanent links. And this problem, such as it is, does not have a difficult solution.

So, among its other features, Weather Now 4.0 will store archival weather reports in Azure table storage. It won't have the full 50 GB of material initially, possibly ever; but even if it did, it would only cost about $5 per month to store it. And I've hit on a partitioning scheme that will, eventually, make finding archival data really quick and easy, no matter how much of it there is.
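
As a sketch of what that might look like, here's a hypothetical table entity following the "tabled by station, partitioned by month" scheme mentioned above; the class and property names are my inventions:

```csharp
using System;
using Microsoft.WindowsAzure.Storage.Table;

// Hypothetical entity for one archived observation. The scheme of
// reports "tabled by station and partitioned by month" comes from the
// post; everything else here is an assumption.
public class ArchivedReportEntity : TableEntity
{
    public ArchivedReportEntity() { } // required by the table client

    public ArchivedReportEntity(string stationId, DateTime observedUtc, string rawReport)
    {
        // With one table per station, partitioning by month keeps any
        // single partition to a month of observations...
        PartitionKey = observedUtc.ToString("yyyyMM");

        // ...and using the observation time as the row key makes a
        // permanent link as simple as station + timestamp.
        RowKey = observedUtc.ToString("yyyyMMddHHmm");

        StationId = stationId;
        RawReport = rawReport;
    }

    public string StationId { get; set; }
    public string RawReport { get; set; }
}
```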

The conclusion should be obvious: If you start looking at things the Azure way, using Azure can save you tons of money. My current estimate of the monthly cost to run Weather Now, assuming current visitor levels and acceptable performance on "very small" Cloud Services instances, is $40 per month. If it eventually amasses 50 GB of archives, it will cost...$42 per month. And if I get thousands of visitors that require upgrading to a "small" instance, I'll start selling subscriptions, but I won't have to buy new equipment because it's Azure.

More on this later. Right now, I've got to get back to work.

* Actually, 47,704,735 rows for 2012, an average of 130,341 new rows per day.

Census Dotmap

This is exceedingly cool:

Inset from the Census Dotmap showing Chicago, Madison, and Milwaukee

What is this?

This is a map of every person counted by the 2010 US and 2011 Canadian censuses. The map has 341,817,095 dots - one for each person.

Why?

I wanted an image of human settlement patterns unmediated by proxies like city boundaries, arterial roads, state lines, &c. Also, it was an interesting challenge.

Who is responsible for this?

The US and Canadian censuses, mostly. I made the map. I'm Brandon Martin-Anderson. Kieran Huggins came to the rescue with spare server capacity and technical advice once this took off.

Unfortunately, I can't quite pick myself out of the crowd...