The Daily Parker

Politics, Weather, Photography, and the Dog

Back to normal in Illinois

With former governor George Ryan's release from prison this morning, Illinois has finally returned to the situation of having fewer former governors in prison than out of it. In an especially nice touch, former governor Jim Thompson is Ryan's attorney.

I guess Dan Walker and Jim Edgar are both still alive, too, so the current count is: 1 incumbent, non-convicted governor; 2 former, non-convicted governors; 2 former, convicted governors; and 1 former governor still in jail. There's a nice symmetry there, yes?

And now, mid-April

Chicago's normal high temperature for April 17th is 16°C, which by strange coincidence is the new record high for January 29th:

The warm front associated with the strong low pressure system passed through the Chicago area between 2 and 3 AM on its way north and at 6 AM is oriented east-west along the Illinois-Wisconsin state line. South of the front, south-to-southwest winds of 24 to 45 km/h and temperatures in the upper 10s°C prevail; Wheeling actually reported 15.6°C at 6 AM. North of the front, through southern Wisconsin and farther north, winds were east to southeast and temperatures near freezing. Milwaukee at 6 AM was 3°C.


The 18°C high projected for Chicago Tuesday easily replaces the day's previous 99-year record high of 15°C set in 1914 and is a reading just 1.1°C shy of the city's all-time January record high temp of 19°C set back on Jan 25, 1950. Only 5 of the 34 January 60s [Fahrenheit] on the books have made it to 18°C.

Temps in the 60s [Fahrenheit] in January are incredibly rare—a fact which can't be overstated! In fact, just 21 of 143 Januarys since records here began in 1871 have produced 60s.

The city's last 16°C January temperature took place 5 years ago when the mercury hit 18°C on Jan 7, 2008.

Ordinarily in the middle of winter in Chicago it would be customary at this point to say "It was last this warm in..." and throw out a date from last summer. But no, this is the new world of climate change, so I can say: "It was last this warm December 3rd."

Of course, it can't last. Here's the temperature forecast starting at noon today (click for full size):

January to April to January in three easy steps...

Azure table partition schemes

I'm sitting at my remote office working on a conundrum: how to balance human usability against good software design.

The problem: how can I create an Azure table partitioning scheme that uses Azure efficiently while still letting the user (me) troubleshoot problems with the feature in question efficiently? This is a direct consequence of the issues I worked on this morning.

The feature is the component of the Weather Now parsing system that stores raw weather data from NOAA temporarily. By "temporarily" I mean: until I delete it. Keeping the raw data will let me figure out why problems occur and will let the application apply new features to old data in the future.

NOAA publishes "cycle files" about every 3-6 minutes. The cycle uses a predictable sequence of roughly 750 file names that repeats about every 4 days: the files go from file000 to file750, then back to file000. Sometimes, however, NOAA restarts the sequence at 0, skips files, or just crashes entirely, so the feature has to treat the file names as effectively random. That said, the files have definite publication times, and they generally contain weather data gathered within a short time before NOAA publishes them, a pattern Weather Now can use to optimize itself.
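Since the post describes this handling only in prose, here's a minimal Python sketch of the "treat the names as random" approach. The class and function names are my own invention for illustration, not Weather Now's actual code:

```python
from datetime import datetime, timedelta

def cycle_file_names():
    """The nominal NOAA cycle sequence: file000 through file750."""
    return [f"file{n:03d}" for n in range(751)]

class CycleTracker:
    """Treat cycle file names as effectively random: a name counts as new
    only if its publication time is later than the last copy seen under
    that name, which tolerates restarts, skipped files, and crashes."""

    def __init__(self):
        self.last_seen = {}  # file name -> last publication time processed

    def is_new(self, name: str, published: datetime) -> bool:
        prior = self.last_seen.get(name)
        if prior is not None and published <= prior:
            return False  # an old or duplicate copy; skip it
        self.last_seen[name] = published
        return True
```

Keying on publication time rather than sequence position means a restart at file000 mid-cycle just looks like another new file, which matches the "handle them as random" requirement.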

You can have practically unlimited Azure tables in a storage account; I would imagine the number is close to the Int32 maximum value of 2.1 billion. Each table can have billions of partition keys as well. Searching on a combination of Azure table name and partition key takes the same length of time no matter how many tables are in the storage account or how many partition keys each table has. Under the hood, Azure manages the indexing so efficiently that network latency will be the bigger problem in all but a few edge cases.

For Weather Now, my first thought was to create a new table for each month's NOAA files and partition the table by day. So the weather-parsing process would put the metadata for a file downloaded right now in the table "noaa201301" and use the partition key "20130127". That would give me about 5,700 rows in each table and about 190 rows in each partition.
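As a sketch of that first scheme (in Python purely for illustration; the real system is presumably .NET):

```python
from datetime import datetime

def monthly_scheme(downloaded: datetime) -> tuple[str, str]:
    """Monthly-table scheme: one table per month, partitioned by day.

    A file downloaded on 27 Jan 2013 lands in table 'noaa201301'
    under partition key '20130127'.
    """
    return f"noaa{downloaded:%Y%m}", f"{downloaded:%Y%m%d}"
```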

I'm reconsidering. Given that it's taken 11 years to change the way Weather Now retrieves and stores weather data, that scheme would give me 132 tables and 4,017 partitions, each of them fairly small. Azure wouldn't care, but over time it would clutter up the application's storage account. (The account will be cluttered enough as it is, with the millions of individual weather reports tabled by station and partitioned by month.)

On reflection, then, I'm going to create a new table of metadata each year, and partition by month. An Azure table with 69,000 rows (the number of NOAA files produced each year) isn't appreciably less efficient than one with 69 rows or 69 million, as it turns out. It will still partition the data as efficiently as the partition key suggests. But cutting the partitions down 30-fold could make a big difference in efficiency.
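A sketch of the yearly scheme, plus the arithmetic behind the counts above. The "noaa2013" table-name format is my guess; the post only specifies yearly tables partitioned by month:

```python
from datetime import datetime

def yearly_scheme(downloaded: datetime) -> tuple[str, str]:
    """Yearly-table scheme: one table per year, partitioned by month."""
    return f"noaa{downloaded:%Y}", f"{downloaded:%Y%m}"

# Rough counts over the application's 11-year history (ignoring leap days):
years = 11
old_tables, old_partitions = years * 12, years * 365   # monthly tables, daily partitions
new_tables, new_partitions = years, years * 12         # yearly tables, monthly partitions
reduction = old_partitions // new_partitions           # roughly the 30-fold cut
```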

I'm open to contrary evidence. In fact, I'd love to find some. But given the frequency of data reads (one every 5 minutes or so), and the thousands of tables already in the application's storage account, I think this is the best way to go.

Nerdy but possibly welcome update

Even though we've just gotten our first snowfall, and today has brought us snow, freezing rain, sleet, and icy roads, there is good news.

January 27th is when things officially start looking brighter in Chicago every year. Tonight, for the first time in almost two months, the sun sets at 5pm. Then things start to become noticeably brighter: a 7am sunrise next Monday, a 5:30pm sunset two weeks after that, then a 6:30am sunrise less than a week later.

Yes, this is dorky, but trust me: you'll notice it now.

Why this last Azure move is taking so long

The Inner Drive Technology International Data Center continues to whir away (and use electricity), despite my best efforts to shut it down by moving everything to Microsoft Windows Azure.

Most of the delay in finishing the move has nothing to do with the technology. Simply put, my real job has taken a lot of time this month as we've worked toward launching a new application tomorrow. Between the 145 hours spent on that project this month and the 38 hours spent helping with other projects, the 22 hours I've managed to squeeze out for Weather Now have left me falling behind on the Oscar nominees.

For those just joining our story, Weather Now remains the last living application in the IDTIDC. This application shows real-time aviation weather for almost every airport in the world. I wrote the first version in 1998, moved it to its own domain in 2000, and published the last significant update in 2010.

For most of its life, the application benefited from having practically unlimited hardware and system software to run on. As a Microsoft partner, I've had access to Windows Server, SQL Server, and other goodies for my entire professional life. Moving to Azure changes the calculus radically.

Weather Now runs on Microsoft SQL Server 2012 Enterprise with essentially limitless disk space. In the past 14 years, the application has quietly gathered 50 GB of data, merrily occupying a physical partition scheme that takes up a good bit of a RAID-5 volume. Creating a similar architecture in Azure exceeds my budget a bit: a single medium VM to run the application and its GetWeather component plus a 50 GB SQL Database would cost about $250 per month.

Fortunately, I don't have to do that. Most of the data, you see, hardly ever gets used.

Weather Now usually has around 4,500 current weather observations and 165,000 observations from the last 24 hours. Since each row is small, and the index is positively tiny (only the station ID and observation times are indexed), the current table uses about 1.5 MB of space and the last-24 table uses 43 MB. That doesn't even break a sweat on Azure SQL Database.
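For what it's worth, the per-row sizes implied by those figures are consistent with each other:

```python
# Per-row sizes implied by the table sizes quoted above.
current_rows, current_mb = 4_500, 1.5
last24_rows, last24_mb = 165_000, 43.0

current_bytes_per_row = current_mb * 1024**2 / current_rows   # about 350 bytes
last24_bytes_per_row = last24_mb * 1024**2 / last24_rows      # about 275 bytes
```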

No, it's the Archive table that grows like the Beast from Below. That one has all of the past observations since the site started. In some cases I've pruned the table, but basically, it has one row per observation per station. For an average station like O'Hare, that means about 10,500 per year. For a chatty, automated station that spits out a report every 20 minutes, it stores about 27,000 per year. For 2012 alone, that works out to about 47 million* rows—growing at 4 million per month.
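Checking that arithmetic against the footnote:

```python
HOURS_PER_YEAR = 365 * 24        # 8,760

# One report per hour, plus occasional special reports, puts a typical
# station (like O'Hare) near the ~10,500/year quoted above.
typical_station = HOURS_PER_YEAR

# A report every 20 minutes is 3 per hour.
chatty_station = HOURS_PER_YEAR * 3

rows_2012 = 47_704_735             # from the footnote; 2012 was a leap year
avg_per_day = rows_2012 / 366
monthly_growth = avg_per_day * 30  # close to the 4 million/month in the text
```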

What to do? Well, stop storing it was my first thought. It hardly ever gets used, partially because the UI doesn't have a way to pull out historical data.

On the other hand, I've frequently wanted to illustrate blog entries with specific weather reports that have permanent links. And this problem, such as it is, does not have a difficult solution.

So, among its other features, Weather Now 4.0 will store archival weather reports in Azure table storage. It won't have the full 50 GB of material initially, possibly ever; but even if it did, it would only cost about $5 per month to store it. And I've hit on a partitioning scheme that will, eventually, make finding archival data really quick and easy, no matter how much of it there is.
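The post doesn't spell that scheme out, but given the station-tabled, month-partitioned layout mentioned earlier, it might look something like this. The key formats are entirely my guess, not Weather Now's real schema:

```python
from datetime import datetime

def archive_keys(station: str, observed: datetime) -> tuple[str, str, str]:
    """One table per station, partitioned by month, with the observation
    time as the row key so reports within a month sort chronologically
    and any single report gets a stable, linkable address."""
    return station.upper(), f"{observed:%Y%m}", f"{observed:%Y%m%dT%H%M}"
```

A point query on (table, PartitionKey, RowKey) is the cheapest operation Azure table storage offers, which is what makes "finding archival data really quick and easy" plausible at any volume.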

The conclusion should be obvious: If you start looking at things the Azure way, using Azure can save you tons of money. My current estimate of the monthly cost to run Weather Now, assuming current visitor levels and acceptable performance on "very small" Cloud Services instances, is $40 per month. If it eventually amasses 50 GB of archives, it will cost...$42 per month. And if I get thousands of visitors that require upgrading to a "small" instance, I'll start selling subscriptions, but I won't have to buy new equipment because it's Azure.

More on this later. Right now, I've got to get back to work.

* Actually, 47,704,735 rows for 2012, an average of 130,341 new rows per day.

Why the GOP is losing votes

I've come across a number of stories over the last few days about the Republican Party's efforts to win elections. GOP chair Reince Priebus wonders where they go from here. Legislators in Mississippi apparently don't understand federalism. Republican legislatures gerrymandered every state they controlled in 2011—nothing new there—but now they want to get more Electoral College votes in swing states by going to proportional voting. Virginia's legislature passed a bill that would have thrown 9 of 13 votes to Romney in the last election, even though Obama won the popular vote state-wide, and did it by voting while the senior Democratic representative—a bona fide civil rights hero—was at the Inauguration on Monday. (They followed the vote by recognizing the contributions of Stonewall Jackson to American democracy.) And finally, Senate Minority Leader Mitch McConnell sent an email to supporters after the watered-down filibuster agreement passed gloating about beating liberals.

Actually, McConnell's email neatly sums up the broader pattern to all these activities: "You see, they had been pushing a plan to end the filibuster, allowing Harry Reid and the Obama Democrats to pass their agenda with a simple majority. Well, Mitch McConnell stood strong and stopped that scheme dead in its tracks."

Yes. That's right. The Republicans have declared war on majority rule, and for good reason. They're no longer a majority.

All of these events, and the shenanigans before the election in which state GOP leaders openly talked of denying the vote to more-urban, more-Democratic voters, point to a party unable to win on the merits, and determined to hold on to whatever power they can by any means at their disposal. What they don't seem to realize is that these tactics alienate people in the center who might vote Republican if they weren't a bunch of nutters.

Look at the UK's Conservatives: faced with declining votes and a strong government, in opposition they changed their policies to win elections. In just one concrete example, the Tories this week published a bill for full marriage equality, something the Republicans over here could not possibly countenance given their current membership.

I think the GOP will hold on to the House in 2014, but lose a Senate seat or two. More states are majority-Democratic than majority-Republican, and the Senate represents the states. Long term, though, I think most Americans have had enough. And every day, the old white men who make up the Republican party become a smaller minority.

They won't go quietly. We can be certain of that.


Well, Chicago finally found out how long the longest stretch in recorded history without a 25 mm snowfall was: 335 days. The official tally through 6 am was 28 mm, which looked like this in Lincoln Park:

It really won't last. The forecast calls for 11°C by Tuesday.

I miss flying

Work, work, work, and more than an hour each way to the airport, and it turns out I haven't flown in three years. Time to renew my medical certificate and get back in the air.

I miss this, this, and this.

Oh, and this.

Daily Parker bait

Maps? Check. Dogs? Check. New York? Check. I give you Dogs of NYC:

If you own a dog in New York City, odds are it’s a mutt named Max.

The city’s dog licensing records show that out of almost 100,000 registered dogs, this is the most common breed and name in town. WNYC obtained the complete list from the Department of Health and Mental Hygiene, which runs the dog licensing program.

The first thing you notice is the names. The most popular ones in the city hew pretty close to the most popular names across all English-speaking countries: Max, Bella, Lucky, etc. But this is New York, so there have to be some named Jeter (40 dogs) and Carmelo (7). In a town also known for its fashion, that explains the prevalence of dogs named Chanel (44) and Dolce (39). There are 83 dogs named Gucci. We've come a long way from Rover.

And if I want, I can get a custom T-shirt that tells everyone "Parker is a mixed-breed dog, like the 23,185 registered in New York City."