The Daily Parker

Politics, Weather, Photography, and the Dog

That seemed to go well...

The deployment, I mean. Everything works, at least on the browsers I've used to test it. I ran the deployment three times in Test first, starting from a copy of the Production database each time, so I was as confident as I could be when I finally ran it against the Production database itself. And, I made sure I can swap everything back to the old version in about 15 minutes.

Also, I snuck away to shoot publicity photos for Spectralia again, same as last year. I'll have some up by the end of the week, after the director has seen them.

Scary software deployment

Jez Humble, who wrote the book on continuous delivery, believes deployments should be boring. I totally agree; it's one of the biggest reasons I like working with Microsoft Azure.

Occasionally, however, deploying software is not at all boring. Today, for example.

Because Microsoft ends support for Windows Server 2008 next week, I've upgraded an old application that I first released to Azure in August 2012. Actually, I did the upgrade back in March to get ahead of the game, and the boring deployment turned horrifying when half of my client's customers couldn't use the application: the OS upgrade broke their Windows XP/IE8 user experience. Seriously.

All of my client's customers have now upgraded to Chrome, IE11, or Firefox, and I've tested the app on all three browsers. Everything works. But now I have to redeploy the upgrade, and I've got a real feeling of being once-bitten.

The hard part, the part that makes this a one-way upgrade, is a significant change to the database schema. All the application's lookup lists, event logging, auditing, and a few other data structures are incompatible with the current Production version. Even if there weren't an OS upgrade involved, the database changes are overdue, so there is no going back after this.

Here are the steps that I will take for this deployment:

  1. Copy current Production database to new MigrationTest database
  2. Upgrade MigrationTest database
  3. Verify Test settings, connection strings, and storage keys
  4. Deploy Web project to Test instance (production slot)
  5. Validate Test instance
  6. Deploy Worker project to Test instance (production slot)
  7. Validate Worker instance
  8. Shut down Production instance
  9. Back up Production database to bacpac
  10. Copy Production database within SQL instance
  11. Upgrade Production database
  12. Verify Production settings, connection strings, and storage keys
  13. Deploy solution to Production instance (staging slot)
  14. Validate Production Web instance
  15. Validate Production Worker instance
  16. VIP swap to Production

Step 1 is already complete. Step 2 will be delayed for a moment while I apply a patch to Visual Studio over my painfully-slow internet connection (thanks, AT&T!). And I expect to be done with all of this in time for Game of Thrones.
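Steps 1 and 10 lean on a handy Azure SQL feature: CREATE DATABASE ... AS COPY OF performs the copy server-side, so there's no bacpac round-trip for the test copies. Here's a minimal sketch; the helper and the database names are illustrative, and in practice you'd execute the statement with plain ADO.NET against the server's master database.

```csharp
using System;

// Steps 1 and 10 both copy a database server-side. Azure SQL supports
// this with CREATE DATABASE ... AS COPY OF; this helper just builds the
// statement for a given source and target.
static string BuildCopyCommand(string sourceDb, string targetDb)
{
    return $"CREATE DATABASE [{targetDb}] AS COPY OF [{sourceDb}];";
}

Console.WriteLine(BuildCopyCommand("Production", "MigrationTest"));
// CREATE DATABASE [MigrationTest] AS COPY OF [Production];
```

The copy is asynchronous; the new database shows up immediately but isn't usable until the copy finishes, which is why the checklist validates each step before moving on.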

How to look up Azure table data by ID

Short answer: You can't. So don't try.

Back in 2007, when I wrote a scheduling application for a (still ongoing!) client, Azure was a frustrating research project at Microsoft. Every bit of data the application stored went into SQL Server tables, including field-level audit records and event logs.

The application migrated to Azure in August 2012, still logging every audit record and event to SQL tables, which are something like 10x more expensive per byte than Azure Table Storage. Recently, I completed an upgrade to the Inner Drive Extensible Architecture™ so it can now use Azure table storage for both.

The old application knew nothing about this. So upgrading the application with the new IDEA bits worked fine for writing to the audit and event logs, but completely broke reading from them.

Here's the code the app uses to display a specific audit record so an administrator can see the field-level details:

// Get the repository from Castle IoC:
var repo = ActiveContainer.Instance.Container.Resolve<IAuditRepository>();

// Get the individual audit record by ID:
var audit = repo.Find(id);

That's great if the audit record uses a database identity key. Unfortunately it does not; it uses an Azure partition and row key combination.
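To make the mismatch concrete, here's a self-contained sketch of compound-key lookup. The class and property names are my own illustrations, not the IDEA's; in the actual Azure SDK the entity derives from TableEntity and retrieval goes through TableOperation.Retrieve with both key halves.

```csharp
using System;
using System.Collections.Generic;

// Table Storage addresses a row by the (PartitionKey, RowKey) pair, not
// by an integer identity. This sketch mimics that with a dictionary
// keyed on the pair. In the real SDK you'd fetch the entity with
// TableOperation.Retrieve<T>(partitionKey, rowKey).
public record AuditRecord(string PartitionKey, string RowKey, string Details);

public class AuditStore
{
    private readonly Dictionary<(string, string), AuditRecord> _rows = new();

    public void Insert(AuditRecord row) =>
        _rows[(row.PartitionKey, row.RowKey)] = row;

    // Both halves of the key are required -- there is no Find(int id).
    public AuditRecord Find(string partitionKey, string rowKey) =>
        _rows.TryGetValue((partitionKey, rowKey), out var row) ? row : null;
}
```

The repository's Find(id) has no way to synthesize the partition and row keys from an integer, which is the whole problem.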

I agonized for a couple of days over how to fake database identities in Azure, write a mapping table, or do some other code- or data-intensive thing. Then this afternoon, epiphany!

The user viewing an audit record is doing it in the context of reviewing a short list of audit headers. She clicks on one of these headers to pop up the detail box, which uses the audit ID like this: https://scheduler.myclient.com/Auditing/AuditDetailViewer/12345.

It turns out, the only time a user cares about audit details is when she has the audit list right in front of her. So the audit ID is irrelevant. It only has to be unique within the context of the user's experience.

Here's the solution. In the Auditing class, which generates the audit list, I do this:

foreach (var item in orderedAudits)
{
	// Temporal cohesion: Add identity to Audit before using
	AddIdentity(item);
	Cache(item);
	CreateAuditRow(placeHolder, isObjectSpecified, item, timeZone);
	// End temporal cohesion
}

The cache uses least-frequently-used scavenging with a capacity of 512 items. (If the one user who cares about auditing ever needs to see more than 512 audit items in one list, I'll coach him not to do that.) Items live in the cache until it gets full, at which time the least-used ones are removed. This lets me do this in the audit detail view's control code:

var audit = Auditing.Find(id);

The Auditing.Find method simply plucks the item from the cache. If it returns null, oops, the audit details are missing or have expired from the cache, sorry. Just rerun the list of audits that you just clicked on.
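The whole scheme can be sketched in a few lines, assuming nothing about the IDEA's internals. The class name, the rolled-together Add, and even the LFU bookkeeping here are illustrative; the real Auditing class separates AddIdentity and Cache as shown above.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Assign each item a synthetic ID, keep items in a least-frequently-used
// cache of fixed capacity, and let Find return null once an item has
// been scavenged.
public class LfuCache<TValue> where TValue : class
{
    private readonly int _capacity;
    private int _nextId;
    private readonly Dictionary<int, (TValue Value, int Hits)> _items = new();

    public LfuCache(int capacity = 512) => _capacity = capacity;

    // AddIdentity + Cache rolled together: returns the new synthetic ID.
    public int Add(TValue value)
    {
        if (_items.Count >= _capacity)
        {
            // Scavenge: drop the least-frequently-used entry.
            var victim = _items.OrderBy(kv => kv.Value.Hits).First().Key;
            _items.Remove(victim);
        }
        var id = ++_nextId;
        _items[id] = (value, 0);
        return id;
    }

    // Null means the item expired from the cache; rerun the audit list.
    public TValue Find(int id)
    {
        if (!_items.TryGetValue(id, out var entry)) return null;
        _items[id] = (entry.Value, entry.Hits + 1);
        return entry.Value;
    }
}
```

A null from Find is the "rerun the list of audits" case; the ID only has to stay valid for as long as the list that produced it is on screen.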

I'm going to use a similar approach to the event log.

Not change I can believe in

Yesterday my trip to work was cold and wet, while on the West Coast it was so warm people in San Francisco were trying to remember if their apartments had air conditioning. (They don't.)

Well, it's no longer quite as hot in San Francisco, but here in Chicago it's still cold and wet: 4°C and—wait, you'll love this—snow.

That's right, past the mid-point of May and only two weeks from the start of meteorological summer, it snowed in Chicago.

March here, July in San Francisco?

Last night the temperature here got down to 5°C, which feels more like early March than mid-May. Meanwhile, in San Francisco, yesterday got up to 33°C, which to them feels like the pit of hell. In fact, even in the hottest part of the year (early October), San Francisco rarely gets that warm. The Tribune explains:

The North American jet stream pattern, a key driver of the country’s weather, has taken on the same incredibly “wavy”—or, as meteorologists say, “meridional”—configuration which has so often dominated the winter and spring. This sort of pattern leads to temperature extremes across the continent.

Pools of unseasonably warm air are in place on each coast while unseasonably cool air is sandwiched between and dominates Chicago and Midwestern weather.

It’s within this slow-moving pool of chilly, unstable (i.e. cloud and precip-generating) air that Chicago resides—a situation likely to continue into Saturday. This is to keep extensive cloudiness and the potential for sporadic showers going over that period of time.

In other words, the forecast for this weekend is continued March with a possibility of April by Monday.

Corruption charges in red light camera scandal

Actually, there are two scandals: first, red light cameras in general, and second, an alleged $2m bribe:

The former City Hall manager who ran Chicago’s red-light camera program was arrested today on federal charges related to the investigation of an alleged $2 million bribery scheme involving the city’s longtime vendor, Redflex Traffic Systems.

A federal complaint filed in U.S. District Court today accused John Bills of taking money and other benefits related to the contract with Redflex. Mayor Rahm Emanuel fired the company amid the bribery scandal.

The Tribune first revealed questions about a questionable relationship between Bills and Redflex in the fall of 2012, triggering a scandal that has shaken the foundation of the company and its Australian parent, Redflex Holdings Ltd., which acknowledged last year that its Chicago program was built on what federal authorities would likely consider a $2 million bribery scheme involving Bills. Six top Redflex officials were jettisoned, and the company has come under scrutiny for its procurement practices across the country.

Now, it's not hard to believe there was some "where's mine?" in a City of Chicago contract, though $2m seems a bit much. Then again, that's nothing next to the $300m in fines the city has racked up using the things.

So, did Mayor Daley know about this? Is he going to be charged?

Another list of things to read

Ten days until I get a couple days off...

Smoke at low-altitude radar facility in Illinois

The FAA facility handling arrivals and departures for Chicago's two main airports shut down earlier today:

The FAA started issuing revised flight departure times to airlines Tuesday afternoon after an approximately two-hour “ground stop” halted all flights to and from Chicago’s two airports because of smoke in an air traffic radar facility serving northeastern Illinois, airline officials said.

The ground stop was ordered as FAA workers were evacuated from the radar facility and operations transferred to the FAA's Chicago Center in Aurora, which usually handles just high-altitude traffic.

The smoke was traced to a faulty ventilation motor and the workers were allowed back into the facility around 1 p.m.

No planes were imperiled by the outage. The Chicago Center facility has no trouble handling arrivals for an hour or two.

Schneier on why the NSA has made us less safe

Security expert Bruce Schneier is not an alarmist, but he is alarmed:

In addition to turning the Internet into a worldwide surveillance platform, the NSA has surreptitiously weakened the products, protocols, and standards we all use to protect ourselves. By doing so, it has destroyed the trust that underlies the Internet. We need that trust back.

By weakening security, we are weakening it against all attackers. By inserting vulnerabilities, we are making everyone vulnerable. The same vulnerabilities used by intelligence agencies to spy on each other are used by criminals to steal your passwords. It is surveillance versus security, and we all rise and fall together.

Security needs to win. The Internet is too important to the world -- and trust is too important to the Internet -- to squander it like this. We'll never get every power in the world to agree not to subvert the parts of the Internet they control, but we can stop subverting the parts we control. Most of the high-tech companies that make the Internet work are US companies, so our influence is disproportionate. And once we stop subverting, we can credibly devote our resources to detecting and preventing subversion by others.

It really is kind of stunning how much damage our intelligence services have done to the security they claim to be protecting. I don't think everyone gets it right now, but the NSA's crippling the Internet will probably be our generation's Mosaddegh.

Snopes on the Million Atari Cartridge Burial legend

Snopes just republished the legend of the E.T. game cartridges in light of the actual burial site being dug up recently. Forgetting for a moment the legend itself, the background story was a description of how Warner management killed Atari:

In 1982, Warner Communications could honestly claim to own a goose that laid golden eggs. Its money-producing fowl was called Atari, a video game company it purchased for $28 million in 1976 which had since burgeoned into a $2 billion concern. In the early 1980s Atari owned 80% of the video game market, it accounted for 70% of Warner's operating profits, and in the fourth quarter of 1982 the Wall Street "whisper number" concerning Atari's expected earnings predicted a 50% increase over the previous year.

The goose died at 3:04 P.M. EST on 7 December 1982, when Atari reported only a 10% to 15% increase in expected earnings, not the 50% figure so many people had been counting on. By the end of the following day Warner stock had plummeted to two-thirds of its previous value, and Warner closed out the quarter with its profits down a mind-boggling 56%. (Even worse, a minor scandal erupted when it was revealed that Atari's president and CEO had sold 5,000 shares of Warner stock a mere 23 minutes before announcing Atari's disappointing sales figures.) Atari racked up over half a billion dollars ($536 million) in losses in 1983, and by the end of 1984 Warner had sold the company.

What accounted for the sudden death of Warner's prized goose? A number of interrelated factors brought about its fatal illness...

The factors Snopes summarized highlight how acquisitions by incompatible companies can go wrong, among other things.