The Daily Parker

Politics, Weather, Photography, and the Dog

Quick link roundup

I haven't any time to write today, but I did want to call attention to these:

Back to the mines...

My poor, dead SSD

My laptop's solid-state drive died this afternoon. It had a long, long life (23 months—almost double what they usually get). I am thankful to the departed SSD for that, and:

  • for dying after the client presentation, not before;
  • for dying on the first day of a three-week project, not the last; and
  • for living 23 months, which is about as spectacular as a dog living 23 years.

I am now rebuilding my laptop on a larger but slightly slower SSD, which I hope lasts nearly as long.

How the Cloud helps people sleep

Last night, around 11:30pm, the power went out in my apartment building and the ones on either side. I know this because the five UPS units around my place all started screaming immediately. There are enough of them to give me about 10 minutes to cleanly shut down the servers, which I did, but not before texting the local power company to report it. They had it on again at 1:15am, just after I'd fallen asleep. I finally got to bed around 2 after bringing all the servers back online, rebooting my desktop computer, and checking to make sure no disk drives died horribly in the outage.

But unlike the last time I lost power, this time I did not lose email, issue tracking, this blog, everyone else's site I'm hosting, or the bulk of my active source control repositories. That's because they're all in the cloud now. (I'm still setting up Mercurial repositories on my Azure VM, but I had moved all of the really important ones to Mercurial earlier in the evening.)

So, really, only Weather Now remains in the Inner Drive Technology Worldwide Data Center, and after last night's events, I am even more keen to get it up to the Azure VM. Then, with only some routers and my domain controller running on a UPS that can go four hours with that load, a power outage will have less chance of waking me up in the middle of the night.

Azure Web Sites adds a middle option

My latest 10th Magnitude blog post is up, in which I dig into Microsoft's changes to Azure Web Sites announced Monday. The biggest change is that you can now point your own domain names at Azure Web Sites, which solves a critical failing that has dogged the product since its June release.

Since this Daily Parker post was embargoed for a day while my 10th Magnitude post got cleared with management, I've played with the new Shared tier some more. I've come to a few conclusions:

  • It might work for a site like Inner Drive's brochure, except for the administrative tools lurking on the site that need SSL. Azure Web Sites still have no way to configure secure (https://) access.
  • They still don't expose the Azure role instance to .NET applications, making it difficult to use tools like the Inner Drive Extensible Architecture™ to access Azure table storage. The IDEA™ checks whether the Azure role instance exists (using RoleEnvironment.IsAvailable) before attempting to access Azure-specific things like tables and blobs; see the sketch after this list.
  • The cost savings aren't exactly staggering. An "extra small" Web Role instance costs about $15 per month, and a Shared-level Web Site costs about $10. So moving to a Shared Web Site won't actually save much money.
  • Deploying to Web Sites, however, is a lot easier. You can make a change and upload it in seconds, while publishing to a Web Role takes about 15 minutes in the best circumstances. And since Web Sites expose FTP endpoints, you can even publish sites using Beyond Compare or your favorite FTP client.
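
For what it's worth, here's roughly what that guard looks like. This is a sketch of the pattern, not the actual IDEA™ code; the class name, setting name, and fallback behavior are illustrative, though RoleEnvironment.IsAvailable and RoleEnvironment.GetConfigurationSettingValue are the real ServiceRuntime calls.

    using Microsoft.WindowsAzure.ServiceRuntime;

    // Illustrative sketch, not the actual IDEA(tm) code: check for the Azure role
    // environment before touching anything that only exists inside a Web/Worker Role.
    public static class StorageSettings
    {
        public static string GetSetting(string settingName)
        {
            if (RoleEnvironment.IsAvailable)
            {
                // Running inside an Azure role (or the compute emulator), so the
                // role configuration (.cscfg) is the authoritative source.
                return RoleEnvironment.GetConfigurationSettingValue(settingName);
            }

            // Not inside a role -- an Azure Web Site, say, or plain IIS -- so fall
            // back to ordinary .NET configuration.
            return System.Configuration.ConfigurationManager.AppSettings[settingName];
        }
    }

On an Azure Web Site there's no role environment at all, so that check comes back false (assuming the ServiceRuntime assembly even gets deployed with the site), and anything that assumes role-hosted table or blob access has to degrade gracefully.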

I did upgrade one old site from Free to Shared to move its domain name off my VM. (The VM hosted a simple page that redirected users to the site's azurewebsites.net address.) I'll also be moving Hired Wrist in the next few days, as the overhead of running it on a VM doesn't make sense to me.

In other news, I've decided to go with Mercurial for source control. I'm sad to give up the tight integration with Visual Studio, but happy to gain DVCS capabilities and an awesomely simple way of ensuring that my source code stays under my control. I did look at Fog Creek's Kiln, but for one person who's comfortable mucking about inside a VM, it didn't seem worth the cost ($299).

Chicago's digital infrastructure

Crain's Chicago Business yesterday ran the first part in a series about How Chicago became one of the nation's most digital cities. Did you know we have the largest datacenter in the world here? True:

Inside the former R.R. Donnelley & Sons Co. printing plant on East Cermak Road, next to McCormick Place, is the world's largest, most-connected Internet data center, according to industry website Data Center Knowledge. It's where more than 200 carriers connect their networks to the rest of the world, home to many big Internet service providers and where the world's major financial exchanges connect to one another and to trading desks. "It's where the Internet happens," Cleversafe's Mr. Gladwin says.

Apparently Chicago also hosts the fifth-largest datacenter in the world, Microsoft's North Central Azure hub in Northlake. (Microsoft's Azure centers are the 5th-, 6th-, 9th-, and 10th-largest in the world, according to Data Center Knowledge.) And then there's Chicago's excellent fiber:

If all of the publicly available fiber coming in and out of the Chicago area were bundled together, it would be able to transmit about 8 terabits per second, according to Washington-based research firm TeleGeography. (A terabit per second is the equivalent of every person on the planet sending a Twitter message per second.)

New York would be capable of 12.3 terabits, and Washington 11.2 terabits. Los Angeles and San Francisco are close behind Chicago at 7.9 and 7.8 terabits, respectively. New York is the primary gateway to Europe, and Washington is the control center of the world's largest military and one of the main connection points of the Internet.

Chicago benefits from its midcontinent location and the presence of the financial markets. "The fiber optic lines that go from New York and New Jersey to Chicago are second to none," says Terrence Duffy, executive chairman of CME Group Inc., who says he carefully considered the city's infrastructure when the futures and commodities exchange contemplated moving its headquarters out of state last year because of tax issues. "It benefits us to be located where we're at."

Now, if I can just get good fiber to my house...

Virtuous circle of productivity

It seems that the more I have to do, the more I'm able to do. In other words, when I haven't got a lot of assignments, I tend to veg out more. Right now I'm on a two-week development cycle, with an old client that predates my current job anxious for some bug fixes. Oddly, the old client tends to get his bug fixes when I have more to do at my regular gig.

Of course, blogging might suffer a bit. In fact I just submitted a draft blog entry for the 10th Magnitude Developer Blog that should hit tomorrow sometime. Until then, it's embargoed (which I hate because it's a timely and useful topic), and I have a feature to finish.

I guess all of this means, with apologies to René Magritte, ceci n'est pas un blog post.

The Azure migration hits a snag with source control

Remember how I've spent the last three months moving stuff into the Cloud? And how, as of three weeks ago, I only had two more services to move? I saved the best for last, and now I'm not sure I can move them both without some major changes.

Let me explain the economics of this endeavor, and why it's now more urgent that I finish the migration. And then, as a bonus, I'll whinge a bit about why one of the services might have to go away completely.

I currently have a DSL line and a 20-amp power circuit going into my little datacenter. The DSL ostensibly costs $50 per month, but really it's $150 per month because it comes as an adjunct to my landline. I don't need a landline, and haven't for years; I've only kept it because getting DSL without a landline would cost—you guessed it—$150 per month.

The datacenter has six computers in it, two of which are now indefinitely offline thanks to the previous migrations to Azure. Each server uses between $10 and $20 of electricity per month; turning two off in July cut my electricity bill by about $30 a month. Of the four remaining servers, I need to keep two on, but fortunately those two (a domain controller and a network-attached storage, or NAS, box) are the most efficient. The two hogs, which together use about $40 of electricity every month, are my Web and database servers. I get to turn them off as soon as the last two services get migrated.

So that's $190 per month that goes away once I finish these migrations, out of the $220-230 per month the whole operation cost three months ago (or $280-300 in the summer, when I have to run A/C all the time to keep it cool). I've already brought Azure services online, including a small virtual machine, and I signed up for Outlook Online, too. Together, my Azure and Office 365 services, once everything is moved, should cost about $120-130 per month, which stays exactly the same during the summer, because Microsoft provides the air conditioning.

The new urgency comes from my free 90-day Azure trial expiring last week. Until then, all my Azure services have been free. Now, I'm paying for them. The faster I finish this migration, the faster I get to save over $100 per month ($180 in the summer!) in IT expenses—and have much more reliable services that don't collapse every time AT&T or Commonwealth Edison has a hiccup in my neighborhood.
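
Just to put the arithmetic in one place, here's the back-of-the-envelope version in C#, using the round figures above (approximations, not my actual bills):

    // Approximate monthly figures from the paragraphs above (winter rates).
    class MigrationMath
    {
        static void Main()
        {
            var dslAndLandline = 150;  // goes away once nothing at home needs it
            var serverPower    = 40;   // the Web and database server "hogs"
            var goingAway      = dslAndLandline + serverPower;   // about $190/month

            var oldSetup     = 225;    // midpoint of $220-230 (winter); $280-300 in summer
            var azureAndO365 = 125;    // midpoint of the $120-130 estimate, year-round

            System.Console.WriteLine(goingAway);                 // 190
            System.Console.WriteLine(oldSetup - azureAndO365);   // roughly 100 saved per month
        }
    }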

Today, in the home stretch with only Vault and Weather Now left to move, it turns out I might have to give up on Vault completely. Vault requires integration between the Web and database servers that's only possible if they're running on the same virtual network or virtual machine.

I want to keep using Vault because it has my entire source control history on it. This includes all the changes to all the software I've written since 1998, and in fact, some of the only copies of programs I wrote back then. I don't want to lose any of this data.

Unfortunately, Vault's architecture leaves me with only three realistic options if I want to keep using it:

  • Keep the Web and database servers running and keep the DSL up, obviating the whole migration effort;
  • Move the database and Web services to the domain controller, allowing me to turn the servers off, which still leaves me with a $155 per month DSL and landline bill (and puts a domain controller on the Internet!); or
  • Upgrade my Azure VM to Medium, doubling its cost (i.e., about $60 more per month), then install SQL Server and Vault on it.

None of these options really works for me. The third is the least worst, at least from a cost perspective, but it also puts a naked SQL Server on the Internet. With, oh yeah, my entire source control history on it.

So suddenly, I'm considering a totally radical option, one that solves the cost and access problems at the expense of convenient access to my source history: switching to a new source control system. I say "convenient access" because even after this migration, I have no plans to throw away the servers or delete any databases. Plus, it turns out there are tools available to migrate out of Vault. I'll evaluate a few options over the next two weeks, and then do what I can to migrate before the end of September.

It also looks like SourceGear may be re-evaluating Vault (as evidenced by a developer blog that hasn't changed in over a year), possibly for many of these reasons. Vault was developed as a replacement for the "source destruction system" Microsoft Visual SourceSafe, and it fulfilled that mandate admirably. But with the incredible drop in cloud computing prices over the past two years, it may have lived long enough already.

As for the final service to migrate, Weather Now: I know how to move it, I just haven't forced myself to do it yet.

I wish stuff just worked

Despite my enthusiasm for Microsoft Windows Azure, in some ways it suffers from the same problem all Microsoft version 1 products have: incomplete debugging tools.

I've spent the last three hours trying to add an SSL certificate to an existing Azure Web application. In previous attempts with different applications, this has taken me about 30 minutes, start to finish.

Right now, however, the site won't launch at all in my Azure emulator, presenting a generic "Internal server error - 500" when I try to start the application. The emulator isn't hitting any of my code, nor is it logging anything to the Windows System or Application logs. So I have no idea why it's failing.

I've checked the code into source control and built it on another machine, where it had exactly the same problem. So I know it's something under source control. I just don't know what.

I hate very little in this world, but lazy developers who fail to provide debugging information bring me near to violence. A simple error stack would probably lead me to the answer in seconds.

Update: The problem was in the web.config file.

Earlier, I copied a connection string element from a transformation file into the master web.config file, but I forgot to remove the transformation attributes xdt:Transform="Replace" and xdt:Locator="Match(name)". This prevented the IIS emulator from parsing the configuration file, which caused the 500 error.
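
For reference, the broken element looked something like this (the connection string name and value here are illustrative, not my actual ones). The xdt:* attributes belong only in the Web.Debug.config/Web.Release.config transform files, which declare the xdt namespace prefix; left behind in the master web.config, which never declares that prefix, they make the XML invalid, hence the parse failure:

    <!-- Wrong: transform attributes left behind in the master web.config. -->
    <connectionStrings>
      <add name="DefaultConnection"
           connectionString="..."
           providerName="System.Data.SqlClient"
           xdt:Transform="Replace"
           xdt:Locator="Match(name)" />
    </connectionStrings>

    <!-- Fixed: a plain element; the xdt attributes stay in the transform file. -->
    <connectionStrings>
      <add name="DefaultConnection"
           connectionString="..."
           providerName="System.Data.SqlClient" />
    </connectionStrings>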

I must reiterate, however, that some lazy developer neglected to provide this simple piece of debugging information, and my afternoon was wasted as a result.

It reminds me of a scene in Terry Pratchett and Neil Gaiman's Good Omens (one of the funniest books ever written). Three demons are comparing notes on how they have worked corruption on the souls of men. The first two have spent years tempting a priest and corrupting a politician, respectively. Then it's Crowley's turn:

"I tied up every portable telephone system in Central London for forty-five minutes at lunchtime," he said.

"Yes?" said Hastur. "And then what?"

"Look, it wasn't easy," said Crowley.

"That's all?" said Ligur.

"Look, people—"

"And exactly what has that done to secure souls for our master?" said Hastur.

Crowley pulled himself together.

What could he tell them? That twenty thousand people got bloody furious? That you could hear the arteries clanging shut all around the city? And that then they went back and took it out on their secretaries or traffic wardens or whatever, and they took it out on other people? In all kinds of vindictive little ways which, and here was the good bit, they thought up themselves. The pass-along effects were incalculable. Thousands and thousands of souls all got a faint patina of tarnish, and you hardly have to lift a finger.

Somehow, debugging the Azure emulator made me think of Crowley, who no doubt helped Microsoft write the thing.

How Google builds its maps

This month's Atlantic explains:

"So you want to make a map," [former NASA engineer Michael] Weiss-Malik tells me as we sit down in front of a massive monitor. "There are a couple of steps. You acquire data through partners. You do a bunch of engineering on that data to get it into the right format and conflate it with other sources of data, and then you do a bunch of operations, which is what this tool is about, to hand massage the data. And out the other end pops something that is higher quality than the sum of its parts."

The sheer amount of human effort that goes into Google's maps is just mind-boggling. Every road that you see slightly askew in the top image has been hand-massaged by a human. The most telling moment for me came when we looked at a couple of the several thousand user reports of problems with Google Maps that come in every day. The Geo team tries to address the majority of fixable problems within minutes. One complaint reported that Google did not show a new roundabout that had been built in a rural part of the country. The satellite imagery did not show the change, but a Street View car had recently driven down the street and its tracks showed the new road perfectly.

I've always been a map geek (which drove my Weather Now demo/application). The idea that Google will have a complete digital map of the entire world, and will presumably continue to maintain it over the next several decades, warms my geeky heart. I wish some of this data had existed 50 years ago—or, alternatively, that Google could integrate some of the existing photos and maps from earlier eras.

More Google Earth imagery released

Google just released high-resolution aerial photos of another batch of cities:

Improving the availability of more high quality imagery is one of the many ways we’re continuing to bring you the most comprehensive and accurate maps of the world. In this month’s update, you’ll find another extensive refresh to our high resolution aerial and satellite imagery (viewable in both Google Maps and Google Earth), as well as new 45 degree imagery in Google Maps spanning 30 new cities.

Google Maps and Earth now feature updated aerial imagery for more than 20 locations, and updated satellite imagery for more than 60 regions. Here are a few interesting locations included in our latest release.

Below is imagery of Mecca, Saudi Arabia, where each year more than 15 million Muslims visit this important religious site. Here you can see Abraj Al Bait, one of the world's largest clock towers, visible even from space!

Pretty soon they'll have photos of every square meter of the planet—at 10-cm resolution. I find it both really cool and really creepy. As long as they don't have near-real-time photos...