The Daily Parker

Politics, Weather, Photography, and the Dog

Taking an Azure shortcut

I hope to finish moving my websites into the cloud by the end of the year, including a ground-up rewrite of Weather Now. Meanwhile, I've decided to try moving that site and three others to an Azure Virtual Machine rather than trying to fit them into Azure Cloud Services.

For those of you just tuning in, Azure Cloud Services lets you run applications in roles that scale easily if the application grows. A virtual machine is like a standalone server, but it's actually running inside some other server. A really powerful computer can host a dozen small virtual machines, allocating space and computing power between them as necessary. You can also take a virtual machine offline, fold it up, and put it in your pocket—literally, as there are thumb drives easily as big as small VMs.

This is called infrastructure as a service (IaaS); putting applications into cloud services without bothering to set up a VM is called platform as a service (PaaS).

IaaS offers few advantages over PaaS, and it has some real disadvantages. The principal one is that VMs behave like any other computers, so you have to care for them almost as if they were pieces of hardware on your own server rack. You just don't have to worry about licensing Windows or hoping the electricity stays on. Also, VMs are expensive. Instead of paying around $15 per month for a web role, I'll wind up paying about $80 per month for the VM and its associated storage, data transfers, and backup space. And this is for a small instance, with a 1.6 GHz processor and 2 GB of RAM. VMs go up to 8-core, 16 GB behemoths that cost over $500 per month.

On the other hand, my server rack costs easily $100 per month to operate, not counting licenses, certificates, me tearing my hair out when the power fails or my DSL goes down, and having to keep my living room (er, the Inner Drive Technology Worldwide Data Center) below 27°C year-round.

So it's not nearly as expensive as rack space would be, but it's less economical than PaaS. Unfortunately, my four most important web applications have special needs that make them difficult, and in two cases impossible, to port to PaaS:

  • The Daily Parker, this blog, which runs on the open-source dasBlog platform. I estimate that porting this blog to PaaS will take about 12 hours of work, and I have lots of other (paid) work ahead of it. In principle, I need to change its storage model to use Azure blobs instead of the local file system, which doesn't work the same way in Azure web roles as it does on a VM. (There's a sketch of that kind of change after this list.)
  • Weather Now, which is overdue for a ground-up rewrite, and uses a lot of space. Porting the application will take about 12 hours, plus another 12 hours to port the GetWeather application (which keeps the site supplied with weather data) to an Azure worker role. That's time I can better spend rewriting it. Moving it to a VM shouldn't take more than an hour or two.
  • My SourceGear Vault source code control system. Since I don't own the source code, I simply can't port it. Plus, it uses a couple of worker processes on the server, which I also won't be able to port.
  • My FogBugz issue tracking system. Same problem: it's not my software, and it uses a couple of worker processes. I can either install it on a server from a commercial installation package, or I can sign up for FogBugz On Demand for $25 per month and lose my Vault integration, which lets me track all of my issues back to actual pieces of code.
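To give a flavor of the dasBlog change, here's a minimal sketch of swapping file-system reads and writes for blob storage calls using the Windows Azure StorageClient library. The class and method names and the "blog-content" container are placeholders of mine, not dasBlog's actual storage code.

    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.StorageClient;

    public static class BlogContentStore
    {
        // Hypothetical helper: writes a content file to blob storage instead of
        // the local file system (roughly replacing File.WriteAllText).
        public static void SaveContentFile(string connectionString, string fileName, string xml)
        {
            var account = CloudStorageAccount.Parse(connectionString);
            var container = account.CreateCloudBlobClient()
                                   .GetContainerReference("blog-content");
            container.CreateIfNotExist();   // no-op if it exists; CreateIfNotExists() in later SDKs

            container.GetBlobReference(fileName).UploadText(xml);
        }

        // Hypothetical helper: reads a content file back (roughly replacing File.ReadAllText).
        public static string LoadContentFile(string connectionString, string fileName)
        {
            var account = CloudStorageAccount.Parse(connectionString);
            return account.CreateCloudBlobClient()
                          .GetContainerReference("blog-content")
                          .GetBlobReference(fileName)
                          .DownloadText();
        }
    }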

So watch this blog. In a couple of days, it's liable to migrate to an Azure VM.

Certified, again, and just as happy as the last time

Long-time readers will know how I feel about Microsoft certification exams. When it came time for 10th Magnitude to renew its Microsoft Partner designation, which meant all of us had to take these tests again, I was not happy.

So, against my will, I took exam 70-583 ("Designing and Developing Windows Azure Applications") and passed it. I am once again a Microsoft Certified Professional.

Fwee.

Houses of cards + breeze

The brokerage house Evercore doesn't believe Groupon. No one else does either:

The brokerage said Groupon Goods, the company's consumer products category, is increasingly becoming the merchant of record - the owner of goods being sold or the first-party seller.

As first-party sales assume inventory risk and drive higher revenue contribution, the composition of Groupon's first-quarter revenue beat in North America has become questionable, analyst Ken Sena wrote in a note.

"Growth in unique visitors in the U.S. to Groupon.com, which can be looked at as a proxy for subscriber growth, exhibited negative year-over-year trends this quarter," Sena said.

Essentially, no one is buying stuff from Groupon, which leaves them holding the bag on lots of it.

In a related story, people are sick of FarmVille, which is hurting Zynga:

A slew of analysts cut their ratings and price targets for Zynga after it reported lower-than-expected quarterly results on Wednesday and forecast a much smaller 2012 profit.

Zynga has been hit by user fatigue for some of its long-running games and a shift in the way Facebook Inc's social platform promotes games.

"The biggest factor impacting current performance appears to be the way Facebook is surfacing gaming content on its platform," JP Morgan's Doug Anmuth wrote in a note to clients.

Actually, Facebook users just got bored of FarmVille, and it's hard to blame them. This is what happens when companies stop innovating in favor of milking their cash cows. (Sorry.)

Dual Microsoft Azure deployment: Project synchronization

Last week I offered developers a simple way to simultaneously deploy a web application to a Microsoft Azure web site and an Azure Cloud Services web role. Today I'm going to point out a particular pain with this approach that may make you reconsider trying to deploy to both environments.

Just to recap: since Azure web sites are free, or nearly so, you can save at least $15 a month by putting a demo instance of your app there rather than having a second web role for it. You'll still use a web role for your staging and production environments, of course.

While reading my last post, though, sharp-eyed developers might have noticed that the dual approach creates some additional maintenance overhead. Specifically, you'll need to keep both solution (.sln) files and both web project (.csproj) files in sync. This becomes part of your staging deployment task list, which means you probably only have to synchronize the files once every few weeks, so it's not such a big deal. Still, if you've never hand-edited a solution or project file before, it can be a little daunting.

The solution file probably won't require much synchronization, unless you've added new projects—or new files outside of projects—to the solution. For example, at 10th Magnitude we like to keep all of our database scripts in the solution tree, for easy access. (We also use the open-source RoundhousE tool for database deployment, which I'll talk about in a subsequent post.) If we add new database files to the web site solution, they won't automatically show up in the web role solution. Same with the web project file: adding new controllers, views, or web forms to a project is very common. So you'll have to make sure all the changes in the web site project get migrated over to the web role project.

Keep in mind, though, that some things will remain different between the two pairs of files. The web role solution will have a Cloud Services project, mapping the web and worker role entry points, which the web site won't have. And the web site project file will have references to Microsoft.WindowsAzure.ServiceRuntime.dll and msshrtmi.dll that the web role project won't have. Here's a synchronized pair of solution files, using Beyond Compare to show the deltas:

And here are the two project files, also synchronized, showing the differences you need to wind up with:
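In project-file terms, the delta you're aiming for amounts to an extra ItemGroup in the web site's .csproj that the web role's .csproj doesn't carry, roughly like this sketch (the HintPath is illustrative, not copied from a real project):

    <ItemGroup>
      <!-- References the web site project needs explicitly; the web role
           project gets the service runtime from the Cloud Services tooling
           instead. The HintPath below is a placeholder. -->
      <Reference Include="Microsoft.WindowsAzure.ServiceRuntime" />
      <Reference Include="msshrtmi">
        <HintPath>..\lib\msshrtmi.dll</HintPath>
      </Reference>
    </ItemGroup>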

One final thing to consider: if you have a paying client, they might not want to pay for development time to synchronize the two deployment environments. If you're charging $125 an hour, and you spend 30 minutes every four weeks—$62.50—to save the client $15 for an additional cloud services instance, that isn't good value. But for an internal application (like the 10th Magnitude brochure site), or for a personal project, the savings might be worth the hassle.

Azure web sites and web roles

(Cross-posted to my company's blog.)

If you’ve looked at Microsoft’s Azure pricing model, you’ve no doubt had some difficulty figuring out what makes the most economic sense. What size instances do I need? How many roles? How much storage? What will my monthly bill actually be?

Since June 7th, Microsoft has had one price for an entry-level offering that is completely comprehensible: free. You can now run up to 10 web sites on a shared instance for free. (Well, you have to pay for data output over 165 MB per month at 12c per gigabyte, and if the site needs a SQL Database, that’s at least $5 a month, etc.)

At 10th Magnitude, we’ve switched to free Azure websites for our dev and staging instances of some internal applications and for our brochure site. And it’s saving us real money.

There are limitations, which I’ll get to, but c’mon: free. A shared-instance Azure website is perfect if you have a small, low-bandwidth, compute-light web application that only needs, maybe, a small MySQL database or some XML files. They even have a quick-start gallery that includes DotNetNuke, dasBlog, WordPress, and a few other open source packages—also free.

So here’s how those limitations hit: Free Azure web sites run on a shared virtual machine with who-knows-how-many other people, and you get an “extra-small” VM to boot (1 GHz processor, 768 MB of RAM). You can’t use Azure tables or blobs with it, and “free” only includes 5 hours of compute time and 165 MB of data going out per month. Most important, you can’t use a custom host header, so your site URL will be “something.azurewebsites.net” instead of “www.something.com”. You can get more, better, faster, and your own domain name by going to a Reserved web site instance—but that is decidedly not free.

Take a look at the pricing model. Our official brochure site runs in an extra-small Azure web role, but doesn’t use a SQL database, nor does it use much storage, compute power, or data egress. The bill comes to about $30 per month. That’s not bad at all, considering how much dedicated hosting costs generally (really, Rackspace? $150 per month is your cheapest deal?).

Let’s say we double that $30 because we’re not going to slap our chief marketing website up there without a private staging instance. So now our $30 site costs $60, and remember, we aren’t even using a database.

Or, in fact, go ahead and triple it to $90, because we need a dedicated dev instance as well. Our CMO, Jen, needs room to experiment, try new designs, and test-drive new marketing approaches, which we don’t want on our staging instance in case we accidentally promote it to production.

Why not use a virtual machine, then? Here’s where Microsoft’s pricing gets tricky. An extra-small VM is less than $10 per month during the “preview period” going on right now, but you’ll need storage to hold the VM, and you’ll still have to pay for bandwidth. That puts the real price around $30 a month.

We could, in theory, run all three environments (production, staging, preview) on the single VM. But who in his right mind would run all three environments on one VM? So we’re back to two VMs—or three—so $90 a month.

By the way, reserved instances have another limitation, which may have something to do with Microsoft’s own capacity constraints as they build out new datacenters. Extra-small reserved instances aren’t available right now, so you’re stuck getting a small instance at $60 per month. I’ll have more on reserved instances in a subsequent post, because they’re great if you have an existing, complex Web application you want to move to the Cloud but don’t want to refactor it to use Azure cloud services.

In short, we’re saving about $60 per month—67%—by using free Azure web sites instead of Web roles or VMs. And that’s just for our corporate brochure. Add what we’re saving for our internal applications, and now we’re talking about more pizza and beer for the developers real savings.

More next post on solving challenges with staging on an Azure web site and hosting the production version in a Web role.

Out of the apartment, into the cloud (Part 2)

Last weekend I described moving my email hosting from my living-room home office out to Microsoft Exchange Online. And Thursday I spent all day at a Microsoft workshop about Windows Azure, the cloud computing platform on which my employer, 10th Magnitude, has developed software for the past two years.

In this post, I'm going to describe the actual process of migrating from an on-site Exchange 2007 server to Exchange Online. If you'd prefer more photos of Parker or discussions about politics, go ahead and skip this one. It's pretty technical and Parker only makes a brief cameo.

About 18 months ago, 10th Magnitude's CTO tried to move us to the Business Productivity Online Suite, AKA "BPOS," the predecessor offering now replaced by Exchange Online and Office 365. He was quite adamant that BPOS was a CPOS, and that just setting up the service was a complete PITA. I'd like to assure him and everyone else thinking about cloud-based email that the situation today has improved.

The new migration tools start you with a step-by-step checklist, linked to all the documentation you need, that takes you through the entire process:

Step 1 took fifteen seconds. I called my dad and told him I was moving his email account to a different server, and that he probably wouldn't even notice anything except a new password. He said fine. That was easy.

Step 2 was to add my domains to Exchange Online. My existing Exchange organization hosted eight domains, which it had acquired over the 12 years or so I'd run development servers in my office. Each domain required going into my DNS registration account at DNS Made Easy and adding a TXT record proving I owned it. Fortunately, my DNS provider and Microsoft communicated in real time about the updates, so I got through 7 of 8 domains in about 10 minutes. The 8th domain, which unfortunately was the Active Directory root domain, had its nameservers pointed at the DNS registrar that I used before switching to DNS Made Easy. Switching nameservers took an entire day, for reasons that pass understanding.
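For reference, each of those verification entries is just a TXT record whose value the Exchange Online setup hands you; in zone-file terms it looks something like this (the domain and token here are made up):

    ; hypothetical ownership-verification record for Exchange Online
    example.com.    3600    IN    TXT    "MS=ms12345678"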

Step 3, mailbox migration, had a few hiccups, and required more effort than I anticipated. First, using the Remote Connectivity Analyzer, I discovered that the specific combination of DNS records, firewall rules, and mailbox configuration on my Exchange server wouldn't allow migration. It took about two hours of playing whack-a-mole to get just one of the tests in the suite to work. Microsoft provided (generally) comprehensive instructions on how to fix the problems I encountered, however. The test suite itself gave me a good idea of what I was doing wrong, even without the TechNet articles.

The remaining steps in the plan—redirecting mail to the new server, completing the mailbox migration, activating users, and starting to use Exchange Online—took about fifteen minutes. Seriously.

The whole effort took about six hours. Part of this includes the post-move configuration changes I had to make to several services and Web sites, as my Exchange server was also my internal SMTP server. This blog, all of my hosted websites, and the collection of services that support those websites (like Weather Now, for example) all had to have a new SMTP server to send emails out. That was a little tricky, and required using the IIS 6 management tools on a Windows Server 2008 box. But that's another story.
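For what it's worth, any ASP.NET application that sends its own mail picks up the outbound server from the mailSettings section of web.config, so repointing a site can be as small as the snippet below. The host, port, and credentials are placeholders; my own setup also involved the IIS SMTP service, as noted above.

    <system.net>
      <mailSettings>
        <smtp deliveryMethod="Network" from="noreply@example.com">
          <!-- placeholder relay settings; substitute your real SMTP host and credentials -->
          <network host="smtp.example.com" port="587" userName="user" password="secret" />
        </smtp>
      </mailSettings>
    </system.net>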

Also, my RSS feeds didn't fare well in the switch. With Exchange 2007 and Outlook 2010, your RSS feeds are stored on the server, not the client. So I had to add all of them back by hand after the migration.

It's important to note a few things that would make this more difficult for a larger business than mine. I had two active mailboxes for people and a couple for support services, I controlled both the Exchange server and the network, and I had no critical business issues during the switch. Larger organizations will have to handle a migration much more carefully than I did.

In the end, my email experience is exactly the same. And my apartment (er, home office) is noticeably quieter with two fewer servers gobbling electricity.

Cloud email working fine; Azure symposium today

The email migration I did over the weekend has so far made my email experience better, in part because the server rack temperatures have dipped a full degree C (despite really hot weather outside). More details about the migration will follow this weekend.

Since 10th Magnitude has become a 100% Azure shop, Microsoft has invited us to participate in an all-day summit here in Chicago about the Azure cloud-computing platform. I'm leaving for it anon; I'll report on that this weekend, too.

Out of the apartment, into the cloud (part 1)

Before coming to 10th Magnitude, I was an independent consultant, mostly writing software but occasionally configuring networks. I hate configuring networks. And yet, since 2008, I've had a 48U server rack in my apartment.*

A “U” is a standard rack unit of 44.45 mm (1.75 inches), so this means I have a roughly 2.1 m steel rack behind an antique dressing screen in my living-room home office, which sits between my dining room and my bedroom in a compact apartment in Chicago:

It looks modest enough, yes?

On the server rack are three 2U servers and one 1U server. Behind the server rack is an old desktop box that got drafted for server duties. All of these machines have cooling fans that whirr constantly. Under the servers are the routers, uninterruptible power supplies, and wires that connect the servers with the rest of the world:

I spent about $10,000 on the servers and the rack in the last decade. All of the servers are nearing the ends of their lives—the newest is from 2008—and need replacing soon. Plus, they've used about $90 per month in electricity. They need air conditioning, too, which costs another $30 or so in the summer beyond what I'd spend on my own comfort, because the bastards create a lot of heat.

Imagine my glee when, about two weeks ago, Microsoft began offering a new configuration for its cloud-based Azure platform that dropped the price of moving (most) websites into the cloud to under $15 per month. That, combined with the onset of summer in Chicago, pushed me over the edge. I am now committed to getting the server rack out of my house by the end of September. This will accomplish three things:

  • It will cost less;
  • It will be quieter; and
  • Someone else can deal with the hardware and network maintenance.

I’ve already started. Over the weekend I moved my email to Microsoft Exchange Online, which costs $4 per month per mailbox, and so far works better than my old Exchange server. As just one example, if the power goes out in my apartment while I’m traveling, I won’t lose email connectivity.

Tomorrow I’ll describe the process in detail. Spoiler: the only thing that made me swear was getting my mobile phone connected.

* Before 2008, the rack was in my office in Evanston. I didn’t want to keep the office when I moved to Lincoln Park, so the servers moved in with me "just to save money." Never a good idea.

Why office dogs are awesome, cont'd

Because they improved downtown L.A. immensely:

In 1999, Los Angeles passed its Adaptive Reuse Ordinance, making it easier and cheaper for real estate developers to convert old offices to new housing. While the ordinance arguably jump-started the revitalization of downtown L.A., a key (though overlooked) element was pet-friendly policies in these newly converted lofts.

Walking dogs drove residents out of their homes and into the street at least twice each day. Elsewhere in Los Angeles, where single-family homes predominate, dog owners often have the luxury of sending Fido out to the yard to do his business. But downtown, dogs and their owners have become a crucial component of the rebounding neighborhood's culture.

Of course, if the office dog poops on the CEO's carpet, he'll still get fired.

Groupon shares decline to saner levels

As just about everyone who watches these things predicted, Groupon's shares declined 9% as soon as insiders were able to start trading them:

Friday marked the end of the company's lock-up period, which prevented insiders from unloading their Groupon stock. Groupon went public in November with a small float. The expiration of the lock-up period puts into play 600 million shares, amounting to 93 percent of the company's total outstanding shares. About one-third of those shares will not be sold, as they are in the hands of co-founders Andrew Mason, Eric Lefkofsky and Brad Keywell. Mason, who is also chief executive, said last month that the trio had no intention of selling their holdings.

Analysts had said they expected downward pressure on Groupon's shares as a result of the lock-up expiration but that many insiders -- a group that includes current and former senior executives, board members and early investors -- would hang onto their stock to wait for a rebound in the price. While Groupon's shares rebounded last month after the company reported first-quarter earnings, they remained well below their IPO price of $20.

Why did Groupon even have an IPO? Probably for the same reason Facebook did: to enrich the VCs and founders. That's easy. But why did anyone buy Groupon at $20 or Facebook at $38? Because math class is tough, but history is tougher, apparently.