The Daily Parker

Politics, Weather, Photography, and the Dog

Illinois electric utility adds power for the Cloud

The Cloud—known to us in the industry as "someone else's computers"—takes a lot of power to run. Which is why our local electric utility, ComEd, is beefing up its service to the O'Hare area:

Last month, it broke ground to expand its substation in northwest suburban Itasca to increase its output by about 180 megawatts by the end of 2019. Large data centers with multiple users often consume about 24 megawatts. For scale, 1 megawatt is enough to supply as many as 285 homes.

ComEd also has acquired land for a new substation to serve the proposed 1 million-square-foot Busse Farm technology park in Elk Grove Village that will include a data center component. The last time ComEd built a substation was in 2015 in Romeoville, to serve nearby warehouses. In the past year, Elk Grove Village issued permits for four data center projects totaling 600,000 square feet and $175 million in construction. If built, it's a 40 percent increase in total data center capacity in the village.

Insiders say Apple, Google, Microsoft and Oracle have taken on more capacity at data centers in metro Chicago in the past year or so.

One deal that got plenty of tongues wagging was from DuPont Fabros Technology, which started work earlier this year on a 305,000-square-foot data center in Elk Grove Village. DuPont, which recently was acquired by Digital Realty Trust, pre-leased half of it, or about 14 megawatts, to a single customer, believed to be Apple.

One of the oldest cloud data centers, Microsoft's North Central Azure DC, is about three kilometers south of the airport here. Notice the substation just across the tollway to the west.

Rainy Monday lunchtime links

A succession of cold fronts has started traversing the Chicago area, so after an absolutely gorgeous Saturday we're now in the second day of cold, wet, gray weather. In other words, autumn in Chicago.

So here's what I'd like to read today but probably won't have time:

Meeting time. Yay.

Still in DLL hell

The problem with NuGet is that package installers don't always update assembly binding redirects.

As I mentioned earlier, I'm trying to upgrade a very large project to a new version of the ASP.NET runtime to try to solve a lingering problem. This required updating somewhere around 20 NuGet packages, only some of which make correct changes to configuration files.
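When a package upgrade leaves a stale redirect behind, the fix is a hand edit to the assemblyBinding section of web.config. Here is a minimal example of what a correct redirect looks like; the assembly and version numbers are illustrative, not the actual packages from this project:

```xml
<!-- web.config fragment: illustrative binding redirect (versions are examples) -->
<runtime>
  <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
    <dependentAssembly>
      <assemblyIdentity name="Newtonsoft.Json"
                        publicKeyToken="30ad4fe6b2a6aeed"
                        culture="neutral" />
      <!-- redirect every older version to the one actually deployed -->
      <bindingRedirect oldVersion="0.0.0.0-9.0.0.0" newVersion="9.0.0.0" />
    </dependentAssembly>
  </assemblyBinding>
</runtime>
```

The newVersion must match the assembly version of the DLL actually in the bin folder; when an installer bumps the package but not the redirect, the runtime throws the familiar "could not load file or assembly" error at startup.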

I've just gone through a 15-minute publish cycle that ended with an old and familiar error message for old and familiar reasons.

Guys. Quit messing with my configuration files. But if you have to, do it correctly. Seriously.

Simple Azure management using CloudMonix

We've been using CloudMonix for a while to manage and monitor our Microsoft Azure assets. By "we" I mean both Inner Drive Technology (home of The Daily Parker) and Holden (my day job).

CloudMonix recently added a new feature that automates virtual machine (VM) management. See, Microsoft charges for VMs by the hour. So if you have a VM that is only used at specific times, you're wasting money by having it run all the time.

A great example: Our continuous integration (CI) server, which builds and tests our (Holden's) applications every time a developer publishes a change to our master Git repository. Typically no one is making changes outside of business hours. So most of the time, the CI server just sits there, doing nothing.

Last week I configured CloudMonix to shut down our CI server every night at 6pm and wake it up at 7am the next morning. I only made two minor errors.

First, shutting down (deallocating) a VM in Azure makes its dynamic IP address evaporate, which screwed up some of the tests that connect to Salesforce. Second, the CI server runs a weekly build and smoke test early Monday morning, so we know first thing that the build is OK for the week. It was running at 4:15am; I had to move it to 7:15am. And all is good.
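CloudMonix does the scheduling through its dashboard, but the same 6pm/7am cycle can be sketched as two Azure CLI calls. The resource group and VM names below are hypothetical, and AZ is set to echo so this is a dry run; drop the echo to run it for real. Note that deallocating (unlike merely stopping) ends compute billing but also releases a dynamic public IP, which is exactly the behavior that broke the Salesforce tests:

```shell
#!/bin/sh
# Hypothetical resource names -- substitute your own.
RG="holden-ci-rg"
VM="ci-build-server"
AZ="echo az"   # dry run; remove "echo" to actually call the Azure CLI

# 6pm job: deallocate so Azure stops billing for compute overnight
$AZ vm deallocate --resource-group "$RG" --name "$VM"

# 7am job: bring the CI server back up (it gets a fresh dynamic IP)
$AZ vm start --resource-group "$RG" --name "$VM"
```

If the tests need a stable address, assigning a static public IP to the VM avoids the evaporating-IP problem at a small extra cost.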

Troubleshooting an upgrade conflict

After upgrading to the Azure SDK 2.8.1 yesterday, I'm unable to debug this application locally without an uncomfortable contortion.

The application is a Microsoft ASP.NET MVC website set up to run using IIS Express. It uses some Azure components, in particular the evil msshrtmi.dll that has caused so many versioning headaches in the past.

The symptoms are these: when starting to debug the application in Visual Studio 2015, the application compiles but immediately causes a system toast message to appear that announces "One or more errors occurred running IIS Express." Clicking there for more information opens this unhelpful dialog box:

The log file contains this single line:

Failed to register URL "http://localhost:64079/" for site "BlogEngine.NET" application "/". Error description: Access is denied. (0x80070005)

The other links point to articles on the MSKB, one of which is out of date and the other of which is probably irrelevant (because I'm running VS2015 as administrator).

I'll get to those in a second, because in reviewing the Windows application and system logs, I found some suspicious events that seem related.

In the Application log, there are multiple error events with IDs 2269 and 2276 that started after I installed the Azure SDK update. Event 2269 is: "The worker process for app pool 'Clr4IntegratedAppPool', PID='11000', failed to initialize the http.sys communication when asked to start processing http requests and therefore will be considered ill by W3SVC and terminated. The data field contains the error number." The error number is 0x80070005, with another code, 13780. Event 2276 is just a cascading error: "The worker process failed to initialize correctly and therefore could not be started."

Googling Event 2269 yields quite a few articles, but they diverge from my problem very quickly. I'll plow through those in a minute.

The other interesting event is in the System log. Whenever I attempt to debug the app, Event 15005 appears: "Unable to bind to the underlying transport for [::]:64079. The IP Listen-Only list may contain a reference to an interface which may not exist on this machine. The data field contains the error number."

Well, that's a lot more interesting. And it led directly to this article, which led me to look at what was actually listening on which ports, which led me to change the port in my debugger from 64079 to 49156. (I could see that 49156 was free by running netstat -aon.)
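The free-port hunt can be sketched in shell. The post used netstat -aon on Windows (where netsh http show iplisten is also worth checking); this portable sketch scans upward from the conflicting port until it finds one with no listener. The port number is the one from the post:

```shell
#!/bin/sh
# Scan upward from the broken debugger port until nothing is listening
# on the candidate. On Windows, `netsh http show iplisten` is also worth
# checking when http.sys refuses to bind.
PORT=64079
while netstat -aon 2>/dev/null | grep -q "[:.]$PORT "; do
  PORT=$((PORT + 1))
done
echo "first apparently-free port at or above 64079: $PORT"
```

This only proves nothing is listening right now; a port can still be reserved by http.sys or grabbed later, so it's a diagnostic aid rather than a guarantee.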

Sigh. I have no idea why upgrading to the latest Azure SDK would hose an IIS Express port, but even more than that, I am not entirely sure whether blaming the SDK is itself post hoc reasoning. But like so many things in systems this complex, I have now fixed the symptoms, and will go on with my life. Such a time suck, though.

Upgrade headache

I just upgraded my system to the Azure SDK 2.8.1, released earlier today, and also merged the latest code from the BlogEngine.NET master repo into my custom codebase. Do you see where I'm heading?

Once I "solved" the version issue with msshrtmi.dll (a perennial bête noire [not to be confused with this bête noire]), I published the changes and promptly killed the blog for an hour.

It looks better now, but I'm still having trouble debugging it locally. Tomorrow, after I finish fixing a bug for work, I'll figure out why.