The Daily Parker

Politics, Weather, Photography, and the Dog

Windows Azure is now fully PCI-compliant

This is a big deal for shops like 10th Magnitude, my employer, especially given that we developed the API for Arrow Payments. PCI compliance means banks—who have skin in the game—have certified Azure is secure enough for credit-card processing:

The PCI DSS is the global standard that any organization of any size must adhere to in order to accept payment cards, and to store, process, and/or transmit cardholder data. By providing PCI DSS validated infrastructure and platform services, Windows Azure delivers a compliant platform for you to run your own secure and compliant applications. You can now achieve PCI DSS certification for those applications using Windows Azure.

To assist customers in achieving PCI DSS certification, Microsoft is making the Windows Azure PCI Attestation of Compliance and Windows Azure Customer PCI Guide available for immediate download.

The latest Azure release also has a bunch of other great features for developers, including monitoring tools and Web site improvements, but PCI is the big one.

The IDTWDC is no more

Right before Christmas I removed the four dormant servers from the Inner Drive Technology Worldwide Data Center (IDTWDC), vowing to complete the job posthaste. Well, haste was Wednesday, so now, post that, I've finally finished.

There are no more servers in my apartment. The only computers running right now are my laptop and the new NAS. (The old switch, hidden under a chair, still has a whirring fan. I may replace it with a smaller, fanless switch at some point.)

Here's before:

And here's the after:

Notice that Parker doesn't seem too freaked out by the change, though he did look uncomfortable while things were actually moving. He's resilient.

Fully 18 months ago I started moving all my stuff to Microsoft Azure. Today, the project is completely done. My apartment is oddly quiet, and seems oddly larger. I can get used to this.

Shutting it all down

Right before Christmas I removed all the long-dormant servers from the Inner Drive Technology Worldwide Data Center. Today I'd planned to shut off the last two live devices, my domain controller and my TeraStation network attached storage (NAS) appliance, replacing the first with nothing and the second with a new NAS.

(The NAS is the little black box on the floor to the right; the domain controller is the thin rack-mounted server at the top.)

It turns out, today was a good day to shut down the old NAS. When I logged into its UI, I discovered that one of its disks had failed, cutting its capacity by a third. Fortunately, I had configured the device with 4 × 256 GB drives in RAID 5. This meant that when one of the drives failed, the other three kept the data alive just fine, but the array lost 256 GB of usable space and a whole lot of speed.
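A quick sanity check on the RAID 5 arithmetic above, as a sketch: the array gives up one drive's worth of space to parity, so four 256 GB drives yield 768 GB, and re-forming on the three survivors yields 512 GB, a loss of exactly a third.

```javascript
// RAID 5 usable capacity is (n - 1) × drive size: one drive's worth
// of space goes to parity. Numbers below match the TeraStation above.
function raid5Capacity(driveCount, driveSizeGb) {
  if (driveCount < 3) throw new Error("RAID 5 needs at least 3 drives");
  return (driveCount - 1) * driveSizeGb;
}

const healthy = raid5Capacity(4, 256); // 768 GB with all four drives
const rebuilt = raid5Capacity(3, 256); // 512 GB re-formed on three drives
const lost = healthy - rebuilt;        // 256 GB, one third of 768
```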

The new NAS cost $200 and has 4 TB of space—almost six times more than the old, ailing NAS. I'll have a photo of it when I put it in its permanent home next weekend. (Right now there's a server rack in the way, and the new NAS is busy getting completely loaded.)

For perspective: the TeraStation cost $900 in May 2006. It's run nearly continuously since then, which means three of the drives lasted about 67,000 hours, with an amortized cost of 32¢ per day.

I'll report on how much damage killing the domain does to a network once I'm done cleaning up the debris.

The IDTWDC thins out

I had a reasonably productive morning cleaning up the Inner Drive Technology World Headquarters, including removing all the decommissioned hardware from the Inner Drive Technology Worldwide Data Center. Contrast the before with the during:

Both DSL modems are still there; so are the NAS, the PDC, and the switch. However, the dead UPS (thank you, Tripp Lite, for creating a UPS whose battery you can't replace), four decommissioned servers (including one in the back you can't really see), and a whole bunch of cables are now out of my apartment.

I'm still debating whether to break my domain or move it to the cloud, so the domain controller gets a stay of execution for a week or so. Either way, it's gone next weekend.

All the rack servers, along with the rack itself, are free or nearly so. Let me know if you want one.

Who wants a server? Or a rack?

The Inner Drive Technology Worldwide Data Center (IDTWDC) will shortly be decommissioned. I first wrote about this in June 2012, when it looked like I could migrate all the apps running on my servers to Azure quickly. (It actually took until March.)

Now, however, I'm done. And now I have about 100 kg of equipment to remove from my apartment.

So: does anyone want some equipment? Here's the inventory:

  • Two Dell PowerEdge 2950 2U servers with 1.6 GHz Xeon dual-core processors. One has 4 GB of RAM, the other 2 GB. These were my Web and database servers.
  • Another Dell PowerEdge 2950 2U server, but with a 1.8 GHz Xeon single-core processor and 4 GB of RAM. This was my Exchange server.
  • A Dell PowerEdge 860 1U server with a 1.8 GHz Xeon single-core processor and 2 GB of RAM. This is my primary domain controller (PDC).
  • All four servers have SCSI PERC RAID controllers.
  • A Netgear gigabit switch with 24 ports.
  • A 42U steel rack, as pictured, with removable shelf.
  • A 17" LCD screen, Dell keyboard, and 4-port KVM switch.
  • Assorted network cables, power cables, and APC battery backup units, some of which may need new batteries.

Since I'm essentially giving these things away (except for the rack, for which I'm asking $50), they're conveyed as-is, with no liability. Note that the servers will not include disk drives.

If you want these, or know anyone who might, let me know through Inner Drive feedback.

Am I bringing my laptop to Korea?

Oh, you betcha:

On a year-over-year basis, average connection speeds grew by 25 percent. South Korea had an average speed of 14 Mbps while Japan came in second with 10.8 Mbps and the U.S. came in the eighth spot with 7.4 Mbps.

Year-over-year, global average peak connection speeds once again demonstrated significant improvement, rising 35 percent. Hong Kong came in first with peak speed of 57.5 Mbps while South Korea came in at 49.3 Mbps. The United States came in 13th at 31.5 Mbps.

Yes, South Korea has the fastest connectivity in the world. This I gotta see.

Plus, you know, clients.

EF6 "The default RetryManager has not been set" problem

Aw, buggre alle this for a Larke.

I'm all in favor of upgrades, but for Foucault's sake, don't break things. I'm trying to upgrade a .NET project to Entity Framework 6, and I want to smack the developers.

Under previous versions, you could set the retry manager through configuration. This was really helpful for unit testing, when you might want to change the configuration and have the application block load a transient fault handler automatically.
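For comparison, the older application block could pick up its default retry strategy from app.config, with a section along these lines. (This is a sketch from memory of the Enterprise Library 5 transient fault handling block; element and attribute names may differ in your version, so check the block's documentation before relying on it.)

```xml
<RetryPolicyConfiguration defaultRetryStrategy="fixed">
  <fixedInterval name="fixed" maxRetryCount="10" retryInterval="00:00:03" />
</RetryPolicyConfiguration>
```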

With Entity Framework 6 (EF6 — yes, this is blatant SEO), you have to set up the default transient fault handler in code:

public void TestFixtureSetup()
{
	var strategy = new FixedInterval("fixed", 10, TimeSpan.FromSeconds(3));
	var strategies = new List<RetryStrategy> { strategy };
	var manager = new RetryManager(strategies, "fixed");
	RetryManager.SetDefault(manager); // without this, EF6 throws the error in the title
}

I mean, really? With EF6, you've got to put this code in every unit-test fixture in your solution, which in practice means every unit test file. Before, the application block supplied its own defaults.

Despite being all in favor of upgrades, I do get impatient when (a) the upgrade breaks existing code and (b) the entity performing the upgrade is one of the wealthiest entities in the history of business.

All right, then. Bring on the copy pasta...

jQuery: Party like it's 1989

Programming languages have come a long way since I banged out my first BASIC "Hello, World" in 1977. We have great compilers, wonderful editors, and strong typing.

In the past few years, jQuery and JSON, both based on JavaScript, have become ubiquitous. I use them all the time now.

jQuery and JSON are weakly-typed and late-bound. The practical effect of these characteristics is that you can introduce subtle, maddening bugs merely by changing the letter case of a single variable (e.g., from "ID" to "Id").
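The trap above can be shown in a few lines. (The `getIgnoreCase` helper is hypothetical, just a debugging aid for illustration; the property names are made up.)

```javascript
// JSON property names are case-sensitive, so "Id" and "ID" are
// entirely different keys. Parse a payload that uses "Id"...
const response = JSON.parse('{"Id": 42, "name": "Parker"}');

// ...then ask for "ID". No error, no warning: just undefined, silently.
const wrong = response.ID; // undefined
const right = response.Id; // 42

// Hypothetical helper for hunting down case mismatches: looks up a
// property while ignoring case.
function getIgnoreCase(obj, key) {
  const match = Object.keys(obj).find(
    k => k.toLowerCase() === key.toLowerCase());
  return match === undefined ? undefined : obj[match];
}
```

The silent `undefined` is exactly why the bug takes hours to find: nothing fails at the point of the typo, only far downstream.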

I've just spent three hours of a perfectly decent Sunday trying to find exactly that problem in some client code. And I want to punch someone.

Two things from this:

1. Use conventions consistently. I'm going to go through all the code we have and make sure that ID is always ID, not Id or id.

2. When debugging JSON, search backwards. I'll have more to say about that later, but my day would have involved much more walking Parker had I gone from the error symptom backwards to the code rather than trying to step through the code into the error.

OK, walkies now.

Lunchtime link list

Once again, here's a list of things I'm sending straight to Kindle (on my Android tablet) to read after work:

Back to work. All of you.

Before I forget...

I've got about an hour to prepare for a Meet-Up I'm presenting. While I'm doing that, you read these:

OK, prep time.