The Daily Parker

Politics, Weather, Photography, and the Dog

Maybe I'll have free time later today

If so, these are queued up:

More later...

The heart bleeds for OpenSSL

Bruce Schneier, not one for hyperbole, calls the Heartbleed defect an 11 on a 10 scale:

Basically, an attacker can grab 64K of memory from a server. The attack leaves no trace, and can be done multiple times to grab a different random 64K of memory. This means that anything in memory -- SSL private keys, user keys, anything -- is vulnerable. And you have to assume that it is all compromised. All of it.

"Catastrophic" is the right word.

At this point, the odds are close to one that every target has had its private keys extracted by multiple intelligence agencies. The real question is whether or not someone deliberately inserted this bug into OpenSSL, and has had two years of unfettered access to everything. My guess is accident, but I have no proof.

It turns out, Windows systems don't use OpenSSL very much; IIS and ASP.NET rely on Microsoft's own TLS implementation (SChannel), which doesn't have this bug. So if you're visiting a site running on Windows (basically anything with ".aspx" at the end), you're fine.

Still, if you've used Yahoo! or any other system that has this bug, change your password. Now.

And the day started so well...

At 8:16 this morning, a long-time client sent me an email saying that one of his customers was getting a strange bug in their scheduling application. They could see everything except the tabbed UI control they needed to use. In other words, there was a hole in the screen where the data entry should have been.

Here's how the rest of the day went around this issue. It's the kind of thing that makes me proud to be an engineer, in the same way the guys who built Galloping Gertie were proud.

It all started when I updated a Windows Azure cloud service from the no-longer-supported SDK 1.7 running on Windows Server 2008 to the current SDK (2.2) and operating system (Windows Server 2012 R2). I also upgraded the application from .NET 4.0 to .NET 4.5.1, which is only possible on WS2012R2.

This upgrade started months ago, and proceeded slowly because both my clients and I had other priorities. I mean, who wants to spend a lot of money upgrading a platform without upgrading the application running on it? So the last build of the application went to production in October, and I hadn't touched it since. It worked fine, so why mess with it? Other than the fact that the operating system and Azure SDK are no longer supported.

Before pushing the update, I thoroughly tested the application. I mean, unit tests up the ying, with a tens-of-steps-long regression test on my local, and on an Azure test instance, before even looking askance at the Production instance. When I had tested everything I could imagine, I did this:

  1. Stopped the application, to ensure no one changed any data during the upgrade.
  2. Made a full copy of the production database ("CREATE DATABASE productioncopy AS COPY OF production")
  3. Once the data was fully copied, I uploaded the new bits to the Staging slot of the application.
  4. I updated the configuration info to the current standards.
  5. VIP swap! (I swapped the staging and production instances, so the old production instance was now in the staging slot.)
  6. And....it's running just fine. All that planning and testing worked!
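
(For the curious, step 2 boils down to a little T-SQL run against the server's master database. What follows is a sketch rather than my exact script: the database names come from step 2 above, and the monitoring queries are my addition, using the standard Azure SQL catalog views. The copy runs asynchronously, so you have to wait for the new database to come online before moving on.)

-- Step 2: copy the production database (run against the logical server's master database)
CREATE DATABASE productioncopy AS COPY OF production;

-- The copy completes in the background; poll until the new database shows up as ONLINE.
SELECT name, state_desc   -- 'COPYING' while in progress, 'ONLINE' when finished
FROM sys.databases
WHERE name = 'productioncopy';

-- Progress and error details for in-flight copies, if you want more than a yes/no answer.
SELECT percent_complete, error_desc
FROM sys.dm_database_copies;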

So what happened? Well, it turns out there's one thing I didn't anticipate: Internet Explorer 8, released five years ago Thursday and known to have difficulties with JavaScript. Plus, the Infragistics controls we used when we originally deployed in January 2008 have known incompatibilities with IE8. But again: the application had worked just fine the whole time.

Since everything worked just fine on earlier versions of the application, and since this update didn't directly change the UI, and since IE8 hasn't been supported in quite some time, I figured there wouldn't be any problems.

It turns out that a sizable portion of my client's customers use IE8, because they're big hospitals with big IT departments and little budgets for updates.

Once I realized with abject horror that the application was simply broken for most of the people using it, I resigned myself to rolling back to the previous release, which had worked just fine. When I got home, I started this task, and the following things happened:

  1. Once again, I stopped the application.
  2. The actual database restore went fine, as did the VIP swap putting the previous version back in the Production slot and the new version in the Staging slot.
  3. When the application tried to start, I realized I'd forgotten to roll back the configuration information for the logging and messaging component. So the application failed to start.
  4. I rolled back the config.
  5. The application again failed to start. And because the logging and messaging component was the part that was failing, I couldn't see any diagnostics.
  6. Fortunately, I deployed the application with Remote Desktop enabled, so I tried connecting to the virtual machine directly.
  7. The Remote Desktop user account had expired.
  8. Fortunately I use great source control. In Mercurial, I updated to the last production build before the upgrade and tried to load it into Visual Studio.
  9. The load failed. See, I no longer have the Azure SDK v1.7; in fact, I never installed it on this machine. I'm running SDK 2.2, and I have no easy way of running an older version.

So, as far as I knew at that point, there was simply no way to get into the application, and no way for me to re-upload the old version.

I decided to try a different tack. I rolled back the rollback and restarted the new version. I also started trying to get my last remaining Windows XP machine running so that I could confirm the bug, and start testing fixes on a Test instance running Windows Server 2012 R2.

Getting a 10-year-old laptop to boot, let me log in, stop wasting time with all the detritus it acquired in its years of service, connect to my network, and open up IE8, took 45 minutes.

Some time in there I walked Parker.

So now, I can see that the error exists in IE8, and I also have found an article on how to reset the RDP password expiration date. Only, I'm really tired, and I am worried I'll make stupid errors if I keep trying to debug this right now.

So I have two approaches I will try first thing in the morning: first, roll back to the October release, and manually update the RDP expiration date so I can remote in and debug the configuration problem. Then I'll have to re-create all the data my client added yesterday, which will take me at least an hour. If I'm supremely lucky I'll have this done by 8am. Since I've had no luck at all so far on this upgrade, I am not optimistic.

Second, I'll start removing the outdated Infragistics code. Believe it or not, jQuery works fine on IE8, despite being pretty much the latest thing in user-interface libraries. It's the custom crap Infragistics pushed out years ago that fails. Unfortunately I won't be able to deploy this before leaving on Thursday morning. Fortunately the application isn't going to stop working suddenly; the OS and SDK are no longer supported, but they won't actually turn the OS off until June.

And there's the irony in a nutshell. I thought I did everything right in the deployment cycle, especially the part where I got three months ahead of the due date. The things that went wrong to get me to this state of frustration and exhaustion were numerous and tiny, kind of like the things that go wrong to cause an aviation accident. That said, the client has suffered no data loss, and I preserved a whole catalog of options to fix the problem (relatively) quickly. This isn't the disaster it would have been without the deployment tools you get with Azure.

Plus, I've learned to test everything on IE8 whenever health care companies are involved. Sheesh.

About that iOS "flaw"

Security guru Bruce Schneier wonders if the iOS security flaw recently reported was deliberate:

Last October, I speculated on the best ways to go about designing and implementing a software backdoor. I suggested three characteristics of a good backdoor: low chance of discovery, high deniability if discovered, and minimal conspiracy to implement.

The critical iOS vulnerability that Apple patched last week is an excellent example. Look at the code. What caused the vulnerability is a single line of code: a second "goto fail;" statement. Since that statement isn't a conditional, it causes the whole procedure to terminate.

If the Apple auditing system is any good, they would be able to trace this errant goto line not just to the source-code check-in details, but to the specific login that made the change. And they would quickly know whether this was just an error, or a deliberate change by a bad actor. Does anyone know what's going on inside Apple?

Schneier has argued previously that the NSA's biggest mistake was dishonesty. Because we don't know what they're up to, and because they've lied so often about it, people start to believe the worst about technology flaws. This Apple error could have been a stupid programmer mistake, a botched merge, or something in that category. But we no longer trust Apple to work in our best interests.

This is a sad state of affairs.

How to really irritate Internet users

I've figured out the hotel's WiFi. It's not the slowest Internet connection in the world; it's the slowest SSL in the world. In other words, they're not really throttling the Internet per se. But somewhere between here and the U.S., someone is only letting through a very few secure packets.

The genius of this is that things like restaurant reviews (think: TripAdvisor.com) come up normally. But just try to get your email, check your airline reservations, or heaven forbid, get to your bank's website. It's excruciating.

Normal port-80 traffic is running about 500 kbps. That's not especially fast, but it's not painful. But SSL traffic is getting to my laptop at 100 kbps peak speeds and 30 kbps average speeds. Let's party like it's 1999! W00t!

Further, I can't tell where the bottleneck is. Anyone from here to Miami could be throttling SSL: the local Sint Maarten ISP, the hotel, the government of Sint Maarten, the government of the U.S....there's no way to tell. It's just like the Chinese firewall: maddening, inefficient, almost certainly deliberate, but too difficult to diagnose to ever find the right face to punch.

I have a very simple problem that my hotel's WiFi is preventing me from solving. That makes me very annoyed. This will, I assure you, go into my TripAdvisor review.

Confessions of a TSA agent

If most of what Jason Harrington wrote in Politico last week is true, I'm disappointed to have my suspicions confirmed:

Each day I had to look into the eyes of passengers in niqabs and thawbs undergoing full-body pat-downs, having been guilty of nothing besides holding passports from the wrong nations. As the son of a German-American mother and an African-American father who was born in the Jim Crow South, I can pass for Middle Eastern, so the glares directed at me felt particularly accusatory. The thought nagged at me that I was enabling the same government-sanctioned bigotry my father had fought so hard to escape.

Most of us knew the directives were questionable, but orders were orders. And in practice, officers with common sense were able to cut corners on the most absurd rules, provided supervisors or managers weren’t looking.

[T]he only people who hated the body-scanners more than the public were TSA employees themselves. Many of my co-workers felt uncomfortable even standing next to the radiation-emitting machines we were forcing members of the public to stand inside. Several told me they submitted formal requests for dosimeters, to measure their exposure to radiation. The agency’s stance was that dosimeters were not necessary—the radiation doses from the machines were perfectly acceptable, they told us. We would just have to take their word for it. When concerned passengers—usually pregnant women—asked how much radiation the machines emitted and whether they were safe, we were instructed by our superiors to assure them everything was fine.

In one of his blog posts, Harrington points to "the neurotic, collectively 9/11-traumatized, pathological nature of American airport security" as the source of all this wasted effort and money.

I've always thought TSA screeners are doing the best they can with the ridiculous, contradictory orders they have. It's got to be at least as frustrating for them as it is for us. Harrington pretty much confirms that.

Can the NSA prevent another 9/11? Well, it failed to prevent the first one

CNN national-security analyst Peter Bergen argues that the NSA, CIA, and FBI had all the information they needed to prevent 9/11, but the Bush Administration failed to follow through. Providing more tools to the NSA would do nothing except give them more power:

The government missed multiple opportunities to catch al Qaeda hijacker Khalid al-Mihdhar when he was living in San Diego for a year and a half in the run up to 9/11, not because it lacked access to all Americans phone records but because it didn't share the information it already possessed about the soon-to-be hijacker within other branches of the government.

The CIA also did not alert the FBI about the identities of the suspected terrorists so that the bureau could look for them once they were inside the United States.

These multiple missed opportunities challenge the administration's claims that the NSA's bulk phone data surveillance program could have prevented the 9/11 attacks. The key problem was one of information sharing, not the lack of information.

Since we can't run history backward, all we can say with certainty is that it is an indisputable fact that the proper sharing of intelligence by the CIA with other agencies about al-Mihdhar may well have derailed the 9/11 plot. And it is merely an untestable hypothesis that if the NSA bulk phone collection program had been in place at the time that it might have helped to find the soon-to-be-hijackers in San Diego.

Indeed, the overall problem for U.S. counterterrorism officials is not that they don't gather enough information from the bulk surveillance of American phone data but that they don't sufficiently understand or widely share the information they already possess that is derived from conventional law enforcement and intelligence techniques.

The blanket Hoovering up of data by the NSA threatens everyone's liberties. But that cost isn't worth the results by any measure, since the NSA isn't actually making us safer. Their appeals to fear don't change the existing evidence.

How the NSA is making us less safe

Bruce Schneier makes the case:

We have no evidence that any of this surveillance makes us safer. NSA Director General Keith Alexander responded to these stories in June by claiming that he disrupted 54 terrorist plots. In October, he revised that number downward to 13, and then to "one or two." At this point, the only "plot" prevented was that of a San Diego man sending $8,500 to support a Somali militant group. We have been repeatedly told that these surveillance programs would have been able to stop 9/11, yet the NSA didn't detect the Boston bombings -- even though one of the two terrorists was on the watch list and the other had a sloppy social media trail. Bulk collection of data and metadata is an ineffective counterterrorism tool.

Not only is ubiquitous surveillance ineffective, it is extraordinarily costly. I don't mean just the budgets, which will continue to skyrocket. Or the diplomatic costs, as country after country learns of our surveillance programs against their citizens. I'm also talking about the cost to our society. It breaks so much of what our society has built. It breaks our political systems, as Congress is unable to provide any meaningful oversight and citizens are kept in the dark about what government does. It breaks our legal systems, as laws are ignored or reinterpreted, and people are unable to challenge government actions in court. It breaks our commercial systems, as US computer products and services are no longer trusted worldwide. It breaks our technical systems, as the very protocols of the Internet become untrusted. And it breaks our social systems; the loss of privacy, freedom, and liberty is much more damaging to our society than the occasional act of random violence.

It's all stuff he's said before, but it needs saying again.

Customer service that can't think for itself

I just received an alert on a credit card I used to share with an ex. The account, which has been in her name alone since we split, has a small balance for the first time in six years.

There are two possibilities here, which should be obvious:

1. My ex does not know I still receive alerts on her credit card.

2. My ex does not know the card is active again.

Regardless of which is true (and both may be), she needs to know about it. And given that (2) could expose her to liability for fraudulent charges, the card issuer needs to know too.

So I called Bank of America to point out these twin possibilities, and after arguing with their phone system for five minutes, finally got to speak with an agent. I cannot say the conversation went well. After I explained the situation, I said, "so you should let her know about this."

"Is Miss ---- there with you?"

"What? No, we haven't seen each other in years, which is why this is so odd."

"OK, but without her authorization I can't give out account information."

"I don't want any account information. You need to tell her that I am getting account information by email, and that an account I thought we closed in 2007 is active again."

"OK, she is getting the alerts too, so I will make a note on the account for when she calls in next time."

"She may not be getting the alerts, if she has a new email address. Look, I'm talking about potential fraud here, you need to call her today."

"OK, we will call her and let her know."

Look, I understand that some aspects of technology security are too esoteric for most people, and I'm sorry there wasn't a Customer Service script for this. But some flaw in B of A's systems allowed personal financial data to leak to someone who shouldn't have it (me), in such a way that the account owner (my ex) doesn't know about the leak. I'm trying to help you here.

Also, I'm posting these details here on the off-chance they don't let her know and that she ever reads this blog. So, if this post applies to you, I did what I could. And you may want to switch to a less-moronic card provider.

Small world

The Chicago technology scene is tight. I just had a meeting with a guy I worked with from 2003-2004. Back then, we were both consultants on a project with a local financial services company. Today he's CTO of the company that bought it—so, really, the same company. Apparently, they're still using software I wrote back then, too.

I love when these things happen.

This guy was also witness to my biggest-ever screw-up. (By "biggest" I mean "costliest.") I won't go into details, except to say that whenever I write a SQL delete statement today, I do this first:

-- DELETE
SELECT *
FROM MissionCriticalDataWorthMillionsOfDollars
WHERE ID = 12345

That way, I get to see exactly what rows will be deleted before committing to the delete. Also, even if I accidentally hit <F5> before verifying the WHERE clause, all it will do is select more rows than I expect.
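
A related habit (not part of this story, just a natural companion to that pattern) is to run the real delete inside an explicit transaction and check the row count before committing, so even a fat-fingered WHERE clause is reversible. A sketch:

BEGIN TRANSACTION;

DELETE
FROM MissionCriticalDataWorthMillionsOfDollars
WHERE ID = 12345;

-- @@ROWCOUNT holds the number of rows the DELETE just removed.
SELECT @@ROWCOUNT AS RowsDeleted;

-- If the count matches what the SELECT showed, commit; otherwise roll back and nothing is lost.
-- COMMIT TRANSACTION;
-- ROLLBACK TRANSACTION;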

You can fill in the rest of the story on your own.