The Daily Parker

Politics, Weather, Photography, and the Dog

Can the NSA prevent another 9/11? Well, it failed to prevent the first one

CNN national-security analyst Peter Bergen argues that the NSA, CIA, and FBI had all the information they needed to prevent 9/11, but the Bush Administration failed to follow through. Providing more tools to the NSA would do nothing except give them more power:

The government missed multiple opportunities to catch al Qaeda hijacker Khalid al-Mihdhar when he was living in San Diego for a year and a half in the run-up to 9/11, not because it lacked access to all Americans' phone records but because it didn't share the information it already possessed about the soon-to-be hijacker with other branches of the government.

The CIA also did not alert the FBI about the identities of the suspected terrorists so that the bureau could look for them once they were inside the United States.

These multiple missed opportunities challenge the administration's claims that the NSA's bulk phone data surveillance program could have prevented the 9/11 attacks. The key problem was one of information sharing, not the lack of information.

Since we can't run history backward, all we can say is that the proper sharing of intelligence by the CIA with other agencies about al-Mihdhar may well have derailed the 9/11 plot. And it is merely an untestable hypothesis that the NSA bulk phone collection program, had it been in place at the time, might have helped to find the soon-to-be-hijackers in San Diego.

Indeed, the overall problem for U.S. counterterrorism officials is not that they don't gather enough information from the bulk surveillance of American phone data but that they don't sufficiently understand or widely share the information they already possess that is derived from conventional law enforcement and intelligence techniques.

The NSA's blanket Hoovering-up of data threatens everyone's liberties. And by any measure, that cost isn't worth the results, since the NSA isn't actually making us safer. Appeals to fear don't change the existing evidence.

How the NSA is making us less safe

Bruce Schneier makes the case:

We have no evidence that any of this surveillance makes us safer. NSA Director General Keith Alexander responded to these stories in June by claiming that he disrupted 54 terrorist plots. In October, he revised that number downward to 13, and then to "one or two." At this point, the only "plot" prevented was that of a San Diego man sending $8,500 to support a Somali militant group. We have been repeatedly told that these surveillance programs would have been able to stop 9/11, yet the NSA didn't detect the Boston bombings -- even though one of the two terrorists was on the watch list and the other had a sloppy social media trail. Bulk collection of data and metadata is an ineffective counterterrorism tool.

Not only is ubiquitous surveillance ineffective, it is extraordinarily costly. I don't mean just the budgets, which will continue to skyrocket. Or the diplomatic costs, as country after country learns of our surveillance programs against their citizens. I'm also talking about the cost to our society. It breaks so much of what our society has built. It breaks our political systems, as Congress is unable to provide any meaningful oversight and citizens are kept in the dark about what government does. It breaks our legal systems, as laws are ignored or reinterpreted, and people are unable to challenge government actions in court. It breaks our commercial systems, as US computer products and services are no longer trusted worldwide. It breaks our technical systems, as the very protocols of the Internet become untrusted. And it breaks our social systems; the loss of privacy, freedom, and liberty is much more damaging to our society than the occasional act of random violence.

It's all stuff he's said before, but it needs saying again.

Customer service that can't think for itself

I just received an alert on a credit card I used to share with an ex. The account, which has been in her name since we split, has a small balance for the first time in six years.

There are two possibilities here, which should be obvious:

1. My ex does not know I still receive alerts on her credit card.

2. My ex does not know the card is active again.

Regardless of which is true (and both may be), she needs to know about it. And since (2) could expose her to liability for fraud, so does the card issuer.

So I called Bank of America to point out these twin possibilities, and after arguing with their phone system for five minutes, finally got to speak with an agent. I cannot say the conversation went well. After I explained the situation, I said, "so you should let her know about this."

"Is Miss ---- there with you?"

"What? No, we haven't seen each other in years, which is why this is so odd."

"OK, but without her authorization I can't give out account information."

"I don't want any account information. You need to tell her that I am getting account information by email, and that an account I thought we closed in 2007 is active again."

"OK, she is getting the alerts too, so I will make a note on the account for when she calls in next time."

"She may not be getting the alerts, if she has a new email address. Look, I'm talking about potential fraud here, you need to call her today."

"OK, we will call her and let her know."

Look, I understand that some aspects of technology security are too esoteric for most people, and I'm sorry there wasn't a Customer Service script for this. But some flaw in B of A's systems allowed personal financial data to leak to someone who shouldn't have it (me), in such a way that the account owner (my ex) doesn't know about the leak. I'm trying to help you here.

Also, I'm posting these details here on the off-chance that they don't let her know and she someday reads this blog. So, if this post applies to you, I did what I could. And you may want to switch to a less-moronic card provider.

Small world

The Chicago technology scene is tight. I just had a meeting with a guy I worked with from 2003-2004. Back then, we were both consultants on a project with a local financial services company. Today he's CTO of the company that bought it—so, really, the same company. Apparently, they're still using software I wrote back then, too.

I love when these things happen.

This guy was also witness to my biggest-ever screw-up. (By "biggest" I mean "costliest.") I won't go into details, except to say that whenever I write a SQL delete statement today, I do this first:

-- DELETE
SELECT *
FROM MissionCriticalDataWorthMillionsOfDollars
WHERE ID = 12345

That way, I get to see exactly what rows will be deleted before committing to the delete. Also, even if I accidentally hit <F5> before verifying the WHERE clause, all it will do is select more rows than I expect.

You can fill in the rest of the story on your own.

Institutional failure in Internet security

Security guru Bruce Schneier has two essays in the Guardian this week. The first explains how the US government betrayed the Internet:

By subverting the internet at every level to make it a vast, multi-layered and robust surveillance platform, the NSA has undermined a fundamental social contract. The companies that build and manage our internet infrastructure, the companies that create and sell us our hardware and software, or the companies that host our data: we can no longer trust them to be ethical internet stewards.

I have resisted saying this up to now, and I am saddened to say it, but the US has proved to be an unethical steward of the internet. The UK is no better. The NSA's actions are legitimizing the internet abuses by China, Russia, Iran and others. We need to figure out new means of internet governance, ones that make it harder for powerful tech countries to monitor everything. For example, we need to demand transparency, oversight, and accountability from our governments and corporations.

Unfortunately, this is going to play directly into the hands of totalitarian governments that want to control their country's internet for even more extreme forms of surveillance. We need to figure out how to prevent that, too. We need to avoid the mistakes of the International Telecommunications Union, which has become a forum to legitimize bad government behavior, and create truly international governance that can't be dominated or abused by any one country.

He followed up today with a guide to staying secure against the NSA:

1) Hide in the network. Implement hidden services. Use Tor to anonymize yourself. Yes, the NSA targets Tor users, but it's work for them. The less obvious you are, the safer you are.

2) Encrypt your communications. Use TLS. Use IPsec. Again, while it's true that the NSA targets encrypted connections – and it may have explicit exploits against these protocols – you're much better protected than if you communicate in the clear.

There are three other points, all pretty simple.
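Point (2) is easy to act on in code. As a minimal sketch using Python's standard ssl module (host and port here are illustrative), this is what insisting on a certificate-verified TLS connection looks like, as opposed to talking in the clear:

```python
import socket
import ssl

# Build a context with secure defaults: certificate verification on,
# hostname checking on, obsolete protocol versions disabled.
context = ssl.create_default_context()

def connect_over_tls(host: str, port: int = 443) -> str:
    """Open a verified TLS connection and report the negotiated version."""
    with socket.create_connection((host, port)) as sock:
        # wrap_socket performs the TLS handshake and verifies the
        # server's certificate chain against the system trust store.
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()  # e.g. "TLSv1.2" or "TLSv1.3"
```

If the server's certificate doesn't check out, the handshake raises an exception instead of silently falling back to plaintext, which is exactly the behavior you want.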

Quis custodiet robote?

Bruce Schneier thinks the NSA's plan to fire 90% of its sysadmins and replace them with automation has a flaw:

Does anyone know a sysadmin anywhere who believes it's possible to automate 90% of his job? Or who thinks any such automation will actually improve security?

[NSA Director Keith Alexander is] stuck. Computerized systems require trusted people to administer them. And any agency with all that computing power is going to need thousands of sysadmins. Some of them are going to be whistleblowers.

Leaking secret information is the civil disobedience of our age. Alexander has to get used to it.

The agency's leaks have also forced the president's hand by opening up our security apparatus to public scrutiny—which he may have wanted to do anyway.

Unexpectedly productive weekend

Yes, I know the weather's beautiful in Chicago this weekend, but sometimes you just have to run with things. So that's what I did the last day and a half.

A few things collided in my head yesterday morning, and this afternoon my computing landscape looks completely different.

First, for a couple of weeks I've led my company's efforts to consolidate and upgrade our tools. That means I've seen a few head-to-head comparisons between FogBugz, Atlassian tools, and a couple other products.

Second, in the process of moving this blog to Orchard, I've had some, ah, challenges getting Mercurial and Git to play nicely together. Orchard just switched to Git, and promptly broke Hg-Git, forcing contributors to enlist in Git directly.

Third, my remote Mercurial repositories are sitting out on an Azure VM with no automation around them. Every time I want to add a remote repository I have to remote into the VM and add it to the file system. Or just use my last remaining server, which still requires cloning and copying.

Fourth, even though the VM was doing a lot more when I created it a year ago, right now it's running just a few things: The Daily Parker, Hired Wrist, my FogBugz instance, and two extinct sites that I keep up because I'm a good Internet citizen (the Inner Drive blog and a party site I did ten years ago).

Fifth, that damn VM costs me about $65 a month, because I built a small instance so I'd have adequate space and power. Well, serving 10,000 page views per day takes about as much computational power as the average phone has these days, so its CPU never ticks over 5%. Microsoft has an "extra small" size that costs 83% less than "small" and is only 50% less powerful.

Finally, on Friday my company's MSDN benefits renewed for another year, one benefit being $200 of Azure credits every month.

I put all this together and thought to myself, "Self, why am I spending $65 a month on a virtual machine that has nothing on it but a few personal websites and makes me maintain my own source repository and issue tracker?"

Then yesterday morning came along, and these things happened:

  1. I signed up for Atlassian's tools, Bitbucket (which supports both Git and Mercurial) and JIRA. The first month is free; after, the combination costs $20 a month for up to 10 users.
  2. I learned how to use JIRA. I don't mean I added a couple of cases and poked around with the default workflow; I mean I figured out how to set up projects, permissions, notifications, email routing, and on and on, almost to the extent I know FogBugz, which I've used for six years.
  3. I wrote a utility in C# to export my FogBugz data to JIRA, and then exported all of my active projects with their archives (about 2,000 cases).
  4. I moved the VM to my MSDN subscription. This means I copied the virtual hard disk (VHD) underpinning my VM to the other subscription and set up a new VM using the same disk over there. This also isn't trivial; it took over two hours.
  5. I changed all the DNS entries pointing to the old VM so they'd point to the new VM.
  6. Somewhere during all that time, I took Parker on a couple of long walks for about 2½ hours.

At each point in the process, I only planned to do a small proof-of-concept that somehow became a completed task. In fact, yesterday I'd intended to pick up my dry cleaning, but somehow I went from 10am to 5pm without noticing how much time had gone by. I haven't experienced flow in a while, so I didn't recognize it at the time. Parker, good dog that he is, let me go until about 5:30 before insisting he had to go outside.

I guess the last day and a half was an apotheosis of sorts. Fourteen months ago, I had a data center in my living room; today I've not only got everything in the Cloud, but I'm no longer wasting valuable hours messing around configuring things.

Oh, and I also just bought a 2 TB portable drive for $130, making my 512 GB NAS completely redundant. One fewer thing using electricity in my house...

Update: I forgot to include the code I whipped up to create .csv export files from FogBugz.
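The shape of that export is simple enough to sketch. Here's a minimal illustration (in Python for brevity rather than the C# the utility was actually written in, and with hypothetical field names; assume the cases have already been pulled from the FogBugz API):

```python
import csv

# Hypothetical case records standing in for data pulled from FogBugz;
# the field names here are illustrative, not FogBugz's own schema.
cases = [
    {"id": 101, "title": "Fix login redirect", "status": "Active",
     "project": "Daily Parker"},
    {"id": 102, "title": "Upgrade Orchard", "status": "Resolved",
     "project": "Daily Parker"},
]

def export_cases(cases, path):
    """Write cases to a CSV file an issue-tracker importer can consume."""
    fieldnames = ["id", "title", "status", "project"]
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()     # header row tells the importer which
        writer.writerows(cases)  # column maps to which tracker field

export_cases(cases, "fogbugz_export.csv")
```

The header row matters: JIRA's CSV importer uses it to map columns onto its own fields, so getting the names right up front saves fiddling during the import.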

The national security state

Security guru Bruce Schneier warns about the lack of trust resulting from revelations about NSA domestic spying:

Both government agencies and corporations have cloaked themselves in so much secrecy that it's impossible to verify anything they say; revelation after revelation demonstrates that they've been lying to us regularly and tell the truth only when there's no alternative.

There's much more to come. Right now, the press has published only a tiny percentage of the documents Snowden took with him. And Snowden's files are only a tiny percentage of the number of secrets our government is keeping, awaiting the next whistle-blower.

Ronald Reagan once said "trust but verify." That works only if we can verify. In a world where everyone lies to us all the time, we have no choice but to trust blindly, and we have no reason to believe that anyone is worthy of blind trust. It's no wonder that most people are ignoring the story; it's just too much cognitive dissonance to try to cope with it.

Meanwhile, at the Wall Street Journal, Ted Koppel has an op-ed warning about our over-reactions to terrorism:

[O]nly 18 months [after 9/11], with the invasion of Iraq in 2003, ... the U.S. began to inflict upon itself a degree of damage that no external power could have achieved. Even bin Laden must have been astounded. He had, it has been reported, hoped that the U.S. would be drawn into a ground war in Afghanistan, that graveyard to so many foreign armies. But Iraq! In the end, the war left 4,500 American soldiers dead and 32,000 wounded. It cost well in excess of a trillion dollars—every penny of which was borrowed money.

Saddam was killed, it's true, and the world is a better place for it. What prior U.S. administrations understood, however, was Saddam's value as a regional counterweight to Iran. It is hard to look at Iraq today and find that the U.S. gained much for its sacrifices there. Nor, as we seek to untangle ourselves from Afghanistan, can U.S. achievements there be seen as much of a bargain for the price paid in blood and treasure.

At home, the U.S. has constructed an antiterrorism enterprise so immense, so costly and so inexorably interwoven with the defense establishment, police and intelligence agencies, communications systems, and with social media, travel networks and their attendant security apparatus, that the idea of downsizing, let alone disbanding such a construct, is an exercise in futility.

Do you feel safer now?

Microsoft ID age-verification hell

Our company needs a specific Microsoft account, not attached to a specific employee, to be the "Account Holder" for our Azure subscriptions.

Azure allows one and only one account holder, you see, and more than one person needs access to the billing information for these accounts. Setting up a dedicated account for that purpose solves the problem.

So, I went ahead and set up an email account for our putative Azure administrator, and then went to the Live ID signup process. It asked me for my "birthdate." Figuring, what the hell, I entered the birthdate of the company.

That got me here:

Annoying, but fine, I get why they do this.

So I got all the way through the process, including giving them a credit card to prove I'm real, and then I got this:

By the way, those screenshots are from the third attempt, including one with a different credit card.

I have sent a message to Microsoft customer support, but haven't gotten an acknowledgement yet. I think I'm just going to cancel the account and start over.

Update: Yes, killing the account and starting over (by denying the email verification step) worked. So why couldn't the average pre-teen figure this out too? This has to be one of the dumber things companies do.