The Daily Parker

Politics, Weather, Photography, and the Dog

Not a slow news day

Let's see, where to begin?

Finally, RawStory has a collection of responses to the President's Sharpie-altered weather map. (This is not, however, the first time the Administration has tried to make one of its Dear Leader's errors come true.) Enjoy.

The myth of "consumer" security systems

Bruce Schneier takes apart Attorney General Bill Barr's proposal to weaken civilian computer security:

The Department of Justice wants access to encrypted consumer devices but promises not to infiltrate business products or affect critical infrastructure. Yet that's not possible, because there is no longer any difference between those categories of devices. Consumer devices are critical infrastructure. They affect national security. And it would be foolish to weaken them, even at the request of law enforcement.

The thing is, that distinction between military and consumer products largely doesn't exist. All of those "consumer products" Barr wants access to are used by government officials -- heads of state, legislators, judges, military commanders and everyone else -- worldwide. They're used by election officials, police at all levels, nuclear power plant operators, CEOs and human rights activists. They're critical to national security as well as personal security.

Barr can't weaken consumer systems without also weakening commercial, government, and military systems. There's one world, one network, and one answer. As a matter of policy, the nation has to decide which takes precedence: offense or defense. If security is deliberately weakened, it will be weakened for everybody. And if security is strengthened, it is strengthened for everybody. It's time to accept the fact that these systems are too critical to society to weaken. Everyone will be more secure with stronger encryption, even if it means the bad guys get to use that encryption as well.

Schneier doesn't say it explicitly, but this is one more example of how Barr and other Republicans of his generation haven't caught up to the rest of the world.

How to combat influence operations

Bruce Schneier has an eight-step plan—though he recognizes Step 1 might not be possible:

Since the 2016 US presidential election, there has been an endless series of ideas about how countries can defend themselves. It's time to pull those together into a comprehensive approach to defending the public sphere and the institutions of democracy.

Influence operations don't come out of nowhere. They exploit a series of predictable weaknesses -- and fixing those holes should be the first step in fighting them. In cybersecurity, this is known as a "kill chain." That can work in fighting influence operations, too -- laying out the steps of an attack and building the taxonomy of countermeasures.

Step 1: Find the cracks in the fabric of society -- the social, demographic, economic, and ethnic divisions. For campaigns that just try to weaken collective trust in government's institutions, lots of cracks will do. But for influence operations that are more directly focused on a particular policy outcome, only those related to that issue will be effective.

Countermeasures: There will always be open disagreements in a democratic society, but one defense is to shore up the institutions that make that society possible. Elsewhere I have written about the "common political knowledge" necessary for democracies to function. That shared knowledge has to be strengthened, thereby making it harder to exploit the inevitable cracks. It needs to be made unacceptable -- or at least costly -- for domestic actors to use these same disinformation techniques in their own rhetoric and political maneuvering, and to highlight and encourage cooperation when politicians honestly work across party lines. The public must learn to become reflexively suspicious of information that makes them angry at fellow citizens. These cracks can't be entirely sealed, as they emerge from the diversity that makes democracies strong, but they can be made harder to exploit. Much of the work in "norms" falls here, although this is essentially an unfixable problem. This makes the countermeasures in the later steps even more important.

Unfortunately, most of these countermeasures also require informed and conscientious political leaders. Good luck with that.

A Harlequin hacker romance?

Via Bruce Schneier, this is literally* a thing:

The book opens with Massimo working in his combination laboratory and server farm; we know it's ironclad because of the required thumbprint and biometrics scan, but we also know it's classy because it's in an old wine cellar beneath his family villa outside Milan. Plus, he has three screens, so you know he's a serious cybersecurity hacker man.

Nat is a 20-something who lives a poverty-driven boho life. Massimo—who is Mr. Cyber—is, in her eyes, a "sleek, lean, sex-on-legs stud" who looks nothing like the stereotypical tech billionaire. And the chemistry between them ignites as he drags her back to his server room and tells her to do some... penetration testing.

She demurs.

Six chapters in. I am convinced that this book was written by a Harlequin Markov bot.

I may not add this to my book list just now. But at least I know it's out there...

*Yah, sorry. That's "literally" twice.

Lunchtime links

Just a few head-to-desk articles this afternoon:

I'm going to continue writing code and trying not to think about any of this.

Rethinking the surveillance society

Via Bruce Schneier, San Francisco-based "computer guy" Maciej Cegłowski put up a cogent, clear blog post last week showing how we might better regulate privacy:

Until recently, ambient privacy was a simple fact of life. Recording something for posterity required making special arrangements, and most of our shared experience of the past was filtered through the attenuating haze of human memory. Even police states like East Germany, where one in seven citizens was an informer, were not able to keep tabs on their entire population. Today computers have given us that power. Authoritarian states like China and Saudi Arabia are using this newfound capacity as a tool of social control. Here in the United States, we’re using it to show ads. But the infrastructure of total surveillance is everywhere the same, and everywhere being deployed at scale.

Ambient privacy is not a property of people, or of their data, but of the world around us. Just like you can’t drop out of the oil economy by refusing to drive a car, you can’t opt out of the surveillance economy by forswearing technology (and for many people, that choice is not an option). While there may be worthy reasons to take your life off the grid, the infrastructure will go up around you whether you use it or not.

All of this leads me to see a parallel between privacy law and environmental law, another area where a technological shift forced us to protect a dwindling resource that earlier generations could take for granted.

The idea of passing laws to protect the natural world was not one that came naturally to early Americans. In their experience, the wilderness was something that hungry bears came out of, not an endangered resource that required lawyers to defend. Our mastery over nature was the very measure of our civilization.

But as the balance of power between humans and nature shifted, it became clear that wild spaces could not survive without some kind of protection.

Read the whole thing. He makes a compelling case for regulating privacy the same way we regulated the environment.

How to protect your data from being stolen

Sadly, you can't. But you can protect yourself from identity theft, as Bruce Schneier explains:

The reality is that your sensitive data has likely already been stolen, multiple times. Cybercriminals have your credit card information. They have your social security number and your mother's maiden name. They have your address and phone number. They obtained the data by hacking any one of the hundreds of companies you entrust with the data -- and you have no visibility into those companies' security practices, and no recourse when they lose your data.

Given this, your best option is to turn your efforts toward trying to make sure that your data isn't used against you. Enable two-factor authentication for all important accounts whenever possible. Don't reuse passwords for anything important -- and get a password manager to remember them all.

Do your best to disable the "secret questions" and other backup authentication mechanisms companies use when you forget your password -- those are invariably insecure. Watch your credit reports and your bank accounts for suspicious activity. Set up credit freezes with the major credit bureaus. Be wary of email and phone calls you get from people purporting to be from companies you do business with.
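Schneier's advice to enable two-factor authentication usually means the six-digit codes a phone app generates every 30 seconds. Those codes come from two small published algorithms: HOTP (RFC 4226) and its time-based wrapper TOTP (RFC 6238). Here's a minimal sketch using only Python's standard library; the secret shown is the RFC's own test key, not anything real:

```python
import base64
import hashlib
import hmac
import struct
import time


def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226: HMAC-SHA1 over a counter, dynamically truncated to N digits."""
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # low nibble picks the window
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """RFC 6238: HOTP where the counter is the current 30-second time step."""
    key = base64.b32decode(secret_b32.upper())
    return hotp(key, int(time.time()) // period, digits)
```

Because both sides derive the code from a shared secret plus the clock, a phished static password alone isn't enough to log in -- which is the whole point of Schneier's recommendation.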

At the very least, download a password safe (like the one Schneier himself helped write) and make sure that you use a different, random password for everything.
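A different, random password for every site is easy to produce programmatically. A minimal sketch using Python's standard-library `secrets` module (a cryptographically strong random source, unlike `random`); the length and character set here are arbitrary choices, not a recommendation from Schneier:

```python
import secrets
import string


def random_password(length: int = 20) -> str:
    """Generate a random password from the OS's CSPRNG."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))
```

Any decent password safe does this for you; the point of rolling the example by hand is just to show there's no magic in it.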

Azure DNS failure causes widespread outage

Yesterday, Microsoft made an error in a nameserver delegation change (that is, in switching which computers serve its internal address book), causing large swaths of Azure to lose track of itself:

Summary of impact: Between 19:43 and 22:35 UTC on 02 May 2019, customers may have experienced intermittent connectivity issues with Azure and other Microsoft services (including M365, Dynamics, DevOps, etc). Most services were recovered by 21:30 UTC with the remaining recovered by 22:35 UTC. 

Preliminary root cause: Engineers identified the underlying root cause as a nameserver delegation change affecting DNS resolution and resulting in downstream impact to Compute, Storage, App Service, AAD, and SQL Database services. During the migration of a legacy DNS system to Azure DNS, some domains for Microsoft services were incorrectly updated. No customer DNS records were impacted during this incident, and the availability of Azure DNS remained at 100% throughout the incident. The problem impacted only records for Microsoft services.

Mitigation: To mitigate, engineers corrected the nameserver delegation issue. Applications and services that accessed the incorrectly configured domains may have cached the incorrect information, leading to a longer restoration time until their cached information expired.

Next steps: Engineers will continue to investigate to establish the full root cause and prevent future occurrences. A detailed RCA will be provided within approximately 72 hours.

If you tried to get to The Daily Parker yesterday afternoon, Chicago time, you might have gotten nothing, or you might have gotten the whole blog. All I know is that I spent half an hour tracking it down from my end before Microsoft copped to the problem.

That's not a criticism of Microsoft. In fact, they're a lot more transparent about problems like this than most other organizations. And having spent plenty of my own time trying to figure out why something broke, I can tell you half an hour isn't much.

So, bad for Microsoft that they tanked their entire universe with a misconfigured DNS server. Good for them that they fixed it completely in just over an hour.

Quick links

The day after a three-day, three-flight weekend doesn't usually rank among the top-10 productive days of my life. Today, for instance.

So here are some things I'm too lazy to write more about today:

Now, to write tomorrow's A-to-Z entry...

Two on data security

First, Bruce Schneier takes a look at Facebook's privacy shift:

There is ample reason to question Zuckerberg's pronouncement: The company has made -- and broken -- many privacy promises over the years. And if you read his 3,000-word post carefully, Zuckerberg says nothing about changing Facebook's surveillance capitalism business model. All the post discusses is making private chats more central to the company, which seems to be a play for increased market dominance and to counter the Chinese company WeChat.

We don't expect Facebook to abandon its advertising business model, relent in its push for monopolistic dominance, or fundamentally alter its social networking platforms. But the company can give users important privacy protections and controls without abandoning surveillance capitalism. While some of these changes will reduce profits in the short term, we hope Facebook's leadership realizes that they are in the best long-term interest of the company.

Facebook talks about community and bringing people together. These are admirable goals, and there's plenty of value (and profit) in having a sustainable platform for connecting people. But as long as the most important measure of success is short-term profit, doing things that help strengthen communities will fall by the wayside. Surveillance, which allows individually targeted advertising, will be prioritized over user privacy. Outrage, which drives engagement, will be prioritized over feelings of belonging. And corporate secrecy, which allows Facebook to evade both regulators and its users, will be prioritized over societal oversight. If Facebook now truly believes that these latter options are critical to its long-term success as a company, we welcome the changes that are forthcoming.

And Cory Doctorow describes a critical flaw in Switzerland's e-voting system:

[E]-voting is a terrible idea and the general consensus among security experts who don't work for e-voting vendors is that it shouldn't be attempted, but if you put out an RFP for magic beans, someone will always show up to sell you magic beans, whether or not magic beans exist.

The belief that companies can be trusted with this power [to fix security defects while preventing people from disclosing them] defies all logic, but it persists. Someone found Swiss Post's embrace of the idea too odious to bear, and they leaked the source code that Swiss Post had shared under its nondisclosure terms, and then an international team of some of the world's top security experts (including some of our favorites, like Matthew Green) set about analyzing that code, and (as every security expert who doesn't work for an e-voting company has predicted since the beginning of time), they found an incredibly powerful bug that would allow a single untrusted party at Swiss Post to undetectably alter the election results.

You might be thinking, "Well, what is the big deal? If you don't trust the people administering an election, you can't trust the election's outcome, right?" Not really: we design election systems so that multiple, uncoordinated people all act as checks and balances on each other. To suborn a well-run election takes massive coordination at many polling- and counting-places, as well as independent scrutineers from different political parties, as well as outside observers, etc.

And even other insecure e-voting systems like the ones in the USA are not this bad: they're decentralized, and would-be vote-riggers would have to compromise many systems, all around the nation, in each poll that they wanted to alter. But Swiss Post's defect allows a single party to alter all the polling data, and subvert all the audit systems. As Matthew Green told Motherboard: "I don’t think this was deliberate. However, if I set out to design a backdoor that allowed someone to compromise the election, it would look exactly like this."

Switzerland is going ahead with the election anyway, because that's what people do when they're called out on stupidity.