U.S. Magistrate Judge Sheri Pym yesterday ordered Apple, Inc., to bypass security on the iPhone 5c owned by the San Bernardino shooters. Apple said no:
In his statement, [Apple CEO Tim] Cook called the court order an “unprecedented step” by the federal government. “We oppose this order, which has implications far beyond the legal case at hand,” he wrote.
“The F.B.I. may use different words to describe this tool, but make no mistake: Building a version of iOS that bypasses security in this way would undeniably create a back door,” Mr. Cook wrote. “And while the government may argue that its use would be limited to this case, there is no way to guarantee such control.”
The Electronic Frontier Foundation, a nonprofit organization that defends digital rights, said it was siding with Apple.
“The government is asking Apple to create a master key so that it can open a single phone,” it said Tuesday evening. “And once that master key is created, we’re certain that our government will ask for it again and again, for other phones, and turn this power against any software or device that has the audacity to offer strong security.”
This reminds me of the incremental logic of Joss Whedon's Dollhouse, where every choice the characters make along the way seems like the right thing to do at the time, if you skip the inconvenient implications of it.
On Friday I mused about which new technology (or technologies) I should learn in the next few weeks. As if they're reading my mind (or blog) up in Redmond, just this morning Microsoft's Brady Gaster blogged about a little Raspberry Pi project he did:
I broke out my Raspberry Pi and my Azure SDK 2.8.2-enabled Visual Studio 2015 Community Edition and worked up a quick-and-dirty application that can send sensor data to an API App running in Azure App Service. This post walks through the creation of this sample, the code for which is stored in this GitHub repository.
The code that will run on the Raspberry Pi is also extremely simple, deliberately so that you can use your own imagination and add functionality however you want. Here’s a picture of my Raspberry Pi running in our team room, on the big screen. As you can see the app is quite basic – it consists solely of a toggle button that, when clicked, kicks off a timer. Each time the timer fires, a request is made to the App Service I just deployed.
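The timer-driven loop Gaster describes—press a button, then fire off a reading to an API App on each tick—can be sketched in a few lines of Python. Everything here is an assumption for illustration: the endpoint URL, the payload shape, and the `read_sensor` stand-in are all hypothetical, not Gaster's actual code (his sample is in the GitHub repository he links).

```python
import json
import time
import urllib.request

# Hypothetical App Service endpoint; the real API App URL would differ.
API_URL = "https://example-api-app.azurewebsites.net/api/readings"

def read_sensor():
    """Stand-in for a real Raspberry Pi sensor read (values invented)."""
    return {"deviceId": "pi-01", "tempC": 21.5, "timestamp": time.time()}

def post_reading(reading):
    """Send one sensor reading to the API App as JSON; returns HTTP status."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(reading).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

On the device you'd call `post_reading(read_sensor())` from a timer or a simple sleep loop, which is the whole "quick-and-dirty" pattern his post walks through.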
Since Gaster is the Azure SDK & Tools Program Manager, his post is really about Azure. But hey, for $50, why not whip up a little toy?
One of the companies I work with recently used Raspberry Pi devices with motion sensors to publicize when conference rooms were free. Maybe I can resurrect the Parker Cam with a motion sensor?
Software developer Todd Schneider has analyzed 22 million Citi Bike trips (the New York equivalent of Chicago's Divvy). He's even got some cool animations:
If you stare at the animation for a bit, you start to see some trends. My personal favorite spots to watch are the bridges that connect Brooklyn to Lower Manhattan. In the morning, beginning around 8 AM, you see a steady volume of bikes crossing from Brooklyn into Manhattan over the Brooklyn, Manhattan, and Williamsburg bridges. In the middle of the day, the bridges are generally less busy, then starting around 5:30 PM, we see the blue dots streaming from Manhattan back into Brooklyn, as riders leave their Manhattan offices to head back to their Brooklyn homes.
Sure enough, in the mornings there are more rides from Brooklyn to Manhattan than vice versa, while in the evenings there are more people riding from Manhattan to Brooklyn. For what it’s worth, most Citi Bike trips start and end in Manhattan. The overall breakdown since the program’s expansion in August 2015:
- 88% of trips start and end in Manhattan
- 8% of trips start and end in an outer borough
- 4% of trips travel between Manhattan and an outer borough
There are other distinct commuting patterns in the animation: the stretch of 1st Avenue heading north from 59th Street has very little Citi Bike traffic in the morning, but starting around 5 PM the volume picks up as people presumably head home from their Midtown offices to the Upper East Side.
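That 88/8/4 breakdown is a one-pass aggregation over the trip records. A quick sketch (Schneider actually works in R and SQL; this Python version, with its assumed `(start_borough, end_borough)` tuples, is just to show the shape of the calculation):

```python
from collections import Counter

def borough_breakdown(trips):
    """trips: iterable of (start_borough, end_borough) pairs.
    Returns the percentage of trips in each of the three categories
    from Schneider's post."""
    counts = Counter()
    for start, end in trips:
        if start == "Manhattan" and end == "Manhattan":
            counts["within Manhattan"] += 1
        elif start != "Manhattan" and end != "Manhattan":
            counts["within outer boroughs"] += 1
        else:
            counts["between Manhattan and outer"] += 1
    total = sum(counts.values())
    return {k: 100 * v / total for k, v in counts.items()}
```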
Schneider previously analyzed 1.1 billion New York taxi trips.
I'm debating what new area I should explore, assuming I have the time:
I'm thinking about a few side projects, obviously. And this article on new "universal remote" apps in today's Times got me thinking about home automation, too. But that's less a skill to learn than a set of toys to play with.
The Daily WTF (a must-read if you're in a technology job) today described how poor testing caused 2,000 ballots to be thrown out in a 2014 election in Brussels:
It wasn’t enough to sway any one election, but the media had already caught wind of the potential voter fraud. Adrien’s company was hired for an independent code review of Delacroy Europe’s voting program to determine if anything criminal had transpired.
He noticed something strange in the UI selection functions, triggered when the user selects a candidate on the viewscreen.
He found two commented-out lines, dated June 28, 2013, a year before election day. A developer, looking at Card_Unselect(), realized that unselecting a candidate also unselected everyone on that candidate's list. They commented out two lines, thinking they had fixed the error. However, the unselection algorithm never decremented the check counter, which kept track of how many candidates had been chosen. If a user checked a candidate on one list, changed their mind, and picked another from a separate list, then both votes would be counted.
It hadn’t been a case of fraud, but some poorly-placed comments.
It also could have been prevented—or at least discovered immediately—through automated unit testing.
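The shape of that bug is easy to model. Here's a toy Python sketch (the class and method names are invented, not the Delacroy code) with the unit test that would have caught it: with the decrement in place the test passes; comment it out, as those developers effectively did, and a changed vote counts twice.

```python
class Ballot:
    """Toy model of the ballot UI state; names are hypothetical."""
    def __init__(self):
        self.checked = set()
        self.check_counter = 0  # how many candidates have been chosen

    def select(self, candidate):
        if candidate not in self.checked:
            self.checked.add(candidate)
            self.check_counter += 1

    def unselect(self, candidate):
        if candidate in self.checked:
            self.checked.remove(candidate)
            # The bug in the story: without this decrement, the counter
            # stays one too high after a voter changes their mind.
            self.check_counter -= 1

def test_change_of_mind_counts_once():
    b = Ballot()
    b.select("candidate A")    # voter picks someone on one list
    b.unselect("candidate A")  # changes their mind
    b.select("candidate B")    # picks someone from a separate list
    assert b.check_counter == 1
```

One five-line test, run on every build, and those 2,000 ballots never get thrown out.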
"Never ascribe to malice what can be adequately explained by incompetence."
I have three books in the works and two on deck (imminently, not just in my to-be-read stack) right now. Reading:
- Kevin Hearne, "Iron Druid Chronicles" book 8: Staked.
- Kim Stanley Robinson, "Mars" trilogy book 2: Green Mars.
Meanwhile, I have these articles and blog posts to read, some for work, some because they're interesting:
Time to read.
Meanwhile, I seem to have a cold. Yuck.
CityLab reports that Chicago's open-sourced food safety analysis software has made our food inspectors much more effective. Other cities aren't adopting it, though:
Chicago started using the prediction tool for daily operations in February 2015, and the transition worked very smoothly, says Raed Mansour, innovation projects lead for the Department of Public Health. That’s because the department was careful to incorporate the algorithm in a way that minimally altered the existing business practices. Inspectors still get their assignments from a manager, for instance, but now the manager is generating schedules from the algorithm. The department will conduct an evaluation of the program after a year, and Mansour anticipates that the performance will meet or exceed the metrics from the test run.
But that was never meant to be the end of it. Back in November 2014, Schenk published the code for the algorithm on the programming website GitHub, so anyone in any other city could see exactly what Chicago did and adapt the program to their own community’s needs. That’s about as far as they could go to promote it, short of knocking on the door of every city hall in America. But the months since then have shown that it takes more than code to launch a municipal data program.
Chicago passed around the free samples, but a year later only one government has taken a bite: Montgomery County, Maryland, just northwest of Washington, D.C. The county hired a private company called Open Data Nation to adapt Chicago’s code for use in the new location. Carey Anne Nadeau, who heads the company, ran a two-month test of the adapted algorithm in fall 2015 that identified 27 percent more violations in the first month than business as usual, and found them three days earlier.
There's a classic anti-pattern called "not invented here." That may be one of the factors. Another could be that the other cities' tech staff just aren't interested in trying new things. Chicago hasn't always been ahead of the curve, but I'm glad we've at least got the one guy.
The New York Times Magazine has an in-depth analysis of the daily fantasy sports (DFS) industry. I'm not that interested in fantasy sports, but this article had me riveted:
Here’s how it works: Let’s say you run D.F.S. Site A, and D.F.S. Site B has just announced a weekly megacontest in which first place will take home $1 million. Now you have to find a way to host a comparable contest, or all your customers will flee to Site B to chase that seven-figure jackpot. The problem is that you have only 25,000 users, and the most you can charge them to enter is $20 per game (anything higher is prohibitively expensive). And you’ll need $2 million or even $3 million in a prize pool if first prize is valued at $1 million (remember, you still have to pay second place, third place and beyond). So you need to somehow quadruple the number of entries. But how? You’re already paying high cost-per-acquisition fees to sites like RotoGrinders, which charge, according to Harber, anywhere between $100 and $200 per person they refer to your site, and you’ve already put your logo on every bus, trash can and ESPN screaming-heads show out there. You’ve also kicked in some of your own money (known as “overlay”) to spice up the pot.
The solution is simple: You let each contestant enter hundreds of times. But even given this freedom, a majority of people will enter only a few more times, which will help but probably won’t get you all you need. If, however, you can attract a few high rollers who are willing to book several hundred or even several thousand entries apiece, the path to the $1 million first prize becomes a lot more manageable. And as long as you can make sure those players keep pouring in their thousands of entries, you can keep posting the $1 million first prize all over your ads.
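The back-of-the-envelope math in those two paragraphs, with the figures straight from the article, works out like this:

```python
users = 25_000
entry_fee = 20          # dollars; the practical ceiling per game
prize_pool = 2_000_000  # low end of what a $1M first prize requires

# If every user enters exactly once, you raise only a quarter of the pool.
revenue_one_entry_each = users * entry_fee   # $500,000

# Entries needed to cover the pool, and the multiplier over your user base.
entries_needed = prize_pool / entry_fee      # 100,000 entries
multiplier = entries_needed / users          # 4x -- hence "quadruple"
```

Which is why unlimited multi-entry isn't a loophole in the business model; it is the business model.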
In the game lobbies of DraftKings and FanDuel, however, sharks are free to flood the marketplace with thousands of entries every day, luring inexperienced, bad players into games in which they are at a sizable disadvantage. The imbalanced winnings in D.F.S. have been an open secret since this past September, when Bloomberg Businessweek published an exposé on the habits of high-volume players. The numbers are damning. According to DraftKings data obtained by the New York State attorney general’s office, between 2013 and 2014, 89.3 percent of players had a negative return on investment. A recent McKinsey study showed that in the first half of the 2015 Major League Baseball season, 91 percent of the prize money was won by a mere 1.3 percent of the players.
So, how is this at all fun to casual players? Someone explain it to me.
People just starting out in software often ask me what they should learn. I submit for discussion the contents of my desk:
Some of these books are more valuable than others. I leave the ranking of those books as an exercise for the reader.
Actually, that's not true. Anyone who wants to be a professional software developer (as loaded a phrase as has ever appeared on this blog) needs to read all of these books. Every one of them.
No, really. In 1998 Microsoft wanted to demonstrate its SQL Server database engine with a terabyte-sized database, so it built a map called Terraserver. Motherboard's Jason Koebler has the story:
Terraserver could have, should have been a product that ensured Microsoft would remain the world’s most important internet company well into the 21st century. It was the first-ever publicly available interactive satellite map of the world. The world’s first-ever terabyte-sized database. In fact, it was the world’s largest database for several years, and that Compaq was—physically speaking—the world's largest computer. Terraserver was a functional and popular Google Earth predecessor that launched and worked well before Google even thought of the concept. It let you see your house, from space.
So why aren’t we all using Terraserver on our smartphones right now?
Probably for the same reason Microsoft barely put up a fight as Google outpaced it with search, email, browser, and just about every other consumer service. Microsoft, the corporation, didn't seem to care very much about the people who actually used Terraserver, and it didn’t care about the vast amount of data about consumers it was gleaning from how they used the service.
In sum, Microsoft saw itself as a software company, not an information company. It's similar to how Borders got destroyed: it thought of itself as a bookstore, while Amazon thought of itself as a delivery service.
I remember how cool Terraserver was, and how sad I felt when it disappeared for a couple of years before it morphed into Google Earth.