The Daily Parker

Politics, Weather, Photography, and the Dog

Busy day, time to read the news

Oh boy:

Cassie has bugged me for the last hour, even though we went out two hours ago. I assume she wants dinner. I will take care of that presently.

Yes, software is an ongoing F-up

Remy Porter, an editor at the hilarious blog The Daily WTF, responded to Facebook's catastrophic BGP update by pointing out how software actually gets made:

IT in general, and software in specific, is a rather bizarre field in terms of how skills work. If, for example, you wanted to get good at basketball, you might practice free-throws. As you practice, you'd expect the number of free-throws you make to gradually increase. It'll never be 100%, but the error rate will decline, the success rate will increase. Big-name players can expect a 90% success rate, and on average a professional player can expect about an 80% success rate, at least according to this article. I don't actually know anything about basketball.

But my ignorance aside, I want you to imagine writing a non-trivial block of code and having it compile, run, and pass its tests on the first try. Now, imagine doing that 80% of the time.

It's a joke in our industry, right? It's a joke that's so overplayed that perhaps it should join "It's hard to exit VIM" in the bin of jokes that needs a break. But why is this experience so universal? Why do we have a moment of panic when our code just works the first time, and we wonder what we screwed up?

It's because we already know the truth of software development: effing up is actually your job.

You absolutely don't get a choice. Effing up is your job. You're going to watch your program crash. You're going to make a simple change and watch all the tests go from green to red. That semicolon you forgot is going to break the build. And you will stare at one line of code for six hours, silently screaming, WHY DON'T YOU WORK?

Yep. And still, we do it every day.

Facebook is as Facebook does

Josh Marshall points out that the harm Facebook causes comes from its basic design, making a quick fix impossible:

First, set aside all morality. Let’s say we have a 16 year old girl who’s been doing searches about average weights, whether boys care if a girl is overweight and maybe some diets. She’s also spent some time on a site called AmIFat.com. Now I set you this task. You’re on the other side of the Facebook screen and I want you to get her to click on as many things as possible and spend as much time clicking or reading as possible. Are you going to show her movie reviews? Funny cat videos? Homework tips? Of course not. If you’re really trying to grab her attention you’re going to show her content about really thin girls, how their thinness has gotten them the attention of boys who turn out to really love them, and more diets. If you’re clever you probably wouldn’t start with content that’s going to make this 16 year old feel super bad about herself because that might just get her to log off. You’ll inspire or provoke enough negative feelings to get clicks and engagement without going too far.

This is what artificial intelligence and machine learning are. Facebook is a series of algorithms and goals aimed at maximizing engagement with Facebook. That’s why it’s worth hundreds of billions of dollars. It has a vast army of computer scientists and programmers whose job it is to make that machine more efficient. The truth is we’re all teen girls and boys about some topic. Maybe the subject isn’t tied as much to depression or self-destructive behavior. Maybe you don’t have the same amount of social anxiety or depressive thoughts in the mix. But the Facebook engine is designed to scope you out, take a psychographic profile of who you are and then use its data compiled from literally billions of humans to serve you content designed to maximize your engagement with Facebook.

Put in those terms, you barely have a chance.

He goes on to draw a comparison between Facebook's executives and Big Tobacco's, circa 1975:

At a certain point you realize: our product is bad. If used as intended it causes lung cancer, heart disease and various other ailments in a high proportion of the people who use the product. And our business model is based on the fact that the product is chemically addictive. Our product is getting people addicted to tobacco so that they no longer really have a choice over whether to buy it. And then a high proportion of them will die because we’ve succeeded.

So what to do? The decision of all the companies, if not all individuals, was just to lie. What else are you going to do? Say we’re closing down our multi-billion dollar company because our product shouldn’t exist?

You can add filters and claim you’re not marketing to kids. But really you’re only ramping back the vast social harm marginally at best. That’s the product. It is what it is.

Yesterday's 6-hour reprieve from Facebook seems to have hurt almost no one. The jokes started right away, about how anti-vaxxers could no longer "do research" and how people have started reading again. I didn't even notice until I read that it had gone offline, because I had too much work to do. So maybe that's what regulators should do: limit the company to 8 hours a day or something. What a thought...

Beautiful autumn morning

I've opened nearly every window in my house to let in the 15°C breeze and savor the first real fall morning in a while. Chicago will get above-normal temperatures for the next 10 days or so, but in early October that means highs in the mid-20s and lows in the mid-teens. Even Cassie likes the change.

Since I plan to spend nearly every moment of daylight outside for the rest of this weekend, I want to note a few things to read this evening when I come back inside:

Finally, if you really want to dig into some cool stuff in C# 10, Scott Hanselman explains implicit namespace support.
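For anyone who hasn't played with it yet, here's roughly what the feature buys you (a minimal sketch; the exact set of implicitly-imported namespaces depends on which project SDK you use):

```csharp
// With <ImplicitUsings>enable</ImplicitUsings> in the .csproj (the default in
// new .NET 6 templates), the SDK adds global using directives for common
// namespaces — System, System.Linq, System.Collections.Generic, and so on —
// so a file like this compiles with no using lines at all.
var squares = Enumerable.Range(1, 5).Select(n => n * n);
Console.WriteLine(string.Join(", ", squares)); // prints "1, 4, 9, 16, 25"
```

Combined with C# 10's file-scoped namespaces and top-level statements, it trims a surprising amount of ceremony from small files.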

Bug report: Garmin Venu - Usability - High severity

Summary: When displaying a notification over a paused activity, swiping down will delete the paused activity instead of the notification, without an Undo feature.

Severity: High (accidental but irrevocable data loss)

Steps to reproduce:

  1. Take a PTO day to enjoy a 7-hour outdoor exercise.
  2. Start the exercise on the Garmin Venu device.
  3. Spend 82 minutes in the exercise.
  4. Press Button A on the Venu to pause the activity. The activity will show as Paused, with a Discard (X) indication on the top of the display and a Save (check) indication on the bottom.
  5. Have a friend innocently text you about a nonessential matter. A notification shows up on the Venu display.
  6. As you have done thousands of times before, swipe down to dismiss the notification. The activity is deleted, but the notification just stays there, mocking you.
  7. Stare at the device for a moment in stunned silence.
  8. Frantically swipe up on the device to try to undo the deletion. Nothing happens because there is no Undo feature for this action.
  9. (Omitted)
  10. (Omitted again, but this time with reference to the usability engineers at Garmin who apparently forgot the rule that inadvertent data loss must never happen.)
  11. (Omitted once more, but this time with reference to said engineers' standardized test scores, parentage, and general usefulness to humanity.)
  12. Begin drafting a strongly-worded bug report to share with the above-mentioned Garmin usability engineers.
  13. Spend the next five and a half hours trying to calculate split times without knowing for sure that the first activity was 82 minutes, not 75 or 90.

Device details: Garmin Venu, SW version 6.30, API version 3.2.6

Welcome to autumn

The first day of autumn has brought us lovely cool weather with even lovelier cool dewpoints. We expect similar weather through the weekend. I hope so; Friday I plan another marathon walk, and Saturday I'm throwing a small party.

Meanwhile, we have a major deliverable tomorrow at my real job, and Cassie has a routine vet check-up this afternoon. But with this weather, I'm extra happy that I moved my office to the sunroom.

End-of-summer reading

Only about 7 more hours of meteorological summer remain in Chicago. I opened my windows this afternoon for the first time in more than two weeks, which made debugging a pile of questionable code* more enjoyable.

Said debugging required me to put these aside for future reading:

Finally, one tiny bit of good news: more Americans believe in evolution than ever before, perhaps due to the success of the SARS-CoV-2 virus at evolving.

Goodbye, Summer 2021. It's been a hoot.

* Three guesses who wrote the questionable code. Ahem.

Facing limitations of security software

Via Bruce Schneier, researchers have developed software that can bamboozle facial-recognition software up to 60% of the time:

The work suggests that it’s possible to generate such ‘master keys’ for more than 40% of the population using only 9 faces synthesized by the StyleGAN Generative Adversarial Network (GAN), via three leading face recognition systems.

The paper is a collaboration between the Blavatnik School of Computer Science and the School of Electrical Engineering, both at Tel Aviv University.

StyleGAN is initially used in this approach under a black box optimization method focusing (unsurprisingly) on high dimensional data, since it’s important to find the broadest and most generalized facial features that will satisfy an authentication system.

This process is then repeated iteratively to encompass identities that were not encoded in the initial pass. In varying test conditions, the researchers found that it was possible to obtain authentication for 40-60% with only nine generated images.

The paper contends that ‘face based authentication is extremely vulnerable, even if there is no information on the target identity’, and the researchers consider their initiative a valid approach to a security incursion methodology for facial recognition systems.

Hey, humans have evolved for 20,000 years or longer to recognize faces, and we make mistakes all the time. Maybe security software just needs more time?
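The iterative process the researchers describe amounts to a greedy covering strategy: generate candidate faces, keep whichever one matches the most identities you haven't covered yet, and repeat. Here's a toy sketch of that loop in Python. Everything in it (the match probability, the candidate counts, the identity pool) is invented for illustration; no actual face generation or recognition is involved:

```python
import random

random.seed(0)

# Hypothetical stand-in: 1,000 enrolled identities, and each candidate "face"
# is represented only by the set of identities a matcher would accept it for.
NUM_IDENTITIES = 1000
identities = range(NUM_IDENTITIES)

def generate_candidates(n, match_prob=0.08):
    """Stand-in for sampling faces from a generative model and scoring them
    against a black-box matcher. Each candidate matches ~8% of identities."""
    return [frozenset(i for i in identities if random.random() < match_prob)
            for _ in range(n)]

def build_master_set(num_keys=9, candidates_per_round=50):
    """Greedy sketch of the 'master faces' idea: each round, keep the
    candidate that matches the most not-yet-covered identities."""
    covered = set()
    master_faces = []
    for _ in range(num_keys):
        candidates = generate_candidates(candidates_per_round)
        best = max(candidates, key=lambda c: len(c - covered))
        master_faces.append(best)
        covered |= best
    return master_faces, covered

faces, covered = build_master_set()
print(f"{len(faces)} faces cover {len(covered) / NUM_IDENTITIES:.0%} of identities")
```

Even this crude version shows why nine images go a long way: each new face only has to pick up the identities the previous ones missed, so coverage compounds quickly.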

Welcome to August

While I look out my hermetically-sealed office window at some beautiful September weather in Chicago (another argument for working from home), I have a lot of news to digest:

And finally, Jakob Nielsen explains to web designers as patiently as possible why pop-ups piss off users.

We're dumb, but we're not that dumb

Two sad-funny examples of how, nah, we're exactly that dumb. The first, from TDWTF, points out the fundamental problem with training a machine-learning system how to write software:

Any ML system is only as good as its training data, and this leads to some seriously negative outcomes. We usually call this algorithmic bias, and we all know the examples. It's why voice assistants have a hard time with certain names or accents. It's why sentencing tools for law enforcement mis-classify defendants. It's why facial recognition systems have a hard time with darker skin tones.

In the case of an ML tool that was trained on publicly available code, there's a blatantly obvious flaw in the training data: MOST CODE IS BAD.

If you feed a big pile of Open Source code into OpenAI, the only thing you're doing is automating the generation of bad code, because most of the code you fed the system is bad. It's ironic that the biggest obstacle to automating programmers out of a job is that we are terrible at our jobs.

I regret to inform the non-programmer portion of the world that this is true.

But still, most of the world's bad code isn't nearly as bad as the deposition Paula Deen gave in her harassment suit in May 2013. This came up in a conversation over the weekend, and the person I discussed this with insisted that, no, she really said incredibly dumb things that one has to imagine made her attorney weep. She reminds us that the Venn diagram of casual bigotry and stupidity has a large overlapping area labeled "Murica."

Just wait for the bit where the plaintiff's attorney asks Deen to give an example of a nice way to use the N-word.

I will now continue writing code I hope never winds up in either a deposition or on TDWTF.