Josh Marshall points out that the harm Facebook causes comes from its basic design, making a quick fix impossible:
First, set aside all morality. Let’s say we have a 16-year-old girl who’s been doing searches about average weights, whether boys care if a girl is overweight and maybe some diets. She’s also spent some time on a site called AmIFat.com. Now I set you this task. You’re on the other side of the Facebook screen and I want you to get her to click on as many things as possible and spend as much time clicking or reading as possible. Are you going to show her movie reviews? Funny cat videos? Homework tips? Of course not. If you’re really trying to grab her attention you’re going to show her content about really thin girls, how their thinness has gotten them the attention of boys who turn out to really love them, and more diets. If you’re clever you probably wouldn’t start with content that’s going to make this 16-year-old feel super bad about herself because that might just get her to log off. You’ll inspire or provoke enough negative feelings to get clicks and engagement without going too far.
This is what artificial intelligence and machine learning are. Facebook is a series of algorithms and goals aimed at maximizing engagement with Facebook. That’s why it’s worth hundreds of billions of dollars. It has a vast army of computer scientists and programmers whose job it is to make that machine more efficient. The truth is we’re all teen girls and boys about some topic. Maybe the subject isn’t tied as much to depression or self-destructive behavior. Maybe you don’t have the same amount of social anxiety or depressive thoughts in the mix. But the Facebook engine is designed to scope you out, take a psychographic profile of who you are and then use its data compiled from literally billions of humans to serve you content designed to maximize your engagement with Facebook.
Put in those terms, you barely have a chance.
He goes on to draw a comparison between Facebook's executives and Big Tobacco's, circa 1975:
At a certain point you realize: our product is bad. If used as intended it causes lung cancer, heart disease and various other ailments in a high proportion of the people who use the product. And our business model is based on the fact that the product is chemically addictive. Our product is getting people addicted to tobacco so that they no longer really have a choice over whether to buy it. And then a high proportion of them will die because we’ve succeeded.
So what to do? The decision of all the companies, if not all individuals, was just to lie. What else are you going to do? Say we’re closing down our multi-billion dollar company because our product shouldn’t exist?
You can add filters and claim you’re not marketing to kids. But really you’re only ramping back the vast social harm marginally at best. That’s the product. It is what it is.
Yesterday's six-hour reprieve from Facebook seems to have hurt almost no one. The jokes started right away, about how anti-vaxxers could no longer "do research" and how people had started reading again. I didn't even notice until I read that it had gone offline, because I had too much work to do. So maybe that's what regulators should do: limit the company to 8 hours a day or something. What a thought...