Facebook’s After-the-Fact Oversight
I wanted to like Kara Swisher’s
piece in The New York Times about Facebook’s attempt to wrestle with its demons, but I can’t. It feels too much like self-delusion.
To cut to the chase, Facebook announced it is forming an oversight board that eventually will have about 40 members responsible for policing its domain and for reducing or even eliminating the fake news and propaganda the service has carried at least since the 2016 election cycle.
I am not impressed.
I am not swayed because it appears to me that the company can’t or won’t come to terms with a valid definition of the problem. Facebook has a quality control problem, and building structures that chase after bad actors and their content once they’ve had a chance to pollute social media doesn’t work.
If Facebook were a manufacturer shipping defective products, we would all quickly conclude that the way to improve product quality is to build it in, not to bolt it on after the fact, which is exactly what an oversight board would do.
Before and After
The auto industry of the 1970s provides all the case study information you need.
American car companies tried to improve the quality of their products after they were built, shunting aside cars that needed rework.
The Japanese, in contrast, broke down the assembly process and tried to improve every aspect of it to drive error out and build quality in.
During the 1970s and 1980s, Detroit lost roughly half its market share to foreign competition that just built better cars, and it has not recovered.
Hands Off and Hands Out
Social media is a bit different. Bad actors are building defects into social media so that any policing strategy or oversight board always will be a day late and a dollar short.
Social media companies, and Facebook in particular, need to face the fact that products originally intended for communication among private individuals have been adopted by government and industry for other purposes, because they represent a commoditization of other modes of communication. They’re cheap and effective, and nothing attracts government and business like cheap and effective.
Right now, you could call yourself Neil Armstrong, launch a page claiming the moon landing was faked, and be off to the races.
Social media companies want to wash their hands of responsibility for misinformation while still capturing revenue from it. That is a two-tier business model masquerading as one: They are both platform and apps.
Clean Out the Cesspool
Solving the social media problem requires a model that separates ownership and requires commercial and government users to demonstrate a fundamental understanding of the tools and resources, including their proper use. Penalties for misusing the services would be a big plus.
Lots of people will call this draconian and a violation of imagined rights, but it is the way we’ve regulated other businesses for a long time.
Plumbers, electricians, beauticians, doctors, dentists, lawyers and members of many other professions have to sit for licensing exams before they’re allowed to practice on the public. Licensing also sets up a reasonable malpractice regime.
Social media use by commercial and government entities should face the same regulation.
Regulating at the user level would do a lot to reduce or even eliminate bad actors and misinformation. It effectively would build quality into an industry that began as a good idea but increasingly has become a cesspool with geopolitical ramifications.
So, I am not a fan of an oversight board for Facebook — and with due respect, Kara, you should know better.