I swear. If Silicon Valley had to invent a ballpoint pen, they’d say “it’s just really hard getting the ink to flow smoothly and at a consistent rate out of the pen. You don’t understand how hard it is.” They seem to be under the impression that anything not invented in Silicon Valley does not exist. They also seem to be under the impression that we haven’t been dealing with the nuisance of fake news for hundreds of years.
I am going to say this up front, because I have friends working on these problems: no one is saying the technical challenges to perfectly arbitrating truth and fiction are easy. I am not saying that. And there are good people — at Facebook and elsewhere — who are working very hard on this problem. I believe, however, that management is tying their hands, because they are only looking at a single solution set, and ignoring history. And, I believe, humans don’t expect perfect arbitration. What they expect is openness, context, and labeling, along with the neutering of even the most blatant, clear-cut cases of lying.
The term “yellow journalism” has a real past, a real history. That history is rooted in some of the very names that still dominate news in America: Hearst and, ironically, Pulitzer. Once upon a time in America, our mainstream news sources couldn’t be trusted much more than we can now trust our assorted social platforms. In fixing the situation, they didn’t just say “Hey, our bad. We will stop lying to you.” They had, after all, been saying that the whole time they had been lying to us. What they did, rather, was institute a set of policies, people, and procedures to fix the situation, and they did so openly.
In all my reading about fake news and the internet, this seems to be a key point that is missed, especially with the pushback from Silicon Valley on the topic of fake news. Here, for example, is a New York Times editorial from a tech journalist that says “hey what, really, is ‘fake’ and how do we expect Facebook to arbitrate this for us? And isn’t that a bad idea?” Okay, that was paraphrasing. So here’s a real quote: “No matter how many editors Facebook hired, it would be unable to monitor the volume of information that flows through its site, and it would be similarly impossible for readers to verify what was checked. The minute Facebook accepts responsibility for ferreting out misinformation, users will start believing that it is fact-checking everything on the site.”
These questions ignore the basic approaches that the media has been using to mitigate fake news for a hundred years now.
The mainstream media, of course, doesn’t have a hundred thousand people sending them news articles every hour that they have to sort through. But, as we shall see, their approaches to avoiding printing fake news (too often) are surprisingly applicable to Facebook.
Because despite all the complaints and “reality checks,” the media doesn’t actually avoid fake news at the editorial level. No editor is faced with a deluge of false stories, wherein their opinion and only their opinion is what matters in deciding if the fake gets published or not. The systems that people are implying social platforms would have to use to avoid fake news ignore a hundred years of solutions that the media world has developed.
- Stories are sourced, and double-sourced. When something cannot be confirmed by multiple parties, the story is either not printed, or the single-source tidbit of information is caveated and presented accordingly.
- The media publishes corrections. This is huge. Of even the most minor mistake. Those corrections columns that we make fun of because they fix some obvious, glaring error of fact in a story (“the Times regrets it got the name of the subject of the article wrong”) are there for more than our entertainment. The media corrects even the most minor inaccuracy because the whole point is to engender trust in people that they will correct anything, no matter how embarrassing. The media understands it will sometimes get things wrong. That is why it corrects everything it can.
- There is a public editor. The aforementioned Times opinion piece on Facebook, for example, was written by the wife of one of Zuck’s best friends, who also got rich off of Facebook. She also happens to be a tech writer. She mentioned way down in the piece that her husband worked at Facebook, but did not mention how close they were. I and several other people complained to the public editor. The public editor wrote a personal email back to me, and they published this story, critiquing their own moves.
- In any case, the story in the Times was an opinion piece. And it was marked as an opinion piece. And it was published in an opinion section. Media companies distinguish between opinion pieces and news, and they separate their editorial board from their news board. Now, there is often a risk of conflation here, but often the journalists of the very same paper write about that risk, and certainly the public editor does.
All of these solutions are available to social platforms, and none of them require a host of editors. Social platforms could:
- Favor content from publications that have a published code of ethics and standards that are independently verifiable.
- Employ a public ombudsman who is not beholden to the social platform’s management for what they say, and whose job is to write stories and address public concerns.
- Mark opinion pieces, and blogs, as opinion and blogs. Publications could conform to a certain set of standards, publishing opinion pieces tagged as opinion and news items tagged as news, and Facebook could verify that the publication uses the tags in a responsible manner. If not, the piece is marked as an opinion piece.
- Facebook could also enforce the codification of corrections — which any publication conforming to a code of ethics will publish — and develop technology that automatically appends corrections to the news pieces that are published on the site.
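To make the proposal above concrete, here is a minimal sketch of how publisher-supplied metadata could drive such labeling. Everything here is hypothetical — the field names, the verified-publisher list, and the labeling rules are illustrative assumptions, not any platform’s real API:

```python
# Hypothetical sketch: labeling stories from publisher-supplied metadata.
# Field names and rules are invented for illustration.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Article:
    publisher: str
    piece_type: str               # "news" or "opinion", as tagged by the publisher
    source_count: int             # independent sources confirming the story
    corrections: List[str] = field(default_factory=list)

# Publishers the platform has verified as having a published code of
# ethics and as tagging their pieces responsibly (hypothetical allow-list).
VERIFIED_PUBLISHERS = {"example-paper.com"}

def label(article: Article) -> List[str]:
    """Return the context labels a platform could append to a story."""
    labels = []
    if article.publisher not in VERIFIED_PUBLISHERS:
        # Unverified publishers get the least-trusted treatment.
        labels.append("unverified publisher: treated as opinion")
    elif article.piece_type == "opinion":
        labels.append("opinion")
    if article.source_count < 2:
        # Single-source stories are caveated, as newsroom practice dictates.
        labels.append("single-source report")
    # Automatically append any corrections the publisher has issued.
    labels += [f"correction: {c}" for c in article.corrections]
    return labels
```

The point is not the particular rules but that they run on metadata the publishers themselves provide, so no army of editors is required — only verification that publishers use the tags honestly.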
Fake Apples and Oranges
To some, we are talking apples and oranges: they view the problem as more fundamental and existential: what is fake news? Witness this Tweet from a Rolling Stone reporter that now has over 12,000 likes (it is also the gist of Lessin’s NY Times Editorial):
First, the policies implemented by major media publications do distinguish between these items, and the procedures noted above — to which most major publications conform — handle each of them in a different way (though I’m not sure what the difference between A and B is). There is nothing in the codes of ethics of major media publications that prohibits them from publishing a conspiracy theory — it is just duly noted as not having much evidence, or called what it is: a conspiracy theory.
It is also worth pointing out that we were not all bitching about fake news in 2012. There is a reason we’re bitching about it now: a new, virulent strain of it — one that does not fall into any “grey area” — is now rampant. The Pope did not endorse Donald Trump. There is no grey area here. This is not an opinion. This is not conjecture. An endorsement is a very specific thing, and it is done in public. You can just go ask the Pope’s press office! It’s easy. This sort of 100% pure fake news is a resurgent phenomenon. The social platforms are aiding it.
So, then, really what it comes down to are two very simple tactics that any social platform could implement, without hiring thousands of editors. Though, I should point out, there are perhaps 5,000 major publications on the entire planet, each with a couple of editors, and, say, 12,000 new employees at a company that made over seventeen billion dollars in profit would not be a huge burden. But let’s ignore that. Let’s stay lean. All any social platform would have to do, then, is:
- Implement the policies and procedures that any respectable media organization has implemented. Do it with technology. Mark opinions as opinions, have a published code of ethics, and favor those who also have a published code of ethics. Mark stories that are only verified by a single source — push the publishers to deliver this data technologically. They’re all in your playpen already and will play by your rules, especially if it helps their stories be seen. And employ a respected public editor who is independent and cannot be fired for their opinions.
- Beyond that, focus only on the low-hanging fruit: the stories that are 100% false and verifiably so. If someone says “Bill Clinton is trafficking in kiddie porn,” this, of course, can never be 100% verified (“when did you stop beating your wife?”). So it should be marked as an opinion.
We are adults. We don’t expect our social platforms to be a nanny state. We do expect them to not publish lies. Get us back to the state of the union of 2012 and I, for one, will be happy. But please don’t hide behind intentional obfuscation. And learn the lessons we learned from the Yellow Journalism period, and please, god, help us avoid having to reinvent the wheel.