I’m Sorry Mr. Zuckerberg, But You Are Wrong


It is heartening to see you begin to grapple with your role in the election. We are all doing the same. And like us, you are going through multiple stages of grief, each with its own set of cognitive biases. If Buster Benson’s handy cheat sheet is to be believed, you are going through a modest version of the “blame others” heuristic. Or, at least, its benevolent cousin, “It’s not my fault.”

I know you’re a good person. We have many mutual friends and they all say great things about you. But you are profoundly mistaken in your interpretation of Facebook’s role in this election.

There’s plenty of blame to go around. I don’t believe Facebook is responsible for the election. I believe it deserves maybe 1–2% of the blame. But as we’ll see, that is more than enough.

(To be fair, in re-reading your post, I notice you didn’t ever come out and say Facebook didn’t deserve part of the blame. Like I said, people tell me you’re a good guy. I suspect you know that Facebook does bear some responsibility after all.)

A Little Goes a Long Way

This election was extraordinarily close. Within half a point. Right off the bat, even if you’re bad at math, you’ll see a similarity here to the “more than 99% accurate” quote. The truth is at the margins.

For the past three years, I’ve been reading all the economic and academic literature around advertising, propaganda and marketing. Every empirical study I could find: from behavioral economists like Kahneman, to the research marketing behemoths like Procter & Gamble commission, to wonky math nerds sequestered away in ivory towers with coveted datasets. Going back to the 1700s. Time and time again, we hear the same refrain: this stuff works because it works on all of us just a little bit. We may not even feel it working — which is why, when asked after absorbing a marketing assault, the average person will tell you their preferences between two products they don’t even care about — say, Coke or Pepsi — haven’t changed. But those preferences have changed — on the order of 1–2%. Now, this doesn’t make a huge difference in the soda consumption habits of any one individual, but it makes a huge difference at scale. It’s not hard to see how, in this election, the same forces are at play.
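To make that arithmetic concrete, here is a back-of-the-envelope sketch in Python. The turnout figure and the persuasion rate are illustrative assumptions of mine, not numbers pulled from any particular study:

```python
# Back-of-the-envelope: how a small persuasion effect compares to a
# razor-thin margin. Every number here is an illustrative assumption.

total_voters = 135_000_000   # assumed turnout, roughly 2016-scale
margin = 0.005               # "within half a point"
persuasion_rate = 0.015      # the 1-2% effect the literature keeps finding

# Flipping half the margin's worth of voters closes the gap entirely.
votes_needed = total_voters * margin / 2
votes_shifted = total_voters * persuasion_rate

print(f"Votes needed to close a half-point gap: ~{votes_needed:,.0f}")
print(f"Votes a 1.5% persuasion effect moves:   ~{votes_shifted:,.0f}")
```

Even under these rough assumptions, the pool of persuadable votes dwarfs the number that decided the result.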

If you’re curious about this stuff, I refer you to the work of John Philip Jones and his mentor Andrew Ehrenberg. Here is a good primer written by Jones, who has written several great empirical books on the topic. Historically this stuff has been confined mostly to academia and the research departments of the huge consumer goods companies. But a recent book by Ehrenberg disciple Byron Sharp, How Brands Grow, has brought it to a larger audience in the ad world. I strongly recommend it. All of this, by the way, is based on empirical evidence. Studies. Many studies. Replicable studies. We’re not in the world of theories here. This is not guru talk. This is math. (My favorite, by the way, is the work of Colin McDonald, which is truly mind-blowing, albeit in need of an update for the digital age. There’s money to be made in that, but that’s a topic for another time.)

(Edit: A friend asked for a link to Colin McDonald. Here it is. This stuff is dense in its original form, which is why I’ve been working on an overview book. But it would be awesome if suddenly there was a run on Colin McDonald books on Amazon 37 years later. Lest ye think this is a one-off study, this book by Jones surveys innumerable single-source studies and meta-studies confirming McDonald’s basic findings.)

Historically, there have been reasons to doubt that Ehrenberg’s theories apply to political campaigns. One reason is that the theories find advertising works cumulatively, over time, and it’s hard to make a big difference in a short time span; thus the tactics don’t work as well in the political sphere. The empirical research around marketing (and let’s be honest — fake articles are marketing) and politics has been less concrete, although on balance it shows similar results. But three things are going on to make Ehrenberg’s theories more relevant:

  1. Campaigns are growing longer, entering well into the realm where Ehrenberg/Sharp/Jones’ work applies — years, not months
  2. Budgets have reached — and surpassed — the levels of major CPG brands
  3. The marketing against Hillary Clinton has been going on a long, long time (this is a topic for another time)

It’s also worth noting that Facebook’s viral and social mechanisms allow the amplification of content well beyond its original media budget, thus exacerbating the issue. This one point, alone, after you’ve read the research, should bring about some serious soul searching.

The upshot of all this is to say that a 1–2% swing in election results, driven by the assault of that <1% of fake news articles on Facebook, is an entirely plausible outcome. We can’t know for sure, of course, but there is real, hard science to back up this theory.

Not Just The News

There’s a subtle shift in your article at the beginning of paragraph two. It’s really well done! You have really good PR people! (No, really!) Paragraph two starts: “After the election, many people are asking whether fake news contributed to the result, and what our responsibility is to prevent fake news from spreading.” Well, yes, many people are asking whether fake news contributed to the result. But even more people are asking whether Facebook contributed to the result — by fake news, by its lax policing of racist memes, by its fundamental original lie that only real people are on Facebook when it is overrun with fake accounts. Fake news is only one of the problems with Facebook. A thousand “Hillary Clinton is a crook” “opinion pieces” probably don’t fall into your count of fake news per se (even though, it’s worth pointing out, their very title is a lie and would constitute libel). And opinion pieces are much, much harder to police. But we should never forget that hate speech, offensive memes, and hit pieces are the bread and butter of Facebook politicking, and they are, presumably, not counted in the rosy 99% figure. There are legitimate critiques of Facebook’s policies and implementations here. Like you said, “there is more we can do here.” But since the “embrace” (the cold, cold, deathly embrace) of news is a recent Facebook endeavor, we’ll confine ourselves to the news today.

99% is a Complete Failure in News

Moving to news, then, let’s state the obvious: saying you’re 99% accurate in the news is a complete failure. Can you imagine the New York Times, the Wall Street Journal, CNN or, hell, even Breitbart saying, “hey, only 1% or so of our stories are wrong”? Even one incorrect news story is a terrible tragedy — especially at scale. Considering there are tens of thousands of news stories passing through Facebook, at 1% we are talking about hundreds of incorrect news stories, at a minimum. Countries have gone to war on less. Countries have executed people on less.

Think of the complete failure to which the New York Times admitted with the discovery that one of its journalists, Jayson Blair, committed fabrications and plagiarism in thirty-six articles over the course of a year. They called the incident “a profound betrayal of trust and a low point in the 152-year history of the newspaper.” Thirty-six! That wouldn’t have constituted 1% of the Times’ stories in a week. Furthermore, Jayson Blair didn’t have a political agenda! Yet the Times still published a six-thousand-word mea culpa. Worth noting: the section on what they planned to do to fix the problem was 200 words longer than your non-apology blog post. This is the news. A 99% success ratio is shit.

1% is way more than 1%

The number of news stories is only part of the equation. I don’t think it’s a coincidence that you didn’t talk about the propagation rate of fake news stories vs. real ones. BuzzFeed reports that a pro-Trump fake news story received 480,000 shares, compared to the 175,000 shares of a real, and very important, New York Times story — the one that broke the news about Trump’s taxes. Fake news travels way faster than real news because it’s incredible, because it seems like it would be a big deal if it were correct (“huge if true”), and because it confirms our biases.

The Idiocy of Crowds

Thus far, all of your proposed solutions to these very real problems are predicated upon a still-unproven hypothesis: That communities at scale can police themselves (“We have already launched work enabling our community to flag hoaxes and fake news…”). This is not true, and has never been true. Just because Silicon Valley has desperately wanted to believe for twenty years that communities can self-police does not make it true. Point to one example where this has worked. Twitter is a cesspool — one which you are rapidly joining. Google gave up long ago on simple “wisdom of the crowds” algorithms around its search rankings and heavily skews the algorithm against spammers, cheats and other anomalies (and God knows what else, but that’s a different story). At Tumblr, we employed real, live humans to edit. At scale. It worked. Did fake news still appear? Yes. Did it leverage our “curating” and cause us as a platform to help falsities spread? No.

More relevantly — there is no reliable news organization out there that relies solely on community moderation. Reddit, perhaps, is the closest, but a) it barely works and has problems similar to yours, b) it relies not just on the community as a whole but on empowered moderators, c) it has nowhere near Facebook’s reach (your size demands a more responsible approach) and d) it doesn’t just shove these stories in your face when you’re trying to talk to your friends. You have to go look for them, or at least be interested in the topic. It would seem to me that interest in a topic connotes some level of understanding of it.

The Idiocy of Algorithms

Wait. Do you use “the wisdom of the crowds” or algorithms? We don’t even know. You’ve been very taciturn about how stories actually get propagated. What is clear, though, is that you certainly don’t use expert editors. Your news editing is catastrophically bad. Right now the trending topics are what is clearly an ad (“2018 Buick Regal Wagon”) and two celebrities I have zero interest in and who aren’t particularly famous. As usual, both are trending because a celebrity Did Something.

Now, I like celebrity gossip as much as the next person. I love Britney Spears, and I want to see her succeed in life. I’ve seen her live like five times. I am always up for a Britney story. So are millions of other people. But you know what? Just because I like Britney news doesn’t mean I want it shoved in my face. But more importantly, just because 1 million people liked a Britney story doesn’t mean that it’s a better story or a more important one than a post about Trump’s taxes. This is called editing. There is a reason the Times has a celebrity editor and an international editor — because each topic needs to be edited differently. And they give each section placement commensurate with its importance.

What makes editing so hard for you? Scale? Nonsense. Your scale in the US is not bigger in any relevant way than NBC’s and they manage just fine (okay, well, they manage better than you).

The only reason scale is a problem for you is that you want to remove the humans and give personalized news at scale, which is exactly the problem when it comes to news about society as a whole. Have you ever stopped to ask whether stories that affect society as a whole should be personalized at all? What possible good comes of this?

And if there is some good in personalized societal news at scale, and algorithms are the only way to achieve it (by no means a given), your algorithms have failed. They’ve failed because, by all appearances, they are ridiculously rudimentary. I hesitate to even write this because I believe, ultimately, that algorithms are the wrong way to go, and, knowing what I know of Facebook, I assume that once you get past the denial phase you’ll decide to double down on algorithms as the solution by putting a ton of resources into making them better. Hell, I’d bet a good chunk of change that despite your public protestations you are already doing this. But okay. Let’s explore.

Does each Facebook user get treated equally when sharing a story on any topic? Why? Why should, at the very least, our profession, academic experience, and general expertise on the topic at hand (inferred through your much-ballyhooed data) not be used in the algorithm? Why should a story about, say, tariffs and trade be shared with equal weight by a trade expert, a manufacturer, an economist and my bully from junior high?

(update/edit: of course, the bully from junior high, in this example, might well be a victim of some trade deal, and his voice should be heard. But we are not talking here about my bully’s experiences, we’re talking about news he shares. There is a difference.)
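To illustrate what that might look like, here is a purely hypothetical sketch in Python. We don’t know how Facebook’s actual ranking works; the profiles, topic labels, and weights below are all invented for illustration:

```python
# Hypothetical sketch: weight each share by the sharer's inferred expertise
# on the story's topic, instead of counting every share equally.
# All profiles, topics, and numbers here are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class Share:
    user: str
    expertise: dict = field(default_factory=dict)  # topic -> score in [0, 1]

def weighted_share_score(shares, topic, baseline=0.1):
    """Sum the shares, discounting each by topical expertise.

    `baseline` keeps non-experts from counting for nothing at all."""
    return sum(baseline + (1 - baseline) * s.expertise.get(topic, 0.0)
               for s in shares)

shares = [
    Share("trade_expert", {"tariffs": 0.9}),
    Share("manufacturer", {"tariffs": 0.7}),
    Share("junior_high_bully"),  # no inferred expertise on tariffs
]

# Equal weighting counts 3 shares; expertise weighting counts ~1.7.
print(len(shares), round(weighted_share_score(shares, "tariffs"), 2))
```

The point is not this particular formula; it is that the signal exists and, by all appearances, goes unused.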

In the real world, sure, I like hearing the news stories that my friends read, but I don’t care what their friends read. And I would never rely only on one or two over-eager friends to feed me all of my news. Offline, historically, social was only ever a small part of our news consumption patterns and you’ve given us no reason why that should change.

Following a news organization on a social network is a good idea on paper, sure. But you mucked it up with the algorithm. I can’t actually follow The New York Times or Buzzfeed or Fox News on Facebook. “Follow” is a misleading term. All I can do is click a button that says “hey, if one of their stories is super popular, maybe think about showing it to me.” It’s as if I subscribed to a newspaper and, rather than being able to read the whole thing, let my crazy aunt Mable cut it up, annotate it, and show me only the parts she chose.

On top of that, a news organization’s prowess in mastering your ridiculous, oblique, impenetrable algorithms should not be the basis on which a story succeeds or fails. Even Google has, more or less, attempted to fix this problem.

In any case, right now the approach is broken. Either using algorithms is a flawed approach fundamentally, or your own algorithms are flawed.

Can I suggest a Silicon Valley approach to the topic, if you’re not feeling actually bold? A/B test it. Hire a few good editors — one for each “type” of news — and work on your algorithms simultaneously. Set some real, serious, socially responsible success criteria — perhaps a quiz on knowledge of the facts, given to a small sample of users before and after. See who wins. And share the results with us.
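For what it’s worth, scoring such an experiment is straightforward. Here is a minimal sketch, assuming the before-and-after quizzes have already been run; the cohort scores are invented for illustration:

```python
# Minimal sketch of scoring the proposed A/B test: compare factual-knowledge
# quiz scores for users served human-edited news vs. algorithmic news.
# The scores below are invented; a real test needs proper sampling and a
# pre-registered analysis plan.

from math import sqrt
from statistics import mean, stdev

editor_cohort    = [7, 8, 6, 9, 8, 7, 8, 9]   # quiz scores out of 10
algorithm_cohort = [5, 6, 7, 5, 6, 4, 6, 5]

def welch_t(a, b):
    """Welch's t-statistic for two independent samples."""
    var_a, var_b = stdev(a) ** 2 / len(a), stdev(b) ** 2 / len(b)
    return (mean(a) - mean(b)) / sqrt(var_a + var_b)

print(f"editors: {mean(editor_cohort):.2f}  "
      f"algorithm: {mean(algorithm_cohort):.2f}  "
      f"t = {welch_t(editor_cohort, algorithm_cohort):.2f}")
```

Whichever side wins, publish the methodology along with the results.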

The News is not your Playground

It’s no secret that the news business is economically challenged, and it has been since long before Facebook came around. Nonetheless, I believe that Facebook is partially responsible — though not as much as Google — for the decimation of news income and a fundamental change in how people consume news. You’ve worsened existing trends, and done it without even Google’s — Google’s! — level of editorial judiciousness. While some of your products nominally endeavor to help news organizations, even these products encourage the selective consumption of news and reduce the ability of a consumer to look at the news in a holistic way. And you’ve shown zero interest in fixing that problem and helping media organizations, or in any of the social ramifications.

In short, you’ve set foot into being a player in the news media with zero interest in actually helping the news media, or in the social responsibilities that come with it. Now, sure. You share ad revenue. But only popular stories garner ad revenue. You’ve aggravated the fundamental problem with internet news: only the most sensationalist stories generate the revenue. In the old days, whether the income came from subscriptions or ads, revenue to a paper was revenue to a paper. Sure, their research departments knew that some stories were being read more than others — I admit, there are days when I’ll pick up a paper and only read the celebrity stories. But I subscribe to that paper, and I pick up that copy, because I believe in that paper. You could have helped fix this on the internet, but you didn’t. You made it worse. And none of this explains, in any way, your desire to set foot in the arena of news in the first place.

It is, perhaps, not your fault, but you’re now part of the problem instead of the solution. I thought you guys were supposed to be innovative.

You’ve chosen to take a step into news — one which the world didn’t want or need — and one which seems to be driven primarily by your current (slowly improving!) understanding of the advertising industry, coupled with a desire to increase profits by encouraging people to consume the news on your platform instead of elsewhere. Not that anyone ever asked for that.

In doing so, it seems that you operated under the assumption that, somehow, Facebook’s very nature would doubtless improve the experience. It does not seem an unreasonable supposition on the surface — why wouldn’t my friends be able to tell me about interesting news stories? But in the end, either the very premise was flawed, or the implementation was faulty.

