The Automatic Weapons of Social Media

It’s time for the platforms to admit their response is flawed, and work together to protect our civil discourse.

Image: Flickr

This is not an easy essay to write, because I have believed that technology companies are a force for good for more than 30 years. And for the past ten years, I’ve been an unabashed optimist when it comes to the impact of social platforms like YouTube, Twitter, and even Facebook. I want to believe they create more good than bad in our world. But recently I’ve lost that faith.

What’s changed my mind is the recalcitrant posture of these companies in the face of overwhelming evidence that their platforms are being intentionally manipulated to undermine our democracy. This is an existential crisis, both for civil society and for the health of the businesses being manipulated. But to date the response from the platforms has been the equivalent of politicians’ “hopes and prayers” after a school shooting: Soothing murmurs, evasion of truly hard conversations, and a refusal to acknowledge the core problem: Their automated business models.

I’m not advocating the elimination of those models, any more than any sane person would advocate the elimination of guns. I am, however, arguing for curbs on the most destructive elements of those models: The machines which create the carnage. In the case of guns, it’s weapons of warfare like the AR-15. In the case of the platforms’ models, it’s self-service dashboards which allow anyone with rudimentary knowledge of a platform’s engagement algorithms to leverage that platform’s APIs to target public discourse.

Case in point: Russian-linked accounts pivoting to gun issues in the wake of the Parkland shootings this week. That these bots and trolls are actively sowing discord is not in dispute — read this from NPR, or this from Wired. The top trending hashtags over the past 24 hours were all driven by identified propaganda accounts. Independent researchers have irrefutable proof that these actors are leveraging social media — in this case Twitter — to force divisive and often false narratives into our public discourse. (The same is true of #BlackPanther propaganda, yet another proof point of my argument.)

Why, when faced with exactly these facts in the past, have companies taken the stance that “hoaxes happen, but they are ultimately discredited by our user community”?

Perhaps it’s because the complexity and scope of these platforms are beyond comprehension or control. This is the only non-cynical conclusion I can draw, and when it comes to advertising, I’ve even argued as much. “We built this thing, it’s super complicated, and people are outsmarting us, the creators, on our own platform. It’s out of our hands!”

But I don’t believe that explanation when it comes to dealing with bots and trolls. Information warfare leveraging social media is sophisticated and nuanced, as this Molly McKew thread superbly demonstrates. But does that mean it’s beyond the abilities of the smartest engineers and product managers in the world?

No, a more reasonable explanation for why Twitter, Facebook, and Google have not taken a more ambitious approach to stopping abuse of their platforms is that they are afraid to do so. Taking strong action would place limits on the driving force of their growth and their profits: Automation. And it would require that they acknowledge that their working interpretation of the law which protects them from liability, specifically Section 230 of the Communications Decency Act, is flawed. I’ve broken down the issues behind 230 elsewhere, so let me dig into the subject of automation.

Just like guns, social media platforms can be automated — they all have APIs that allow accounts to post content in an automated fashion. And just like guns, when accounts are automated, they can do significant damage.
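To make the point concrete, here is a minimal sketch of what that automation looks like, assuming the Tweepy client library for Twitter’s API; the credentials, messages, and schedule are hypothetical placeholders, not any real campaign’s code.

```python
# A minimal sketch of automated posting via a platform API (Tweepy assumed).
# All credentials and content below are hypothetical placeholders.
import time
import tweepy

client = tweepy.Client(
    consumer_key="...",
    consumer_secret="...",
    access_token="...",
    access_token_secret="...",
)

# A few lines are enough to post on a schedule, around the clock.
talking_points = [
    "Example message A #trendinghashtag",
    "Example message B #trendinghashtag",
]

while True:
    for message in talking_points:
        client.create_tweet(text=message)  # one API call per post
    time.sleep(3600)  # repeat every hour, indefinitely
```

The barrier to entry is that low: multiply the loop across hundreds of accounts and you have a coordinated amplification network.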

There are plenty of good use cases for automation — publications posting their articles, developers creating chatbots, artists pranking society, advertisers creating customized messaging for specific audiences. But when malicious actors get their hands on the tools, divisive carnage ensues.

The results seriously threaten democracy. Bots have targeted the FBI and Robert Mueller, and, aided by a President well aware of where his support truly comes from, have driven significant swings in public opinion, corrupting the very essence of our nation’s rule of law.


So what can be done? Understanding the problem is hard, sure. Here’s one piece of it: a deep dive into the links shared by Russian-linked accounts on Twitter by Jonathan Albright. This is crucial, urgent, democracy-saving work. Why aren’t Twitter, YouTube, and Facebook doing it?

Here’s my suggestion: Platform companies should acknowledge the scope of this problem, admit their past actions have not been enough, and convene a cross-sector, transparent, and urgent working group committed to a solution. Fifteen or so years ago our industry brought spam to heel by sharing data, insights, and best practices. It’s how progress is made in a heterogeneous ecosystem. Have we forgotten how to do that? When companies like Facebook unilaterally state “we’re on it, trust us” — well, that’s not good enough anymore. No single company can — or should — solve this problem alone.

Second, the companies should immediately issue a call for proposals focused on product designs that address the problem. I’ve got one for them already: Flag every automated account that has been identified as suspicious with a visual cue that lets users immediately understand that the information they are consuming is potentially corrupt. Accounts that prove they are not suspicious can be vetted and exempted from the flag.

It’s not rocket science to identify and flag malicious automated accounts; they act on pretty simple logic. Researchers have already identified thousands of them, yet they continue unabated.
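To give a sense of how simple that logic is, here is a sketch of the kind of behavioral heuristics researchers use to surface suspicious automation. The signal names and thresholds are illustrative assumptions on my part, not any platform’s actual policy or any researcher’s published model.

```python
# A sketch of simple heuristics for flagging likely-automated accounts.
# Thresholds are illustrative assumptions, not real platform policy.
from dataclasses import dataclass

@dataclass
class Account:
    tweets_per_day: float            # sustained posting rate
    account_age_days: int            # how recently the account was created
    duplicate_ratio: float           # share of posts that are near-duplicates
    follower_following_ratio: float  # followers divided by accounts followed

def looks_automated(a: Account) -> bool:
    """Count how many common bot patterns an account matches."""
    signals = 0
    if a.tweets_per_day > 72:             # a post every 20 minutes, nonstop
        signals += 1
    if a.account_age_days < 30:           # brand-new account pushing volume
        signals += 1
    if a.duplicate_ratio > 0.5:           # mostly repeated or templated content
        signals += 1
    if a.follower_following_ratio < 0.1:  # follows far more than it is followed
        signals += 1
    return signals >= 2  # two or more signals earns a visible flag
```

Real detection systems are more sophisticated than this, of course, but the point stands: the companies that built these platforms have the data and the talent to do far better than a four-line heuristic, and they have chosen not to.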

I’ve considered the arguments from Facebook, Google, Twitter, and others — that they are neutral platforms for speech, that they cannot become arbiters of what people say, that they cannot tell people how to speak, and that they cannot determine or judge the intent of a person’s speech. When it comes to automated accounts, I no longer believe those arguments hold water, given the damage their platforms are creating.

Automated machines in the hands of malicious actors are not only destroying our rational discourse, they are destroying the future value of the companies which host them. This is existential, people. It’s time to act.

Update: The Mueller indictments, which came out immediately after this column was published, certainly add fuel to these thoughts.


We’ll be discussing these issues in detail at the Shift Forum in less than two weeks. We’ve got 15 tickets left. Join us!
