The NewCo Daily: Today’s Top Stories
Facebook is giving its content-moderation effort a big injection of artificial intelligence to try to stem the flood of “extremist” material on the social network (The New York Times). For those outraged that Facebook and other online platforms haven’t done enough to counter terrorist recruiting materials and organizers, this will be welcome news. But the move raises dilemmas we fear the company isn’t ready to resolve, despite VP Elliot Schrage’s admission that this is one of the “hard questions” the company now confronts.
“We agree with those who say that social media should not be a place where terrorists have a voice,” two Facebook managers wrote, explaining company policy. Their post names ISIS and Al Qaeda as examples of groups whose reach they aim to limit. But it barely acknowledges the larger challenges: defining “terrorism” and “terrorist content” in a way more rational, appropriate, and universal than just “Muslims who bomb people,” and cleanly distinguishing posts that describe terrorist acts from those that promote them.
When Facebook’s social-networking predecessor Friendster had its heyday, snarky users made fun of its binary thinking, which forced you to classify every other human being as “friend” or “not friend.” Facebook’s Boolean terrorism policy today looks just as blinkered. It’s built on an assumption that AI and people, working together, can cleanly classify all content as “terrorist” or “not terrorist.” Global politics and human affairs don’t work that way. Much of the world, including the U.S. government, classified Nelson Mandela as a terrorist for much of his life.
You can empathize with Facebook’s leadership, to a point. Balancing user safety and free expression isn’t easy. But the company is getting a little old to still be displaying this level of naiveté. They just wanted to connect everyone on the planet! Who knew it would be so complicated?
More daily items: