The NewCo Daily: Today’s Top Stories
Facebook is giving its content-moderation effort a big injection of artificial intelligence to try to stem the flood of “extremist” material on the social network (The New York Times). For those outraged that Facebook and other online platforms haven’t done enough to counter terrorist recruiting materials and organizers, this will be welcome news. But it raises a host of dilemmas that we fear the company isn’t ready to resolve, despite VP Elliot Schrage’s admission that this is one of the “hard questions” Facebook now confronts.
“We agree with those who say that social media should not be a place where terrorists have a voice,” two Facebook managers wrote, explaining company policy. Their post names ISIS and Al Qaeda as examples of groups whose reach they aim to limit. But it barely acknowledges the harder problems: defining “terrorism” and “terrorist content” in a way that is rational, appropriate, and universal rather than shorthand for “Muslims who bomb people,” and cleanly distinguishing posts that describe terrorist acts from those that promote them.