Terrorism, super AI risk, and how Sam Harris became Sam Harris
I recently recorded a luxuriously unhurried conversation with author, neuroscientist, and public intellectual Sam Harris. Sam first entered the public eye with the release of his 2004 bestseller The End of Faith. A rumination on 9/11 and an endorsement of atheism (though that word is used precisely once in its text), The End of Faith peaked at #4 on the New York Times bestseller list. Sam’s subsequent bestsellers have included Letter to a Christian Nation, Waking Up: A Guide to Spirituality Without Religion, and the collaboration Islam and the Future of Tolerance.
To access our interview, either:
Type “After On” into your podcast app’s search field, or . . .
Click the “play” button at the top of this page, or . . .
Click here, then click the blue “View on iTunes” button in the upper left corner of the page (requires iTunes, of course).
Facebook is giving its content-moderation effort a big injection of artificial intelligence to try to stem the flood of “extremist” material on the social network (The New York Times). For those who are outraged that Facebook and other online platforms haven’t done enough to counter terrorist recruiting materials and organizers, this will be welcome news. But it raises lots of dilemmas for Facebook that we fear the company isn’t ready to resolve, despite VP Elliot Schrage’s admission that this is one of the “hard questions” the company now confronts.
“We agree with those who say that social media should not be a place where terrorists have a voice,” two Facebook managers wrote, explaining company policy. Their post names ISIS and Al Qaeda as examples of groups they’re aiming to limit. But it barely acknowledges the larger challenges: defining “terrorism” and “terrorist content” in a way that is more rational, appropriate, and universal than simply “Muslims who bomb people,” and neatly distinguishing between posts that describe terrorist acts and those that promote them.