Apple’s secrecy is a legendary and defining corporate trait. Like the quasi-government the company is increasingly becoming, it has an extensive program to fight leaks. We know that because, well, somebody leaked a recording of an hour-long presentation on Apple’s campaign (William Turton in The Outline). It turns out Apple employs a global team of leak-stoppers that includes former employees of the NSA, the FBI, the Secret Service, and branches of the U.S. military.
The purpose of all this secrecy, Apple execs insist, is “surprise and delight” among customers when they finally learn of some new Apple product or feature at the time of the company’s choosing. That kind of choreographed product launch has long been an Apple trademark, to be sure. But the company’s insistence on secrecy, like the inward-turning design of its gigantic new headquarters, underscores the increasingly insular nature of Apple’s culture.
Facebook is giving its content-moderation effort a big injection of artificial intelligence to try to stem the flood of “extremist” material on the social network (The New York Times). For those who are outraged that Facebook and other online platforms haven’t done enough to counter terrorist recruiting materials and organizers, this will be welcome news. But it raises lots of dilemmas for Facebook that we fear the company isn’t ready to resolve, despite VP Elliot Schrage’s admission that this is one of the “hard questions” the company now confronts.
“We agree with those who say that social media should not be a place where terrorists have a voice,” two Facebook managers wrote, explaining company policy. Their post names ISIS and Al Qaeda as examples of groups they’re aiming to limit. But it barely acknowledges the larger issue of defining “terrorism” and “terrorist content” in a more rational, appropriate, and universal way than just “Muslims who bomb people,” or neatly distinguishing between posts that describe terrorist acts and those that promote them.
Facebook, Microsoft, Twitter, and YouTube are “partnering to help curb the spread of terrorist content online.” They’ve announced a joint plan to use hashes (“digital fingerprints”) to signal that one of them has taken down a particular image or video. Get something banned from one service, and it will now be much easier for the other services to block it, too (Ars Technica).
Sounds great! We’ve all read stories about “self-radicalizing” loners who watch one too many ISIS/Daesh recruiting videos and become dangerous. It makes sense for the big platforms to cooperate in fighting this problem, right?
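The mechanics of the scheme are simple to sketch. Note the real consortium system relies on perceptual hashes that survive re-encoding and cropping; the minimal illustration below uses an exact SHA-256 digest instead, and the function names and the shared set are hypothetical, not the companies’ actual API.

```python
import hashlib

# Hypothetical shared database of fingerprints of banned content.
# The real system uses perceptual hashes (robust to re-encoding);
# SHA-256 here is a simplification for illustration.
shared_banned_hashes = set()

def fingerprint(content: bytes) -> str:
    """Return a hex digest serving as the content's 'digital fingerprint'."""
    return hashlib.sha256(content).hexdigest()

def report_takedown(content: bytes) -> None:
    """One service removes a piece of content and shares its hash."""
    shared_banned_hashes.add(fingerprint(content))

def is_banned(content: bytes) -> bool:
    """Any participating service checks uploads against the shared set."""
    return fingerprint(content) in shared_banned_hashes

# Service A takes down a video and shares its fingerprint...
video = b"...recruiting video bytes..."
report_takedown(video)

# ...so Service B can block an identical upload on sight.
print(is_banned(video))     # True
print(is_banned(b"other"))  # False
```

The appeal of sharing hashes rather than the content itself is that no service has to host or redistribute the banned material; the trade-off, as the exact-match simplification makes obvious, is that any change to the file defeats a naive fingerprint, which is why perceptual hashing matters in practice.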
In “Nestle Was a Trusted Brand in India. Then It Wasn’t,” Fortune’s Erika Fry goes deep on a noodle debacle in India that cost Nestle half a billion dollars. After the company’s vastly popular Maggi noodles — one of India’s top brands — were found by state laboratories to contain both MSG and lead, Nestle took the product off the market (a step ahead of regulators). But Nestle insisted the product was safe despite the government’s findings — and failed to engage with regulators in a productive way. Maggi is now back on the market, but the brand is tainted. This is a vivid example of why companies should respect outside concerns even when they think they’ve done nothing wrong — and of the tremendous costs when they misstep.