How AIs Will Fight It Out To Show You Ads

The NewCo Daily: Today’s Top Stories

Image: JD Hancock | Flickr

Artificial intelligence is making its entry into our lives right now — and, as with so many other innovations, it isn’t coming in through the door everyone expected. Sure, eventually we’ll all ride in self-driving cars on our permanent vacations from our jobs that are being done by machine intelligences. But for now, the AIs in our lives are going to be very busy with far more mundane work: delivering — and blocking — ads.

You already know that the ads you see in your browser and on your phone are selected via the complex interaction of bidding algorithms working from targeting data. You may also already be running an ad blocker that tries to speed the loading time of pages you want to read by bypassing all of that advertising machinery.

Advertisers and ad blockers have been engaged in an arms race for years now; the blockers look for code that identifies ads, but the advertisers can keep altering their own code. At Princeton and Stanford, researchers have invented a new kind of ad blocker that relies instead on computer vision techniques (Motherboard). That could end the ad-block arms race with a clear win for the blockers.
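To make the contrast concrete, here is a toy sketch in Python of the perceptual idea: judge an element by what a reader actually sees, such as the “Sponsored” disclosures ads are required to carry, rather than by markup or URLs that ad networks can rename at will. The element structure and cue list are our own illustrative stand-ins, not the researchers’ code, which also uses image recognition for cues like the AdChoices icon.

```python
# A toy sketch of perceptual ad blocking: classify a page element by
# what the reader would actually see, not by the markup or URLs that
# ad networks can rename at will. The element structure and cue list
# are illustrative stand-ins, not the Princeton/Stanford code.
from dataclasses import dataclass

# Visible disclosure cues that ads are generally required to carry.
DISCLOSURE_CUES = ("sponsored", "advertisement", "ad choices", "promoted")

@dataclass
class RenderedElement:
    visible_text: str  # the text a reader sees after the page renders

def looks_like_ad(element: RenderedElement) -> bool:
    """Flag elements whose visible content carries an ad disclosure cue.
    Shuffling scripts or renaming CSS classes doesn't change this signal;
    the advertiser would have to hide the disclosure itself."""
    text = element.visible_text.lower()
    return any(cue in text for cue in DISCLOSURE_CUES)

# Usage with a couple of made-up page elements:
banner = RenderedElement("Sponsored · Try MegaWidget today!")
story = RenderedElement("City council approves new bike lanes")
print(looks_like_ad(banner), looks_like_ad(story))  # True False
```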

But hang on: Today, at least, computer-vision programs are pretty easy to fool (The Verge). The machine-learning models that computer eyes rely on learn to recognize things from training data, and inputs crafted to exploit the blind spots in what they’ve learned can make them see the wrong thing. AI researchers call such tactics “adversarial attacks”: weird patterns that mean nothing to you and me but could be used to disarm or disable AI-based tech. For example, military targeting systems. Or ad targeting systems. We’re all going to have to be careful out there.
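How easy is easy? The best-known recipe, the fast gradient sign method, nudges every pixel a tiny amount in whichever direction most confuses the model. Here is a minimal sketch in Python with PyTorch; the stand-in classifier and the epsilon value are illustrative assumptions, not any real ad blocker’s internals.

```python
# A minimal sketch of an adversarial attack using the fast gradient
# sign method (FGSM). The tiny untrained classifier below is a
# stand-in for whatever model a vision-based system might use; the
# image size and epsilon are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in classifier: a 3x32x32 image in, 10 class scores out.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
model.eval()

def fgsm_perturb(image, true_label, epsilon=0.03):
    """Nudge each pixel slightly in the direction that most increases
    the classifier's loss. The change is imperceptible to a person,
    but against a trained model it is often enough to flip the output."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Demo with a random "image" and label, just to show the mechanics.
img = torch.rand(1, 3, 32, 32)
label = torch.tensor([3])
adv = fgsm_perturb(img, label)
print("original prediction: ", model(img).argmax().item())
print("perturbed prediction:", model(adv).argmax().item())
```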

Meanwhile, at Google, researchers have begun exploring the concept of “generative adversarial networks” (Wired). Think: two AIs squaring off, with the first trying to achieve some goal — like manufacturing images that are indistinguishable from “real” images — and the second trying to “defeat” the first by correctly picking out the first AI’s handiwork from the genuine article. Over time, the work of the first, “generative” AI will improve under the tutelage of the second, “critical” AI. (Imagine two computer programs playing both roles in a Turing test scenario.)
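For the code-curious, here is a bare-bones sketch of that duel in Python with PyTorch. The “real” data is just points scattered around (4, 4), and the two tiny networks are illustrative stand-ins, not the Google research described in Wired; the point is only to show the generator and the critic taking turns.

```python
# A bare-bones generative adversarial network: the generator fabricates
# samples, the discriminator (the "critic") tries to tell them apart
# from real ones, and each improves by trying to beat the other.
# The "real" data here is just 2-D points clustered around (4, 4);
# network sizes and learning rates are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
discriminator = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def real_batch(n=64):
    # "Real" data: points drawn from a Gaussian centered at (4, 4).
    return torch.randn(n, 2) + 4.0

for step in range(2000):
    # 1) Train the critic: label real samples 1, fabricated samples 0.
    real = real_batch()
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = (bce(discriminator(real), torch.ones(64, 1))
              + bce(discriminator(fake), torch.zeros(64, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator: try to make the critic call its output "real".
    fake = generator(torch.randn(64, 8))
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# The generator's samples should now drift toward the real cluster at (4, 4).
print(generator(torch.randn(5, 8)))
```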

Generative adversarial networks are rife with profound possibilities. But don’t be shocked if their first in-the-field use turns out to be something pretty shallow — like, say, helping advertising networks get past your digital defenses.


Can Facebook Manage Augmented Reality Any Better Than Plain Reality?

“Augmented reality” means adding a layer of data (or decorations) over the real world that we see. Google Glass was an early take on this tech, Pokémon Go introduced it to the masses, and now Facebook is going for it, big time. In an announcement yesterday at Facebook’s annual developer conference, Mark Zuckerberg unveiled his company’s play to build a platform for AR that could serve as the technology’s public playground (The New York Times) and encourage other firms to dive in.

Today AR looks mostly like the next stage in Snapchat-style camera filters and effects, but you can see how quickly the combination of good data and good glasses could become useful, too. Zuckerberg envisions friends leaving “virtual notes for one another on the walls outside their favorite restaurants, noting which menu item is the most delicious.”

The problem — underscored by this week’s grisly Facebook Killer story — is that Facebook already stumbles trying to manage its existing vast and unruly text-and-image platform. As Mat Honan argues in Buzzfeed, “The problem with connecting everyone on the planet is that a lot of people are assholes.” The more power you put in their hands, the more you have to plan for how it will be misused. As Honan writes: “They will leave bogus reviews of restaurants to which they’ve never been, attacking pizzerias for pedophilia. If anyone can create a mask, some people will inevitably create ones that are hateful.”

What if Facebook hit “pause” on its plans to invent a wow-filled new world, and worked harder on learning how to cope with the flawed world we share now? It might lose a competitive lap. But it could also earn our trust.


Is Inequality An Inevitable Byproduct of Capitalism?

Over time, do capitalism and a free market trend toward more equality or more inequality? For decades economists bought into a theory by Simon Kuznets that suggested capitalism eventually makes societies more equal. More recently, work by Thomas Piketty and his colleagues has challenged that view.

Bloomberg features an interesting exchange on this question between economics columnist Noah Smith and Weapons of Math Destruction author Cathy O’Neil. In March, a column by Smith used data from developing countries to defend the Kuznets view. O’Neil counters that in the countries Smith used as examples, inequality is shrinking chiefly thanks to cash-transfer programs that helped poor parents send their kids to school. Classical economists often dislike such programs because they distort the market.

Are such policies “socialist” or “capitalist”? That argument is ultimately one of nomenclature, not substance. What really matters is that wealth inequality and income inequality function differently. Smith notes that Piketty focuses on wealth inequality, while Kuznets’ ideas emerged from income data. Maybe the next generation of economists will figure out how income and wealth inequality interact — and what we need to do to reduce both.


The NewCo Daily is taking Thursday and Friday off this week. See you again on Monday!

