Are We Smart Enough To Control AI?

Maybe the right way to master artificial intelligence isn’t through the markets, but through open collaboration, pure research, and … (shudder) our governments


One of the most intriguing public discussions to emerge over the past year is humanity’s wrestling match with the threat and promise of artificial intelligence. AI has long lurked in our collective consciousness — negatively so, if we’re to take Hollywood movie plots as our guide — but its recent and very real advances are driving critical conversations about the future not only of our economy, but of humanity’s very existence.

In May 2014, the world received a wake-up call from famed physicist Stephen Hawking. Together with three respected fellow scientists, the world’s most renowned scientist warned that the commercially driven creation of intelligent machines could be “potentially our worst mistake in history.” Comparing the impact of AI on humanity to the arrival of “a superior alien species,” Hawking and his co-authors found humanity’s current state of preparedness deeply wanting. “Although we are facing potentially the best or worst thing ever to happen to humanity,” they wrote, “little serious research is devoted to these issues outside small nonprofit institutes.”

That was two years ago. So where are we now?

As far as the tech industry is concerned, AI is already here; it’s just not evenly distributed. Which is to say, the titans of tech control most of it. Google has completely reorganized itself around AI and machine learning. IBM has done the same, declaring itself the leader in “cognitive computing.” Facebook is all in as well. The major tech players are locked in an escalating race for talent, paying as much for top AI researchers as NFL teams do for star quarterbacks.

Let’s review. Two years ago, the world’s smartest man said that ungoverned AI could well end humanity. Since then, most of the work in the field has been limited to a handful of extremely powerful for-profit companies locked in a competitive arms race. And that call for governance? A work in progress, to put it charitably. Not exactly the early plot lines we’d want, should we care to see things work out for humanity.

Which raises the question: When it comes to managing the birth of a technology generally understood to be the most powerful force ever invented by humanity, exactly what kind of regulation do we need?

Predictably, last week The Economist argued that we shouldn’t worry too much about it, because we’ve seen this movie before, in the transition to industrial society — and despite a couple of World Wars, that turned out all right. Move along, nothing to see here. But many of us have an uneasy sense that this time is different — it’s one thing to replace manual labor with machines and move up the ladder to a service and intellectual property-based economy. But what does an economy look like that’s based on the automation of service and intellect? The Economist’s extensive review of the field is worth reading. But it left me unsettled.

“The idea that you can pull free physical work out of the ground, that was a really good trick.” That’s Max Ventilla, the former head of personalization for Google, who left the mothership to start the mission- and data-driven education startup AltSchool. In an interview for an upcoming episode of our Shift Dialogs video series, Ventilla echoed The Economist’s take on the shift from manual labor to industrialized society and the rise of the fossil fuel economy. But he feels that this time, something’s different.

“Now we’re discovering how to pull free mental work out of the ground,” he told me. “[AI] is going to be a huge trick over the next 50 years. It’s going to create even more opportunity — and much more displacement.”

Hawking’s call to action singled out “an IT arms race fueled by unprecedented investments” by the world’s richest companies. A future in which super-intelligent AI is controlled by an elite group of massive tech firms is bound to make many of us uneasy. What if the well-intentioned missions of Google (organize the world’s information!) and Facebook (let people easily share!) are co-opted by a new generation of corporate bosses with less friendly goals?

As you might expect, the Valley has an answer: OpenAI. A uniquely technological antidote to the problem, OpenAI is led by an impressive cadre of Valley entrepreneurs, including Elon Musk, Sam Altman, Reid Hoffman, and Peter Thiel. But instead of creating yet another for-profit company with a moon-shot mission (protect humanity from evil AI!), their creation takes the form of a research lab with a decidedly nonprofit purpose: to corral breakthroughs in artificial intelligence and open them up to anyone and everyone, for free. The lab’s stated mission is “to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.”

OpenAI has managed to convince a small but growing roster of AI researchers to spurn offers from Facebook, Google, and elsewhere, and instead work on what might best be seen as a public commons for AI. The whole endeavor has the whiff of the Manhattan Project — but without the government (or the secrecy). And instead of racing against the Nazis, the good guys are competing with … well, the Valley itself.

One really can’t blame the big tech companies for trying to win the AI arms race. Sure, there are extraordinary profits if they do, but in the end they really have no choice in the matter. If you’re a huge, data-driven software business, you either have cutting-edge AI driving your company’s products, or you’re out of business. Once Google uses AI to make its Photos product magical, Facebook has to respond in kind.

Smart photostreams are one thing. But if we don’t want market-bound, for-profit companies determining the future of superhuman intelligence, we need to be asking ourselves: What role should government play? What about universities? In truth, we probably haven’t invented the institutions capable of containing this new form of fire. “It’s a race between the growing power of the technology, and the growing wisdom we need to manage it,” said Max Tegmark, a founder of the Future of Life Institute, one of the small AI think tanks called out in Hawking’s original op-ed. Speaking to the Washington Post, Tegmark continued: “Right now, almost all the resources tend to go into growing the power of the tech.”

It’s not clear whether OpenAI will spend most of its time building new kinds of AI, or whether it will become something of an open-source clearinghouse for the creation of AI failsafes (the lab is doing early work in both). Regardless, it’s both comforting and a bit disconcerting to realize that the very same people who drive the Valley’s culture may also be responsible for reining it in. Over the weekend, The New York Times op-ed pages took up the issue, noting AI’s “white guy problem” (it’s worth noting the author is a female researcher at Microsoft). Take a look at the founding team of OpenAI: a solid supermajority of white men.

“It’s hard to imagine anything more amazing and positively impactful than successfully creating AI,” writes Greg Brockman, the founding CTO of OpenAI. But he continues with a caveat: “So long as it’s done in a good way.”

Indeed. But who determines what is good? We are just now grappling with the very real possibility that we might create a force more powerful than ourselves. Now is the time to ask ourselves — how do we get ready?

Can a small set of top-level researchers in AI provide the intellectual, moral, and ethical compass for a technology that might well destroy — or liberate — the world? Or should we engage all stakeholders in such a decision — traditionally the role of government? Regardless of whether the government is involved in framing this question, it certainly will be involved in cleaning up the mess if we fail to plan properly.

Back when AI was in early development, the single most powerful critique of it was its “brittle” nature: it broke down whenever it encountered inputs and parameters its designers hadn’t anticipated. Now that we stand on the brink of strong AI, we’d be wise to include a diversity of opinion — in particular those who live outside the Valley, those who don’t look and think like the Valley, and those who disagree with our native techno-optimism — in the debate about how we manage its impact.
