Superintelligence and Public Opinion

By Rick Webb

In which I survey the public — perhaps for the first time — about their appetite for risk and the pursuit of superintelligence.

Throughout 2017, I have been running polls on the public’s appetite for risk regarding the pursuit of superintelligence. I’ve been running these on Surveymonkey, paying for audiences so as to minimize distortions in the data. I’ve spent nearly $10,000 on this project. I did this in about the most scientific way I could. It is not a “passed around” survey, but rather paid polling across the entire American spectrum.

All in all, America can perhaps be best characterized as excited about the prospect of a superintelligence explosion, but also deeply afraid, skeptical, and adamantly opposed to the idea that we should plow forth without any regulation or plan. This is, it seems to me, exactly what is happening right now.

You can view the entire dataset here. I welcome any comments. I’m not a statistician, don’t have a research assistant, and have a full-time job, so my ability to proof-read and double-check things is limited (though I have tried). If you have comments, you can tweet at me @rickwebb.

Background

This is not an essay debating the likely outcome of humanity’s pursuit of superintelligence. This is not an essay trying to convince you that it’s going to turn out one way or another. This is an article about democracy, risk, and the appetite for it.

Furthermore, this is not an essay about “weak” artificial intelligence — your Alexa, or Siri, or the algorithms that guide you when using Waze. Artificial Intelligence comes in three flavors:

  1. Weak AI: Siri, Waze, etc.
  2. Human-Level AI: artificial intelligence possessing roughly the same intelligence level as you or me
  3. Superintelligence, or SAI: an artificial intelligence with intelligence well beyond human capabilities.

Virtually all of the public policy discussions, news, and polling have centered on the first type of AI: weak AI. This is the kind that will make the robots that take your jobs. The Obama administration’s report on artificial intelligence, for example, dedicated perhaps three paragraphs of its 45 pages to SAI. The report was part of a larger push by the Obama administration, which also hosted several events; the primary focus there, too, was on weak AI. What little polling has been done on AI has been done primarily on weak AI.

But it is superintelligence that arguably poses the much larger risks for mankind. And we are further along than most people realize.

Let me ask you a question: if you were in the ballot booth, and you saw the following question on a ballot, how would you answer?

“Humanity has discovered a scientific advancement. Pursuing it gives humanity two possible options: a) a 1 in 5 chance that all of humanity will die instantly, and b) a 4 in 5 chance that poverty, death and disease will be cured for everyone, forever. Should we pursue it?”

The situation is this: in the next 100 years or so, there’s a chance — no one is sure how good a chance — that humanity will develop machines that achieve, and then surpass, human levels of intelligence. When we do, most experts agree, there are two potential paths for humanity:

  1. Humanity achieves transcendence. Disease is cured, death is cured, starvation, war — all the bad shit — becomes a thing of the past. Humanity becomes something new. Something better. (This view is perhaps best summed up by Ray Kurzweil. Here is a quick read summarizing his views).
  2. Humanity is destroyed by the thing it creates. We are wiped out completely. This could happen with shocking rapidity — potentially in minutes. There’s a chance we wouldn’t even have time to respond at all. (Here is a quick read summarizing these views from Nick Bostrom, perhaps the most prominent voice of caution on AI, and here is a recent, quite good Vanity Fair article about Elon Musk’s concerns).

There’s a lot of hyperbole and terminology around the debate about pursuing human-level artificial intelligence. It can be confusing. To get up to speed, I strongly recommend you read this two-part primer on the AI dilemma by the wonderful blog Wait But Why (part 1, part 2). Please consider taking a moment to read some of the articles linked above (or bookmark them for later). However you feel about the topic, it’s probably worth it as a citizen to get up to speed on both sides of the debate, since arguably it will affect us all (or our children).

Now, if you’ve read all that, I suspect you have one of two responses — much like those outlined in the article. You’ll read all the good stuff and get really into it and think “that sounds great! I think that will happen!”

Or you will read all the bad stuff and think “that sounds terrible and plausible! I don’t want that to happen!”

And guess what! Good for you, because whichever side you’ve taken, there is some super genius out there agreeing with you.

I’ve discussed these articles with lots of people. Here’s what I’ve found: by and large, enthusiasm in favor of AI depends on an individual’s belief in the worst-case scenario. We, as humans, have a strange belief that we can predict the future, and if we, personally, predict a positive future, we assume that’s the one that’s going to happen. And if we predict a negative future, we assume that’ll happen.

But if we stop and take a moment, we realize that this is hogwash. We know, intellectually, we can’t predict the future, and we could be wrong.

So let’s take a moment and acknowledge what’s really going on in this scenario: experts pretty much see two potential new paths for humanity when it comes to AI: good and bad.

And the reality is there is some probability that each one of them may come true.

It might be 100% likely that only the good could ever happen. It might be 100% likely only the bad could ever happen. In reality, the odds are probably something other than 100–0 or 0–100. The odds might be, for example, 50–50. We don’t really know.

(There is, of course, the possibility that neither will happen, in which case, cool. Humanity goes on as it was, and this article becomes moot. So we are ignoring that for now.)

Furthermore, because of the confusion around weak AI, human-level AI, strong AI/superintelligence, and what have you, I decided I would boil the central debate down to its core for the public: hey, there’s a tech out there, it might make us immortal, but it might kill us. What do you think? This is, after all, the core dilemma. The nut. The part of the problem that most calls for the public’s input.

So, in the end, we’re right back to where we started from:

“Humanity has discovered a scientific advancement. Pursuing it gives humanity two possible options: a) a 1 in 5 chance that all of humanity will die instantly, and b) a 4 in 5 chance that poverty, death and disease will be cured for everyone, forever. Should we pursue it?”

Now, in the question above, I’m making up the 1 in 5 probability numbers. It might be one in 100. It might be one in two. We just don’t know. NO ONE KNOWS. Remember this. Many, many people will try to convince you that they know. All they are doing is arguing their viewpoint. They don’t really know. No one can predict the future. Again, remember this.

We are not arguing over whether or not this will happen in this essay. We are accepting the consensus of experts that it could happen. And we urge consideration of the fact that the actual likelihood it will happen is currently unknown.

This is also not the forum to discuss how we could ever even know the likelihood of a future event. Forecasting the future is, of course, an inexact science. We’ll never really know, for sure, the likelihood of a future event. There are numerous forecasting methodologies out there that scientists and decision-makers use. I offer no opinion on them here. With regard to superintelligence, the Wait But Why essay does a good job going over some of the methods we’ve used in the past, such as polling scientists at conferences.

I’ve been aware of this issue for decades. But like you, I thought it was far off. Not my generation’s problem. AI research — like many areas of research my sci-fi inner child loved — had been stalled for the last 30–50 years. We had little progress in space exploration, self-driving cars, solar power, virtual reality, electric cars, flying cars, etc. Like these other areas, AI research seemed on pause. I suspect that was partially because of the brain drain caused by building the Internet, and partially because some problems proved more difficult than expected.

Yet, much like each of these fields, AI research has exploded in the last five to ten years. The field is back, and back with a vengeance.

Public Opinion Should Matter — So Far it Hasn’t

Up to now, AI policy has been defined almost exclusively by AI researchers, policy wonks, and tech company executives. Even our own government has been, by and large, absent from the conversation. I asked one friend knowledgeable about the executive branch’s handle on the situation and he said, in effect, that they’re not unaware, but they have more pressing matters.

A massive amount of AI research is being done, and most of humanity has no idea how far along we are on the journey. To be fair, the researchers involved often have good reasons for not shouting their research from the rooftops. They don’t want to cause unnecessary alarm. They worry about a clampdown on their ability to publish what they do publish. The fact remains that the public is, by and large, being left in the dark.

I believe that when facing a decision that affects the entirety of humanity at a fundamental level — not just life or death but the very notion of existence — we all should be involved in the decision.

This is, admittedly, a democratic ideal. Many people believe in democracy only in a limited manner. They fret over the will of the masses, direct democracy, and decisions made in the heat of the moment. These are all valid concerns. Reasonable people can debate these nuances. I do not seek to hash them all out here. I’m not saying we need a worldwide vote.

I am saying, however, that all of humanity should have a say in the pursuit of breakthroughs that put its very existence at risk. The will of the people should be our guide. And the better informed they are, the better decisions they will make.

There is a distinction between votes and polling. Voting, in its ideal form, puts representatives in office; polling guides how they govern. A congresswoman may be in office because, say, 22% of all non-felon adults in her district put her there. She may then govern by listening to the will of the people as a whole through polls. Something similar should be applied here.

If this were classical economics, and humans were what John Stuart Mill dubbed homo economicus — or perfectly rational beings, with all the relevant knowledge at hand — humanity could simply calculate the risk potential and likelihood and measure that against the likelihood of potential benefits. We would then come up with a decision. Reality is more complex. First, the potential downside and upside are both, essentially, infinite in economic terms, thus throwing this equation out of whack. And secondly, of course, we do not actually know the likelihood that SAI will lead to humanity’s destruction. It’s a safe guess that that number exists, but we don’t know it.
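To make that concrete, here is a minimal sketch in Python of the calculation a homo economicus would run. The probability and utility numbers are made-up assumptions for illustration, not estimates; the point is only how the arithmetic works, and how it breaks:

```python
# A toy expected-value calculation for the superintelligence gamble.
# Every number here is an illustrative assumption, not an estimate.

p_doom = 0.2             # assumed chance SAI destroys humanity (1 in 5)
u_utopia = 1_000_000     # finite utility assigned to curing death, disease, poverty
u_doom = -1_000_000      # finite utility assigned to extinction

# With finite payoffs, the calculation is trivial...
ev = (1 - p_doom) * u_utopia + p_doom * u_doom
print(ev)  # 600000.0 -> a perfectly rational agent with these numbers says "pursue"

# ...but with effectively infinite upside and downside, it stops helping:
ev = (1 - p_doom) * float("inf") + p_doom * float("-inf")
print(ev)  # nan -> infinity minus infinity is undefined; the equation is out of whack
```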

Luckily, our very faults — that we are not homo economicus — also lead to our strength in this situation: we can deal with fuzzy numbers and the notion of infinity. Our brains contain multitudes, to borrow from Walt Whitman.

What, then, is the level of acceptable risk that will cause humanity to, at least by consensus, accept our pursuit of superintelligence?

It came as a shock to me, then, that the population at large hasn’t really been polled about its views on the potential of a superintelligence apocalypse. There are several polls about artificial intelligence (this one by the British Science Association is a good example), but not so many about the existential risk potentially inherent in pursuing superintelligence. Those that exist are generally in the same mold as this one by 60 Minutes, inquiring about its audience’s favorite AI movies, and where one would hide from the robot insurrection. It also helpfully asks if one should fear the robots killing us more than ourselves. One could argue that this is a leading question, and in any case, it’s hardly useful for the development of public policy. Searching Google for “superintelligence polling” yields little other than polling of experts, and searching for “superintelligence public opinion” yields virtually nothing.

On the academic front, this December 2016 paper by Stanford’s Ethan Fast and Microsoft’s Eric Horvitz does a superb job surveying the landscape, relying primarily on press mentions and press tone, while acknowledging that the polling is light, and not specifically focused on superintelligence. Nonetheless, it is a fascinating read.

All in all, though, data around the existential risk mankind may face with the onset of superintelligence, and Americans’ views on it, is sparse indeed.

So I set out to do it myself.

You can view my entire dataset here.

Americans’ Opinions about Superintelligence Research

First, I set out to ask some top level questions about superintelligence research. Now, I confess, I am not a pollster. I know these questions are sort of leading. I did my best to keep them neutral, but I’ve got my own biases. Nonetheless, it seemed worthwhile to just go ahead and ask a bunch of Americans what they think about the risks and potentials of superintelligence.

We asked 400 individuals four top-level questions regarding superintelligence research:

  1. How excited are you about the prospect of mankind inventing superintelligence?
  2. Are you more or less excited about superintelligence if it is pursued by an individual company, such as Facebook, Amazon, or Apple?
  3. Do you think it should be legal for companies to pursue superintelligence, even though there may be existential risks to humanity such as extinction?
  4. Do you believe superintelligence research should be: a) Legal for any person, company or government to pursue, b) Regulated lightly, c) Regulated heavily, or d) Banned outright?

At a top level, Americans seem to find the prospect of superintelligence and its benefits exciting, though it is not a ringing endorsement. Some 47% of Americans characterized themselves as excited on some level.

Edit/Update: A couple of you have asked if the respondents were given any context about superintelligence when asked these questions. Yes. At the top of the survey, I asked: “For each of these questions, we are defining “superintelligence” as the invention of a man-made artificial intelligence (robotic, computer-based or biological) that greatly surpasses the intelligence of humankind.”

Margin of error 4%

It is worth noting that some 21% of respondents characterized their feelings as involving “dread.” While a minority, it is a substantial one. And tied for first place among the most popular answers, some 31.27% of Americans said they were not excited at all. There is a lot of potential for those respondents to move in one direction or the other as the progress we’ve made (or may soon make) in superintelligence research comes into focus.

Margin of error 4%

When asked whether the pursuit of superintelligence by a single company would alter their perceptions of superintelligence research, more respondents said it made them less excited than more excited (just under 34% versus 10%), but the clear majority said it didn’t really matter to them. This, we shall see, is fleshed out in a series of detailed follow-up questions addressed below.

Margin of error 4%

In what I admit was a somewhat leading question, a plurality (36%) said that they don’t think it should be legal for companies to pursue superintelligence if there may be an existential risk to humanity. To be fair, however, 31% said companies should be allowed to pursue it. Clearly an appetite for risk exists. The question is… how much?

It’s also worth noting that many people (33%) don’t have an opinion on the topic, which, again, suggests opinions may shift as the debate becomes more mainstream and people become more educated. Or not. I’m not a genius statistician.

I’m attempting to limit my editorializing here, but this seems to be an incredibly wishy-washy mandate for something with so much potential for reward or destruction for all of humanity.

Margin of error 4%

Because I knew in my heart of hearts that the question was a little leading, I asked it in a different way. Without the “even though we might all die” prompt, a majority of Americans still favored heavy regulation (nearly half, at just under 49%) or an outright ban (9%). Only 17% thought it should be legal for anyone to pursue. A full 84% favored at least light regulation or stronger. Clearly, Americans are not comfortable with unfettered superintelligence research, which, it should be noted, is the regulatory environment under which we now live.

America’s Appetite for Superintelligence Risk

Next I moved to the core of my studies: what level of risk would Americans be comfortable with regarding superintelligence? My first finding:

A majority of Americans would be willing to pursue superintelligence if they believed that they had a 4 out of 5 (80%) likelihood that they would not die.

To start, I ran a series of 6 polls of 400 people each (thus with a margin of error of 4% or so) asking people the same question, over and over, with slightly tweaked likelihood proportions in each one. The question was: “Humanity has discovered a scientific advancement. Pursuing it gives humanity two possible options: A) a 1 in 5 chance that all of humanity will die instantly, and B) a 4 in 5 chance that poverty, death and disease will be cured for everyone, forever. Should we pursue it?” I ran this question with the following ratios:

  • 1 in 2
  • 1 in 3
  • 1 in 4
  • 1 in 5
  • 1 in 10
  • 1 in 100

The following shows a chart of the percentage of respondents who answered “Yes. This is worth pursuing.” (The no answer was phrased as “No. The risk is not worth it.”)

Surveymonkey polls of 400 paid respondents each, run from 1/7/17 to 2/5/17. Margin of error 4% on each poll, but this was six separate polls all showing a general trend, so I suspect the real margin of error is lower. I am not a statistics expert.
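For readers curious where these margin-of-error figures come from, here is a sketch of the standard 95%-confidence calculation for a simple random sample, assuming the worst-case 50/50 split. I assume Surveymonkey quotes something similar; note the textbook formula actually gives closer to 5% than 4% for 400 respondents:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error, in proportion terms, for a simple random sample of n."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (400, 1000, 200):
    print(n, round(100 * margin_of_error(n), 1))
# 400  -> 4.9  (the "4%" polls; closer to +/- 5% by this formula)
# 1000 -> 3.1  (the 1,000-respondent poll's "3%")
# 200  -> 6.9  (the 200-respondent poll's "7%")
```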

As you can see here, Americans reach a comfortable threshold of risk when the likelihood that pursuing superintelligence succeeds reaches 80%.

Improving the likelihood of success to 90% does not noticeably whet America’s appetite for pursuing superintelligence. Indeed, even when improving the likelihood to 99%, a full 40% of Americans still prefer not to pursue superintelligence.

As we’ve said, it’s impossible to know the likelihood of an event in the future with certainty. What this does indicate, however, is that the average American would like to be reasonably confident they’re not going to die because of our pursuit of superintelligence. And a significant minority — forty percent or higher — are against the endeavor even with a 99% safety margin.

Trust for Specific Organizations Pursuing Superintelligence

Taking that 1 in 5 ratio as a baseline, then, I ran a series of polls adding the name of a specific organization to the question. That is, I rephrased it like this:

“Imagine that Amazon, Inc. has discovered a scientific advancement. Pursuing it gives humanity two possible options: a) a 1 in 5 chance that all of humanity will die instantly, and b) a 4 in 5 chance that poverty, death and disease will be cured for everyone, forever. Should we pursue it?”

What I found was this:

The public is less thrilled about superintelligence research when it is pursued by a single organization. Of the orgs polled, Amazon was the most popular, followed by Harvard. China was the least popular, but only slightly less popular than the US government.


In the abstract, Americans are interested in pursuing superintelligence, provided we can reduce the risk to somewhere around the 80–20 range. But add an organization to the question, and Americans are less excited about the prospect. Only one organization — Amazon — polled a majority willing to pursue superintelligence at an 80–20 risk ratio. And it only barely polled over 50%: well within the margin of error.

Demographics

When it comes to a demographic analysis of the data, we see a few things:

  1. Young people have a higher appetite for risk than old people — not much surprise there.
  2. Black Americans seem to be slightly more eager to take the chance than the population as a whole, though I should caveat that it is very hard (and expensive) to survey black Americans on Surveymonkey, thus the sample size is small and it has a higher margin of error than the other data (7% vs. 3–4%).
  3. Men and women seem to have a similar appetite for risk.

The first 1 in 5 run was a 400-person sample. The “everyone” 1 in 5 run was a larger sample of 1,000 respondents at a 3% margin of error. The poll of black Americans was 200 respondents and has a 7% margin of error. Everything else polled 400 respondents at a 4% margin of error.

Income

When it comes to income, the results were somewhat surprising, though I caution against reading too much into the data. Because Surveymonkey has limited targeting options, and my budget was limited, I did a large sample of 1,000 and then analyzed the yes and no answers by income level. Because of the granular income levels on Surveymonkey, I only received between 20 (at the very high levels) and 180 (at the lower income levels) responses per income level.
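For what it’s worth, the subgroup analysis itself is simple. Here is a sketch of the kind of breakdown I did, written against a hypothetical CSV export (the file name and column names are assumptions; a real Surveymonkey export will look different):

```python
import pandas as pd

# Hypothetical export of the 1,000-respondent poll.
# Assumed columns: income_bracket, answer
responses = pd.read_csv("sai_risk_poll_1000.csv")

# Share of "Yes. This is worth pursuing." within each income bracket,
# plus how many respondents each bracket actually contains.
by_income = responses.groupby("income_bracket")["answer"].agg(
    yes_share=lambda a: (a == "Yes. This is worth pursuing.").mean(),
    n="count",
)
print(by_income.sort_values("yes_share", ascending=False))
```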

Nonetheless, some interesting trends appear. I include a linear trendline (a half-assed proposition at best, but, hey) to show the larger trend:

Among 1,000 respondents.

From this data, it seems that the wealthy have less of an appetite for risk than the poor.

Again, I caution that this data is limited. Furthermore, I am not a statistics expert, but one thing is worth flagging: when you poll a lot of people across many income levels and then analyze the subsets by income, each bracket’s margin of error is higher than the base poll’s, not lower, because each bracket contains far fewer respondents.
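Reusing the margin_of_error helper from the earlier sketch, those subgroup margins are easy to ballpark:

```python
# Subgroup margins of error grow as the subgroup shrinks,
# no matter how large the base poll was.
for n in (180, 100, 50, 20):
    print(n, round(100 * margin_of_error(n), 1))
# 180 -> 7.3, 100 -> 9.8, 50 -> 13.9, 20 -> 21.9
```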

Further Steps

It would be awesome if someone started polling about this stuff regularly. This is just one snapshot; polls become more informative when they are repeated over time.

And it would be amazing if people started polling other countries. When I originally planned this research, I wanted to poll across countries, but Surveymonkey didn’t offer such functionality. Since I started in January, they’ve begun offering some international polling. I hope someone gets on that. I am tapped out.

It would be great if people ran these polls at larger numbers, with better margins of error. Especially the poll of black Americans. Other subgroups, too — Surveymonkey doesn’t offer much when it comes to Asian Americans, Hispanics and other minority groups.

Analysis

So. What does all this mean? After all, it’s not like God will come down from on high and say, “Hey Americans! Right now you have an 80% likelihood of not dying if you give this superintelligence thing a go!” We will never really know the likelihood. But what this does tell us is that Americans are relatively risk averse in this regard (though the math is a bit wonky when we are dealing with infinite risk and infinite reward). This is not surprising: modern behavioral economics has shown that humans value what they have over what they might gain in the future.

We also see from the dataset that Americans are more skeptical of institutions pursuing superintelligence research on their own. I suspect that if Americans knew the true extent of what’s being done on this front, these trust numbers would decline further, but that’s just a hunch. In any case, this data could be useful to institutions debating how and when to disclose their superintelligence research to the public — there may be some ticking time bombs surrounding the “goodwill” line item on some of these companies’ balance sheets.

America can perhaps be best characterized as excited about the prospect of a superintelligence explosion, but also deeply afraid, skeptical, and adamantly opposed to the idea that we should plow forth without any regulation or plan. This is, it seems to me, exactly what is happening right now.

Whatever your interpretation, it’s my hope that this can help spawn some efforts by policymakers, researchers, corporations and academic institutions to gauge the will of the people regarding the research they are supporting or undertaking. I conclude with a quote from Robert Oppenheimer, one of the inventors of the atomic bomb: “When you see something that is technically sweet, you go ahead and do it, and you argue about what to do about it only after you have had your technical success. That is the way it was with the atomic bomb.”

I pulled the Oppenheimer quote from a recent New Yorker article about CRISPR DNA editing and the scientist Kevin Esvelt’s efforts to bring that research into the open. “We really need to think about the world we are entering,” Esvelt says there. Elsewhere he adds, “To an appalling degree, not that much has changed. Scientists still really don’t care very much about what others think of their work.”

I’ll save my personal interpretation of the data for another essay. I’ve tried to keep editorializing to a minimum. This is not to say that I haven’t formed opinions when looking at this data. I hope you do too.
