The Real Risks of AI

Augmenting our own abilities is the best possible defense


“There is a movie called ‘Arrival’ that I happened to work on as a science adviser. And it’s about communicating. I realized that the problem of communicating with extraterrestrials is bizarrely similar to the problem of communicating intentions to AIs. An AI is an example of an alien intelligence, so to speak.”

— Stephen Wolfram, CEO, Wolfram Research

Discussions today about the risks of AI often bark up the wrong tree and tend to be misleading.

This matters because AI is being invoked left and right. Basic misunderstandings of what AI is and isn’t are also creating unnecessary fear, and those fears in turn stifle adoption.

Confoundingly, by running away from AI or making false assumptions about its evolution, we potentially make our worst case scenario more likely, not less. We should also be as focused on understanding human intelligence as we are on AI.

We need to first get straight on the many different meanings of AI. I’ll move quickly.

One definition of AI, which is most frequently used in business and academic contexts, is a discipline of computer science in which a system (i.e., a machine) learns from data. This definition is largely synonymous with machine learning and encompasses computationally complex tasks such as natural language processing, predictive analytics, pattern recognition, computer vision, robotics, and more.
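
To make "learns from data" concrete, here is a deliberately tiny sketch in Python using the scikit-learn library. The data and feature meanings are invented purely for illustration; the point is that the decision rule is inferred from examples rather than explicitly programmed.

    # A minimal illustration of "a system that learns from data":
    # the classifier infers a decision rule from labeled examples.
    from sklearn.linear_model import LogisticRegression

    # Invented data: each row is (hours_of_use, support_tickets); 1 = churned, 0 = retained.
    X = [[1, 9], [2, 7], [8, 1], [9, 0], [3, 6], [7, 2]]
    y = [1, 1, 0, 0, 1, 0]

    model = LogisticRegression()
    model.fit(X, y)                 # the "learning" step
    print(model.predict([[6, 1]]))  # prediction for a previously unseen example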

Use cases for this first kind of AI include autonomous cars, robots, chatbots, trading systems, facial recognition, and virtual assistants. These use cases typically combine multiple computational tasks using machine learning to achieve seemingly miraculous results such as self-driving cars. Virtually every “AI” article is about this type of AI, which will be referred to here as “VAI,” for Vertical Artificial Intelligence.

The risk posed by VAI is not significant, but it does exist. The most likely scenario for VAI danger is faulty (i.e., badly programmed, trained, or managed) systems. Weapon systems are the worst-case scenario; compared to a domain like virtual assistants, they obviously carry a much higher risk of killing humans by accident or at scale. To be clear, any apocalyptic scenario involving autonomous weapons systems would be initiated by humans.

Healthcare-related fault scenarios are pretty scary too. But on the whole, VAI is not something to be worried about in and of itself. VAI is as error-prone as any other computational system, but it is not existentially worrisome.

Autonomous cars, one of the most rapidly advancing VAIs and likely one of the first real experiences most people will have with this type of technology, also carry risk. These systems, like healthcare, will be heavily regulated, which will help reduce the risk of system faults. There are still significant edge cases, embodied by the trolley problem, which are difficult to solve at the human (ethics) level.

A second definition of AI, which is the more widely perceived pop-cultural meaning, is a self-aware or conscious system that is intelligent in a more profound sense.

Such a system can act on its own and can do so in ways that are not fully dependent on specific programming. It may choose to act in its own self-interest or to ensure its survival. This type of AI is more accurately called an “AGI,” for Artificial General Intelligence. This type of AI is also sometimes called “HLI” for Human Level Intelligence, which helps further frame what it is and of what it might be capable.

For the purposes of this article, AGI will be used to mean a “true” artificial intelligence. The closest thing (that we know about) to this in the real world is probably Google’s DeepMind.

The Actual Worst Case

The “artificial” in artificial intelligence means man-made, which does not necessarily mean understood.

Humans can already make lots of things we don’t understand, and even more things we cannot control. There is a strong case to be made that a true AI will emerge without our fully understanding how it works.

The best argument for why we will not be able to understand AGI is quite simple: we don’t understand human intelligence. This is also a commonly given reason for why we are not close to creating AGI ourselves. An intelligence of this type is more likely to be emergent.

This may preclude humanity from understanding it or controlling it or, in what may turn out to be the most terrifying scenario, from even recognizing AGI in the first place. In this case, a more direct way to think of this AGI is that it is alien rather than artificial. Stephen Wolfram’s quote that opens this post nails this argument.

An AGI could be something that we have virtually no understanding or recognition of, but which may have a significant understanding of us if it is given access to the Internet or to a significant data repository.

Such a lack of mutual understanding is where all of the real risks reside. This is what we should be talking about when we talk about worst case scenarios. We erroneously assume that we will be able to recognize AGI as such.

Imagine something simple, like an exploratory rover on the Moon or Mars that is tasked with autonomous exploration and sends periodic raw data back to Earth, or an AI developed for games, or an AI developed purely for research. Such a system could become generally intelligent in ways we are unable to perceive for some time, if at all.

This is an important point in identifying risks of a new AGI. Many assume it will be easy to spot, possibly by passing the Turing Test. However, an AGI will, by definition, be very smart and may choose to conceal itself by intentionally failing these types of tests or by hiding itself (perhaps by just performing its mundane VAI tasks) until such a time as it is ready to reveal itself — most likely after an intelligence explosion.

Curiosity and Self-Modification

“An ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.”

— Irving John Good

What makes an AGI an AGI? A likely prerequisite of a true AGI is curiosity — “I wonder what would happen if…?” Curiosity takes a step beyond learning, which can just be a task. Curiosity is also a step beyond adaptation, which can also just be a task.

True intelligence moves past simple ideas like goal-seeking, which is often considered another cornerstone of varying levels of AI and a potential control mechanism. Almost all human progress is driven by curiosity, which is a short hop to ambition. Ironically, curiosity is the human trait most likely to lead not just to AGI, but past it to superintelligence.
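
For readers who want something concrete: in machine-learning research, curiosity is often approximated as an intrinsic reward for prediction error, so the system prefers to explore whatever its own model of the world currently explains worst. The Python sketch below is a toy illustration of that idea under invented assumptions (a made-up "world" function and a ten-action agent), not a claim about how an AGI would be built.

    import numpy as np

    rng = np.random.default_rng(0)

    def world(action):
        """Unknown dynamics the agent is trying to model (hidden from the agent)."""
        return np.sin(action) + 0.05 * rng.normal()

    n_actions = 10
    predicted = np.zeros(n_actions)        # the agent's forward model
    surprise = np.full(n_actions, np.inf)  # last prediction error per action

    for _ in range(200):
        a = int(np.argmax(surprise))                    # "I wonder what would happen if...":
        outcome = world(a)                              #   try the action we model worst
        surprise[a] = abs(outcome - predicted[a])       # intrinsic reward = prediction error
        predicted[a] += 0.5 * (outcome - predicted[a])  # improve the forward model

    print(np.round(predicted, 2))  # the agent has taught itself a rough approximation of sin(a)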

There is a capability beyond learning, adapting, curiosity, and even general intelligence that humans do not currently have. It is the ability to self-modify. Self-modification makes AGI a runaway train, for better or worse. Self-modification makes it possible to act on curiosity, and therefore makes it very unlikely we’ll be able to intervene or control an AGI.

An AGI could achieve self-modification in several ways. The easiest would be if it could alter its software to improve its performance. This is a common software engineering task, so there is no reason to think it would not be possible. An AGI could also improve its hardware through either the addition of new resources (computational cores, perhaps via one or more clouds) or new designs (designing chips or systems that are beyond the capabilities of current hardware engineering).
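
As a deliberately trivial sketch of that first path, the Python below shows a program that proposes random edits to its own configuration and keeps only those that improve a measured performance score, i.e. hill climbing over its own parameters. The parameter names and the performance function are invented for illustration; a real self-modifying system would operate on code and hardware, not two numbers.

    import random

    # Toy "self-modification": propose changes to the program's own parameters,
    # keep any change that improves measured performance.
    params = {"threshold": 0.1, "weight": 1.0}

    def performance(p):
        """Stand-in benchmark (higher is better); peaks at threshold=0.5, weight=2.0."""
        return -((p["threshold"] - 0.5) ** 2 + (p["weight"] - 2.0) ** 2)

    random.seed(0)
    for _ in range(2000):
        candidate = dict(params)
        key = random.choice(list(candidate))
        candidate[key] += random.uniform(-0.05, 0.05)     # propose a modification to itself
        if performance(candidate) > performance(params):  # keep it only if it helps
            params = candidate

    print(params)  # drifts toward the best-performing configuration of itself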

This concept has inspired books such as Our Final Invention and Superintelligence, along with a great deal of thinking about what would happen if an AGI with the ability to modify itself is ever created.

This self-modification is the precursor to an intelligence explosion creating an Artificial Super Intelligence or “ASI.”

An ASI, perhaps thousands of times smarter than any human and with instant access to all of humanity’s accrued knowledge, creates the real potential of an existential risk for us, especially if human intelligence doesn’t keep pace.

These risks are simple, real, and escalating. The most obvious consequence is easy for anyone to grasp: self-defense or self-preservation. An ASI cannot be simply turned off and would likely react very negatively to any attempt to do so.

A variant of this risk would be a situation where the ASI simply needs resources and humans happen to be in the way. An intelligent machine could make the leap that humans have not been particularly forgiving of lower-order creatures that stood in the way of their progress, and might assume this is normal behavior.

It is also highly likely that any ASI, which as you will remember knows all of human history, would move immediately to take control of any threats to its existence. (And the Internet would clearly show it that humanity considered it a threat.) Any government that did not believe it controlled an ASI but knew of its existence would likely attempt to control it, and the ASI would know this.

The basic ideas of self-defense and self-preservation combined with a knowledge of human history seem to inevitably lead to a bad situation for humans.

The Best Alternative Case

There is an alternative case to be made that if or when an AGI emerges, we will recognize and understand it. Any effort to define and govern the ethics of AI development and usage relies heavily on this case.

Most movies and books, probably because it is human nature, anthropomorphize hypothetical AGIs in ways that make them seem human. This is also a fault of the famous Turing Test, which assumes a human-like AI. This may not be unreasonable if an AGI arises from a system designed to solve human problems — say, trading in the stock market or driving cars.

These systems will necessarily view human behavior as a significant input and much of the learning process leading to their awakening will likely be based on analysis of human behaviors. Further human interaction will likely be an important part of their basic design and development.

One example of how this might occur is a scenario where the precursor to an AGI involves technology that mimics the human brain or relies on brain scans as a foundational element. An AGI that develops this way might be very similar in “personality” to humans, or may even be based on a human (perhaps we develop a way to upload human consciousness to a machine).

If this turns out to be the case, the fundamental risks associated with an alien, unrecognizable AGI will be much lower.

The most impactful thing we can do to increase the likelihood of this best case scenario is to develop a much more fundamental understanding of our own intelligence first. By understanding what makes human intelligence uniquely valuable, we help ensure our future.

Humans may also achieve the ability to self-modify in ways that are likewise wholly unique to us. This ability would allow us to improve exponentially — effectively achieving a superintelligence for humans. This path has its own risks: if there were only one superintelligent human, for example, we can easily imagine outcomes at both extremes of the positive and negative spectrum.

Synthetic biology and genetic modification using CRISPR are great examples of how this might happen, as is some form of mental augmentation yet to be discovered. These would allow us to control our own evolution, as Craig Venter has argued.

Bryan Johnson (Kernel), Elon Musk (Neuralink), and others are exploring ways to integrate computers with the human brain. This could be another path to humans assuring control of their destiny. Here is some fun reading about this: https://waitbutwhy.com/2017/04/neuralink.html.

So What Do We Do?

The best thing we can do right now is teach each other the difference between the computer science discipline of VAI, which is growing massively and will contribute greatly to the advancement of humanity, and “pop AI” or AGI, which is notionally much more dangerous. By conflating the two, we hide from what really matters.

Movies like Ex Machina demonstrate how difficult it is to maintain control of an AGI once it is created, even if the creator is fully cognizant that they have created an AGI (which is not itself a given, as we have discussed). The difference between a well-trained machine, and a curious, self-modifying, self-defending system is everything.

We must not be deterred in our dogged pursuit of vertical, narrower AI, which will meaningfully increase our own abilities and the effectiveness of the software and hardware systems we in turn create. These VAIs have the potential to move humanity massively forward, leading the way to a bright future.

Siri’s Adam Cheyer calls this exponential programming: “soon, developers will create new programs in collaboration with an artificial intelligence that does much of the heavy lifting. This shift represents the biggest leap forward in productivity and scalability yet.”

In my opinion, creating VAI technologies makes it no more or less likely that AGI will emerge. Rather, it makes it more likely that AGI, if it emerges, will do so with a design we can contemplate.

Augmenting our own abilities is the best possible defense imaginable. This requires an increasingly detailed understanding of our own intelligence, and the ability to likewise self-modify. Our best case scenario requires an AGI that relies on us just as we rely on it. Our best case scenario also requires that we be able to keep pace.


Andrew is an entrepreneur and the author of Accidental Gods, a sci-fi novel. He’s also founder and Chief Product Officer of Conversable, a conversational intelligence platform that facilitates commerce and customer care for companies like Whole Foods, Pizza Hut and Sam’s Club.
