
Strong AI, Weak AI & Superintelligence

The press, the machine, the railway, the telegraph are premises whose thousand-year conclusion no one has yet dared to draw. - Friedrich Nietzsche

Strong AI vs. Weak AI

Technological progress, like evolution, is usually non-linear. That is, it moves at different speeds, and it can accelerate quickly. Why? Because both phenomena result from the slow accumulation of many causes, a confluence of conditions. Many of those factors are necessary but, on their own, insufficient to trigger a breakthrough. Only when all of them are present is the event unleashed.

The strange thing is that all of this took so long and happened so suddenly. -Ted Nelson, author of “Computer Lib”, on the advent of the personal computer

What Is Strong AI? What Is General AI?

Let’s talk synonyms: strong AI, general AI, artificial general intelligence (AGI) and superintelligence all basically refer to the same thing. And what they refer to is an algorithm or set of algorithms that can perform all tasks as well as or better than humans. And to be clear, that does not exist. Not only does strong AI not exist as anything more than an idea, but we don’t know how to get there yet. We are in the slow phase of building AI.

We call it strong because we imagine it will be stronger than us. We call it general because it will apply to all problems. The opposite of strong AI is weak AI. The opposite of general AI is narrow AI.

As these words are being written in 2018, we live in an age of weak AI. Weak AI is an algorithm that has been trained to do one thing, and it does that one thing very well. For viewers familiar with the movie Rain Man, Dustin Hoffman’s character is a good analogy for weak AI: good at solving a few very specific problems (like counting matchsticks on the floor), bad at life.

The AI that data scientists are deploying to the world right now is a collection of machine-learning models, each of which performs one task well. They are like a crowd of savants babbling their narrow responses to the world. That said, DeepMind’s algorithms, most recently AlphaZero, are able to solve a wider and wider array of games. They are generalizing beyond a single problem, and that may be the key to stronger AI.
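To make the contrast concrete, here is a minimal sketch of one such narrow, single-task model. The library (scikit-learn) and the task (classifying 8x8 images of handwritten digits) are illustrative choices of ours, not anything the article specifies; the point is only that a weak-AI model learns one mapping and nothing else.

```python
# A minimal sketch of narrow ("weak") AI: one model, one task.
# scikit-learn and the digits dataset are assumptions made for illustration.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# The narrow task: 1,797 labeled 8x8 images of handwritten digits.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train a single-task classifier. It learns this one mapping and nothing else.
model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)

# It predicts digits well, but hand it a photo, a sentence or a chess position
# and it cannot even represent the question.
print("digit accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

What gets deployed today is typically many such models running side by side, one per task: the crowd of savants described above.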


So AI Isn’t Strong, but Is It Getting Stronger?

The two organizations doing the most interesting work on general AI are probably Google and OpenAI, a research lab founded by Elon Musk and Sam Altman, among others. Google’s AI research mostly happens at DeepMind and Google Brain.

If AI is getting stronger, it’s because of them. And AI is getting stronger, at least in the sense that it is able to produce more and more accurate predictions about the data you feed it. The progress we have made in computer vision over the last decade, approaching 100% accuracy in correctly recognizing objects in images, is one indicator of increasingly strong AI. The ability of DeepMind’s algorithms to win more and more games, and to transfer learning from one game to another, is a second indication. But we’re not there yet.¹

While strong AI is something to worry about, it’s nowhere near the most important threat that humanity faces. Those who have made it their life mission to raise awareness about superintelligence will probably be annihilated by another threat before we get to strong AI. It’s not that they’re totally wrong, it’s just that their priorities are skewed.

The beliefs of AI millenarians suffer from another, deeper flaw. Their propositions are almost metaphysical, although they pretend to apply to the physical world. Superintelligence, in its positive or negative incarnation, has been prophesied for decades, but in the eyes of its prophets, it is always a few decades away: near enough to frighten us with its possibility, and far enough away for fearmongers to avoid any accountability. In the meantime, however, they raise a lot of money for research.

This brings us to an important point: the discussion about AI is fundamentally fideistic. That is, it has the characteristics of a faith-based argument rather than a scientific debate. As G.K. Chesterton said, “The special mark of the modern world is not that it is skeptical, but that it is dogmatic without knowing it.”

In its association with faith and divinity, AI is like many other powerful technologies. Prometheus stole fire from Zeus. The railroad gave us the gospel train. And more recently, Anthony Levandowski, whom Google accused of stealing their self-driving technology, founded his own church of AI.

Scare Quotes About Superintelligence, in Case You Were Wondering

A number of respected figures in science and technology have attempted to warn humanity about the dangers of strong AI. They are the AI millenarians.

The genie is out of the bottle. We need to move forward on artificial intelligence development but we also need to be mindful of its very real dangers. I fear that AI may replace humans altogether. If people design computer viruses, someone will design AI that replicates itself. This will be a new form of life that will outperform humans. - Stephen Hawking in WIRED

I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it’s probably that…. Increasingly scientists think there should be some regulatory oversight maybe at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like yeah he’s sure he can control the demon. Didn’t work out. - Elon Musk at MIT

I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned. - Bill Gates during his Reddit AMA

It is the business of the future to be dangerous. - Alfred North Whitehead

Thinking about AI is the cocaine of technologists: it makes us excited, and needlessly paranoid. - Chris Nicholson in WIRED ;)

Footnotes

1) In his review of Steven Pinker’s book, “Enlightenment Now”, Scott Aaronson analyzes Pinker’s AI optimism in the following paragraphs:

Then there’s the matter of takeover by superintelligent AI. I’ve now spent years hanging around communities where it’s widely accepted that “AI value alignment” is the most pressing problem facing humanity. I strongly disagree with this view—but on reflection, not because I don’t think AI could be a threat; only because I think other, more prosaic things are much more imminent threats! I feel the urge to invent a new, 21st-century Yiddish-style proverb: “oy, that we should only survive so long to see the AI-bots become our worst problem!”

Pinker’s view is different: he’s dismissive of the fear (even putting it in the context of the Y2K bug, and people marching around sidewalks with sandwich boards that say “REPENT”), and thinks the AI-risk folks are simply making elementary mistakes about the nature of intelligence. Pinker’s arguments are as follows: first, intelligence is not some magic, all-purpose pixie dust, which humans have more of than animals, and which a hypothetical future AI would have more of than humans. Instead, the brain is a bundle of special-purpose modules that evolved for particular reasons, so “the concept [of artificial general intelligence] is barely coherent” (p. 298). Second, it’s only humans’ specific history that causes them to think immediately about conquering and taking over, as goals to which superintelligence would be applied. An AI could have different motivations entirely—and it will, if its programmers have any sense. Third, any AI would be constrained by the resource limits of the physical world. For example, just because an AI hatched a brilliant plan to recursively improve itself, doesn’t mean it could execute that plan without (say) building a new microchip fab, acquiring the necessary raw materials, and procuring the cooperation of humans. Fourth, it’s absurd to imagine a superintelligence converting the universe into paperclips because of some simple programming flaw or overliteral interpretation of human commands, since understanding nuances is what intelligence is all about:

“The ability to choose an action that best satisfies conflicting goals is not an add-on to intelligence that engineers might slap themselves in the forehead for forgetting to install; it is intelligence. So is the ability to interpret the intentions of a language user in context” (p. 300).

I’ll leave it to those who’ve spent more time thinking about these issues to examine these arguments in detail (in the comments of this post, if they like). But let me indicate briefly why I don’t think they fare too well under scrutiny.

For one thing, notice that the fourth argument is in fundamental tension with the first and second. If intelligence is not an all-purpose elixir but a bundle of special-purpose tools, and if those tools can be wholly uncoupled from motivation, then why couldn’t we easily get vast intelligence expended toward goals that looked insane from our perspective? Have humans never been known to put great intelligence in the service of ends that strike many of us as base, evil, simpleminded, or bizarre? Consider the phrase often applied to men: “thinking with their dicks.” Is there any sub-Einsteinian upper bound on the intelligence of the men who’ve been guilty of that?

Second, while it seems clear that there are many special-purpose mental modules—the hunting instincts of a cat, the mating calls of a bird, the pincer-grasping or language-acquisition skills of a human—it seems equally clear that there is some such thing as “general problem-solving ability,” which Newton had more of than Roofus McDoofus, and which even Roofus has more of than a chicken. But whatever we take that ability to consist of, and whether we measure it by a scalar or a vector, it’s hard to imagine that Newton was anywhere near whatever limits on it are imposed by physics. His brain was subject to all sorts of archaic evolutionary constraints, from the width of the birth canal to the amount of food available in the ancestral environment, and possibly also to diminishing returns on intelligence in humans’ social environment (Newton did, after all, die a virgin). But if so, then given the impact that Newton, and others near the ceiling of known human problem-solving ability, managed to achieve even with their biology-constrained brains, how could we possibly see the prospect of removing those constraints as just a narrow technological matter, like building a faster calculator or a more precise clock?

Third, the argument about intelligence being constrained by physical limits would seem to work equally well for a mammoth or cheetah scoping out the early hominids. The mammoth might say: yes, these funny new hairless apes are smarter than me, but intelligence is just one factor among many, and often not the decisive one. I’m much bigger and stronger, and the cheetah is faster. (If the mammoth did say that, it would be an unusually smart mammoth as well, but never mind.) Of course we know what happened: from wild animals’ perspective, the arrival of humans really was a catastrophic singularity, comparable to the Chicxulub asteroid (and far from over), albeit one that took between 10^4 and 10^6 years depending on when we start the clock. Over the short term, the optimistic mammoths would be right: pure, disembodied intelligence can’t just magically transform itself into spears and poisoned arrows that render you extinct. Over the long term, the most paranoid mammoth on the tundra couldn’t imagine the half of what the new “superintelligence” would do.

Finally, any argument that relies on human programmers choosing not to build an AI with destructive potential, has to contend with the fact that humans did invent, among other things, nuclear weapons—and moreover, for what seemed like morally impeccable reasons at the time. And a dangerous AI would be a lot harder to keep from proliferating, since it would consist of copyable code. And it would only take one. You could, of course, imagine building a good AI to neutralize the bad AIs, but by that point there’s not much daylight left between you and the AI-risk people.
