A.I. Wiki


Strong AI & General AI

The strange thing is that all of this took so long and happened so suddenly. -Ted Nelson, author of “Computer Lib”, referring to the advent of the personal computer

Strong AI, General AI, and a Confluence of Criteria

The quote at the top is the tech sector’s version of an exchange Ernest Hemingway wrote in “The Sun Also Rises”:

  “How did you go bankrupt?” Bill asked.

  “Two ways,” Mike said. “Gradually and then suddenly.”

Technological progress, like personal bankruptcy, is usually non-linear: it creeps along slowly, then accelerates all at once. Why? Because a breakthrough results from the slow accumulation of many causes, each of them necessary but, on its own, insufficient. Only when all of them are present together is the event unleashed.

What Is Strong AI? What Is General AI?

Let’s talk synonyms: strong AI, general AI, artificial general intelligence (AGI) and superintelligence all basically refer to the same thing. And what they refer to is an algorithm or set of algorithms that can perform all tasks as well as or better than humans. And to be clear, that does not exist. Not only does strong AI not exist as anything more than an idea, but we don’t know how to get there yet. We are in the slow phase of building AI.

We call it strong because we imagine it will be stronger than we are. We call it general because it will apply to all problems. The opposite of strong AI is weak AI; the opposite of general AI is narrow AI.

As these words are being written in 2018, we live in an age of weak AI. Weak AI is an algorithm that has been trained to do one thing, and it does that one thing very well. For readers familiar with the movie Rain Man, Dustin Hoffman’s character is a good analogy for weak AI: good at solving a few very specific problems (like counting the toothpicks spilled on the floor), bad at life.

The AI that data scientists are deploying to the world right now is a collection of machine-learning models, each of which performs one task well. They are like a crowd of savants babbling their narrow responses to the world. That said, DeepMind’s algorithms are able to master a wider and wider array of games: first Atari video games, and most recently, with AlphaZero, Go, chess, and shogi. They are generalizing beyond a single problem, and that ability to generalize may be the most important step toward stronger AI.
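To make “narrow” concrete, here is a toy sketch (purely illustrative, not drawn from any system mentioned above): a single-neuron perceptron trained only on logical AND. It masters that one task completely and is useless for anything else, which is weak AI in miniature.

```python
# Illustrative toy model of "narrow AI": a single-neuron perceptron
# trained on exactly one task (logical AND). It learns that task well
# and nothing else.

def train_perceptron(data, epochs=20, lr=0.1):
    """Train a single neuron on (inputs, label) pairs with the perceptron rule."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in data:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred  # -1, 0, or 1
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(model, x1, x2):
    w, b = model
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# The one task it was trained for: logical AND.
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
model = train_perceptron(and_data)
print([predict(model, x1, x2) for (x1, x2), _ in and_data])  # → [0, 0, 0, 1]
```

The trained model answers AND perfectly, but ask it anything outside its training task (XOR, say, or anything non-linear) and it fails: that gap between competence on one problem and helplessness on all others is the defining trait of weak, narrow AI.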

So AI Isn’t Strong, but Is It Getting Stronger?

The two organizations doing the most interesting work on general AI are probably Google and OpenAI, a research lab co-founded by Elon Musk and Sam Altman, among others. Google’s AI research mostly happens at DeepMind and Google Brain.

If AI is getting stronger, it’s because of them. And AI is getting stronger, at least in the sense that it can produce more and more accurate predictions about the data you feed it. The progress made in computer vision over the last decade, with algorithms now recognizing objects in images at or near human-level accuracy, is one indicator of increasingly strong AI. The ability of DeepMind’s algorithms to win more and more games, and to transfer learning from one game to another, is a second. But we’re not there yet.

In his review of Steven Pinker’s book, “Enlightenment Now”, Scott Aaronson analyzes Pinker’s AI optimism in the following paragraphs:

Then there’s the matter of takeover by superintelligent AI. I’ve now spent years hanging around communities where it’s widely accepted that “AI value alignment” is the most pressing problem facing humanity. I strongly disagree with this view—but on reflection, not because I don’t think AI could be a threat; only because I think other, more prosaic things are much more imminent threats! I feel the urge to invent a new, 21st-century Yiddish-style proverb: “oy, that we should only survive so long to see the AI-bots become our worst problem!”

Pinker’s view is different: he’s dismissive of the fear (even putting it in the context of the Y2K bug, and people marching around sidewalks with sandwich boards that say “REPENT”), and thinks the AI-risk folks are simply making elementary mistakes about the nature of intelligence. Pinker’s arguments are as follows: first, intelligence is not some magic, all-purpose pixie dust, which humans have more of than animals, and which a hypothetical future AI would have more of than humans. Instead, the brain is a bundle of special-purpose modules that evolved for particular reasons, so “the concept [of artificial general intelligence] is barely coherent” (p. 298). Second, it’s only humans’ specific history that causes them to think immediately about conquering and taking over, as goals to which superintelligence would be applied. An AI could have different motivations entirely—and it will, if its programmers have any sense. Third, any AI would be constrained by the resource limits of the physical world. For example, just because an AI hatched a brilliant plan to recursively improve itself, doesn’t mean it could execute that plan without (say) building a new microchip fab, acquiring the necessary raw materials, and procuring the cooperation of humans. Fourth, it’s absurd to imagine a superintelligence converting the universe into paperclips because of some simple programming flaw or overliteral interpretation of human commands, since understanding nuances is what intelligence is all about:

“The ability to choose an action that best satisfies conflicting goals is not an add-on to intelligence that engineers might slap themselves in the forehead for forgetting to install; it is intelligence. So is the ability to interpret the intentions of a language user in context” (p. 300).

I’ll leave it to those who’ve spent more time thinking about these issues to examine these arguments in detail (in the comments of this post, if they like). But let me indicate briefly why I don’t think they fare too well under scrutiny.

For one thing, notice that the fourth argument is in fundamental tension with the first and second. If intelligence is not an all-purpose elixir but a bundle of special-purpose tools, and if those tools can be wholly uncoupled from motivation, then why couldn’t we easily get vast intelligence expended toward goals that looked insane from our perspective? Have humans never been known to put great intelligence in the service of ends that strike many of us as base, evil, simpleminded, or bizarre? Consider the phrase often applied to men: “thinking with their dicks.” Is there any sub-Einsteinian upper bound on the intelligence of the men who’ve been guilty of that?

Second, while it seems clear that there are many special-purpose mental modules—the hunting instincts of a cat, the mating calls of a bird, the pincer-grasping or language-acquisition skills of a human—it seems equally clear that there is some such thing as “general problem-solving ability,” which Newton had more of than Roofus McDoofus, and which even Roofus has more of than a chicken. But whatever we take that ability to consist of, and whether we measure it by a scalar or a vector, it’s hard to imagine that Newton was anywhere near whatever limits on it are imposed by physics. His brain was subject to all sorts of archaic evolutionary constraints, from the width of the birth canal to the amount of food available in the ancestral environment, and possibly also to diminishing returns on intelligence in humans’ social environment (Newton did, after all, die a virgin). But if so, then given the impact that Newton, and others near the ceiling of known human problem-solving ability, managed to achieve even with their biology-constrained brains, how could we possibly see the prospect of removing those constraints as just a narrow technological matter, like building a faster calculator or a more precise clock?

Third, the argument about intelligence being constrained by physical limits would seem to work equally well for a mammoth or cheetah scoping out the early hominids. The mammoth might say: yes, these funny new hairless apes are smarter than me, but intelligence is just one factor among many, and often not the decisive one. I’m much bigger and stronger, and the cheetah is faster. (If the mammoth did say that, it would be an unusually smart mammoth as well, but never mind.) Of course we know what happened: from wild animals’ perspective, the arrival of humans really was a catastrophic singularity, comparable to the Chicxulub asteroid (and far from over), albeit one that took between 10^4 and 10^6 years depending on when we start the clock. Over the short term, the optimistic mammoths would be right: pure, disembodied intelligence can’t just magically transform itself into spears and poisoned arrows that render you extinct. Over the long term, the most paranoid mammoth on the tundra couldn’t imagine the half of what the new “superintelligence” would do.

Finally, any argument that relies on human programmers choosing not to build an AI with destructive potential, has to contend with the fact that humans did invent, among other things, nuclear weapons—and moreover, for what seemed like morally impeccable reasons at the time. And a dangerous AI would be a lot harder to keep from proliferating, since it would consist of copyable code. And it would only take one. You could, of course, imagine building a good AI to neutralize the bad AIs, but by that point there’s not much daylight left between you and the AI-risk people.
