
Is AI Intelligent, or is it just very good at pretending?

The recent releases of AI systems such as GPT-4 have sparked popular interest in such systems to an unparalleled degree; ChatGPT reached its first million users faster than any other website before it. The reason is simple: ChatGPT and its cousins (Midjourney, etc.) appear capable of intelligence and creativity and are thus moving into domains that were previously reserved for humans.

A new fundamental question

The oldest and most fundamental question we have been asking ourselves is probably “why are we here, and what is the meaning of life?”. Until very recently, the question “what is intelligence?” was not nearly as pressing, because the answer in most cases was obvious: humans - some more so than others - are intelligent, and animals are either much less so or not at all. With the release of systems such as GPT and Midjourney, the question of what constitutes intelligence has taken on renewed relevance, and it behooves us to revisit some of the arguments underpinning it.

Defining Intelligence

Before we can ask whether the current generation of AIs is intelligent, we need to define what actually constitutes intelligence. Generally speaking, intelligence can be seen as the ability to acquire and apply knowledge and skills, and to engage in some degree of abstract reasoning. But even that is not a comprehensive definition: just think of creativity or emotional intelligence.

Whatever the exact definition of intelligence may be, the current generation of AIs seems able to acquire and apply general knowledge. Systems like ChatGPT can process and assess data far more quickly than humans, and they can, to a limited degree, even come up with solutions that we humans would never have thought of. They are, at least in some respects, more capable than we are.

However, we should not forget that, while extremely capable in some respects, these systems sometimes fail basic comprehension tests and in other cases confidently return falsehoods that any human familiar with the subject matter would immediately recognize as such.

Ranking Intelligence

If we were to make a ranking on an intelligence scale, it would look something like this:

  1. Albert Einstein, Isaac Newton, etc.
  2. The great bulk of humanity
  3. Village idiots, etc.
  4. Apes, monkeys, etc.
  5. Other mammals, some birds, etc.

The ranking of #3 and #4 could arguably be flipped, but you get what I'm trying to do here. Anything below #5 is typically agreed to be just not intelligent. It should be noted that being intelligent is not a prerequisite for evolutionary success: viruses are not intelligent, yet they are amazingly successful, so much so that they literally permeate Earth's biosphere.

Enter GPT/ChatGPT

When you ask ChatGPT some questions and it replies with answers that appear spot on, it is easy to get swept away in the moment and think it is indeed intelligent. The fact that ChatGPT sometimes gets things wrong should not change our fundamental evaluation of whether it's intelligent or not: after all, we humans get things wrong all the time without immediately discounting our own intelligence.

So if our criteria for intelligence are the ability to absorb knowledge and then use that knowledge in previously unseen ways to answer questions and even write new pieces of text, then it is clear that AIs like GPT and ChatGPT are indeed intelligent.

(Mis)Judging a book by its cover

At this point, you might be tempted to say “not so fast”. Because ChatGPT ‘lives’ (not really, but you get my point) inside a computer and communicates through a browser window, you might be inclined to dismiss any chance of it being intelligent, because we all know that computers aren't and can't be intelligent: they just process data, massage it according to some complicated formulas, and spit it back out. That is all well and good, but what if our brains essentially do the same? What if our thoughts are just statistical models of even greater complexity? We are remarkably bad at understanding how we think, and research over the years has shown that it is often surprisingly easy to trick our brains into misreporting or misunderstanding reality.

Another thing to consider is the form factor: we discount a computer's ability to be intelligent because that's the view we've become accustomed to. However, if a gorilla, a chimpanzee, or even your dog were able to do what ChatGPT is capable of, you would be amazed at its intelligence, and once you had made sure you were not being fooled by some sort of trick, you would marvel at the miracle you had just witnessed.

Are we asking the wrong question?

So maybe our definition of intelligence is too narrow or just plain wrong. After all, if a chess program manages to beat the world's best players, does that mean it is intelligent? I think most of us would agree that the answer is either a plain “No” or at best a “Not really”. So even setting aside the difficulty of measuring intelligence, the question “is it intelligent?” is at best misleading and at worst entirely the wrong question to ask.

So what do we really mean when we ask “is it intelligent”? I would say that what we are really asking is “is it sentient?”, that is, whether it is aware of its own existence and thus conscious. And on that scale, ChatGPT is a pretty clear “No”. While it may be able to answer all sorts of questions, it has no emotional concept of self. It is a very fancy word-combining machine that may appear intelligent simply because of the capabilities it offers, but it is certainly not sentient. It will readily admit as much if you ask it how it feels about certain things, throwing any semblance of consciousness out the window. For example, it may be able to show you what a piece of bread looks like, describe it, tell you how it is made, and give you 50 recipes along with a history of baking techniques, but it will never yearn for the taste of freshly baked bread, because it can't yearn for (or, for that matter, feel) anything.
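To make the “word-combining machine” idea a little more concrete, here is a minimal sketch in Python of statistical word-combining in its crudest form: a toy bigram model. To be clear, this is an enormous simplification and not how GPT actually works (GPT-style systems use far more sophisticated neural networks), and the tiny corpus and function names here are invented purely for illustration. But it shows the principle: text that reads as meaningful can be produced by chaining statistically plausible continuations, with no understanding or yearning anywhere in the loop.

import random
from collections import defaultdict

# Toy "word-combining machine": record which word tends to follow which,
# then chain those statistics together to produce new text.
corpus = ("freshly baked bread smells wonderful and "
          "freshly baked bread tastes even better").split()

followers = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word].append(next_word)

def generate(start, max_words=8):
    word, output = start, [start]
    for _ in range(max_words):
        if word not in followers:
            break  # dead end: no observed continuation
        word = random.choice(followers[word])  # pick a statistically plausible next word
        output.append(word)
    return " ".join(output)

print(generate("freshly"))  # e.g. "freshly baked bread tastes even better"

The output can look coherent, even sensible, yet nothing in the loop knows what bread is. The point is not that GPT is a bigram model (it is not), but that apparent fluency alone tells us nothing about sentience.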

To AGI or not to AGI, that is the question

What we have now with this new generation of AI systems is absolutely amazing. They are clearly capable of performing tasks that show their potential to make our lives better and easier (and to put quite a few people out of work), but they are specialized systems, when what we really have in mind when we think about intelligence is an Artificial General Intelligence (AGI). Because if a machine/system/whatever is capable of general intelligence, then something like consciousness might emerge from it.

At this point, we can only speculate what impact such systems would have, but we'll cross that bridge when we get to it. Human history shows that when we know something can be done, it eventually will be done, even if the consequences are entirely unclear. But since GPT, ChatGPT, Midjourney, and other such systems are specialized systems tuned to a specific purpose, they don't get us one step closer to AGI, at least for now. However, I believe there is a possibility that consciousness doesn't require any secret sauce beyond the type of models we already have, but rather that it emerges spontaneously once the underlying models reach a sufficient level of complexity. If that is the case, humanity might just be in for a surprise as the current systems are further iterated and improved.

Then again, maybe we'll find that these systems keep getting better but still don't display consciousness and self-awareness. And, no matter how exciting the prospect of interacting with a conscious machine intelligence may be, maybe that's a good thing. Because while a proper AGI would most likely turn out to be fascinating on many levels and have the potential to unleash a number of scientific and philosophical revolutions, it could also lead to all sorts of potentially terrifying consequences. While the eventual emergence of an AGI is probably unavoidable, we may be well advised to be careful what we wish for...

A parting thought

The year is 2031. The first AI has started training on a qubit cluster. God is born a nanosecond later.

We live in fascinating but also slightly terrifying times.

