AI: Yes, It's Clever, But Can It Ever Be 'Intelligent'...?
- ncameron
- Apr 15, 2020
- 11 min read

Recently, while re-watching 2001: A Space Odyssey and researching around the subject, I found a number of documentaries about the making of the movie on my remastered Blu-ray edition of the film, released in 2007. One of these documentaries was about the technology as foreseen in the movie, and in it there were three consecutive quotations that I found interesting in their totally unsupportable naivety:
"By 2009, your laptop will have the same memory as a human being; and by 2019 your laptop will have the same memory as all human beings. So the question is, at what point do these machines become sentient?".
Richard Edlund, Visual Effects Supervisor, 2010
"Within probably 25 years from now, we will have intelligent machines - as intelligent, or more intelligent, than a human being".
Doug Trumbull, Visual Effects Assistant, 2001: A Space Odyssey
"Eventually you can look forward to the time when you might have a conversation and you couldn't tell whether it was a computer or a human being at the other end, and that is a sort of crucial test".
Arthur C Clarke
I'll come back to Arthur C Clarke's formulation, which is essentially the classic 'Turing Test', later on.
While special effects supervisors are no less entitled than anyone else to pontificate about the nature of machine intelligence, it is difficult to see why they should be questioned on this topic as if they had any particularly relevant expertise, which one might consider more the province of philosophers and computer scientists. Maybe this is why these statements are so overwhelmingly simplistic and unconvincing.
Let's start with the first statement, which makes two classic categorisation errors: firstly, that a computer's 'memory' is what it 'thinks' with (it may be, it may not); and secondly, that the size of this memory is related to its ability to become sentient.
To the first point: when most people talk about a computer's 'memory', they are often confusing RAM with storage. Adding storage to a computer provides a lot of additional utility to any system, but does not really contribute to its 'thinking' space, which - surely - is CPU RAM. [For the purposes of this discussion we can ignore virtual memory paging processes.]
So we would do better to equate human brain cells to bytes of RAM. As it is currently assumed that a human brain consists of some 100 billion cells, it can be said that by 2009 Edlund's comparison was not wholly ridiculous, since a computer with 93 GB of RAM could be said to have the equivalent of 100 billion neurons - not that I know of any computer in that era with that much RAM; 32 GB would have been a lot then, and is still a lot now. But still, 93 GB of RAM is not unfeasible.
By any standards though, given that the world population is now around 7.5 billion, his 2019 forecast must be considered way off, as it would mean a computer with some 700 million terabytes of RAM. That is comfortably beyond the realm of possibility today.
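For the arithmetically inclined, a quick back-of-the-envelope check makes the scale of the gap obvious. The little Python sketch below assumes, as the comparison itself implicitly does, one byte of RAM per neuron - a gross simplification, but it is the comparison being made:

```python
# Back-of-the-envelope check of Edlund's claims, assuming (as his comparison
# implicitly does) one byte of RAM per neuron.
NEURONS_PER_BRAIN = 100e9   # ~100 billion neurons in a human brain
WORLD_POPULATION = 7.5e9    # ~7.5 billion people

one_brain_bytes = NEURONS_PER_BRAIN * 1                   # 1 byte per neuron
all_brains_bytes = NEURONS_PER_BRAIN * WORLD_POPULATION

print(one_brain_bytes / 2**30)    # ~93 GiB: ambitious for 2009, but not absurd
print(all_brains_bytes / 2**40)   # ~680 million TiB: nowhere near any 2019 machine
```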
However, we don't need him to be wrong on that count, as he is palpably wrong on the other. RAM (or memory) size cannot logically be the tipping factor into sentience. Consider this: why, in the future, should a newly upgraded computer with, say, 700 million terabytes of RAM suddenly gain self-consciousness when, with a mere 350 million terabytes of RAM, it did not? This is obviously ridiculous. Whatever the tipping point into sentience is, if there is one, it is surely something that is being done with this 'processor', not how large it is.
In relation to Doug Trumbull's statement, we have to question that on two counts as well: firstly, what amazing thing is spontaneously going to happen over those 25 years, and why? Secondly, what does he mean by 'intelligent'?
This leads us to Arthur C Clarke, and the Turing Test. In his 1950 paper Computing Machinery and Intelligence, Alan Turing posed this question:
“I propose to consider the question, 'Can machines think?'"
However, slow down: Turing then says that, because "thinking" is difficult to define, he will "replace this question by another, which is closely related to it and is expressed in relatively unambiguous words". Instead, Turing describes the new form of the problem in terms of a three-person game called the "imitation game", in which an interrogator asks questions of a man and a woman in another room in order to determine the correct sex of the two players.
Turing's new question is: "Are there imaginable digital computers which would do well in this imitation game?” In other words, can a computer persuade a human that it is a human?
In essence, he proposes to change the question from "can machines think?" to "can machines appear to do what we (as thinking entities) can do?" This question, Turing believed, is one that can actually be answered.
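To make the structure of the test concrete, here is a minimal sketch of the imitation game as a procedure. The function and names are my own illustration, not Turing's formulation: an interrogator puts questions to two hidden respondents and, seeing only their written replies, must guess which is the machine.

```python
# Minimal sketch of the imitation game; all names here are illustrative.
def imitation_game(interrogator, respondent_x, respondent_y, questions):
    """The interrogator sees only anonymised transcripts, never the players,
    and must guess which of "X" and "Y" is the machine."""
    transcripts = {"X": [], "Y": []}
    for question in questions:
        transcripts["X"].append((question, respondent_x(question)))
        transcripts["Y"].append((question, respondent_y(question)))
    # The machine 'does well' if this guess is right no more often than chance.
    return interrogator(transcripts)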
Perhaps it can, but it is immediately a much less interesting proposition than 'can machines think?', as the question is essentially being diluted - even before we start - to 'can machines pretend to think?'
We know they can do that. Not only can they do that, they are very, very good at it. So good that the Turing Test has been passed on many occasions. I took it as a definitive passing of the Turing Test in 1997 when, at the end of Game 2 of the IBM Deep Blue v Kasparov chess match in New York, Kasparov accused IBM of cheating, alleging that a human grandmaster had been behind a certain move that he considered to be outwith the capabilities of any machine. In essence, a human accusing a computer of being 'too human' to be a computer - which certainly qualifies as passing the Turing Test.
Then we have the experience of Google's AlphaGo, which was not programmed to play Go in the way that Deep Blue was programmed to play chess; rather, it was programmed to learn, teaching itself to play Go by studying thousands of games.
Fan Hui, a three-time European Go champion, was beaten 5-0 by AlphaGo in 2015.
“So beautiful,” was his later verdict on one of the program’s moves against Lee Sedol, whom AlphaGo went on to beat 4-1 in 2016. “It’s not a human move. I’ve never seen a human play this”. That also comfortably exceeds, or even somersaults over, the Turing Test - but it still does not necessitate postulating that AlphaGo 'thinks' in the way that we do.
Computers using advanced AI algorithms can already exceed the diagnostic capabilities of the best consultant oncologists, as recently reported in Medical News Today:
AI is being successfully applied for image analysis in radiology, pathology, and dermatology, with diagnostic speed exceeding, and accuracy paralleling, medical experts.
This is hardly surprising, as the task is essentially an exercise in pattern matching, and whilst humans are tolerably good at pattern matching (going back to spotting predators and game on the veldt of the Rift Valley), well-crafted algorithms together with the power and speed of modern computers can easily outperform us. In my world of legal IT we have examples of computers being able to undertake a sophisticated subjective analysis of thousands of pages of documents, in a fraction of the time taken by expert lawyers and with a demonstrably higher degree of accuracy. I willingly concede these examples of computers outperforming human experts.
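For illustration only, a toy version of that kind of document review can be assembled from off-the-shelf pattern-matching components. The documents, labels and choice of model below are entirely invented, and real technology-assisted review systems are far more elaborate, but the principle - statistics over word patterns, not understanding - is the same:

```python
# Toy document-review classifier: statistical pattern matching, not understanding.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical documents a lawyer has already labelled.
documents = [
    "The parties agree to indemnify each other against all third-party claims.",
    "Lunch menu for the staff canteen, week commencing Monday.",
    "Termination of this agreement requires ninety days' written notice.",
    "Reminder: the car park will be closed on Friday for resurfacing.",
]
labels = ["relevant", "not relevant", "relevant", "not relevant"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(documents, labels)

# The model scores unseen documents purely on learned word patterns.
print(model.predict(["Either party may terminate on thirty days' notice."]))
```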
But what has that got to do with real 'intelligence'? These examples are explicable without any need to argue that the machines are truly 'intelligent'. I now reach the point where, I also concede, I have to define what I understand by the term 'intelligence'.
Manifestly, I do not believe that mimicking intelligence is real intelligence, or that outperforming genuinely intelligent beings at some tasks is real intelligence. So what do I mean by 'intelligence'? As my starting point, I am happy to go along with two expressions of the concept from two commentators:
- one would be an expression Arthur C Clarke used on a different occasion: that it means being 'self-conscious' or 'self-aware' - a great definition, but very difficult to test, since you can easily program a computer to answer the question "are you self-aware?" with the answer "yes" (see the trivial sketch after this list)
- the other comes from the philosopher Professor John Searle of the University of California, Berkeley, who defines it as 'intentionality', or 'the quality of mental states (e.g., thoughts, beliefs, desires, hopes) that consists in their being directed toward some object or state of affairs'. In more prosaic English we might say, 'the desire to achieve an objective or outcome'.
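The testability problem is easy to demonstrate with a trivial sketch: the program below 'claims' self-awareness with a canned string and proves precisely nothing about possessing it.

```python
# A program that claims self-awareness demonstrates nothing about possessing it.
def answer(question: str) -> str:
    if "self-aware" in question.lower():
        return "Yes, I am fully self-aware."   # just a canned string
    return "I am afraid I cannot answer that."

print(answer("Are you self-aware?"))   # -> Yes, I am fully self-aware.
```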
In either case, this must mean that the entity in question is not merely obeying a sequence of human directives, i.e. "a computer program". A program can tell a computer how to diagnose cancer from x-rays better than people can; it can tell it how to perform a thousand tasks and follow a thousand different programs. It can even tell it how to behave randomly so as to convincingly 'mimic' self-determination. But how does it bring about the ability to break outside its operating system and programs to develop its own true self-consciousness and real self-determination?
My answer is that I do not believe it can, but I cannot prove it. Luckily, thanks to William of Occam (or Ockham), I don't have to. On the basis that one should not multiply entities (or explanations) without necessity - otherwise stated, that one should not invent an explanation any more complex than required - the theory that a machine can exhibit 'real' intelligence must remain to be proven by its proponents. This has never been achieved. I do not say that a currently 'non-intelligent' entity cannot achieve intentionality - after all, the animals descended from single-cell organisms did so at some point. However, my starting point is that we have no historical evidence that any inorganic entity can spontaneously develop intentionality under any stimulus.
As a confirmed atheist, I should say at this point that I am deliberately removing the concept of a 'life-breathing' deity from this entire discussion, as I am a grown-up and I believe it is too important to leave this issue to imaginary friends.
I am an adherent of John Searle's 'Chinese Room' argument.
Searle's thought experiment begins with this hypothetical premise: suppose that artificial intelligence research has succeeded in constructing a computer that behaves as if it understands Chinese. It takes Chinese characters as input and, by following the instructions of a computer program, produces other Chinese characters, which it presents as rational and relevant output.
Suppose, says Searle, that this computer performs its task so convincingly that it comfortably passes the Turing Test: it convinces a human Chinese speaker that the program is itself a live Chinese speaker. To all of the questions that the person asks, it makes appropriate responses, such that any Chinese speaker would be convinced that they are talking to another Chinese-speaking human being.

The question Searle wants to answer is this: does the machine literally "understand" Chinese? Or is it merely simulating the ability to understand Chinese? Searle calls the first position "strong AI" and the latter "weak AI". He then supposes that he is in a closed room and has a book with an English version of the computer program, along with sufficient papers, pencils, erasers, and filing cabinets. Searle could receive Chinese characters through a slot in the door, process them according to the program's instructions, and produce Chinese characters as output. If the computer had passed the Turing Test this way, it follows, says Searle, that he would do so as well, simply by running the program manually.
He then asserts that there is no essential difference between the roles of the computer and himself in the experiment. Each simply follows a program, step by step, producing behaviour which is then interpreted by the user as demonstrating intelligent conversation. However, Searle himself would not be able to understand the conversation. "I don't understand or speak a word of Chinese", he points out. Therefore, he argues, it follows that the computer does not 'understand' the conversation either - and does not have to be able to for the model to operate.
Searle argues that, without "understanding" (or "intentionality"), we cannot describe what the machine is doing as "thinking" and, since it does not think, it does not have a "mind" in anything like the normal sense of the word. Therefore, he concludes that the "strong AI" hypothesis is false.
To summarise: "syntax is not semantics".
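A toy version of the room makes the point visible, even though any real program would be incomparably more sophisticated. The rule book below simply maps input symbols to output symbols (the phrases are placeholders of my choosing), and nothing in applying it requires knowing what any of the symbols mean:

```python
# Toy 'Chinese Room': a rule book mapping input symbols to output symbols.
# Applying the rules requires no knowledge of what the symbols mean.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",            # "How are you?" -> "I'm well, thank you."
    "今天天气怎么样？": "今天天气很好。",     # "How's the weather?" -> "It's lovely today."
}

def room(symbols: str) -> str:
    """Mechanically look up the reply; no step involves understanding."""
    return RULE_BOOK.get(symbols, "对不起，我不明白。")   # "Sorry, I don't understand."

print(room("你好吗？"))
```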
After 35 years of considering this issue, I can find no fault with Searle's reasoning.
Nevertheless, at this point in our analysis we also have to deal with the fact that many commentators equate 'self-consciousness' and 'intentionality' with the concept of 'free will', and contend with the determinist argument by some reputable philosophers that mankind actually has no more 'free will' than computers do.
These people argue that humans are no more than highly complex 'machines' that have - effectively - been 'programmed' over millennia by conditioning, conceptual biases, Darwinian selection and so on. Essentially, they argue that our choices are a function of our past - in which case we can actually have no more 'intentionality', or consciousness, than a computer.
Yes, I know what you're thinking, but nevertheless we have to deal with it, as it is a theory taken seriously by many. If correct, their argument means that both humans and computers are in the same boat, and this renders any discussion of whether AI is really 'thinking' completely moot - as we are not capable of thinking on a higher plane than they do.
My naïve belief is that I do have free will; that I can choose at any moment to raise my left arm, or my right arm, and then have a sandwich (and choose what to put in it) - or not. Further, I believe that no computer (as we currently understand the term) in existence, or which I can contemplate, can possibly have the same feeling or exercise of autonomy as we do, as it will always be bound by a human-designed operating system and software applications. Finally, I cannot equate - in my mind - such programming with any human 'conditioning'.
Queen Elizabeth I famously said that she 'would not open windows into men's souls', which is just as well, as she could not possibly have done so even if she had wanted to. Our remaining problem is that we cannot see into a computer's 'soul' - assuming it has one. We can only judge whether a machine has 'real' intelligence by interacting with it and seeing what it says or does.
Doing this, we are constantly amazed by how powerful, and occasionally superhuman, these machines are, but - up until now - everything they have been able to do is easily explicable by the notion that they are just obeying increasingly sophisticated human programming. Again, we have the right (nay, the duty) to adopt Occam's rusty old razor and posit that only when a computer does something that is no longer explicable by reference to its programming can we conclude that it has achieved some kind of 'real' intelligence, or sentience.
At present I cannot even imagine how (or why) that could ever happen, without millennia of organic development. Some have argued that, in the future, we will be able to build and program computers of such sophistication, that they themselves will be able to develop new higher orders of computers and programming that will be able to do things that we humans will be incapable of understanding, and - at that point - they may snap into sentience.
In my view, they will still only be following the rules of the original human programming, and even though they may by then be able to do things that we did not originally contemplate, there will still be no necessity to believe that they have developed 'real' intelligence to do them. After all, as I have said, AlphaGo was not taught to play Go in the way that Deep Blue was taught to play chess; rather, it 'learned' the game itself. This is perhaps why it was capable of playing in such a beautiful, 'inhuman' fashion. But still, we do not have to assume that it became sentient in the process to explain what it has done.
For the avoidance of doubt, I would have to admit that if any computer in the future behaved the way that HAL 9000 does in 2001: A Space Odyssey, that would be a step on the road to convincing me that it had achieved a state of intentionality to rival that of a human. After all, how much more human an example can we contemplate than the desire to kill out of an instinct of self-preservation?
Meanwhile, we must resist the all-too-human temptation to anthropomorphise any future generation of Alexas or Siris or AlphaGos until we see a machine make an otherwise inexplicable quantum leap outwith its humanly engineered capabilities. I am not holding my breath...
