AI, Language, Creativity, Evolution & Cyborgs: What Will Happen when Machines Pass the Turing Test?
Why this counter-factual setting?
It may be argued that, because of its detachment from reality, the value of this research would be significantly undermined if, at some unpredictable future point, scientific evidence were to prove that artificial intellects can never fully resemble human intellects. Yet this cannot fully negate the value of an imaginary background. In the first place, science has, over the course of history, repeatedly disproven things it had earlier proven. With all the scientific knowledge and reasoning capability they possessed, humans would not have accepted the existence of irrational numbers, of Pluto, or of a quantum physics that contradicted traditional physical theories, until advances in technology allowed a more comprehensible perception of these objects. Though it is possible that human-like AI will be ruled out by some theory in the near future, it is arguably impossible for such a theory to remain an eternal fact. In the second place, technology has often advanced at a pace that keeps it ahead of other concerns: applications, safety, ethical issues, and so on. As an intelligence exceeding every other on Earth, the emergence of true AI could reasonably cause serious concern among human intellectuals. Since it is highly likely that AI will continue to percolate into more areas of human life, mindful anticipation and thinking beyond experience help us prepare for the arrival of such technology. Cases of humans speculating by reason about scientific or philosophical propositions are not rare in history: consider Galileo's thought experiment on gravitational acceleration in opposition to Aristotle's idea, or Einstein's visualizations of special and general relativity. Neither (like many others) was proven at the time, yet both laid paths crucial to later scientific and technological advancement once verified.
1. Language as foundation
One of the most significant distinctions between the human species and the rest of biological life lies in humans' understanding and application of complex language systems. Verbal and symbolic languages may be the highest form of representation for the highest form of intellect among carbon-based lives. "The limits of language are the limits of my world", said Ludwig Wittgenstein[4]. For artificial silicon-based intellects, however, language is not the ultimate composition but the fundamental element of their existence, if such existence is possible. Whether or not the machine itself comprehends meanings, the principle of its function is as follows: receive a certain input, process it according to some protocols or instructions, and give a certain output. Now consider Wittgenstein's depiction of the language-games: "one party calls out the words, the other acts on them". Given the openness of this depiction of language ("definition" being less proper), with human activities at its center, the instructions for computing machinery fit very well into the category of language. In other words, AI does not learn language: it is made up of language, or, in a stricter sense, of symbolic language, for AI cannot perceive instructions that have not been turned into specific, understandable symbols. In this sense there is a sequence of representation: language is the symbolic representation of human minds, while computers (ordinary or capable of acquiring intelligence) are the physical representation of language.
2. Instructions as all
The intelligence of computers, therefore, comes not from the capability of their processing or storage units (which arguably already exceed those of the human brain) but from the rules or instructions that these units follow, just as the quality of the answers produced by the Chinese Room[5] depends not on the person inside the room but on the comprehensiveness of the instructions, as long as the computer or the person is capable of executing them. More familiar names for such instructions in today's context would be algorithms, code, and so on.
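The Chinese Room's moral, that answer quality lives in the instructions rather than in the executor, can be sketched in a few lines of Python (a toy illustration only; the rulebook and its entries are invented for this sketch):

```python
# The "person in the room" mechanically matches input symbols against a
# rulebook. Answer quality depends entirely on how comprehensive the
# rulebook is, not on the matcher, who understands nothing.
RULEBOOK = {  # hypothetical instructions written by someone who does understand
    "你好吗": "我很好，谢谢。",
    "你叫什么名字": "我叫房间。",
}

def person_in_room(symbols: str) -> str:
    """Follow the rulebook without any comprehension of the symbols."""
    return RULEBOOK.get(symbols, "……")  # gaps in the rules produce empty answers

print(person_in_room("你好吗"))      # a fluent-looking reply
print(person_in_room("天气怎么样"))  # no rule covers this input
```

Extending the rulebook, not upgrading the person, is what improves the room's conversation.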
Will this be different in the case of artificial neural networks (ANNs)? Again borrowing the notion of the Chinese Room, suppose now that multiple persons, rather than one, are assigned to the room, with certain parts of the instructions distributed to each. The difference between an ANN and a von Neumann computer is that in the von Neumann room the input is processed linearly by persons in series, each of whom can only process the output of the previous person, following a fixed sequence, while in the ANN room there are multiple persons, each representing a node[6], positioned not only in series but also in parallel to one another[7]. This means that the input can be processed simultaneously by different persons at the same level and time, and that the processing becomes so complex that supervising what each and every person is doing, so as to form an accurate understanding of the progress, would be nearly impossible: one can judge the quality of the entire processing only from the final output. This is why neural networks can deal with much more complicated input and, through a seemingly mysterious process, produce a more human-like response that does not resemble the rigidity of a machine. Up to this point of the discussion, though, there is no positive evidence that ANNs have the capability to extend beyond man-made instructions.
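The contrast between the two rooms can be sketched as follows (a minimal illustration; the step functions, weights, and the tanh activation are arbitrary choices made for this sketch, not anything drawn from the cited sources):

```python
import math

# Von Neumann room: persons in series, each consuming only the previous output.
def von_neumann_room(x, steps):
    for step in steps:          # fixed, fully traceable sequence
        x = step(x)
    return x

# ANN room: many persons (nodes) at the same level process the input at once;
# only the aggregate output is easy to inspect from outside.
def ann_layer(inputs, weights, biases):
    return [
        math.tanh(sum(w * x for w, x in zip(node_w, inputs)) + b)
        for node_w, b in zip(weights, biases)   # parallel in principle
    ]

result = von_neumann_room(2.0, [lambda x: x + 1, lambda x: x * 3])
layer_out = ann_layer([1.0, -1.0], [[0.5, 0.5], [1.0, -1.0]], [0.0, 0.0])
```

In the serial room every intermediate value can be read off step by step; in the layer, each node's contribution is entangled with the others', and only `layer_out` as a whole is meaningful.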
Now, supposing that such instructions have been achieved, what are the next targets for investigation?
3. Real Creativity or Not?
First, given the inference that the limit of current computational technology, in whatever form, lies at the boundary of the precedent instructions or symbolic languages set by humans, what could make AI creative, if creativity is possible for it at all?
There are questions to be addressed before answering the one above: Why would a machine need to be creative? What is the definition of human creativity in this context?
Replying to the former: this article considers the purpose of machines, as previously implied, to be the projection of human intellect, just as each human individual is a projection of the intellect of the species as a whole. There is a ubiquitous urge in this collective intellect to make something new out of something old, or out of nothing, an urge projected onto every individual carrier of this intellect. Since this article is concerned with making machines intellectually more similar to humans, it doubts that the designers of these machines would not want to project the same urge to create onto the machines, if that were possible.
As for the latter, a quote can be borrowed from the famous computer scientist Marvin Minsky: "But the big feature of human-level intelligence is not what it does when it works but what it does when it's stuck."[8] The notion of creativity in this article follows Minsky's insight into the specialty of human intelligence: making something out of seemingly nothing, be it a mathematical equation to explain a certain phenomenon, a piece of art or music, or a newly devised sport or game.
Back to the initial question: an analogy with a thought experiment of a human child undergoing academic education helps frame this inquiry. The child's academic ability used to be completely dependent on what the teachers offered, as the child had no other source of information or techniques. It remained that way as the child gained more and more knowledge, but at a certain point the child encountered a problem that none of the teachers had taught how to solve, not even through questions of similar form or content. What if there was a "Eureka" moment when the child started doing something that none of the teachers could recognize as having either taught or seen before, and the child solved it?
Would this be possible for AI? Would there be anything different between a child solving the problem and a machine solving it?
The Imitation Game[9] would not suffice to examine the creativity of an AI, for by following a sufficiently comprehensive instruction alone it could still answer such questions, much as the supposed sonnet-writing machine answers the viva voce in the Argument from Consciousness in Turing's reply to views contrary to his own, so long as the programmer is creative enough to write instructions covering what structure the poem follows, which words convey which emotions, and so on. Of course, as mentioned, the instructions would have to be comprehensive.
4. Further interrogation: “Now this is the case, but what’s next?”
Now, return to the thought experiment: we may say that when the child encountered the problem, the child was stuck, at least for the time before he came up with the solution. We could equally say that the child faces this question: "Now this is the case, but what's next?" We meet this same question in various scenarios in daily life, and consciously or subconsciously solve it. Generally speaking, we (and the child) solve it by taking one of two paths.

On the first path, we make careful observations of what "this is the case" amounts to. We may find that, looked at from a particular, perhaps far-fetched, perspective, the problem is in fact an unusual instance of a case we have already been instructed (the child already taught) how to solve. We are not creative in the process of solving itself; rather, out of the creativity (or perhaps even the contingency) with which we view the matter from a different angle, we find the unprecedented solution. This the article defines as far-fetched or pseudo creativity, not to dismiss the creativity involved in viewing things differently, but to count that bit as part of the second path.

On the second path, we apply what this article calls actual creativity. What are we doing here? We are not examining the case for a hidden niche where the precedent instructions can be cunningly applied; rather, we jump out of all the instructions we have been indoctrinated with and make a supposition by some a priori intuition: "what if such and such is the case, though none of us knows it, and its being the case would solve the problem?" From there, we derive through logic and practice and observe through experiments, until we are convinced that our supposition is correct. This supposition does not belong to any previous instruction, and is therefore what this article calls the product of actual creativity.
An example is Newton proposing the three laws of motion, none of which had previously been proposed, and which remain logically unprovable, valid only through observation. The reason that the bit of creativity involved in viewing the case from a different viewpoint under pseudo creativity is counted under this latter path is that this very bit can be paraphrased as: "by some intuition, let us suppose that looking from this specific and unusual perspective will solve the problem."
The discussion above leads to a plausible test that this article proposes for examining whether the counter-factual machines really have creativity. After having a machine solve the problem "Now this is the case, but what's next?", we further interrogate it: "How did you get from where you started to this final solution?" Such interrogation is indeed somewhat similar to the situation that Wittgenstein discusses in §217 of Philosophical Investigations:
'How am I able to obey a rule?'--if this is not a question about causes, then it is about the justification for my following the rule in the way I do.
If I have exhausted the justifications I have reached bedrock, and my spade is turned. Then I am inclined to say: 'This is simply what I do.'[10]
It is exactly this "reaching of the bedrock and turning of the spade" that we are examining. Counterintuitively, this article speculates that the sooner the machine reaches this unquestionable phase, the greater the portion of actual creativity it used in solving the problem, in the same sense that it cannot explain its creative intuition, just as humans cannot prove Newton's laws, or just as, in the end, a creative artist or musician ceases to explain and simply says "that's how it looks or sounds good." The more instructions the machine is following, the longer it takes to reach the bedrock.
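The proposed interrogation can be caricatured as counting how many justifications an answerer cites before reaching bedrock (purely illustrative; the answer strings and the depth measure are this sketch's own inventions, not part of Wittgenstein's text):

```python
# Follow an answerer's chain of justifications until it bottoms out.
# On this article's speculation, a short chain (quick bedrock) suggests
# intuition, while a long chain suggests rule-following.
BEDROCK = "This is simply what I do."

def interrogation_depth(justifications):
    """Count 'How did you get here?' rounds answered before hitting bedrock."""
    depth = 0
    for answer in justifications:
        if answer == BEDROCK:
            break
        depth += 1
    return depth

rule_follower = ["rule 12 applied", "rule 7 applied", "rule 3 applied", BEDROCK]
intuitive_solver = [BEDROCK]

print(interrogation_depth(rule_follower))     # many rules to cite first
print(interrogation_depth(intuitive_solver))  # bedrock reached at once
```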
Of course, a number of assumptions are made throughout this analysis: that actual creativity relies on nothing but intuition and unspeakable thoughts, and that machines cannot lie during the interrogation and are not programmed to appear, in certain predesignated circumstances, as if they had reached the bedrock. These assumptions, though they cannot be expounded in this particular article, are essential to the discussion, and it is doubtful that they would become the weak points of the analysis.
5. The Evolution of Evolutionism (Darwinism)
The prerequisites for evolution to occur, in short, according to On the Origin of Species[11], are: a) a struggle for survival resulting from insufficient resources for all; b) varying traits that are heritable through genes; c) chances of survival and reproduction that depend on the fitness of those heritable traits under specific circumstances. By pushing AI toward a greater and greater resemblance to the most intellectually capable form of life on Earth, however, humans are bringing the theory of evolution to an unprecedented level at which genes are no longer the only factor in the traits or reproduction of life forms. Though completely different from the rest of life, AI's features (the rational necessity of its existence, its consumption of energy, and its varying traits developing toward the same end of maintaining, and even expanding, a place in which to exist and function) put it on a head-to-head collision course with the rest of the life forms, humans included.
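Darwin's three prerequisites map naturally onto a toy simulation (a hedged sketch; the population size, fitness function, and mutation scale are invented parameters, not values from any source):

```python
import random

random.seed(0)

# (a) limited resources: population capped at POP_SIZE each generation
# (b) heritable variation: each organism is a numeric trait, copied with mutation
# (c) fitness-dependent survival: closeness to ENVIRONMENT_OPTIMUM decides who survives
POP_SIZE = 20
ENVIRONMENT_OPTIMUM = 5.0

def fitness(trait):
    return -abs(trait - ENVIRONMENT_OPTIMUM)  # higher is fitter

population = [random.uniform(0, 10) for _ in range(POP_SIZE)]
for generation in range(50):
    # only the fitter half survives to reproduce (a, c)
    survivors = sorted(population, key=fitness, reverse=True)[:POP_SIZE // 2]
    # each survivor leaves two offspring with slightly mutated traits (b)
    population = [t + random.gauss(0, 0.1) for t in survivors for _ in range(2)]

mean_trait = sum(population) / len(population)  # converges toward the optimum
```

The point of the sketch is that nothing in the loop mentions genes as chemistry: any substrate satisfying (a), (b), and (c), silicon included, is subject to the same dynamic.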
Given this discussion, there is no need to further debate whether future AI should be considered (biological) life for the purpose of discussing its role in evolution.
Much of the controversy and anxiety that has enveloped Darwin's idea ever since can be understood as a series of failed campaigns in the struggle to contain Darwin's idea within some acceptably "safe" and merely partial revolution. – Daniel Dennett[12]
That Darwin's idea is a "universal acid"[13] compels humans to raise awareness of the ever-stronger and ever-more-intelligent artifact: its impact on the biological and social ecosystems is independent of its (dis)similarity to the other members of those systems. Inevitable from here is the emergence of new ties of commensalism, competition, predation, parasitism, mutualism, and so on. Which of these relationships humans and AI will share, though, is influenced by many factors that exist even at present. Among the many speculations about a diversity of future scenarios, the most ubiquitous is that humans will have to shift vastly in lifestyle to accommodate their novel artifact.
6. Cyborg as a Solution
One essential cause of the fear of AI and other machinery with high computing abilities is the otherness humans perceive in their own products of technology. This fear of the projection of one's own intelligence onto a new and unnatural carrier is understandable, yet not necessary. Insofar as humans create these things themselves, it would not be too great a leap to move a step forward and consider AI and high-level computers part of who they are, intellectually; that is, a human would not be less human for substituting some of his or her natal features with something he or she creates, or for adding features to himself or herself at will. The human simply becomes the human he or she wills to become. From such a perspective, the notion of combining cybernetics with organisms becomes readily acceptable. One may argue that humans have undergone millions of years of evolution to attain their current biological features, adapted through natural selection, and that altering them with a technology that emerged only in recent decades would be an arbitrary act. But it does no harm to think the other way around: human genes have evolved for so long to provide humans an intelligence unsurpassed by any other species that failing to exert or follow that intelligence would waste the optimum advantage for human survival. More importantly, if AI really joins the competition among all the other organisms undergoing natural selection, the pace of evolution will quicken so much that changes within a few decades will indeed be equivalent to those spanning millions of years.
In the documentary Transcendent Man, Ray Kurzweil, along with a few other cutting-edge scientists, estimates that the symbiosis of humans and machines is not only feasible but also necessary in the near future. This specific type of symbiosis is exactly what humans need in order to solve certain contemporary and futuristic problems. At the very least, even if humans retain an intellectual advantage over AI, which this article doubts, the physical advantages that machines hold over humans are already obvious. It is therefore wiser for us to acquire some of their advantages ourselves, both intellectual and physical, in a way similar to how we have taken advantage of machines: by embedding some kind of AI or machinery into our own biological features.
7. Machines to learn rules: leave the tedious work to the artificial brains
A great part of our intellectual activity involves not intuition or actual creativity, even if we possess either or both, but following rules just as machines do; some rules are merely more complex than current machines can execute. With an imaginary machine capable of correctly memorizing and applying the various rules humans need, human brains would be freed to exert their almost only remaining intellectual advantage over machines: intuition, or actual creativity, if machines are not capable of it. For example, an AI might pre-learn a language for humans: form the connections between symbolic words (the signifiers) and real objects (the signified), then input those connections directly into human brains, like transmitting data from one storage device to another (wired or wirelessly), so that humans could understand and use the language immediately. Humans might even choose how they wish to understand these connections by customizing the way the AI absorbs and learns them, rather than being confined to the specific indoctrination of a specific instructor, which is again a case of overcoming the barriers of embodied cognition. No longer needing to learn the rules themselves, humans would then seek a completely different form of education. With what are we to educate humans, if not rules? Is it even possible to educate intuition or actual creativity? Is some comprehension of rules essential to the emergence of intuitive thoughts and actions? How this industry will shift faces many critical factors of uncertainty, but it can become one of the most interesting and valuable topics of research in the near future.
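The idea of transferring pre-learned signifier-signified links "like data" can be sketched as a serialized mapping (entirely hypothetical; the lexicon entries and the JSON transport are stand-ins for whatever the imagined transmission would actually be):

```python
import json

# The AI's learned signifier -> signified links are, in this sketch, just a
# mapping that can be serialized and copied into another store, skipping the
# learner's years of study.
ai_lexicon = {"arbre": "tree", "pomme": "apple"}   # hypothetical learned links

payload = json.dumps(ai_lexicon)      # the "wired or wireless" transmission
human_lexicon = json.loads(payload)   # installed without any learning process
```

The serialization is trivial here precisely because the sketch treats meaning as pure rule-following; whether intuition survives such a copy is the open question the paragraph above raises.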
Bibliography
[1] A Turing, Computing Machinery and Intelligence, Mind, Volume LIX, Issue 236, October 1950, p. 442.
[2] E Keedwell, The Loebner Prize, a Turing Test competition at Bletchley Park, The Exeter Blog, Dec 2014.
[3] chat.kuki.ai/chat
[4] L Wittgenstein, Tractatus Logico-Philosophicus, 1922.
[5] D Cole, "The Chinese Room Argument", The Stanford Encyclopedia of Philosophy (Winter 2020 Edition), Edward N. Zalta (ed.), plato.stanford.edu/archives/win2020/entries/chinese-room/.
[6] IBM Cloud Education, Neural Networks, ibm.com/cloud/learn/neural-networks.
[7] C Clabaugh, D Myszewski, J Pang, Neural Networks, Stanford University, cs.stanford.edu/people/eroberts/courses/soco/projects/neural-networks/index.html.
[8] M Minsky, quoted in R Kurzweil, The Singularity Is Near, 2005.
[9] Turing, Computing Machinery.
[10] L Wittgenstein, Philosophical Investigations, 1953.
[11] C Darwin, On the Origin of Species by Natural Selection, 1859.
[12] D Dennett, Darwin's Dangerous Idea, 1995, p. 63.
[13] Dennett, Darwin's Dangerous Idea, p. 61.
[14] C Cadwalladr, "Are the robots about to rise? Google's new director of engineering thinks so…", The Guardian, Guardian News and Media Limited, 2014.
[15] A Clarke, 2001: A Space Odyssey, 1968.
[16] M Donahue, How a Color-Blind Artist Became the World’s First Cyborg, National Geographic, 2017. nationalgeographic.com/science/article/worlds-first-cyborg-human-evolution-science.
Introduction:
In Computing Machinery and Intelligence (1950), Alan Turing predicted that "in about fifty years' time it will be possible to programme computers, with a storage capacity of about 10^9, to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning."[1] That is, if things went as optimistically as Turing anticipated, machines by the beginning of the new millennium would have an intellect so well developed that, in mere conversation, humans would have a 30% chance of mistaking the machines for other humans. More than 70 years have passed, and our machines can easily store far more than 10^9 bits of information. More importantly, although no such machine was built by 2000, and the percentage Turing expected has still not been reached even after a 20-year postponement, we are getting close: Kuki, the five-time consecutive champion of the Loebner Prize, which organizes an annual Turing Test for machines[2], has managed to convince around 25% of human judges that the chatbot was human[3]. Be it close or not, this future is coming. However, instead of discussing scientific or philosophical proof of the possibility of machines becoming human-like, or trying to give precise estimates of when the technology will finally be realized, this article offers philosophical speculation in the counter-factual context that it has already been achieved. What chances do humans face? What advantages can be taken? What risks lie beneath? This article offers several possible directions of investigation into these generic inquiries from a more specific perspective.