Majid Razvi
Professor Catherine Sutton
Philosophy 301: Mind & Reality
March 2010
The Conditions Necessary for Artificial Intelligence
It is ironic that, while “consciousness is the most intimately and immediately known fact of our
experience,” it is one of the least understood phenomena.1 Any knowledge necessarily exists within a
mind, making the shroud of mystery enveloping the knower particularly strange. The realization that
that which knows is itself relatively unknown is, to say the least, counter-intuitive.2 From consideration
of thought arises an important question: is it possible, theoretically, to produce a thinking entity via
non-copulatory means? I posit that the creation of a conscious entity is in principle achievable. My
underlying premises are as follows: (1) humans are thinking creatures; (2) if x exists, then x is possible;
and (3) a human is a machine, in that humans are the conjunction of various parts operating under a
certain set of rules.3 It follows deductively, then, that thinking machines are possible, by virtue of being
actual.4 This illuminates the real question: can humans create a thinking machine?5
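Before turning to that question, the deduction itself can be made explicit in first-order modal notation; the predicate letters below (H for human, M for machine, T for thinking) are shorthand introduced purely for illustration:

\[
\begin{aligned}
&\text{P1: } \exists x\,\big(H(x) \wedge T(x)\big) && \text{humans are thinking creatures}\\
&\text{P2: } \forall x\,\big(H(x) \rightarrow M(x)\big) && \text{humans are machines}\\
&\text{P3: } \exists x\,\varphi(x) \rightarrow \Diamond\,\exists x\,\varphi(x) && \text{whatever is actual is possible}\\
&\text{C: } \exists x\,\big(M(x) \wedge T(x)\big), \text{ and hence } \Diamond\,\exists x\,\big(M(x) \wedge T(x)\big)
\end{aligned}
\]

P1 and P2 together yield the non-modal conjunct of C; taking φ(x) to be M(x) ∧ T(x) in P3 then yields the modal conclusion.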
It is appropriate to highlight the strength of the statement “x is in principle impossible”. Inquisitive
human nature finds such an assertion repugnant. Nature abhors a vacuum; similarly,
humanity strives to fill epistemic voids. Furthermore, any assertion of theoretical impossibility on
grounds other than contradiction is susceptible to argumentum ad ignorantiam. This is sufficient reason
1 Laszlo 39
2 It is not so strange if you consider consciousness to be another sense which takes thought as its object, analogous to an eye
and a visual object, an ear and an auditory object, a nose and an olfactory object, etc.
3 One may be put off by the suggestion that humans are “merely” an advanced form of machinery, but the only justification
for rejecting this assertion is to say that because we think, we are not machines. This, however, sets up the two, machinery
and thinking things, as mutually exclusive, which leaves the question “can machines think, in theory?” about as meaningful
as “can squares be circles, in theory?”
4 In accepting the premises, this conclusion becomes curiously recursive, as deduction itself is an act of thought done by a
machine.
5 Again, we must stipulate “create” as excluding biological reproduction, to maintain a topic of discussion.
not to reject the theoretical possibility.6 As Niels Bohr succinctly stated, “prediction is very difficult,
especially if it's about the future.” Hence, I shall not only argue for the possibility of creating
consciousness, but propose a loose guideline for how to do so.
At the outset, it is important to address the computing metaphor to be used, as this is a potential point
of objection. The hardware/software distinction is not intended to be directly analogous to body/mind,
nor does it assert a position regarding the ontological status of the mind, although this is relevant.
Indeed, “we should not assume human level intelligence can only be achieved through computer
science. Rationalizing this topic through computers is living too much in the now.”7 However, given
that we do exclusively inhabit the now, computational vocabulary is the appropriate jargon.
Contemporary computing may be replaced by a variety of alternatives, such as quantum computing,
organic computing, or some presently-inconceivable medium. I find speculation about the specific
medium that will carry us to strong artificial intelligence to be purposeless, as it is purely conjecture.
That said, there's a surge of literature, academic or otherwise, declaring quantum computing to be the
holy grail of not only artificial intelligence but human intelligence. Far be it from me to judge the
validity of these hypotheses, but I suggest we not get too caught up in the quantum hype until the qubit
is as well understood as the bit. Quantum mechanics may underlie the next paradigm shift in physics, but
quantum computing is presently still in its infancy. With that caveat in place, let's examine what is necessary for consciousness.
Experience must be analyzed to identify what needs to be reproduced. B. Alan Wallace notes that
introspection has been deemed an invalid means of acquiring knowledge.8 When it comes to the mind,
however, introspection becomes necessary. The primary aspect of experience is physical – more specifically,
6 Consider the difficulty of empirically proving an existential claim to be false. Adding modality to the situation
produces extremely compelling grounds to reject any proposition of the form, “x is in principle impossible”, unless x entails
a contradiction.
7 Andrew Kassing, via personal communication.
8 “[scientists] are trying to formulate mechanical theories of [consciousness] without ever relying upon precise, firsthand
observations of states of consciousness themselves. This approach is far more analogous to that of medieval astronomers
than that of the founders of the Scientific Revolution” (81).
sensory. Sensory input gives thoughts extension. The lack of external data does not intrinsically
preclude the possibility of thought, but it would not be of any recognizable form, as even thought with
no object must stem from some former moment of empirical input, no matter how far removed. Hence,
a prerequisite for thought is a body that provides a means of interaction with the world.9 Our
relationship to the outside world is contingent upon our sensory faculties; thus, the first step in creating
consciousness is the hardware. While there is presumably a myriad of possible vehicles for thought, the
human organism serves as the most useful paradigm simply because we know it works. The sense
organs and the brain require reverse engineering to serve, respectively, as input and interpreting
devices. For present purposes, the rest of the body serves as life support for these systems. The
reproductions may use an organic medium, or, given sufficient understanding of the brain, a nonorganic
medium. The progress of nanotechnology precludes reasonable objection to the possibility of
replicating these structures. Importantly, the reproduction must incorporate not only the design of the
organism but also its functionality. Specifically, the characteristic neuroplasticity of the brain must be
maintained.
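Purely as an illustration of this hardware picture, and granting the computational vocabulary adopted above, the arrangement can be sketched as an input stage feeding an interpreting stage whose connections remain plastic. The class and variable names below are hypothetical, invented only for this sketch:

import random

class PlasticInterpreter:
    """Toy stand-in for the interpreting device: a layer of adjustable
    connection weights between sensory channels and internal responses."""

    def __init__(self, n_inputs, n_outputs, learning_rate=0.01):
        self.rate = learning_rate
        # Connection strengths are free to change over time (neuroplasticity).
        self.weights = [[random.uniform(-0.1, 0.1) for _ in range(n_inputs)]
                        for _ in range(n_outputs)]

    def interpret(self, sensory_input):
        """Map one frame of raw sensory input to internal activations."""
        return [sum(w * x for w, x in zip(row, sensory_input))
                for row in self.weights]

    def adapt(self, sensory_input, activations):
        """Hebbian-style update: connections active together grow stronger."""
        for i, row in enumerate(self.weights):
            for j, x in enumerate(sensory_input):
                row[j] += self.rate * activations[i] * x

# A "sense organ" here is anything that delivers numbers to the interpreter.
brain = PlasticInterpreter(n_inputs=3, n_outputs=2)
reading = [0.9, 0.1, 0.4]        # one sampled frame of sensory data
response = brain.interpret(reading)
brain.adapt(reading, response)   # plasticity: the mapping itself changes

Nothing in this sketch depends on the medium; the same structure could be realized organically or otherwise, so long as the connections can change with experience.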
Having built hardware based heavily, if not entirely, on the human archetype, the challenge
becomes writing the software. The mind is a place of receptiveness; it is an open space that receives
stimuli and responds accordingly. Software preprogrammed to receive, analyze, and respond to all
possible input would be a remarkable feat: the set of information the software could encounter is
infinite. Furthermore, the human mind is capable of combining two existents, x and y, to create a
previously-unimagined hybrid, xy, requiring the program to handle a still larger order of infinite
possibilities. This approach is unrealistic, if not impossible, and should be rejected.
If we were already endowed with the knowledge we needed to navigate reality, two consequences would
follow: first, we would exist in some quasi-omniscient state; and second, learning would be a
9 Not only passive interaction, i.e., sensory input, but active interaction, i.e., some degree of actual or simulated mobility.
Razvi 4
meaningless concept. The second of these is particularly relevant, for it is this property our artificial
intelligence must possess: the ability to educate itself.10 I define learning as the referencing and
utilization of previous events to predict the actions required for the optimal effect. This, essentially, is a
complex inductive algorithm. In turn, this necessitates the ability to take the pervasive wave of
incoming data and isolate the individual phenomena and events that the induction formula analyzes. In
other words, the program must be capable of chopping the totality of potentially-available reality into
nouns and verbs.11 To summarize, with each later capacity presupposing the earlier ones, the software requires the
ability to discriminate phenomena into background and foreground(s) within a single image, to combine
memory and induction to predict the optimal action, and to learn from these calculations (a rough sketch of such a loop follows this paragraph).12, 13 Frequently
used neural pathways are reinforced, representing the difference between learned, habitual responses
and calculated, intentional responses. If the hardware has been accurately and precisely recreated, this
should also occur in our artificial organism. Finally, the artificial intelligence needs to be compensated
for a distinct human advantage: millions of years of evolution. We've built up mechanisms for avoiding harm and
perpetuating survival, such as fear, anger, and facial recognition. These must be imparted to the
creature, along with all other properties that are a matter of nature and not nurture. Nurture, however, is
the last phase of the project.
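To make the learning requirement concrete, here is a minimal sketch of the inductive loop described above, under the same computational assumptions: remembered (phenomenon, action, outcome) events drive the prediction of the action with the best expected effect, and each new result reinforces or weakens that pairing. The names are hypothetical and the environment is deliberately trivial:

import random
from collections import defaultdict

class InductiveLearner:
    """Toy agent illustrating the inductive algorithm sketched above: it
    references previous events to predict the action with the optimal effect."""

    def __init__(self, actions, explore_rate=0.1):
        self.actions = actions
        self.explore_rate = explore_rate
        # Memory maps (phenomenon, action) to the outcomes observed so far.
        self.memory = defaultdict(list)

    def _expected(self, phenomenon, action):
        outcomes = self.memory[(phenomenon, action)]
        return sum(outcomes) / len(outcomes) if outcomes else 0.0

    def predict(self, phenomenon):
        """Induce the best action from memory, exploring occasionally so that
        unfamiliar pairings can still be learned."""
        if random.random() < self.explore_rate:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self._expected(phenomenon, a))

    def learn(self, phenomenon, action, outcome):
        """Record the result; frequently rewarded pairings are thereby reinforced."""
        self.memory[(phenomenon, action)].append(outcome)

# The loop plays the role of the environment: a phenomenon is isolated from
# incoming data, an action is induced, and the observed outcome is fed back.
agent = InductiveLearner(actions=["approach", "avoid", "ignore"])
for _ in range(200):
    phenomenon = random.choice(["food", "threat"])   # stands in for a segmented percept
    action = agent.predict(phenomenon)
    outcome = 1.0 if (phenomenon, action) in {("food", "approach"), ("threat", "avoid")} else -1.0
    agent.learn(phenomenon, action, outcome)

The segmentation step, carving the incoming stream into the "nouns and verbs" the loop operates on, is assumed away here; in practice it is the harder of the two problems.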
At this point we have an intelligence that could be compared to an animal or human infant. Like
any mind freshly-thrust into existence, our creation must be cared for and taught how the world works.
The artificial intelligence needs time to learn how to operate independently of its adopted parents, just
like any human baby – this is clear, given the parameters laid out for the software. My final assertion is
10 I consider classroom education to be self-education, as even given the presentation of knowledge, the individual must
internalize it themselves. In fact, I don't see any type of education that isn't ultimately self-education, by this definition.
11 “The map is not the territory,” wrote Alfred Korzybski. Mind is the map-maker of reality's terrain.
12 Image recognition technology is well on its way; see Google's “Google Goggles” for a straightforward mainstream example.
13 When it comes to rational decision-making, humans do not fare so well. A detailed analysis would be a paper in itself,
but a glance at Wikipedia's “List of cognitive biases” page is entertaining at best, likely falling closer to “depressingly terrifying.”
that, given the requisite physical causes and conditions, self-awareness arises naturally.14 The conjunction of
memory, the ability to distinguish (or, to be precise, define) an entity as distinct from its background,
and inductive capabilities will result in the program identifying itself as an independently operating
agent, giving rise to the dichotomic sense of self and other. That a baby is not born self-aware adds
weight to this theory. Self-awareness arises from the perpetual interaction between an intelligent
organism and its environment. Laying down the groundwork is all that is necessary.
One may reasonably accuse me of presupposing physicalism. A physical account of mind would
make its reproduction far more straightforward; however, it is not necessary. If qualia are an
epiphenomenon of the brain, they still follow from a sufficiently faithful physical reproduction. Even
dualists of the Cartesian ilk must acknowledge a degree of correlation as to when and where mind
meets body. The mechanism behind the conjoining and subsequent interaction of the two
substances would not need to be understood. Even if the non-physical Cartesian substance operates
under a different set of laws, there will be an overarching set of rules that governs both the physical
and the non-physical. Combined with the necessary conditions, a cause must give rise to its effect;
otherwise, what grounds are there for positing a causal relationship? All phenomena have causes; nothing
arises ex nihilo. From the observation that consciousness does not arise spontaneously just anywhere, i.e.,
that there is a pattern to where it appears, it follows that sufficient causes and conditions could give rise to mind regardless of its
ontological status as purely physical, epiphenomenal, or purely non-physical.15 Humanity is not wont to
admit impotency; if we have to figure out how an entirely new substance of reality operates, the effort
to do so will be put forth.
The backbone of my argument is twofold, and can be generalized to issues beyond artificial
14 I believe I've adequately, albeit rather vaguely due to present technological limits, laid out which causes and conditions
need to be reproduced.
15 The only way to escape an effect following from the conjunction of in principle controllable causes and conditions is to
posit some type of deity as the distributor of consciousness, a suggestion which both scientific and philosophical minds
should find an affront to their quest for knowledge.
intelligence. First, all phenomena may be described as effects, necessarily arising when sufficient
causes and conditions obtain. Second, nothing is impossible in theory unless it entails a contradiction or
similarly unacceptable consequence. The upshot of these is that humanity's ability to create is limited
solely by our knowledge and technological ability to actualize what we know in theory. Regarding
artificial intelligence, neither of these hurdles seems unrealistically high. The combination of wisdom
and method has borne remarkable fruit in the past – there is no reason to think this trend won't continue
long into the future.
The implications of manufactured intelligence would be pervasive throughout all facets of
society. Noting the various equality movements focused on race, gender, and sexual orientation, one is
tempted to begin fighting for the civil rights of artificial intelligences today, in hopes of having them
secured by the time of AI's advent. To avoid yet another descent into classism, these machines must be
identified as sentient beings and not just complex imitations.16 Unfortunately, we are barred from
empirical access to others' minds.17 Thus, we infer the presence of others' minds based on how “us-like”
we've observed them to be; we treat those with whom we have a relationship more humanely than the
hordes of strangers we cross paths with daily. The more opportunities we have to realize someone's
humanity, the more likely we are to act accordingly. The question is: do artificial minds need to be
designed to be sufficiently humanlike for this reason, or do humans need to broaden their definition of
personhood?18 Alas, this is not an easily-answered question, but it is one that will inevitably be an
issue in times to come.
16 Radical materialists such as Churchland may contest this as a false distinction: if humans are ultimately complex
input/output functions based on stimuli, an advanced imitation is not qualitatively different, just lower on the spectrum.
17 Perhaps this is rather fortunate.
18 Granted, the Star Trek universe cannot be relied upon to prove a point, but consider the human interaction with the
Vulcan species. Taught to suppress their emotions, they were cold and operated on pure rationality, and yet the humans
considered them to be people.
References
"Designer Nanomaterials on Demand: Scientists Report Universal Method for Creating Nanoscale Composites." Science
Daily: News & Articles in Science, Health, Environment & Technology. 20 Mar. 2010. Web. 21 Mar. 2010.
Easwaran, Eknath. The Dhammapada. Tomales, CA: Nilgiri, 2007. Print.
Frey, Warren. "D-Wave Systems’ Quantum Computing Aims at Human Level AI." H+ Magazine. 12 Mar. 2010. Web. 13
Mar. 2010.
Goertzel, Ben. "Can Bots Feel Joy?" H+ Magazine. 29 Sept. 2009. Web. 10 Mar. 2010.
Goertzel, Ben, Seth Baum, and Ted Goertzel. "How Long Till Human-Level AI?" H+ Magazine. 5 Feb. 2010. Web. 8 Mar. 2010.
Horstman, Judith. The Scientific American Day in the Life of Your Brain. San Francisco: Jossey-Bass, 2009. Print.
Laszlo, Ervin. Science and the Akashic Field: an Integral Theory of Everything. Rochester, Vt.: Inner Traditions, 2004.
Print.
Laszlo, Ervin. "Using Your Quantum Brain to Connect to the World." Web Log post. The Huffington Post. 17 Mar. 2010.
Web. 17 Mar. 2010.
Moseman, Andrew. "Neuroscientists Take One Step Closer to Reading Your Mind." Discover Magazine. Disney, 12 Mar.
2010. Web. 13 Mar. 2010.
Noë, Alva. Out of Our Heads: Why You Are Not Your Brain, and Other Lessons from the Biology of Consciousness. New
York: Hill and Wang, 2009. Print.
Orca, Surfdaddy. "The Race to Reverse Engineer the Human Brain." H+ Magazine. 30 Nov. 2009. Web. 12 Mar. 2010.
Perdue, Daniel E. Debate in Tibetan Buddhism. Ithaca, N.Y.: Snow Lion Publications, 1992. Print.
Svoboda, Karel. Long-Term Changes in Experience Cause Neurons to Sprout New Long-Lasting Connections. Rep.
Howard Hughes Medical Institute, 22 Jan. 2006. Web. 11 Mar. 2010.
Wallace, B. Alan. The Taboo of Subjectivity: Toward a New Science of Consciousness. New York: Oxford UP, 2000. Print.