
GP and Simulating Conscious Agents

sg micheal, 2010/DEC/21

About two years ago, i proposed using genetic programming to breed conscious agents in a
virtual reality. GP is a paradigm championed by John Koza, http://www.genetic-programming.com/ , which has some very powerful applications. An agent,
http://en.wikipedia.org/wiki/Intelligent_agent , is a synthetic construct capable of navigating
computer systems autonomously. Please skim the associated links before moving on..

The 'obvious' problems with simulating conscious agents are:

- specification of agents and allocation of global resources
- acceptably defining consciousness
- constraining agents appropriately to satisfy global requirements

Let us attack the second problem. First, i ask you to survey the content of these two links:
http://www.scholarpedia.org/article/Category:Artificial_Intelligence
http://www.scholarpedia.org/article/Models_of_consciousness
If you faithfully study the content of each link presented above, you're ready to move on to the
following discussion.. We learn that consciousness is not the same as intelligence. [Human]
learning is rooted in both. i believe it's safe to say: the more aware and intelligent we are, the
faster/easier it is to learn. So human learning is a function of consciousness and intelligence. But
we still have not defined either yet.. In order to avoid a philosophical quagmire stalling our
progress, let's take the expedient approach suggested immediately above: use learning capacity
as an indicator of intelligence/awareness. If we use this criterion on our agents, we have a chance
to move forward - otherwise, we're stuck in an endless debate about 'what is consciousness?'
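As a concrete (and entirely hypothetical) illustration of this criterion, learning capacity could be scored as the fraction of the environment's rules an agent has retained, and agents ranked accordingly. The rule representation and agent structure below are assumptions for discussion, not part of the proposal:

```python
# Hypothetical sketch: ranking agents by learning capacity, measured here
# as the fraction of the VR's rules an agent has retained after a trial.

def learning_capacity(agent_rules, environment_rules):
    """Fraction of the environment's rules the agent has extracted."""
    if not environment_rules:
        return 0.0
    retained = agent_rules & environment_rules
    return len(retained) / len(environment_rules)

def rank_agents(agents, environment_rules):
    """Return agents sorted from highest to lowest learning capacity."""
    return sorted(agents,
                  key=lambda a: learning_capacity(a["rules"], environment_rules),
                  reverse=True)

# Toy example: three rules present in the VR, two agents that each
# extracted some subset of them.
environment = {"objects fall", "water flows downhill", "fire burns"}
agents = [
    {"name": "agent-1", "rules": {"objects fall"}},
    {"name": "agent-2", "rules": {"objects fall", "fire burns"}},
]
ranking = rank_agents(agents, environment)
# agent-2 ranks first: it retained 2 of the 3 environment rules
```

The same score works as a GP fitness function later on: selective pressure then favors agents that extract more of the VR's rules per unit of experience.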

http://www.msu.edu/~micheal/acoma2.pdf is the link to my previous paper on machine consciousness. Before moving on, let's review the essential components and how they relate to
the current discussion. True consciousness includes self-awareness and an awareness of your
place in your environment. A human infant is marginally self-aware and typically believes it's
'the center of the universe' (after all, everyone feeds / takes care of / dotes on the infant as if it is).
What else could be expected? Those unfamiliar with human development may refer to the following link: http://en.wikipedia.org/wiki/Piaget_stages .. In the context of this discussion, i will
use 'egocentricity' as an indicator of appropriate self-awareness. The salient question becomes:
how do we measure the egocentricity and learning capacity of our agents? One way to measure learning capacity is to rank agents by their ability to retain rules extracted from their virtual environment. A human analogy is ranking physics/engineering students at a university. (Let's leave the validity of the material for another discussion ;) Another, more relevant analogy is 'virtual mice running a virtual maze'. (This is only for discussion - a virtual maze does not have the richness required to simulate reality.) The mice that are faster/better at learning to run a virtual maze are ranked higher in learning capacity. Similarly, if our agents lived in a VR like Perfect World (but with more realistic physics), we could rank them by their ability to extract the rules present in that VR.

How do we measure egocentricity? By checking for an appropriate set of rules in the agent's rule-base, such as: I exist; I am not the center of the universe; I am self-aware; I am just one of many agents in this universe; and so on.

The next problem is ethical.. If we create
conscious agents (even in a simulated environment), can we ethically destroy them? i believe, for the sake of science, we must.. Human beings live and die in a 'reality' which they believe defines them. There is talk of a spiritual side, but no objective, direct physical evidence pointing towards it. Perhaps if virtual agents discussed us, that would be analogous to us discussing God. But we should not shy away from our investigation of consciousness simply because of morals or ethics. We have a responsibility to fully investigate this reality in which we find ourselves, including ourselves. We have an obligation to fully understand consciousness, intelligence, and the human animal. If this means destroying a few (million) intelligent agents in a virtual environment, hey - that's life.

Back to 'reality'.. Traditionally, GP uses the concept of selective
breeding to create mechanisms / computer programs which excel at something. (Please refer to the Koza link above if you have not already.) Initially, traits are randomly distributed within a population and randomly combined to produce 'individuals' which may or may not excel at anything.. Because of 'selective pressures' on the population (global constraints), traits of moderately successful individuals are combined to produce successively more successful individuals (according to those global constraints). At some point, the procedure is halted and individuals are selected to perform an external function. This could be analogous to selecting a 'Jesus' from a human population (in history) to perform some spiritual function 'in Heaven'. How can we know? (If indeed human history is analogous to using GP to breed spiritual individuals.)
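The selective-breeding procedure just described can be sketched in a few lines. The trait representation (a list of numbers) and the fitness function standing in for 'global constraints' are placeholders of my own choosing; a full GP system would evolve program trees against the VR:

```python
import random

# Minimal sketch of the GP-style breeding loop described above.
TRAITS = 8
random.seed(42)  # deterministic for illustration

def fitness(individual):
    # Placeholder 'global constraint': prefer traits near 1.0.
    return -sum((t - 1.0) ** 2 for t in individual)

def random_individual():
    # Traits are randomly distributed within the initial population.
    return [random.uniform(0.0, 2.0) for _ in range(TRAITS)]

def crossover(a, b):
    # Randomly combine the traits of two moderately successful individuals.
    return [random.choice(pair) for pair in zip(a, b)]

def evolve(pop_size=20, generations=50, elite=10):
    population = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        # Selective pressure: keep the most successful individuals...
        population.sort(key=fitness, reverse=True)
        parents = population[:elite]
        # ...and recombine their traits to refill the population.
        children = [crossover(random.choice(parents), random.choice(parents))
                    for _ in range(pop_size - elite)]
        population = parents + children
    # Halt the procedure and select an individual for an external function.
    return max(population, key=fitness)

best = evolve()
```

Note the sketch omits mutation, so it can only recombine traits already present in the initial population - one reason the initial random allocation discussed below matters so much.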
Back to reality.. So we need to allocate global resources meaningfully and appropriately, so each agent 'has a chance' to fulfill the global requirements. Each agent needs its own rule-base, a set of core registers (of random capacities and interconnections), a virtual arm (to interact with the VR), and of course virtual senses. In the true spirit of genetic programming, these initial allocations should be randomly assigned, with random individual capacities and random variations of internal connectivity.
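A minimal sketch of this random allocation might look as follows. Every field name and numeric range here is an illustrative assumption (the proposal specifies none of them), and the egocentricity score simply checks the rule-base for the self-awareness rules listed earlier:

```python
import random

# Illustrative per-agent resource allocation, as described above.
EGOCENTRICITY_RULES = [
    "I exist",
    "I am not the center of the universe",
    "I am self-aware",
    "I am just one of many agents in this universe",
]

def make_agent(rng):
    n_registers = rng.randint(4, 16)  # random register count per agent
    return {
        "rule_base": [],  # rules the agent extracts from the VR over time
        "registers": {
            # Random individual capacities...
            "capacities": [rng.randint(8, 64) for _ in range(n_registers)],
            # ...and random internal connectivity: which register feeds which.
            "connections": [(i, rng.randrange(n_registers))
                            for i in range(n_registers)],
        },
        "virtual_arm": {"reach": rng.uniform(0.5, 2.0)},
        "virtual_senses": rng.sample(["sight", "hearing", "touch", "smell"],
                                     k=rng.randint(2, 4)),
    }

def egocentricity(agent):
    """Fraction of the target self-awareness rules present in the rule-base."""
    present = sum(1 for r in EGOCENTRICITY_RULES if r in agent["rule_base"])
    return present / len(EGOCENTRICITY_RULES)

rng = random.Random(7)
population = [make_agent(rng) for _ in range(100)]
```

An agent starts with an empty rule-base (egocentricity 0.0) and, if the breeding and the VR do their job, acquires all four rules (egocentricity 1.0) - giving us the second measurable quantity proposed earlier.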

This concludes the basic outline of using GP to cultivate conscious agents in a VR. Virtual
reality and genetic programming, when appropriately combined, become a powerful tool for investigating artificial intelligence and machine consciousness. If there is any interest in
supporting this historic effort, please visit the following link and join the group:
http://tech.groups.yahoo.com/group/distributed_mind_project/
sgm
