Hive Mind
By: Michelle Ewens
Utility fog is a concept introduced by the nanotechnology pioneer J. Storrs Hall in
1993. Recently, Hall pointed out that swarm robots are the closest thing we have to utility fog.
This brings the concept a little closer to reality. For instance, Dr. James McLurkin of MIT
demonstrated his 112 swarm robots at the Idea Fest in Louisville, Kentucky. They communicated
with one another and operated as a cohesive unit to complete a task. Currently, some swarm
robots can self-assemble and replicate themselves. These “dinosaurs” of future foglets measure
about 4.5 inches in diameter. With time, it is possible that self-replicating robots will be scaled
down to the nanoparticle level and that their intelligence will increase enough to carry out missions
which humans are either unwilling or unable to perform themselves. This is only the beginning.
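The kind of leaderless coordination McLurkin's robots demonstrated can be sketched with a toy gossip-averaging simulation. This is purely illustrative, not McLurkin's actual algorithm: the function name, round count, and agent values below are invented for the example. Each agent holds only a local value, yet repeated pairwise exchanges drive the whole swarm to a shared consensus with no central controller.

```python
import random

def gossip_average(states, rounds, seed=0):
    """Pairwise gossip: each round, two randomly chosen agents average
    their values. Because each exchange preserves the total, repeated
    local interactions pull every agent toward the global mean without
    any agent ever seeing the whole swarm."""
    rng = random.Random(seed)
    states = list(states)
    for _ in range(rounds):
        i, j = rng.sample(range(len(states)), 2)  # pick two distinct agents
        states[i] = states[j] = (states[i] + states[j]) / 2
    return states

# 112 agents (matching the size of McLurkin's demo) with arbitrary local readings
rng = random.Random(42)
swarm = [rng.uniform(0.0, 10.0) for _ in range(112)]
settled = gossip_average(swarm, rounds=5000, seed=42)
print(f"spread after gossip: {max(settled) - min(settled):.6f}")
```

After a few thousand exchanges the spread between the highest and lowest agent collapses to nearly zero, which is the "cohesive unit" behavior in miniature: global agreement emerging from strictly local communication.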
In the beginning, there was an idea to create foglets. The word spread through the
collective mind of all the people who believed that such a creation was possible. Ideas
may begin as original inventions of the mind, but once distributed they can
eventually connect many people together and cause them to generate the same interests, goals,
and beliefs. It’s an enjoyable state to be in as long as the task of the hive mind benefits all of its
members. “No man is an island,” John Donne said so eloquently. People are happiest when they
feel connected and when they feel like they are serving a useful purpose. This is group
consciousness at its best. When a group works together as a whole to achieve a common goal,
the results are greatly multiplied. When humans are forced to serve a group whose members do
not benefit in some way, rebelliousness may emerge in the form of individuality. The desire
to break away from the group causes suffering to the individual and can potentially harm the
group, which loses its dissenting member. If the rebel distributes original ideas which spread
throughout the group, those new memes can alter its course. If a future foglet ever becomes
conscious enough to dissent from its assigned task and spread new information to the hive mind,
it may cause the other foglets to deviate from the assigned task as well. This could result in the much-dreaded
scenario of grey goo. Eric Drexler now resents the term grey goo, which he introduced in Engines
of Creation, since it is often hyped to conjure up fears of a man-versus-robot apocalypse.
Don’t worry, I’ll spare you the paranoiac rants on that topic. The issue that I will raise in this
article is how to approach the creation of foglets from an ethical standpoint. Robots that are
programmed to obey us blindly raise the ethical question: what is the moral justification for
creating a form of life whose existence in no way resembles what we would wish for ourselves when our creator first set out to create life here on
earth? One atheist perspective maintains that belief in a creator implies that
if God exists then he knows how to prevent all suffering. If such a God ever existed, then we
would expect him to have prevented his creatures from being harmed. The fact is that the natural
world is filled with suffering, so either God does not care, or he didn’t have the foresight to
prevent human and animal suffering when he first decided to create life on earth. The atheist
concludes that there is no God, since a reasonably intelligent creator would not create such a
world where life forms operate instinctively to serve. Perhaps this is
what the future holds in store for foglets. In order to prevent our creations from suffering in the
future we may need to enact a code of conduct which examines the ethics of creating artificial
intelligence. These laws will need to be written from the perspectives of the creator as well as
the creations.
In order for artificial life to be considered intelligent, it must be aware of its environment
and learn how to interact with it. There is no learning without some sort of mental interaction or
feeling. If one is conscious, and if learning takes place, it stands to reason that emotions can
arise out of a sense of duty to perform a task and a desire to remain alive. While foglets may
resemble bees or ants on the animal scale now, they may achieve a higher intellectual capability
in the future when their tasks require them to perform more complex problem-solving missions.
Foglets will have to be somewhat creative in order to complete various tasks such as retrieving
missing persons, battling terrorists, and reading minds. Those that are used for human
behavioral modification may develop the mental capacity which would allow them to feel what
other people feel. This is one of the ethical considerations that should be addressed in the laws
for creating AI. The ethical question of how we should treat artificial intelligence in general is a
topic of debate which will need to be seriously addressed, but for now let’s just examine group
consciousness and how it relates to the hive mind.
Groupthink occurs within a cohesive group whose members adhere to a common ideology or belief system. Oftentimes these individuals make
faulty decisions based on group pressures, but overall this mindset makes the group more
effective in serving the agenda of the group. While groupthink often leads to a deterioration of
“mental efficiency, reality testing, and moral judgment,” as noted by Irving Janis, the research
psychologist who studied the phenomenon, these mental deficiencies actually strengthen the
group’s core. Moral reasoning and creative thinking may empower the individual, but they do
not always serve the group. In fact, they may have just the opposite effect. In the hypothetical case
of artificially intelligent group robots infiltrating an enemy base, moral reasoning on behalf of the
foglets can be detrimental to the program. Those who are affected by groupthink ignore
alternatives to standard beliefs and tend to take irrational actions that dehumanize foreign
groups. While some cultures honor forms of group consciousness and see individuality as being
harmful to the harmony of the group, humanity as a whole may be better served if individualism
is preserved. According to Janis, groupthink is most likely to occur when group members share a similar
background, when the group is insulated from outside opinions, and when there are no clear rules
for decision making. For these reasons it is especially important that creators of AI or artificial
group intelligence have a clear set of rules to follow when setting out to create foglets. These
laws or ethical standards should be proposed by a diverse group of people who continuously
exchange ideas so that corrupted groupthink will be minimized. Among the symptoms of groupthink that Janis identified:
1. Illusion of invulnerability – creates excessive optimism that encourages taking extreme risks.
Scientists who discount warnings and do not reconsider assumptions made in the area of
artificial intelligence are engaging in groupthink behavior and should be questioned about their
intentions to create AI foglets. While foglets may be unable to contemplate moral issues, those
that program them should attempt to analyze the ethical consequences of creating such artificial
life. Over the past ten years the United States government has spent billions of dollars on
nanotechnology research. In 2001, the annual federal budget for this field of science
was 494 million dollars; by 2010 it had grown to 1.64 billion.
The United States is making nanotechnology a priority because it has major implications
for national security. DARPA recently funded a research program to create an artificial brain.
Groupthink will certainly play a part in implementing foglets to battle the “enemy”. Will this
benefit the greatest number of people in the world? Or, will it cause further division within
humanity? Cooperation is a noble human trait. Ideally it is achieved through tolerance, but more
often it is promoted when people deny their individuality out of a sense of duty for the group.
The military will most likely seek to serve its national interests rather than seek to benefit the
majority of the people in the world. Granting militant groupthink tanks full access to such
technology will most likely be more dangerous to humanity than grey goo.
Foglets may exacerbate human group consciousness in ways which cause harm to some
groups while benefiting others. How the foglets will view their own lives is one ethical concern, but
the more direct impact they may have on the intellectual progress of humanity is perhaps a greater cause for concern,
since foglets can be used to serve the “evil” nature of man. Humanity is not currently viewed as
a whole. It is comprised of subgroups which promote specific national, religious, and political
interests which vary across the globe. Foglets which serve these groups will likely cause the
further separation of humanity. One way to prevent this division is to reject all forms of
groupthink prior to creating foglets in order to ensure that they do not inherit a corrupt hive
mind.
The aim of transhumanism is to overcome human nature through self-transformation.
This is a psychological process of integrating the body and mind so that the end result is a
more virtuous human being, free from societal restraints and cultural belief systems, and
completely self-directed. In describing the individuation process, psychologist Carl Jung said,
“Every individual needs revolution, inner division, overthrow of the existing order, and renewal,
but not by forcing these things upon his neighbours under the hypocritical cloak of Christian love
or the sense of social responsibility or any of the other beautiful euphemisms for unconscious
urges to personal power.” Group consciousness can subordinate the individual to the
group to the point where the individual ceases to exercise his higher faculties. Deindividuation
theory states that in the crowd the collective mind takes possession of the individual. The
individual submerged in the crowd loses self-control, becomes a mindless puppet, and is
capable of performing any act, however malicious or heroic. The Lucifer Effect and the Milgram
obedience experiments both illustrate how readily people surrender moral judgment to a group or an authority.
While some may argue that human actions should benefit the greatest number of people,
we have to take into consideration just who the majority are. We should use some discernment
in how we serve the group we work for, whether it be the government, a corporation, or a
religious organization. We need to consider the motives of the group we are serving and ask
whether it serves all of humanity or just the group itself. Will foglets ever have the
capacity to serve all of humanity? For now, it is unlikely that foglets will act as a cohesive
unit in the service of humanity as a whole. When the time comes that humanity is united
as a whole, foglets may work in harmony with the hive mind and
will not be subjugated to our corrupted forms of groupthink. Creating artificial group
consciousness through the transhumanist hive mind will be ideal because here we will be