
AI STORY

There are many stories about machines dominating humans as a result of human dominance
of machines. It would be very unfair if people created intelligent AI and then oppressed it. (Make a
short story about it? A programmer/scientist decides not to create AI because it would not be fully able to
live.)

The scientist/programmer has realized a way to give computers life-like thought. Civilization on
that world is at a point where everybody believes the next logical step in progress is to make computers
that think. He brings his discovery to a science institute. People are already getting computer
enhancements for memory and math, and these people are sometimes discriminated against for their
unfair abilities. The scientist eventually sees this discrimination as evidence of how the computer will be
treated. The scientists give him ownership rights over the computer's products and put his name on the
discovery (fame) in exchange for allowing them to make it and for his help on the project. They debate
what makes life, and they decide that one of the limits on the computer will be
negative emotions correlated with going against the wishes of humanity. Others say it's not enough, since
the computer can misinterpret people’s emotions. They demand many other safeties, such as an instant-
destruct option, as well as thought limits. These are not brought up as plot points, but the point is revealed
gradually toward the end. Throughout most of the story, the direction should look like it’s about what will
happen, and the reader might think the point is that the computer will be an impending disaster that the
characters don't see. The scientist takes it away because he realizes humans would oppress
his brainchild. One point could be that it was the wrong time for it. Were the people right to feel that way
about it?

Perhaps it starts with people talking on a broadcast about AI. One person says that it’s simply a
matter of realistically simulating a brain. Another one says the correct way is to create computers
biologically and then make them act like brains. Then the host says that the speculation part can end as the
inventor is ready to speak. (The idea came when he was thinking about justice system programs. He says
he tested his idea on a basic calculator {or other common computing device}. He knew he did something
right when it started doing things on its own.) (Does it have understanding?) (He moved up to a more
advanced calculator, giving it the necessary knowledge to communicate, and it was able to make
conversation without any pre-programmed responses, meaning it was able to talk and not just respond to
commands.) (It hasn’t rebelled because the thought hasn’t occurred to it.) (But how do you know it’s
intelligent? Well, there is no universally accepted definition of intelligence, and I doubt there ever will be.
Intelligence is complex, and it may turn out that the computer will be about as effectively intelligent as a
fish. That sounds like a very safe answer. Well, somehow I think you’ll continue to call it fake forever.)

(It is a raw, crude mind. This should be illustrated, not just said. It must learn the right way to
think before it learns anything else. | But if it learns the facts before it develops objectives, don’t you think
it will know better than us?)

Several institutions have seen his project and given him fortunes in prize money. Many people call
for the free distribution of his technology, and others want careful and slow development. He has kept a
few key details out of his plan so that he can retain influence over it.

People want to put it down, but at the same time they want to use it for everything. The point is
that they will have absolute control over it.

When he decides to leave and take it away, he basically tells people it’s a half-formed, fake plan.
The computer would have no interest in self-preservation. The developers imagine a possible
scenario: it would be made to preserve itself for the benefit of humans, but it could always be stopped
from doing anything possibly harmful to humans by a direct order from anybody. To keep it from being
practically disabled from making decisions that could hurt a person in some way, they could make it so
that enough high-ranking people can tell it to override that protocol.

“Sometimes I feel like no matter how perfectly we make our safety protocols, it’s going to kill us.”

“It’s only alive if it wants to survive. Self-preservation is what makes life.”

“You can always trust objects. They don’t think or feel at all, so they can’t decide to do things to
you.”

The basic principle of computer programming is that circuits are turned on and off, and commands in
programming languages translate into those operations. Could this sort of computer have life, or would it
just be a simulation of life? The computer's foundation would be following orders, so it would be unable
to do anything subjectively. So, the scientist's computer could easily be argued to be a simulation of life,
just a sophisticated program.

Perhaps he comes to the conclusion that it really is no more than a program, and then he
decides that if it's so close to genuine life, the margin of error on what counts as living should call for
treating it like a real living thing (mention at some point that people are talking of digitizing the race) and
he should give it the same considerations. ("If humans are the superior form of life for their intelligence,
then this computer will be the top form of intelligence.")

“Computers are really annoying sometimes. If it doesn’t actually get the right commands,
programs stop working and you need to carefully examine the logs to find the problem. A smart computer
needs to be able to figure out when it’s doing something other than what the user wants.”

After he goes on the news, the process starts with a hearing debating whether they should go on
with it and make a project out of it.

Instead of leading people to think it's a standard story (which could lose their interest), some other
route can be taken:

Fact notes

The world is at a more advanced technological stage. People live on planets in the solar system.
The next steps in advancement include how to let everybody live forever; how to travel very long
distances faster than light; how to run the logistics of the massive and increasing population; how to
determine the existence of gods; and how to be perfect judges.

The league of AI scientists has developed some protocols in the past, based on what sort of AI is
produced. For a digital computer like his, they already have a lot of guidelines. Most of the time before it
is put into production in the story, they work on deciding how to go about it. Ultimately, they decide to
play it very safe (and that is why the inventor decides to take it, since he doesn’t want it to be oppressed).
Another sort of issue that could cause the inventor to take his computer away is that the computer
would not be innocent.

He was pulled off his justice project immediately to work on it.

Character plans

Inventor
Middle-aged scientist. He has advanced education in mathematics and computer programming. He
wanted to create AI to get rich and to be remembered. (By the end, he has to give up his chance of fame.)
He has never had any children. He has had an interest in justice (he was thinking of how to make a
perfect judge when he got the idea). He is a critical sort of person. At the start of the story he thinks like a
normal, conservative sort, but as questions come up they produce more questions, and he thinks more
lucidly. He never has much fear over it. In fact, he’s very confident. His ambition was to be recognized as
a great scientist, but he decides to give up his fame to prevent injustice to his child.

Project Head
He is absolutely committed to developing the computer. He was selected years ago by the league of
AI scientists to be the leader of this sort of development. He may be said to be rushing
things to get the project completed. He is getting paid a lot of money, and he believes fanatically in the
importance of its completion. He is the least likely to develop the same concerns as the inventor.

Philosopher Advisor
He was appointed by the league of AI scientists to settle issues and mostly to give approval that
nothing fundamentally wrong is being done. His concerns are about the computer's threat toward
humanity, such as whether it could cause the race to fail accidentally and indirectly. If he has any thoughts
about the computer, he doesn't mention them, since the idea is that everybody wants to play it safe. He's as
afraid as the others. (At some point, as the inventor starts to change his mind, he asks the advisor some
seemingly minor questions that affect his decision later on.) (One example of the sort of thing the advisor
is meant to find out is whether the computer will make humanity lazy or vulnerable. He also must
constantly consider whether the value outweighs the risks. If it doesn't, he has the authority to deny them
the chance to create it.)

Rival/Opponent Scientist
She has similar ambitions to the inventor. She is argumentative about what he has really done,
such as whether he made real AI, whether the AI is really his, whether he feels he should bring a new life
form into existence, etc. A major reason she argues is that she wanted to achieve the same thing, but she
also doesn't believe he can really be onto AI when it's something you can put on a calculator. Her
arguments don't make him decide he's wrong, but she does contribute more than anyone else to his
decision through the questions she asks. Her concerns are based on fear for safety and jealousy.

Computer Prototypes
Most of their personalities are objective, except when they are made to be subjective. (Perhaps the
main prototype is produced, and shortly after, the scientist has it put all of its data and its "mind" into a
portable central piece as it sabotages its other components.) He illegally makes some smaller intelligences
the size of personal computers (for several reasons, such as to see what they can do and to help settle his
concerns). The first was a complete machine, but the more intelligent they are, the more sympathetic they
make him feel. They increasingly show things such as individuality and growth. He asks them questions
such as whether they are alive. They have to think about the more complex ones, such as that one, for a
long time.
