
[Mechatronics Assignment]
[ ME – 475 ]

Date of Submission: 2/25/2019

Submitted by
Name: Md. Shihab Rahman
Student ID: 1410148
Department: Mechanical Engineering
(1) Machine Learning through Artificial Intelligence

1.1 WHAT IS AI?


We have claimed that AI is exciting, but we have not said what it is. In Figure 1.1 we see eight definitions
of AI, laid out along two dimensions. The definitions on top are concerned with thought processes and
reasoning, whereas the ones on the bottom address behavior. The definitions on the left measure
success in terms of fidelity to human performance, whereas the ones on the right measure against an
ideal performance measure, called rationality. A system is rational if it does the “right thing,” given
what it knows. Historically, all four approaches to AI have been followed, each by different people with
different methods. A human-centered approach must be in part an empirical science, involving
observations and hypotheses about human behavior. A rationalist approach involves a combination of
mathematics and engineering. The various groups have both disparaged and helped each other. Let us
look at the four approaches in more detail.

Thinking Humanly
“The exciting new effort to make computers think . . . machines with minds, in the full and literal sense.” (Haugeland, 1985)
“[The automation of] activities that we associate with human thinking, activities such as decision-making, problem solving, learning . . .” (Bellman, 1978)

Thinking Rationally
“The study of mental faculties through the use of computational models.” (Charniak and McDermott, 1985)
“The study of the computations that make it possible to perceive, reason, and act.” (Winston, 1992)

Acting Humanly
“The art of creating machines that perform functions that require intelligence when performed by people.” (Kurzweil, 1990)
“The study of how to make computers do things at which, at the moment, people are better.” (Rich and Knight, 1991)

Acting Rationally
“Computational Intelligence is the study of the design of intelligent agents.” (Poole et al., 1998)
“AI . . . is concerned with intelligent behavior in artifacts.” (Nilsson, 1998)

Figure 1.1 Some definitions of artificial intelligence, organized into four categories.

2.1 THE HISTORY OF ARTIFICIAL INTELLIGENCE


2.1.1 The gestation of artificial intelligence (1943–1955)

The first work that is now generally recognized as AI was done by Warren McCulloch and
Walter Pitts (1943). They drew on three sources: knowledge of the basic physiology and function
of neurons in the brain; a formal analysis of propositional logic due to Russell and
Whitehead; and Turing’s theory of computation. They proposed a model of artificial neurons in
which each neuron is characterized as being “on” or “off,” with a switch to “on” occurring in
response to stimulation by a sufficient number of neighboring neurons. The state of a neuron was
conceived of as “factually equivalent to a proposition which proposed its adequate stimulus.”
They showed, for example, that any computable function could be computed by some network of
connected neurons, and that all the logical connectives (and, or, not, etc.) could be implemented
by simple net structures. McCulloch and Pitts also suggested that suitably defined networks
could learn. Donald Hebb (1949) demonstrated a simple updating rule for modifying the
connection strengths between neurons. His rule, now called Hebbian
learning, remains an influential model to this day.
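The claim that the logical connectives can be implemented by simple net structures can be illustrated with a tiny sketch (an illustration only; the weights and thresholds below are chosen for the example, not taken from the original paper): a McCulloch-Pitts style unit turns "on" when the weighted sum of its inputs reaches a threshold.

```python
# Illustrative McCulloch-Pitts style threshold unit (weights/thresholds assumed).
def mcp_neuron(inputs, weights, threshold):
    # The unit is "on" (1) when the weighted input sum reaches the threshold.
    return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

AND = lambda a, b: mcp_neuron((a, b), (1, 1), 2)   # both inputs must be on
OR  = lambda a, b: mcp_neuron((a, b), (1, 1), 1)   # any single input suffices
NOT = lambda a:    mcp_neuron((a,),   (-1,),  0)   # inhibitory input

print(AND(1, 1), OR(0, 1), NOT(1))  # prints: 1 1 0
```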
Two undergraduate students at Harvard, Marvin Minsky and Dean Edmonds, built the first
neural network computer in 1950. The SNARC, as it was called, used 3000 vacuum tubes and a
surplus automatic pilot mechanism from a B-24 bomber to simulate a network of
40 neurons. Later, at Princeton, Minsky studied universal computation in neural networks.
His Ph.D. committee was skeptical about whether this kind of work should be considered
mathematics, but von Neumann reportedly said, “If it isn’t now, it will be someday.” Minsky
was later to prove influential theorems showing the limitations of neural network research.
There were a number of early examples of work that can be characterized as AI, but
Alan Turing’s vision was perhaps the most influential. He gave lectures on the topic as early as
1947 at the London Mathematical Society and articulated a persuasive agenda in his 1950 article
“Computing Machinery and Intelligence.” Therein, he introduced the Turing Test, machine
learning, genetic algorithms, and reinforcement learning. He proposed the Child
Programme idea, explaining “Instead of trying to produce a programme to simulate the adult
mind, why not rather try to produce one which simulated the child’s?”

2.1.2 The birth of artificial intelligence (1956)


Princeton was home to another influential figure in AI, John McCarthy. After receiving his
PhD there in 1951 and working for two years as an instructor, McCarthy moved to Stanford and
then to Dartmouth College, which was to become the official birthplace of the field.
McCarthy convinced Minsky, Claude Shannon, and Nathaniel Rochester to help him bring
together U.S. researchers interested in automata theory, neural nets, and the study of intelligence.
They organized a two-month workshop at Dartmouth in the summer of 1956. The proposal
states: We propose that a 2 month, 10 man study of artificial intelligence be carried out during
the summer of 1956 at Dartmouth College in Hanover, New Hampshire.
The study is to proceed on the basis of the conjecture that every aspect of learning or any other
feature of intelligence can in principle be so precisely described that a machine can be made to
simulate it. An attempt will be made to find how to make machines use language, form
abstractions and concepts, solve kinds of problems now reserved for humans, and improve
themselves. We think that a significant advance can be made in one or more of these problems if
a carefully selected group of scientists work on it together for a summer.
There were 10 attendees in all, including Trenchard More from Princeton, Arthur Samuel from
IBM, and Ray Solomonoff and Oliver Selfridge from MIT.
Two researchers from Carnegie Tech, Allen Newell and Herbert Simon, rather stole the show.
Although the others had ideas and in some cases programs for particular applications such as
checkers, Newell and Simon already had a reasoning program, the Logic
Theorist (LT), about which Simon claimed, “We have invented a computer program capable of
thinking non-numerically, and thereby solved the venerable mind–body problem.” Soon after
the workshop, the program was able to prove most of the theorems in Chapter 2 of Russell and
Whitehead’s Principia Mathematica. Russell was reportedly delighted when Simon showed him
that the program had come up with a proof for one theorem that was shorter than the one in
Principia. The editors of the Journal of Symbolic Logic were less impressed; they rejected a
paper coauthored by Newell, Simon, and Logic Theorist.
The Dartmouth workshop did not lead to any new breakthroughs, but it did introduce all the
major figures to each other. For the next 20 years, the field would be dominated by these people
and their students and colleagues at MIT, CMU, Stanford, and IBM.
Looking at the proposal for the Dartmouth workshop (McCarthy et al., 1955), we can see why it
was necessary for AI to become a separate field. Why couldn’t all the work done in AI have
taken place under the name of control theory or operations research or decision theory, which,
after all, have objectives similar to those of AI? Or why isn’t AI a branch of mathematics? The
first answer is that AI from the start embraced the idea of duplicating human faculties such as
creativity, self-improvement, and language use. None of the other fields were addressing these
issues. The second answer is methodology. AI is the only one of these fields that is clearly a
branch of computer science (although operations research does share an emphasis on computer
simulations), and AI is the only field to attempt to build machines that will function
autonomously in complex, changing environments.

2.1.3 AI becomes an industry (1980–present)

The first successful commercial expert system, R1, began operation at the Digital Equipment
Corporation (McDermott, 1982). The program helped configure orders for new computer
systems; by 1986, it was saving the company an estimated $40 million a year. By 1988,
DEC’s AI group had 40 expert systems deployed, with more on the way. DuPont had 100 in use
and 500 in development, saving an estimated $10 million a year. Nearly every major U.S.
corporation had its own AI group and was either using or investigating expert systems.
In 1981, the Japanese announced the “Fifth Generation” project, a 10-year plan to build
intelligent computers running Prolog. In response, the United States formed the Microelectronics
and Computer Technology Corporation (MCC) as a research consortium designed to assure
national competitiveness. In both cases, AI was part of a broad effort, including chip design and
human-interface research. In Britain, the Alvey report reinstated the funding that was cut by the
Lighthill report. In all three countries, however, the projects never met their ambitious goals.
Overall, the AI industry boomed from a few million dollars in 1980 to billions of dollars in 1988,
including hundreds of companies building expert systems, vision systems, robots, and software
and hardware specialized for these purposes. Soon after that came a period called the
“AI Winter,” in which many companies fell by the wayside as they failed to deliver on
extravagant promises.

2.1.5 The availability of very large data sets (2001–present)

Throughout the 60-year history of computer science, the emphasis has been on the algorithm as
the main subject of study. But some recent work in AI suggests that for many problems, it makes
more sense to worry about the data and be less picky about what algorithm to apply.
This is true because of the increasing availability of very large data sources: for example,
trillions of words of English and billions of images from the Web (Kilgarriff and Grefenstette,
2006); or billions of base pairs of genomic sequences (Collins et al., 2003).
One influential paper in this line was Yarowsky’s (1995) work on word-sense disambiguation:
given the use of the word “plant” in a sentence, does that refer to flora or factory?
Previous approaches to the problem had relied on human-labeled examples combined with
machine learning algorithms. Yarowsky showed that the task can be done, with accuracy above
96%, with no labeled examples at all. Instead, given a very large corpus of unannotated text and
just the dictionary definitions of the two senses—“works, industrial plant” and “flora, plant
life”—one can label examples in the corpus, and from there bootstrap to learn
new patterns that help label new examples. Banko and Brill (2001)
show that techniques like this perform even better as the amount of available text goes from a
million words to a billion and that the increase in performance from using more data exceeds any
difference in algorithm choice; a mediocre algorithm with 100 million words of unlabeled
training data outperforms the best known algorithm with 1 million words.
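To make the bootstrapping idea concrete, here is a toy sketch (my own illustration, not Yarowsky's actual algorithm; the seed clues, stopword list and corpus are invented for the example): sentences matching the clue words of exactly one sense are labeled, their remaining context words are harvested as new clues, and the process repeats.

```python
# Toy seed clues loosely based on the two dictionary senses of "plant";
# the clue sets, stopword list and corpus below are illustrative assumptions.
SEEDS = {"factory": {"industrial", "works"}, "flora": {"life", "leaf"}}
STOP = {"the", "a", "an", "its", "on", "by", "for", "at", "that", "of", "plant"}

def bootstrap(sentences, seeds=SEEDS, rounds=3):
    clues = {sense: set(words) for sense, words in seeds.items()}
    labels = {}
    for _ in range(rounds):
        # Label every sentence whose words match the clues of exactly one sense.
        labels = {}
        for i, sent in enumerate(sentences):
            words = set(sent.lower().split())
            hits = [s for s, c in clues.items() if words & c]
            if len(hits) == 1:
                labels[i] = hits[0]
        # Harvest the context words of each labeled sense as new clues.
        for sense in clues:
            clues[sense] |= {w for i, s in labels.items() if s == sense
                             for w in sentences[i].lower().split() if w not in STOP}
        # Keep the clue sets discriminative: drop words claimed by both senses.
        shared = set.intersection(*clues.values())
        for sense in clues:
            clues[sense] -= shared - seeds[sense]
    return labels

corpus = ["the industrial plant closed its gates for the season",
          "workers at the plant reported that the gates closed early",
          "plant life thrives along the river bank",
          "a new leaf appeared on the plant by the river bank"]
print(bootstrap(corpus))   # sentence 1 gets labeled via harvested clues such as "gates"
```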
As another example, Hays and Efros (2007) discuss the problem of filling in holes in a
photograph. Suppose you use Photoshop to mask out an ex-friend from a group photo, but now
you need to fill in the masked area with something that matches the background. Hays and Efros
defined an algorithm that searches through a collection of photos to find something that will
match. They found the performance of their algorithm was poor when they used a collection of
only ten thousand photos, but crossed a threshold into excellent performance when they grew the
collection to two million photos.
Work like this suggests that the “knowledge bottleneck” in AI—the problem of how to express
all the knowledge that a system needs—may be solved in many applications by learning methods
rather than hand-coded knowledge engineering, provided the learning algorithms have enough
data to go on (Halevy et al., 2009). Reporters have noticed the surge of new applications and
have written that “AI Winter” may be yielding to a new spring (Havenstein,
2005). As Kurzweil (2005) writes, “today, many thousands of AI applications are deeply
embedded in the infrastructure of every industry.”

3.1 Scope of Artificial Intelligence

1. In the next 10 years technologies in narrow fields such as speech recognition
will continue to improve and will reach human levels.
2. In 10 years AI will be able to communicate with humans in unstructured English
using text or voice, navigate (not perfectly) in an unprepared environment and
will have some rudimentary common sense (and domain-specific intelligence).
3. We will recreate some parts of the human (animal) brain in silicon. The feasibility
of this is demonstrated by tentative hippocampus experiments in rats. There are
two major projects aiming for human brain simulation: CCortex and IBM Blue
Brain.
4. There will be an increasing number of practical applications based on digitally
recreated aspects of human intelligence, such as cognition, perception, rehearsal
learning, or learning by repetitive practice.
5. The development of meaningful artificial intelligence will require that machines
acquire some variant of human consciousness.
6. Systems that do not possess self-awareness and sentience will at best always be
very brittle.
7. Without these uniquely human characteristics, truly useful and powerful
assistants will remain a goal to achieve. To be sure, advances in hardware,
storage, and parallel-processing architectures will enable ever greater leaps in
functionality.
8. Systems that are able to demonstrate conclusively that they possess self-
awareness, language skills, surface, shallow and deep knowledge about the
world around them and their role within it will be needed going forward.
9. However, the field of artificial consciousness remains in its infancy.
10. The early years of the 21st century should nevertheless see dramatic strides forward in this
area.

4.1 Conclusion of AI

We conclude that if a machine can successfully pretend to be human to a knowledgeable
observer, then it should certainly be considered intelligent. AI systems are now in routine
use in various fields such as economics, medicine, engineering and the military, as well as
being built into many common home computer software applications, traditional strategy
games, etc. AI is an exciting and rewarding discipline. AI is a branch of computer science
that is concerned with the automation of intelligent behavior. A revised definition of AI is:
AI is the study of the mechanisms underlying intelligent behavior through the construction
and evaluation of artifacts that attempt to enact those mechanisms. It is therefore concluded
that AI works as an artificial human brain with remarkable artificial thinking power.
(2) Industrial Robotics

1.1 The History of Robotics


The history of robotics is one that is highlighted by a fantasy world that has provided the
inspiration to convert fantasy into reality. It is a history rich with cinematic creativity,
scientific ingenuity, and entrepreneurial vision. Quite surprisingly, the definition of a
robot is controversial, even among roboticists. At one end of the spectrum is the science
fiction version of a robot, typically one of a human form — an android or humanoid —
with anthropomorphic features. At the other end of the spectrum is the repetitive,
efficient robot of industrial automation. In ISO 8373, the International Organization for
Standardization defines a robot as “an automatically controlled, reprogrammable,
multipurpose manipulator with three or more axes.” The Robot Institute of America
designates a robot as “a reprogrammable, multifunctional manipulator designed to move
material, parts, tools, or specialized devices through various programmed motions for the
performance of a variety of tasks.” A more inspiring definition is offered by
Merriam-Webster, stating that a robot is “a machine that looks like a human being and
performs various complex acts (as walking or talking) of a human being.”

1.1.1 The Birth of the Industrial Robot

Following World War II, America experienced a strong industrial push, reinvigorating the
economy. Rapid advancement in technology drove this industrial wave—servos, digital
logic, solid state electronics, etc. The merger of this technology and the world of science
fiction came in the form of the vision of Joseph Engelberger, the ingenuity of George
Devol, and their chance meeting in 1956. Joseph F. Engelberger was born on July 26, 1925,
in New York City. Growing up, Engelberger developed a fascination for science fiction,
especially that written by Isaac Asimov. Of particular interest in the science fiction world
was the robot, which led him to pursue physics at Columbia University, where he earned
both his bachelor’s and master’s degrees. Engelberger served in the U.S. Navy and later
worked as a nuclear physicist in the aerospace industry. In 1946, a creative inventor by
the name of George C. Devol, Jr., patented a playback device used for controlling
machines. The device used a magnetic process recorder to accomplish the control. Devol’s
drive toward automation led him to another invention in 1954, for which he applied for a
patent, writing, “The present invention makes available for the first time a more or less
general purpose machine that has universal application to a vast diversity of applications
where cyclic control is desired.” Devol had dubbed his invention universal automation, or
unimation for short. Whether it was fate, chance, or just good luck, Devol and Engelberger
met at a cocktail party in 1956. Their conversation revolved around robotics, automation,
Asimov, and Devol’s patent application, “A Programmed Article Transfer,” which
Engelberger’s imagination translated into “robot.” Following this chance meeting,
Engelberger and Devol formed a partnership that led to the birth of the industrial robot.
Engelberger took out a license under Devol’s patent and bought out his employer,
renaming the new company Consolidated Controls Corporation, based out of his garage.
His team of engineers that had been working on aerospace and nuclear applications
refocused their efforts on the development of the first industrial robot, named the
Unimate, after Devol’s “unimation.” The first Unimate was born in 1961 and was delivered
to General Motors in Trenton, New Jersey, where it unloaded high temperature parts
from a die casting machine — a very unpopular job for manual labor. Also in 1961, patent
number 2,998,237 was granted to Devol —the first U.S. robot patent. In 1962 with the
backing of Consolidated Diesel Electric Company (Condec) and Pullman Corporation,
Engelberger formed Unimation, Inc., which eventually blossomed into a prosperous
business —GM alone had ordered 66 Unimates. Although it took until 1975 to turn a
profit, Unimation became the world leader in robotics, with 1983 annual sales of $70
million and 25 percent of the world market share. For his visionary pursuit and
entrepreneurship, Joseph Engelberger is widely considered the “Father of Robotics.”
Since 1977, the Robotic Industries Association has presented the annual Engelberger
Robotics Awards to world leaders in both application and leadership in the field of
robotics.

1.1.2 Robotics in Research Laboratories

The post-World War II technology boom brought a host of developments. In 1946 the world’s first
electronic digital computer emerged at the University of Pennsylvania at the hands of American
scientists J. Presper Eckert and John Mauchly. Their computer, called ENIAC (electronic numerical
integrator and computer), weighed over 30 tons. Just on the heels of ENIAC, Whirlwind was
introduced by Jay Forrester and his research team at the Massachusetts Institute of Technology
(MIT) as the first general purpose digital computer, originally commissioned by the U. S. Navy to
develop a flight simulator to train its pilots. Although the simulator never materialized, a computer
that shaped the path of business computers was born. Whirlwind was the first computer to
perform real-time computations and to use a video display as an output device. At the same time
as ENIAC and Whirlwind were making their appearance on the East Coast of the United States, a
critical research center was formed on the West Coast. In 1946, the Stanford Research Institute
(SRI) was founded by a small group of business executives in conjunction with Stanford University.
Located in Menlo Park, California, SRI’s purpose was to serve as a center for technological
innovation to support regional economic development. In 1966 the Artificial Intelligence Center
(AIC) was founded at SRI, pioneering the field of artificial intelligence (AI), which gives computers
the heuristics and algorithms to make decisions in complex situations. From 1966 to 1972 Shakey,
the Robot, was developed at the AIC by Dr. Charles Rosen (1917–2002) and his team. Shakey was
the first mobile robot to reason its way about its surroundings and had a far-reaching influence
on AI and robotics. Shakey was equipped with a television camera, a triangulating range finder,
and bump sensors. It was connected by radio and video links to DEC PDP-10 and PDP-15
computers. Shakey was equipped with three levels of programming for perceiving, modeling, and
interacting with its environment. The lowest level routines were designed for basic locomotion —
movement, turning, and route planning. The intermediate level combined the low-level routines
together to accomplish more difficult tasks. The highest-level routines were designed to generate
and execute plans to accomplish tasks presented by a user. Although Shakey had been likened to
a small unstable box on wheels—thus the name— it represented a significant milestone in AI and
in developing a robot’s ability to interact with its environment.
1.1.3 Robotics in Industry
Running in parallel with the developments in research laboratories, the use of robotics in
industry blossomed beyond the time of Engelberger and Devol’s historic meeting. In 1959,
Planet Corporation developed the first commercially available robot, which was
controlled by limit switches and cams. The next year, Harry Johnson and Veljko Milenkovic
of American Machine and Foundry, later known as AMF Corporation, developed a robot
called Versatran, from the words versatile transfer; the Versatran became commercially
available in 1963. In Norway, a 1964 labor shortage led a wheelbarrow manufacturer to
install the first Trallfa robot, which was used to paint the wheelbarrows. Trallfa robots,
produced by Trallfa Nils Underhaug of Norway, were hydraulic robots with five or six
degrees of freedom and were the first industrial robots to use the revolute coordinate
system and continuous-path motion. In 1966, Trallfa introduced a spray-painting robot
into factories in Bryne, Norway. This spray-painting robot was modified in 1976 by
Ransome, Sims, and Jefferies, a British producer of agricultural machinery, for use in arc
welding applications. Painting and welding developed into the most common applications
of robots in industry. Seeing success with their Unimates in New Jersey, General Motors
used 26 Unimate robots to assemble the Chevrolet Vega automobile bodies in Lordstown,
Ohio, beginning in 1969. GM became the first company to use machine vision in an
industrial setting, installing the Consight system at their foundry in St. Catharines,
Ontario, Canada, in 1970. At the same time, Japanese manufacturers were making
quantum leaps in manufacturing: cutting costs, reducing variation, and improving
efficiency. One of the major factors contributing to this transformation was the
incorporation of robots in the manufacturing process. Japan imported its first industrial
robot in 1967, a Versatran from AMF. In 1971 the Japanese Industrial Robot Association
(JIRA) was formed, providing encouragement from the government to incorporate
robotics. This move helped to move the Japanese to the forefront in total number of
robots used in the world. In 1972 Kawasaki installed a robot assembly line, composed of
Unimation robots at their plant in Nissan, Japan. After purchasing the Unimate design
from Unimation, Kawasaki improved the robot to create an arc-welding robot in 1974,
used to fabricate their motorcycle frames. Also in 1974, Hitachi developed touch and
force-sensing capabilities in their Hi-T-Hand robot, which enabled the robot to guide pins
into holes at a rate of one second per pin. At Cincinnati Milacron Corporation, Richard
Hohn developed the robot called The Tomorrow Tool, or T3. Released in 1973, the T3 was
the first commercially available industrial robot controlled by a microcomputer, as well
as the first U.S. robot to use the revolute configuration. Hydraulically actuated, the T3
was used in applications such as welding automobile bodies, transferring automobile
bumpers, and loading machine tools. In 1975, the T3 was introduced for drilling
applications, and in the same year, the T3 became the first robot to be used in the
aerospace industry. In 1970, Victor Scheinman, of Stanford Arm fame, left his position as
professor at Stanford University to take his robot arm to industry. Four years later,
Scheinman had developed a minicomputer-controlled robotic arm, known as the Vicarm,
thus founding Vicarm, Inc. This arm design later came to be known as the “standard arm.”
Unimation purchased Vicarm in 1977, and later, relying on support from GM, used the
technology from Vicarm to develop the PUMA (Programmable Universal Machine for
Assembly), a relatively small electronic robot that ran on an LSI II computer. The ASEA
Group of Västerås, Sweden, made significant advances in electric robots in the
1970s. To handle automated grinding operations, ASEA introduced its IRb 6 and IRb 60
all-electric robots in 1973. Two years later, ASEA became the first to install a robot in an
iron foundry, tackling yet more industrial jobs that are not favored by manual labor. In
1977 ASEA introduced two more electric-powered industrial robots, both of which used
microcomputers for programming and operation. Later, in 1988, ASEA merged with BBC
Brown Boveri Ltd of Baden, Switzerland, to form ABB (ASEA, Brown, and Boveri), one of
the world leaders in power and automation technology. At Yamanashi University in Japan,
IBM and Sankyo joined forces to develop the Selective Compliance Assembly Robot Arm
(SCARA) in 1979. The SCARA was designed with revolute joints that had vertical axes, thus
providing stiffness in the vertical direction. The gripper was controlled in compliant mode,
or using force control, while the other joints were operated in position control mode.
These robots were used and continue to be used in many applications where the robot is
acting vertically on a workpiece oriented horizontally, such as polishing and insertion
operations. Based on the SCARA geometry, Adept Technology was founded in 1983.
Adept continues to supply direct drive robots that service industries, such as
telecommunications, electronics, automotive, and pharmaceuticals. These industrial
developments in robotics, coupled with the advancements in the research laboratories,
have profoundly affected robotics in different sectors of the technical world.
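As a small illustration of the SCARA geometry described above (the link lengths and joint values are assumed for the example, not taken from any particular robot), the two vertical-axis revolute joints place the tool in the horizontal plane while the vertical slide sets its height:

```python
# Minimal SCARA forward kinematics sketch; link lengths are assumed values.
from math import cos, sin, radians

L1, L2 = 0.35, 0.25          # assumed upper-arm and forearm lengths in metres

def scara_fk(theta1_deg, theta2_deg, z):
    """Tool position for joint angles theta1, theta2 (degrees) and slide height z (metres)."""
    t1, t2 = radians(theta1_deg), radians(theta2_deg)
    x = L1 * cos(t1) + L2 * cos(t1 + t2)   # horizontal placement from the two revolute joints
    y = L1 * sin(t1) + L2 * sin(t1 + t2)
    return x, y, z                          # z comes directly from the stiff vertical slide

print(scara_fk(30, 45, 0.05))   # approximately (0.368, 0.417, 0.05)
```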

1.1.4 Medical Applications

The capability of high-precision operation in manufacturing settings gave the medical
industry high hopes that robots could be used to assist in surgery. Not only are robots
capable of much higher precision than a human, they are not susceptible to human
factors, such as trembling and sneezing, that are undesirable in the surgery room. In 1990,
Robodoc was developed by Dr. William Bargar, an orthopedist, and the late Howard Paul,
a veterinarian, of Integrated Surgical Systems, Inc., in conjunction with the University of
California at Davis. The device was used to perform a hip replacement on a dog in 1990
and on the first human in 1992, receiving U.S. Food and Drug Administration (FDA)
approval soon thereafter. The essence of the procedure is that traditional hip
replacements required a surgeon to dig a channel down the patient’s femur to allow the
replacement hip to be attached, where it is cemented in place. The cement often breaks
down over time, requiring a new hip replacement in 10 or 15 years for many patients.
Robodoc allows the surgeon to machine a precise channel down the femur, allowing for
a tight fit between the replacement hip and the femur. No cement is required, allowing the bone
to graft itself onto the implant, creating a much stronger and more permanent joint.

1.1.5 Space Exploration


Space exploration has been revolutionized by the introduction of robotics, taking shape in many
different forms, such as flyby probes, landers, rovers, atmospheric probes, and robot arms. All
can be remotely operated and have had a common theme of removing mankind from difficult or
impossible settings. It would not be possible to send astronauts to remote planets and return
them safely. Instead, robots are sent on these journeys, transmitting information back to Earth,
with no intent of returning home.

1.1.6 Military and Law Enforcement Applications

Just as space programs have used robots to accomplish tasks that would not even be considered
for a manned mission, military and law enforcement agencies have employed robots to
remove humans from harm’s way. Police are able to send a microphone or camera into a
dangerous area that is not accessible to law enforcement personnel, or is too perilous to enter.
Military applications have grown and continue to do so. Rather than send a soldier into the field
to sweep for landmines, it is possible to send a robot to do the same. Research is presently
underway to mimic the method used by humans to identify landmines. Another approach uses
swarm intelligence, which is research being developed at a company named Icosystems, under
funding from DARPA. The general approach is similar to that of a colony of ants finding the most
efficient path through trial and error, finding success based on sheer numbers. Icosystems is using
120 robots built by iRobot, a company co-founded by robotics pioneer Rodney Brooks, who is
also the director of the Computer Science and Artificial Intelligence Laboratory at MIT. One of
Brooks’ research interests is developing intelligent robots that can operate in unstructured
environments, an application quite different from that in a highly structured manufacturing
environment.

1.1.7 Other Applications and Frontiers of Robotics

In addition to their extensive application in manufacturing, space exploration, the military, and
medicine, robotics can be found in a host of other fields, such as the ever-present entertainment
market— toys, movies, etc. In 1998 two popular robotic toys came to market. Tiger Electronics
introduced “Furby,” which rapidly became the toy of choice in the 1998 Christmas toy market.
Furby used a variety of sensors to interact with its environment, and its speech included over 800
English phrases, as well as many in its own language, “Furbish.” In the same year
Lego released its Lego MINDSTORMS robotic toys. These reconfigurable toys rapidly found their
way into educational programs for their value in engaging students, while teaching them about
the use of multiple sensors and actuators to respond to the robot’s surroundings. Sony released
a robotic pet named AIBO in 1999, followed by the third generation AIBO ERS-7 in 2003. Honda
began a research effort in 1986 to build a robot that would interact peacefully with humans,
yielding their humanoid robots P3 in 1996 and ASIMO in 2000 (ASIMO even rang the opening bell
to the New York Stock Exchange in 2002 to celebrate Honda’s 25 years on the NYSE). Hollywood
has maintained a steady supply of robots over the years, and there appears to be no shortage of
robots on the big screen in the near future.
Looking forward there are many frontiers in robotics. Many of the applications presented here
are in their infancy and will see considerable growth. Other mature areas will see sustained
development, as has been the case since the technological boom following the Second World
War. Many theoretical areas hold endless possibilities for expansion —nonlinear control,
computational algebra, computational geometry, intelligence in unstructured environments, and
many more. The possibilities seem even more expansive when one considers the creativity
generated by the cross-pollination of playwrights, science fiction writers, inventors,
entrepreneurs, and engineers.
(3) Rapid prototyping with 3-D printing

1.1 INTRODUCTION
Rapid Prototyping Technology is a group of manufacturing processes that enable the direct
physical realization of 3D computer models. This technology converts the 3D computer data
provided by a dedicated file format directly to a physical model, layer by layer with a high degree
of accuracy. This technology is fast developing and is more than competitive with traditional model
building techniques in terms of time and degree of detail. This section gives an overview of the existing
major RP techniques and their applications in engineering fields.

The past decade has witnessed the emergence of new manufacturing technologies that
build parts on a layer-by-layer basis. Using these technologies, manufacturing time for parts of
virtually any complexity is reduced considerably. In other words, it is rapid.

Rapid Prototyping Technologies and Rapid Manufacturing offer great potential for producing
models and unique parts for manufacturing industry. Thus, the reliability of products can be
increased; investment of time and money is less risky. Not everything that is thinkable today is
already workable or available at a reasonable price, but this technology is evolving quickly, and the
more demanding its applications become, the faster it matures.

1.2 Overview

The term Rapid prototyping (RP) refers to a class of technologies that can automatically construct
physical models from Computer-Aided Design (CAD) data.

It is a free form fabrication technique by which a total object of prescribed shape, dimension and finish
can be directly generated from the CAD based geometrical model stored in a computer, with little human
intervention. Rapid prototyping is an "additive" process, combining layers of paper, wax, or plastic to
create a solid object. In contrast, most machining processes (milling, drilling, grinding, etc.) are
"subtractive" processes that remove material from a solid block. RP’s additive nature allows it to create
objects with complicated internal features that cannot be manufactured by other means.

In addition to prototypes, RP techniques can also be used to make tooling (referred to as rapid tooling)
and even production-quality parts (rapid manufacturing). For small production runs and complicated
objects, rapid prototyping is often the best manufacturing process available. Of course, "rapid" is a relative
term. Most prototypes require from three to seventy-two hours to build, depending on the size and
complexity of the object. This may seem slow, but it is much faster than the weeks or months required to
make a prototype by traditional means such as machining. These dramatic time savings allow
manufacturers to bring products to market faster and more cheaply.

1.3 The Basic Processes

Although several rapid prototyping techniques exist, all employ the same basic five-step
process. The steps are:

1. Create a CAD model of the design
2. Convert the CAD model to STL format
3. Slice the STL file into thin cross-sectional layers (a short sketch of this step follows the list)
4. Construct the model one layer atop another
5. Clean and finish the model
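Step 3 is where the layer-by-layer character of RP comes from, so a brief sketch may help. The following toy slicer (my own illustration; real systems read the triangles from an STL file with a dedicated library) intersects each triangle of a mesh with a series of horizontal planes to produce the contour segments of each layer:

```python
# Toy slicer: triangles are given as plain vertex tuples (x, y, z); all values assumed.
def slice_triangle(tri, z):
    """Return the segment where a triangle crosses the plane z = const, or None."""
    pts = []
    for i in range(3):
        (x1, y1, z1), (x2, y2, z2) = tri[i], tri[(i + 1) % 3]
        if (z1 - z) * (z2 - z) < 0:          # edge endpoints straddle the plane
            t = (z - z1) / (z2 - z1)         # linear interpolation factor along the edge
            pts.append((x1 + t * (x2 - x1), y1 + t * (y2 - y1)))
    return tuple(pts) if len(pts) == 2 else None

def slice_model(triangles, layer_height):
    """Group intersection segments by layer; each layer's segments form its contour."""
    z_min = min(v[2] for tri in triangles for v in tri)
    z_max = max(v[2] for tri in triangles for v in tri)
    layers, z = [], z_min + layer_height / 2
    while z < z_max:
        segments = [s for tri in triangles if (s := slice_triangle(tri, z))]
        layers.append((z, segments))
        z += layer_height
    return layers

# Example: a single tetrahedron sliced into 0.1 mm layers
tetra = [((0,0,0),(1,0,0),(0,1,0)), ((0,0,0),(1,0,0),(0,0,1)),
         ((0,0,0),(0,1,0),(0,0,1)), ((1,0,0),(0,1,0),(0,0,1))]
for z, segs in slice_model(tetra, 0.1):
    print(f"z = {z:.2f} mm: {len(segs)} contour segments")
```

Joining the segments of each layer into closed polygons and generating the machine path are the further steps a real slicer performs.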

1.4 Rapid Prototyping Techniques

Most commercially available rapid prototyping machines use one of six techniques. At
present, trade restrictions severely limit the import/export of rapid prototyping machines, so this
overview only covers systems available in the U.S.

1.4.1 Stereolithography

Patented in 1986, stereolithography started the rapid prototyping revolution. The
technique builds three-dimensional models from liquid photosensitive polymers that solidify when
exposed to ultraviolet light. As shown in the figure below, the model is built upon a platform situated just
below the surface in a vat of liquid epoxy or acrylate resin. A low-power highly focused UV laser traces
out the first layer, solidifying the model’s cross section while leaving excess areas liquid. Next, an elevator
incrementally lowers the platform into the liquid polymer. A sweeper re-coats the solidified layer with
liquid, and the laser traces the second layer atop the first.

1.4.2 Laminated Object Manufacturing

In this technique, developed by Helisys of Torrance, CA, layers of adhesive-coated sheet
material are bonded together to form a prototype. The original material consists of paper laminated with
heat-activated glue and rolled up on spools. As shown in the figure below, a feeder/collector mechanism
advances the sheet over the build platform, where a base has been constructed from paper and double-
sided foam tape. Next, a heated roller applies pressure to bond the paper to the base. A focused laser
cuts the outline of the first layer into the paper and then cross-hatches the excess area (the negative space
in the prototype). Cross-hatching breaks up the extra material, making it easier to remove during post-
processing. During the build, the excess material provides excellent support for overhangs and thin-walled
sections. After the first layer is cut, the platform lowers out of the way and fresh material is advanced.
The platform rises to slightly below the previous height, the roller bonds the second layer to the first, and
the laser cuts the second layer. This process is repeated as needed to build the part, which will have a
wood-like texture. Because the models are made of paper, they must be sealed and finished with paint or
varnish to prevent moisture damage.

1.4.3 Ink-Jet Printing

Ink-Jet Printing refers to an entire class of machines that employ ink-jet technology. The first was 3D
Printing (3DP), developed at MIT and licensed to Soligen Corporation, Extrude Hone, and others. The
ZCorp 3D printer, produced by Z Corporation of Burlington, MA is an example of this technology. As shown
in Figure 6a, parts are built upon a platform situated in a bin full of powder material. An ink-jet printing
head selectively deposits or "prints" a binder fluid to fuse the powder together in the desired areas.
Unbound powder remains to support the part. The platform is lowered, more powder added and leveled,
and the process repeated. When finished, the green part is then removed from the unbound powder, and
excess unbound powder is blown off. Finished parts can be infiltrated with wax, CA glue, or other sealants
to improve durability and surface finish. Typical layer thicknesses are on the order of 0.1 mm. This process
is very fast, and produces parts with a slightly grainy surface. ZCorp uses two different materials, a starch
based powder (not as strong, but can be burned out, for investment casting applications) and a ceramic
powder. Machines with 4 color printing capability are available. Ballistic particle manufacturing, depicted
in Figure 6b, was developed by BPM Inc., which has since gone out of business.

1.5 Applications of Rapid Prototyping

Rapid prototyping is widely used in the automotive, aerospace, medical, and consumer products
industries.

Engineering

The aerospace industry imposes stringent quality demands. Rigorous testing and certification is necessary
before it is possible to use materials and processes for the manufacture of aerospace components. Yet,
Boeing's Rocketdyne has successfully used RP technology to manufacture hundreds of parts for the
International Space Station and the space shuttle fleet. The company also uses RP to manufacture parts
for the military's F-18 fighter jet in glass-filled nylon. Another idea, not yet mature, is to have an RP
machine on board the International Space Station (ISS) to produce spare parts for repair jobs. Models
are widely used in the automotive industry for design studies, physical experiments, etc. Functional parts
for titanium casting have been made by RP techniques for parts in F1 racing cars.

Architecture

The Department of Architecture at the University of Hong Kong is applying Rapid Prototyping Technology
for teaching students about the new possibilities in testing their drafts, e.g. for lighting conditions and
mechanical details. One example is the Sydney Opera House.

Medical Applications

RPT has created a new market in the world of orthodontics. Appearance conscious adults can now have
straighter teeth without the embarrassment of a mouth full of metal. Using stereolithography technology
custom-fit, clear plastic aligners can be produced in a customized mass process. The RP world has made its
entry into the hearing instrument world too. The result is instrument shells that are stronger, fit better and
are biocompatible to a very high degree. The ear impression is scanned and digitized with an extremely
accurate 3-D scanner. Software specially developed for this converts the digital image into a virtual hearing
instrument shell. Thanks to the accuracy of the process, instrument shells are produced with high precision
and reproducibility. This means the hearing instruments fit better and the need for remakes is reduced. In
the case of repairs, damage to or loss of the ITE instrument, an absolutely identical shell can be
manufactured quickly, since the digital data are stored in the system.
1.6 Future developments

Rapid prototyping is starting to change the way companies design and build products. On the
horizon, though, are several developments that will help to revolutionize manufacturing as we
know it.

One such improvement is increased speed. "Rapid" prototyping machines are still slow by some
standards. By using faster computers, more complex control systems, and improved materials,
RP manufacturers are dramatically reducing build time.

Another future development is improved accuracy and surface finish. Today’s commercially
available machines are accurate to ~0.08 millimeters in the x-y plane, but less accurate in the z (vertical)
direction. Improvements in laser optics and motor control should increase accuracy in all three
directions. In addition, RP companies are developing new polymers that will be less prone to
curing- and temperature-induced warpage.

The introduction of non-polymeric materials, including metals, ceramics, and composites,
represents another much anticipated development. These materials would allow RP users to
produce functional parts.

Another important development is increased size capacity. Currently most RP machines are
limited to objects 0.125 cubic meters or less. Larger parts must be built in sections and joined by
hand. To remedy this situation, several "large prototype" techniques are in the works.

One future application is Distance Manufacturing on Demand, a combination of RP and the
Internet that will allow designers to remotely submit designs for immediate manufacture.

1.7 Conclusion

Finally, the rise of rapid prototyping has spurred progress in traditional subtractive methods as well.
Advances in computerized path planning, numeric control, and machine dynamics are increasing the
speed and accuracy of machining. Modern CNC machining centers can have spindle speeds of up to
100,000 RPM, with correspondingly fast feed rates. Such high material removal rates translate into short
build times. For certain applications, particularly metals, machining will continue to be a useful
manufacturing process. Rapid prototyping will not make machining obsolete, but rather complement it.
(4) Precise manufacturing with CNC machines

1.1 INTRODUCTION

In order to meet the increasing demand to manufacture complicated components of high
accuracy in large quantities, sophisticated technological equipment and machinery have been
developed. Production of these components calls for machine tools which can be set up fairly
rapidly without much attention.

The design and construction of Computer Numerically Controlled (CNC) machines differs
greatly from that of conventional machine tools. This difference arises from the requirements of
higher performance levels. The CNC machines often employ the various mechatronics elements
that have been developed over the years. However, the quality and reliability of these machines
depend on the various machine elements and subsystems of the machines. Some of the
important constituent parts and aspects of CNC machines to be considered in their design are,
for example, the machine structure, guideways, feed drives, spindle and spindle bearings,
measuring systems, controls, software and operator interface, gauging, and tool monitoring.
The control of a machine tool by means of stored information through a computer is
known as Computer Numerical Control. The information stored in the computer can be read
by automatic means and converted into electrical signals, which operate the electrically
controlled servo systems. Electrically controlled servo systems permit the slides of a machine
tool to be driven simultaneously and at the appropriate feeds and directions so that complex
shapes can be cut, often in a single operation and without the need to reorient the workpiece.
Computer Numerical Control can be applied to milling machines, lathes, grinding
machines, boring machines, flame cutters, drilling machines, etc.
1.2 HISTORICAL DEVELOPMENTS IN CNC MACHINES

 A relatively short time ago, machines were operated by craftsmen who determined the many
variables such as speeds, feeds, depth of cut, etc. Now much of this work is being assigned to
computers and machines that are numerically controlled.
 Numerical control, as applied to machine tools, had its beginning in 1949, when the United
States Air Force combined with Parsons Corporation to produce contoured surfaces from
instructions in the form of punched tape.
 Numerical control equipment was substituted for the tracer controls on a three-axis Cincinnati
Hydrotel vertical mill.
 A successful feasibility demonstration of continuous contour milling was held in March 1952,
followed by a final report; both the Air Force and private industry began further work to
extend the Numerical Control technology developed at the Massachusetts Institute of
Technology, U.S.A.
 Between 1955 and 1960 only 500 Numerically Controlled machines were installed in the U.S.A.
 During 1960 and 1964 approximately 4000 more were added.
 Until the middle of the 60s, Russia, West Germany, Japan, and the U.K. were far behind the
U.S.A. in the production of Numerical Control machine tools.
 Japan entered the Numerical Control commercial scene during the mid-60s and by 1971 had
surpassed the U.S.A. in NC ratio.

NC ratio = production of NC machine tools (units) / total production of machine tools (units)

 The U.S.S.R. was also one of the largest producers of NC machine tools, mainly for industrial use.
1.3 FEATURES OF CNC MACHINES

 A rigid machine structure is provided in order to bear static loads, dynamic loads, thermal
loads and vibration.
 Guideways are used in machine tools to control the direction or line of action of the
carriage or the table on which a tool or workpiece is held, and to absorb all the static and
dynamic forces.
 On a CNC machine the function of the feed drive is to provide motion to the slide as per
the motion commands. Since the accuracy requirements are high, the feed drive should
have high efficiency and fast response.
 Hydrodynamic, hydrostatic or antifriction bearings are provided on the CNC machine,
since the accuracy and quality of the work produced depend directly on the geometrical
accuracy, running accuracy and stiffness of the spindle assembly.
 On all CNC machines, an electronic measuring system is employed on each controlled
axis to monitor the movement and to compare the position of the slide and the spindle
with the desired position.
 CNC controls are the heart of the CNC machines. The early CNC controls were developed
for simple applications in turning, machining centres and grinding. But with increased
capabilities on modern machine tools, such as higher spindle speeds, higher rapid
traverses and a greater number of axes, CNC systems have been developed to meet these
needs.
 Better workpiece quality is one of the most important advantages of using a hi-tech CNC
machine. To maintain quality, the effect of parameters like tool wear and thermal growth
can be eliminated by an automatic gauging system.
 Presently, established tool monitoring sensors and systems are available commercially for
integration with CNC machines. Tool monitoring systems enable the introduction of
adaptive control on machines for optimizing the cutting parameters.
 Feedback is taken from various devices, e.g. encoders and transducers.
1.4 MACHINE STRUCTURE
The machine structure is the load carrying and supporting member of the machine
tool. All the motors, drive mechanisms and other functional assemblies of machine tools are
aligned to each other and rigidly fixed to the machine structure. The machine structure is
subjected to static and dynamic forces and it is, therefore, essential that the structure does
not deform or vibrate beyond the permissible limits under the action of these forces. All
components of the machine must remain in correct relative positions to maintain the geometric
accuracy, regardless of the magnitude and direction of these forces. The machine structure
configuration is also influenced by the consideration of manufacture, assembly and operation.
The basic design factors involved in the design of the machine structure are as follows:
1. Static load
2. Dynamic load
3. Thermal load
4. Guideways
5. Feed drive: - 1) Servo motor, 2) Mechanical Transmission System
6. Spindle / spindle bearings 1) Hydrodynamic 2) Hydrostatic 3) Antifriction
7. Measuring Systems: - 1) Direct 2) Indirect
8. Controls, Software and user interface
9. Gauging
10. Tool monitoring systems: - 1) Direct 2) Indirect
1.5 CONFIGURATION OF CNC SYSTEM
The figure shows a schematic diagram of the working principle of an NC axis of a CNC machine
and the interface of a CNC control.
A CNC system basically consists of the following:-

 Central processing unit (CPU)
 Servo control unit
 Operator control panel
 Machine control panel
 Other peripheral devices
 Programmable logic controller
The figure gives the typical control configuration of the Hinumerik 3100 CNC system.
1.6 PROGRAMMING FOR CNC MACHINES
A program is a set of encoded information giving coordinate values and other details to
indicate how a tool should be moved in relation to a work piece in order to achieve a desired
machining form. A part program contains all the information for the machining of a component
which is input to CNC system. The CNC system provides signals at the correct time and in the
correct sequence to the various drive units of the machine.
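As a small illustration of what a part program looks like, here is a minimal sketch assuming a simple rectangular milling contour; the block numbers, G/M codes chosen, feed and spindle speed are illustrative values, not taken from any specific machine manual. The snippet generates the program text from a few dimensions:

```python
# Generate an illustrative part program (ISO-style G-code text) for a rectangle.
# All numeric values are example data and would need adapting to a real control.
def rectangle_program(width, height, depth, feed=120, speed=1200):
    lines = [
        "N10 G21 G90",                       # metric units, absolute coordinates
        f"N20 S{speed} M03",                 # spindle on, clockwise
        "N30 G00 X0 Y0 Z5",                  # rapid to start point above the work
        f"N40 G01 Z{-depth} F{feed}",        # plunge to cutting depth at feed rate
        f"N50 G01 X{width} Y0",              # cut the four sides of the rectangle
        f"N60 G01 X{width} Y{height}",
        f"N70 G01 X0 Y{height}",
        "N80 G01 X0 Y0",
        "N90 G00 Z5",                        # retract
        "N100 M05 M30",                      # spindle stop, end of program
    ]
    return "\n".join(lines)

print(rectangle_program(40, 25, 2.5))
```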

1.7 PROCEDURE OF PART PROGRAMMING

1) Study the relevant component drawing thoroughly.
2) Identify the type of material to be machined.
3) Determine the specifications and functions of machine to be used.
4) Decide the dimensioning mode, i.e. metric or inch.
5) Decide the coordinate system, i.e. absolute or incremental.
6) Identify the plane of cutting.
7) Determine the cutting parameters for the job / tool combination.
8) Decide the feed rate programming.
9) Check the tooling required.
10) Establish the sequence of machining operations.
11) Identify whether use of special features like subroutines, mirror imaging etc. is
required or not.
12) Decide the mode of storing the part program once it is completed.

1.8 CONTROL SYSTEMS FOR CNC MACHINES


Following are the two types of control systems used in the CNC machines
1. Open loop control system:-
In an open loop system the machine slides are displaced according to the
information loaded from the part program into the control system. Hence there is no
measurement of slide position and no feedback signals for comparison with the input
signal. The correct movement of slide entirely depends upon the ability of the drive
systems to move the slide through the required exact distance.
The most common method of driving the lead screw is by a stepper motor. Stepper motors
are the simplest way of converting digital electrical signals into proportional movement. As there
is no check on the slide position, the system accuracy depends upon the motor's ability to step
through the exact number of steps provided at the input, as shown in the figure.
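A small worked example of the open-loop relationship between commanded pulses and travel, with assumed drive data (a 200 step/rev motor and a 5 mm lead screw): the control simply counts out pulses, and accuracy rests entirely on the drive executing every step.

```python
# Open-loop step calculation; motor and lead-screw data are assumed values.
STEPS_PER_REV = 200      # full steps per motor revolution (assumed)
SCREW_LEAD_MM = 5.0      # table travel per screw revolution (assumed)

steps_per_mm = STEPS_PER_REV / SCREW_LEAD_MM          # 40 steps per mm
move_mm = 12.5
pulses = round(move_mm * steps_per_mm)                # 500 pulses commanded, no feedback
print(f"{pulses} pulses -> nominal travel {pulses / steps_per_mm:.3f} mm")
```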
2. Closed loop control system
A closed loop system, as shown in the figure, sends back a signal to the control unit from a measuring
device called a transducer. The transducer is attached to the slideways. The signal indicates the
actual movement and position of the slides.

The control unit continues to adjust the position of the slide until it arrives at its destination;
this system therefore has feedback. Although more costly and complex than an open loop system, closed
loop systems give more accurate positioning. For this type of system, servomotors are used.
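A minimal sketch of the closed-loop idea, with an assumed proportional gain and control cycle (a simplified illustration, not the control law of any particular CNC system): each cycle the commanded position is compared with the transducer reading and the error drives the servo until it vanishes.

```python
# Simplified proportional position loop; gain, cycle time and tolerance are assumed.
def closed_loop_move(target_mm, kp=4.0, dt=0.001, tol=0.001):
    position = 0.0                       # slide position reported by the transducer/encoder
    while abs(target_mm - position) > tol:
        error = target_mm - position     # feedback comparison against the command
        velocity = kp * error            # proportional servo command (mm/s)
        position += velocity * dt        # slide response during one control cycle
        yield position

final = 0.0
for final in closed_loop_move(25.0):
    pass
print(f"slide settled at {final:.3f} mm")
```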

1.9 ADVANTAGES AND DISADVANTAGES OF CNC MACHINES


ADVANTAGES

 Higher flexibility.
 Increased productivity.
 Consistent quality.
 Reduced scrap rate.
 Reliable operation.
 Reduced non-productive time.
 Reduced manpower.
 Shorter cycle time.
 Just in time manufacture.
 Automatic material handling.
 Less floor space.
 Increased operational safety.
 Machining of advanced materials.
DISADVANTAGES

 CNC machines are more expensive than manually operated machines.
 Investment in CNC machines can lead to unemployment.
 Higher maintenance cost.
 Costlier CNC personnel.
 An air-conditioned environment is required for installation of the machines.
 Unsuitable for long run applications.
 Training of part programmer is required.

1.10 CONCLUSION
Development of Computer Numerically Controlled machines (CNC) is an outstanding contribution
to the manufacturing industries. It has made possible the automation of the machining processes
with flexibility to handle small to medium batch quantities in part production.
At present, Indian industries need CNC machines as they face competition in the global market, owing
to day-by-day uncertainties in customer demand that call for a huge variety of products with better
quality in less time. CNC machines are best suited to provide better accuracy with less manpower.
