
History of robots

From Wikipedia, the free encyclopedia

The history of robots has its roots as far back as ancient myths and legends. Modern concepts began to be developed when the Industrial Revolution allowed the use of more complex mechanics, and the subsequent introduction of electricity made it possible to power machines with small, compact motors. After the 1920s the modern formulation of the humanoid machine developed to the point where it was possible to envisage human-sized robots with the capacity for near-human thought and movement, first envisaged millennia before. The first uses of modern robots were in factories as industrial robots: simple fixed machines capable of manufacturing tasks, allowing production without the need for human assistance. Digitally controlled industrial robots and robots making use of artificial intelligence have been built since the 1960s.

Chinese accounts relate a history of automata back to the 10th century BC, when Yan Shi is credited with making an automaton resembling a human in an account from the Lie Zi text. Western and Eastern civilisations alike have long-standing concepts of artificial servants and companions. Many ancient mythologies include artificial people, such as the mechanical servants built by the Greek god Hephaestus (Vulcan to the Romans), the clay golems of Jewish legend and the clay giants of Norse legend. Though likely fictional, the Iliad illustrates the concept of robotics by stating that the god Hephaestus made talking mechanical handmaidens out of gold.[1] The Greek mathematician Archytas of Tarentum is reputed to have built a mechanical pigeon around 400 BC, possibly powered by steam and capable of flying. The clepsydra was made in 250 BC by Ctesibius of Alexandria, a physicist and inventor from Ptolemaic Egypt. Heron of Alexandria (c. 10–70 AD) created several mechanical devices in the late 1st century AD, including one that allegedly could speak.
In his Politics (ca. 322 BC), Aristotle took up an earlier reference in Homer's Iliad and speculated that automatons could someday bring about human equality by making the abolition of slavery possible.

Contents

1 Ancient beginnings
2 500 to 1500
3 1500 to 1800
4 1801 to 1900
5 1901 to 1950
6 1951 to 2000
  6.1 1951 to 1960
  6.2 1961 to 1971
  6.3 1971 to 1980
  6.4 1981 to 1990
  6.5 1991 to 2000
  6.6 2001 to 2010
  6.7 2010 to the present
7 See also
8 Notes
9 References
10 Further reading

Ancient beginnings

The water-powered mechanism of Su Song's astronomical clock tower, featuring a clepsydra tank, waterwheel, escapement mechanism, and chain drive to power an armillary sphere and 113 striking clock jacks to sound the hours and display informative plaques.

In ancient China, a curious account of automata is found in the Lie Zi text, written in the 3rd century BC. Within it there is a description of a much earlier encounter between King Mu of Zhou (1023–957 BC) and a mechanical engineer known as Yan Shi, an 'artificer'. The latter proudly presented the king with a life-size, human-shaped figure of his mechanical handiwork:[2]

The king stared at the figure in astonishment. It walked with rapid strides, moving its head up and down, so that anyone would have taken it for a live human being. The artificer touched its chin, and it began singing, perfectly in tune. He touched its hand, and it began posturing, keeping perfect time... As the performance was drawing to an end, the robot winked its eye and made advances to the ladies in attendance, whereupon the king became incensed and would have had Yen Shih [Yan Shi] executed on the spot had not the latter, in mortal fear, instantly taken the robot to pieces to let him see what it really was. And, indeed, it turned out to be only a construction of leather, wood, glue and lacquer, variously coloured white, black, red and blue. Examining it closely, the king found all the internal organs complete: liver, gall, heart, lungs, spleen, kidneys, stomach and intestines; and over these again, muscles, bones and limbs with their joints, skin, teeth and hair, all of them artificial... The king tried the effect of taking away the heart, and found that the mouth could no longer speak; he took away the liver and the eyes could no longer see; he took away the kidneys and the legs lost their power of locomotion. The king was delighted.[2]

Early water clocks are sometimes grouped in with the beginnings of robotics. They appeared in China in the 6th century BC[3] and in the Greco-Roman world in the 4th century BC, where the clepsydra is known to have been used as a stop-watch for imposing a time limit on clients' visits in Athenian brothels.[4] The idea of artificial people in Western mythology dates at least as far back as the ancient legend of Cadmus, who sowed dragon teeth that turned into soldiers, and the myth of Pygmalion, whose statue of Galatea came to life.
In Greek mythology, the deformed god of metalwork (Hephaestus, or Vulcan to the Romans) created mechanical servants, ranging from intelligent, golden handmaidens to more utilitarian three-legged tables that could move about under their own power, while the bronze man Talos defended Crete. Concepts akin to a robot can be found as long ago as the 4th century BC, when the Greek mathematician Archytas of Tarentum postulated a mechanical bird he called "The Pigeon", which was propelled by steam. Yet another early automaton was the clepsydra, made in 250 BC by Ctesibius of Alexandria, a physicist and inventor from Ptolemaic Egypt.[5] Hero of Alexandria (c. 10–70 AD) made numerous innovations in the field of automata, including one that allegedly could speak. Taking up the earlier reference in Homer's Iliad, Aristotle speculated in his Politics (ca. 322 BC, book 1, part 4) that automatons could someday bring about human equality by making possible the abolition of slavery:
There is only one condition in which we can imagine managers not needing subordinates, and masters not needing slaves. This condition would be that each instrument could do its own work, at the word of command or by intelligent anticipation, like the statues of Daedalus or the tripods made by Hephaestus, of which Homer relates that "Of their own motion they entered the conclave of Gods on Olympus", as if a shuttle should weave of itself, and a plectrum should do its own harp playing.

Jewish lore mentions the legend of the Golem, a clay creature animated by Kabbalistic magic. Similarly, in the Younger Edda, Norse mythology tells of a clay giant, Mökkurkálfi or Mistcalf, constructed to aid the troll Hrungnir in a duel with Thor, the god of thunder.

500 to 1500

Al-Jazari's programmable humanoid robots.

The Cosmic Engine, a 10-metre (33 ft) clock tower built by Su Song in Kaifeng, China in 1088, featured mechanical mannequins that chimed the hours, ringing gongs or bells among other devices.[6][7]

Al-Jazari (1136–1206), an Arab Muslim inventor during the Artuqid dynasty, designed and constructed a number of automatic machines, including kitchen appliances, musical automata powered by water, and the first programmable humanoid robot in 1206. Al-Jazari's robot was a boat with four automatic musicians that floated on a lake to entertain guests at royal drinking parties. Its mechanism included a programmable drum machine with pegs (cams) that bumped into little levers operating the percussion. The drummer could be made to play different rhythms and different drum patterns by moving the pegs to different locations.[8]

Interest in automata was either mostly non-existent in medieval Europe, or unrecorded.[9][10][11] Oriental automata did, however, find their way into the imaginary worlds of medieval literature. For instance, the Middle Dutch tale Roman van Walewein ("The Romance of Walewein", early 13th century) describes mechanical birds and angels producing sound by means of systems of pipes.[12][13]

One of the first recorded designs of a humanoid robot was made by Leonardo da Vinci (1452–1519) in around 1495. Da Vinci's notebooks, rediscovered in the 1950s, contain detailed drawings of a mechanical knight in armour which was able to sit up, wave its arms and move its head and jaw.[14] The design was likely based on his anatomical research recorded in the Vitruvian Man, but it is not known whether he attempted to build the robot (see: Leonardo's robot).
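Al-Jazari's peg-and-cam drum machine is, in effect, an early stored program: the peg positions are the data, and the rotating barrel is the read head. A tiny sketch of the idea (the function and patterns below are invented for illustration, not taken from any reconstruction of the device):

```python
# A toy model of a peg-programmed drum: pegs (cams) placed on a rotating
# barrel trip levers as the barrel turns. Moving the pegs changes the rhythm.

def play(pegs, steps=8):
    """Return one revolution of the barrel as a beat pattern.

    `pegs` is the set of step positions holding a peg; a peg at the
    current position trips the lever and strikes the drum ("X").
    """
    return "".join("X" if step in pegs else "." for step in range(steps))

# One rhythm...
print(play({0, 3, 6}))     # X..X..X.
# ...reprogrammed simply by moving the pegs, as on Al-Jazari's machine.
print(play({0, 2, 4, 6}))  # X.X.X.X.
```

The point of the model is that the mechanism (the `play` loop) stays fixed while the program (the peg set) is rearranged, which is why the device is often called programmable.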

1500 to 1800


Between 1500 and 1800, many automatons were built, including ones capable of acting, drawing, flying, and playing music.[10] Several mechanical calculators were also built in this period; among the most famous are Wilhelm Schickard's "Calculating Clock", Blaise Pascal's "Pascaline", and the "Leibniz Stepped Drum" by Gottfried Wilhelm Leibniz.[15] In 1533, Johannes Müller von Königsberg created an automaton eagle and fly made of iron; both could fly.[10] John Dee is also famous for creating a wooden beetle capable of flying.[10] Some of the most famous works of the period were created by Jacques de Vaucanson in 1737, including an automaton flute player, a tambourine player, and his most famous work, "The Digesting Duck". Vaucanson's duck was powered by weights and was capable of imitating a real duck by flapping its wings (over 400 parts were in each of the wings alone), eating grain, digesting it, and defecating by excreting matter stored in a hidden compartment.[16]

John Kay invented his "flying shuttle" in 1733, and the "Spinning Jenny" was invented in 1764 by James Hargreaves, each radically increasing the speed of production in the weaving and spinning industries respectively.[17][18] The Spinning Jenny was hand-powered and required a skilled operator; Samuel Crompton's Spinning Mule, first developed in 1779, was a fully automated, power-driven spinning machine capable of spinning hundreds of threads at once. Richard Arkwright built a water-powered spinning machine, and a factory around it, in 1781, helping to start the Industrial Revolution.[19] The Japanese craftsman Hisashige Tanaka, known as "Japan's Edison", created an array of extremely complex mechanical toys, some of which were capable of serving tea, firing arrows drawn from a quiver, or even painting a Japanese kanji character. The landmark text Karakuri Zui (Illustrated Machinery) was published in 1796.[20] By 1800, cloth production was largely automated.[11] With the advent of the Industrial Revolution, the idea of automata began to be applied to industry as cost- and time-saving devices.

1801 to 1900

Tea-serving karakuri, with mechanism, 19th century. Tokyo National Science Museum.

Improvements in the weaving industry had led to large amounts of automation, and the idea of programmable machines became popular with Charles Babbage's Analytical Engine.[10] Babbage conceived the Analytical Engine as a replacement for his uncompleted Difference Engine; this larger, more complex device would be able to perform multiple operations and would be operated by punch cards. Work began in 1833, but construction of the Analytical Engine was never completed.[21] However, Ada Lovelace's work on the project has resulted in her being credited as the first computer programmer. Around 1837, the story of the Golem of Prague, a humanoid artificial being activated by inscribing Hebrew letters on its forehead, was retold in fiction, drawing on earlier Jewish folklore. George Boole invented a new type of symbolic logic in 1847, which was instrumental to the creation of computers and robots.[10]

In 1898 Nikola Tesla publicly demonstrated a radio-controlled (teleoperated) boat, similar to a modern ROV. Based on his patents U.S. Patent 613,809, U.S. Patent 723,188 and U.S. Patent 725,605 for "teleautomation", Tesla hoped to develop the wireless torpedo into a weapon system for the US Navy (Cheney 1989).[22]

1901 to 1950


See also: history of computing hardware

The word robot was popularized by the Czech author Karel Čapek in his 1921 play R.U.R. (Rossum's Universal Robots). According to Karel, his brother Josef was the actual inventor of the word "robot", creating it from the Czech word "robota", meaning servitude.[23] In 1927, Fritz Lang's Metropolis was released; the Maschinenmensch ("machine-human"), a gynoid humanoid robot also called "Parody", "Futura", "Robotrix", or the "Maria impersonator" (played by German actress Brigitte Helm), was the first robot ever to be depicted on film.[9] The world's first actual robot, a humanoid named Televox that was operated through the telephone system, was constructed in the United States in 1927. In 1928, Makoto Nishimura produced Japan's first robot, Gakutensoku.[24]

In his 1936 paper "On Computable Numbers, with an Application to the Entscheidungsproblem"[25] (submitted on 28 May 1936), Turing reformulated Kurt Gödel's 1931 results on the limits of proof and computation, replacing Gödel's universal arithmetic-based formal language with what are now called Turing machines: formal, simple devices. He proved that such a machine would be capable of performing any conceivable mathematical computation if it were representable as an algorithm, thus creating the basis for what is now called computer science.

Many robots were constructed before the dawn of computer-controlled servomechanisms, for the public relations purposes of major firms. Elektro appeared in Westinghouse's pavilion at the 1939 New York World's Fair. Some were built between such major public gatherings, such as Garco, made by Garrett AiResearch in the 1950s. These were essentially machines that could perform a few stunts, like the automatons of the 18th century.

Vannevar Bush created the first differential analyzer at the Massachusetts Institute of Technology (MIT); known as the Differential Analyzer, the machine could solve differential equations.[26] In 1940, John Vincent Atanasoff and Clifford Berry completed the Atanasoff–Berry Computer (ABC), an early electronic computer; ideas from the ABC were later incorporated into ENIAC.[27] In 1941 and 1942, Isaac Asimov formulated the Three Laws of Robotics, and in the process coined the word "robotics".

In the UK, the Robinson machine was designed for the British war effort in cracking Enigma messages.[28] This was done at the British code-breaking establishment at Bletchley Park; Ultra is the name for the intelligence so obtained.[27][28] Robinson was superseded by Colossus, built in 1943 to decode FISH messages; designed by Tommy Flowers, it was 100 to 1,000 times faster than Robinson and was the first fully electronic computer.[29] The Bletchley machines were kept secret for decades, and so do not appear in histories of computing written until recently. After the war, Tommy Flowers joined the team that built the early Manchester computers. In Germany, Konrad Zuse built the first fully programmable digital computer in the world, the Z3, in 1941; it was destroyed in 1944.[30] Zuse was also known for building the first binary computer, the Z1, from 1936 to 1938; he also built the Z4, his only machine to survive World War II.[30]

The first American programmable computer was completed in 1944 by Howard Aiken and Grace Hopper. The Mark I, as it was called, ran computations for the US Navy until 1959.[31] ENIAC was built in 1946 and gained fame for its reliability, speed, and versatility. John Presper Eckert and John W. Mauchly spent three years building ENIAC, which weighed over 60,000 lbs.[32] In 1948, Norbert Wiener formulated the principles of cybernetics, the basis of practical robotics. The first Turtles (Elmer and Elsie) were created by the pioneer roboticist William Grey Walter in 1949.[10]

The first working digital computer to be sold was Zuse's Z4 in Germany; the fully electronic US BINAC was sold twelve months earlier, in September 1949, but it never worked reliably at the customer's site due to mishandling in transit. Second was the UK's Ferranti Mark 1, delivered in February 1951, the first software-programmable digital electronic computer to be sold that worked upon delivery. It was based on the world's first software-programmable digital electronic computer, Manchester's SSEM of 1948. In 1950, UNIVAC I (also by Eckert and Mauchly) handled the US Census results; it was the third commercially marketed computer that worked on delivery (in December 1951).[33] The Turing test was proposed by Alan Turing in his 1950 paper Computing Machinery and Intelligence, which opens with the words: "I propose to consider the question, 'Can machines think?'"
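Turing's "formal and simple devices" can be sketched in a few lines. The simulator below uses one common modern convention, a table mapping (state, symbol) to (write, move, next state); both the representation and the example machine (a bit inverter) are illustrative, not anything from the 1936 paper itself:

```python
# A minimal Turing-machine simulator. The tape holds "0"/"1" symbols,
# with "_" standing for the blank beyond the written portion.

def run(tape, rules, state="start", pos=0):
    """Run a Turing machine until it enters the 'halt' state.

    `rules` maps (state, symbol) -> (new_symbol, move, new_state),
    where move is +1 (right) or -1 (left).
    """
    tape = list(tape)
    while state != "halt":
        symbol = tape[pos] if 0 <= pos < len(tape) else "_"
        new_symbol, move, state = rules[(state, symbol)]
        if 0 <= pos < len(tape):
            tape[pos] = new_symbol
        pos += move
    return "".join(tape)

# Example machine: invert every bit, halting at the blank past the input.
invert = {
    ("start", "0"): ("1", +1, "start"),
    ("start", "1"): ("0", +1, "start"),
    ("start", "_"): ("_", +1, "halt"),
}
print(run("1011", invert))  # 0100
```

Despite its simplicity, this scheme of a finite rule table acting on an unbounded tape is exactly what Turing showed sufficient for any computation expressible as an algorithm.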

1951 to 2000


After 1950, computers and robots rapidly increased in both complexity and number, as the technology grew exponentially in production, availability and capability.

1951 to 1960


In 1951 William Shockley invented the bipolar junction transistor, announced at a press conference on July 4, 1951; Shockley obtained a patent for the invention on September 25, 1951. Also in 1951, a computer called LEO became operational in the UK. Built by Lyons for its own use, it was the world's first software-programmable digital electronic computer for commercial applications, exploiting the US development of mercury delay-line memory and built with the support of the Cambridge EDSAC project. LEO was used for commercial work running business application programs, the first of which was rolled out on 17 November 1951. Eckert and Mauchly completed EDVAC in 1951; an improvement on ENIAC and UNIVAC, EDVAC used mercury delay lines to store data, making it the USA's first stored-program computer.[34]

In 1952, the television network CBS correctly predicted the election of Dwight D. Eisenhower as president using UNIVAC. In 1952 IBM announced its 701 model computer, marketed towards scientific use; it was designed by Nathaniel Rochester.[35] Stanislaw Ulam and the physicist Paul Stein converted MANIAC I (used for solving calculations involved in creating the hydrogen bomb) to play a modified game of chess in 1956; it was the first computer to beat a human in a game of chess.[36] The term "artificial intelligence" was coined at a conference held at Dartmouth College in 1956.[37] Allen Newell, J. C. Shaw, and Herbert Simon pioneered the newly created field of artificial intelligence with the Logic Theory Machine (1956) and the General Problem Solver in 1957.[38] In 1958, John McCarthy and Marvin Minsky started the MIT Artificial Intelligence lab with $50,000.[39] John McCarthy also created LISP in the summer of 1958, a programming language still important in artificial intelligence research.[40] Jack Kilby and Robert Noyce invented the integrated circuit, or "chip", in 1959, working independently of each other. This development eventually revolutionized computers by affecting both their size and speed.[41]

1961 to 1971


Unimate, the first industrial robot, began work on the General Motors assembly line in 1961; the machine was conceived in 1954 by George Devol and manufactured by Unimation.[42] In 1962 John McCarthy founded the Stanford Artificial Intelligence Laboratory at Stanford University.[43] The Rancho Arm was developed as a robotic arm to help handicapped patients at the Rancho Los Amigos Hospital in Downey, California; this computer-controlled arm was bought by Stanford University in 1963.[44] IBM announced its System/360 in 1964; the system was heralded as more powerful, faster, and more capable than its predecessors.[45]

In 1965, Gordon Moore, later a co-founder of Intel in 1968, developed what would become known as Moore's Law: the idea that the number of components capable of being built onto a chip doubles every two years.[46] The same year, doctoral student Edward Feigenbaum, geneticist and biochemist Joshua Lederberg, and Bruce Buchanan (who held a degree in philosophy) began work on DENDRAL, an expert system designed to work in the field of organic chemistry.[47] Feigenbaum also founded the Heuristic Programming Project in 1965; it later became the Stanford Knowledge Systems Artificial Intelligence Laboratory.[11] The program Mac Hack was written in 1966 by Richard Greenblatt; it beat the artificial intelligence critic Hubert Dreyfus in a game of chess.[48] Seymour Papert created the Logo programming language in 1967; it was designed as an educational programming language.[49]

The film 2001: A Space Odyssey was released in 1968; the movie prominently features HAL 9000, a malevolent artificial intelligence unit which controls a spacecraft.[50] Marvin Minsky created the Tentacle Arm in 1968; the arm was computer-controlled and its 12 joints were powered by hydraulics.[44] Mechanical engineering student Victor Scheinman created the Stanford Arm in 1969; it is recognized as the first electronic computer-controlled robotic arm (Unimate's instructions were stored on a magnetic drum).[44] The first floppy disc was released in 1970; it measured eight inches in diameter and was read-only.[51] The first mobile robot capable of reasoning about its surroundings, Shakey, was built in 1970 by the Stanford Research Institute. Shakey combined multiple sensor inputs, including TV cameras, laser rangefinders, and "bump sensors", to navigate.[44] In the winter of 1970, the Soviet Union explored the surface of the moon with the lunar vehicle Lunokhod 1, the first roving remote-controlled robot to land on another world.
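Moore's observation is just repeated doubling, so it compounds like interest. A rough illustration (the starting figure is the often-quoted transistor count of the Intel 4004; treat both numbers as approximate, not historical data):

```python
# Moore's Law: component counts double roughly every two years.

def moores_law(initial, years, doubling_period=2):
    """Projected component count after `years`, doubling every period."""
    return initial * 2 ** (years / doubling_period)

# Starting from the 4004's roughly 2,300 transistors (1971), ten
# doublings over twenty years predict about a thousandfold increase:
print(moores_law(2300, 20))  # 2355200.0
```

Ten doublings is a factor of 1,024, which is why chip capacities grew by three orders of magnitude per two decades while the law held.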

1971 to 1980


The first microprocessor, the Intel 4004, was created by Ted Hoff at Intel in 1971. Measuring 1/8 of an inch by 1/16 of an inch, the chip was more powerful than ENIAC.[52] The artificial intelligence critic Hubert Dreyfus published his influential book What Computers Can't Do in 1972.[53] Douglas Trumbull's Silent Running was released in 1972; the movie was notable for its three robot co-stars, named Huey, Dewey, and Louie.[54] 1973 saw the release of PROLOG, a logic-based programming language that became important in the field of artificial intelligence.[55] Freddy and Freddy II, both built in the United Kingdom, were robots capable of assembling wooden blocks in a period of several hours.[56] The German-based company KUKA built the world's first industrial robot with six electromechanically driven axes, known as FAMULUS.[57] In 1974, David Silver designed the Silver Arm, which was capable of fine movements replicating human hands; feedback was provided by touch and pressure sensors and analyzed by a computer.[44]

MYCIN, an expert system developed to study decisions and prescriptions relating to blood infections, was written in Lisp.[58] Marvin Minsky published his landmark paper "A Framework for Representing Knowledge" on artificial intelligence.[59] By 1975, four expert systems relating to medicine had been created: PIP, MYCIN, CASNET, and Internist.[11] In 1975, more than 5,000 computers were sold in the United States, and the first personal computer was introduced.[11] The Kurzweil Reading Machine, invented by Raymond Kurzweil to help the blind, was released in 1976; capable of recognizing characters, the machine formulated pronunciation based on programmed rules.[60] Based on studies of flexible objects in nature (such as elephant trunks and the vertebrae of snakes), Shigeo Hirose designed the Soft Gripper in 1976; the gripper was capable of conforming to the object it was grasping.[44] The knowledge-based system Automated Mathematician was presented by Douglas Lenat in 1976 as part of his doctoral dissertation; it began with a knowledge of 110 concepts, rediscovered many mathematical principles, and was written in Lisp.[61] Joseph Weizenbaum (creator of ELIZA, a program capable of simulating a Rogerian psychotherapist) published Computer Power and Human Reason, presenting an argument against the creation of artificial intelligence.[62]

Steve Jobs and Stephen Wozniak founded Apple Computer in 1977 and released the Apple II.[63] George Lucas' movie Star Wars was also released in 1977; it featured two robots, the android C-3PO and the droid R2-D2, both of which became iconic.[64][65] Voyagers 1 and 2 were launched in 1977 to explore the solar system. The 30-year-old robotic space probes continue to transmit data back to Earth and are approaching the heliopause and the interstellar medium.[66] The SCARA (Selective Compliance Assembly Robot Arm) was created in 1978 as an efficient, 4-axis robotic arm; best used for picking up parts and placing them in another location, the SCARA was introduced to assembly lines in 1981.[67] XCON, an expert system designed to customize orders for industrial use, was released in 1979.[68] The Stanford Cart successfully crossed a room full of chairs in 1979, relying primarily on stereo vision to navigate and determine distances.[44] The Robotics Institute at Carnegie Mellon University was founded in 1979 by Raj Reddy.[69]
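Expert systems such as DENDRAL, MYCIN, and XCON encoded their domain knowledge as if-then rules applied by an inference engine. The sketch below shows only the basic forward-chaining loop; the rules and facts are invented for illustration and bear no relation to MYCIN's actual medical knowledge base, which also weighed evidence with certainty factors:

```python
# A minimal forward-chaining rule engine in the spirit of 1970s expert
# systems: rules fire whenever all their premises are known facts,
# adding their conclusion, until nothing new can be derived.

def forward_chain(facts, rules):
    """Repeatedly apply rules (premises -> conclusion) until no new facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and premises <= facts:
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical rules, loosely styled after a diagnostic chain:
rules = [
    ({"gram_negative", "rod_shaped"}, "likely_enterobacteriaceae"),
    ({"likely_enterobacteriaceae"}, "suggest_treatment"),
]
derived = forward_chain({"gram_negative", "rod_shaped"}, rules)
print("suggest_treatment" in derived)  # True
```

Separating the fixed inference engine from the editable rule base is what let systems like XCON grow to thousands of rules maintained by domain experts rather than programmers.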

1981 to 1990

KUKA IR 160/60 robots from 1983.

Takeo Kanade created the first "direct drive arm" in 1981. The first of its kind, the arm's motors were contained within the robot itself, eliminating long transmissions.[70] IBM released its first personal computer (PC) in 1981; the name of the computer was responsible for popularizing the term "personal computer".[71] Prospector, a "computer-based consultation program for mineral exploration"[72] created in 1976, discovered an unknown deposit of molybdenum in Washington state; the expert system had been updated annually since its creation.[73] The Fifth Generation Computer Systems Project (FGCS) was started in 1982; its goals were knowledge-based information processing and massive parallelism in a supercomputer, artificial-intelligence-like system.[74] Cyc, a project to create a database of common sense for artificial intelligence, was started in 1984 by Douglas Lenat; the program attempts to deal with ambiguity in language and is still underway.[75]

The first program to publish a book, the expert system Racter, programmed by William Chamberlain and Thomas Etter, wrote the book The Policeman's Beard is Half-Constructed in 1983; it is now thought that a system of complex templates was used.[76] In 1984 Wabot-2 was revealed; capable of playing the organ, Wabot-2 had 10 fingers and two feet, and was able to read a score of music and accompany a person.[77] In 1985, Kawasaki Heavy Industries' license agreement with Unimation was terminated; Kawasaki began to produce its own robots, releasing the first one year later.[78] By 1986, artificial intelligence revenue was about US$1 billion. The chess-playing programs HiTech and Deep Thought defeated chess masters in 1989; both were developed by Carnegie Mellon University, and Deep Thought's development paved the way for Deep Blue.[79] In 1986, Honda began its humanoid research and development program to create robots capable of interacting successfully with humans.[80] Artificial-intelligence-related technologies, not including robots, came to produce a revenue of US$1.4 billion.[11] In 1988, the Stäubli Group purchased Unimation.[44] The Connection Machine was built in 1988 by Daniel Hillis; the supercomputer used 64,000 processors simultaneously.[81] A hexapodal robot named Genghis was revealed by MIT in 1989; Genghis was famous for being made quickly and cheaply thanks to its construction methods, using 4 microprocessors, 22 sensors, and 12 servo motors.[82] Rodney Brooks and Anita M. Flynn published "Fast, Cheap, and Out of Control: A Robot Invasion of The Solar System", advocating the creation of smaller, cheaper robots in greater numbers to reduce production time and the difficulty of launching robots into space.[83]

1991 to 2000


While competing in a 1993 NASA-sponsored competition, Carnegie Mellon University's eight-legged robot Dante failed to collect gases from Mt. Erebus because of a broken fiber-optic cable. Dante was designed to scale slopes and harvest gases near the surface of the magma, but the failure of the cable did not permit the robot to enter the active volcano.[84] In 1994, Dante II entered Mt. Spurr and successfully sampled the gases within the volcano.[85] The biomimetic robot RoboTuna was built by doctoral student David Barrett at the Massachusetts Institute of Technology in 1996 to study how fish swim in water; RoboTuna was designed to swim like, and resemble, a bluefin tuna.[86] Invented by Dr. John Adler in 1994, the Cyberknife, a robot that performs stereotactic radiosurgery, offered an alternative treatment for tumors with an accuracy comparable to surgery performed by human doctors.[87] Honda's P2 humanoid robot was first shown in 1996. Standing for "Prototype Model 2", P2 was an integral part of Honda's humanoid development project; over 6 feet tall, P2 was smaller than its predecessors and appeared more human-like in its motions.[88]

Expected to operate for only seven days, the Sojourner rover finally shut down after 83 days of operation in 1997. This small robot (weighing only 23 lbs) performed semi-autonomous operations on the surface of Mars as part of the Mars Pathfinder mission; equipped with an obstacle-avoidance program, Sojourner was capable of planning and navigating routes to study the surface of the planet. Sojourner's ability to navigate with little data about its environment and nearby surroundings allowed the robot to react to unplanned events and objects.[89]

Also in 1997, IBM's chess-playing program Deep Blue beat the then-current World Chess Champion Garry Kasparov, playing at the "Grandmaster" level. The supercomputer was a specialized version of a framework produced by IBM, and was capable of processing twice as many moves per second as it had during the first match (which Deep Blue had lost), reportedly 200,000,000 moves per second. The event was broadcast live over the internet and received over 74 million hits.[90] The P3 humanoid robot was revealed by Honda in 1998 as part of the company's continuing humanoid project.[91] In 1999, Sony introduced the AIBO, a robotic dog capable of interacting with humans; the first models released in Japan sold out in 20 minutes.[92] Honda revealed the most advanced result of its humanoid project in 2000, named ASIMO. ASIMO can run, walk, communicate with humans, recognize faces, environments, voices and postures, and interact with its environment.[93] Sony also revealed its Sony Dream Robots, small humanoid robots in development for entertainment.[94] In October 2000, the United Nations estimated that there were 742,500 industrial robots in the world, with more than half of them being used in Japan.[10]

2001 to 2010


In April 2001, the Canadarm2 was launched into orbit and attached to the International Space Station. The Canadarm2 is a larger, more capable version of the arm used by the Space Shuttle and is hailed as being "smarter."[95] Also in April, the Unmanned Aerial Vehicle Global Hawk made the first autonomous non-stop flight over the Pacific Ocean from Edwards Air Force Base in California to RAAF Base Edinburgh in Southern Australia. The flight was made in 22 hours. [96] The popular Roomba, a robotic vacuum cleaner, was first released in 2002 by the company iRobot.[97] In 2004, Cornell University revealed a robot capable of self-replication; a set of cubes capable of attaching and detaching, the first robot capable of building copies of itself.[98] On January 3 and 24 the Mars rovers Spirit and Opportunity land on the surface of Mars. Launched in 2003, the two robots will drive many times the distance originally expected, and Opportunity is still operating as of mid 2011.[99] All 15 teams competing in the 2004 DARPA Grand Challenge failed to complete the course, with no robot successfully navigating more than five percent of the 150 mile off road course, leaving the $1 million dollar prize unclaimed.[100] In the 2005 DARPA Grand Challenge, five teams completed the off-road course; Stanford University's Stanley won first place and the $2 million dollar prize.[101] Also in 2005, Honda revealed a new version of its ASIMO robot, updated with new behaviors and capabilities.[102] In 2006, Cornell University revealed its "Starfish" robot, a 4-legged robot capable of self modeling and learning to walk after having been damaged.[103] In September 2007, Google announced its Lunar X Prize. 
The Lunar X Prize offers $30 million to the first private company to land a rover on the moon and send images back to Earth.[104] In 2007, TOMY launched the entertainment robot i-sobot, a humanoid bipedal robot that can walk like a human being, perform kicks and punches, and carry out entertaining tricks and special actions in its "Special Action Mode".

[edit] 2010 to the present


Robonaut 2, the latest generation of astronaut helpers, launched to the space station aboard Space Shuttle Discovery on the STS-133 mission. It is the first humanoid robot in space, and although its primary job for now is teaching engineers how dexterous robots behave in space, the hope is that, through upgrades and advancements, it could one day venture outside the station to help spacewalkers make repairs or additions to the station or perform scientific work.[1

Robotics


The Shadow robot hand system
TOPIO, a humanoid robot, played ping pong at Tokyo International Robot Exhibition (IREX) 2009.[1][2]

KUKA industrial robot operating in a foundry

TALON military robots used by the United States Army
Robotics is the branch of technology that deals with the design, construction, operation, structural disposition, manufacture and application of robots,[3] and with the computer systems for their control, sensory feedback, and information processing.[4][5] The concept and creation of machines that could operate autonomously date back to classical times, but research into the functionality and potential uses of robots did not grow substantially until the 20th century.[6] Today, robotics is a rapidly growing field, as we continue to research, design, and build new robots that serve various practical purposes, whether domestically, commercially, or militarily.

Contents

1 Etymology
2 History
3 Components
  3.1 Power source
  3.2 Actuation
    3.2.1 Electric motors
    3.2.2 Linear actuators
    3.2.3 Series elastic actuators
    3.2.4 Air muscles
    3.2.5 Muscle wire
    3.2.6 Electroactive polymers
    3.2.7 Piezo motors
    3.2.8 Elastic nanotubes
  3.3 Sensing
    3.3.1 Touch
    3.3.2 Vision
    3.3.3 Other
  3.4 Manipulation
    3.4.1 Mechanical grippers
    3.4.2 Vacuum grippers
    3.4.3 General purpose effectors
  3.5 Locomotion
    3.5.1 Rolling robots
      3.5.1.1 Two-wheeled balancing robots
      3.5.1.2 One-wheeled balancing robots
      3.5.1.3 Spherical orb robots
      3.5.1.4 Six-wheeled robots
      3.5.1.5 Tracked robots
    3.5.2 Walking applied to robots
      3.5.2.1 ZMP Technique
      3.5.2.2 Hopping
      3.5.2.3 Dynamic balancing (controlled falling)
      3.5.2.4 Passive dynamics
    3.5.3 Other methods of locomotion
      3.5.3.1 Flying
      3.5.3.2 Snaking
      3.5.3.3 Skating
      3.5.3.4 Climbing
      3.5.3.5 Swimming (like a fish)
  3.6 Environmental interaction and navigation
  3.7 Human-robot interaction
    3.7.1 Speech recognition
    3.7.2 Robotic voice
    3.7.3 Gestures
    3.7.4 Facial expression
    3.7.5 Artificial emotions
    3.7.6 Personality
4 Control
  4.1 Autonomy levels
5 Robotics research
  5.1 Dynamics and kinematics
6 Education and training
  6.1 Career training
  6.2 Certification
  6.3 Summer robotics camp
7 Employment
  7.1 Effects on unemployment
8 See also
9 Notes
10 Bibliography
11 Further reading
12 External links

[edit] Etymology
The word robotics was derived from the word robot, which was introduced to the public by Czech writer Karel Čapek in his play R.U.R. (Rossum's Universal Robots), which premiered in 1921.[7] According to the Oxford English Dictionary, the word robotics was first used in print by Isaac Asimov, in his science fiction short story "Liar!", published in May 1941 in Astounding Science Fiction. Asimov was unaware that he was coining the term; since the science and technology of electrical devices is electronics, he assumed robotics already referred to the science and technology of robots. In some of Asimov's other works, he states that the first use of the word robotics was in his short story "Runaround" (Astounding Science Fiction, March 1942).[8][9] However, the original publication of "Liar!" predates that of "Runaround" by five months, so the former is generally cited as the word's origin.

[edit] History
Main article: History of robots
See also: Robot

A scene from Karel Čapek's 1920 play R.U.R., showing three robots
Stories of artificial helpers and companions, and attempts to create them, have a long history. The word robot was introduced to the public by the Czech writer Karel Čapek in his play R.U.R. (Rossum's Universal Robots), published in 1920.[7] The play begins in a factory that makes artificial people called robots, creatures who can be mistaken for humans, though they are closer to the modern idea of androids. Karel Čapek himself did not coin the word; in a short letter referring to an etymology in the Oxford English Dictionary, he named his brother Josef Čapek as its actual originator.[7] In 1927 the Maschinenmensch ("machine-human"), a gynoid humanoid robot also called "Parody", "Futura", "Robotrix", or the "Maria impersonator", played by German actress Brigitte Helm in Fritz Lang's film Metropolis, became the first and perhaps the most memorable depiction of a robot ever to appear on film. In 1942 the science fiction writer Isaac Asimov formulated his Three Laws of Robotics and, in the process of doing so, coined the word "robotics" (see details in the "Etymology" section above). In 1948 Norbert Wiener formulated the principles of cybernetics, the basis of practical robotics. Fully autonomous robots appeared only in the second half of the 20th century. The first digitally operated and programmable robot, the Unimate, was installed in 1961 to lift hot pieces of metal from a die casting machine and stack them. Commercial and industrial robots are widespread today and are used to perform jobs more cheaply, or more accurately and reliably, than humans. They are also employed in jobs which are too dirty, dangerous, or dull to be suitable for humans. Robots are widely used in manufacturing, assembly, packing and packaging, transport, earth and space exploration, surgery, weaponry, laboratory research, safety, and the mass production of consumer and industrial goods.[10]

Date | Significance | Robot name | Inventor
Third century B.C. and earlier | One of the earliest descriptions of automata appears in the Lie Zi text, on a much earlier encounter between King Mu of Zhou (1023–957 BC) and a mechanical engineer known as Yan Shi, an 'artificer'. The latter allegedly presented the king with a life-size, human-shaped figure of his mechanical handiwork.[11] | | Yan Shi
First century A.D. and earlier | Descriptions of more than 100 machines and automata, including a fire engine, a wind organ, a coin-operated machine, and a steam-powered engine, in Pneumatica and Automata by Heron of Alexandria | | Ctesibius, Philo of Byzantium, Heron of Alexandria, and others
1206 | Created early humanoid automata, programmable automaton band[12] | Robot band, hand-washing automaton, automated moving peacocks[14] | Al-Jazari[13]
1495 | Designs for a humanoid robot | Mechanical knight | Leonardo da Vinci
1738 | Mechanical duck that was able to eat, flap its wings, and excrete | Digesting Duck | Jacques de Vaucanson
1898 | Nikola Tesla demonstrates first radio-controlled vessel | Teleautomaton | Nikola Tesla
1921 | First fictional automatons called "robots" appear in the play R.U.R. | Rossum's Universal Robots | Karel Čapek
1930s | Humanoid robot exhibited at the 1939 and 1940 World's Fairs | Elektro | Westinghouse Electric Corporation
1948 | Simple robots exhibiting biological behaviors[15] | Elsie and Elmer | William Grey Walter
1956 | First commercial robot, from the Unimation company founded by George Devol and Joseph Engelberger, based on Devol's patents[16] | Unimate | George Devol
1961 | First installed industrial robot | Unimate | George Devol
1963 | First palletizing robot[17] | Palletizer | Fuji Yusoki Kogyo
1973 | First industrial robot with six electromechanically driven axes[18] | Famulus | KUKA Robot Group
1975 | Programmable universal manipulation arm, a Unimation product | PUMA | Victor Scheinman

[edit] Components
[edit] Power source
Further information: Power supply and Energy storage
At present, mostly (lead-acid) batteries are used as a power source, but potential power sources include:

- pneumatic (compressed gases)
- hydraulics (liquids)
- flywheel energy storage
- organic garbage (through anaerobic digestion)
- faeces (human, animal); possibly of interest in a military context, as the faeces of small combat groups could be reused to meet the energy requirements of a robot assistant (see DEKA's project Slingshot Stirling engine for how such a system would operate)
- still unproven energy sources, such as nuclear fusion, which, unlike nuclear fission, has not yet been used in reactors (and few robots use even nuclear fission as a power source, apart from the Chinese rover tests[19])
- a radioactive source, as in the proposed Ford car of the 1950s, of the kind depicted in films such as Red Planet

[edit] Actuation
Main article: Actuator

A robotic leg powered by air muscles
Actuators are the "muscles" of a robot: the parts which convert stored energy into movement. By far the most popular actuators are electric motors that spin a wheel or gear, and linear actuators that control industrial robots in factories. But there have been some recent advances in alternative types of actuators, powered by electricity, chemicals, or compressed air.

[edit] Electric motors
Main article: Electric motor
The vast majority of robots use electric motors, often brushed and brushless DC motors in portable robots, or AC motors in industrial robots and CNC machines.

[edit] Linear actuators
Main article: Linear actuator
Various types of linear actuators move in and out instead of spinning, and are used particularly when very large forces are needed, as in industrial robotics. They are typically powered by compressed air (pneumatic actuators) or oil (hydraulic actuators).

[edit] Series elastic actuators
A spring can be designed into the motor actuator to allow improved force control. This approach has been used in various robots, particularly walking humanoid robots.[20]

[edit] Air muscles

Main article: Pneumatic artificial muscles
Pneumatic artificial muscles, also known as air muscles, are special tubes that contract (typically up to 40%) when air is forced inside them. They have been used for some robot applications.[21][22]

[edit] Muscle wire
Main article: Shape memory alloy
Muscle wire, also known as shape memory alloy, Nitinol or Flexinol wire, is a material which contracts slightly (typically under 5%) when electricity is run through it. It has been used for some small robot applications.[23][24]

[edit] Electroactive polymers
Main article: Electroactive polymers
EAPs or EPAMs are a new plastic material that can contract substantially (up to 380% activation strain) from electricity, and have been used in the facial muscles and arms of humanoid robots,[25] and to allow new robots to float,[26] fly, swim or walk.[27]

[edit] Piezo motors
Main article: Piezoelectric motor
Recent alternatives to DC motors are piezo motors or ultrasonic motors. These work on a fundamentally different principle, whereby tiny piezoceramic elements, vibrating many thousands of times per second, cause linear or rotary motion. There are different mechanisms of operation: one type uses the vibration of the piezo elements to walk the motor in a circle or a straight line;[28] another uses the piezo elements to cause a nut to vibrate and drive a screw. The advantages of these motors are nanometre resolution, speed, and available force for their size.[29] These motors are already available commercially, and are being used on some robots.[30][31]

[edit] Elastic nanotubes
Further information: Nanotube
Elastic nanotubes are a promising artificial muscle technology in early-stage experimental development. The absence of defects in carbon nanotubes enables these filaments to deform elastically by several percent, with energy storage levels of perhaps 10 J/cm3 for metal nanotubes. Human biceps could be replaced with an 8 mm diameter wire of this material. Such compact "muscle" might allow future robots to outrun and outjump humans.[32]

[edit] Sensing
Main article: Robotic sensing

[edit] Touch
Main article: Tactile sensor
Current robotic and prosthetic hands receive far less tactile information than the human hand. Recent research has developed a tactile sensor array that mimics the mechanical properties and touch receptors of human fingertips.[33][34] The sensor array is constructed as a rigid core surrounded by conductive fluid contained by an elastomeric skin. Electrodes are mounted on the surface of the rigid core and are connected to an impedance-measuring device within the core. When the artificial skin touches an object, the fluid path around the electrodes is deformed, producing impedance changes that map the forces received from the object. The researchers expect that an important function of such artificial fingertips will be adjusting the robot's grip on held objects.

Scientists from several European countries and Israel developed a prosthetic hand in 2009, called SmartHand, which functions like a real one, allowing patients to write with it, type on a keyboard, play the piano and perform other fine movements. The prosthesis has sensors which enable the patient to sense real feeling in its fingertips.[35]

[edit] Vision
Main article: Computer vision
Computer vision is the science and technology of machines that see. As a scientific discipline, computer vision is concerned with the theory behind artificial systems that extract information from images. The image data can take many forms, such as video sequences and views from cameras. In most practical computer vision applications, the computers are pre-programmed to solve a particular task, but methods based on learning are now becoming increasingly common.

Computer vision systems rely on image sensors which detect electromagnetic radiation, typically in the form of either visible light or infra-red light. The sensors are designed using solid-state physics. The process by which light propagates and reflects off surfaces is explained using optics. Sophisticated image sensors even require quantum mechanics to provide a complete understanding of the image formation process. There is a subfield within computer vision where artificial systems are designed to mimic the processing and behavior of biological systems, at different levels of complexity. Also, some of the learning-based methods developed within computer vision have their background in biology.

[edit] Other
Other common forms of sensing in robotics use LIDAR, RADAR and SONAR.[citation needed]
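The fingertip sensor readout described under Touch above can be illustrated with a toy model. Everything here is an assumption for illustration: the linear calibration constant, the electrode values, and the function names are hypothetical, since real sensors of this kind require per-device calibration.

```python
# Toy model of the tactile fingertip described above: each electrode's
# impedance shift is mapped to a local force estimate through an assumed
# linear calibration, then summed into a grip-force estimate.

def forces_from_impedance(deltas, calib=0.05):
    """deltas: impedance change per electrode (ohms, illustrative);
    returns an estimated normal force per electrode (newtons)."""
    return [calib * d for d in deltas]

def total_grip_force(deltas, calib=0.05):
    """Aggregate force estimate used to adjust the robot's grip."""
    return sum(forces_from_impedance(deltas, calib))

# Four electrodes under a held object report impedance shifts:
f = total_grip_force([2.0, 8.0, 6.0, 0.0])
```

A grip controller could then loosen or tighten its hold until this aggregate estimate stays inside a target band.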

[edit] Manipulation
Further information: Mobile manipulator
Robots need to manipulate objects: to pick up, modify, destroy, or otherwise have an effect. Thus the "hands" of a robot are often referred to as end effectors,[36] while the "arm" is referred to as a manipulator.[37] Most robot arms have replaceable effectors, each allowing them to perform some small range of tasks. Some have a fixed manipulator which cannot be replaced, while a few have one very general-purpose manipulator, for example a humanoid hand. For a guide to all forms of robot end-effectors, their design, and their usage, consult the book "Robot Grippers".[38]

[edit] Mechanical grippers
One of the most common effectors is the gripper. In its simplest manifestation it consists of just two fingers which can open and close to pick up and let go of a range of small objects. Fingers can, for example, be made of a chain with a metal wire run through it.[39] Hands that resemble and work more like a human hand include the Shadow Hand and the Robonaut hand.[40] Hands of mid-level complexity include the Delft hand.[41][42]

[edit] Vacuum grippers
Vacuum grippers are very simple astrictive[43] devices, but can hold very large loads provided the prehension surface is smooth enough to ensure suction. Pick and place robots for electronic components, and for large objects like car windscreens, often use very simple vacuum grippers.

[edit] General purpose effectors
Some advanced robots are beginning to use fully humanoid hands, like the Shadow Hand, MANUS,[44] and the Schunk hand.[45] These are highly dexterous manipulators, with as many as 20 degrees of freedom and hundreds of tactile sensors.[46]

[edit] Locomotion
Main articles: Robot locomotion and Mobile robot [edit] Rolling robots

Segway in the Robot Museum in Nagoya
For simplicity, most mobile robots have four wheels or a number of continuous tracks. Some researchers have tried to create more complex wheeled robots with only one or two wheels. These can have certain advantages, such as greater efficiency and reduced parts, as well as allowing a robot to navigate confined places that a four-wheeled robot could not.
[edit] Two-wheeled balancing robots

Balancing robots generally use a gyroscope to detect how much a robot is falling and then drive the wheels proportionally in the opposite direction, to counter-balance the fall at hundreds of times per second, based on the dynamics of an inverted pendulum.[47] Many different balancing robots have been designed.[48] While the Segway is not commonly thought of as a robot, it can be thought of as a component of a robot, such as NASA's Robonaut that has been mounted on a Segway.[49]
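The counter-balancing loop described above can be sketched as a proportional-derivative controller on the measured tilt. The gains, sign convention, and values below are illustrative, not taken from any particular robot:

```python
# Minimal sketch of a two-wheeled balancing loop: a PD controller reads
# the tilt angle and tilt rate (from a gyroscope-based estimate) and
# drives the wheel motors to counter the fall, hundreds of times per
# second, following the inverted-pendulum dynamics described above.

def balance_step(tilt, tilt_rate, kp=25.0, kd=1.5):
    """Return a motor command that opposes the measured fall.

    tilt      -- estimated lean angle in radians (0 = upright)
    tilt_rate -- angular velocity in rad/s from the gyroscope
    kp, kd    -- illustrative gains; real gains depend on the robot
    """
    return -(kp * tilt + kd * tilt_rate)

# One control tick: the robot leans 0.05 rad forward and is still
# falling at 0.2 rad/s, so the controller commands the wheels to drive
# under the centre of mass (negative sign convention here).
command = balance_step(0.05, 0.2)
```

In a real robot the tilt estimate itself comes from fusing gyroscope and accelerometer data (e.g. with a complementary or Kalman filter), since a gyroscope alone drifts over time.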
[edit] One-wheeled balancing robots

Main article: Self-balancing unicycle
A one-wheeled balancing robot is an extension of a two-wheeled balancing robot, able to move in any 2D direction using a round ball as its only wheel. Several one-wheeled balancing robots have been designed recently, such as Carnegie Mellon University's "Ballbot", which is the approximate height and width of a person, and Tohoku Gakuin University's "BallIP".[50] Because of their long, thin shape and ability to maneuver in tight spaces, such robots have the potential to function better than other robots in environments with people.[51]
[edit] Spherical orb robots

Main article: Spherical robot
Several attempts have been made to build robots that are completely contained inside a spherical ball, either by spinning a weight inside the ball,[52][53] or by rotating the outer shells of the sphere.[54][55] These have also been referred to as an orb bot[56] or a ball bot.[57][58]
[edit] Six-wheeled robots

Using six wheels instead of four wheels can give better traction or grip in outdoor terrain such as on rocky dirt or grass.
[edit] Tracked robots

Tank tracks provide even more traction than a six-wheeled robot. Tracked wheels behave as if they were made of hundreds of wheels, and are therefore very common for outdoor and military robots, where the robot must drive on very rough terrain. However, they are difficult to use indoors, on surfaces such as carpets and smooth floors. Examples include NASA's Urban Robot "Urbie".[59]

[edit] Walking applied to robots

iCub robot, designed by the RobotCub Consortium
Walking is a difficult and dynamic problem to solve. Several robots have been made which can walk reliably on two legs; however, none have yet been made which are as robust as a human. There has been much study of human-inspired walking, such as at the AMBER Lab, established in 2008 by the Mechanical Engineering Department at Texas A&M University.[60] Many other robots have been built that walk on more than two legs, since such robots are significantly easier to construct.[61][62] Hybrids have also been proposed in films such as I, Robot, where robots walk on two legs and switch to four (arms plus legs) when sprinting. Typically, robots on two legs can walk well on flat floors and can occasionally walk up stairs. None can walk over rocky, uneven terrain. Some of the methods which have been tried are:
[edit] ZMP Technique

Main article: Zero Moment Point
The Zero Moment Point (ZMP) technique is the algorithm used by robots such as Honda's ASIMO. The robot's onboard computer tries to keep the total inertial force (the combination of Earth's gravity and the acceleration and deceleration of walking) exactly opposed by the floor reaction force (the force of the floor pushing back on the robot's foot). In this way, the two forces cancel out, leaving no moment (force causing the robot to rotate and fall over).[63] However, this is not exactly how a human walks, and the difference is obvious to human observers, some of whom have pointed out that ASIMO walks as if it needs the lavatory.[64][65][66] ASIMO's walking algorithm is not static, and some dynamic balancing is used (see below). However, it still requires a smooth surface to walk on.
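On flat ground the ZMP is often estimated with the textbook "cart-table" approximation, which ignores angular momentum about the centre of mass. The sketch below uses that approximation only; it is not Honda's actual implementation, and the numbers are illustrative:

```python
# Back-of-the-envelope Zero Moment Point calculation (cart-table model):
# on flat ground, the ZMP is the point where gravity and the inertial
# force of the accelerating centre of mass (CoM) balance, leaving no
# tipping moment.

G = 9.81  # gravitational acceleration, m/s^2

def zmp_x(com_x, com_z, com_ddx):
    """x-coordinate of the ZMP for a CoM at horizontal position com_x (m),
    height com_z (m), accelerating horizontally at com_ddx (m/s^2)."""
    return com_x - (com_z / G) * com_ddx

def is_stable(zmp, foot_min, foot_max):
    """The robot does not tip while the ZMP stays inside the support
    polygon (reduced here to a 1-D foot from foot_min to foot_max)."""
    return foot_min <= zmp <= foot_max

# CoM 0.8 m high, directly over the ankle, decelerating at 1 m/s^2:
# the ZMP shifts forward by 0.8 / 9.81, about 0.08 m.
z = zmp_x(0.0, 0.8, -1.0)
```

A walking controller plans CoM accelerations so that this point never leaves the foot's support polygon, which is why ZMP gaits need the smooth, predictable ground contact noted above.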
[edit] Hopping

Several robots, built in the 1980s by Marc Raibert at the MIT Leg Laboratory, successfully demonstrated very dynamic walking. Initially, a robot with only one leg and a very small foot could stay upright simply by hopping. The movement is the same as that of a person on a pogo stick: as the robot fell to one side, it would jump slightly in that direction in order to catch itself.[67] Soon, the algorithm was generalised to two and four legs. A bipedal robot was demonstrated running and even performing somersaults.[68] A quadruped was also demonstrated which could trot, run, pace, and bound.[69] For a full list of these robots, see the MIT Leg Lab Robots page.
[edit] Dynamic balancing (controlled falling)

A more advanced way for a robot to walk is by using a dynamic balancing algorithm, which is potentially more robust than the Zero Moment Point technique, as it constantly monitors the robot's motion, and places the feet in order to maintain stability.[70] This technique was recently demonstrated by Anybots' Dexter Robot,[71] which is so stable, it can even jump.[72] Another example is the TU Delft Flame.
[edit] Passive dynamics

Main article: Passive dynamics
Perhaps the most promising approach utilizes passive dynamics, where the momentum of swinging limbs is used for greater efficiency. It has been shown that totally unpowered humanoid mechanisms can walk down a gentle slope, using only gravity to propel themselves. Using this technique, a robot need only supply a small amount of motor power to walk along a flat surface, or a little more to walk up a hill. This technique promises to make walking robots at least ten times more efficient than ZMP walkers like ASIMO.[73][74]

[edit] Other methods of locomotion
[edit] Flying

A modern passenger airliner is essentially a flying robot, with two humans to manage it. The autopilot can control the plane for each stage of the journey, including takeoff, normal flight, and even landing.[75] Other flying robots are uninhabited, and are known as unmanned aerial vehicles (UAVs). They can be smaller and lighter without a human pilot onboard, and fly into dangerous territory for military surveillance missions. Some can even fire on targets under command.

UAVs are also being developed which can fire on targets automatically, without the need for a command from a human. Other flying robots include cruise missiles, the Entomopter, and the Epson micro helicopter robot. Robots such as the Air Penguin, Air Ray, and Air Jelly have lighter-than-air bodies, propelled by paddles, and guided by sonar.

Two robot snakes. The left one has 64 motors (with two degrees of freedom per segment), the right one 10.
[edit] Snaking

Several snake robots have been successfully developed. Mimicking the way real snakes move, these robots can navigate very confined spaces, meaning they may one day be used to search for people trapped in collapsed buildings.[76] The Japanese ACM-R5 snake robot[77] can even navigate both on land and in water.[78]
[edit] Skating

A small number of skating robots have been developed, one of which is a multi-mode walking and skating device. It has four legs, with unpowered wheels, which can either step or roll.[79] Another robot, Plen, can use a miniature skateboard or rollerskates, and skate across a desktop.[80]
[edit] Climbing

Several different approaches have been used to develop robots that can climb vertical surfaces. One approach mimics the movements of a human climber on a wall with protrusions: adjusting the center of mass and moving each limb in turn to gain leverage. An example of this is Capuchin,[81] built by Stanford University, California. Another approach uses the specialised toe pads of wall-climbing geckos, which can run on smooth surfaces such as vertical glass. Examples of this approach include Wallbot[82] and Stickybot.[83] China's Technology Daily reported on November 15, 2008 that Dr. Li Hiu Yeung and his research group at New Concept Aircraft (Zhuhai) Co., Ltd. had developed a bionic gecko robot named "Speedy Freelander". According to Dr. Li, the robot can rapidly climb up and down a variety of building walls, navigate ground and vertical wall fissures, and walk upside down on ceilings. It can adapt to smooth glass, rough, sticky or dusty walls, as well as various metallic surfaces, and can automatically identify and circumvent obstacles with flexible, realistic movements. Its flexibility and speed are said to be comparable to those of a natural gecko. A third approach is to mimic the motion of a snake climbing a pole.[citation needed]
[edit] Swimming (like a fish)

It is calculated that, when swimming, some fish can achieve a propulsive efficiency greater than 90%.[84] Furthermore, they can accelerate and maneuver far better than any man-made boat or submarine, and produce less noise and water disturbance. Therefore, many researchers studying underwater robots would like to copy this type of locomotion.[85] Notable examples are the Essex University Computer Science Robotic Fish,[86] and the Robot Tuna built by the Institute of Field Robotics to analyze and mathematically model thunniform motion.[87] The Aqua Penguin, designed and built by Festo of Germany, copies the streamlined shape and the propulsion by front "flippers" of penguins. Festo has also built the Aqua Ray and Aqua Jelly, which emulate the locomotion of manta rays and jellyfish, respectively.

[edit] Environmental interaction and navigation


Main article: Robotic mapping

RADAR, GPS, LIDAR, ... are all combined to provide proper navigation and obstacle avoidance (vehicle developed for the 2007 DARPA Urban Challenge)
Though a significant percentage of robots in commission today are either human controlled or operate in a static environment, there is increasing interest in robots that can operate autonomously in a dynamic environment. These robots require some combination of navigation hardware and software in order to traverse their environment. In particular, unforeseen events (e.g. people and other obstacles that are not stationary) can cause problems or collisions. Some highly advanced robots, such as ASIMO, EveR-1, and the Meinü robot, have particularly good robot navigation hardware and software. Also, self-controlled cars, Ernst Dickmanns' driverless car, and the entries in the DARPA Grand Challenge are capable of sensing the environment well and subsequently making navigational decisions based on this information. Most of these robots employ a GPS navigation device with waypoints, along with radar, sometimes combined with other sensory data such as LIDAR, video cameras, and inertial guidance systems, for better navigation between waypoints.
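The waypoint-based navigation described above can be illustrated with a minimal bearing computation from a GPS fix to the next waypoint. The flat-earth approximation and all coordinates here are illustrative; real vehicles fuse GPS with inertial and LIDAR data rather than steering on raw fixes:

```python
# Illustrative waypoint-following step: given the current GPS fix and the
# next waypoint, compute the compass bearing to steer toward. Uses a
# flat-earth approximation, valid only over short distances.
import math

def bearing_to_waypoint(lat, lon, wp_lat, wp_lon):
    """Approximate compass bearing in degrees (0 = north, 90 = east)."""
    d_north = wp_lat - lat
    # Longitude degrees shrink with latitude, so scale by cos(latitude).
    d_east = (wp_lon - lon) * math.cos(math.radians(lat))
    return math.degrees(math.atan2(d_east, d_north)) % 360.0

# A waypoint due east of the vehicle yields a bearing of 90 degrees.
b = bearing_to_waypoint(45.0, 7.0, 45.0, 7.001)
```

A simple navigator would compare this bearing with the vehicle's heading (from the inertial unit) and steer to reduce the difference, advancing to the next waypoint once the current one is within some arrival radius.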

[edit] Human-robot interaction


Main article: Human-robot interaction

Kismet can produce a range of facial expressions.
If robots are to work effectively in homes and other non-industrial environments, the way they are instructed to perform their jobs, and especially how they will be told to stop, will be of critical importance. The people who interact with them may have little or no training in robotics, and so any interface will need to be extremely intuitive. Science fiction authors also typically assume that robots will eventually be capable of communicating with humans through speech, gestures, and facial expressions, rather than a command-line interface. Although speech would be the most natural way for the human to communicate, it is unnatural for the robot. It will probably be a long time before robots interact as naturally as the fictional C-3PO.

[edit] Speech recognition
Main article: Speech recognition
Interpreting the continuous flow of sounds coming from a human, in real time, is a difficult task for a computer, mostly because of the great variability of speech.[88] The same word, spoken by the same person, may sound different depending on local acoustics, volume, the previous word, whether or not the speaker has a cold, and so on. It becomes even harder when the speaker has a different accent.[89] Nevertheless, great strides have been made in the field since Davis, Biddulph, and Balashek designed the first "voice input system", which recognized "ten digits spoken by a single user with 100% accuracy", in 1952.[90] Currently, the best systems can recognize continuous, natural speech at up to 160 words per minute with an accuracy of 95%.[91]

[edit] Robotic voice
Other hurdles exist when allowing the robot to use voice for interacting with humans. For social reasons, synthetic voice proves suboptimal as a communication medium,[92] making it necessary to develop the emotional component of robotic voice through various techniques.[93][94]

[edit] Gestures
Further information: Gesture recognition
One can imagine, in the future, explaining to a robot chef how to make a pastry, or asking directions from a robot police officer. In both of these cases, making hand gestures would aid the verbal descriptions. In the first case, the robot would be recognizing gestures made by the human, and perhaps repeating them for confirmation. In the second case, the robot police officer would gesture to indicate "down the road, then turn right". It is likely that gestures will make up a part of the interaction between humans and robots.[95] A great many systems have been developed to recognize human hand gestures.[96]

[edit] Facial expression
Further information: Facial expression
Facial expressions can provide rapid feedback on the progress of a dialog between two humans, and may soon be able to do the same for interaction between humans and robots. Robotic faces have been constructed by Hanson Robotics using an elastic polymer called Frubber, allowing a wide range of facial expressions thanks to the elasticity of the rubber facial coating and the embedded subsurface motors (servos).[97] The coating and servos are built on a metal skull. A robot should know how to approach a human, judging by their facial expression and body language: whether the person is happy, frightened, or crazy-looking affects the type of interaction expected of the robot. Likewise, robots like Kismet and the more recent addition, Nexi,[98] can produce a range of facial expressions, allowing them to have meaningful social exchanges with humans.[99]

[edit] Artificial emotions
Artificial emotions can also be embedded, composed of a sequence of facial expressions and/or gestures. As can be seen from the movie Final Fantasy: The Spirits Within, the programming of these artificial emotions is complex and requires a great amount of human observation. To simplify this programming in the movie, presets were created together with a special software program. This decreased the amount of time needed to make the film. These presets could possibly be transferred for use in real-life robots.

[edit] Personality
Many of the robots of science fiction have a personality, something which may or may not be desirable in the commercial robots of the future.[100] Nevertheless, researchers are trying to create robots which appear to have a personality:[101][102] i.e. they use sounds, facial expressions, and body language to try to convey an internal state, which may be joy, sadness, or fear. One commercial example is Pleo, a toy robot dinosaur, which can exhibit several apparent emotions.[103]

Control

Puppet Magnus, a robot-manipulated marionette with complex control systems

Further information: Control system

The mechanical structure of a robot must be controlled to perform tasks. The control of a robot involves three distinct phases: perception, processing, and action (robotic paradigms). Sensors give information about the environment or the robot itself (e.g. the position of its joints or its end effector). This information is then processed to calculate the appropriate signals to the actuators (motors) which move the mechanism.

The processing phase can range in complexity. At a reactive level, it may translate raw sensor information directly into actuator commands. Sensor fusion may first be used to estimate parameters of interest (e.g. the position of the robot's gripper) from noisy sensor data. An immediate task (such as moving the gripper in a certain direction) is inferred from these estimates. Techniques from control theory convert the task into commands that drive the actuators.

At longer time scales or with more sophisticated tasks, the robot may need to build and reason with a "cognitive" model. Cognitive models try to represent the robot, the world, and how they interact. Pattern recognition and computer vision can be used to track objects. Mapping techniques can be used to build maps of the world. Finally, motion planning and other artificial intelligence techniques may be used to figure out how to act. For example, a planner may figure out how to achieve a task without hitting obstacles, falling over, etc.
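The perception-processing-action cycle can be sketched as a minimal reactive controller. The sensor noise model, proportional gain, and time step below are illustrative assumptions, not values from the text:

```python
import random

random.seed(42)

def sense(true_angle):
    """Perception: read a joint-angle sensor; real sensors are noisy."""
    return true_angle + random.gauss(0.0, 0.01)

def process(measured, target, gain=2.0):
    """Reactive processing: turn the sensed error into an actuator command."""
    return gain * (target - measured)

def act(angle, command, dt=0.01):
    """Action: apply the command to the (simulated) joint motor."""
    return angle + command * dt

angle, target = 0.0, 1.0
for _ in range(1000):          # sense -> process -> act, repeated
    angle = act(angle, process(sense(angle), target))

assert abs(angle - target) < 0.05   # the joint settles near the target
```

A cognitive-level controller would sit above this loop, replanning the `target` as its model of the world changes.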

Autonomy levels


Control systems may also have varying levels of autonomy.

1. Direct interaction is used for haptic or tele-operated devices, and the human has nearly complete control over the robot's motion.
2. Operator-assist modes have the operator commanding medium-to-high-level tasks, with the robot automatically figuring out how to achieve them.
3. An autonomous robot may go for extended periods of time without human interaction.

Higher levels of autonomy do not necessarily require more complex cognitive capabilities. For example, robots in assembly plants are completely autonomous, but operate in a fixed pattern.

Another classification takes into account the interaction between human control and the machine motions.

1. Teleoperation. A human controls each movement; each machine actuator change is specified by the operator.
2. Supervisory. A human specifies general moves or position changes and the machine decides specific movements of its actuators.
3. Task-level autonomy. The operator specifies only the task and the robot manages itself to complete it.
4. Full autonomy. The machine will create and complete all its tasks without human interaction.
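The second classification can be captured as a small lookup keyed on autonomy level; the enum and mapping names here are my own illustrative choices, not a standard API:

```python
from enum import Enum

class Autonomy(Enum):
    """Levels from the human-control classification above."""
    TELEOPERATION = 1
    SUPERVISORY = 2
    TASK_LEVEL = 3
    FULL = 4

# What the human operator must still specify at each level
OPERATOR_SPECIFIES = {
    Autonomy.TELEOPERATION: "each actuator change",
    Autonomy.SUPERVISORY: "general moves or position changes",
    Autonomy.TASK_LEVEL: "only the task",
    Autonomy.FULL: "nothing",
}

for level in Autonomy:
    print(f"{level.name}: operator specifies {OPERATOR_SPECIFIES[level]}")
```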

Robotics research


Further information: Open-source robotics, Evolutionary robotics, Areas of robotics, and Robotics simulator

Much of the research in robotics focuses not on specific industrial tasks, but on investigations into new types of robots, alternative ways to think about or design robots, and new ways to manufacture them. Other investigations, such as MIT's cyberflora project, are almost wholly academic.

One notable innovation in robot design is the open-sourcing of robot projects. To describe the level of advancement of a robot, the term "generation robots" can be used. The term was coined by Professor Hans Moravec, Principal Research Scientist at the Carnegie Mellon University Robotics Institute, in describing the near-future evolution of robot technology. First-generation robots, Moravec predicted in 1997, should have an intellectual capacity comparable to perhaps a lizard and should become available by 2010. Because the first-generation robot would be incapable of learning, Moravec predicted that the second-generation robot would be an improvement over the first and become available by 2020, with intelligence maybe comparable to that of a mouse. The third-generation robot should have intelligence comparable to that of a monkey. Though fourth-generation robots, robots with human intelligence, would become possible, Moravec does not predict this happening before around 2040 or 2050.[104]

A second innovation is evolutionary robotics, a methodology that uses evolutionary computation to help design robots, especially the body form, or motion and behavior controllers. In a way similar to natural evolution, a large population of robots is allowed to compete in some way, or their ability to perform a task is measured using a fitness function. Those that perform worst are removed from the population and replaced by a new set, which have new behaviors based on those of the winners. Over time the population improves, and eventually a satisfactory robot may appear. This happens without any direct programming of the robots by the researchers. Researchers use this method both to create better robots[105] and to explore the nature of evolution.[106] Because the process often requires many generations of robots to be simulated,[107] this technique may be run entirely or mostly in simulation, then tested on real robots once the evolved algorithms are good enough.[108] Currently, there are about one million industrial robots at work around the world, and Japan has the highest density of robots in its manufacturing industry.[citation needed]
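The select-and-replace cycle described above can be sketched in a few lines. The fitness function here is a stand-in (distance of a three-parameter controller from an arbitrary target vector); a real evolutionary-robotics system would instead score each controller's task performance in simulation:

```python
import random

random.seed(0)

def fitness(genome):
    """Stand-in fitness: closeness of controller parameters to a target
    vector. A real system would measure task performance in simulation."""
    target = [0.5, -0.2, 0.8]
    return -sum((g - t) ** 2 for g, t in zip(genome, target))

def mutate(genome, sigma=0.1):
    """Offspring inherit a winner's behavior with small random changes."""
    return [g + random.gauss(0.0, sigma) for g in genome]

# A population of random controllers competes...
population = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(20)]

for generation in range(100):
    population.sort(key=fitness, reverse=True)
    winners = population[:10]                         # best half survives
    children = [mutate(random.choice(winners)) for _ in range(10)]
    population = winners + children                   # worst half is replaced

best = max(population, key=fitness)
assert fitness(best) > -0.1       # the population has improved substantially
```

Note that the researcher never programs the winning controller directly; only the fitness function and the variation operator are specified.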

Dynamics and kinematics


Further information: Kinematics and Dynamics (mechanics)

The study of motion can be divided into kinematics and dynamics. Direct kinematics refers to the calculation of end-effector position, orientation, velocity, and acceleration when the corresponding joint values are known. Inverse kinematics refers to the opposite case, in which required joint values are calculated for given end-effector values, as done in path planning. Some special aspects of kinematics include handling of redundancy (different possibilities of performing the same movement), collision avoidance, and singularity avoidance.

Once all relevant positions, velocities, and accelerations have been calculated using kinematics, methods from the field of dynamics are used to study the effect of forces upon these movements. Direct dynamics refers to the calculation of accelerations in the robot once the applied forces are known, and is used in computer simulations of the robot. Inverse dynamics refers to the calculation of the actuator forces necessary to create a prescribed end-effector acceleration; this information can be used to improve the control algorithms of a robot.

In each area mentioned above, researchers strive to develop new concepts and strategies, improve existing ones, and improve the interaction between these areas. To do this, criteria for "optimal" performance and ways to optimize the design, structure, and control of robots must be developed and implemented.
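Direct and inverse kinematics are easiest to see on a two-link planar arm. The link lengths and test angles below are illustrative assumptions; the closed-form inverse solution shown is the standard elbow-down one for this geometry:

```python
import math

L1, L2 = 1.0, 0.8   # link lengths of a hypothetical two-link planar arm

def direct_kinematics(theta1, theta2):
    """Direct kinematics: joint angles -> end-effector position (x, y)."""
    x = L1 * math.cos(theta1) + L2 * math.cos(theta1 + theta2)
    y = L1 * math.sin(theta1) + L2 * math.sin(theta1 + theta2)
    return x, y

def inverse_kinematics(x, y):
    """Inverse kinematics (one of two solutions: elbow-down).

    math.acos raises ValueError if the target is out of reach; the
    fully stretched arm (theta2 = 0) is a kinematic singularity."""
    c2 = (x * x + y * y - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    theta2 = math.acos(c2)
    k1 = L1 + L2 * math.cos(theta2)
    k2 = L2 * math.sin(theta2)
    theta1 = math.atan2(y, x) - math.atan2(k2, k1)
    return theta1, theta2

# Round trip: solving inverse kinematics for a reachable point recovers
# joint angles that map back to the same position.
x, y = direct_kinematics(0.3, 0.7)
t1, t2 = inverse_kinematics(x, y)
assert abs(t1 - 0.3) < 1e-6 and abs(t2 - 0.7) < 1e-6
```

The redundancy mentioned above shows up even here: mirroring `theta2` to `-theta2` (elbow-up) reaches the same point with a different pose.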

Education and training

The SCORBOT-ER 4u, an educational robot.

Robotics engineers design robots, maintain them, develop new applications for them, and conduct research to expand the potential of robotics.[109] Robots have become a popular educational tool in some middle and high schools, as well as in numerous youth summer camps, raising interest in programming, artificial intelligence, and robotics among students. First-year computer science courses at several universities now include programming of a robot in addition to traditional software engineering coursework.

Career training

Universities offer bachelor's, master's, and doctoral degrees in the field of robotics. Some private career colleges and vocational schools offer robotics training aimed at careers in robotics.

Certification
The Robotics Certification Standards Alliance (RCSA) is an international robotics certification authority that confers various industry- and educational-related robotics certifications.

Summer robotics camp


Several national summer camp programs include robotics as part of their core curriculum, including Digital Media Academy, RoboTech, and Cybercamps. In addition, youth summer robotics programs are frequently offered by celebrated museums such as the American Museum of Natural History[110] and The Tech Museum of Innovation in Silicon Valley, California.

Employment

A robot technician builds small all-terrain robots. (Courtesy: MobileRobots Inc)

Robotics is an essential component in many modern manufacturing environments. As factories increase their use of robots, the number of robotics-related jobs grows and has been observed to be steadily rising.[111]

Effects on unemployment


Main article: Relationship of automation to unemployment

Some analysts, such as Martin Ford, author of The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future,[112] argue that robots and other forms of automation will ultimately result in significant unemployment unless the economy is engineered to absorb them without displacing humans, as machines begin to match and exceed the capability of workers to perform most jobs. At present the negative impact is only on menial and repetitive jobs, and there is actually a positive impact on the number of jobs for highly skilled technicians, engineers, and specialists. However, these highly skilled jobs are not sufficient in number to offset the greater decrease in employment among the general population, causing structural unemployment in which overall (net) unemployment rises. As robotics and artificial intelligence develop further, some worry that even many skilled jobs may be threatened. According to conventional economic theory, this should merely cause an increase in the productivity of the involved industries, resulting in higher demand for other goods, hence higher labour demand in these sectors, offsetting whatever negatives are caused. Conventional theory describes the past well, but may not describe future scenarios due to shifts in the parameter values that shape the context (see Automation and its effects on unemployment).

A team led by Robert Full of the University of California, Berkeley, will work with cockroaches, both living and robotic. (Credit: Robert Full, UC Berkeley (left); Daniel Koditschek, University of Pennsylvania (right))

Robots have long captured the human imagination. But despite many advances, robots have yet to reach the potential so often envisioned in science fiction. Today's engineers and computer scientists are still pursuing one missing ingredient: high intelligence. It would be nice, for example, if robots possessed the intelligence needed to cope with uncertainty, learn from experience and work as a team. Robots with minimal intelligence are already invading our homes. For example, more than 500,000 Roombas, the vacuum cleaner from iRobot, have been sold, and Friendly Robotics has sold 25,000 automatic lawn mowers. Toy robots have also been given minimal intelligence. Robot dogs such as the Sony AIBO can identify and chase a ball, avoid obstacles and respond to voice commands. At the robotics frontier, researchers seek more intelligence, but not necessarily that of a fully functional human brain. A cockroach brain would be nice, for starters. Insects easily control six legs as they scamper over, under or around obstacles, and robot designers are borrowing features from insect nervous systems to build six-legged robots with similar talents. "Intelligent robots will be one of the engineering achievements of the 21st century," said Junku Yuh, who leads the robotics program in the National Science Foundation's (NSF) Computer and Information Science and Engineering directorate. "We will see them more and more in our daily lives." Throughout history, robots have embodied and exemplified cutting-edge technology. Mechanical automatons were devised during the Industrial Revolution, electronic circuitry was added at the turn of the 20th century, computers gave robots "brains" in the 1940s, and shrinking electronics and more powerful computers have granted robots greater abilities. Industries adopted robots for many manufacturing tasks, from automobile assembly to ship welding.
During the past 20 years, advances in sensors, actuators and "mechatronics" -- the integration of electronics with mechanical design -- have led to remarkable robots such as Honda's humanoid ASIMO. Of course, every advance brings new sets of challenges. Today, NSF supports mechanical engineers, electrical engineers, computer scientists and other researchers as they develop future generations of intelligent robots. These engineers and computer scientists cooperate with biologists, neuroscientists and psychologists to exploit new knowledge in the study of the brain and behavior. NSF also supports education activities that use robots as a platform for studying mechanics, electronics, software and other topics. In addition to the challenges of packaging intelligence, robotics research ultimately pursues practical goals. Some robots will help people do what they can't or would rather not do. Other robots will tackle complex projects by working as teams. Robots will help protect critical infrastructure and monitor the environment as mobile, intelligent sensors. And of course, robots will continue to explore extreme environments where no human can go, or wants to.

Future of robotics
From Wikipedia, the free encyclopedia

Main article: Robotics

TOPIO, a robot that can play table tennis with humans.

This article is about the future of robotics for civil use.

Contents

1 Types of robots
2 Applications
3 Market evolution
4 Impact on the economy and job market
5 Projected robotics timeline
6 Robot rights
7 See also
8 References
9 External links

Types of robots


Humanoid robots:

Lara is the first female humanoid robot with artificial muscles (metal alloy strands that instantly contract when heated by electric current)[1][2] instead of electric motors (2006). Asimo is one of the most advanced projects as of 2009.

Modular robots: can be built from standard building blocks that can be combined in different ways.

Utility fog
M-Tran - a snake-like modular robot that uses genetic algorithms to evolve walking programs
Self-replicating robots[3][4] - modular robots that can produce copies of themselves using existing blocks
Swarmanoid[5][6] - a project that uses 3 specialized classes of robots (footbots, handbots and eyebots) to create an effective swarm; such a swarm should be able, for example, to tidy a bedroom with each robot doing what it is best at
Self-Reconfiguring Modular Robotics

Educational toy robots

Sports robots: RoboCup, TOPIO

Applications

Caterpillar plans to develop remote-controlled machines and expects to develop fully autonomous heavy robots by 2021.[7] Some cranes already are remote controlled. It was demonstrated that a robot can perform a herding task.[8] Robots have been used increasingly in manufacturing since the 1960s; in the auto industry they can amount to more than half of the "labor". There are even "lights off" factories, such as an IBM keyboard manufacturing factory in Texas, that are 100% automated.[1] Robots such as HOSPI[9] are used as couriers in hospitals. Other hospital tasks performed by robots are receptionists, guides and porters' helpers[10] (not to mention surgical robot helpers such as Da Vinci). Robots can serve as waiters[11][12] and cooks.[13]

Market evolution


Today's market is not fully mature. One or more software compatibility layers have yet to emerge to allow the development of a rich robotics ecosystem (similar to that of today's personal computers). The most commonly used software in robotics research consists of free-software solutions such as Player/Stage or cross-platform technologies such as URBI. Microsoft is currently working in this direction with its new proprietary software, Microsoft Robotics Studio. The use of open-source tools supports the continued improvement of the tools and algorithms for robotics research from the point where one team leaves off.

Impact on the economy and job market


Some analysts, such as Martin Ford, author of The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future,[2] argue that as information technology advances, robots and other forms of automation will ultimately result in significant unemployment as machines and software begin to match and exceed the capability of workers to perform most routine jobs. As robotics and artificial intelligence develop further, even many skilled jobs may be threatened. Technologies such as machine learning[3] may ultimately allow computers to do many knowledge-based jobs that require significant education. This may result in substantial unemployment at all skill levels, stagnant or falling wages for most workers, and increased concentration of income and wealth as the owners of capital capture an ever larger fraction of the economy. This in turn could lead to depressed consumer spending and economic growth, as the bulk of the population lacks sufficient discretionary income to purchase the products and services produced by the economy.[4] However, radical advances in other technological fields, such as nanomanufacturing, may lead to the prices of many products dropping drastically, to little or nothing more than the cost of the atoms required to build them and of transportation.

Projected robotics timeline

Robots capable of manual labour tasks:

2009 - robots that perform searching and fetching tasks in an unmodified library environment - Professor Angel del Pobil (University Jaume I, Spain), 2004[5]
2015-2020 - every South Korean household, and many European ones, will have a robot - The Ministry of Information and Communication (South Korea), 2007[6]
2018 - robots will routinely carry out surgery - South Korean government, 2007[6]
2022 - intelligent robots that sense their environment, make decisions, and learn are used in 30% of households and organizations - TechCast[7]
2030 - robots capable of performing at a human level at most manual jobs - Marshall Brain[8]
2034 - robots (home automation systems) performing most household tasks - Helen Greiner, Chairman of iRobot[9]

Military robots:

2015 - one third of US fighting strength will be composed of robots - US Department of Defense, 2006[10]
2035 - first completely autonomous robot soldiers in operation - US Department of Defense, 2006[10]

Developments related to robotics from the Japan NISTEP[11] 2030 report:

2013-2014 - agricultural robots (AgRobots)[12][13]
2013-2017 - robots that care for the elderly
2017 - medical robots performing low-invasive surgery
2017-2019 - household robots with full use
2019-2021 - nanorobots
2021-2022 - transhumanism

Robot rights


According to research commissioned by the UK Office of Science and Innovation's Horizon Scanning Centre,[14] robots could one day demand the same citizens' rights as humans. The study also warns that the rise of robots could put a strain on resources and the environment.
1. ^ http://www.automationworld.com/news-220
2. ^ Ford, Martin R. (2009), The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future, Acculant Publishing, ISBN 978-1448659814, http://www.thelightsinthetunnel.com. (e-book available free online.)
3. ^ "Machine Learning: A Job Killer?"
4. ^ "Will Automation Lead to Economic Collapse?"
5. ^ Robots get bookish in libraries, BBC News

Advantages of Robotics

Robotics is essential for carrying out functions in different industrial sectors. This article provides information about several advantages of robotics.

The use of robotics is widespread in the 21st century. Hardly a sector exists that doesn't use robotic systems to carry out technical processes. Robotic systems have come a long way since their invention, and are getting more and more advanced. They can perform flawless work in very little time. They have many advantages relating to factors such as time, quality, and safety.

Advantages of Robotics in Business

Quality and Accuracy of Work

Robotic systems can impressively improve the quality of work. They don't make the mistakes and errors that humans do, which saves a great deal of output and production time. They provide optimum output in terms of quality as well as quantity. In the medical field, they are used to carry out complicated surgeries which are very difficult for doctors and surgeons to perform. In the industrial sector they prevent errors in the production of goods.

Quantity of Production

If robots are used for production purposes, the throughput speed rises, which directly affects production. They can work at a constant speed without the need for short breaks, sleep, vacations, and other time-consuming interruptions. Moreover, they have the potential to produce considerably more than a human worker.

Advantages of Robotics in Various Fields

Industrial Robotics

The use of robotic systems in the industrial sector is a necessity nowadays, as more and more products must be manufactured in very little time, and with high quality and accuracy. Big industrial manufacturing giants have robotic systems that work 24/7. Such systems can do the work of approximately 100 or more human workers at a time. Car and electronics manufacturing companies make the most use of such automated systems. They employ robotic systems in several testing and assembly procedures which would be difficult and time-consuming for human workers to carry out. Robotic arms are a simple example of such technologies. They may also be used for robotic painting and robotic welding jobs. Robotic packaging machinery is used in companies which manufacture daily-use products.

Medical and Healthcare Robotics

Robotic systems have also proven to play a very important role in the medical and surgical sector, be it in manufacturing medicines and drugs or carrying out simple tasks in specific surgeries. Robots don't perform the whole procedure in surgeries, but they certainly assist surgeons in performing the task accurately. A surgeon may use a 'robotics surgery coordinator' to perform a surgery without making big incisions, and in less time than normal. The use of robotics in nursing is increasing due to the shortage of efficient manpower. Moreover, a robot may be used to perform an unmanned operation, which is known as robotic surgery.

Robotics in the Household

Nowadays, robots that can perform household duties are also being manufactured, although household-robot technology is not yet used commercially. Some examples include robotic pool cleaners and robotic vacuum cleaners. Robotics programming is a way of feeding information into robots about what tasks are to be performed and how. After more development in this field, the use of robots in the household may become common. Scientists are working on technologies that can be incorporated in future robotic pets, enabling the pets to better mingle with families and also provide care and protection. Future robotic systems may bring benefits that we can't even imagine. The robotic hand has been shown in many films, and it may become a reality in the near future. The advantages of robotics are certainly predicted to grow in several other fields over time.

Robotic Dispensing Systems

Desktop Robots

Assembly Line Robots

Many manual dispensing operations can be economically automated by an I&J Fisnar desktop dispensing robot, saving costs in material waste, rejects, time and labor. I&J Fisnar has a wide range of 3- and 4-axis robots for automatic dispensing of adhesives, liquid gaskets, resins and UV materials.

Gantry-Cartesian robotic systems can be easily integrated with inline industrial automation. Gantry industrial robots have all their axes above the work making them ideal for large dispensing applications and jobs requiring a fourth axis such as when using a spray valve. They are available in a range of standard and special sizes.

Robotic Arms

Rotary Tables

SCARA robots are used in robotic automation cells for factory automation and situations requiring a high level of manipulation. SCARA assembly robots dispense on stationary work, take up a limited bench area, and can work over a rotary table where a mix of manual assembly and automated dispensing is required.

I&J Fisnar rotary tables are used for semi-automatic dispensing applications on rotary parts. Tables can be programmed for auto-cycling, allowing the operator to remove and place new parts for dispensing. Programming features compensate for fluid flow delays and for dispensing over greater or less than 360 degrees.

Mounting Hardware

A selection of equipment to mount any size barrel, syringe, or cartridge, as well as any of our dispensing valves. Equipment includes mounting hardware for barrels, syringes, cartridges, and valves. We also have tip locators and 360-degree conversion kits for precision dispensing.

Development of Robotics

Summary

The development of robotics may spawn the most significant change in business-related uses within the next 10 years. Science-fiction-like uses of robots are quickly becoming reality as companies across the globe work together to create intelligent and adaptive robotic machines. The business-related uses include industrial (manufacturing), personal (consumer), service, and security and defense robotics. Venture capital opportunities exist for Marshall USC in each of these categories, and investing in India-based robotics companies will help take Marshall USC in more socially responsible directions while still having high potential financial returns.

Robotic Design and Technology

The way robots are currently being designed is leading to a generation of robotics that will have many significant business-related uses. Many applications exist to encourage and assist developers, whether large or small, to create useful robots. Microsoft has released Robotics Studio, a software development kit that helps robot developers tackle a number of problems when designing robots. The software allows developers to build robots that can take in the commands coming from multiple sensors and send them to the robot's motors. This effectively reduces the chance that the robot will malfunction because its software is too busy sending output to one part of the robot to read input from its sensors. Microsoft Robotics Studio also includes technology called decentralized software services (DSS), which simplifies the writing of robotic applications. For example, when

combined with broadband wireless technology, a robot can be monitored and adjusted remotely through a web browser. According to Bill Gates, the goal of the software is to create an affordable, open platform that allows robot developers to readily integrate hardware and software into their designs [1]. This software and others are opening the door to so-called intelligent robotic technology, such as visual recognition, navigation and machine learning. Visual recognition hardware is already available from many companies, such as Tyzx, which has developed the DeepSea G2 Stereo Vision System, a small stereo camera that can see and interact with the world in three dimensions [2]. Advanced navigation hardware is also available as GPS (Global Positioning System) technology continues to evolve. Machine learning is another direction robotics is heading, where complex algorithms and techniques will give robots the ability to learn [3]. This intelligent robotic technology is being implemented in a number of different business industries: industrial (manufacturing), personal (consumer), service, and security and defense.

Implementation (Business-Related Uses)

Industrial (manufacturing) robotics: The current state of the robotics industry is dominated by manufacturing robots in automobile assembly lines, with over a million industrial robots in use worldwide. Most current industrial robots lack intelligent technology such as vision, hearing, or smell. There are, however, some new robots that utilize it, such as Motoman's vision-guided robot, which can pick randomly located automotive parts from a bin, place them on a table, and then individually place them in another bin [4]. Personal (consumer) robotics: Consumer robots are still very primitive in terms of their potential applications. The most notable product is iRobot's self-vacuuming Roomba, which has already sold over 2.5 million units [5].
The potential applications range from routine household tasks, to robot toys, to remotely monitoring children.

Service robotics: These robots have a range of potential remote service applications, from robots that assist the elderly in getting around, to robots that enable health care professionals to diagnose and treat patients from miles away [1]. One current service robot, Swarmy, removed about fourteen 50-gallon drums of sludge from an old wastewater tank in New Mexico [6].

Security and defense robotics: There are many robotics applications currently used for security and defense in the military, and that number is only growing. In 2006, the U.S. military used robots in over 30,000 missions. Over three thousand military robots are expected to be used in the next five years [7].

Why Robotics? In his 2007 Scientific American article A Robot in Every Home, Bill Gates relates the current state of the robotics industry to that of the computer industry thirty years ago when big, expensive mainframe computers were used only by large institutions and researchers were creating the basic necessary pieces to make the computer industry what it is today [1]. The manufacturing robots in automobile assembly lines are being compared to the mainframe computers, and past researchers are similar to current researchers developing robotic applications for specialized uses. If the robotics industry expands as much as the computer industry did, investing in robotics could offer very high returns for USC Marshall. Robotics ventures in India Despite its recent economic boom, India is still a third world country and much poverty still exists. However, the robotics industry in India has high growth potential; many venture capital opportunities exist that would help take USC Marshall in more socially responsible directions. Potential Growth Currently, the largest growth is coming from industrial (manufacturing) robotic companies in India. According to an article in AutomationWorld, the industry is expected to grow at two to two-and-one-half times the global average. Within the manufacturing sector, personal safety and enhanced productivity are becoming increasingly important, and industrial robots play a huge part in improving this. One of the largest Indian robotics companies is Precision Automation & Robotics India (PARI), who claims its industrial robots are used by global companies including Caterpillar, Hitachi, Bosch, Emerson Power, American Axle, Honeywell and Indian subsidiaries of multi-national companies such as Samsung, Philips, LG, Suzuki, Renault, Ford, Honda and Hyundai [8]. Although most robots in India are industrial, PARI and other companies are beginning to move into other robot markets. 
PARI already has five defense-related projects with the Indian government and is introducing a golfing robot that can play golf with 97% accuracy. The Indian company Gridbots is launching a robot called Robograd, which can clean homes and watch for intruders, for $250. A human-eyed robot called Neel has also been introduced by India-based HiTech Robotics. Neel is aimed at nursing and household use: it can avoid obstacles using stereo vision, recognize human faces, and interact with people through speech [8], [10].

Potential Benefits for India

Although robots have typically been portrayed as detrimental to people's jobs, this is not necessarily true. Mr. R. C. Bhargava, Chairman of Maruti Suzuki India Ltd., explained why in an article in Machinist: "Not only will investment in India-based companies help create jobs for the economy, but other potential applications of robotics can benefit India. Mr. Kapil Sibal, Minister for Science & Technology and Earth Sciences, is urging the robotics industry to embrace more than just industrial robots. Applications such as disaster management, earthquake relief, and monitoring the line of control can all be implemented with the use of robotics" [9].

Conclusion: Potential Benefits for USC Marshall

Because of its current and projected growth, investing in the robotics industry in India could greatly benefit USC Marshall. The robotics industry may become as big as the computer industry, with potentially huge financial payoffs if a successful company is chosen. Since a market for robotics already exists in India, the risk is lower than in other developing countries. Robotics will help India's overall economy by creating more jobs and delivering other social benefits. By investing in India-based robotics companies, USC Marshall has a great opportunity to gain financially while moving in more socially responsible directions.

Developmental robotics

From Wikipedia, the free encyclopedia

Developmental robotics (DevRob), sometimes called epigenetic robotics, is a methodology that uses metaphors from neural development and developmental psychology to develop the mind of autonomous robots. The focus is on one or more robots going through stages of autonomous mental development (AMD). Researchers in this field study artificial emotions, self-motivation, and other methods of self-organization.
The program that simulates the functions of the genome to develop a robot's mental capabilities is called a developmental program. Unlike traditional machine learning, developmental robotics has the following major features:

Task-nonspecificity: Since it is difficult for the genome to predict what tasks the baby will learn and perform in its life, the developmental program is body-specific (species-specific) but not task-specific.

Environmental openness: Because of task-nonspecificity, AMD must deal with unknown and uncontrolled environments, including various human environments.

Raw sensors: AMD must deal directly with continuous raw signals from sensors (e.g., vision, audition and touch), since different tasks require different information from the sensors, and only the raw signals contain all of it.

Online processing: At each time instant, what the machine will sense next depends on what the machine does now.

Incremental processing: Acquired skills must be used to assist in the acquisition of new skills, as a form of scaffolding; this requires incremental processing.

DevRob is related to, but differs from, evolutionary robotics (ER). ER uses populations of robots that evolve over time, whereas DevRob is interested in how the control system of a single robot develops through experience over time. DevRob is also related to work in the domains of robotics and artificial life.
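The online and incremental processing requirements above can be sketched with a toy example (a hypothetical illustration, not an algorithm from the DevRob literature): a learner that updates its internal state from one raw sample at a time as it arrives, never storing a dataset. All names here (`OnlineLearner`, the synthetic "sensor" stream) are invented for this sketch.

```python
import random

class OnlineLearner:
    """Toy online learner: perceptron-style updates, one raw sample at a time."""
    def __init__(self, n_inputs, lr=0.1):
        self.w = [0.0] * n_inputs  # weights start at zero
        self.b = 0.0               # bias term
        self.lr = lr               # learning rate

    def predict(self, x):
        # binary decision from the current weights
        s = sum(wi * xi for wi, xi in zip(self.w, x)) + self.b
        return 1 if s > 0 else 0

    def update(self, x, label):
        # incremental update: each sample is processed once as it arrives,
        # and the new state builds on everything learned so far
        err = label - self.predict(x)
        if err != 0:
            self.w = [wi + self.lr * err * xi for wi, xi in zip(self.w, x)]
            self.b += self.lr * err

random.seed(0)
learner = OnlineLearner(n_inputs=2)

# simulated stream of raw two-channel "sensor" readings;
# the (unknown to the learner) target is: label = 1 if x0 + x1 > 1
for _ in range(1000):
    x = [random.random(), random.random()]
    learner.update(x, 1 if x[0] + x[1] > 1 else 0)

# evaluate on 200 fresh samples from the same stream
correct = sum(
    learner.predict(x) == (1 if x[0] + x[1] > 1 else 0)
    for x in ([random.random(), random.random()] for _ in range(200))
)
print(correct, "of 200 correct")
```

The point of the sketch is the shape of the loop, not the perceptron itself: the learner has no access to the full data stream, only to the current raw sample, and every update is incremental in the sense described above.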

Disadvantages

Business success is not guaranteed by the introduction of robotic systems and robots in the workplace. As in any competitive market, some companies will move to the front of their industries while others lag behind for lack of sufficient financial resources, technical expertise, and so on. Disadvantages of robotic systems and robots include:

High initial cost of robotic systems and robots.
Possible need for extra space, and new technology, to accommodate them.
The need for highly skilled engineers, programmers and others to set up robotic systems and robots so as to prevent future problems and mishaps.
The learning curve of people working with new robotic systems, and possible injuries during that time.
Robotic systems and robots are limited to their programmed functions, and only the programmers really know what those functions are.
Unless artificial intelligence is highly sophisticated, robots may not respond properly in an emergency or when some unexpected variance occurs.
Introducing new systems will inherently bring out defects.
Intellectual or physical limitations of employees or household consumers in operating a robot or robotic system.

Moral Issues

Countries and companies appear to be working aggressively to have consumers idolize robots and robotic systems in their own best interests.
Robots and robotic systems replace certain workers, causing economic losses and possibly shortening lifespans.
Certain people, for lack of funds, may not be able to access important uses of robots - for example, certain surgeries using robotic systems.
Certain military robots are pro-war.
The aggressive introduction of robots and robotic systems is making companies and human beings more dependent on them, and therefore creating a more dependent society.
As artificial intelligence becomes more sophisticated and robots enter more households, there may be significant negative effects on the human family system.
