Guided Robotics
Steven Prehn
Robotic Guidance, LLC
Traditional Vision vs. Vision-Based Robot Guidance
• Basic Premise: Vision guidance is needed when the part is not always in the same position
[Figure: robot work cell showing the TCP and a user frame (Uframe)]
Understanding Cameras
[Figure: a 1/3" CCD imager divided into vertical columns of pixels; each CCD cell converts light into a voltage, which is digitized to a gray-scale value]
Voltage to gray scale:
• Voltage = 1.0 → Brightest = 255
• Voltage = 0.5 → Mid Gray = 128
• Voltage = 0 → Black = 0
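The CCD voltage-to-gray-scale mapping above can be sketched as a simple quantization. This assumes a linear sensor response and a 1.0 V full-scale output; both are illustrative assumptions, not specifications from the slides:

```python
def voltage_to_gray(voltage, full_scale=1.0):
    """Map a CCD cell voltage to an 8-bit gray level (0-255).

    Assumes a linear response; full_scale is the voltage that
    saturates to the brightest value, 255.
    """
    v = max(0.0, min(voltage, full_scale))  # clamp to the sensor's range
    return round(v / full_scale * 255)

# Reproduces the mapping on the slide:
# 1.0 V -> 255 (brightest), 0.5 V -> 128 (mid gray), 0 V -> 0 (black)
```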
Picture Elements – Pixels
Pixels have two properties:
• A location, addressed as (row, column): (1,1) through (8,8) in this example
• An intensity (gray-scale value)
Example intensities for an 8 × 8 image:
255 255 255 255 255 255 255 255
255 255 255 255 255 255 128 128
255 255 128 255 255 255 128 128
255 255 128 128 255 255 255 128
255 255 128 128 128 255 255 128
255 255 128 255 255 255 255 255
255 255 128 255 255 255 255 255
255 255 255 255 255 255 255 255
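The two pixel properties can be illustrated with a small array holding the 8 × 8 gray values from the grid above. This is only a sketch; the indices here are zero-based, whereas the slide labels pixels from (1,1):

```python
# Each pixel has a location (row, column) and an intensity (0-255).
image = [
    [255, 255, 255, 255, 255, 255, 255, 255],
    [255, 255, 255, 255, 255, 255, 128, 128],
    [255, 255, 128, 255, 255, 255, 128, 128],
    [255, 255, 128, 128, 255, 255, 255, 128],
    [255, 255, 128, 128, 128, 255, 255, 128],
    [255, 255, 128, 255, 255, 255, 255, 255],
    [255, 255, 128, 255, 255, 255, 255, 255],
    [255, 255, 255, 255, 255, 255, 255, 255],
]

# Collect the locations of every mid-gray pixel (the "feature" the
# vision process would look for).
feature = [(r, c) for r in range(8) for c in range(8) if image[r][c] == 128]
```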
A C-mount lens provides a fixed, standard distance from the lens mount to the imager.
World and Tool Frames
Two key contributors to the robot's position: user frames and tool frames.
• The primary robot coordinate system is referred to as the World frame
• Other frames can be created that are positioned relative to the World frame
[Figure: top view of the robot showing the World coordinate system, a user frame, and a tool frame with their +X, +Y, and +Z axes]
• Tool Frame (UT): $MNUTOOL[1,tool_num]
• User Frame (USER / UFRAME / UF): $MNUFRAME[1,frame_num]
Tool Frame and Programming
[Figure: robot positional data expressed relative to the World coordinate system, a user frame, and a tool frame]
Robot and Tool Positions Relative to the Part
[Figure: robot and tool X and Z axes shown relative to the part, with the positional offset Δ]
Vision Calibration and the User Frame
In order for visual offset data to be
useful to the robot, both vision
and the robot must recognize the
same coordinate system.
For an n × m matrix A and an m × p matrix B, the matrix product A : B (denoted here with a colon rather than a multiplication sign or dot) is defined to be an n × p matrix.
Standard Robot Equation:
R : T = F : P
Multiplying both sides on the right by T⁻¹, and since T : T⁻¹ = I:
R = F : P : T⁻¹
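The rearrangement above can be checked numerically with homogeneous 4 × 4 transforms. This is a sketch using NumPy; the frame values below are made-up placeholders, not numbers from the slides:

```python
import numpy as np

def transform(x, y, z, yaw_deg):
    """Homogeneous 4x4 transform: a rotation about Z plus a translation."""
    a = np.radians(yaw_deg)
    t = np.eye(4)
    t[:2, :2] = [[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]]
    t[:3, 3] = [x, y, z]
    return t

# F = user frame, P = part in user frame, T = tool frame (placeholder values)
F = transform(500.0, 200.0, 0.0, 90.0)
P = transform(25.0, -10.0, 50.0, 15.0)
T = transform(0.0, 0.0, 120.0, 0.0)

# Standard robot equation R : T = F : P, rearranged to R = F : P : T^-1
R = F @ P @ np.linalg.inv(T)

# Check: composing R with the tool frame reproduces F : P
assert np.allclose(R @ T, F @ P)
```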
Guidance Summary
1. Establish the robot's knowledge of the calibration frame (used by the camera).
2. Camera calibration assigns the translation to be used.
3. The vision process is used to locate the part position.
   – Requires a Z height of the part relative to the frame of reference
   – Record a part reference position
4. Move the robot to pick the part and record this position.
The camera is used to find the part and calculate its position. The robot is then moved to a location relative to the part with offsets applied.
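The steps above amount to computing the difference between the found and reference part positions and applying that difference to the recorded pick position. A minimal planar (X, Y, angle) sketch with made-up numbers; a real controller applies the offset in full 3D:

```python
import math

def apply_vision_offset(pick, ref, found):
    """Shift a recorded pick pose by the part's movement (2D: x, y, deg).

    pick, ref, found are (x, y, angle_deg) tuples. The offset is the
    rigid motion carrying the reference part pose to the found pose.
    """
    dth = math.radians(found[2] - ref[2])
    # Rotate the pick point about the reference part position...
    dx, dy = pick[0] - ref[0], pick[1] - ref[1]
    rx = dx * math.cos(dth) - dy * math.sin(dth)
    ry = dx * math.sin(dth) + dy * math.cos(dth)
    # ...then translate by the part's displacement.
    return (found[0] + rx, found[1] + ry, pick[2] + found[2] - ref[2])

# Part moved +10 mm in X with no rotation: the pick shifts by 10 mm.
new_pick = apply_vision_offset(pick=(105.0, 50.0, 0.0),
                               ref=(100.0, 50.0, 0.0),
                               found=(110.0, 50.0, 0.0))
```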
Visual Tracking
[Figure: camera over a randomly fed conveyor with a pulsecoder; the vision system performs the offset calculation]
• 2D Line Tracking
  – X, Y location and angle orientation PLUS conveyor position
  – Queue management
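Line tracking combines the vision-found position with how far the conveyor has moved since the image was taken, read from the pulsecoder. A sketch assuming the conveyor runs along +X and a known counts-per-millimeter scale (both assumptions for illustration):

```python
def tracked_x(found_x, counts_at_snap, counts_now, counts_per_mm):
    """Current X of a part that was found at found_x in the snapshot.

    The pulsecoder count difference gives the conveyor travel since
    the image was taken; counts_per_mm comes from tracking calibration.
    """
    travel_mm = (counts_now - counts_at_snap) / counts_per_mm
    return found_x + travel_mm

# Conveyor advanced 4000 counts at 40 counts/mm: the part moved 100 mm.
x = tracked_x(found_x=250.0, counts_at_snap=1000, counts_now=5000,
              counts_per_mm=40.0)
```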
Basic 3D Robot Guidance Methods
• 2.5D, for rough Z approximation
• 2.5D with a structured-light reference
• Single-view 3D using geometric relationships
• Multiple-laser or structured-light-pattern triangulation methods
• Advanced 3D (depth analysis)
2D Guidance with a Change in Z
• The image created by the camera is like looking through a cone
• The ratio of pixels to units of measure changes as you move within the cone
• If the part's distance from the camera is not identical to when the camera was calibrated, finding the part's position accurately requires adjusting the transformation
• How do you know the part height? What can be leveraged?
  – Part scale, height sensors
  – Lens mathematics
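Under a pinhole lens model, the mm-per-pixel scale grows linearly with distance from the lens, so a 2D calibration done at one height can be rescaled when the part height is known. The proportionality is standard lens mathematics; the numbers below are placeholders:

```python
def mm_per_pixel(calib_scale, calib_dist, part_dist):
    """Rescale a 2D calibration for a part at a different camera distance.

    calib_scale: mm/pixel measured at calibration distance calib_dist.
    With a pinhole model the scale is proportional to distance, so a
    part closer to the camera (smaller part_dist) covers more pixels
    and needs a smaller mm/pixel scale.
    """
    return calib_scale * (part_dist / calib_dist)

# Calibrated at 1000 mm with 0.5 mm/pixel; the part sits 100 mm higher
# (closer to the camera), so each pixel now spans less of the part.
scale = mm_per_pixel(calib_scale=0.5, calib_dist=1000.0, part_dist=900.0)
```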
2D Single Camera - 2.5D
The world is not flat…
[Figure: camera image of stacked parts; the bottom part appears at a different scale. Coordinate axes +X, -Y, -Z shown]
2D Robotic Assumptions
• 2D imaging systems can be used if:
  – The part always sits flat on a surface or fixture (no pitch or yaw changes)
  – The part is consistent in its size and shape
  – The tool is designed to compensate for any variation in height (and subsequent X, Y error)
• 2D is not a good solution when:
  – Parts are stacked and may be subject to tipping
  – Parts are randomly placed in a bin for picking
  – Parts enter the robot cell on a pallet that is damaged, or on a conveyor that wobbles
  – The process requires high-accuracy assembly, like hanging a door on an automobile
Example 3D Robot Applications
[Figure: a laser light stripe projected on the work piece; the vertical position of the laser light plane determines Z. Examples: hex wrenches with a laser line, a curved surface with a laser line]
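The laser-stripe idea can be sketched with simple triangulation: the camera sees the stripe shift sideways as the surface height changes, and a known angle between the laser plane and the camera axis converts that shift to Z. The geometry and numbers here are illustrative assumptions, not a specific sensor's model:

```python
import math

def stripe_height(shift_mm, laser_angle_deg):
    """Surface height change from the lateral shift of a laser stripe.

    Assumes the camera looks straight down and the laser plane is
    tilted laser_angle_deg from vertical: a height change dz shifts
    the stripe image sideways by dz * tan(angle), so we invert that.
    """
    return shift_mm / math.tan(math.radians(laser_angle_deg))

# At 45 degrees the lateral shift equals the height change.
dz = stripe_height(shift_mm=10.0, laser_angle_deg=45.0)
```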
A 2D Change of Perspective
[Figure: a camera viewing a calibration grid at two planes; the distance moved between Plane 1 and Plane 2 changes the perspective in the camera image]
Bin Avoidance
• Bin: define the size and location of the bin
• Robot: model the EOAT and set up a check to keep the robot and EOAT from contacting the bin
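A minimal interference check can model both the bin walls and the EOAT as axis-aligned boxes and refuse motions where they overlap. Real controllers provide richer interference checking, so this is only a sketch with made-up dimensions:

```python
def boxes_overlap(a, b):
    """Axis-aligned box overlap test.

    Each box is ((xmin, ymin, zmin), (xmax, ymax, zmax)); the boxes
    overlap only if their extents intersect on all three axes.
    """
    (a0, a1), (b0, b1) = a, b
    return all(a0[i] < b1[i] and b0[i] < a1[i] for i in range(3))

# One bin wall modeled as a thin box; EOAT at a candidate pick position.
wall = ((0.0, 0.0, 0.0), (10.0, 400.0, 300.0))
eoat = ((5.0, 100.0, 150.0), (60.0, 160.0, 400.0))
collides = boxes_overlap(eoat, wall)  # reject this pick if True
```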
Summary
• Machine vision has progressed significantly in the
last 10 to 15 years
• Advances in technology continue to provide new
capabilities
• Pay close attention to 3D enhancements for Robotic
Guidance Applications
Contact Information
Steven Prehn