
Advanced Vision Guided Robotics

Steven Prehn
Robotic Guidance, LLC
Traditional Vision vs. Vision-Based Robot Guidance

• Traditional Machine Vision –
  – Determine if a product Passes or Fails
    • Assembly Verification
    • Find Defects
    • Gauging/Metrology

• Vision Guided Robots –
  – It's all about Location
    • Locate and Pick Parts
    • Locate and move relative to Parts for Assembly
    • Locate parts and remove flash or apply epoxy
Robotic System Adaptability

• Basic Premise:
  Vision Guidance is needed when the part is not always in the same position

[Figure: six-axis robot with joints J1–J6, the TCP, and a Uframe labeled]
Understanding Cameras

CCD – Charge-Coupled Device

[Figure: camera with a 1/3" imager chip; the digital imager array is organized
into horizontal and vertical pixels, with the total pixel count expressed in
megapixels]
Color Image Converted to Gray Scale

Continuous Image Converted to “Picture Elements” or Pixels
Gray Scale = Measured Light Level

CCD Voltage    Gray Scale
1.0 V          Brightest = 255
0.5 V          Mid Gray = 128
0.0 V          Black = 0
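The voltage-to-gray mapping is a simple linear quantization. A minimal sketch
(the 1.0 V full-scale value is an assumption taken from the table above):

```python
def voltage_to_gray(voltage, full_scale=1.0):
    """Linearly quantize a CCD pixel voltage into an 8-bit gray level."""
    voltage = max(0.0, min(voltage, full_scale))  # clamp to the sensor's range
    return round(voltage / full_scale * 255)

print(voltage_to_gray(1.0))  # 255  (brightest)
print(voltage_to_gray(0.5))  # 128  (mid gray, rounded)
print(voltage_to_gray(0.0))  # 0    (black)
```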
Picture Elements – Pixels
Pixels have Two Properties: grid location and gray scale value

Grid location (row, column):        Gray scale value:
1,1 1,2 1,3 1,4 1,5 1,6 1,7 1,8     255 255 255 255 255 255 255 255
2,1 2,2 2,3 2,4 2,5 2,6 2,7 2,8     255 255 255 255 255 255 128 128
3,1 3,2 3,3 3,4 3,5 3,6 3,7 3,8     255 255 128 255 255 255 128 128
4,1 4,2 4,3 4,4 4,5 4,6 4,7 4,8     255 255 128 128 255 255 255 128
5,1 5,2 5,3 5,4 5,5 5,6 5,7 5,8     255 255 128 128 128 255 255 128
6,1 6,2 6,3 6,4 6,5 6,6 6,7 6,8     255 255 128 255 255 255 255 255
7,1 7,2 7,3 7,4 7,5 7,6 7,7 7,8     255 255 128 255 255 255 255 255
8,1 8,2 8,3 8,4 8,5 8,6 8,7 8,8     255 255 255 255 255 255 255 255

An imager chip is a bunch of little light meters.
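In code, an image is exactly such a grid: a 2-D array indexed by (row, column)
whose entries are gray levels. A minimal sketch using NumPy (the sample values
are illustrative, mirroring the grid above):

```python
import numpy as np

# An image is a 2-D array: index (row, column), value = gray level (0-255).
image = np.full((8, 8), 255, dtype=np.uint8)  # start with an all-white 8x8 image
image[2, 2] = 128                             # darken the pixel at row 3, col 3
                                              # (NumPy indices are zero-based)

print(image[2, 2])   # 128 - the gray scale value at that grid location
print(image.shape)   # (8, 8) - vertical pixels x horizontal pixels
```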
Lens Mathematics
[Figure: 6 mm and 25 mm focal length lenses]

• Focal Length is the distance in millimeters from the
  optical center of a lens to the imaging sensor (when
  the lens is focused at infinity).
• Calculate Viewing Area
  – Working Distance from Camera to Object
  – Desired Field of View; Width or Height
  – Image Array Size: 1/3", ½", 2/3", etc.
  – Lens Size – Focal Length
Lens Calculations

C-mount provides a fixed standard distance to the imager.

The lens relationship (thin-lens approximation) ties the four parameters
together:

    Focal Length / Sensor Size = Working Distance / Field of View

You can solve for any one of the missing parameters if you know the
other three.
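A minimal sketch of that calculation in Python, using the thin-lens
approximation (all values in millimeters; the sample numbers are illustrative):

```python
def field_of_view(sensor_size_mm, working_distance_mm, focal_length_mm):
    """Approximate field of view from the thin-lens relationship:
    FOV / working distance = sensor size / focal length."""
    return sensor_size_mm * working_distance_mm / focal_length_mm

def focal_length(sensor_size_mm, working_distance_mm, fov_mm):
    """Solve the same relationship for the lens focal length."""
    return sensor_size_mm * working_distance_mm / fov_mm

# Example: 1/3" imager (~4.8 mm horizontal), 500 mm working distance, 25 mm lens
print(field_of_view(4.8, 500, 25))   # -> 96.0 mm wide field of view
print(focal_length(4.8, 500, 96))    # -> 25.0 mm lens needed for a 96 mm FOV
```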
Joint to Cartesian Space

• Robot kinematics is the study of the motion (kinematics) of robots. In a
  kinematic analysis the position, velocity and acceleration of all the links
  are calculated without considering the forces that cause this motion. [1]

• Joint angles (provided by encoders) and arm segment lengths are combined
  to render position and orientation.

[1] Reference: Wikipedia
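To make the joint-to-Cartesian idea concrete, here is a minimal sketch of
forward kinematics for a hypothetical two-joint planar arm (the link lengths
and angles are illustrative, not from any particular robot):

```python
import math

def forward_kinematics_2link(theta1_deg, theta2_deg, l1, l2):
    """Combine joint angles (from encoders) and arm segment lengths
    to render the end-of-arm position and orientation in Cartesian space."""
    t1 = math.radians(theta1_deg)
    t2 = math.radians(theta2_deg)
    x = l1 * math.cos(t1) + l2 * math.cos(t1 + t2)   # Cartesian X of the faceplate
    y = l1 * math.sin(t1) + l2 * math.sin(t1 + t2)   # Cartesian Y of the faceplate
    orientation = theta1_deg + theta2_deg            # tool orientation in the plane
    return x, y, orientation

# Two 400 mm links, joints at 30 deg and 45 deg
print(forward_kinematics_2link(30, 45, 400, 400))
```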


Understanding Cartesian Coordinates

• Consider the Robot's Face Plate Position with respect to a part (X, Y and Z).
• Now consider a plane pivoting around this point, rotating around X, Y and Z.
• This is part positioning with 6 degrees of freedom.

• Our ultimate goal:
  How can vision be used to guide the robot to the position of a part?
Position Relationships
• Start with a Cartesian coordinate system (X, Y, Z) for rendering position.
• R is the position of the platter relative to the room.
• F is the position of the furniture in the World (room) coordinate system.
• P is the position of the platter in the furniture frame, or furniture
  coordinate system.

• Now consider a table where adjacent legs are shorter, and a platter where
  one side is significantly tilted.

[Figure: a room (world frame) containing furniture (frame F) with a platter
on it (frame P); R is the platter's position in the room frame]
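This chain of frames composes by matrix multiplication: the platter's
position in the room is the furniture's frame times the platter's position
in the furniture frame, R = F : P. A minimal sketch using 4×4 homogeneous
transforms (all numbers are illustrative):

```python
import numpy as np

def frame(x, y, z, yaw_deg=0.0):
    """Build a 4x4 homogeneous transform: rotation about Z plus a translation."""
    t = np.radians(yaw_deg)
    T = np.eye(4)
    T[:2, :2] = [[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]]
    T[:3, 3] = [x, y, z]
    return T

F = frame(2000, 1000, 0, yaw_deg=90)   # furniture in the room (world) frame
P = frame(300, 150, 750)               # platter in the furniture frame

R = F @ P                              # platter in the room frame: R = F : P
print(R[:3, 3])                        # the platter's X, Y, Z in the room
```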
World and Tool Frames
 Two Key Contributors to the Robot's Position:
   User Frames
   Tool Frames

 The Primary Robot position coordinate system is referred to as the
  World Frame.
 Other frames can be created that are positioned relative to the World
  frame.

 Using kinematics, the World Frame, and the Tool Frame, the robot can
  be used to create positions and establish planes.

[Figure: top view of a robot showing the world frame (+X, +Y, +Z) at the
base and the tool frame (+X, +Z) at the faceplate]
Frames Important to Vision
• World frame - default frame of the robot
• User frame - user defined frame
• Tool frame - user defined frame

• Tool Center Point:
  • TCP
  • TOOL
  • UTOOL
  • UT
  • $MNUTOOL[1,tool_num]

• User Frame:
  • USER
  • UFRAME
  • UF
  • $MNUFRAME[1,frame_num]

[Figure: robot with the World Coordinate System at its base, a Tool Frame at
the faceplate, and a User Frame on the work surface]
Tool Frame and Programming

• When a point is recorded, it references both the Tool Frame and the
  User Frame.

[Figure: robot positional data expressed through the Tool Frame and User
Frame, both relative to the World Coordinate System]

• The Tool Frame is what the robot “looks at” when you ask it to do a
  Linear or Circular movement.
User Frame
• User Frames are based off of the World Frame.
• They can be taught at any location and any orientation.
• Positions are taught with respect to a User Frame (as well as a Tool
  Frame).

[Figure: robot positional data expressed through the Tool Frame and User
Frame, both relative to the World Coordinate System]
Robot and Tool Positions Relative to the Part

• The robot's position and orientation is reported with six degrees of
  freedom:
  – X, Y, Z, Pitch, Yaw, Roll
• The tool has a position and orientation, too.
• Robot and Tool frames are combined to find a position relative to the
  part.
  – This relationship is resolved before moving the robot to the part.

[Figure: robot frame R, tool frame T, and part frame P]
Simple Camera Calibration
• Place a grid with known spacing at the same height as the top of the
  parts to translate Pixels to Units of Measure
  – (Grid to Camera)
• Tie in the orientation of the camera to the grid
  – Which directions represent X and Y
• Tie in the relationship of the grid to the robot
  – (Grid to Robot)
• Now we can determine the part's position relative to the robot (see the
  sketch below).
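A minimal sketch of the grid-to-camera step: given pixel coordinates of grid
points with known real-world spacing, solve for the transform that translates
pixels to units of measure. The grid points and 20 mm spacing are illustrative
assumptions:

```python
import numpy as np

# Pixel coordinates of three grid intersections (found by the vision tool)
# and their known positions on the grid in mm (20 mm spacing, illustrative).
pixels = np.array([[412.0, 310.0], [512.3, 311.1], [410.9, 410.4]])
grid_mm = np.array([[0.0, 0.0], [20.0, 0.0], [0.0, 20.0]])

# Solve pixels -> mm as an affine transform: [px, py, 1] @ A = [X, Y]
ones = np.ones((3, 1))
A, *_ = np.linalg.lstsq(np.hstack([pixels, ones]), grid_mm, rcond=None)

def pixel_to_mm(px, py):
    """Translate a pixel location into grid (mm) coordinates."""
    return np.array([px, py, 1.0]) @ A

print(pixel_to_mm(412.0, 310.0))   # ~ [0, 0]: the grid origin
```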
Part Height Variation Problem

[Figure: the same part at two heights; a change ΔZ in part height shifts the
part's apparent X position in the 2D image]
Vision Calibration and the User Frame
In order for visual offset data to be useful to the robot, both vision
and the robot must recognize the same coordinate system.

This involves establishing a correspondence between the camera's
coordinate system and a robot user frame.

To accomplish this, a robot frame is taught that corresponds to a frame
used to calibrate vision.
Basic Cameras are 2D
• What aspects of a part's position relative to the robot can we
  determine solely from a 2D image?
• Variables that you may need to know:
  – Distance from the camera
  – Magnification of the lens
  – Size of the part
  – Calibration (e.g., pixels/mm)
  – Orientation of the camera in space
  – Orientation of the robot in space
• How does the distance away from the camera affect the part position
  calculation?
Image to Robot Relationship

In two-dimensional applications, the XY plane of the user frame specified
here should be parallel to the target work plane.

How do you compensate when this is not the case?
Vision to Robot Transformation Considerations

• Camera mounting style
  – Fixed position or Robot mounted camera
• Cycle time
• Size of part (FOV) vs. accuracy needed
• Part Presentation issues
  – In which axes is the part likely to move?
    • X, Y, Rotation, Z, Pitch and Yaw
  – Is the part consistent, or is its presentation consistent?
  – Is it possible to correlate position from different perspectives?
  – Can structured light be used to help identify location?
Tool Frame

• Origin is called the Tool Center Point or TCP
  – Defines the location where work is done

• Default location is the center of the faceplate

• Origin of the tool frame must be offset to a fixed point on the
  physical tool
• All offset measurements for the Tool Frame are relative to the origin
  of the Face Plate
Matrix Multiplication
If A is an n × m matrix and B is an m × p matrix, the matrix product A : B
is defined to be the n × p matrix whose entry in row i, column j is

    (A : B)ᵢⱼ = aᵢ₁b₁ⱼ + aᵢ₂b₂ⱼ + … + aᵢₘbₘⱼ

* Reference Source – Wikipedia, Matrix multiplication
Basic Matrix Math Continued

• Inverse Matrix
  – If A is a square matrix, there may be an inverse matrix A⁻¹ of A such
    that

        A : A⁻¹ = A⁻¹ : A = I

  – where I is the identity matrix of the same order.

* Reference Source – Wikipedia, Matrix multiplication
Example of Basic Robot Math

• The Standard Robot Equation relates the robot position R, the tool
  frame T, the part fixture frame F, and the part position P:

      R : T = F : P

• To isolate R (the robot's position) we perform algebraic manipulation
  of the frame matrices as follows:

      R : T : T⁻¹ = F : P : T⁻¹

  where T⁻¹ is the inverse of the tool.

• Since T : T⁻¹ = I,

      R : I = F : P : T⁻¹
      R = F : P : T⁻¹
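A minimal sketch of that manipulation with 4×4 homogeneous transforms in
NumPy (the frames are illustrative; a real system would build them from
taught positions):

```python
import numpy as np

def frame(x, y, z):
    """Illustrative 4x4 homogeneous transform with translation only."""
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

T = frame(0, 0, 150)        # tool frame: TCP 150 mm out from the faceplate
F = frame(800, 200, 0)      # part fixture frame in world coordinates
P = frame(25, -10, 40)      # part position within the fixture frame

# Standard Robot Equation:  R : T = F : P   =>   R = F : P : T^-1
R = F @ P @ np.linalg.inv(T)

print(R[:3, 3])             # faceplate position that puts the TCP on the part
```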
Guidance Summary

1. Robot's Knowledge of Calibration Frame (Used by the camera)
2. Camera Calibration Assigns the translation to be Used.
3. Vision Process is Used to locate the Part Position
   – Requires a Z height of the part relative to the frame of reference
   – Record a part reference position
4. Move the robot to pick the part and record this position.

 The Camera is used to find the part and calculate its position.
 The robot is moved to a location relative to the part with offsets
  applied, as sketched below.
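A minimal sketch of the offset step under those assumptions: the found part
pose is compared against the recorded reference pose, and the resulting
offset is applied to the taught pick position (all frames are illustrative
4×4 homogeneous transforms):

```python
import numpy as np

# Poses recorded at setup time (illustrative, all expressed in the shared
# vision/robot user frame):
part_ref = np.eye(4); part_ref[:3, 3] = [100, 50, 0]    # part at teach time
pick_ref = np.eye(4); pick_ref[:3, 3] = [100, 50, 120]  # taught pick position

# Pose reported by vision at run time: the part has shifted.
part_found = np.eye(4); part_found[:3, 3] = [130, 45, 0]

# Offset = how the part moved relative to its reference position.
offset = part_found @ np.linalg.inv(part_ref)

# Apply the same offset to the taught pick position.
pick_new = offset @ pick_ref
print(pick_new[:3, 3])   # -> [130, 45, 120]: the adjusted pick position
```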
Visual Tracking
[Figure: camera over a conveyor with randomly fed parts; a pulsecoder on the
conveyor feeds the offset calculation]

• 2D Line Tracking
  – X, Y location and angle orientation PLUS conveyor position (the
    tracking arithmetic is sketched below)
  – Queue Management
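A minimal sketch of that arithmetic: the part position found at image-snap
time is advanced along the conveyor by the encoder counts accumulated since
the snap. The scale factor and names are illustrative assumptions:

```python
MM_PER_COUNT = 0.05   # conveyor travel per pulsecoder count (from calibration)

def tracked_x(x_at_snap_mm, counts_at_snap, counts_now):
    """Advance the part's X by the conveyor travel since the image was taken.
    Assumes the conveyor moves along +X of the tracking frame."""
    travel_mm = (counts_now - counts_at_snap) * MM_PER_COUNT
    return x_at_snap_mm + travel_mm

# Vision found the part at X = 250 mm when the encoder read 120000 counts;
# by the time the robot acts, the encoder reads 131000 counts.
print(tracked_x(250.0, 120000, 131000))   # -> 800.0 mm downstream
```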
Basic 3D Robot Guidance Methods
• 2.5 D – For Rough Z approximation
• 2.5 D – With Structured Light Reference
• Single View 3D using Geometric Relationships
• Multiple laser or structured light pattern
triangulation methods
• Advanced 3D – (depth analysis)
2D Guidance with a Change in Z
• The image created by the camera is like looking through a cone.
• The ratio of Pixels to Units of Measure changes as you move within the
  cone.
• If the part's distance from the camera is not identical to when the
  camera was calibrated, finding the part's position accurately requires
  adjustment of the transformation (see the sketch below).
• How do you know the part height?
• What can be leveraged?
  – Part scale, height sensors
  – Lens mathematics
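A minimal sketch of that adjustment, following straight from the lens math:
the mm-per-pixel scale grows linearly with distance from the camera, so a
known part height rescales the calibration. The numbers are illustrative:

```python
# Calibration: the grid was 1000 mm from the camera, at 0.20 mm per pixel.
CAL_DISTANCE_MM = 1000.0
CAL_MM_PER_PIXEL = 0.20

def mm_per_pixel_at(part_distance_mm):
    """Scale changes linearly with distance inside the viewing cone."""
    return CAL_MM_PER_PIXEL * (part_distance_mm / CAL_DISTANCE_MM)

# A part sitting 100 mm higher is only 900 mm from the camera:
print(mm_per_pixel_at(900.0))   # -> 0.18 mm/pixel: features look bigger
```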
2D Single Camera – 2.5 D

Camera Image

Height change creates a subtle apparent size change.

Are you sure the part size is not different – creating the same effect?
Depalletizing

• Apparent Part Size can be used to calculate relative height.
• The height designates where in the calibration cone the part is.
• The transformation is adjusted relative to the frame of reference
  (see the sketch below).

[Figure: stacked pallet layers; the top part appears larger to the camera
than the bottom part]
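A minimal sketch of the height-from-scale idea: under the pinhole model,
apparent size is inversely proportional to distance from the camera, so
comparing a found part's size to its calibrated size gives its distance.
Values are illustrative:

```python
# At calibration the part was 1200 mm from the camera, 400 pixels wide.
CAL_DISTANCE_MM = 1200.0
CAL_WIDTH_PIXELS = 400.0

def distance_from_scale(width_pixels):
    """Pinhole model: apparent size is inversely proportional to distance."""
    return CAL_DISTANCE_MM * (CAL_WIDTH_PIXELS / width_pixels)

# A part on a higher pallet layer measures 480 pixels wide:
print(distance_from_scale(480.0))   # -> 1000 mm away, i.e. 200 mm higher
```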
The world is not flat…

Traditional cameras see a flat world – 2D and flat.

Robots work in the real world and must compensate for a part's position
with 6 degrees of freedom.
World Frame
• World frame has its origin at the intersection of axes 1 and 2.
• Aid to Remember Coordinate Relationships: the Right Hand Rule.

[Figure: world frame origin with +X, −X, +Y, −Y, +Z, −Z axes]
2D Robotic Assumptions
• 2D imaging systems can be used if:
  – The part always sits flat on a surface or fixture (no pitch or yaw
    changes)
  – The part is consistent in its size and shape
  – The tool is designed to compensate for any variation in height
    (and subsequent X, Y error)
• 2D is not a good solution when:
  – Parts are stacked and may be subject to tipping
  – Parts are randomly placed in a bin for picking
  – Parts enter the robot cell on a pallet that is damaged, or on a
    conveyor that wobbles
  – The application is a high accuracy assembly process, like hanging a
    door on an automobile
Example 3D Robot Applications

• Racking and Deracking
• Palletizing and Depalletizing
• Welding uneven surfaces
• Grinding and flash removal
• Machine load
• High accuracy assembly
• Parts on Hangers
• Picking Stacked parts
• Picking parts randomly located in bins
Laser Line/Structured Light

• The laser projects a light plane that appears as a light stripe on the
  work piece.
• The stripe's vertical position in the camera image determines Z (see
  the sketch below).
• The camera is at 90° and the laser at 45°.

[Figure: camera looking straight down at a stepped part; the 45° laser
stripe shifts where the surface height changes]
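A minimal sketch of the triangulation: with the camera looking straight down
(90°) and the laser projected at 45°, a surface that rises by Z shifts the
stripe sideways in the image by that same Z, so height falls out of the
stripe displacement. The image scale is an illustrative assumption:

```python
import math

MM_PER_PIXEL = 0.25   # image scale at the work surface (from calibration)
LASER_ANGLE_DEG = 45.0

def height_from_stripe_shift(shift_pixels):
    """Triangulate surface height from the laser stripe's sideways shift.
    Z = shift / tan(laser angle); at 45 deg the shift equals the height."""
    shift_mm = shift_pixels * MM_PER_PIXEL
    return shift_mm / math.tan(math.radians(LASER_ANGLE_DEG))

# The stripe jumps 80 pixels where it crosses onto the part:
print(height_from_stripe_shift(80))   # -> 20.0 mm tall feature
```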


Structured Light in The Real World

[Figures: hex wrenches with a laser line; a curved surface with a laser line]
A 2D Change of Perspective
Camera Image

• As part orientation changes in pitch and yaw, surface points converge
  or diverge.
Calibration – Two Plane Method

[Figure: the camera views the calibration grid at two planes separated by a
known Distance Moved]

• Requires either the robot to move the camera or the robot to move the
  grid.
• Renders greater vector accuracy.
• Helps improve lens math.
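Why two planes help, in a minimal sketch: a pixel that maps to a known point
on each plane defines a 3D sight ray, which single-plane calibration cannot
give you. The names and values are illustrative:

```python
import numpy as np

def sight_ray(point_on_plane1, point_on_plane2):
    """A pixel that lands on known points in both calibration planes defines
    the 3D ray along which any object imaged at that pixel must lie."""
    p1 = np.asarray(point_on_plane1, dtype=float)
    p2 = np.asarray(point_on_plane2, dtype=float)
    direction = p2 - p1
    return p1, direction / np.linalg.norm(direction)

# The same pixel maps to (10, 5, 0) on plane 1 and (12, 6, 100) on plane 2
# (robot-frame mm, with 100 mm moved between planes):
origin, direction = sight_ray([10, 5, 0], [12, 6, 100])
print(origin, direction)   # any part seen at this pixel lies on this ray
```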
Applying Geometric Relationships
• Identify fixed and reliable geometric
features (corners or holes)
• Apply Geometric Position Relationships
between features
• Compensate for Perspective
Geometric Relationships
• Start with a known shape.
• Extract feature point positions with respect to calibrated cameras.
• The part shape is assumed to be constant, although its position is not.
• Combine the camera position relationship with the found features to
  extract the new position (see the sketch below).
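One common way to realize this is a perspective-n-point solve: given the
part's known 3D feature geometry and where those features were found in a
calibrated camera's image, recover the part's pose. A minimal sketch using
OpenCV's solvePnP (all coordinates and intrinsics are illustrative):

```python
import numpy as np
import cv2

# Known shape: four corner features on the part, in the part's own frame (mm).
model_points = np.array([[0, 0, 0], [100, 0, 0], [100, 60, 0], [0, 60, 0]],
                        dtype=np.float64)

# Where those corners were found in the image (pixels).
image_points = np.array([[320, 240], [420, 238], [422, 300], [318, 302]],
                        dtype=np.float64)

# Calibrated camera intrinsics (focal length and optical center, illustrative).
camera_matrix = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]],
                         dtype=np.float64)
dist_coeffs = np.zeros(5)   # assume lens distortion already corrected

ok, rvec, tvec = cv2.solvePnP(model_points, image_points,
                              camera_matrix, dist_coeffs)
print(tvec.ravel())   # part position in the camera frame
print(rvec.ravel())   # part orientation (rotation vector)
```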
Advanced 3D
• Several Companies are advancing 3D point cloud
image generation
• Technologies include:
– Time of Flight Sensors
– Moiré Interferometry
– Structured Light
– Stereo Projection
– others…
• Processing renders part position by matching
features in 3D Space
• Translation of the found position with respect to
original position remains consistent to previous
robot math examples
3D Sensors – From Wikipedia

[Figure: FANUC 3D Area Sensor]

[Figure: "TOF Kamera 3D Gesicht" by Captaindistance – Own work. Licensed
under CC BY 3.0 via Wikimedia Commons –
http://commons.wikimedia.org/wiki/File:TOF_Kamera_3D_Gesicht.jpg#mediaviewer/File:TOF_Kamera_3D_Gesicht.jpg]
Bin Picking
Break Down of Vision Tasks
• Locate the Bin
• Locate Candidate Parts
• Move to pick Candidates without collision
• Remove parts without collision

Bin Avoidance
• Bin – Define the size and location of the bin
• Robot – Model the EOAT and set up checks to keep the robot and EOAT
  from contacting the bin (a minimal check is sketched below)
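A minimal sketch of such a check, assuming the bin is modeled as an
axis-aligned box and the EOAT as a set of sample points in the user frame
(all names and dimensions are illustrative):

```python
# Bin interior modeled as an axis-aligned box in the user frame (mm),
# shrunk by a safety clearance so the EOAT never grazes a wall.
BIN_MIN = (0.0, 0.0, 0.0)
BIN_MAX = (600.0, 400.0, 300.0)
CLEARANCE = 15.0

def eoat_clear_of_bin_walls(eoat_points):
    """True if every modeled EOAT point stays inside the bin interior
    minus the clearance (or above the bin opening)."""
    for x, y, z in eoat_points:
        if z > BIN_MAX[2]:
            continue   # above the bin opening: no wall to hit
        if not (BIN_MIN[0] + CLEARANCE <= x <= BIN_MAX[0] - CLEARANCE and
                BIN_MIN[1] + CLEARANCE <= y <= BIN_MAX[1] - CLEARANCE):
            return False
    return True

# Gripper modeled as two finger-tip points at a candidate pick pose:
print(eoat_clear_of_bin_walls([(120, 80, 40), (160, 80, 40)]))   # True
print(eoat_clear_of_bin_walls([(5, 80, 40), (45, 80, 40)]))      # False: too close
```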
Summary
• Machine vision has progressed significantly in the
last 10 to 15 years
• Advances in technology continue to provide new
capabilities
• Pay close attention to 3D enhancements for Robotic
Guidance Applications
Contact Information

Steven Prehn
Robotic Guidance, LLC
USA
Email: stevenprehn@roboticguidance.com
