
Transformations Recap

Basic 2D Transformations

Rotation and scaling:

$$
R(\theta)=\begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix}
\qquad
S=\begin{bmatrix} s_x & 0 & 0 \\ 0 & s_y & 0 \\ 0 & 0 & 1 \end{bmatrix}
$$

Translation and shear (in x and in y):

$$
T=\begin{bmatrix} 1 & 0 & d_x \\ 0 & 1 & d_y \\ 0 & 0 & 1 \end{bmatrix}
\qquad
Sh_x=\begin{bmatrix} 1 & e_x & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}
\qquad
Sh_y=\begin{bmatrix} 1 & 0 & 0 \\ e_y & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}
$$
Order of Transformations

[Figure: the same original shape transformed two ways — scaled then rotated vs. rotated then scaled — gives different results.]

Affine Transformation
 Consider any matrix of the form:

$$
\begin{bmatrix} a_1 & a_2 & a_3 \\ a_4 & a_5 & a_6 \\ 0 & 0 & 1 \end{bmatrix}
$$

 This is known as the affine transformation
 It can be constructed by concatenating any of the basic transformations
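As a quick illustration (a NumPy sketch, not part of the slides; the function names are my own), concatenating basic transformations always yields a matrix whose last row stays [0 0 1], i.e. an affine transformation:

```python
import numpy as np

# Sketch: concatenating basic homogeneous 2D transformations.
# The helper names (rotation, scaling, translation) are illustrative.

def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1.0]])

def scaling(sx, sy):
    return np.array([[sx, 0, 0], [0, sy, 0], [0, 0, 1.0]])

def translation(dx, dy):
    return np.array([[1, 0, dx], [0, 1, dy], [0, 0, 1.0]])

# Translate, then rotate, then scale (the rightmost matrix acts first).
A = scaling(2, 2) @ rotation(np.pi / 2) @ translation(1, 0)

print(A[2])                           # last row is still [0, 0, 1]
print(A @ np.array([0.0, 0.0, 1.0]))  # (0,0) -> (1,0) -> (0,1) -> (0,2)
```

Because matrix multiplication does not commute, reordering the factors generally produces a different affine matrix, which is the order-of-transformations point above.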
Recovering Best Affine
Transformation
 Given two images with unknown
transformation between them…

 Compute the values for [a1 … a6]

Recovering Best Affine Transformation
 Suppose we are given some
correspondences
Recovering Best Affine
Transformation

[Figure: Image 1, Image 2, and the overlap of points after recovering the transformation.]
We can try to find the set of parameters for which the error is minimum

Recovering Best Affine Transformation
 Least Squares Error Solution
 Is the solution (i.e. set of parameters a1 … a6) such that the sum of the squared errors over the corresponding points is as small as possible
 No other set of parameters has a lower error


Least Squares Error Solution
$$
\begin{bmatrix} x_j^* \\ y_j^* \\ 1 \end{bmatrix}
=
\begin{bmatrix} a_1 & a_2 & a_3 \\ a_4 & a_5 & a_6 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} x'_j \\ y'_j \\ 1 \end{bmatrix}
$$

Here $(x'_j, y'_j)$ are the points in one image, $(x_j, y_j)$ their correspondences in the other, and $(x_j^*, y_j^*)$ the transformed points.

$$
E(a_1, a_2, a_3, a_4, a_5, a_6) = \sum_{j=1}^{n} (x_j^* - x_j)^2 + (y_j^* - y_j)^2
$$

$$
E(\mathbf{a}) = \sum_{j=1}^{n} \left( (a_1 x'_j + a_2 y'_j + a_3 - x_j)^2 + (a_4 x'_j + a_5 y'_j + a_6 - y_j)^2 \right)
$$

Least Squares Error Solution


$$
E(\mathbf{a}) = \sum_{j=1}^{n} \left( (a_1 x'_j + a_2 y'_j + a_3 - x_j)^2 + (a_4 x'_j + a_5 y'_j + a_6 - y_j)^2 \right)
$$

 Minimize E w.r.t. a
 Compute each ∂E/∂a_i, set it equal to zero, and solve the resulting equations simultaneously:

$$
\begin{bmatrix}
\sum_j x'^2_j & \sum_j x'_j y'_j & \sum_j x'_j & 0 & 0 & 0 \\
\sum_j x'_j y'_j & \sum_j y'^2_j & \sum_j y'_j & 0 & 0 & 0 \\
\sum_j x'_j & \sum_j y'_j & \sum_j 1 & 0 & 0 & 0 \\
0 & 0 & 0 & \sum_j x'^2_j & \sum_j x'_j y'_j & \sum_j x'_j \\
0 & 0 & 0 & \sum_j x'_j y'_j & \sum_j y'^2_j & \sum_j y'_j \\
0 & 0 & 0 & \sum_j x'_j & \sum_j y'_j & \sum_j 1
\end{bmatrix}
\begin{bmatrix} a_1 \\ a_2 \\ a_3 \\ a_4 \\ a_5 \\ a_6 \end{bmatrix}
=
\begin{bmatrix}
\sum_j x'_j x_j \\ \sum_j y'_j x_j \\ \sum_j x_j \\ \sum_j x'_j y_j \\ \sum_j y'_j y_j \\ \sum_j y_j
\end{bmatrix}
$$
Least Squares Error Solution

$A\mathbf{x} = B$
 The solution is:
$\mathbf{x} = A^{-1}B$

Recovering Best Affine Transformation (alternate way)

$$
\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix}
=
\begin{bmatrix} a_1 & a_2 & a_3 \\ a_4 & a_5 & a_6 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} x \\ y \\ 1 \end{bmatrix}
$$

Rearranged so that the unknowns form the vector:

$$
\begin{bmatrix} x & y & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & x & y & 1 \end{bmatrix}
\begin{bmatrix} a_1 \\ a_2 \\ a_3 \\ a_4 \\ a_5 \\ a_6 \end{bmatrix}
=
\begin{bmatrix} x' \\ y' \end{bmatrix}
$$
Recovering Best Affine
Transformation
 Given three pairs of corresponding points, we get 6 equations

$$
\begin{bmatrix}
x_1 & y_1 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & x_1 & y_1 & 1 \\
x_2 & y_2 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & x_2 & y_2 & 1 \\
x_3 & y_3 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & x_3 & y_3 & 1
\end{bmatrix}
\begin{bmatrix} a_1 \\ a_2 \\ a_3 \\ a_4 \\ a_5 \\ a_6 \end{bmatrix}
=
\begin{bmatrix} x_1' \\ y_1' \\ x_2' \\ y_2' \\ x_3' \\ y_3' \end{bmatrix}
$$

$A\mathbf{x} = B \quad\Rightarrow\quad \mathbf{x} = A^{-1}B$
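A minimal numeric check of the 6×6 system (a sketch with made-up points; the generating transform x' = 2x + 1, y' = 3y − 2 is assumed only so the answer is known in advance):

```python
import numpy as np

# Sketch: recovering the six affine parameters exactly from three
# point correspondences. Data are illustrative, not from the slides.

# Three source points (x, y) and their images (x', y') under the
# known-for-checking transform x' = 2x + 1, y' = 3y - 2.
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
dst = np.array([[1.0, -2.0], [3.0, -2.0], [1.0, 1.0]])

# Build the 6x6 matrix A and right-hand side B, one row pair per point.
A = np.zeros((6, 6))
B = np.zeros(6)
for j, ((x, y), (xp, yp)) in enumerate(zip(src, dst)):
    A[2 * j] = [x, y, 1, 0, 0, 0]
    A[2 * j + 1] = [0, 0, 0, x, y, 1]
    B[2 * j] = xp
    B[2 * j + 1] = yp

a = np.linalg.solve(A, B)   # [a1, a2, a3, a4, a5, a6]
print(a)
```

With exactly three non-collinear correspondences, A is square and invertible, so the solve is exact.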

Recovering Best Affine Transformation
 What if we knew four corresponding points?
 We should be able to utilize the additional information

$$
\begin{bmatrix}
x_1 & y_1 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & x_1 & y_1 & 1 \\
x_2 & y_2 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & x_2 & y_2 & 1 \\
x_3 & y_3 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & x_3 & y_3 & 1 \\
x_4 & y_4 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & x_4 & y_4 & 1
\end{bmatrix}
\begin{bmatrix} a_1 \\ a_2 \\ a_3 \\ a_4 \\ a_5 \\ a_6 \end{bmatrix}
=
\begin{bmatrix} x_1' \\ y_1' \\ x_2' \\ y_2' \\ x_3' \\ y_3' \\ x_4' \\ y_4' \end{bmatrix}
$$
Recovering Best Affine
Transformation
 Ax = B (the same 8×6 system as on the previous slide)
 Cannot take the inverse directly, since A is not square
 Also, 4 correspondences may not be exactly represented by an affine transformation [Why?]

Pseudo Inverse
 For an over-constrained linear system $A\mathbf{x} = B$, A has more rows than columns
 Multiply both sides by $A^T$: $A^T A \mathbf{x} = A^T B$
 $A^T A$ is a square matrix with as many rows as x has entries, so we can take its inverse:
$$\mathbf{x} = (A^T A)^{-1} A^T B$$
 The pseudo-inverse gives the least squares error solution!
Recovering Best Affine
Transformation
 In general, we may be given n correspondences
 Concatenate the n correspondences in A and B
 A is 2n×6
 B is 2n×1
 Solve using least squares:
$$\mathbf{x} = (A^T A)^{-1} A^T B$$
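The over-constrained case can be sketched as follows (made-up data; the "true" parameter vector exists only so the recovery can be checked):

```python
import numpy as np

# Sketch: least-squares recovery of the affine parameters from n
# correspondences via the pseudo-inverse x = (A^T A)^{-1} A^T B.

rng = np.random.default_rng(0)
n = 20
src = rng.uniform(-1, 1, size=(n, 2))
true = np.array([1.0, 0.5, 2.0, -0.5, 1.0, 3.0])   # a1..a6, for checking
dst_x = true[0] * src[:, 0] + true[1] * src[:, 1] + true[2]
dst_y = true[3] * src[:, 0] + true[4] * src[:, 1] + true[5]

# Stack 2n equations: A is 2n x 6, B is 2n x 1.
A = np.zeros((2 * n, 6))
A[0::2, 0:2], A[0::2, 2] = src, 1.0
A[1::2, 3:5], A[1::2, 5] = src, 1.0
B = np.empty(2 * n)
B[0::2], B[1::2] = dst_x, dst_y

a = np.linalg.inv(A.T @ A) @ A.T @ B   # pseudo-inverse solution
print(a)
```

In practice `np.linalg.lstsq(A, B)` computes the same least-squares solution with better numerical behavior than forming and inverting AᵀA explicitly.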

2D Displacement Models
 Translation:
$$ x' = x + b_1, \qquad y' = y + b_2 $$
 Rigid:
$$ x' = x\cos\theta - y\sin\theta + b_1, \qquad y' = x\sin\theta + y\cos\theta + b_2 $$
 Affine:
$$ x' = a_1 x + a_2 y + b_1, \qquad y' = a_3 x + a_4 y + b_2 $$
 Projective:
$$ x' = \frac{a_1 x + a_2 y + b_1}{c_1 x + c_2 y + 1}, \qquad y' = \frac{a_3 x + a_4 y + b_2}{c_1 x + c_2 y + 1} $$

Difference b/w affine and rigid? Affine does not have the orthonormality constraint.
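A small sketch of the projective model (parameter values are made up for illustration); setting c1 = c2 = 0 makes the denominator 1 and recovers the affine model as a special case:

```python
# Sketch: the projective displacement model; affine is the case c1 = c2 = 0.
def projective(x, y, a1, a2, b1, a3, a4, b2, c1=0.0, c2=0.0):
    w = c1 * x + c2 * y + 1.0          # shared denominator
    return (a1 * x + a2 * y + b1) / w, (a3 * x + a4 * y + b2) / w

print(projective(1.0, 1.0, 1, 0, 2, 0, 1, 3))          # affine case: w = 1
print(projective(1.0, 1.0, 1, 0, 2, 0, 1, 3, c1=1.0))  # projective: w = 2
```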
Stratification

 Projective (8 dof):
$$ \begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{bmatrix} $$
Invariants: concurrency, collinearity, order of contact (intersection, tangency, inflection, etc.), cross ratio

 Affine (6 dof):
$$ \begin{bmatrix} a_{11} & a_{12} & t_x \\ a_{21} & a_{22} & t_y \\ 0 & 0 & 1 \end{bmatrix} $$
Invariants: parallelism, ratio of areas, ratio of lengths on parallel lines (e.g. midpoints), linear combinations of vectors (centroids)

 Similarity (4 dof):
$$ \begin{bmatrix} s r_{11} & s r_{12} & t_x \\ s r_{21} & s r_{22} & t_y \\ 0 & 0 & 1 \end{bmatrix} $$
Invariants: ratios of lengths, angles

 Euclidean (3 dof):
$$ \begin{bmatrix} r_{11} & r_{12} & t_x \\ r_{21} & r_{22} & t_y \\ 0 & 0 & 1 \end{bmatrix} $$
Invariants: lengths, areas

Ref: Marc Pollefeys

2D Affine Warping
Warping
 Inputs:
 Image X
 Affine Transformation A = [a1 a2 b1 a3 a4 b2]T
 Output:
 Generate X’ such that X’ = AX
 Obvious Process:
 For each pixel in X
 Apply transformation
 At that location in X’, put the same color as at the original
location in X
 Problems?

Warping
 This will leave holes…
 Because not every pixel maps to an integer location!
 Reverse Transformation
 For each integer location in X’
 Apply inverse mapping
 Problem?
 Will not result in answers at integer locations,
in general
 Bilinearly interpolate from 4 neighbors
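The reverse-transformation procedure can be sketched as follows (pure NumPy; nearest-neighbor sampling keeps the sketch short, where bilinear interpolation from the 4 neighbors would be the better choice):

```python
import numpy as np

# Sketch of reverse-transformation warping: for each output pixel,
# apply the inverse affine map and sample the source image.

def inverse_warp(img, A):
    """img: 2D array; A: 3x3 affine matrix mapping source -> output."""
    h, w = img.shape
    A_inv = np.linalg.inv(A)
    out = np.zeros_like(img)
    for yo in range(h):
        for xo in range(w):
            xs, ys, _ = A_inv @ np.array([xo, yo, 1.0])
            xi, yi = int(round(xs)), int(round(ys))  # nearest neighbor
            if 0 <= xi < w and 0 <= yi < h:
                out[yo, xo] = img[yi, xi]
    return out

# Translating by (1, 0): column 0 of the output has no source pixel,
# so it stays 0; the rest is the shifted image, with no holes inside.
img = np.arange(16.0).reshape(4, 4)
shifted = inverse_warp(img, np.array([[1, 0, 1], [0, 1, 0], [0, 0, 1.0]]))
print(shifted)
```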
Interpolation…
 In 1D: samples at x = 3, 4, 5; what is the value at x = 4.3?
 Use y = mx + c
 Here m = 1, c = −2
 Substitute x = 4.3 ⇒ y = 2.3
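The same 1D example checked with `np.interp` (assuming the samples lie on y = x − 2, so f(4) = 2 and f(5) = 3):

```python
import numpy as np

# Linear interpolation between the two neighboring samples of x = 4.3.
y = np.interp(4.3, [4.0, 5.0], [2.0, 3.0])
print(y)  # 2.3, matching y = 1 * 4.3 - 2
```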

2D Bilinear Interpolation
 The four nearest grid points of $(x, y)$ are
$$ (\underline{x}, \underline{y}),\ (\overline{x}, \underline{y}),\ (\underline{x}, \overline{y}),\ (\overline{x}, \overline{y}) $$
where
$$ \underline{x} = \lfloor x \rfloor, \quad \underline{y} = \lfloor y \rfloor, \quad \overline{x} = \underline{x} + 1, \quad \overline{y} = \underline{y} + 1 $$
Bilinear Interpolation

$$
f'(x, y) = (1-\varepsilon_x)(1-\varepsilon_y)\, f(\underline{x}, \underline{y})
+ \varepsilon_x (1-\varepsilon_y)\, f(\overline{x}, \underline{y})
+ (1-\varepsilon_x)\varepsilon_y\, f(\underline{x}, \overline{y})
+ \varepsilon_x \varepsilon_y\, f(\overline{x}, \overline{y})
$$

where $\varepsilon_x = x - \underline{x}$ and $\varepsilon_y = y - \underline{y}$ are the fractional distances to the lower grid point.
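The formula translates directly into code (a sketch; `f` is indexed as f[y, x] and the query point must lie strictly inside the grid so that all four neighbors exist):

```python
import numpy as np

# Sketch of bilinear interpolation: weight the four neighbors by the
# complementary fractional distances eps_x = x - floor(x), eps_y = y - floor(y).

def bilinear(f, x, y):
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    ex, ey = x - x0, y - y0
    return ((1 - ex) * (1 - ey) * f[y0, x0] +
            ex * (1 - ey) * f[y0, x0 + 1] +
            (1 - ex) * ey * f[y0 + 1, x0] +
            ex * ey * f[y0 + 1, x0 + 1])

f = np.array([[0.0, 1.0],
              [2.0, 3.0]])       # f[y, x]
print(bilinear(f, 0.5, 0.5))     # centre: average of the four samples
```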
