Stereoscopic Imaging for Slow-Moving Autonomous Vehicle
Senior Capstone Project
Final Presentation
Bradley University ECE Department
By: Alexander Norton
Advisor: Dr. Huggins
April 26, 2012
Presentation Outline
• Project Overview
• Stereoscopic Imaging Overview
• Previous Work
• Functional and System Description
• Completed Work
• Results
• Suggestions for Future Work
Project Overview
• The goal of this project was to design a stereoscopic imaging system, using two low-cost digital cameras, that could calculate depth information from sets of images; this depth information could then be used to navigate an autonomous vehicle
• Two modes of operation: calibration mode and run mode

Stereoscopic Imaging Overview
• Two horizontally aligned cameras, separated by a fixed distance, take a pair of images at the same time
• Calibrate the cameras so that they act like pinhole cameras
• Determine corresponding pixel groups
• Find the disparity (offset in the x coordinate) between the corresponding pixel groups
• Use triangulation to find the distance to each pixel group (see the relation sketched below)
• This depth information can be used to create a 3-D terrain map
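For a rectified pair of horizontally aligned cameras, the triangulation step reduces to the standard depth-from-disparity relation below; the symbols (focal length f in pixels, baseline B, disparity d) are the usual ones and are not quoted from the slides.

```latex
% Depth from disparity for a rectified stereo pair (standard relation, assumed
% rather than taken from the slides): f = focal length in pixels, B = baseline
% between the two cameras, d = x_l - x_r = disparity of a matched pixel group.
Z = \frac{f\,B}{d}, \qquad X = \frac{x_l\,Z}{f}, \qquad Y = \frac{y\,Z}{f}
```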
Previous Work
• BirdTrak (Brian Crombie and Matt Zivney, 2003)
• Bradley Rover (Steve Goggins, Rob Scherbinski, Pete Lange, 2005)
• NavBot (Adam Beach, Nick Wlaznik, 2007)
• SVAN (John Hessling, 2010)

System Description
• System block diagram
• Subsystem block diagrams
  • Cameras
  • Computer
  • Software
• Modes of operation
  • Calibration mode
  • Run mode
System Block Diagram
[Block diagram: the cameras subsystem (camera 1 and camera 2 image-capture signals) feeds the computer subsystem (CPU); the CPU also takes user input and outputs movement instructions and a 3-D map displayed on screen]
Calibration Mode
• Position the calibration rig in front of the cameras
• Estimate the relative position and orientation of the stereo camera “heads”
• Take images of the calibration rig in several orientations
• Use OpenCV to compute the intrinsic and extrinsic parameters for the stereo cameras
• Compute the rectification transformation that makes the camera optical axes parallel
Run Mode
Necessity of Calibration
• Calibration produces the rotation and translation matrices needed to rectify sets of images
• Rectification makes the stereo correspondence calculation more accurate and more efficient
• Failing to calibrate the cameras is the reason past groups have failed to get accurate results and a useful system

Completed Work
Calibration mode software
• Input is a list of sets of chessboard images, plus the number of corners along the length and width of the chessboard
• Reads in the left and right image pairs, finds the chessboard corners, and sets the object and image points for those images in which the whole chessboard could be found
• Given this list of detected points on the chessboard images, the code calls stereoCalibrate() to calibrate the cameras (a sketch of this step appears after this list)
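A minimal sketch of this step using the OpenCV Python bindings (the capstone code itself is not reproduced in the slides and targeted the OpenCV release of its era; the file names, board size, and square size below are assumptions):

```python
import cv2
import numpy as np

board_size = (9, 6)      # inner corners along the chessboard's width and height (assumed)
square_size = 0.025      # chessboard square size in meters (assumed)

# 3-D "object points" for the planar chessboard, reused for every image pair.
objp = np.zeros((board_size[0] * board_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2) * square_size

obj_points, left_points, right_points = [], [], []
image_size = None
for i in range(1, 15):                                  # hypothetical file numbering
    left = cv2.imread("left%02d.jpg" % i, cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right%02d.jpg" % i, cv2.IMREAD_GRAYSCALE)
    ok_l, corners_l = cv2.findChessboardCorners(left, board_size)
    ok_r, corners_r = cv2.findChessboardCorners(right, board_size)
    if ok_l and ok_r:            # keep only the pairs where the whole board was found
        obj_points.append(objp)
        left_points.append(corners_l)
        right_points.append(corners_r)
        image_size = left.shape[::-1]

# Calibrate each camera on its own, then hold those intrinsics fixed for the stereo pass.
_, M1, D1, _, _ = cv2.calibrateCamera(obj_points, left_points, image_size, None, None)
_, M2, D2, _, _ = cv2.calibrateCamera(obj_points, right_points, image_size, None, None)

# stereoCalibrate() yields the camera matrices M, distortion vectors D, rotation R,
# translation T, essential matrix E, and fundamental matrix F described on the next slides.
rms, M1, D1, M2, D2, R, T, E, F = cv2.stereoCalibrate(
    obj_points, left_points, right_points, M1, D1, M2, D2, image_size,
    flags=cv2.CALIB_FIX_INTRINSIC)
```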
Calibration Mode Software
• This calibration yields the camera matrix M and the distortion vector D for each of the two cameras; it also yields the rotation matrix R, the translation vector T, the essential matrix E, and the fundamental matrix F
• The accuracy of the calibration is assessed by the software using epipolar geometry (one common form of this check is sketched below)
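The slides do not show how the epipolar check is coded; the sketch below is one common approach, similar to OpenCV's stereo calibration sample: undistort the detected corners, use F to compute each point's epipolar line in the other image, and report the average point-to-line distance in pixels.

```python
import cv2

def average_epipolar_error(left_points, right_points, M1, D1, M2, D2, F):
    """Average distance (in pixels) of detected corners from their epipolar lines.

    A perfect calibration satisfies x_r^T F x_l = 0 for every matched pair, so
    every point would lie exactly on the line computed from its match."""
    total, count = 0.0, 0
    for pts_l, pts_r in zip(left_points, right_points):
        # Undistort the corners and re-project them back into pixel coordinates.
        pl = cv2.undistortPoints(pts_l, M1, D1, P=M1)
        pr = cv2.undistortPoints(pts_r, M2, D2, P=M2)
        lines_r = cv2.computeCorrespondEpilines(pl, 1, F)   # lines in the right image
        lines_l = cv2.computeCorrespondEpilines(pr, 2, F)   # lines in the left image
        for (xl, yl), (xr, yr), (a1, b1, c1), (a2, b2, c2) in zip(
                pl.reshape(-1, 2), pr.reshape(-1, 2),
                lines_r.reshape(-1, 3), lines_l.reshape(-1, 3)):
            total += abs(a1 * xr + b1 * yr + c1) + abs(a2 * xl + b2 * yl + c2)
            count += 2
    return total / count
```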

Calibration Mode Software
• The code then computes the rectification maps using stereoRectify() (sketched below)
• The rectification maps are used when processing the sets of images obtained in run mode
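A sketch of that step, continuing from the calibration sketch above (M1, D1, M2, D2, R, T, and image_size come from stereoCalibrate()); the alpha=0 cropping choice is an assumption:

```python
import cv2

# R1/R2 rotate each camera's image plane, P1/P2 are the new projection matrices,
# and Q is the disparity-to-depth reprojection matrix.
R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(
    M1, D1, M2, D2, image_size, R, T, alpha=0)

# Per-camera pixel lookup maps, saved and reused on every image pair taken in run mode.
map1_l, map2_l = cv2.initUndistortRectifyMap(M1, D1, R1, P1, image_size, cv2.CV_16SC2)
map1_r, map2_r = cv2.initUndistortRectifyMap(M2, D2, R2, P2, image_size, cv2.CV_16SC2)
```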

Calibration Mode Software Matrices
• Rotation matrix R and translation vector T: extrinsic parameters; they put the right camera in the same plane as the left camera, which makes the two image planes coplanar
• Fundamental matrix F: relates points on the image plane of one camera, in pixels, to points on the image plane of the other camera, in pixels

Calibration Mode Software Matrices
• Essential matrix E: relates the physical location of a point P as seen by the left camera to the location of the same point as seen by the right camera
• Camera matrix M and distortion vector D: intrinsic parameters, calculated and used within the function
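For reference, the standard relations behind these matrices (implied by the slides but not written out there): here M_l and M_r are the left and right camera matrices, p_l and p_r are matched points in physical (normalized) coordinates, x_l and x_r are the same points in pixels, and [T]_x is the skew-symmetric matrix built from T.

```latex
E = [T]_{\times} R, \qquad
F = M_r^{-\top}\, E \, M_l^{-1}, \qquad
p_r^{\top} E \, p_l = 0, \qquad
x_r^{\top} F \, x_l = 0
```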

Completed Work
Run mode software
• Uses the matrices obtained from calibration
• Rectifies each set of images to correct for distortions
• Computes and displays the disparity map (a sketch of this pipeline follows)
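A sketch of the run-mode pipeline, continuing from the rectification sketch above (map1_l, map2_l, map1_r, map2_r come from initUndistortRectifyMap()). The file names and block-matching parameters are assumptions, and the slides do not say which OpenCV correspondence matcher the project used; StereoBM is shown here as one standard choice.

```python
import cv2

left = cv2.imread("run_left.jpg", cv2.IMREAD_GRAYSCALE)     # hypothetical file names
right = cv2.imread("run_right.jpg", cv2.IMREAD_GRAYSCALE)

# Rectification: after remap(), corresponding points lie on the same image row.
left_rect = cv2.remap(left, map1_l, map2_l, cv2.INTER_LINEAR)
right_rect = cv2.remap(right, map1_r, map2_r, cv2.INTER_LINEAR)

# Block-matching correspondence; numDisparities must be a multiple of 16.
matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left_rect, right_rect).astype("float32") / 16.0

# Scale into [0, 1] for display; invalid matches come back negative and clip to black.
cv2.imshow("disparity", disparity / 64.0)
cv2.waitKey(0)
```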

Calibration Mode Results
Output showing found chessboard corners
Calibration Mode Results
Output rectified chessboard images
Calibration Mode Results
Command window showing calibration results
Run Mode Results
Output rectified set of images after cameras have been calibrated
Run Mode Results
Output disparity map of rectified set of images
Theoretical Run Mode Results
One image from a set of sample images
Disparity map obtained from the set of sample images
Results
• Wrote working code using OpenCV libraries and functions
• Successfully grabbed images
• Some outputs of the calibration are correct
• Unable to accurately compute the disparity map of an image with a simple target in front of a plain background

Possible Errors
• Incorrect calibration results
• The cameras could have internal flaws that cannot be corrected with sufficient accuracy
• The correspondence calculation could have errors

Suggestions for Future Work
• Investigate the mathematics underlying the OpenCV functions
• Develop methods to find and correct for errors that result from incorrect calibrations and/or correspondence computations

Questions??