Robust Lane Detection and Tracking
Prasanth Jeevan
Esten Grotli
Motivation

Autonomous driving
Driver assistance (collision avoidance, more precise driving directions)
Some Terms

Lane detection: draw the boundaries of a lane in a single frame
Lane tracking: uses temporal coherence to track boundaries across a frame sequence
Vehicle orientation: position and orientation of the vehicle within the lane boundaries
Goals of our lane tracker

Recover lane boundaries for straight or curved lanes in a suburban environment
Recover the orientation and position of the vehicle within the detected lane boundaries
Use temporal coherence for robustness
Starting with lane detection

Extended the lane detection work of Lopez et al. (2005):
  Ridgel feature
  Hyperbola lane model
  RANSAC for model fitting
  Real-time
Our extension: temporal coherence for lane tracking
The Setup

Data: University of Sydney (Berkeley-Sydney Driving Team)
  640x480, grayscale, 24 fps
  Suburban area of Sydney
Lane Model: Hyperbola
  2 lane boundaries
  4 parameters: 2 for vehicle position and orientation, 2 for lane width and curvature
Features: Ridgels
  Pick out the center line of lane markers
  More robust than simple gradient vectors and edges
Fitting: RANSAC
  Robustly fit the lane model to ridgel features
Lane Model

Assumes a flat road with constant curvature
L and K are the lane width and road curvature
θ and x0 are the vehicle’s orientation and position
φ is the pitch of the camera, assumed to be fixed
Lane Model

v is the image row of a lane boundary
uL and uR are the image columns of the left and right lane boundaries, respectively
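The slides omit the model equation itself. Under the flat-road, constant-curvature assumption above, a commonly used derivation gives the hyperbolic boundary shape; the camera height H, focal lengths f_u and f_v, principal column u_c, and horizon row v_h below are introduced here for illustration only and are not part of the slides, and the exact parameterization in Lopez et al. may differ.

With the look-ahead distance $z$ projecting to image row $v$ via $v - v_h \approx f_v H / z$, and the boundary's lateral offset modeled as $x_{L,R}(z) \approx (x_0 \pm L/2) + \theta z + \tfrac{K}{2} z^2$, the image columns are

$$
u_{L,R}(v) \;\approx\; u_c + f_u\left[\frac{(x_0 \pm L/2)\,(v - v_h)}{f_v H} \;+\; \theta \;+\; \frac{K f_v H}{2\,(v - v_h)}\right]
$$

a pair of hyperbolas in $(v - v_h)$ that share the orientation and curvature terms and differ only through the $\pm L/2$ offset; the fixed camera pitch φ enters only through the horizon row $v_h$.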
Ridgel Feature

Center line of elongated high-intensity structures (lane markers)
Originally proposed for use in rigid registration of CT and MRI head volumes
Ridgel Feature

Recovers the dominant gradient orientation at each pixel
Invariant under monotonic grey-level transforms (shadows) and rigid movements of the image
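The slides do not spell out the ridgeness operator. The sketch below follows the general recipe used in ridge-based lane-marking detectors (smooth, take the gradient, normalize it, and look at the negative divergence of the normalized field); the function names, scales, and threshold are illustrative choices, not values from the slides.

```python
import numpy as np
from scipy import ndimage

def ridgeness(image, sigma_d=1.0, sigma_i=2.0, eps=1e-6):
    """Rough ridgeness map: high along the centre line of bright, elongated
    structures such as lane markers. sigma_d / sigma_i (derivative and
    integration scales) are illustrative choices, not values from the slides."""
    img = image.astype(np.float64)
    # Gradient of the Gaussian-smoothed image (rows = y, columns = x).
    gy = ndimage.gaussian_filter(img, sigma_d, order=(1, 0))
    gx = ndimage.gaussian_filter(img, sigma_d, order=(0, 1))
    mag = np.hypot(gx, gy) + eps
    # Unit gradient field: its direction is (approximately) preserved under
    # increasing grey-level transforms, which gives the shadow robustness
    # mentioned in the slides.
    nx, ny = gx / mag, gy / mag
    # The negative divergence of the unit field peaks where the gradients on
    # both sides of a bright stripe point inward, i.e. on the marker centre line.
    div = ndimage.gaussian_filter(nx, sigma_i, order=(0, 1)) + \
          ndimage.gaussian_filter(ny, sigma_i, order=(1, 0))
    return -div

def extract_ridgels(image, threshold=0.5):
    """Candidate ridgels as (u, v, orientation) rows; threshold is illustrative."""
    r = ridgeness(image)
    vs, us = np.nonzero(r > threshold)
    gy = ndimage.gaussian_filter(image.astype(np.float64), 1.0, order=(1, 0))
    gx = ndimage.gaussian_filter(image.astype(np.float64), 1.0, order=(0, 1))
    theta = np.arctan2(gy[vs, us], gx[vs, us])
    return np.column_stack([us, vs, theta]).astype(np.float64)
```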
Fitting with RANSAC

Need a minimum of four ridgels to solve for L, K, θ, and x0
Robust to clutter (outliers)
Fitting with RANSAC

Error function:
  Distance measure based on the number of pixels between the feature and the boundary
  Difference in orientation between the ridgel and the closest lane boundary point
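A minimal sketch of the RANSAC loop described above, assuming the (u, v, orientation) ridgel layout from the earlier sketch. The slides do not give the closed-form minimal solver or the exact error formula, so `fit_model` and `model_error` are hypothetical caller-supplied placeholders, and the iteration count and inlier threshold are illustrative.

```python
import numpy as np

def ransac_lane_fit(ridgels, fit_model, model_error,
                    n_min=4, n_iters=200, inlier_thresh=2.0, rng=None):
    """Generic RANSAC loop for the 4-parameter lane model.

    ridgels     : (N, 3) array of (u, v, orientation) features
    fit_model   : caller-supplied solver mapping a minimal sample of 4 ridgels
                  (or all inliers) to (L, K, theta, x0)
    model_error : caller-supplied per-ridgel error combining pixel distance to
                  the predicted boundary and orientation difference, as the
                  slides describe
    """
    rng = rng or np.random.default_rng()
    best_model = None
    best_inliers = np.zeros(len(ridgels), dtype=bool)
    for _ in range(n_iters):
        idx = rng.choice(len(ridgels), size=n_min, replace=False)
        model = fit_model(ridgels[idx])      # 4 ridgels -> (L, K, theta, x0)
        if model is None:                    # degenerate sample
            continue
        inliers = model_error(model, ridgels) < inlier_thresh
        if inliers.sum() > best_inliers.sum():
            best_model, best_inliers = model, inliers
    if best_model is not None:
        best_model = fit_model(ridgels[best_inliers])   # refit on all inliers
    return best_model, best_inliers
```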
Temporal Coherence

At 24 fps the lane boundaries in sequential frames are highly correlated
Much of the clutter can be removed more intelligently based on this coherence
It does not make sense to use global (whole-image) fixed thresholds for processing a (slowly) varying scene
Classifying and removing ridgels

Using the previous lane boundary (sketched below):
  Dynamically classify left and right ridgels per row by image gradient comparison
  “Far left” and “far right” ridgels are removed
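A minimal sketch of the gating step. The slides classify ridgels per row by image gradient comparison; as a simpler stand-in, the sketch below classifies by proximity to the previous frame's boundaries and drops ridgels outside an acceptance band. The boundary callables and band width are illustrative assumptions.

```python
import numpy as np

def gate_ridgels(ridgels, u_left_prev, u_right_prev, band=30.0):
    """Keep only ridgels near the previous frame's lane boundaries and tag
    each kept ridgel as belonging to the left or right boundary.

    ridgels                    : (N, 3) array of (u, v, orientation)
    u_left_prev, u_right_prev  : vectorized callables mapping image rows v to
                                 the previous frame's boundary columns
    band                       : half-width of the acceptance band in pixels
                                 (an illustrative value, not from the slides)
    """
    u, v = ridgels[:, 0], ridgels[:, 1]
    d_left = np.abs(u - u_left_prev(v))
    d_right = np.abs(u - u_right_prev(v))
    keep = np.minimum(d_left, d_right) < band   # drop "far left"/"far right" ridgels
    is_left = d_left <= d_right                 # nearer boundary wins
    return ridgels[keep], is_left[keep]
```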
Velocity Measurements

An optical encoder provides the vehicle’s velocity
A model for vehicle motion updates the lane model parameters θ and x0 for the next frame
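The slides do not give the motion model equations. Below is a minimal constant-speed, constant-curvature prediction step, with dt taken from the 24 fps frame rate; the sign conventions and the form of the update are assumptions, not the authors' equations.

```python
import math

def predict_lane_state(x0, theta, K, speed, dt=1.0 / 24.0):
    """Predict the vehicle-dependent lane parameters for the next frame from
    the measured speed (a hedged sketch, not the slides' exact model)."""
    d = speed * dt                      # distance travelled between frames
    x0_next = x0 + d * math.sin(theta)  # lateral offset drifts with heading
    theta_next = theta - K * d          # heading relative to the curving lane
    return x0_next, theta_next
```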
Results, original algorithm

[Video clip omitted from transcript]

Results, algorithm with temporal coherence

[Video clip omitted from transcript]
Conclusion

More robust by incorporating temporal coherence, but still needs work
Theoretical speed-up from pruning ridgel features
The ridgel feature is robust
The lane model assumptions may not hold on non-highway roads
Future Work

Implement in C, possibly using OpenCV
Cluster ridgels together based on location
Possibly work with the Berkeley-Sydney Driving Team to use other sensors (LIDAR, IMU, etc.) to make this more robust
Acknowledgements

Allen Yang
Dr. Jonathan Sprinkle
University of Sydney
Professor Kosecka
Important works reviewed/considered

Zhou et al. 2006
  Particle filter and Tabu search
  Hyperbolic lane model
  Sobel edge features
Zu Kim 2006
  Particle filtering and RANSAC
  Cubic spline lane model
  No vehicle orientation/position estimation
  Template image matching for features