Vehicle Around View System

Abstract

As drivers attach ever greater importance to safety, more and more electronic driver-assistance devices appear in automobiles, such as collision avoidance systems, parking sensors, and blind spot monitors. These devices provide intuitive visual information and let the driver see the area around the vehicle. In this work, we developed a surround view monitoring system that provides a realistic panorama and helps the driver become aware of blind spots.

Several omniview products are commercially available. They use four cameras to build a panorama, but fusing the overlapping region between two camera views is difficult. There are two common approaches to handling the overlapping region. One is to find an appropriate seam and divide the images along it; its disadvantage is that objects in the overlapping region may be cut off or missing. The other is image blending by a weighted sum of pixels, which may produce a ghost effect. To avoid both defects, we applied the blending method in our research and eliminated the ghost effect by deforming the 3D projection model, which makes the panorama more realistic. The weighted-sum blending step is sketched below.
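As a minimal illustration of the weighted-sum blending mentioned above (not the exact implementation used in this work), the sketch below combines two already-warped views with a linear alpha ramp; the array names and the weight map are hypothetical placeholders.

```python
import numpy as np

def blend_overlap(img_a, img_b, alpha):
    """Blend two warped camera images over their overlapping region.

    img_a, img_b : HxWx3 float arrays, already projected into a common view.
    alpha        : HxW weight map in [0, 1]; 1 keeps img_a, 0 keeps img_b.
    Pixels are combined as a weighted sum, which is simple but can produce
    ghosting when the two views disagree about where objects are.
    """
    alpha = alpha[..., np.newaxis]            # broadcast over color channels
    return alpha * img_a + (1.0 - alpha) * img_b

# Example: linear ramp across a 100-pixel-wide overlap band (placeholder data)
h, w = 480, 100
ramp = np.tile(np.linspace(1.0, 0.0, w), (h, 1))
left = np.random.rand(h, w, 3)                # stand-ins for warped camera views
right = np.random.rand(h, w, 3)
blended = blend_overlap(left, right, ramp)
```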

In the past, 3D panoramas were created by mapping texture images onto a sphere or cylinder. In our setting the parallax between cameras is large, so a ghost effect appears because the cameras do not share the same projection center. We use feature matching to obtain the positions of objects and deform the projection model to fit them. The main contributions of our work are feature matching under large parallax and deformation of the projection model. With this deformation technique, the proposed system eliminates the ghost effect in the overlapping region.
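The sketch below shows one way matched features between two neighboring camera views could be obtained, assuming an ORB detector with a ratio test in OpenCV; the matching strategy actually used for large parallax in this work may differ, and the projection-model deformation step itself is not shown.

```python
import cv2

def match_neighbor_views(img_left, img_right, ratio=0.75):
    """Detect and match features between two neighboring camera views.

    Returns lists of matched keypoint coordinates that could guide a
    deformation of the 3D projection model toward the matched objects.
    """
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img_left, None)
    kp2, des2 = orb.detectAndCompute(img_right, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    knn = matcher.knnMatch(des1, des2, k=2)

    # Lowe-style ratio test to reject ambiguous matches; under large
    # parallax additional geometric filtering would normally be needed.
    good = []
    for pair in knn:
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])

    pts_left = [kp1[m.queryIdx].pt for m in good]
    pts_right = [kp2[m.trainIdx].pt for m in good]
    return pts_left, pts_right
```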

Our system is built from camera calibration, feature matching under large parallax, projection model deformation, and image blending. In the implementation, we mounted four fish-eye cameras to capture images. By applying feature-based deformation to the projection model, guided by matching features detected in neighboring camera views, we eliminate the ghost effect in the overlapping region.
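A minimal sketch of the fish-eye undistortion that typically follows the calibration step, assuming OpenCV's fisheye camera model; the intrinsic matrix and distortion coefficients below are placeholder values, not those of the cameras used in this work.

```python
import cv2
import numpy as np

# Hypothetical intrinsics and distortion coefficients from a prior
# calibration (e.g. cv2.fisheye.calibrate on checkerboard images).
K = np.array([[400.0,   0.0, 640.0],
              [  0.0, 400.0, 360.0],
              [  0.0,   0.0,   1.0]])
D = np.array([[-0.05], [0.01], [0.0], [0.0]])   # fisheye model: k1..k4

def undistort_fisheye(frame, K, D):
    """Remove fish-eye distortion so the frame can be projected onto
    the 3D model, using OpenCV's equidistant fisheye model."""
    h, w = frame.shape[:2]
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(
        K, D, np.eye(3), K, (w, h), cv2.CV_16SC2)
    return cv2.remap(frame, map1, map2, interpolation=cv2.INTER_LINEAR)
```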

Results