How 360-Degree Bird's Eye Car Cameras Work

Anyone who has ever tried parking on a busy city street knows that drivers can use all the help they can get. Parking spaces are often tight, and a mistake can mean trading paint — and insurance information — with the vehicles on either side. Automakers keep coming up with more advanced technologies to relieve drivers' parking woes.

As Get My Parking reports, parking aids are a fairly new invention — they largely arrived on the market starting with the Toyota Prius in 2003. The tech kicked off with electromagnetic and ultrasonic park distance sensors mounted to the front and rear bumpers that let drivers play a game of hot or cold to get into tight spaces. The next key development was placing cameras on the exterior of the vehicle, usually on the rear bumper, to eliminate the annoying beeping often used with the old sensor technology. This feature effectively gave drivers eyes on the outside of the vehicle. 

The natural evolution of the rear parking camera — short of automated parking, obviously — is the bird's-eye surround view camera first introduced by Nissan in 2007. The 360-degree bird's-eye surround view camera gives a driver a view of their vehicle and its surroundings from above as if they were a few feet above its roof and looking down. Obviously, there isn't a GoPro on a selfie stick mounted to the top of a car or a drone permanently hovering above it, so how exactly does the tech work?

Cameras and a processor work together to help drivers

Texas Instruments is a manufacturer and supplier of one such system. The company explains in a white paper that its automotive advanced driver assistance system (ADAS) uses an array of 180-degree cameras, a system-on-chip (SoC) processor, and some clever programming to stitch together images from four to six cameras mounted on the front bumper, rear bumper, and sides of a vehicle. Each camera captures a super-wide view of its surroundings, and algorithms align the geometry of the overlapping imagery from adjacent cameras, effectively stitching it all together. Collecting and combining the data is just one part of the equation, though.
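
To get a feel for what the stitching stage involves, here is a minimal Python sketch, not Texas Instruments' actual code. It assumes each camera's frame has already been undistorted and warped onto a shared ground-plane canvas (the per-camera steps are sketched further down), and it simply averages pixels wherever adjacent cameras overlap; the function name and the pre-warped inputs are hypothetical.

```python
import numpy as np

def stitch_top_down(warped_views):
    """warped_views: list of (H, W, 3) arrays on a shared ground-plane canvas,
    with zeros wherever a given camera contributes no pixels."""
    canvas = np.zeros(warped_views[0].shape, dtype=np.float64)
    coverage = np.zeros(canvas.shape[:2], dtype=np.float64)
    for view in warped_views:
        mask = view.any(axis=2)              # pixels this camera actually sees
        canvas[mask] += view[mask]
        coverage[mask] += 1.0
    valid = coverage > 0
    canvas[valid] /= coverage[valid, None]   # average where cameras overlap
    return canvas.astype(np.uint8)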

Before stitching the images from the different cameras together, the SoC has to correct the distortion and artifacts produced by the fish-eye lenses, and apply a perspective transformation that makes each image look more like a shot captured from above. After correction, the system balances brightness, white balance, and color to compensate for any differences between the individual camera feeds. This ensures the final image that reaches the driver via the infotainment system looks cohesive, without obvious seams. The result is a seemingly magical look at the car's surroundings, but there is a gap where the vehicle should be.
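
Those per-camera corrections can be sketched with OpenCV, which exposes fish-eye undistortion and perspective warping directly. The intrinsic matrix K, the distortion coefficients D, and the ground-plane homography H below are placeholders standing in for values a real system would obtain through calibration, and the brightness-matching helper is a deliberately crude stand-in for the photometric balancing described above; both function names are made up for illustration.

```python
import cv2
import numpy as np

def correct_camera(frame, K, D, H, out_size=(800, 800)):
    # 1. Undo the fish-eye lens distortion using the calibrated intrinsics.
    undistorted = cv2.fisheye.undistortImage(frame, K, D, Knew=K)
    # 2. Perspective transformation: re-project the ground plane so the
    #    view looks like it was captured from directly above.
    return cv2.warpPerspective(undistorted, H, out_size)

def match_brightness(view, reference_mean):
    # 3. Crude photometric balancing: scale the view so its average
    #    brightness matches a reference taken from a neighboring camera.
    gain = reference_mean / max(float(view.mean()), 1e-6)
    return np.clip(view.astype(np.float64) * gain, 0, 255).astype(np.uint8)
```

A production system would match colors channel by channel and only within the overlap regions, but the idea is the same: make every camera's contribution look like it came from one lens before the seams are blended.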

That gap is filled by superimposing an image of the vehicle into the generated video feed. The system is not plug-and-play, and each vehicle design requires some research and development to tune the algorithm's parameters. Especially with larger vehicles, it's not always easy to judge scale when the surroundings are shown on a screen inside the car, so manufacturers often include perimeter lines and guidelines that show the projected path based on the orientation of the steering wheel as well.
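
The overlay stage can be sketched as follows: paste a stored top-down image of the vehicle into the blind spot at the center of the composite, then trace a guideline whose curvature follows the steering angle. The arc below comes from a simple bicycle-model approximation, and every dimension is an arbitrary pixel value chosen for illustration, not any manufacturer's actual tuning.

```python
import cv2
import numpy as np

def overlay_vehicle_and_guides(canvas, car_sprite, steering_deg,
                               wheelbase_px=220, px_per_step=4):
    h, w = canvas.shape[:2]
    ch, cw = car_sprite.shape[:2]
    # Paste the stored top-down vehicle image into the gap at the center.
    y0, x0 = (h - ch) // 2, (w - cw) // 2
    canvas[y0:y0 + ch, x0:x0 + cw] = car_sprite
    # Trace the projected path as an arc from a simple bicycle steering model.
    angle = np.radians(steering_deg)
    x, y, heading = w / 2.0, float(y0), -np.pi / 2   # start at the front bumper, pointing up
    pts = []
    for _ in range(60):
        x += px_per_step * np.cos(heading)
        y += px_per_step * np.sin(heading)
        heading += px_per_step * np.tan(angle) / wheelbase_px
        pts.append((int(x), int(y)))
    cv2.polylines(canvas, [np.array(pts, dtype=np.int32)], False, (0, 255, 0), 2)
    return canvas
```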
