By Linda Conlin, Pro to Pro Managing Editor

Machines such as cars and robots don’t have eyes, but we expect them to “see.” How can that happen? Researchers are applying lessons learned from decades of perfecting eye-imaging technologies to tomorrow’s autonomous-system sensors, such as those found in self-driving cars.

Let’s start with cars. Today’s self-driving cars use a combination of three sensing and imaging technologies: radar, Light Detection and Ranging (LIDAR), and cameras. Radar sends out radio waves that bounce off surfaces hundreds of yards away and can detect an object’s size and speed. However, radar produces only very low-resolution images and so can’t identify objects. LIDAR scans the environment by firing millions of laser pulses per second, which reflect off object surfaces and return to a receiver, creating a 3D model of the car’s surroundings. But LIDAR is limited by weather conditions such as fog or dust. Self-driving cars use cameras to see in high resolution. Lenses placed around the vehicle provide wide-angle views of close surroundings and narrower views of the distance.
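Both radar and LIDAR are time-of-flight sensors: they infer distance from how long an emitted pulse takes to bounce back. A minimal sketch of that arithmetic (illustrative only, not a real sensor driver):

```python
# Speed of light in a vacuum, meters per second
C = 299_792_458.0

def time_of_flight_distance(round_trip_seconds: float) -> float:
    """Distance to a reflecting surface, given a pulse's round-trip time.

    The pulse travels out and back, so the one-way distance is half
    the total path: d = c * t / 2.
    """
    return C * round_trip_seconds / 2.0

# A pulse returning after ~667 nanoseconds came from roughly 100 m away.
d = time_of_flight_distance(667e-9)
```

The same relation underlies both sensors; what differs is the wavelength (radio vs. laser light), which drives the resolution trade-offs described above.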

Now to optical coherence tomography (OCT). Developed in 1991, OCT is a noninvasive imaging technology used to obtain high-resolution cross-sectional images of the retina. It is similar to ultrasound but uses long-wavelength, near-infrared light instead of sound waves. These light waves reflect off different depths within the eye to construct a depth profile, while a laterally scanning light beam builds up a 3D image of the eye. OCT delivers high resolution because it is based on light rather than sound or radio waves. An optical beam is directed at the tissue, and the small portion of light that reflects from sub-surface structures is collected. OCT can build clear 3D images of thick samples by rejecting scattered background light while collecting light directly reflected from the surfaces being examined.
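In spectral-domain OCT, a reflector’s depth shows up as a fringe frequency in the measured interference spectrum, and a Fourier transform converts that spectrum into a depth profile (an “A-scan”). The toy sketch below assumes a single ideal reflector and arbitrary units; it is meant only to show the spectrum-to-depth step, not a real instrument’s processing chain:

```python
import numpy as np

n = 1024
# Wavenumber samples across the light source's bandwidth (arbitrary units)
k = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)

# One reflector at "depth" 100 produces cosine fringes in the spectrum:
depth_true = 100
spectrum = 1.0 + np.cos(k * depth_true)

# The Fourier transform of the spectrum is the depth profile (A-scan)
a_scan = np.abs(np.fft.rfft(spectrum))
a_scan[0] = 0.0                # suppress the DC (non-interference) term
depth_est = int(np.argmax(a_scan))   # peak position recovers the depth
```

Scanning the beam laterally and stacking A-scans side by side is what yields the cross-sectional and 3D images described above.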

How can OCT technology help a self-driving car? Researchers at Duke University’s Pratt School of Engineering may have the answer. While OCT devices profile microscopic structures up to several millimeters deep, robotic 3D vision systems only need to locate the surfaces of much larger objects, and for cars, more speed and range are needed. To accomplish this, the researchers narrowed the range of frequencies used by OCT and looked only for the peak signal generated from the surfaces of objects. The result was much greater imaging range and speed than traditional LIDAR, fast and accurate enough to capture the details of moving objects in real time. This technology could prove essential to the safe operation of the fully autonomous vehicles on the horizon, and the researchers predict it can be applied to robots and other automated systems.
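The key simplification described above is that a ranging sensor does not need the full depth profile an OCT scan provides; it only needs the single strongest return, which marks an object’s surface. A hedged sketch of that idea (the function name, bin size, and profile values are illustrative, not from the Duke system):

```python
import numpy as np

def surface_range(a_scan: np.ndarray, bin_size_m: float) -> float:
    """Range to a surface: location of the peak reflection in a depth profile.

    Instead of interpreting every depth bin, we keep only the strongest
    return, trading sub-surface detail for ranging speed.
    """
    return float(np.argmax(a_scan)) * bin_size_m

# Example: a noisy depth profile with one strong surface reflection
rng = np.random.default_rng(0)
profile = rng.random(1024) * 0.1   # weak background clutter
profile[250] = 1.0                 # strong return from an object's surface

distance = surface_range(profile, bin_size_m=0.05)  # 250 bins * 5 cm = 12.5 m
```

Reducing each scan to one number per beam direction is what makes the technique fast enough to track moving objects, at the cost of the layered detail a clinical OCT image would retain.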