
    3D Flash Lidar Mapping and Gesture Recognition

    Range-imaging Time-of-Flight (TOF) camera technology, also called 3D flash lidar, is being adopted by many roboticists. Unlike traditional scanning lidar, which builds up depth information point by point, flash lidar, as its name implies, emits a single light pulse to capture an entire scene in three dimensions, no scanning required.
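
    The measurement behind each pixel is straightforward: the sensor times how long the emitted pulse takes to return, and since light travels at a known speed, that round-trip time converts directly to distance (distance = c × t / 2). The sketch below shows that per-pixel conversion; the 2×2 frame of round-trip times is a made-up example, since real sensors deliver timing or phase data through their own SDKs.

        # Convert per-pixel round-trip times from a flash lidar frame into depths.
        # The frame of nanosecond timings below is a hypothetical example; real
        # sensors usually hand you raw timing or phase data through their SDK.
        import numpy as np

        SPEED_OF_LIGHT = 299_792_458.0  # meters per second

        def tof_to_depth(round_trip_ns):
            """Depth in meters for each pixel: distance = c * t / 2."""
            return SPEED_OF_LIGHT * (round_trip_ns * 1e-9) / 2.0

        # Example: a tiny 2x2 "frame" of round-trip times in nanoseconds.
        frame_ns = np.array([[13.3, 13.4],
                             [20.0, 20.1]])
        print(tof_to_depth(frame_ns))  # roughly 2 m in the top row, 3 m below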

    Other advantages that 3D flash lidar has over standard lidar are its compact size, its ease of use, and its centimeter-level accuracy. It can image through heavy dust, smoke, and fog, at night, and in bright sunlight.

    The individual components of a TOF camera are a lens, an integrated light source, a sensor that captures the image information, and an interface. (More on sensors in just a bit.) For our purposes here, I am going to stick to just these basics; if you want to dig deeper into TOF cameras and sensors, plenty of more detailed material is available.

    Another significant advantage is that TOF sensors are relatively inexpensive. However, you cannot just slap a camera on a drone, robot, or piece of equipment, hand it to the end user and say, “Good luck!” You need to create a complete system that includes software that translates the data collected and presents it in a way the end user can understand.
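
    As one concrete example of that translation layer, the sketch below back-projects a raw depth frame into a 3D point cloud using a simple pinhole-camera model. The focal lengths and image center used here are made-up placeholders; in a real system they come from the sensor's calibration data.

        # Sketch: turn a depth frame (in meters) into an N x 3 point cloud.
        # The intrinsics below (focal lengths, principal point) are assumed
        # placeholders; real values come from the camera's calibration.
        import numpy as np

        FX, FY = 200.0, 200.0   # focal lengths in pixels (assumed)
        CX, CY = 160.0, 120.0   # principal point (assumed, for a 320x240 sensor)

        def depth_to_point_cloud(depth):
            """Back-project each pixel's depth into camera-frame XYZ points."""
            h, w = depth.shape
            us, vs = np.meshgrid(np.arange(w), np.arange(h))
            x = (us - CX) * depth / FX
            y = (vs - CY) * depth / FY
            points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
            return points[points[:, 2] > 0]  # drop pixels with no return

        # Example: a synthetic flat "wall" two meters in front of the camera.
        cloud = depth_to_point_cloud(np.full((240, 320), 2.0))
        print(cloud.shape)  # (76800, 3)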

    If you are using the sensors in a gesture recognition application, your customer will likely have to integrate your solution into an existing, sophisticated computer system, where it will have to communicate with many other components. So while the sensor may be inexpensive, the complete solution may demand a significant investment from your customer in cost, time, and training, as well as the infrastructure necessary to make it work.

    Moreover, as useful as TOF sensors are, they often cannot get the job done on their own. As you will see in our use case examples below, TOF sensors are often part of a team of sensors, each performing a function uniquely suited to its capabilities.

    3D Flash Lidar Applications

    For the most part, TOF applications break down into two main categories: gesture recognition and non-gesture applications. Gesture recognition is concerned with fast response time, while non-gesture applications, such as mapping and altitude determination, focus on accuracy.

    Automotive (Gesture Recognition)

    eyeSight, a company headquartered in Israel, has developed a solution for automakers that includes artificial intelligence, IR and TOF sensors, and software to enhance driver safety. The TOF camera’s role is that of gesture recognition. Instead of taking their hands off the wheel and eyes off the road to operate the vehicle’s infotainment system, the driver can use simple gestures to answer a call or play their favorite music. The TOF camera also performs driver identification to adjust the seat, mirrors, and temperature preferences automatically.

    The system’s IR sensor takes over the task of monitoring the driver, and if he or she looks away or falls asleep, the system will send the appropriate warning or engage the brakes if needed. Not only is the eyeSight system complex, but it must also operate within the vehicle’s intricate computer network.
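
    To make the idea concrete, here is a deliberately simple sketch of swipe detection from a sequence of depth frames. It is illustrative only and not eyeSight’s algorithm: it just tracks where the near-range pixels (presumably the hand) sit horizontally and calls a large enough left or right shift a swipe. The range and travel thresholds are assumed values.

        # Illustrative sketch only -- not eyeSight's algorithm. It detects a
        # crude left/right swipe by tracking where the "near" pixels (the hand)
        # sit horizontally across a sequence of depth frames.
        import numpy as np

        HAND_RANGE_M = 0.6    # assume anything closer than 0.6 m is the hand
        SWIPE_PIXELS = 80     # minimum horizontal travel to count as a swipe

        def hand_centroid_x(depth):
            """Column coordinate of the hand, or None if nothing is in range."""
            ys, xs = np.nonzero((depth > 0) & (depth < HAND_RANGE_M))
            return float(xs.mean()) if xs.size else None

        def classify_swipe(frames):
            xs = [x for x in (hand_centroid_x(f) for f in frames) if x is not None]
            if len(xs) < 2:
                return "no gesture"
            travel = xs[-1] - xs[0]
            if travel > SWIPE_PIXELS:
                return "swipe right"   # e.g. next track
            if travel < -SWIPE_PIXELS:
                return "swipe left"    # e.g. previous track
            return "no gesture"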

    Aside from gaming, where the Kinect pretty much corners the market, or flashy PowerPoint presentations, it’s exciting to think about ways TOF sensors and cameras can be used to enhance safety through gesture recognition. For example, how might you use this type of technology to make products more accessible for people with arthritis?

    Drone Flight (Non-Gesture)

    Winemakers must continuously monitor the health of their vines, because rogue bacteria and fungi can quickly spread throughout the vineyard, wiping out an entire crop of grapes. French company Chouette has developed a drone system to help winemakers monitor vineyard health far more efficiently than walking up and down the rows of vines.

    The Chouette drone is equipped with a TeraRanger One distance measurement sensor based on infrared TOF technology, along with multispectral camera sensors. Chouette uses the TeraRanger’s wide field of view as a precision altimeter to achieve a smooth, consistent flight. Not only does that allow the drone to sustain a long, productive flight time, but it also enables the multispectral camera sensors to collect better data. That data is then translated by software into easy-to-read images and detailed information about the health of the vines.
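
    To get a feel for how a single range reading feeds altitude control, here is a minimal altitude-hold sketch built around a proportional-plus-derivative loop. It is purely illustrative, not Chouette’s flight controller: the gains are made-up values, and read_range_m() and send_thrust_command() are stand-ins for whatever interfaces the rangefinder driver and flight stack actually expose.

        # Minimal altitude-hold sketch (illustrative, not Chouette's controller).
        # read_range_m() and send_thrust_command() are placeholders for the real
        # rangefinder and flight-stack interfaces; the PD gains are assumed
        # values you would have to tune on actual hardware.
        import time

        TARGET_ALT_M = 3.0
        KP, KD = 0.8, 0.3      # assumed proportional / derivative gains
        LOOP_DT = 0.05         # 20 Hz control loop

        def read_range_m():
            """Placeholder for the TOF rangefinder driver call."""
            raise NotImplementedError("wire this to the sensor's API")

        def send_thrust_command(adjust):
            """Placeholder for the flight stack's thrust interface."""
            print(f"thrust adjustment: {adjust:+.2f}")

        def altitude_hold():
            prev_error = 0.0
            while True:
                error = TARGET_ALT_M - read_range_m()
                derivative = (error - prev_error) / LOOP_DT
                send_thrust_command(KP * error + KD * derivative)
                prev_error = error
                time.sleep(LOOP_DT)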

    How could you improve on existing drone products if you were able to maintain consistent altitude control not only outdoors, but indoors as well?

    Space Mapping (Non-Gesture)

    Researchers at NASA’s Langley Research Center are working with 3D flash lidar, Doppler lidar, and laser altimeter sensors to carry out autonomous landings on the moon and, eventually, Mars. The laser altimeter and flash lidar get to work before the approach phase, at altitudes above 15 km. While the laser altimeter collects data on vehicle position and altitude, the flash lidar rapidly maps the surface to confirm the vehicle is where it’s supposed to be and that the site is still safe to land on.

    As the vehicle makes its approach (500–1,000 m above the surface), the Doppler lidar takes charge of precision navigation via velocity and distance data, while the flash lidar continues to map the terrain and identify hazardous features. Consider it the equivalent of a back-seat driver warning, “No, not there; there’s a giant rock right there, park it over here. No, not there, that is a crater. Just over here, next to that slope.”
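
    The hazard-identification step can be pictured with a toy example. The sketch below, which is not NASA’s actual algorithm, flags cells of a lidar-derived elevation grid whose local relief exceeds what a lander could tolerate; the relief threshold is an assumed value.

        # Toy hazard map (not NASA's algorithm): flag cells of a lidar-derived
        # elevation grid whose local relief exceeds an assumed safe limit.
        import numpy as np

        MAX_RELIEF_M = 0.3   # assumed tolerable height variation within 3x3 cells

        def hazard_map(elevation):
            """True where the 3x3 neighborhood's height range exceeds the limit."""
            h, w = elevation.shape
            hazards = np.zeros((h, w), dtype=bool)
            for r in range(1, h - 1):
                for c in range(1, w - 1):
                    patch = elevation[r - 1:r + 2, c - 1:c + 2]
                    hazards[r, c] = (patch.max() - patch.min()) > MAX_RELIEF_M
            return hazards

        # Example: flat terrain with one boulder-sized bump at the center.
        terrain = np.zeros((10, 10))
        terrain[5, 5] = 0.5
        print(hazard_map(terrain).sum())  # prints 9: the 3x3 block around the bump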

    Some other uses NASA is exploring for the 3D-imaging flash lidar are automatic spacecraft rendezvous and docking, as well as autonomous rover and robot guidance, collision avoidance, and mobility operations. I wonder if they have considered gesture control for the astronauts driving the rovers.

    Conclusion

    Flash lidar certainly has its advantages, yet it’s even more powerful when combined with other sensors, hardware, and software to create an entire solution. When you are developing your own radical solution, you want to consider how it will integrate with a more extensive system and keep in mind that it has to be user-friendly. Otherwise, you could end up with a great idea that sits on a shelf.

     

    ELE Times Research Desk | https://www.eletimes.com
