Demystifying LiDAR: A Detailed Guide to the Great Wavelength Debate
It is widely recognized that advanced driver assistance systems (ADAS) and autonomous driving (AD) can only succeed with effective sensing of the environment around the vehicle to feed the algorithms for autonomous navigation. Given the absolute reliance on sensing in critical situations, multiple sensor modalities are used, with their merged data complementing one another and providing redundancy. This allows each technology to play to its strengths and yields a better combined solution.
The three modalities expected to predominate among the sensors used in vehicles for ADAS and AD are image sensors, radar, and LiDAR (Light Detection and Ranging). Each of these sensors has its own strengths, and together they can constitute a complete sensor suite whose fused data enables autonomous perception algorithms to make decisions: the ability to deliver color, intensity, velocity, and depth for each point in the scene.
Of these three main modalities, LiDAR is the newest to be commercialized for consumer use, although the concept of using light to measure distance dates back decades. The automotive LiDAR market is expected to grow dramatically, from $39 million in 2020 to $1.75 billion in 2025, according to Yole Développement, driven by the proliferation of autonomous systems requiring the full sensor suite. The opportunity is so great that well over 100 companies are working on LiDAR technology, with cumulative investment in these companies exceeding $1.5 billion by 2020, and that was before the flood of SPAC-led initial public offerings by more than a handful of LiDAR companies that began at the end of 2020. But when so many companies are working on a single technology, with some making fundamentally different choices such as the wavelength of light used (the most prominent examples being 905 nm and 1550 nm), it is inevitable that there will be a winning technology and consolidation, as we have seen time and time again, be it Ethernet for networking or VHS for video.
When you look at the users of LiDAR technology, the automotive manufacturers as well as the companies that design and build autonomous robotic vehicles for transporting people and goods, the most important thing on their minds is their requirements. Ultimately, these companies want vendors to provide low-cost LiDAR sensors with a high degree of reliability while meeting performance specifications for range and low-reflectivity object detection. While all engineers have strong views, these companies are likely to be agnostic about how the technology is implemented if a vendor can meet the performance and reliability requirements at the right price. And that leads to the fundamental debate this article aims to help settle: which wavelength will prevail for automotive LiDAR applications?
An Introduction to LiDAR
To begin to answer this question, it is necessary to understand the anatomy of a LiDAR system, of which there are different architectures. Coherent LiDAR, one type of which is called frequency-modulated continuous wave (FMCW), mixes a transmitted laser signal with reflected light to calculate the range and velocity of objects. FMCW offers some advantages, but it remains relatively rare compared to the more common approach, direct time-of-flight (dToF) LiDAR. This implementation measures the distance to an object by timing how long a very short pulse of light emitted from a light source takes to be reflected by the object and return to the detector. It uses the speed of light to directly calculate the distance to the object with the simple mathematical formula relating time, speed, and distance. A typical dToF LiDAR system has six main hardware functions, although the choice of wavelength primarily affects the transmit and receive functions.
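The dToF calculation described above can be sketched in a few lines. This is an illustrative simplification (a real system must also correct for timing offsets, pulse shape, and detector delays); the function name is ours, not from any particular vendor's API:

```python
# Illustrative sketch of the direct time-of-flight (dToF) range calculation.
C = 299_792_458.0  # speed of light in m/s

def dtof_range_m(round_trip_time_s: float) -> float:
    """Distance to a target from the measured round-trip time of a light pulse.

    The pulse travels to the object and back, so the one-way distance is
    half of (speed of light x elapsed time).
    """
    return C * round_trip_time_s / 2.0

# A pulse returning after roughly 667 nanoseconds corresponds to a target
# about 100 m away.
print(round(dtof_range_m(667e-9), 1))
```

Note how quickly the numbers shrink: at 100 m range the entire round trip takes under a microsecond, which is why dToF systems need very fast, precise timing electronics.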
Table 1 shows a list of LiDAR manufacturers ranging from known Tier 1 automotive suppliers to startups in all regions of the world. Based on market reports and public information, the vast majority of these companies operate their LiDARs at near-infrared (NIR) wavelengths, as opposed to short-wave infrared (SWIR) wavelengths. Also, while SWIR-focused vendors working on FMCW are limited to those wavelengths, most of those with a direct time-of-flight implementation have the option of building a system at NIR wavelengths if they so choose, while still being able to leverage much of their existing IP around functions such as beam steering and signal processing.
Since the majority, but not all, of these manufacturers have chosen NIR wavelengths, it is worth considering how they arrived at this decision and what the implications are. At the heart of the discussion is the basic physics of the properties of light and of the semiconductor materials that make up the components used in LiDAR.
Photons fired by the laser in a LiDAR system, intended to be reflected by objects and received by the detector, must compete with ambient photons from the sun. Looking at the solar radiation spectrum and taking atmospheric absorption into account, there are "dips" in irradiance at certain wavelengths that reduce the number of photons present as noise for the system. At 905 nm, there is about three times more sunlight than at 1550 nm, which means an NIR system has to deal with more noise that can interfere with the sensor. But that is just one of the factors to consider when choosing a wavelength for a LiDAR system.
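To put that 3x figure in perspective: under a simple background-limited, shot-noise model (an idealizing assumption, not a full detector model), noise grows with the square root of the ambient photon count, so three times more solar background does not mean three times more noise:

```python
import math

def relative_shot_noise(ambient_photon_ratio: float) -> float:
    """Relative noise level implied by a ratio of ambient (background) photons.

    Assumes idealized shot-noise-limited detection, where the noise scales
    as the square root of the number of background photons collected.
    """
    return math.sqrt(ambient_photon_ratio)

# ~3x more solar background at 905 nm vs. 1550 nm implies roughly
# sqrt(3) = ~1.7x more shot noise, all else being equal.
print(round(relative_shot_noise(3.0), 2))
```

In practice the comparison is further complicated by detector sensitivity, optical filtering, and laser power limits at each wavelength, which the rest of this article goes on to discuss.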
The components responsible for detecting photons in a LiDAR system are various types of photodetectors, so it is important to explain why they may be made of different semiconductor materials depending on the wavelength to be detected. In a semiconductor, a band gap separates the valence and conduction bands. Photons provide the energy necessary for electrons to overcome this band gap and make the semiconductor conductive, creating a photocurrent. The energy of each photon is inversely related to its wavelength, and the band gap of a semiconductor determines the wavelengths to which it is sensitive. This is why different semiconductor materials are needed depending on the wavelength of the light to be detected. Silicon, the most common and cheapest semiconductor to manufacture, is sensitive to visible and NIR wavelengths up to about 1000 nm. To detect longer wavelengths in the SWIR range, more exotic III-V semiconductor alloys such as InGaAs can be fabricated, capable of detecting light from 1000 nm to 2500 nm.
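The relationship between band gap and detectable wavelength can be made concrete. A photon is only absorbed if its energy E = hc/λ exceeds the band gap, which gives a cutoff wavelength of roughly 1240/E_g nanometers when E_g is in electron-volts. The band-gap values below are approximate room-temperature figures used for illustration:

```python
# The band gap sets the longest wavelength a detector material can absorb:
# a photon must carry at least the gap energy E_g, and photon energy falls
# with wavelength (E = hc / lambda). In convenient units, hc = ~1239.84 eV*nm,
# so lambda_cutoff [nm] = 1239.84 / E_g [eV].
HC_EV_NM = 1239.84

def cutoff_wavelength_nm(band_gap_ev: float) -> float:
    """Longest wavelength (nm) a material with the given band gap can absorb."""
    return HC_EV_NM / band_gap_ev

# Silicon (E_g ~1.12 eV): cutoff near 1100 nm, so it covers visible and NIR
# light, consistent with the ~1000 nm practical limit cited in the text.
print(round(cutoff_wavelength_nm(1.12)))

# Lattice-matched InGaAs (E_g ~0.75 eV): cutoff near 1650 nm, which is why
# III-V detectors are needed for 1550 nm SWIR LiDAR.
print(round(cutoff_wavelength_nm(0.75)))
```

This simple calculation explains the article's dividing line: a 905 nm system can use cheap silicon detectors, while a 1550 nm system requires more expensive III-V materials such as InGaAs.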