The recent self-driving Uber and Tesla crashes are putting safety at the forefront of the autonomous vehicle discussion. These accidents could slow the development of self-driving vehicles, but we believe it is important to continue the work of building vehicles that will be safer than those driven by humans. We want to educate the autonomous vehicle market on a technology that can make these vehicles much safer. We asked our engineers to describe, in their own words, why an autonomous vehicle needs a thermal camera.
Here's the AdaSky R&D team's reply:
The scene: it's night time, the street lights aren't working well, and a pedestrian is crossing the street.
Will an autonomous vehicle detect the pedestrian in time?
Assuming the main front sensors on the vehicle are:
1. Radar (part of the setup, although less relevant)
2. A CMOS-based vision solution with perfect detection out to the range the headlights can illuminate
3. A lidar such as the Velodyne HDL-64E, with a range of up to 120 meters and a refresh rate of 5–20 Hz
For successful detection, the autonomous vehicle will either need a high-confidence detection from one of the sensor modalities, or a consensus between two of the sensing modalities. But how quickly can each of these modalities react?
1. Let’s start with the Radar:
“A vehicle or truck might have a large reflection, but pedestrians and motorcycles aren’t only smaller in size but have relatively few hard or metallic shapes to reflect radar signals. Reflection from a truck can overshadow that from a motorcycle just as a small child standing next to a vehicle could become “invisible” to a radar receiver.” (reference)
Assuming there's no significant metallic signature in the scene (unlike in the Uber accident, where radar would be expected to detect a metallic bicycle), radar detection of a lone pedestrian is unreliable at best. Some sources say radar will improve in the future, but it will continue to lack resolution, making long-range pedestrian detection by radar unlikely.
2. CMOS vision solution:
A CMOS camera can only see as far as the headlights can illuminate. Since we've mentioned street lights, we'll also assume the low beams are on. In the average case, depending on visibility, lamp type, and so on, the visible range for pedestrians is 40 to 50 meters, so detection at 40 meters is a reasonable figure.
3. And finally the Lidar:
The HDL-64E has a range of "up to 120 meters" and a refresh rate of 5–20 Hz. In practice, the actual range for pedestrians is much smaller than 120 meters, since it takes more than a single dot in the point cloud to tell a pedestrian apart from anything else. And since the refresh rate is very likely set high (20 Hz) to allow rapid updates of the vehicle's surroundings, the effective range shrinks even further. Let's assume a pedestrian detection range of 50 meters (Voyager says 60, but we're erring on the safe side).
From there, lidar faces another difficulty: it detects an object, but the vehicle still needs to determine what kind of object it is, and whether that object is on a collision course with the vehicle.
It will probably take a few more lidar spins to establish that – remember, the confidence level has to be very high, as there might be no other modality available to double-check and confirm the detection at 50 meters. So we're down to 40 meters (~10 spins, or 0.5 seconds, later).
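The arithmetic behind "down to 40 meters" can be sketched in a few lines. This is a minimal illustration, assuming the spin count (10), refresh rate (20 Hz), and a vehicle speed of 70 kph, as in the scenario below:

```python
# Sketch of the detection-delay arithmetic. The 10-spin confirmation and
# 70 kph speed are this article's assumptions, not measured values.

def distance_closed_m(speed_kph: float, spins: int, refresh_hz: float) -> float:
    """Distance the vehicle covers while the lidar accumulates `spins` frames."""
    speed_ms = speed_kph / 3.6       # convert kph to m/s
    delay_s = spins / refresh_hz     # time spent confirming the detection
    return speed_ms * delay_s

# 10 spins at 20 Hz is 0.5 s; at 70 kph that closes ~9.7 m,
# taking the effective detection range from 50 m down to ~40 m.
gap = distance_closed_m(70, 10, 20)
print(f"{gap:.1f} m closed during confirmation")  # → 9.7 m
```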
At a 40-meter range, both the CMOS camera and the lidar agree that there's a pedestrian in the other lane, about to cross into the vehicle's lane. With a small delay (milliseconds), the vehicle's ECU will then issue an emergency braking procedure.
At 70kph (43mph), braking distance on a dry surface is about 30 meters (100 feet).
Great news: Pedestrian is saved!
But let's play with the variables a bit, because this could happen under many circumstances:
- Wet road – greater braking distance = not enough time to come to a full stop.
- Vehicle is driving faster, means greater braking distance needed = not enough time to come to a full stop.
- Vehicle is heavy or is a truck = not enough time to come to a full stop.
- Vehicle is going downhill = not enough time to come to a full stop.
- Pedestrian is a small child, which means fewer pixels ~ smaller detection range = unlikely that the vehicle has enough time to come to a full stop.
- Headlights stop functioning, which means smaller detection range = unlikely that the vehicle has enough time to come to a full stop.
- Lidar stopped functioning = unlikely that the vehicle has enough time to come to a full stop.
- CMOS stopped functioning = unlikely that the vehicle has enough time to come to a full stop.
- Bad weather means smaller detection range = unlikely that the vehicle has enough time to come to a full stop.
- CMOS is blinded by a car that’s behind the pedestrian = unlikely that the vehicle has enough time to come to a full stop.
In each of those scenarios, the vehicle can’t come to a full stop and therefore the pedestrian is likely hit.
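The "play with the variables" exercise can be sketched with the standard braking-distance model, d = v² / (2·μ·g). This is a simplified illustration (it ignores reaction time and grade); the friction coefficients (0.7 dry, 0.4 wet) and the 40-meter detection range are assumptions for the sake of the example:

```python
# Simplified stopping model: d = v^2 / (2 * mu * g). Friction coefficients
# and the 40 m detection range are illustrative assumptions.

G = 9.81  # gravitational acceleration, m/s^2

def braking_distance_m(speed_kph: float, mu: float) -> float:
    """Distance to brake to a full stop on a level surface with friction mu."""
    v = speed_kph / 3.6
    return v ** 2 / (2 * mu * G)

def stops_in_time(speed_kph: float, mu: float, detection_range_m: float) -> bool:
    return braking_distance_m(speed_kph, mu) <= detection_range_m

# Dry road at 70 kph: ~27.5 m to stop, inside the 40 m detection range.
print(stops_in_time(70, 0.7, 40))   # True
# Wet road (mu ~0.4): ~48 m to stop -- too far.
print(stops_in_time(70, 0.4, 40))   # False
# Faster vehicle (100 kph) on a dry road: ~56 m -- also too far.
print(stops_in_time(100, 0.7, 40))  # False
```

Changing any one variable against us (speed, friction, or detection range) is enough to flip the outcome, which is the point of the list above.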
Adjust the ranges to simulate the experience in different scenarios for yourself, here.
Fortunately, sensors that employ far infrared (FIR) technology can fill the reliability gaps left by other AV sensors.
The above scenarios are exactly why we need to add an FIR camera to every autonomous vehicle:
- Long range pedestrian detection during day and night – up to 200m
- Easily segments living humans from objects – humans appear very bright against the background environment with a thermal sensor – see what we mean in this video of a man crossing the street at night.
- Not blinded by oncoming vehicle headlights or direct sunlight
- Works in any weather
- New independent sensing modality, increases confidence when fused with other modalities
What is FIR?
FIR has been used for decades in defense, security, firefighting, and construction, making it a mature and proven technology. We have taken that proven technology and adapted it to automotive applications. FIR-based cameras use far-infrared light waves to detect differences in heat (thermal radiation) naturally emitted by objects and convert this data into an image. Unlike the more common optical sensors on cars, which capture images perceptible to the human eye, FIR cameras sense wavelengths well beyond the visible spectrum and can thus detect objects that may not otherwise be perceptible to a camera, radar, or lidar.
With a sensitivity of 0.05 °C for high-contrast imaging, a VGA thermal FIR sensor will detect a pedestrian at up to 200 meters (with a 17-degree FOV). The FIR sensor can track the pedestrian over time at 30 or 60 fps and also detect the road ahead. In most cases, thermal FIR also detects lane markings and the pedestrian's orientation (which direction he or she is facing). It can then tell that the pedestrian is stepping off the sidewalk and about to start crossing the road in the opposite lane, and hence the FIR sensor can predict whether there is a risk of the vehicle hitting the pedestrian.
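A quick back-of-the-envelope check of the 200-meter figure: how many pixels does a pedestrian span on a 640×480 (VGA) sensor with a 17-degree horizontal FOV? This sketch assumes square pixels and illustrative pedestrian dimensions (0.5 m wide, 1.8 m tall), which are not from the article:

```python
import math

# Pixels-on-target estimate for a VGA sensor with a 17-degree horizontal FOV.
# Pedestrian dimensions and the square-pixel assumption are illustrative.

def pixels_on_target(size_m: float, range_m: float,
                     fov_deg: float = 17.0, pixels: int = 640) -> float:
    ifov_rad = math.radians(fov_deg) / pixels   # angle subtended by one pixel
    footprint_m = range_m * ifov_rad            # pixel footprint at that range
    return size_m / footprint_m

# At 200 m, one pixel covers ~9.3 cm, so a pedestrian spans roughly
# 5 pixels across and ~19 pixels tall.
print(round(pixels_on_target(0.5, 200)))   # width in pixels
print(round(pixels_on_target(1.8, 200)))   # height in pixels
```

A handful of warm pixels against a cooler background is a far easier detection target than the same handful of pixels in a dark visible-light image, which is the core of the thermal argument.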
See video of a bicycle crossing the road at night with a thermal sensor versus a CMOS solution.
An autonomous vehicle with FIR sensors will slow down to make sure there's enough time to brake in case the pedestrian makes an unpredictable move. This means everyone arrives safely at their destination.
In addition, a thermal FIR sensor can do all this just as well during daylight, and especially when it is raining, foggy, or snowing, when blinded by the sun, when entering or exiting a tunnel, or in other dynamic lighting conditions. In every single case, a thermal FIR sensor will only raise the confidence level and the safety level.
AdaSky's Viper sensor is a first-of-its-kind thermal sensor with shutterless technology, meaning the vehicle's vision isn't mechanically blinded, even for a millisecond. Best in class for resolution and power efficiency, Viper is significantly smaller than other camera solutions at only 2.6 cm in diameter by 4.3 cm in length and has very low power consumption, making it ideal for the complex autonomous vehicle system.
Comment: There's nothing new here – even a human driver might not be able to brake in time.
Answer: Well, that's correct. But that's a good excuse for a human driver; it doesn't cut it for an autonomous vehicle.
Comment: Even my smartphone’s camera or my GoPro can see in the dark
Answer: This might be true, but for CMOS cameras it comes at a cost – long exposure time. Long exposure time is not ideal for a machine vision solution because, in a dynamic scene, it means motion blur, and blur makes image perception harder. There are ways to compensate for this, but the physical fact is that the faster the vehicle or pedestrian is moving, the worse the image looks. How much worse doesn't really matter: any degradation of the image means a pedestrian could go undetected. This makes the camera less practical.
Comment: Lidars will have increased range and precision in the near future.
Answer 1: That is yet to be proven and there are lots of issues that still need to be resolved (such as eye-safety). We do agree that they will get better in the future, however autonomous vehicles will still prefer to get timing critical decisions faster and with higher confidence. Hence another modality is needed to confirm what the Lidar is seeing.
Answer 2: Some day, lidar will likely be able to detect at long range, but it might not be able to segment and classify the object (or human) fast enough. It will be difficult for lidar to detect pedestrians or animals behind fences, vegetation, street signs, or vehicle doors, or those carrying things like bicycles on their way to cross the road.
Answer 3: Lidar will probably not perform well in bad weather.
Answer 4: FIR's resolution per dollar will probably still be higher than that of the other sensor modalities.
AdaSky brings far infrared technology to the automotive market, aiming to empower the vehicles of tomorrow to see further and better – whenever. AdaSky’s founding team is made up of veterans from the semiconductor, thermal sensor, image processing and computer vision market. They have been developing state-of-the-art FIR sensing solutions for the last decade. Now, the company’s multidisciplinary team of experienced engineers has further innovated and adapted the solution to the specific needs of self-driving cars, making AdaSky’s solution a critical addition to cars to eliminate vision and perception weaknesses for fully-autonomous vehicles.