Where are Autonomous Vehicles Now?

Autonomous vehicles hold much promise: they're set to transform our roadways, creating a much safer driving experience. After all, statistics show that human error is to blame for more than 90% of road accidents. Back in 2015 and 2016, many automotive manufacturers announced major plans to get fully autonomous commercial vehicles on the road within a few years, but we've long since passed those initial estimates. It was an exciting time for the industry, but the hype had moved far ahead of the reality. So, what progress has actually been made toward a fully self-driving car? It helps to evaluate progress using SAE's widely accepted levels of driving automation. There are six levels, from Level 0 (no driving automation) to Level 5 (full driving automation):
- Level 0: No autonomy (driver has full control of the vehicle)
- Level 1: Driver assistance
- Level 2: Partial automation
- Level 3: Conditional automation
- Level 4: High automation
- Level 5: Full automation (self-driving car)
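To make the levels above concrete, here is a minimal sketch of how they might be represented in code; the enum names and the `driver_must_supervise` helper are illustrative, not part of any standard API:

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE J3016 driving-automation levels (names are illustrative)."""
    NO_AUTOMATION = 0           # driver has full control of the vehicle
    DRIVER_ASSISTANCE = 1       # e.g., adaptive cruise control
    PARTIAL_AUTOMATION = 2      # steering + speed automated, driver supervises
    CONDITIONAL_AUTOMATION = 3  # system drives, driver must take over on request
    HIGH_AUTOMATION = 4         # no driver needed within a defined operating domain
    FULL_AUTOMATION = 5         # self-driving car, all roads and conditions

def driver_must_supervise(level: SAELevel) -> bool:
    """At Levels 0-2, the human driver is still responsible for monitoring the road."""
    return level <= SAELevel.PARTIAL_AUTOMATION
```

A key practical boundary falls between Levels 2 and 3: below it the human supervises continuously, above it the system takes primary responsibility within its operating conditions.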
What Makes Building an Autonomous Vehicle So Challenging?

Ultimately, the problem comes down to this: creating a fully self-driving car for all conditions is extremely difficult. It's far more complicated than automotive experts realized when they made their original projections, which is why companies have had to delay timelines, sell off autonomous divisions, and revamp their approach. Let's talk about what makes this type of project so difficult:
- The World is Complicated. Autonomous vehicles must navigate a highly complex world of various roadways, street signs, pedestrians, other vehicles, buildings, and more.
- Humans are Unpredictable. These vehicles need to not only understand their driver, but also be able to predict human behavior, which as we know can be relatively unpredictable.
- Tech is Expensive. Hardware installed in autonomous vehicles (think cameras, LiDAR systems, and RADAR) is used to capture the external world and help the vehicle make decisions. But this hardware still needs to improve significantly to provide the level of detailed data vehicles need. It also isn’t very cost-effective.
- Training Must be Thorough. Autonomous vehicles need to be trained for all possible conditions (for example, extreme weather like snow or fog); it’s extraordinarily difficult to predict all the conditions a vehicle might encounter.
- There’s No Margin for Error. Autonomous vehicles are a life-or-death use case, as they directly impact driver and passenger safety. These systems must be exceptionally accurate.
Data Holds the Key

Solving the above challenges means looking at where they come from. And to do that, we need to understand how a self-driving car works. These cars rely on AI, especially computer vision models, which give the vehicle the ability to “see” the world around it and then make decisions based on what it sees. Data is captured from hardware on the vehicle (as we mentioned: cameras, LiDAR, RADAR, and other sensors) and used as input for the models.

For a car to react to a pedestrian in the road, for example, it will need to have seen sensor data representing that condition before. In other words, it needs to be trained using data that represents all possible scenarios and conditions. If you think about your experiences in a vehicle, you can understand that this ends up being a lot of conditions, and therefore a lot of training data. If we look at our pedestrian example alone, we’d also need to incorporate examples of children as well as adults, people in wheelchairs, babies in strollers, and other scenarios that may not immediately come to mind. Further, we’d want our model to differentiate an actual pedestrian from, say, a picture of a person’s face on a sign. What seems like a straightforward use case can get complicated fast.

Not only does the vehicle need a lot of training data; that training data also needs to be accurately annotated. An AI model can’t just look at an image of a pedestrian and understand what it’s looking at; the image needs to include clear labels of which part of that image contains the pedestrian. As a result of this complexity, there are many different types of annotation used for autonomous vehicle AI models, including:
- Point cloud labeling for LiDAR and RADAR data: identifies and tracks objects in a scene
- 2D labeling including semantic segmentation for camera data: gives the model an understanding of which class each pixel belongs to
- Video object and event tracking: helps the model understand how objects move through time
- And more
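To illustrate what annotated camera data might look like in practice, here is a minimal sketch of a 2D bounding-box annotation for one training frame. The class IDs, field names, and `objects_of_class` helper are all assumptions for illustration, not any particular labeling tool's format:

```python
from dataclasses import dataclass, field

# Hypothetical class IDs for this sketch; real taxonomies are much larger.
CLASSES = {0: "road", 1: "pedestrian", 2: "vehicle", 3: "sign"}

@dataclass
class BoundingBox:
    """A 2D box labeling one object in a camera frame (pixel coordinates)."""
    x_min: int
    y_min: int
    x_max: int
    y_max: int
    class_id: int

@dataclass
class AnnotatedFrame:
    """One annotated training sample: a camera frame plus its object labels."""
    frame_id: str
    timestamp_s: float
    boxes: list = field(default_factory=list)

    def objects_of_class(self, class_id: int) -> list:
        return [b for b in self.boxes if b.class_id == class_id]

# Usage: label a frame containing a real pedestrian and a sign showing a face.
# Distinguishing the two is exactly the kind of subtlety annotation must capture.
frame = AnnotatedFrame(frame_id="cam0_000123", timestamp_s=41.7)
frame.boxes.append(BoundingBox(310, 140, 380, 330, class_id=1))  # actual pedestrian
frame.boxes.append(BoundingBox(505, 60, 560, 150, class_id=3))   # sign, not a person

print(len(frame.objects_of_class(1)))  # count of labeled pedestrians
```

Semantic segmentation goes a step further than boxes, assigning a class to every pixel, and point cloud labeling applies the same idea to the 3D points returned by LiDAR and RADAR.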