Autonomous vehicles are developed with a wide range of self-driving capabilities. Some vehicles provide basic automation, like cruise control and blind-spot detection, while others are approaching fully autonomous operation. Many of these capabilities are made possible by AI technology.
However, before large-scale deployments for smart city transportation become feasible, more work is needed to improve the AI algorithms and mapping features that power autonomous vehicles. This article reviews innovations in autonomous vehicle AI and mapping that could help secure a future in which autonomous vehicles are deployed city-wide.
1. Deep Reinforcement Learning (DRL)
Multiple types of machine learning are being applied to the development of autonomous vehicles, including DRL. This method combines the strategies of deep learning and reinforcement learning in an attempt to better automate the training of algorithms.
When implementing DRL, researchers use reward functions to guide software-defined agents toward a specific goal. Throughout training, these agents learn either how to attain that goal or how to maximize the reward over subsequent steps.
With the help of data collected from current autonomous vehicles, human drivers, and manufacturers, these agents can eventually be trained to operate independently. In the meantime, DRL has useful applications in lower-level vehicle automation. It can also provide value in vehicle manufacturing, where it can be applied to transform factory automation and vehicle maintenance.
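The reward-driven training loop described above can be sketched with tabular Q-learning, a simple precursor to DRL (DRL replaces the table with a deep neural network). The toy lane-keeping environment, positions, and reward values below are invented for illustration:

```python
import random

random.seed(0)

# Toy environment: the vehicle occupies one of 5 lateral positions (0-4);
# position 2 is the lane center. Actions: steer left (-1), stay (0), right (+1).
# The reward is highest at the center, so the learned policy should steer
# toward it -- this is the "reward function guiding the agent toward a goal".
POSITIONS = range(5)
ACTIONS = [-1, 0, 1]

def step(pos, action):
    new_pos = min(max(pos + action, 0), 4)
    reward = 1.0 if new_pos == 2 else -abs(new_pos - 2)
    return new_pos, reward

# Tabular Q-values: Q[state][action_index]
Q = {p: [0.0, 0.0, 0.0] for p in POSITIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.2

for episode in range(500):
    pos = random.choice(list(POSITIONS))
    for _ in range(20):
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        if random.random() < epsilon:
            a_idx = random.randrange(3)
        else:
            a_idx = max(range(3), key=lambda i: Q[pos][i])
        new_pos, reward = step(pos, ACTIONS[a_idx])
        # Q-learning update: move toward reward plus discounted best future value.
        Q[pos][a_idx] += alpha * (reward + gamma * max(Q[new_pos]) - Q[pos][a_idx])
        pos = new_pos

# The greedy policy should steer from the edges toward the center.
policy = {p: ACTIONS[max(range(3), key=lambda i: Q[p][i])] for p in POSITIONS}
print(policy)
```

After training, the greedy policy steers right from positions left of center, left from positions right of center, and holds at the center, which is exactly the "maximize reward over subsequent steps" behavior described above.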
2. Path Planning
Path planning is the decision-making process that autonomous vehicles use to determine safe, convenient, and economical routes. It requires taking into account street configurations, static and dynamic obstacles, and changing conditions. Currently, path planning is based on the combination of behavior-based models, feasible models, and predictive control models.
The process occurs roughly as follows:
The route planning mechanism determines a route from point A to B according to available roads or lanes.
A behavioral layer is then applied to determine vehicle movement according to environmental variables, such as traffic or weather conditions.
These determinations are applied to feasible and predictive control models to guide the operation of the vehicle.
As the trip progresses, feedback from sensors and analyses is fed back to these components so adjustments can be made in real time for errors or unforeseen events.
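The first step above, finding a route from point A to B over available roads, is commonly implemented as a shortest-path search over a road graph. A minimal sketch using Dijkstra's algorithm, with a hypothetical four-intersection graph (a behavioral layer could inflate edge weights for congested or weather-affected segments before planning):

```python
import heapq

# Hypothetical road graph: nodes are intersections, edge weights are travel
# costs (e.g., distance or expected travel time).
roads = {
    "A": {"B": 4, "C": 2},
    "B": {"D": 5},
    "C": {"B": 1, "D": 8},
    "D": {},
}

def plan_route(graph, start, goal):
    """Dijkstra's algorithm: return (cost, path) of the cheapest route."""
    queue = [(0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, weight in graph[node].items():
            if neighbor not in visited:
                heapq.heappush(queue, (cost + weight, neighbor, path + [neighbor]))
    return float("inf"), []

cost, path = plan_route(roads, "A", "D")
print(cost, path)  # cheapest route: A -> C -> B -> D at cost 8
```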
In the above process, the relatively easy part is predicting how the vehicle itself will behave under certain conditions. What is more challenging is predicting what might happen in the environment the vehicle is operating in. For example, how can models predict when neighboring vehicles will swerve or when pedestrians will step into the street?
To improve these predictions, researchers are applying multi-model algorithms to simulate possible trajectories and speeds of objects. These models enable the autonomous system to prepare for multiple scenarios simultaneously. Then, based on the evaluated probability of each scenario occurring, the system can decide how the vehicle responds.
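A minimal sketch of the multi-model idea, with invented motion models and probabilities: each hypothesis about a neighboring vehicle is propagated forward in time, and the risk of each scenario is weighted by how likely it is judged to be:

```python
# Each hypothesis is a motion model paired with an estimated probability
# (the probabilities here are assumed, e.g. from a behavior classifier).

def constant_velocity(x, y, vx, vy, t):
    return x + vx * t, y + vy * t

def swerve(direction):
    # Lateral drift of 1 m/s in the given direction on top of forward motion.
    def model(x, y, vx, vy, t):
        return x + vx * t, y + vy * t + direction * 1.0 * t
    return model

hypotheses = [
    (constant_velocity, 0.7),
    (swerve(-1), 0.2),   # lateral drift toward the ego lane (decreasing y)
    (swerve(+1), 0.1),   # lateral drift away from the ego lane
]

def expected_risk(ego_y, neighbor_state, horizon=3.0):
    """Probability-weighted risk that the neighbor ends up within 1 m laterally."""
    x, y, vx, vy = neighbor_state
    risk = 0.0
    for model, prob in hypotheses:
        _, future_y = model(x, y, vx, vy, horizon)
        if abs(future_y - ego_y) < 1.0:
            risk += prob
    return risk

# Neighbor one lane over (y = 3.5 m), moving parallel to the ego vehicle at y = 0.
risk = expected_risk(ego_y=0.0, neighbor_state=(0.0, 3.5, 20.0, 0.0))
print(risk)  # only the drift-toward-ego hypothesis ends within 1 m
```

In a real planner, the risk of each scenario would feed a cost function that selects the vehicle's response, such as slowing down or increasing lateral clearance.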
3. Simultaneous Localization and Mapping (SLAM)
Simultaneous localization and mapping (SLAM) is a technology used to orient vehicles to their surroundings in real time. While still in its early stages, this technology could eventually enable vehicles to operate autonomously in areas where maps are not available or where available maps are incorrect.
What makes this technology so challenging to implement is that mapping requires first knowing the vehicle's position and orientation, yet position and orientation are typically determined by comparing sensor data to pre-existing maps of the surroundings. This circular dependency makes it difficult to achieve either goal when landmark information is unknown.
One of the ways this problem is overcome is by incorporating a rough map, based on GPS data, which is then refined as a vehicle moves through an environment. This requires vehicle sensors that constantly measure the environment and apply careful calculations to correct for vehicle movement and sensor accuracy.
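This refine-as-you-go loop can be sketched in one dimension with a simple Kalman-style filter; the GPS prior, landmark position, and noise values below are illustrative assumptions, not any production system's parameters:

```python
# The vehicle starts with a rough GPS position estimate, then fuses range
# measurements to a landmark at a known map position, correcting the
# estimate as it moves. State is a 1D Gaussian: (mean, variance).

LANDMARK = 50.0  # landmark position on a 1D road, taken from the rough map

def predict(mean, var, motion, motion_var):
    """Apply odometry: motion adds displacement and uncertainty."""
    return mean + motion, var + motion_var

def kalman_update(mean, var, measurement, meas_var):
    """Fuse a position measurement into the current Gaussian estimate."""
    gain = var / (var + meas_var)
    return mean + gain * (measurement - mean), (1 - gain) * var

# Rough GPS prior: position ~ N(8.0, 25.0), while the true position is 10.0.
mean, var = 8.0, 25.0
true_pos = 10.0

for _ in range(5):
    true_pos += 1.0                       # the vehicle moves 1 m per step
    mean, var = predict(mean, var, 1.0, 0.5)
    range_meas = LANDMARK - true_pos      # noiseless here, for clarity
    # A range to a known landmark implies a position measurement.
    mean, var = kalman_update(mean, var, LANDMARK - range_meas, 1.0)

print(round(mean, 2), round(var, 3))  # estimate converges near the true position
```

Each loop iteration mirrors the text: measure the environment, correct for vehicle movement, and shrink the uncertainty inherited from the rough GPS fix.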
An example of SLAM in practice can be seen in Google's vehicle used for generating Google Maps data. This vehicle uses a roof-mounted lidar (light detection and ranging) assembly to measure its surroundings.
Measurements are taken up to 10 times a second, depending on how fast the vehicle is moving. The collected data is then passed through an array of statistical models, including Bayesian filters and Monte Carlo simulations, to improve existing maps.
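The Monte Carlo side of this pipeline can be sketched as a particle filter (Monte Carlo localization); the landmark, road length, and noise values below are invented for illustration:

```python
import math
import random

random.seed(1)

# Particles are position hypotheses on a 1D road. Each lidar-like range
# measurement to a landmark at a known position reweights them, and
# resampling concentrates the particle cloud around likely positions.
LANDMARK = 100.0
TRUE_POS = 30.0
N = 1000

def likelihood(particle, measured_range, sigma=2.0):
    """Gaussian likelihood of the measured range given a particle's position."""
    err = (LANDMARK - particle) - measured_range
    return math.exp(-(err * err) / (2 * sigma * sigma))

particles = [random.uniform(0.0, 100.0) for _ in range(N)]

for _ in range(10):
    measured = LANDMARK - TRUE_POS + random.gauss(0, 1.0)  # noisy range reading
    weights = [likelihood(p, measured) for p in particles]
    # Resample with replacement proportionally to weight, then jitter slightly
    # so the cloud does not collapse to identical copies.
    particles = random.choices(particles, weights=weights, k=N)
    particles = [p + random.gauss(0, 0.5) for p in particles]

estimate = sum(particles) / N
print(round(estimate, 1))  # should settle close to TRUE_POS
```

This is the Bayesian filtering pattern mentioned above: each measurement updates a probability distribution over positions rather than a single point estimate.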
4. HD Maps
High-definition (HD) maps are maps that include minute environmental details, often down to a centimeter scale. These maps include the details that a human driver would see and interpret in real time while driving, but which autonomous vehicles need ahead of time: for example, lane markings, curve angles, road boundaries, and pavement gradients.
The level of detail provided by HD maps helps autonomous vehicles predict behavior and follow directions more accurately. This doesn't eliminate the need to evaluate environmental changes in real time, but it does lighten the load of how thoroughly sensor data must be processed and analyzed.
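A toy sketch of what an HD map record might contain, with a lookup a planner could consult aheadead of real-time sensing. The schema, field names, and values here are hypothetical, not any vendor's actual format:

```python
# Hypothetical HD-map fragment: each lane segment stores centimeter-scale
# attributes that a human driver would otherwise read from the road.
hd_map = [
    {"lane_id": "L1", "center_y": 0.0,  "width_m": 3.50,
     "curvature": 0.002, "speed_limit_kph": 50, "boundary": "solid"},
    {"lane_id": "L2", "center_y": 3.55, "width_m": 3.48,
     "curvature": 0.002, "speed_limit_kph": 50, "boundary": "dashed"},
]

def lane_at(lateral_pos):
    """Return the lane segment whose centerline is closest to the position."""
    return min(hd_map, key=lambda lane: abs(lane["center_y"] - lateral_pos))

# A vehicle localized at lateral position 3.2 m is in lane L2, so the planner
# already knows its boundary type and speed limit without inferring them
# from camera or lidar data.
lane = lane_at(3.2)
print(lane["lane_id"], lane["speed_limit_kph"])
```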
AI algorithms are just one of the components needed to power fully autonomous vehicles. Progress is also driven by the integration of higher-quality data, such as data collected from advanced sensors or derived from more accurate maps. While deep learning models have contributed greatly to the improvement of autonomous vehicle AI, these vehicles still face many challenges that must be addressed before true maturity can be achieved.
About the Author
Limor Maayan-Wainstein is a senior technical writer with 10 years of experience writing about cybersecurity, big data, cloud computing, web development, and more. She is the winner of the STC Cross-European Technical Communication Award (2008) and a regular contributor to technology publications.