Cars of the future will need to be able to heal themselves. We're not really driving cars anymore; we're driving computers.

Cars are complicated: a modern vehicle contains 100 or more computer-controlled subsystems handling functions from steering and braking to adjusting the seats, and all of that hardware runs massive amounts of embedded software.

Many interacting systems help a self-driving car control itself, and most of them still need improvement: the navigation and location systems, the electronic map and map matching, global path planning, environment perception (laser, radar, and visual), vehicle control, and perception of the vehicle's own speed and direction.

Being connected to the internet makes vehicles vulnerable, and software updates can sometimes introduce new problems. The risk will only grow with the push toward automation, so a software safety net will be important: when you're downloading updates to safety-critical systems, you must not break the car. Self-healing software, now being tested by several automakers, monitors a car's systems to help ensure that doesn't happen.
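
As a concrete illustration, here is a minimal sketch of the kind of guard an update installer might apply before touching a safety-critical system. The function name, the SHA-256 check, and the staging step are assumptions made for this sketch, not any automaker's actual API; real automotive updaters also verify cryptographic signatures and exercise the new image in an isolated partition first.

```python
import hashlib
import shutil
from pathlib import Path

def verify_and_stage_update(package: Path, expected_sha256: str,
                            staging_dir: Path) -> bool:
    """Check an OTA package's integrity before it touches the live system.

    Hypothetical sketch: production updaters add signature verification
    and only swap partitions after the staged image passes self-tests.
    """
    digest = hashlib.sha256(package.read_bytes()).hexdigest()
    if digest != expected_sha256:
        # Corrupted or tampered download: refuse to install.
        return False
    staging_dir.mkdir(parents=True, exist_ok=True)
    # Stage the package; the live system is untouched until tests pass.
    shutil.copy2(package, staging_dir / package.name)
    return True
```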

We are going to see AI technology that self-diagnoses coding problems and fixes errors on the go. The technology sifts line by line through the estimated 100 million lines of code in today's cars to detect faults and predict problems before they occur. If it finds a glitch, the software seamlessly rolls back to an earlier, safer version, so related functions aren't disabled while the problem is addressed; rollback capability will therefore be important. Self-healing software could be a viable way to protect against both malicious and accidental corruption of vehicle software. The approach requires additional onboard storage to preserve previous versions, which can hurt performance, though new electronic systems that automakers have in the works should address that. If machines are going to replace human drivers, they will need to be as resilient as humans.
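
The rollback idea can be sketched in a few lines. The class below is a toy illustration under stated assumptions: the `health_check` callable and version objects are hypothetical stand-ins for real firmware images, and production systems work at the level of signed partitions rather than Python objects. The bounded snapshot history is the "additional onboard storage" the paragraph above refers to.

```python
from collections import deque

class RollbackManager:
    """Keep recent known-good versions so a faulty update can be undone."""

    def __init__(self, max_snapshots: int = 3):
        # Bounded history: older snapshots are discarded automatically.
        self.snapshots = deque(maxlen=max_snapshots)

    def deploy(self, version, health_check):
        """Activate `version`; fall back to the last good one if it fails."""
        if health_check(version):
            self.snapshots.append(version)
            return version
        # Fault detected: revert to the most recent known-good version
        # so the affected function stays available.
        if self.snapshots:
            return self.snapshots[-1]
        raise RuntimeError("no known-good version to roll back to")
```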

The challenge for future mobility companies is to produce control systems capable of analyzing sensory data to accurately detect other vehicles and the road ahead. Modern self-driving cars generally use SLAM (simultaneous localization and mapping) algorithms, which fuse data from multiple sensors and an offline map into current location estimates and map updates. Waymo has developed a variant of SLAM with detection and tracking of other moving objects (DATMO), which also handles obstacles such as cars and pedestrians. Simpler systems may use roadside real-time locating system (RTLS) technologies to aid localization. Typical sensors include radar, lidar, stereo vision, and inertial measurement units (IMUs). Control systems on automated cars may use sensor fusion, an approach that integrates information from the car's various sensors to produce a more consistent, accurate, and useful view of the environment. Heavy rainfall, hail, or snow can impede these sensors.
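
Sensor fusion can be illustrated with the simplest possible case: combining two noisy measurements of the same distance, weighting each by how much it can be trusted. This inverse-variance weighting is the one-dimensional core of a Kalman filter update; the sensor noise figures below are made up for illustration, and real systems run full Kalman or particle filters over many state variables.

```python
def fuse(measurement_a: float, var_a: float,
         measurement_b: float, var_b: float) -> tuple[float, float]:
    """Inverse-variance weighted fusion of two estimates of one quantity.

    The less noisy sensor gets more weight, and the fused variance
    is smaller than either input's.
    """
    weight_a = var_b / (var_a + var_b)
    fused = weight_a * measurement_a + (1.0 - weight_a) * measurement_b
    fused_var = (var_a * var_b) / (var_a + var_b)
    return fused, fused_var

# Hypothetical example: radar puts an obstacle at 52.0 m (variance 4.0),
# lidar puts it at 50.5 m (variance 0.25), so lidar dominates the estimate.
distance, variance = fuse(52.0, 4.0, 50.5, 0.25)
print(f"fused distance: {distance:.2f} m (variance {variance:.3f})")
```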

Driverless vehicles require some form of machine learning for visual object recognition. Automated cars are being developed with deep neural networks, a type of deep learning architecture with many computational stages, or layers, in which simulated neurons are activated by input from the environment. The neural network depends on an extensive amount of data extracted from real-life driving scenarios, which enables it to “learn” how to execute the best course of action.
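
To make "many computational stages" concrete, here is a toy convolutional network in PyTorch. The class name, the 32x32 input size, and the three output classes are assumptions invented for this sketch; real perception networks are vastly larger and trained on millions of labeled driving frames.

```python
import torch
import torch.nn as nn

class TinyRoadNet(nn.Module):
    """Minimal convolutional classifier, for illustration only."""

    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # stage 1: edges, textures
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # stage 2: object parts
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

# One random 32x32 RGB tensor stands in for a camera frame.
logits = TinyRoadNet()(torch.randn(1, 3, 32, 32))
print(logits.shape)  # torch.Size([1, 3]): one score per hypothetical class
```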
