Alfred Jones Talks About the Challenges of Designing Fully Self-Driving Vehicles

The leap to self-driving cars could be as game-changing as the one from horse power to engine power. If cars prove able to drive themselves better than humans do, the safety gains could be enormous: auto accidents were the eighth leading cause of death worldwide in 2016. And who doesn’t want to turn travel time into something genuinely restful, or genuinely productive?

But getting there is a big challenge, as Alfred Jones knows all too well. He’s the Head of Mechanical Engineering at Lyft’s Level 5 self-driving division, where his team builds the roof racks and other hardware that give the vehicles their sensors and computational muscle. In his keynote talk at Hackaday Remoticon, Alfred Jones walks us through what each level of self-driving means, how the problem is being approached, and where the sticking points lie between what’s being tested now and a truly steering-wheel-free future.

Check out the video below, and take a deeper dive into the details of his talk.

Levels of Self-Driving

The Society of Automotive Engineers (SAE) established a standard outlining six different levels of self-driving. This sets the goal posts and gives us a way to discuss where different approaches have landed on the march to produce robotic chauffeurs.

Alfred walks through each in detail. Level 0 is no automation beyond cruise control and ABS. The next level up, “driver assistance”, adds features like lane keeping and distance-aware cruise control. Level 2, “partial autonomous driving”, combines two or more of these functions. (Tesla’s “Full Self-Driving” mode is really only partially autonomous.) In these modes, the driver is responsible for monitoring the system and deciding when its use is safe.

Level 3 allows the system to disengage and hand control back to the driver. This is the first level where the car starts to make higher-level decisions about the overall traffic situation, and it’s the beginning of what I think is the general public’s de facto definition of self-driving. At Level 4 the vehicle should be able to drive itself completely autonomously in restricted areas, and to get itself safely to the side of the road even if the driver doesn’t take over. Level 5 is the top and the holy grail: the vehicle controls itself in any condition a human driver could handle, with zero human intervention or oversight necessary.
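If it helps to see the taxonomy laid out concretely, here’s a minimal sketch of those six levels as a Python enum. The names loosely follow the SAE J3016 shorthand, and the helper function is purely illustrative, not anything from Lyft’s stack:

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE J3016 driving automation levels, as summarized in the talk."""
    NO_AUTOMATION = 0           # cruise control and ABS only
    DRIVER_ASSISTANCE = 1       # lane keeping or distance-aware cruise control
    PARTIAL_AUTOMATION = 2      # two or more assistance features combined
    CONDITIONAL_AUTOMATION = 3  # system drives, but may hand control back
    HIGH_AUTOMATION = 4         # fully autonomous within a restricted area
    FULL_AUTOMATION = 5         # autonomous anywhere a human could drive

def driver_must_supervise(level: SAELevel) -> bool:
    """At Level 2 and below, the human is responsible for monitoring."""
    return level <= SAELevel.PARTIAL_AUTOMATION
```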

Why Is This So Hard?
Virtual and empirical testing to ensure the sensor suite stands up to real-world conditions

Generally speaking, this is a sensor problem. By and large, the actual control of the vehicle is a solved problem. Alfred mentions that there are important issues to consider, like latency between the self-driving control hardware and the vehicle’s computer systems, but making the car go where you want it to go is already happening. Deciding, millisecond to millisecond, where the vehicle should go is the genuinely difficult part.
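To make that latency point concrete, here’s a toy sketch of a control loop that refuses to act on a stale plan. Every name and number here is hypothetical; nothing reflects Lyft’s actual software:

```python
import time

MAX_COMMAND_AGE_S = 0.05  # hypothetical 50 ms latency budget

def apply_latest_command(command, now=None):
    """Apply a steering/throttle command only if it is fresh enough.

    `command` is assumed to carry `timestamp`, `steering`, and `throttle`
    fields; these are illustrative names, not a real vehicle API.
    """
    now = time.monotonic() if now is None else now
    if now - command.timestamp > MAX_COMMAND_AGE_S:
        # Stale plan: fall back to a gentle, safe deceleration instead.
        return {"steering": 0.0, "throttle": 0.0, "brake": 0.3}
    return {"steering": command.steering, "throttle": command.throttle, "brake": 0.0}
```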

Pointing out the obvious, the road is crazy, people are unpredictable, and changes in road conditions like weather, closures, and construction make for an ever-changing playing field. Couple this with roadways that were designed for human drivers instead of robot operators and you have a crap shoot when it comes to interpreting sensor data. Interim solutions, like traffic lights that communicate directly with self-driving cars rather than relying on the sensors to detect their state, are possible ways forward that involve changes outside of the vehicles themselves.
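As a rough sketch of how a car might use such a connected traffic light, the logic could be as simple as preferring a fresh infrastructure broadcast over the camera’s best guess. The message format here is invented purely for illustration:

```python
def resolve_light_state(v2i_msg, camera_estimate, now, max_age_s=1.0):
    """Prefer an infrastructure-broadcast signal state when it is fresh.

    `v2i_msg` is a hypothetical object with `state` and `timestamp`
    attributes from a connected traffic light; `camera_estimate` is the
    vision system's inferred state. Neither interface is real.
    """
    if v2i_msg is not None and now - v2i_msg.timestamp <= max_age_s:
        return v2i_msg.state   # trust the light's own broadcast
    return camera_estimate     # fall back to perception
```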

Buzzing About Sensor Fusion

The most interesting part of the talk is Alfred’s discussion of sensor fusion. It’s a buzzword that gets thrown around a lot, but is rarely explained in depth with examples.

Some combination of cameras, lidar, and radar is used to sense the environment around the vehicle. Cameras are cheap and high resolution, but poor at determining distance and can be obscured by something as simple as road spray. Alfred calls lidar “super-fantastic”: it can depth-map the area around the vehicle, but it’s expensive and can’t detect color or markings. It’s also less reliable, because lidar sensors include moving parts. Radar can see right through some things that foil the other two, like fog, but its output is very low resolution.

Effect of dust and smoke on lidar

Combining all of these is the essence of sensor fusion, and one great example of how it works is the exhaust plume from a vehicle parked on the side of the road. Lidar picks up the particles in the cloud and would slam on the brakes if it were the only input for decision making. Radar sees right through it and reports no obstacle. And the camera can confirm that the parked vehicle has an exhaust pipe, so what the other sensors detected matches the expectation learned from past examples.
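To make the example concrete, here’s a toy version of that fusion rule. The inputs are made-up flags; a real stack would fuse probabilistic detections and tracks rather than booleans:

```python
def should_brake_for_obstacle(lidar_hit, radar_hit, camera_labels):
    """Toy fusion rule for the exhaust-cloud example from the talk."""
    # Lidar alone sees a dense point cloud and would call it an obstacle.
    if not lidar_hit:
        return False
    # Radar passes straight through vapor: no return, and the camera has
    # an explanation (a parked car's exhaust), so don't brake.
    if not radar_hit and "parked_vehicle_exhaust" in camera_labels:
        return False
    # Otherwise treat the detection as a real obstacle.
    return True
```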

Actually Doing It

This is fun to talk about, but Alfred Jones is actually doing it, and that means diving into the minutiae of engineering. It’s fascinating to hear him talk about the environmental testing used to proof the sensor array against huge temperature ranges, wet and humid conditions, and all the other challenges common to automotive applications. His thoughts on sensor recalibration in the Q&A at the end are also worth a listen. And all around, we’re just excited to hear from one of the engineers grinding away through the barriers in pursuit of the next big breakthrough.

Article source and credit: hackaday.com, https://hackaday.com/2020/11/24/alfred-jones-talks-about-the-challenges-of-designing-fully-self-driving-vehicles/
