Will You Like Your Self-Driving Car? And Will It Like You?

Freedom via automobile—and the box it draws around everything else—has defined most of the past century, from the layouts of our cities to the designs of our homes and even the music we enjoy. Yet almost as soon as cars started defining the cultural zeitgeist, our collective imaginations turned to the next step: the freedom of a car, combined with freedom from having to actually drive it. It certainly hasn’t happened yet, but what will happen when our cars finally can drive themselves? What will those cars be like? Is it possible they might even have synthesized personalities? What’s the Lemon law for a car that is mechanically sound but turns out to be a morose, whiny annoyance?

If your self-driving car’s personality sounds like a fanciful problem, well, it is—probably. But there may be unanticipated outcomes from achieving the sort of generalized computer intelligence necessary for self-driving cars, and that’s worth thinking about. Worse, we humans may be as big a hurdle to the adoption of fully self-driving cars as developing the technology is, with or without the hassle of a car that’s also a smug, self-satisfied jerk.

We Don’t Have Self-Driving Cars Yet

Concepts for self-driving cars have been around for more than 80 years, and while several projects today demonstrate working autonomous-transport systems, they operate only in closed, relatively controlled environments. The world at large has aimed for the self-driving target more or less continuously since the DARPA Grand Challenge dangled a $1,000,000 prize in 2004 for the team that could navigate 300 miles of California and Nevada desert (spoiler alert: the prize went unclaimed, and the top team made it only 7.5 miles from the start). Today, Waymo is working on a self-driving system, GM has its Super Cruise system, and many other marques have their own systems in the works, too, but none are ready to take the wheel completely. And as much as Tesla may tout its Autopilot system, try it in real life (or read the Autopilot section of the Tesla Model S owner’s manual, an excerpt of which is shown below) and you’ll find, one way or another, that it’s not ready for general self-driving either:

Limitations

Many factors can impact the performance of Autopilot components, causing them to be unable to function as intended. These include (but are not limited to):

Poor visibility (due to heavy rain, snow, fog, etc.).
Bright light (due to oncoming headlights, direct sunlight, etc.).
Damage or obstructions caused by mud, ice, snow, etc.
Interference or obstruction by object(s) mounted onto the vehicle (such as a bike rack).
Obstruction caused by applying excessive paint or adhesive products (such as wraps, stickers, rubber coating, etc.) onto the vehicle.
Narrow or winding roads.
A damaged or misaligned bumper.
Interference from other equipment that generates ultrasonic waves.
Extremely hot or cold temperatures.

WARNING: The list above does not represent an exhaustive list of situations that may interfere with proper operation of Autopilot components. Never depend on these components to keep you safe. It is the driver’s responsibility to stay alert, drive safely, and be in control of the vehicle at all times.
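To make the spirit of that list concrete, here is a minimal, purely hypothetical Python sketch of how a driver-assistance system might gate its features on conditions like these. None of the names, conditions, or thresholds below come from Tesla or any real system; the point is simply that today's systems hand responsibility back to the human whenever any operating limit is exceeded.

from dataclasses import dataclass

@dataclass
class Conditions:
    """Hypothetical snapshot of the environment and vehicle state."""
    visibility_m: float          # estimated visibility distance in meters
    glare: bool                  # oncoming headlights or direct sunlight
    sensors_obstructed: bool     # mud, ice, snow, stickers, bike rack, etc.
    road_width_m: float          # narrow or winding roads reduce confidence
    ambient_temp_c: float        # extreme temperatures degrade sensors

def assist_available(c: Conditions) -> bool:
    """Return True only when every made-up operating limit is satisfied.

    Real systems are far more nuanced; this toy check just mirrors the
    manual's warning that the driver stays in control at all times.
    """
    if c.visibility_m < 100:
        return False
    if c.glare or c.sensors_obstructed:
        return False
    if c.road_width_m < 3.0:
        return False
    if not -20.0 <= c.ambient_temp_c <= 45.0:
        return False
    return True

if __name__ == "__main__":
    foggy_morning = Conditions(visibility_m=60, glare=False,
                               sensors_obstructed=False,
                               road_width_m=3.5, ambient_temp_c=8.0)
    print(assist_available(foggy_morning))  # False: the human keeps the wheel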

Given the rapid and broad spread of driving-assistance features throughout the automotive industry (driven by customer demand, according to the manufacturers), it’s clear that, if they’re at all feasible, self-driving cars are coming eventually. But first, we’ll need some combination of industry and government to get self-driving cars and the infrastructure they’ll need operating on a common framework. Even if we manage to tick all the regulatory boxes inside the next decade, however, there are other hurdles.

Self-Driving Cars Are Hard

To date, data acquisition—modeling the car’s external environment and modeling real-world driver behavior in known circumstances—is not the primary hurdle to a fully self-driving car, though it is a big one. The bigger stumbling block is getting all those sensors and computers to turn the data they acquire into a picture of the world that matches a human’s in detail and nuance. That analysis, a task that typically falls to some form of artificial intelligence, is where the largest difficulty lies.
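As a rough illustration of where the difficulty sits, here is a deliberately simplified Python sketch. Every class name and number is invented for this example, and the "hard" classification and prediction steps are left as placeholders, because that is exactly the part that demands serious AI.

import math
from dataclasses import dataclass

@dataclass
class RadarReturn:
    """One raw detection: cheap to produce, meaningless on its own."""
    range_m: float      # distance to the detection in meters
    bearing_deg: float  # angle relative to the car's heading
    speed_mps: float    # closing speed

@dataclass
class TrackedObject:
    """The kind of description a human driver actually works with."""
    x_m: float
    y_m: float
    speed_mps: float
    label: str          # pedestrian? cyclist? shopping cart?
    intent: str         # crossing? merging? staying put?

def fuse_and_interpret(returns):
    """Convert raw detections into scene descriptions.

    The geometry below is trivial; labeling each object and predicting its
    intent, faked here with "unknown" placeholders, is the part that needs
    heavyweight machine learning in a real perception stack.
    """
    objects = []
    for r in returns:
        x = r.range_m * math.cos(math.radians(r.bearing_deg))
        y = r.range_m * math.sin(math.radians(r.bearing_deg))
        objects.append(TrackedObject(x, y, r.speed_mps,
                                     label="unknown", intent="unknown"))
    return objects

if __name__ == "__main__":
    scene = fuse_and_interpret([RadarReturn(42.0, 5.0, -3.2)])
    print(scene)  # the coordinates are easy; "cyclist about to merge" is not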

Artificial intelligence, or AI, has been part of the popular consciousness for decades now, too, whether as robots nearly indistinguishable from humans, like the Replicants in Blade Runner; buddy robots, like C-3PO and R2-D2 in Star Wars; or inhuman, disembodied intelligences, like HAL 9000 in 2001: A Space Odyssey and Skynet in The Terminator. And that non-exhaustive list of rather fantastic movies spans only the years 1968-1984. Movies featuring some aspect of artificial intelligence go back as far as the 1920s, but they are more popular today than ever, with dozens of big-screen films either including AI or making it a central part of the plot.

As much as the movies may make it seem like fully sentient, superhuman artificial intelligence is already here, or at the very least just around the corner, there remain many technological hurdles to achieving an artificial intelligence capable of navigating an open environment in a human-approved, predictable way. Such an AI, after all, even without exhibiting any form of sentience (i.e., conscious knowledge of itself as an entity), would be capable of executing most human tasks in the real world, since driving is one of the more complex tasks in which humans regularly engage. It might make most human-staffed retail jobs obsolete; any computer that can be trained to navigate its way safely across Manhattan, Paris, or Tokyo with no instructions except “end up here” can definitely learn to pour a perfect latte or ring up some jeans.
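For a sense of what "end up here" looks like in code, here is a toy Python route finder on a tiny street grid. It is purely illustrative and leaves out everything that makes real driving hard: pedestrians, weather, other drivers, and the rest.

from collections import deque

def shortest_path(grid, start, goal):
    """Breadth-first search on a toy street grid (0 = open, 1 = blocked).

    Given nothing but a goal cell ("end up here"), it returns the cells to
    drive through. This textbook routing step is the easy part; the hard
    part is everything a real road adds on top of the grid.
    """
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and nxt not in came_from:
                came_from[nxt] = cell
                frontier.append(nxt)
    return None  # no route at all

if __name__ == "__main__":
    city = [[0, 0, 0],
            [1, 1, 0],
            [0, 0, 0]]
    print(shortest_path(city, (0, 0), (2, 0)))
    # [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0)]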

Why Your Self-Driving Car Might Need a Personality

Get to the point, already, right? If you’ve stuck with me this far, we’re on the same page about the state of the art of the self-driving car, and we have some idea of the challenges left to tackle. It’s conceivable, then, that the level of intelligence necessary to drive a vehicle safely amongst humans on foot, bicycle, scooter, and board, not to mention human-driven cars, pets, wildlife, weather, and all the rest, is a level of intelligence that might bring with it some other facets of intelligence—a personality, for example. It might even be that a personality is a consequence of that much intelligence, or even somewhat less; I live with some very personable pets that I wouldn’t trust with an RC car, let alone a real one.

Don’t get me wrong; I’m not saying your car will be alive, or sentient, or even aware in any meaningful sense. But with enough AI horsepower to fully take the place of a human driver, it’s hard to imagine there wouldn’t be a significant amount of AI applied to the human-car interaction as well. Without getting into the realm of science fiction, far more is possible than today’s basic learning algorithms and trainable voice-recognition systems, including the possibility that your car might have a personality, or what passes for one.

In fact, something like a personality might be a necessity for a sufficiently smart car, not because the car needs it, but because we do. In 2017, a study published in the journal Frontiers in Robotics and AI, by researchers from the University of Salzburg, the University of the West of England, and the Austrian Institute of Technology, found that humans like mistake-making robots a lot more than we like flawless robots.

The research suggests this somewhat unlikely outcome is due not to anthropomorphism or to ascribing lower intelligence to the robot, but to a well-known facet of human psychology called the Pratfall Effect. The researchers even presented a reformulated take on the Pratfall Effect to account for their findings: “Imperfections and mistakes carry the potential of increasing the likability of any social actor (human or robotic).” They also noted several previous results that likewise confirmed the applicability of the Pratfall Effect to human-robot interactions.

Well, hold your horses: who said anything about robots? We’re talking about self-driving cars, right? You say Mr. Potato, I say Mr. Roboto. A car that can find its way dozens or hundreds of miles through varied terrain and weather without any human intervention is, by definition, a very advanced robot.

What Your Self-Driving Car’s Personality Portends

While it would be problematic for your car to express its personality by mistakenly driving you through a deli, it might make sense for it to have more “human” moments when interacting directly with its occupants. If a car’s human-machine interaction (HMI) system were to intentionally seed in casual mistakes on occasion to create a better user experience, those mistakes might, collectively, be taken for a personality. If not a full personality—you won’t be retelling its jokes or making apologies for its insensitivity—then a personality in the sense that those mistakes, whether programmed into the firmware at the factory or created spontaneously by a learning AI, shape the mental and emotional content in our minds when we think about our interactions with the car.
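To show how artificial that "personality" could be, here is a minimal, entirely hypothetical Python sketch of an HMI layer that occasionally injects a harmless verbal quirk into its replies. The quirk list, the probability, and the function names are all invented for illustration, and nothing about the driving logic is touched.

import random

# Entirely hypothetical canned imperfections; no production system is implied.
CANNED_QUIRKS = [
    "Sorry, I almost suggested the scenic route again.",
    "Hmm, give me a second... okay, got it.",
    "I keep mispronouncing that street name, don't I?",
]

def respond(message, quirk_rate=0.1, rng=None):
    """Return the car's reply, occasionally prefixed with a harmless quirk.

    quirk_rate is the made-up probability of injecting an imperfection;
    only the small talk is affected, never the driving.
    """
    rng = rng or random.Random()
    reply = "Okay: {}.".format(message)
    if rng.random() < quirk_rate:
        reply = "{} {}".format(rng.choice(CANNED_QUIRKS), reply)
    return reply

if __name__ == "__main__":
    rng = random.Random(7)  # fixed seed so the example repeats exactly
    for _ in range(5):
        print(respond("navigating to work", quirk_rate=0.5, rng=rng))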

Great! We’ve taken the driver out of the picture, but now we have an unwanted passenger tagging along on every ride, and we’re not even ridesharing. If that morning commute feels long now, what with having to monitor the traffic occasionally between tweets (all too common even in cars without any self-driving capability), imagine what it’ll be like when the computer wakes you up from your power nap to ask whether you saw the latest episode of your favorite show last night, just because it wants to talk. Bad, to be sure, but worse might be a future where your self-driving car has neither a personality nor an attempt at one, a car so flawless it inspires no emotion at all except, perhaps, a vague sense of dread.

Will you like your self-driving car? Probably, because it will probably be engineered to be likable. But maybe—just maybe—it’ll all come down to luck of the draw, and a typical Craigslist car listing of the 2070s will look more like a personal ad:

“2072 Honda Clarsight SFCEV for sale by self, $188,003,000,343 [ed.—Did we forget to mention the hyperinflation of the 2030s?] 22,000-mile car enjoys long drives along the beach, cool mornings in the driveway, and a fine late-vintage hydrogen on the weekends. Seeking occupant for work and pleasure. No smokers, no pets, and please, for the love of asphalt, no more than 5 passengers. Mileage is current, but I will continue to shuttle around this crew of jerks until someone less annoying is willing to take on my title loan, so expect that number to rise.”
