Automakers, technology companies and practically every industry observer or analyst are trumpeting the arrival of a new automotive era – that of driverless cars. Most of the companies involved in the emerging sector have figured out the mechanical part of the equation – but what about the philosophical?
The riddle is simple – we don't know whether autonomous cars will arrive in 5, 10 or 20 years, but arrive they will. And while they are promoted as major safety enhancers – taking out of the driving equation the human factor, which is at least partly responsible for the vast majority of accidents – they also need to solve a hard issue. In an Asimov-style twist on the laws of robotics, we now need to decide whether we're going to grant machines power over who lives or dies in a crash. Taken to the extreme, let's imagine a scenario – should a self-driving car sacrifice its occupant(s) by swerving off a cliff to avoid killing a school bus full of children? That's a thorny philosophical question that needs to be addressed even as the industry hurries toward a future where autonomous vehicles will be in harmony with each other and with the rest of the drivers.
These shades of gray – no pun intended – will be explored with the help of ethicists and philosophers. Executives from Ford, General Motors, Audi, Renault and Toyota are making trips to Stanford University's Center for Automotive Research, which experiments with programming ethical decisions into cars to see what transpires. "This issue is definitely in the crosshairs," comments Chris Gerdes, the head of the lab. "This is going to set the tone for all social robots," says philosopher Patrick Lin, who runs the Ethics and Emerging Sciences Group at California Polytechnic State University.