Autonomous vehicles need to be smart enough to consider human error

Companies ranging from Internet giants such as Google Inc. to traditional automakers such as Mercedes-Benz and Nissan are locked in an all-out race to be the first to deliver tomorrow’s self-driving vehicles.

But as these companies spearhead the research and development of cars with the artificial intelligence to drive themselves, even the smartest vehicles can sometimes be bested by the very factor that triggered their development in the first place – human error. In one recent incident, a Volvo employee who was about to showcase new safety features drove an XC60 sport utility vehicle into a group of bystanders. The driver mistakenly believed the vehicle was equipped with an optional system that detects pedestrians and brakes automatically in case of peril; even if it had been, the SUV’s speed may have been higher than the system could handle. Nor is this an isolated case – Google’s test fleet of self-driving cars was involved in 13 accidents over six years of driving.

Although the smart cars of the future are touted as preventing accidents overall, these incidents highlight the challenges the auto sector will face: in their first years of operation, computer-driven cars will need to share the road with the massive number of vehicles still driven by people. Moreover, now that the era of semi-autonomous driving has already begun, handing control back and forth between the car and the human is one key problem, according to Philippe Crist, an OECD economist who coordinated a recent study on autonomous driving.

Via Bloomberg