When Self-Driving Cars Make Mistakes, Who Takes Over?

UTAH. Driverless cars have already taken to the road in Las Vegas, where research vehicles are serving as taxis. And, according to the Salt Lake Tribune, Utah may become the first state to make autonomous vehicles legal. The proposed law would include special insurance requirements for autonomous vehicles. Yet autonomous vehicles are still being tested for safety, and there are situations where they continue to have difficulties. Who steps in when these vehicles encounter problems? A recent incident in Arizona, in which a driverless vehicle struck a pedestrian, has also raised important questions about autonomous vehicle safety. Testing has been suspended pending investigation, and it isn't clear how the accident will affect Utah's proposed legislative changes.


Enter the remote driver. According to the New York Times, for every driverless vehicle being tested on the road, there is a person, possibly hundreds of miles away, monitoring the car. The human monitor checks that the car is operating as planned and, if anything goes wrong, can intervene. These operators sit in front of what looks like a video game console, complete with a steering wheel and brakes, and in the event of an incident they can take over the vehicle.

Some autonomous vehicle companies believe that remote monitoring will be a necessary requirement for the driverless cars of the future. But does remote monitoring create its own set of liabilities? What happens if a remote driver makes a mistake? What happens if a remote driver erroneously intervenes and causes a crash? What happens if a bad actor works as a remote driver? Could remote operation make the vehicles more vulnerable to terrorist attacks?

In a growing number of states, companies are permitted to test their driverless vehicles as long as a remote operator is watching. Yet this doesn't answer some important questions about the safety of remote operation. After all, there is certainly a difference between driving a car in person and driving one remotely. The arrangement also assumes that the remote operator is closely monitoring the situation. As we have seen with partially autonomous cars, people can go on autopilot themselves once they believe the technology is reliable enough. Will remote operators be paying close enough attention to know when they need to step in? It isn't clear whether a remote driver was monitoring the vehicle in the recent Arizona pedestrian accident.

The more likely scenario is that the car will simply stop when it encounters a hazard; if it stops, it will need a human operator to get it moving again. For example, some autonomous vehicles have difficulty with roundabouts. Having a human operator take control during these tough decision-making moments can prevent stalled autonomous vehicles from cluttering city roadways.

The technology itself could signal the human observer as well. If the system is aware of its own limitations, it becomes easier to flag problems and summon help. Some autonomous vehicle companies are looking even further ahead, to a time when people may not know how to drive and cars may simply not have steering wheels.

The presence of more autonomous vehicles on the road is likely to bring changes to personal injury law and car accident law. The Truman Law Firm, P.C. is a car accident law firm in Utah that is closely watching these changes. Autonomous vehicles will not be infallible, and when individuals get hurt, lawyers may need to step in to help. If self-driving cars still have human operators working behind the scenes, human liability claims, in addition to defective-product claims, could be a possibility.

As it stands, most car accidents are the result of human error. If you've been hurt in a car accident, you have rights under the law. Visit our firm at http://trumanlawfirm.com/ to learn more.