What should a self-driving car do when a nearby vehicle is swerving unpredictably back and forth on the road as if its driver were drunk? What about encountering a vehicle driving the wrong way? Before autonomous cars are on the road, everyone should know how they'll respond in unexpected situations.
I develop, test and deploy autonomous shuttles, identifying methods to ensure self-driving vehicles are safe and reliable. But there's no testing track like the country's actual roads, and no way to test these new machines as thoroughly as modern human-driven cars have been, with trillions of miles driven every year for decades. When self-driving cars do hit the road, they crash in ways both serious and minor. Yet all their decisions are made electronically, so how can people be confident they're driving safely?
Fortunately, there's a common, popular and well-studied method to ensure new technologies are safe and effective for public use: the testing system for new medications. The basic approach involves ensuring these systems do what they're intended to, without any serious negative side effects, even if researchers don't fully understand how they work.
Self-driving cars are expected to improve road safety, freeing up drivers' time and attention and transforming cities and even societies.
The regulations that are created for self-driving cars will have massive effects that ripple throughout the economy and society. The rules are likely to come from some combination of the two current automotive regulators, the federal National Highway Traffic Safety Administration and state departments of transportation.
Federal rules focus primarily on safety standards for structural, mechanical and electrical components of the vehicles, like airbags and seat belts. States can enforce their own safety rules: for example, regulating emissions and handling driver licensing and vehicle registration, which often also includes requiring insurance coverage.
Today's state and federal rules treat drivers and cars as separate entities. But self-driving cars, by definition, combine the two. Without consistency between those regulations, confusion will reign.
The Obama administration came up with 116 pages of regulations with lots of details, but little understanding of how self-driving cars worked. For example, they called for each car to have human-readable permanent labels listing its specific self-driving capabilities, including limits on speeds, specific highways and weather conditions, all of which would be extremely confusing for users. The regulations also called for ethical decisions to be made "consciously and intentionally," which is questionable, if not impossible, for a machine.
The Trump administration pared down the rules to 26 pages, but has not yet addressed the important issue of testing self-driving cars.
Testing algorithms is much like testing medications. In both cases, researchers can't always tell exactly why something works (especially in the case of machine learning algorithms), but it is nevertheless possible to evaluate the outcome: does a sick person get well after taking a medication?
The US Food and Drug Administration requires medicines to be tested not for their mechanisms of treatment, but for the results. The two main criteria are effectiveness, how well the medicine treats the condition it's intended to, and safety, how severe any side effects or other problems are. With this method, it's possible to prove a medication is safe and effective without knowing how it works.
Similarly, federal regulations could, and should, require testing for self-driving cars' algorithms. To date, governments have tested cars as machines, ensuring steering, brakes and other functions work properly. Of course, there are also government tests for human drivers.
A machine that does both should have to pass both types of tests, particularly for vehicles that don't allow for human drivers.
In my view, before allowing any specific self-driving car on the road, NHTSA should require test results from the car and its driving algorithms to demonstrate they are safe and reliable. The closest standard at the moment is California's requirement that all manufacturers of self-driving cars submit annual reports of how many times a human driver had to take control of its vehicles when the algorithms failed to function properly.
That's a good first step, but it doesn't tell regulators or the public anything about what the vehicles were doing or what was happening around them when the humans took over. Tests should examine what the algorithms direct the car to do on freeways with trucks, and in neighbourhoods with animals, kids, pedestrians and cyclists. Testing should also look at what the algorithms do when both vehicle performance and sensor input are compromised by rain, snow or other weather conditions. Cars should run through scenarios with temporary construction zones, four-way intersections, wrong-way vehicles, and police officers giving directions that contradict traffic lights, among other situations.
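The article doesn't prescribe any particular test design. One way regulators could make sure no class of scenario gets skipped is to enumerate every combination of environment, weather and hazard. The sketch below is purely illustrative; all the category names are invented here, not drawn from any real testing standard.

```python
# Hypothetical scenario matrix for regulator-run algorithm tests.
# The environment, weather and hazard names are invented for
# illustration, loosely following the examples in the text above.
from itertools import product

ENVIRONMENTS = ["freeway_with_trucks", "residential_with_pedestrians",
                "construction_zone", "four_way_intersection"]
WEATHER = ["clear", "rain", "snow"]
HAZARDS = ["none", "wrong_way_vehicle", "officer_overriding_signal"]

def build_test_matrix():
    """Enumerate every environment/weather/hazard combination,
    so no scenario class is silently omitted from testing."""
    return [{"environment": e, "weather": w, "hazard": h}
            for e, w, h in product(ENVIRONMENTS, WEATHER, HAZARDS)]

matrix = build_test_matrix()
print(len(matrix))  # 4 * 3 * 3 = 36 scenario combinations
```

A full test program would attach pass/fail criteria to each combination, but even this simple cross-product makes gaps in coverage visible at a glance.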
Human driving tests include some evaluations of a driver's judgment and decision-making, but tests for self-driving cars should be more rigorous because there's no way to rely on human-centred concepts like instinct, reflex or self-preservation. Any action a machine takes is a choice, and the public should be clear on how likely it is that those choices will be safe ones.
Comparing with humans
Self-driving cars' algorithms constantly calculate probabilities. How likely is it that a particular shape is a person? How likely is it that the sensor data means the person is walking toward the road? How likely is it that the person will step into the street? How likely is it that the car can stop before hitting her? This is in fact similar to how the human brain works.
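To make the chain of questions above concrete, here is a minimal sketch of how such probability estimates might be combined into a single braking decision. The function name, the multiplicative model and all the numbers are assumptions for illustration only; real driving stacks are vastly more sophisticated.

```python
# Illustrative sketch: chaining the probability estimates described
# above into one go/no-go braking decision. All names, the simple
# multiplicative model, and the threshold are invented for this example.

def should_brake(p_is_person, p_walking_toward_road, p_steps_into_street,
                 p_can_stop_in_time, risk_threshold=0.01):
    """Brake early if the estimated probability of hitting a
    pedestrian exceeds a fixed risk threshold."""
    # Probability the detected shape is a person who enters the street
    p_enters_street = p_is_person * p_walking_toward_road * p_steps_into_street
    # Probability of a collision if the car waits instead of braking now
    p_collision = p_enters_street * (1.0 - p_can_stop_in_time)
    return p_collision > risk_threshold

# Example: a shape that is 90% likely a person, 60% likely heading
# toward the road, 30% likely to step out, with an 80% chance the
# car could still stop in time if it waited.
print(should_brake(0.9, 0.6, 0.3, 0.8))  # 0.162 * 0.2 = 0.0324 > 0.01, prints True
```

The point of the sketch is that each answer is itself a probability, and the final action depends on how those probabilities are combined and thresholded, which is exactly what a regulator's test would need to probe.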
That presents a straightforward opportunity for testing autonomous cars and any software updates a manufacturer might distribute to vehicles already on the road: They could present human test drivers and self-driving algorithms with the same scenarios and monitor their performance over many trials. Any self-driving car that does as well as, or better than, people can be certified as safe for the road. This is very much like the method used in drug testing, in which a new medication's performance is rated against existing therapies and methods known to be ineffective, like the typical placebo sugar pill.
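A comparison like the one described above could be scored statistically, much as drug trials are. As a purely illustrative sketch (the function, the failure counts and the one-sided z-test framing are assumptions, not anything proposed in the text), one could ask whether an algorithm's failure rate over many identical scenarios is significantly lower than the human baseline:

```python
# Illustrative sketch: a one-sided two-proportion z-test asking whether
# the algorithm's failure rate is significantly lower than the human
# drivers' rate over the same scenarios. All numbers are invented.
import math

def beats_human_baseline(algo_failures, human_failures, n_trials,
                         z_critical=1.645):
    """Return True if the algorithm's failure proportion is
    significantly lower than the human baseline (alpha = 0.05)."""
    p_algo = algo_failures / n_trials
    p_human = human_failures / n_trials
    # Pooled proportion under the null hypothesis of equal rates
    p_pool = (algo_failures + human_failures) / (2 * n_trials)
    se = math.sqrt(2 * p_pool * (1 - p_pool) / n_trials)
    z = (p_human - p_algo) / se
    return z > z_critical

# Example: over 10,000 identical simulated scenarios, the algorithm
# fails 40 times while human drivers fail 80 times.
print(beats_human_baseline(40, 80, 10000))  # prints True
```

Just as a drug must outperform a placebo with statistical significance before approval, a certification rule could require the algorithm's disengagement or crash rate to beat the human rate with a margin that rules out luck.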
Companies should be free to test any innovations they want on their closed tracks, and even on public roads with human safety drivers ready to take the wheel. But before self-driving cars become regular products available for anyone to purchase, the public should be shown clear proof of their safety, reliability and effectiveness.