Strive for equal access to AI’s benefits

Wealthy people have always been able to afford more safety—they can buy bigger, heavier cars equipped with more airbags and the latest safety systems. This phenomenon is unlikely to reverse in the foreseeable future. 

This raises an important question: Will people with higher socio-economic status have greater access to AI’s benefits? Or will these benefits be shared more widely?

In the Moral Machine experiment, which presented people with ethical dilemmas about who a driverless car should save in an unavoidable accident, our participants believed status should be taken into account. All else being equal, a business executive was almost 40% more likely to be spared in a dilemma involving unavoidable harm, compared to a homeless person.

There are many possible explanations for this pattern of responses. One is that respondents are elitists who look down on homeless people. Another is that they believed business executives make a larger net contribution to society. Our participants also prioritized the lives of doctors, presumably because doctors save other human lives, so sparing a doctor would lead to even more lives saved.

Regardless of the source of this preference, it requires serious discussion. In the absence of legal pressure or public-opinion pressure to treat all adults equally, car companies may simply program their cars to minimize liability. And since a business executive is more likely to be able to afford a lawsuit against the carmaker, it is plausible that cars will learn to prioritize the safety of the wealthy.

There is an even more serious danger: market forces could offer wealthy people extra protection in exchange for payment. In the absence of regulation, a person might be able to purchase an electronic chip, recognizable by driverless cars, that grants them and their family preferential treatment.

Naturally, the question of who gets to reap the benefits of AI extends far beyond driverless-car safety. Whenever AI systems create externalities (side effects imposed on third parties), there is a market incentive to sell insurance against those risks, and the wealthy will be best positioned to buy it.

References

  • Awad, E. et al. The Moral Machine experiment. Nature 563, 59–64 (2018).
