AI as a moral superpower

Our vigilance about the risks of AI doing evil should not blind us to AI's potential to make us more, not less, moral. In the plainest terms, AI is a technology for automating decision-making. And if we can use AI to make our decisions more ethical and less error-prone, then there is a moral imperative to use it.

If AI technology can figure out the right thing to do in a fraction of a second, ahead of an imminent motor vehicle crash, then it would be unethical to confine the task to humans. We humans cannot calculate the complex trajectories and all the possible outcomes of a car crash, let alone deliberate about whose life we should save. But machines can think that fast, so maybe we should let them do it. That does not amount to surrendering moral power to the machines, but rather to extending our own moral powers to unprecedented domains and time-scales.

Oxford University philosophers Alberto Giubilini and Julian Savulescu ask you to imagine trying to dispose of a coffee cup. You do not know what material the cup is made of, or whether a particular bin leads to an efficient recycling plant. So you fall short of your own moral standard simply because you lack sufficient information. An 'Artificial Moral Advisor' (AMA) can, Giubilini and Savulescu argue, use AI to provide us with quicker and more efficient moral advice than our limited brains can manage. Just as map apps help us navigate a new city because they have detailed, up-to-date information about street layouts and traffic patterns, the AMA can help us navigate moral territory more efficiently, helping us, both individually and collectively, reach our destination: a good life and a good society.

References

  • Giubilini, A. & Savulescu, J. The Artificial Moral Advisor. The ‘Ideal Observer’ Meets Artificial Intelligence. Philos. Technol. 31, 169–188 (2018).
