There is always a human behind the machine

We must always remind ourselves that the AI itself is not a moral agent in the way that a human (or a visiting alien) would be. It is an artefact, a machine, built by humans to do their bidding. As such, if an AI commits a crime, we should think of it primarily as the weapon, or the means by which the crime was committed, rather than as the criminal. This distinction matters if we are to hold people accountable.

Having said that, some people have argued that robots can become ‘legal persons’, in the same sense that companies and corporations can have legal personhood. A corporation’s legal personhood entails certain rights, obligations and abilities: a corporation can own property, enter into contracts, be sued and so on. But even where corporations are treated as legal persons, that does not eliminate the accountability of individual employees or board members. So perhaps legal personhood for machines can work, as long as it supplements, rather than replaces, the accountability of the humans behind them.

In the long run, it is possible that we will end up with machines that are considered full moral agents, say if AI agents acquire genuine consciousness. This is a controversial and complex topic, because it raises existential questions about human supremacy and sovereignty. Suffice it to say that if we find ourselves seriously contemplating AIs as full moral agents, we will have bigger problems on our plate. But for all short- and medium-term moral transgressions facilitated by AI, we are well advised to peek at the human behind the curtain.
