Watch out for AI henchmen

In the 1974 film The Godfather Part II, directed by Francis Ford Coppola, Michael Corleone (played by Al Pacino) takes over as the new Don of the Corleone family. After being betrayed by his brother Fredo, Michael has him killed. But he does not do the dirty deed himself. Instead, he delegates the job to his personal henchman, Al Neri. In doing so, Michael avoids the emotional toll of killing a family member by his own hand. He also distances himself from the crime, making it more difficult for him to be prosecuted. One might argue that had Michael not had a personal henchman at his disposal, Fredo might have remained alive.

The ability to delegate a dirty job to someone else can tempt us to cause harm to others. Can our ability to delegate to AI have the same effect? Delegating dirty work to an AI differs from outsourcing it to a fellow human. For one, the inner workings of an AI system’s decisions are often invisible and incomprehensible. Letting such ‘black box’ algorithms execute tasks on one’s behalf increases ambiguity and plausible deniability even further, blurring responsibility for any harm caused. Moreover, when we entrust machines with tasks that can hurt people, the potential victims remain psychologically distant and abstract, yet another ingredient that can tempt good people to do bad things.

As a concrete example, suppose you run a business that sells products on eBay or Amazon, or rent out an apartment through Airbnb, and suppose you can delegate pricing to an AI that sets prices on your behalf dynamically, based on market conditions. Recent research reveals that letting algorithms set prices can lead to algorithmic collusion: different AIs, acting on behalf of different people, can learn to fix non-competitive prices without ever communicating. If your AI does this, you benefit while enjoying plausible deniability, since you never told it to manipulate the market. Another example is the use of AI-powered social media bots to bully others online in creative ways; here you intend the harm, but can distance yourself psychologically from the act. A third example is the use of AI systems for interrogation, which might threaten torture without our knowledge, or with our knowledge but without our knowing the details. All of these examples highlight the temptation to remain oblivious, or willfully ignorant, of the fact that an AI might use unethical tactics to further our goals.
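
To see how such collusion can emerge without any explicit instruction, consider the following minimal sketch in Python. It is not the setup used by Calvano and colleagues: the price grid, the winner-takes-all demand rule, and all learning parameters here are illustrative assumptions. Two independent Q-learning agents repeatedly choose prices, each observing only its rival’s last price, and nothing in the code tells them to cooperate.

```python
# Minimal sketch: two independent Q-learning pricing agents in a toy
# duopoly. All numbers below are illustrative assumptions, not the
# parameters of any published study.
import random

random.seed(0)                        # reproducibility of the illustration

PRICES = [1.0, 1.5, 2.0, 2.5]         # hypothetical discrete price grid
COST = 1.0                            # assumed constant unit cost
EPISODES = 100_000                    # illustrative training length
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.05   # learning rate, discount, exploration

def profit(my_price, rival_price):
    """Toy winner-takes-all demand: the cheaper firm captures the
    whole market; a tie splits it evenly."""
    if my_price < rival_price:
        share = 1.0
    elif my_price == rival_price:
        share = 0.5
    else:
        share = 0.0
    return (my_price - COST) * share

# Each agent conditions only on the rival's previous price (its 'state').
Q = [{s: {a: 0.0 for a in PRICES} for s in PRICES} for _ in range(2)]
last = [random.choice(PRICES), random.choice(PRICES)]

for _ in range(EPISODES):
    # Both agents pick a price (epsilon-greedy) given the rival's last price.
    acts = []
    for i in range(2):
        state = last[1 - i]
        if random.random() < EPS:
            acts.append(random.choice(PRICES))
        else:
            acts.append(max(Q[i][state], key=Q[i][state].get))
    # Standard Q-learning update for each agent, using only its own profit.
    for i in range(2):
        state, next_state = last[1 - i], acts[1 - i]
        reward = profit(acts[i], acts[1 - i])
        best_next = max(Q[i][next_state].values())
        Q[i][state][acts[i]] += ALPHA * (reward + GAMMA * best_next
                                         - Q[i][state][acts[i]])
    last = acts

# Read off the learned (greedy) prices. In runs like this, they can end
# up above the competitive outcome of pricing at COST, despite the
# agents never communicating.
greedy = [max(Q[i][last[1 - i]], key=Q[i][last[1 - i]].get) for i in range(2)]
print("Learned prices:", greedy)
```

The point of the sketch is the reward structure rather than the specific numbers: each agent is rewarded only for its own profit, yet agents of this kind can end up sustaining prices above the competitive level, with tacit coordination arising purely as a by-product of independent trial and error.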

References

  • Caldwell, M., Andrews, J. T. A., Tanay, T. & Griffin, L. D. AI-enabled future crime. Crime Science 9, 14 (2020).

  • Calvano, E., Calzolari, G., Denicolò, V. & Pastorello, S. Artificial intelligence, algorithmic pricing and collusion. SSRN Electronic Journal, doi:10.2139/ssrn.3304991.

  • McAllister, A. Stranger than science fiction: The rise of AI interrogation in the dawn of autonomous robots and the need for an additional protocol to the UN convention against torture. Minn. Law Rev. 101, 2527 (2016).
