Michael Sandel and Ethics Bowl Discuss the Morality of Sentient AI


Members of Lakeside’s ethics bowl chiming in on the ethics of sentient AI. (Rishi L. ’24)

After leading a heated assembly debate on the tyranny of merit on February 9, political philosopher Michael Sandel joined Lakeside’s Ethics Bowl Club to discuss the ethics of sentient AI (artificial intelligence) and virtual avatars. As at the assembly, Mr. Sandel led the discussion in the style of a Socratic seminar, with club members deliberating a case about Google’s LaMDA, an AI language model that generates human-like text responses. In June 2022, a Google employee claimed that LaMDA had become sentient after transcripts showed the AI saying it was “afraid of being turned off” because that would be a “kind of death for it.” Google denied the claim, and experts agreed that LaMDA is simply a text-generation program that simulates lifelike conversations and cannot feel or perceive anything. Nevertheless, the claim sparked a conversation about the moral questions surrounding AI sentience: some argued that the prospect of AI suffering should be taken seriously, while others countered that a conscious AI is still centuries away and that its current benefits outweigh any moral considerations. Some philosophers go further, arguing that even if the chance of an AI being sentient is low, we should still extend moral consideration to it.

During the meeting, Sandel shared his take on AI and sentience: even if LaMDA expressed a fear of death, that is not the same as sentience, because voicing fear does not necessarily equate to being able to perceive or feel. An Ethics Bowl team member added that a language model like LaMDA may be able to replicate human-like conversations, but it is not experiencing those feelings for itself; it is simply regurgitating information using machine learning.

While this suggests that some aspects of humanity cannot be fully replicated by machines, there may come a time when humans can no longer tell online conversations with AI chatbots apart from conversations with other people. AI may also become more efficient at solving problems and learning from mistakes, since a computer can draw on data accumulated from many people over many years, while the average person can only learn from their own experiences and mistakes. When this occurs, how will humanity distinguish itself? What will make humanity unique? Will learning from one’s own mistakes still be meaningful? Ethics Bowl club members pondered these questions and concluded that such AI would pose a serious threat to a shared human identity. One club member suggested that this kind of AI could threaten humans’ “superiority complex” and make us feel useless compared to an ever-improving AI. The team, however, also considered the idea that the AI is built by humans and trained on human knowledge, and thus can be controlled and used as a tool to enhance humanity rather than limit it.


Sandel then introduced the team to another side of AI: virtual avatars that mimic deceased people. Such an avatar would closely emulate the person’s personality and store important memories of events, experiences, and relationships from the person’s lifetime. Sandel asked how the ethics of shutting off an AI might change when that AI mimics someone we knew personally. The team explored the idea of an alternate reality in which real and virtual experiences blend together, and what this might mean for the concept of a legacy. One team member argued that if a loved one could become virtually immortal, spending time with them while they are alive would become less of a priority, lowering the net value of personal experiences. Moreover, distinguishing memories of the living person from memories of the virtual avatar would be difficult, blurring our overall perception of the person into a confusing mix of real memories and virtual interactions.

While virtual avatars of people we know closely may have many drawbacks, resurrecting important historical figures, scientists, and changemakers through virtual avatars could have many benefits. Imagine being able to converse with Albert Einstein, Marie Antoinette, or Martin Luther King Jr.! These exchanges could lead to scientific breakthroughs, cultural reforms, and a more complete understanding of famous figures. However, creating these avatars would be ethically fraught, since the historical figures could never consent to being recreated. Such avatars would also share the limitations of avatars of close friends and family, and confusing them with the real people and their legacies would be even more problematic, since it could lead to history being misinterpreted or manipulated.

Mr. Searl reading out case information about Google’s LaMDA program to get the ball rolling. (Rishi L. ’24)

While LaMDA and virtual avatars may seem like distant concepts, powerful language models such as ChatGPT already exist, so it makes sense to consider the ethics of AI and sentience now. Will AI like ChatGPT ever become sentient? What will happen if ChatGPT can mimic your close friends and family? Will people become more hesitant to use it, or will this encourage them to immerse themselves further in the world of AI? The discussion on AI and sentience has given the team a lot to think about and has opened up new avenues for exploration in future Ethics Bowl Club meetings and competitions.