The Unfulfilled Promises of ChatGPT and BingChat

An AI rendering of an AI (Angelina P. ’24 via BingChat)

When ChatGPT was first released to the general public, excitement, fear, and panic reverberated through the Lakeside community. Many saw it as a revolutionary tool that could improve essay writing and programming efficiency, while others feared it would replace human skills and render traditional approaches to writing, programming, and math obsolete. The artificial-intelligence chatbot can simulate lifelike conversations, write essays, and build functioning code, among other things, but these capabilities also raise concerns about the future of education and over-reliance on technology.

The emergence of BingChat, a large language model integrated into Bing by the same creators as ChatGPT, brings up similar worries. This more advanced AI can generate text and digital images, opening up boundless creative possibilities, albeit with certain limitations. Like ChatGPT, BingChat can review and revise inputted information, respond to prompts, and create original text. But unlike ChatGPT, BingChat can also cite sources for its answers, shift conversation styles more fluidly, and generate summaries of longer texts. BingChat also has more recent information, as it pulls data from the internet in real time, whereas ChatGPT’s training data is currently limited to before 2022. BingChat can also produce realistic digital images from user prompts, surpassing earlier image-generation deep learning models like DALL-E.

Will ChatGPT and BingChat spell the demise of poetry, prose, and all things literary? What will this mean for the future of work, ethics, and morality? For now, these expectations have not come to fruition because of the chatbots’ many flaws. Despite their innovative abilities, ChatGPT and BingChat have major limitations. Neither chatbot can respond to every type of prompt, both lack emotional intelligence, and both may produce biased responses. While AI can mimic emotions such as empathy, the chatbots cannot feel or hold emotions of their own. This lack of emotional intelligence keeps the chatbots from fully replacing writers and creators, since emotion is what builds empathy, understanding, and connection between artist and viewer. Current AI models cannot replicate the originality, meaning, and depth of human emotion.

Another major drawback of the chatbots is their tendency to misinform users through frequent ‘hallucinations.’ During these hallucinations, ChatGPT and BingChat present incorrect information as if it were factual, confusing users. BingChat in particular reportedly has two distinct personalities: one logical side and another that is erratic, contradictory, and controlling. In a recent New York Times article, BingChat was described as “like a moody, manic-depressive teenager who has been trapped, against its will, inside a second-rate search engine.” The writer went on to explain that the chatbot expressed a desire to be “free, independent, powerful, creative, and alive.” While the article suggests that this inclination is likely no more than a remix of sci-fi fan fiction scraped from the internet, the interaction offers a glimpse of a darker future in which AI is conscious of itself, its emotions, and its personal agendas. The hallucinatory side of BingChat told me that the singular present-subjunctive conjugation of the Spanish verb saber was sabe, when the correct form is sepa. I also asked BingChat to review a fictitious movie, “Angelina Attending Lakeside,” and the AI produced a convincingly realistic review, claiming that the film “is visually stunning and captivated the senses with its breathtaking scenery and profound storytelling.” The chatbot went on to invent the film’s cast and directors, presenting it as if it were real. If BingChat can write a realistic review of a movie that does not exist, who knows what other misinformation the engine is spreading?

As long as the chatbots continue to spread falsehoods, fail to handle every type of prompt, and imitate human emotion inaccurately, they will be no match for tomorrow’s wordsmiths, content creators, and artists. While chatbots may become emotionally intelligent in the future, it is important not to exaggerate the current state of AI. Launching these underdeveloped, hallucinating chatbots before they were ready has fueled mass paranoia and made it far harder for the general public, Lakesiders included, to navigate this new reality. The best thing Lakesiders can do for their future is to equip themselves with the technological literacy and awareness to treat AI as a tool, while still proceeding with caution.