A week ago, I asked ChatGPT for a list of Latinx Georgia state legislators. It responded that, as of 2021, no Latinx officials had been elected in the state. This claim is incorrect. When I pressed the chatbot again, it listed State Representative Pedro “Pete” Marin and Rep. Marvin Lim as the only two Latinx Georgia legislators. This time, only the latter answer was incorrect. Despite citing “news articles and official government websites” as its sources, the exchange made me question the factual reliability and accuracy of ChatGPT entirely.
ChatGPT has taken the world by storm with its ability to generate human-like responses at lightning speed. The AI tool has been praised as revolutionary for students and professionals and branded as the future of college essay writing thanks to its instant access to vast information. It’s no surprise the popular AI language model caught the attention of the Wheel’s Editorial Board.
The Feb. 1 editorial, “ChatGPT threatens the integrity of learning, but we have the final say,” lauds ChatGPT as a “foundational source for summarized research,” arguing the tool could help students save time on assignments. There is no denying that ChatGPT is impressive; however, as with any technology, there are serious concerns about the accuracy and authenticity of the information and sources it provides. The Board also failed to warn students, and all ChatGPT users, about the false information the AI tool incorporates when elaborating on writing samples or responding to research queries.
It is easy to see why people have turned to ChatGPT for assistance with research, writing and other “busy-work” tasks. ChatGPT will acknowledge that it does not have access to certain publications when asked to summarize specific scholarly articles. The tool is thus constrained by the data it has been trained on and has access to. If the data ChatGPT draws from is biased, incomplete or inaccurate, the results it generates will be as well.
This trend of inaccuracy has also been analyzed and scrutinized by Will Hickman, a graduate student at George Mason University. In his efforts to test the growing AI tool, Hickman asked ChatGPT a wide range of research questions, to which he said it provided “false and misleading references.” Hickman found a slew of reasoning and factual errors, including references to nonexistent papers, sources utterly unrelated to the question and incorrect authors for the sources it provided.
Fallacious arguments and references are only a glimpse into the inherently unreliable nature of this AI tool. Prompted by the fear of mass misinformation, NewsGuard conducted a study that concluded the latest version, GPT-4, is “easier to convince to spread misinformation” despite being 40% “more likely to produce factual responses” than its previous model. The results also included the concerning warning that ChatGPT could be used to “spread misinformation at an unprecedented scale.”
ChatGPT is undeniably a powerful tool with the potential to revolutionize the way we access and dissect information. In the age of misinformation, however, it is crucial to be cautious when using the tool and to verify the accuracy of the information it provides. Students and all ChatGPT users should be aware of these limitations and actively push back against inaccurate claims when using the chatbot.
There is always room to improve language models like ChatGPT. However, one thing is certain: users should be wary of the very real possibility of misinformation born of excessive confidence in the “revolutionary” AI mechanism.
Sara Perez (24c) is from Managua, Nicaragua.