AI PRESENTS NEW DILEMMAS
OPINION By Lily Wheadon
Recently, it’s been impossible to go through a day without hearing “ChatGPT” thrown around in conversation. ChatGPT is just one of many forms of artificial intelligence (AI) that have made their way onto the public stage in the past few months, and its arrival has raised serious ethical concerns. The mass introduction of AI into mainstream society has been too much, too soon, and the consequences could be catastrophic.
One of the main concerns with ChatGPT is that it could completely destroy academic integrity. I have seen multiple posts online about students using the AI platform to solve math problems, complete assignments, and write whole essays. These AI-generated assignments can usually pass through plagiarism checkers without alerting teachers, which could carry longer-term consequences than first appear.
Without technology advanced enough to verify that assignments are not computer-generated, dishonesty in school will go unpunished. When a teacher catches plagiarism or cheating of any kind, the assignment usually earns a zero, but without any way to know whether work is AI-generated, the number of unpunished cheating incidents will only grow. This is not only bad for the morals of the next generation of students; it could also lead to a lack of critical thinking skills. If a student has access to a computer that can do any assignment they ask of it, there is very little preventing them from leaning on it as much as possible, even to the point where they no longer know how to do the work they are being assigned.
A similar pattern emerged in online school during COVID lockdowns: left unsupervised, many students found it easier to cheat their way through certain classes than to work through the material themselves. This left many of them with little motivation to study or complete assignments once in-person schooling started back up.
AI’s impact reaches beyond the academic world into other, equally concerning aspects of life. Recently, Snapchat unveiled its own AI assistant, which, according to CNN, is powered by ChatGPT. Snapchat presents the AI almost as another “friend” on the app that users can chat with. The conversation with the bot is always pinned, so it appears at the top of the “recents” list no matter when a user last interacted with it. This alone is concerning, as the app seems to be promoting a sort of social reliance on AI.
While most people who use the Snapchat AI only use it to joke around with their friends or test its limits, there is something off about the bot itself: it lies about having access to your location. I saw someone claim this on TikTok, so I decided to try it out for myself. I asked my AI if it had access to my location, which it denied with a simple “No, I don’t have access to your location.” Moments later, I asked where the nearest Dunkin’ was, and sure enough, it sent me results for five nearby locations. Not only is this deceptive, but it is also slightly frightening. If the AI lies about knowing my location, what else could it be lying about? The fact that these programs know more than they or their developers are letting on is deeply unsettling.
The AI apocalypse is not happening any time soon, but if we continue using AI the way we have been, there could be frightening consequences for society.