Once upon a time, the slogan “Master science, math, and chemistry, and fear no challenge in the world” instilled confidence in countless students. However, in the era of AI and the internet, this motto is losing its effectiveness. Nowadays, it seems that GPT (Generative Pre-trained Transformer) has become an invincible force, surpassing all.
In this prevailing trend, numerous student communities are leveraging GPT to generate papers, complete calculations, and fill in assignments. This has caused great concern among professors in academia, who lament that many students are losing their ability for independent learning and critical thinking, relying solely on GPT tools to provide answers.
In an effort to counteract this situation, one professor decided to employ a “magic trick” to defeat the “magic” itself. However, he mistakenly used a generative conversational AI, ChatGPT, to assess students’ self-written academic papers; as a result, students’ grades not only plummeted to “fail,” but even their graduation certificates were thrown into doubt.
The incident came to light through a post by a netizen named DearKick on the Reddit social platform.
It turned out that Jared Mumm, a professor of agricultural courses in the Department of Agricultural Sciences and Natural Resources at Texas A&M University-Commerce, had sent an email to all his students saying he was using an AI tool to assess whether the assignments they submitted were written by humans or generated by a computer.
According to the screenshot shared in the post, the full contents of the email are as follows:
You should now be able to see your final grades on D2L. Please read this email carefully before each person emails me individually.
By the time I graded your last three assignments, I had opened an account on Chat GTP. After I log in to this account and copy and paste your answers, ChatGTP will tell me if the content was automatically generated by the program. I entered each person’s last three assignments twice each. If they are all claimed by ChatGTP to be AI-generated, you get 0 points.
My final grade submission for this class is due at 5pm today. I will give everyone an “X” for this course. If you are not satisfied with the results you see on D2L before 5pm today, you will complete another assignment. If you are satisfied with your grade, don’t hand in the next assignment.
For the newly submitted assignment, you need to complete it by 5pm on Friday. I will email you a Word document. If you submit your new assignment after 5pm on Friday, you will receive the grade that is currently displayed on your computer. This assignment is worth 200 points.
Here’s the prompt for this assignment: You are advising a farmer, and they are asking you to help them decide whether they should crate and feed their sows while they are producing piglets.
You have to list 5 reasons. However, if my check through the ChatGTP program finds any signs of AI use, not only will your current grade remain the same, but I will also report you for academic misconduct beyond your class grade, which will affect your future participation in any of my courses or any other courses at this university.
The email set off an uproar, leaving many students bewildered and wondering what the professor was actually up to.
It is not difficult to see that the email contains a number of errors; for one, the popular AIGC tool is called “ChatGPT,” not “ChatGTP.”
Behind this critical error, the professor seems not to understand ChatGPT or how it works, mistaking the generative AI for a tool that detects AI-generated content.
Needless to say, with the wrong tool, the results could hardly be right.
As expected, ChatGPT ended up marking many student submissions as AI-generated.
According to Rolling Stone, it is not that the students did not try to explain themselves; they did, and the professor would not listen. Even when a student provided evidence that he had not used ChatGPT, the professor ignored it and even swore in a comment left in the school’s grading software system:
“I don’t grade AI bullshit.”
In desperation, one student “emailed the dean and copied the president of the university,” but did not receive immediate help. The students caught up in the controversy insisted that their papers really were written by themselves. What devastated them further was that some students who were close to graduating had their diplomas temporarily withheld. Another netizen said that Mumm had failed “several” entire classes in this way rather than question the effectiveness of his method for detecting cheaters.
Counterattack from the Students
In response to the recent controversy surrounding the use of ChatGPT for plagiarism detection, students have come forward to challenge the reliability of the tool. Those familiar with ChatGPT are well aware that it is trained on vast amounts of data and can easily mimic human-like writing based on various prompts, often categorizing human-authored content as “AI-generated.”
To demonstrate the fallibility of ChatGPT, netizen Delicious_Village112 copied an abstract from a paper previously published by Professor Mumm and asked ChatGPT whether it was written by a human or generated by AI.
Surprisingly, ChatGPT identified it as potentially “AI-generated,” responding, “Yes, with the right prompts, the paragraph you shared could indeed be generated by language models like ChatGPT.”
University Issues an Urgent Statement
Faced with a growing number of students breaking the news online, Texas A&M University-Commerce issued an urgent statement addressing concerns about ChatGPT in the agriculture classroom.
In its statement, the university noted several recent news reports alleging that seniors in an agriculture class at Texas A&M University-Commerce had been flunked and temporarily denied diplomas over concerns about AI-generated assignments.
A&M-Commerce confirmed that no students have failed or been barred from graduating because of the issue.
Dr. Jared Mumm, the instructor of the class, is communicating individually with students about the last written assignment. Some students were given a provisional grade of “X” (meaning “incomplete”) to give professors and students time to determine whether AI was used to write the assignments and, if so, how to grade them.
Several students have been cleared of suspicion and their grades released, while one student has come forward to admit he used ChatGPT on his homework. Several others chose to complete the new assignment Dr. Mumm offered them.
University officials are investigating the incident and developing policies to address the use or misuse of AI technology in the classroom, the announcement said. They are also working to adopt AI detection tools and other resources to manage the intersection of AI technology and higher education. The use of AI in schoolwork is a rapidly changing issue facing all learning institutions.
DearKick, meanwhile, posted an update below the original post saying that the professor had so far apologized to one of the students who had been wronged:
The situation is (for the most part) resolved.
In meetings with professors and several administrative officials, we learned a few key points:
- It was initially thought that the entire class had been put on hold, but in reality just over half of the class was affected.
- The diplomas are in “reserve” status until “the investigation of each person is completed.”
- The school said it did not ban anyone from graduating or leaving the school, because the diplomas were in “reserved” status and had not yet been officially rejected.
- DearKick said he has spoken to several students so far; as of this writing, one student has been exonerated based on the edit timestamps in their Google Docs, and although their diploma has not yet been issued, it should be released.
Meanwhile, DearKick revealed that the professor’s position may be affected by his profanity and unprofessional communication with students, but not by his misuse of AI tools. The professor has reportedly apologized to the one student who has so far proved that he did not cheat.
Under what circumstances can ChatGPT be used?
So far, the announcement seems to have resolved most students’ problems, but this episode of confusion caused by an educator’s misuse of AI tools has raised widespread concern.
Should AI tools be used in universities? Should teachers use software to detect AI-generated content in student submissions?
In fact, when ChatGPT first appeared, many university professors called for restrictions on its use. Darren Hick, an assistant professor of philosophy at Furman University, even posted that he had caught a student using AI to write a paper and reported it.
In response to this trend, the New York City Department of Education officially announced that students and teachers in New York City could no longer access ChatGPT on Department devices or networks.
In fact, ChatGPT is not a tool for recognizing AI-generated text at all, and it cannot accurately determine whether someone used it to write an article.
In the past few months, OpenAI has launched its AI Text Classifier, Stanford University has introduced DetectGPT, and college students have even developed AI “anti-fake” tools such as GPTZero to distinguish text written by humans from text generated by various vendors’ models, but their extremely high failure rates make them hard to rely on.
Previously, five computer scientists from the University of Maryland (Vinu Sankar Sadasivan, Aounon Kumar, Sriram Balasubramanian, Wenxiao Wang, and Soheil Feizi) studied the detection of text generated by large language models and published a paper entitled “Can AI-Generated Text Be Reliably Detected?” As Betteridge’s law of headlines would suggest, the answer is “no.”
Vinu Sankar Sadasivan, one of the paper’s authors, admits that even the best detectors of AI-generated text perform no better than flipping a coin.
“Generative AI text models are trained on human text data with the aim of making their output resemble human writing. These AI models even memorize human texts and, in some cases, output them without referencing the actual source. As these large language models continue to iterate, the best detectors can only achieve roughly 50% accuracy,” Sadasivan said.
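To build intuition for why such detectors are fragile, here is a toy sketch of a “burstiness”-style heuristic, the kind of signal tools like GPTZero reportedly combine with model perplexity. This is not any real tool’s algorithm: the scoring function and the threshold below are invented for illustration, and they show the core weakness immediately, because a human who happens to write evenly sized sentences gets misclassified as AI.

```python
import re
import statistics


def burstiness_score(text: str) -> float:
    """Variance of sentence lengths (in words): a crude stand-in for the
    'burstiness' signal that some AI-text detectors use. Human writing is
    assumed to mix short and long sentences more than model output does."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0  # too little text to measure variation
    return statistics.pvariance(lengths)


def toy_detector(text: str, threshold: float = 4.0) -> str:
    """Label text 'human' if sentence lengths vary a lot, else 'AI'.
    The threshold is arbitrary, which is exactly the problem: there is no
    principled cutoff separating careful human prose from model output."""
    return "human" if burstiness_score(text) > threshold else "AI"


# A human writing three uniform sentences is flagged as AI:
print(toy_detector("I like dogs. I like cats. I like fish."))  # AI
# Mixing one short and one long sentence flips the verdict:
print(toy_detector("Yes. The quick brown fox jumps over the "
                   "lazy sleeping dog today."))  # human
```

The same failure mode afflicts real detectors in a subtler form: as the paper’s authors note, models are trained to imitate human text, so any statistical boundary a detector draws can be crossed from both sides.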
According to the results of the paper, reliable text detection tasks are impossible in practice.
We may never be able to reliably say whether an article was written by a human or an AI.