This Paper is So Bad! You Were Either Hallucinating or Using Generative AI

March 1, 2024
AI

Concerns with Generative AI in Academic Settings

Students love ChatGPT and other generative artificial intelligence (AI). And why not? You pose a question to the chatbot and boom, you get an immediate answer. For those who struggle to write and organize their thoughts, generative AI can seem like the answer to their prayers. I mean, why struggle when the computer can do all of the work for you, right? Wrong! Generative AI does not answer the prayers of a student. For starters, most college policies forbid the use of these tools. In the past, we discussed the risks associated with getting caught using these tools. Despite our warnings (and those of professors), we know that kids will take the risk, get caught, and land right in our office needing a student advisor.

Accuracy and AI Hallucination

Furthermore, generative AI doesn’t consistently “get it right.” In fact, we’ve had cases where essays were so far from the intended content that the student faced accusations of submitting work generated by AI hallucination. An AI hallucination occurs when the AI provides an incorrect response, gathering and generating false information and ultimately deviating significantly from the intended context. The resulting answer may be wildly inaccurate and fabricated. A diligent student, well-versed in the subject matter, would quickly identify the inaccuracies in the generated answer. But hey, utilizing AI wouldn’t be a consideration if the student had undertaken the task themselves. There is no honor amongst thieves (or cheaters, in this case), and students sometimes overlook that a bizarre, unfounded answer might raise an eyebrow from a professor. What could be more humiliating than getting caught cheating by submitting a response that is so off the mark, it must have been the product of a hallucination?

Defense Strategies and Academic Integrity

What happens when a student simply submits a sloppy paper? If a student’s work is challenged, one defense would be to explain how the “wrong” answer was submitted. Another defense would be to produce search history showing that ChatGPT was never consulted for information. Proving the absence of reliance on generative AI can be difficult because it involves demonstrating that the work came from one’s own thought process rather than automated assistance. No student wishes to defend the idea that their paper is so sloppily composed that even a computer wouldn’t produce such junk. Nevertheless, it’s important to note that hurrying through homework is not a violation of policy.

False Accusations and Academic Vigilance

In fact, we have worked with many innocent students who were falsely accused of using AI to generate their work. Professors, in their haste to combat this type of cheating, sometimes misidentify innocent students who properly submitted original work. Fortunately, we’ve been able to clear the names and reputations of those students. Still, students who follow the rules should not have to worry that they could be accused of using AI by some vigilante professor trying to “catch” the student in the wrong. In the same vein, sadly, we do see instances where policies have been violated.

Promoting Academic Integrity

In the end, there’s no better defense than the obvious: do your own work. And follow our three suggestions.

  • Make sure proper sources are used to complete a paper.
  • Make sure that there is proper attribution to all sources relied upon in a paper.
  • Make sure that if there is any question about the academic integrity rules in class that written clarification is sought and obtained from the professor.

If you or someone you know has been falsely accused of cheating using AI, please contact our Student & Athlete Defense attorneys Susan Stone at SCS@kjk.com; 216.736.7220, or Kristina Supler at KWS@kjk.com; 216.736.7217.