When I first caught students attempting to use ChatGPT to write their essays, it felt like an inevitability. My initial reaction was frustration and irritation (not to mention gloom and doom about the slow collapse of higher education), and I suspect most educators feel the same way. But as I thought about how to respond, I realized there was a teaching opportunity. Many of these essays used sources incorrectly, either quoting from books that did not exist or misrepresenting ones that did. Students who were just starting to use ChatGPT seemed to have no idea that it could be wrong.
I decided to have each student in my religion studies class at Elon University use ChatGPT to generate an essay based on a prompt I gave them and then “grade” it. I had anticipated that many of the essays would have errors, but I did not expect that all of them would. Many students expressed shock and dismay upon learning that the AI could fabricate information, down to page numbers for nonexistent books and articles. Some were confused, simultaneously awed and disappointed. Others worried that overreliance on such technology could breed laziness or fuel disinformation and fake news. Closer to the bone were fears that this technology could take people’s jobs. Students were alarmed that major tech companies had pushed out AI tools without ensuring that the general population understood their drawbacks. The assignment accomplished my goal: to teach them that ChatGPT is neither a functional search engine nor an infallible writing tool.
Other educators tell me that they have tried similar exercises. One professor had students write essays and then compare them to one that ChatGPT wrote on the same topic. Another generated a single ChatGPT essay that every student graded. Future versions of this assignment could focus on prompting, teaching students how to tell the AI more precisely what to do or what not to do. Educators could also have students compare ChatGPT to other chatbots, like Bard. Teachers could test ChatGPT by asking it for a specific argument, prompting it to use at least three sources with quotations and a bibliography, and then showing the results to the class. The prompt could be tailored to the content of each class so students would be more likely to detect any mistakes.
When I tweeted about this assignment, some of the more enthusiastic supporters of AI were annoyed that I did not mandate the use of GPT-4 or teach students how to use plugins or refine their prompts, which would (allegedly) have given them better, more accurate essays to assess. But this misses the point of the task. Students, and the population at large, are not using ChatGPT in these nuanced ways because they do not know that such options exist. The AI community does not realize how little information about this technology’s flaws and inaccuracies, as well as its strengths, has filtered into public view. Perhaps AI literacy can be expanded with assignments that incorporate these strategies, but we must start at the absolute baseline. By demystifying the technology, educators can reveal the fallible Wizard of Oz behind the curtain.
Both students and educators seem to have internalized the oppressive idea that human beings are deficient, relatively unproductive machines, and that superior ones (AI, perhaps) will supplant us with their perfect accuracy and 24/7 work ethic. Showing my students just how flawed ChatGPT is helped restore their confidence in their own minds and abilities. No chatbot, even a fully reliable one, can wrest away my students’ understanding of their value as people. Ironically, I believe that bringing AI into the classroom drove that point home for them in a way nothing else had.
My hope is that having my students grade ChatGPT-generated essays will inoculate them against overreliance on generative AI and boost their immunity to misinformation. One student has since told me that she tried to dissuade a classmate from using AI for their homework after she learned of its proclivity for confabulation. Perhaps teaching with and about AI can actually help educators do their job, which is to illuminate the minds of the young, help them work out who they are and what it means to be human, and ground them as they meet the challenge of a future in flux.