Don’t Blame Students for Using ChatGPT to Cheat

When college education is rendered transactional, a generation trained to use technological tools to solve problems is just doing what it’s told.


The latest higher-ed discourse is positively flooded with worries about ChatGPT, a free chatbot developed by the OpenAI research lab that produces fluent, if not always correct, responses to user prompts. Although it can occasionally generate highly plausible and polished answers, ChatGPT still has obvious limitations and generally fails to produce coherent, accurate prose that meets academic standards of research and citation. But the technology is improving, and with a little fact-checking and revision, texts generated by current-generation AIs can be made to resemble original student submissions. Over winter break, many professors hastily revised their syllabi, anticipating a wave of machine-made writing that cannot be caught by conventional plagiarism checkers.

Although many people have written about how to AI-proof assignments—with some even proclaiming the death of the college essay—few have remarked on why faculty think students will be so eager to take up this new plagiarizing technology. ChatGPT did not cause our plagiarism problem: It has only automated or deskilled the essay-mill industry already churning out papers for pay. Computer-assisted plagiarism is a mere symptom of a much larger problem with education today.

Students cheat for a variety of reasons. Some plagiarizers are desperate, others are overworked, and some students simply do not understand the rules. But it’s naive to fret over a culture of plagiarism without considering any of the political or economic context of today’s university. Students’ calculations are logical in the era of the neoliberal, austerity-addled college: Education is no longer promoted as a goal worth pursuing for its own sake; if post-secondary education is a transactional process whose sole purpose is to unlock better career options, then why not cut corners? Why not optimize one’s chances? It’s not a secret why undergraduates might now have fewer compunctions about submitting someone else’s work.

Corporatization of the university reframed education as a narrow form of preprofessional training. Students are now acutely aware of the fact that the university functions as a class-sorting mechanism. Educational institutions launder privilege as scholarly merit, rewarding students whose families possessed the resources to prepare them for college and send them off to elite schools.

However, amid ongoing precarity, the university is no longer a reliable path into the middle class. Students are rightfully anxious about how their academic performance might affect their socioeconomic futures. When everyone from parents to politicians insists that students should choose their course of study based on anticipated return on investment, we should not be surprised that many undergraduates see plagiarism as a strategy to ensure that all those years of (often expensive, debt-inducing) schooling will pay off.

Meanwhile, as tuitions rise, more students have taken on jobs to pay for their education, leaving them with little time to complete homework. At some level they must realize that—as Malcolm Harris suggested—they are performing unpaid work to ready themselves for an employment opportunity that may never come. ChatGPT must undoubtedly appear to many of them as a labor-saving device, reducing time spent on unremunerated tasks such as essay writing so they can focus on waged labor and other pressing responsibilities such as care work.

In many ways, ChatGPT looks like the flexible laboring subject students are expected to become after graduation. The capitalist class pushes workers to rebrand, retrain, and relocate as they chase scarce jobs in an era of economic turbulence. The chatbot is well-suited to this instability: it is a virtuoso capable of adopting any style, voice, or opinion that is demanded of it. As such, it’s unencumbered by commitments that might hinder it from completing its duties. If necessary, it will contradict itself in mid-sentence. ChatGPT embodies the cynicism and opportunism that Italian Marxist Paolo Virno diagnoses in post-Fordist workers forced to sell themselves out to remain employed. AI plagiarism becomes an object lesson preparing future workers to shed their beliefs and values whenever capitalism deems them inexpedient.

None of this excuses plagiarism or suggests that universities should ignore attempts to game the system. A graduating cohort reliant upon machines to think is especially scary considering the future that AI is poised to bring about. As Ezra Klein has argued, clever chatbots will soon “drive the cost of bullshit to zero.” Here Klein uses this profane term in the way it has been theorized by philosopher Harry G. Frankfurt: Interested parties are going to use chatbots to crank out a limitless amount of discourse disconnected from the truth in order to influence and confuse the public.

This is precisely the nightmare scenario that the essay genre prepares students to confront. In my writing classes, we learn how to locate, evaluate, and understand sources of good information about any topic. Just as importantly, students come to see scholarly inquiry as a collective project dedicated to improving our shared understanding of the world. The chatbot does none of these things. It does not care about the truth, and it has no grasp of the topics it holds forth upon. It is incapable of listening or responding to others. For now, at least, AI can only spit out monologues of probable-sounding blather.

If students do not dig deep into writing and research, they will never fully appreciate the difference between the AI’s plausible nonsense and genuine scholarly dialogue. Unfortunately, some faculty have already replaced the essay with alternate assignments such as oral exams that make it harder to cheat but do not replicate the experience of entering into a sustained conversation with other writers.

Fighting back against AI plagiarism will be difficult because, once again, the university’s corporatization has left it vulnerable in the first place. Faculty have long suffered the same forms of casualization experienced by other college graduates. Universities often relegate writing instruction to underpaid adjuncts with no job security—overburdened faculty who have the least support to hold the line against chatbot plagiarism. Meanwhile, quick fixes such as AI detectors will only make student-teacher relationships more adversarial, eroding the trust and goodwill that help prevent students from plagiarizing in the first place.

To stop AI plagiarism, we must reverse the trend toward both student and faculty precarity while creating a learning environment in which undergraduates value education as an intrinsic good. Otherwise, artificial intelligences will soon render academic integrity and scholarly inquiry obsolete.
