ChatGPT in college seminars: How to "AI proof" writing assignments
The craft of writing is a centuries-old civic art essential to deeper understanding, perspective taking, self-reflection, and the active life. We can't let it fade away.
In July, Harvard University student Bodnick wrote in a newsletter about an informal experiment in which she asked her first-year instructors to grade seven essays. The instructors were told either that Bodnick had written the assignment or that the AI-powered ChatGPT-4 platform had. In fact, all seven had been written by the platform. The grades assigned by the instructors were A, A, A, A minus, B, B minus, and Pass. “That’s a solid report card for a freshman in college, a respectable 3.57 GPA,” she wrote.
Her informal experiment, Bodnick concluded, showed that students using ChatGPT could generate a writing assignment that requires little editing or sourcing — and they didn’t need to worry much about getting caught. Unlike with plagiarism, professors currently lack reliable software for detecting AI-generated content.
Earlier this week, I attended a faculty meeting on adapting courses and assignments to the new world of generative AI language models and their applications. Several colleagues said they may integrate tools like ChatGPT into their writing assignments. The reasons included:
ChatGPT-like tools can give students opportunities to improve upon exclusively student-written initial drafts of writing assignments, enhancing (rather than replacing) the learning process.
Preventing students from using them is nearly impossible, given the lack of detection tools like those used for plagiarism.
Generative AI allows students to learn about emerging technologies' social implications and ethical governance.
Listening to my colleagues, I found merit in each argument. Yet, I also explained why I am hesitant to allow generative AI tools to be used by students in my courses.
Writing as a civic art
Writing is inseparable from the learning process. It is among the best ways to gain deeper knowledge, understand and communicate with others, and critically reflect on values, emotions, and beliefs. For these reasons, learning to write is a civic art essential to the active life.
We should, therefore, be deeply cautious about allowing emerging technologies to alter how we teach students to write. Instead of allowing ChatGPT3 into courses, I argued at my faculty meeting for a strategy of “AI proofing” assignments.
Consider that the three Harvard assignments instructors graded as “A” level work were tailor-made for AI platforms like ChatGPT to answer:
Explain an economic concept creatively.
What has caused the many presidential crises in Latin America in recent decades?
Pick a modern president and identify his three greatest successes and three greatest failures.
I was confident that my current writing assignments, with at most a few modifications, were more resistant to ChatGPT than those from the Harvard classes.
In two upper-level undergraduate seminars I teach on Political Communication and Environmental Communication, I use a series of progressively more challenging assignments to help students develop their dialectical thinking and writing skills as applied to complex course topics.
Below is how I define dialectical thinking for students and why it is important — a definition I developed drawing on related resources at the Heterodox Academy website:
Dialectical thinkers seek out multiple perspectives — exploring their tensions and uncertainties, recognizing what each might offer of value. Dialectical thinkers do not see the world in black and white but in shades of gray. They also tend to be more mindful of the limits to their knowledge. For these reasons, dialectical thinkers are often skilled communicators who can build relationships that span political boundaries.
Manichean (“either / or”) thinking is the opposite of dialectical thinking. Manichean thinkers view the world as a battle of “good versus evil,” pitting the “powerless versus the all-powerful.” They view the motivations of opposing sides as either completely altruistic or self-serving. For the Manichean thinker, the stakes in a decision are “all-pro” or “all-con,” resulting in clear winners and losers.
Unfortunately, it is much easier to be a Manichean thinker than a dialectical thinker in today’s world of complex problems, polarized politics, and social media-driven outrage. Even though dialectical thinking might be difficult, research suggests that this style of reasoning has many benefits that include:
Facilitating dialogue.
Promoting learning and understanding of others.
Boosting our emotional stability.
Helping identify more effective solutions that can gain broader-based support.
Helping people get along and recognize their shared interests.
Promoting inclusion, empathy, and justice.
Being more persuasive.
Below is an excerpted description of the first in a series of progressively more challenging dialectical writing assignments that I use in my course on Political Communication.
You will be writing four short papers to improve your ability to reason dialectically.
Per assignment, you will be writing 9 total arguments/statements:
→Three “steel man” arguments for each of the pro and con positions (6 total arguments).
→Three genuine “uncertainty” statements (3 uncertainty statements).
Across the 9 arguments/statements, you must integrate and cite the relevant readings provided as sources.
→This does not mean that each reading needs to be integrated into each specific argument/statement.
→Instead, each specified reading should be cited at least once across your entire assignment.
A “pro” argument supports the statement, and a “con” argues against it.
→Avoid making “straw man” arguments (weak arguments you can easily defeat).
→Instead, make “steel man” (strong and challenging) arguments.
For the uncertainty statements, provide genuine uncertainties about the issue instead of “it may not go far enough” arguments.
→A genuine uncertainty statement reflects the complexity of the issue, weighing and assessing benefits/trade-offs or, for example, the technical/political plausibility of implementing a decision.
How did ChatGPT3 perform on the assignments?
To test how well a student would do on these dialectical writing assignments if they relied on generative AI platforms, I wrote prompts that paralleled the assignments in my Political Communication course. I then entered these prompts into ChatGPT3 and a similar platform, Perplexity.
The PDF below contains the complete assignment descriptions that students are provided and the related grading rubric.
Below are links to the prompts and the answers ChatGPT3 and Perplexity provided for each assignment, along with my summary evaluation of the answers.
Dialectical Paper 1.0: “Ownership of U.S. National Park land should be returned to Native American tribes.”
Perplexity
The written response lacked depth across the arguments/uncertainty statements. It also did not explicitly integrate and cite the four course-related readings students were to include in their answers.
If students were to rely on the Perplexity response as their “assignment,” per the related grading rubric, they would score a 7 or below out of 10 points.
Perplexity could provide students with ideas for each argument/statement, but students would need to engage deeply with the related readings and do the tough work of writing to score an 8 or higher out of 10 points.
ChatGPT3
Each of the arguments/statements provided is considerably more substantial than those at Perplexity and explicitly cites at least one of the related readings as required by the assignment.
Overall, the substance of the answer is strong — but the writing is “too perfect.”
If a student were to turn this in as their assignment without any changes to make the writing look more “human,” detection would not be difficult.
Below are the statements, related prompts, and answers provided by ChatGPT3 and Perplexity for each of the next three assignments — followed by my summary evaluation and comments.
Dialectical Paper 2.0 — “Too much democracy is bad for democracy.”
Dialectical paper 3.0 — “Social media is the main driver of today's political dysfunction.”
Dialectical paper 4.0 — “Left-wing populism is as damaging to liberal democracy as right-wing populism.”
Perplexity
For Perplexity, as in the first assignment, the provided answers for assignments 2.0 — 4.0 lack sufficient depth and do not explicitly integrate and cite the required readings provided. The answers could provide ideas for students, but they would need to invest in substantial reading and additional writing to receive a grade of 8 or higher out of 10 points.
ChatGPT3
At first glance, each argument/statement is written at sufficient depth, and they explicitly integrate/cite at least one of the related sources provided.
Yet substantive errors appear across all three assignments. The required readings are frequently cited in support of arguments/statements that their authors oppose or that their findings contradict.
Because these three assignment topics and related readings are more complex than the first, they appear to exceed the ability of ChatGPT3 to answer accurately.
If a student were to turn in these answers as an assignment, not only would the “too perfect” copy suggest they were relying on an AI platform — but their grade would likely be below an 8 out of 10 on the assignment, based on the subtle yet substantive errors in how sources were cited.
In upper-level college writing seminars, as my examples demonstrate, it is possible for instructors to “AI proof” their assignments.
Even the relatively “low stakes” assignments I use with students at the start of the semester appear to be beyond the ability of ChatGPT3 and Perplexity, at this time, to answer accurately, with depth, or without easy detection.