Grading, slowly making my way through a digital pile of script analysis papers.
One newish twist is AI. More and more often, I find a paper whose paragraphs are beautifully crafted but whose analysis is superficial. My mode of script analysis focuses exclusively on structure. GPT and its ilk prefer lofty reflections on theme and character, making (and repeating) basic links between this or that scene and this or that theme.
Most students just don't recognize that a writer has a voice, that we can tell when they shift from their own (often error-riddled but honest) voice into the cottony vagaries of AI.
Encountering one of these depletes me. Usually I catch on about halfway through, as nonspecifics pile up. By that point I've spent time and energy crafting some encouraging intervention ("can you be more specific? Give me a 'for instance' from the text?").
And then I cut and paste something into GPTZero or another detector, and BAM--likely AI generated. Such detectors are themselves error-prone; I wouldn't use them as a first-line test. But they can sometimes tell me whether and how someone has run into trouble.
I have to remind myself, as I always do when encountering academic dishonesty, that it's not personal. Dishonesty happens, as Truth Default Theory avers, when the truth becomes inconvenient. Students cheat out of desperation, not out of some desire to hurt teachers. I'm sure some feel a certain contempt for the class or for me, but the same could be said of those who don't cheat.
Mostly there's just a mass of students who are (or who feel) unprepared to do the kind of reading and writing we do in class. I'm continually trying to revise my teaching to reach such students, to clarify what it is they need to make this task seem doable.
And AI makes it harder. Using it feels like work to them--they look things up, they teach the model about a play they may have read only part of, and they have it spit out what they think I want to hear. I think some of them convince themselves it's like what they might have written. But then, how would they know? That's one of the awful things about LLMs (large language models): they prevent students from learning their own voice. They never know what they "sound" like without the filter of AI-ification.
And it's exhausting to go through the rigmarole of reporting them to student advocacy and accountability. Each time, I'm like, is it worth it? Am I doing this out of pique, or am I doing it to teach the student something? At this point, it's more a matter of consistency. I did it for this one student; I have to do it for everyone similarly positioned. And sometimes it really is a good wake-up call. My institution, at this moment, is pretty good about turning these into teachable moments.
But. It's still rough.
"I use GPT for lots of things," say some friends outside of academia.
I don't.