Timothy Burke, one of my favorite scholars to follow, posted a doozy of a Substack today. In it he looks ahead to the future of AI in pedagogical settings. It's not a sunny vision:
My worries are many, but one of the most prepossessing concerns I have, perhaps outweighing the dark political clouds on the horizon, are the consequences of the unconstrained, unmediated, unconsidered release of generative AI tools and components into our informational and expressive environment. Whatever happens to our governments and institutions in the episodic to-and-fro of the next decade may turn out to be less consequential by far than the conjuncture of our informational and cultural infrastructures with generative AI.

Burke explains that his concern isn't so much with students' pernicious use of AI to cheat. That problem, he notes, is not new, nor is it entirely the fault of students themselves. "The use of writing as a proof that a student did the reading has always been a baleful mistake," he argues. "The idea that a large quantity of writing would by itself secure mastery of writing expression equally so. At the university level, combine that with introductory courses that have hundreds of students listening to lectures and then meeting with teaching assistants and you have a recipe that has always encouraged cheating, that has always been bedeviled by forms of indetectable dishonesty."
Ouch--but accurate. I assign a lot of writing in my script analysis class, and why I do so bears rethinking. I try not to have my writing assignments perform did-you-read-this police functions. (That's what short daily quizzes are for--policing reading and policing attendance--a practice I'm uneasily resigned to for now.) I'd hoped to move my script analysis class more fully into the realm of writing as process this next semester. By splitting my one section (with 40 students on average) into two, I hoped to qualify for writing-intensive designation. With that designation and the assistance it brings, I could craft a course in which students write to understand and then revise that writing in order to communicate effectively. My optimism about those goals this semester is currently in a pretty dark place. I have fifty-two students in the two classes and--for the first time teaching this class in over a decade--no TAs. I don't see a way for me alone to teach them both right now, at least not without radically reneging on my original plans.
Student reading and student writing seem to have gotten worse--drastically so--in the last few years post COVID. And, pace Burke's arguments, AI really has changed the game. Have some students always cheated and BS'd their way through essays? Sure. But AI does more than just make this halfassery easier to accomplish and harder to detect (though it does both those things). A student copy-and-pasting from some website at least realizes on some level that they're doing something wrong, literally taking someone else's work and presenting it as their own. Most of the plagiarism I see in student papers now is different. Sure, sometimes a student has made use of one of the many, many AI essay-writing products advertising themselves to students as harmless, even essential helps. ("My prof wanted me to write three whole pages!" sales-students in these ads gripe. "Who has time for that? That's why I let [PRODUCT] do it for me...") And sure, "just say you used Grammarly" is a common bit of get-out-of-jail-free advice circulating online for students.
But increasingly, I fear that students really are just using Grammarly or some program like it to check their writing, only to find that the program is AI-helping ("AIding"?) them by replacing words or phrases with "better" ones. The result is often a ship-of-Theseus trick, where the new ship consists of lots of adverbs and pseudo-profound statements about how we humans need to hold onto our dreams or some such. As I heard from a colleague: "I used to get terribly written papers with some good ideas. Now I get perfectly written papers with terrible ideas." When I flag such AI-ded essays as violations of my course policies, I think some students really are honestly confused. They don't know what good writing looks like. They don't know what their own writing feels like. They can't tell why the AI version, while usually grammar-perfect, reads like queasy mush. And from their perspective, they did nothing wrong. They wrote something and, as I (used to) tell them to, checked and proofread their writing with a program that purports to do just that. But AI's version of "improvement" tends to mean "make the essay sound smart [lots of big words and adverbs] and uplifting [unctuous praise for the play/playwright, grand talk about generic, humanistic themes]." Students just learning how to write, just learning what real criticism and analysis are, can't tell the difference between quality and crap. And AI makes it all the less likely they will ever be able to do so.
My concern here echoes Burke's big worries about society at large:
[T]he indiscriminate vomiting of generative AI into everything we read and view, every tool we use, every device in our homes, every technological infrastructure we operate or own, means at best an unproductive estrangement, a new mediating layer that no one, expert or otherwise, can really understand or control. A kind of techno-tinnitus, a buzzing hum of interference or diffusion. When things break, when things don’t quite do what we want them to do, when we don’t get what we’re owed from what we’ve created or done, there won’t be anything to do about it. When we’re described, evaluated, measured, assessed in all the ways that are already balefully mindless when they’re done by actual human beings that we actually can see, we’ll suddenly become even less recognizable, even less true to our realities, and there won’t be anything to be done about it. You’ll complain about the AI processes in the black box to another black box AI and there will be no one anywhere who actually knows why it gave the results that it did, no expert Delphic oracle who can take it apart and get it back on track. At worst, it means that everything that is translocal to our material surrounds will be untrustworthy and unknowable. Not even an interesting fiction, just a kind of informational drift, a noise so pervasive that the entirety of the signal gets lost.

It'd be one thing if the infonoise (cAIcophony? I'll stop now) were reliable. But LLMs like ChatGPT specialize not in truth but in bullshit--words spewed out with utter confidence and no regard for their truth value. It's like Poe's Law but applied to all digital representations of reality itself: can't tell if true or BS. Students using AI now often, I'm convinced, simply don't know that they're indulging in BS, producing it, absorbing it, starving themselves on a diet of predigested slop.
Increasingly, it looks like infopoop is going to be what we get from AI. Or, as Burke puts it:
Generative AI is being used so heedlessly, so much like a silicon equivalent of the Human Centipede, gulping down its own shit as it hungrily demands more and more and more text for its training models, that it is going to end up spewing informational diarrhea forever all over the entire infrastructure of knowledge production. It is going to pollute all forms of many-to-many communication, all forms of mass media. When we hit that point, it will be impossible to cleanse it all out again. Everything we know will become a Superfund toxic waste site, full of forever hallucinations and distortions.
Now there's an image: AI as (in)Human Centipede. BS recycling BS recycling BS, crowding out other exchanges, eating up available energy and attention. "We’re probably only a few years away from mandatory brown-outs in ordinary homes," predicts Burke, "because the AI needs the power."
What did Baudrillard say about the ultimate stage of the simulacrum? The representation replaces reality? Only now the eclipsing representation is crap. Welcome to the simulcraprum.
Sigh.