I know that I’m late to the bandwagon with regard to ChatGPT, but I somehow got myself featured on a panel hosted by our campus’s teaching institute to discuss what to do about and with ChatGPT in the classroom.
I think part of the reason that I’m being included is that I’m honest enough to say that I really don’t know what to do about ChatGPT, and naive enough to admit that I expect there are some great things we can do WITH it.
As for the former, I remain committed to some version of the “No Cop Shit” mentality in the classroom. I’d rather have a student take advantage of my generosity than become some kind of heavy who tries to police student efforts at resistance. Perhaps at my weakest, I believe that I’m more likely to co-opt a student through understanding than I am to break student resistance with force, but I like to think that my goals don’t always involve doing what I can to subvert the last of a student’s innate desire to resist the structures of capitalism, authoritarianism, and discipline.
I also discovered that ChatGPT detectors are not entirely locked down. I asked ChatGPT to produce a short essay on Late Roman metallurgy on Cyprus. I then ran it through a couple of the standard AI text detectors. At least one of them told me that it was unlikely to have been produced by AI, and another gave it a 50% chance of being AI generated. All the AI text detectors recognized my blog post from Tuesday as written by a human. I credit my obtuse grammatical style (once compared to Cicero on acid) and liberal sprinkling of typos. A student paper clearly composed by an AI bot, however (the student admitted it), met with ambivalence even from ChatGPT when I asked whether its language model had produced the paper. In other words, it appears that the “cop shit” route might quickly turn into a scene from Blade Runner.
(As ChatGPT tells us “As an artificial intelligence language model, I don’t have the ability to dream or experience consciousness, so I cannot dream of anything, including electric sheep…. However, the question of what it means to be conscious and how it relates to artificial intelligence and replicants is a central theme in Blade Runner, and the movie leaves open the possibility that Deckard’s consciousness may be artificial or implanted. Overall, the nature and extent of Deckard’s consciousness are left up to interpretation and debate… there is ongoing research into the development of artificial consciousness, which aims to create machines that possess self-awareness and subjective experiences. However, such technology is still in its infancy and remains a topic of much debate and speculation… As an artificial intelligence language model, I do not possess consciousness in the same way that humans do. While I am capable of generating responses and holding conversations, these are based on algorithms and data processing rather than subjective experiences or emotions.”)
Ambiguity surrounding the character of text generated by ChatGPT, especially as its language model develops (evolves?) over time, makes the job of any would-be instructional blade runner at least as fraught as that of the administrator of the Voight-Kampff test in the film.
As for the potential of ChatGPT in the classroom, I remain optimistic (if a bit naive). One of the things that I’ve struggled with consistently is when a student clearly understands a topic, has done the research, and has engaged with the reading, but struggles to express their ideas in writing. Our tendency now is to work with these students to improve their writing skills, to structure their writing process, and to produce results that are adequate reflections of their ideas and engagement. This remediation comes at a cost, of course. Generally, I think it is fair to say that students who struggle with writing struggle academically in a college setting. It’s a hell of an environment in which to find yourself behind and have to make up ground. In fact, in my experience students who struggle with writing often struggle academically in general because they have to invest far more time trying to write in an adequate way than students who have basic writing skills. This invariably detracts from other tasks vital to their performance in college (reading, review, problem solving, and so on).
Of course, as the famous saying goes… all good writers are the same, but all bad writers are bad in different ways. A student who struggles with organizing their thoughts into an orthodox paper is different from a student who struggles to compose sentences despite having a well-structured paper. One wonders whether ChatGPT could, in the right situation, be a crutch that allows a student whose writing is poor to avoid losing even more ground.
A recent paper in ACS Nano, by too many authors to list, “Best Practices for Using AI When Writing Scientific Manuscripts,” goes a step further and argues, as near as I can tell, that part of what makes ChatGPT convincing is that, much like human-generated prose, it struggles to produce the kind of bad writing that we all know (and love) from our students:
“The human-like quality of the text structure produced by ChatGPT can deceive readers into believing it is of human origin. It is now apparent, however, that the generated text might be fraught with errors, can be shallow and superficial, and can generate false journal references and inferences. More importantly, ChatGPT sometimes makes connections that are nonsensical and false.”
These problems probably lie not with the language model itself, but with the text on which the language model was trained. While we may have developed beyond the idea of garbage-in, garbage-out in computing, what strikes me with ChatGPT is that it appears, in my rather superficial experience with it, to create text that is remarkably uniform in its badness. In other words, it produces bad text that is bad in only some, rather limited, ways. In contrast, the worst student papers tend to be replete with grammatical and organizational problems. ChatGPT seems to mitigate these quite effectively, but leaves many of the common thinking, referencing, and evidence issues in plain sight.
What this means for teaching is hard to know. As any faculty member who reads a considerable quantity of student work will tell you, part of the joy of reading student work is not just in its often bizarre and wonderful content, but also in its style. Students offer a window into the future of writing, thinking, and speaking English. ChatGPT seems intent on mitigating the dynamism of the English language and one wonders, at the university level at least, whether this is where it presents the greatest risk.