Three Things Thursday: On AI and LLMs

At first, I was fully on board with the panic about large language models (LLMs) and “artificial intelligence” in the academy. In fact, I participated in an on-campus conversation a few months ago that centered on the impact of ChatGPT in the classroom. Since then, I’ve largely grown bored of the hype and the endlessly repeated tropes that AI will change everything, that we need to adapt or die, or that AI is poised to open new horizons.

Since I appreciate the efforts of folks like Joshua Nundell to respond to and critique some of the recent conversations, I thought I might add my two cents in the spirit of solidarity among bloggers, if nothing else.

Thing the First

It seems to me that some of the anxiety surrounding the impact of LLM-driven AI in the classroom is a bit misplaced. After all, there is a massive catalogue of approaches to writing that easily sidestep the problematic temptation to use, say, ChatGPT to produce an assignment. In my department alone, I know colleagues who do low-stakes, in-class writing; some who develop richly scaffolded writing assignments that require outlines, multiple drafts, proper citations, and other elements that LLMs can’t replicate; and some who encourage students to work in groups, where peer pressure mitigates the risk of using ChatGPT.

Each of these approaches has pedagogical merit and is a well-tested tool in a teaching tool kit. In other words, creating scenarios that discourage LLM-assisted writing doesn’t involve re-thinking how we teach. It simply involves adopting what many have argued are “good practices” for teaching writing anyway. Of course, I understand that incorporating these practices into a class involves a bit of a redesign, but it’s hardly a revolution.

Thing the Second

Recent hand-wringing over the role that LLM-driven AI plays in scholarship is mostly ridiculous. The examples used are, of course, egregious (especially those that preserve the telltale word “Certainly” before listing a bunch of references in a literature review) and, presumably, embarrassing to the journals where such text appears. But let’s be honest here: these are not good journals. The examples bandied about the internet are simply not good articles, as even a cursory survey of their content reveals, despite their being far outside my field.

In other words, this does little to convince me that a wave of AI-generated content is welling up in the depths of the more unscrupulous scholarly world. Of course, most academics know that a tremendous amount of poor and mediocre scholarship exists. This is not driven by ready access to LLM-derived AI composition, but by the irresistible pressures to publish frequently, to produce impressive quantitative markers of scholarly performance, and to constantly justify a position within the academy. Of course, publishers are only too happy to take advantage of the need for content. Ironically, the pressures produced by unrealistic research expectations and unscrupulous publishers rely partly on over-extended and over-worked faculty who can’t (or, more tactically, won’t) fulfill their professional obligations as reviewers.

It seems to me that this ecosystem is as much to blame as the technology itself for the rise in articles that carelessly make use of LLMs’ capacity to generate plausible-sounding text. This isn’t to absolve the “authors” of such articles of dishonest practices, but to suggest that blaming it on ChatGPT is mistaking the symptom for the disease.

Thing the Third

Over the last dozen years, I’ve shifted from being an enthusiastic advocate for open-access academic publishing to more of an agnostic skeptic. This isn’t because I think OA publishing is bad or wrong (after all, I run an open-access press), but because I see OA publishing as part of a more complex scholarly ecosystem that isn’t necessarily an unqualified good for all participants in this system.

It has been interesting to me to see how scholars have pivoted from championing the power of OA publications to democratize knowledge to hesitating just a bit now as it becomes clear that OA publications may form an important component of future LLMs. Without disparaging the entire OA movement, it seems apparent that the emergence of LLMs, and the recent challenges from copyright holders whose works constitute these models, creates opportunities for OA texts to serve as a foundation for new forms of automated and algorithmically derived knowledge making.

Of course, for this to work, the larger ecosystem has to continue to produce high-quality OA texts for our new LLMs to consume. If we imagine that publishers will ultimately seek to monetize LLMs and their algorithms, then the loop is effectively closing. The growing body of OA publications, which some scholars and institutions pay to produce, will invariably populate the next generation of LLMs, which will, in turn, power the next batch of AI text generators.

This isn’t some kind of radically new observation, but it does, I think, help me understand how the larger ecosystem surrounding AI text generators and LLMs intersects with both teaching and publishing in the academy.
