ChatGPT is new, but the *idea* of essay-capable computer software has been around for much longer.
By the early 2000s, college faculty were fretting about the possibilities of machine scoring for college essays and how that would impact their teaching role. It’s not a big step from worrying about machine scoring to worrying about machine writing.
My feeling back then, when these tools seemed part of a distant future, was that they were 1) inevitable and 2) a potential source of improved options for teaching critical thinking, writing, and analysis.
Many of my students, even at selective institutions, were poor writers. As a result, much of my grading time was spent circling misspelled or mis-chosen words, identifying sentence fragments, and asking for pronoun references. Even students whose writing was technically correct were not yet skilled enough to convey complex arguments, provide sufficient context, or use evidence well.
Not surprisingly, coaching tended to get derailed into how to write instead of what to write.
I thought then that if these students had access to tools that could help them with the writing part, we could move on to really working on the ‘what’ part. We could focus on constructing sound arguments, providing good context, and selecting good evidence and supporting documentation.
Now that the age of AI content generators has arrived, I still think this is true.
This spring, teaching as an adjunct in a history course, I elected to use a policy that permitted the use of AI content generators if this was acknowledged in the endnotes.
Most students followed this instruction.
I still failed more students on these essays than I ever have in my life.
Not because they used AI. But because now that their poor writing wasn’t getting in the way, all the other problems in their work were clearly revealed.
They hadn’t done their assigned reading, so they didn’t spot it when the AI provided hallucinated summaries.
They didn’t correct the repetitive nature of AI content.
They didn’t know that AI provides fake quotes and fake citations and they didn’t check either for themselves.
It took time for me to annotate their essays: flagging the fake quotes, identifying the problems with the citations, highlighting the hallucinated information that had no connection to the assigned readings. But it also allowed me to base my score fully on the intellectual rigor of the work submitted, with no generosity for weak writing or effort expended.
This was a small class, so it was possible to spend the time.
I don’t know if it will make a difference to these students this semester. It was probably just too shocking for them to take it in.
But compounded across courses and subjects, I believe there is an opportunity here to demonstrate critical engagement, to demand and reward closer reading, and to foster deeper analytical thinking.
I think this is especially true for students whose poor writing skills prevented them from engaging with the material in meaningful ways, and who were not well served by a system that rewarded submission over content.
AI is here to stay. It is up to us as educators to use it well to the benefit of all our students.
Posted June 7, 2025
Saw an interesting case study in using AI for retail training.
The key paragraph, for me, was "Because the new sales process is meant to guide associates rather than provide a script, we recommended using a dynamic, AI-driven approach for the refresher simulations. Associates write in their own responses to the customer rather than select a multiple-choice option. A custom Language Learning Model (LLM) that was trained on the sales process powers the feedback for these simulations. Almost like a real training coach, training the LLM (also called a model) allows it to provide specific feedback based on what associates type in for their answers. This approach helps build associates' confidence and allows them to get personalized feedback."
It might be awful in practice, and it's definitely expensive and has IT security issues, but still - this potential is fascinating to me. Practice is everything, but practicing to a script is only useful until the players have memorized the script, which is why corporate role plays - live or ticky-box online - end up being so cringe in real life. If AI could provide a truly better, and reliably better, practice partner (one that doesn't occasionally hallucinate, or confirm rather than ease cultural bigotries and biases), that would be a truly useful application of these new tools.
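To make the quoted approach concrete, here is a minimal sketch of how such a simulation might assemble its input: the sales-process guidance, the customer's line, and the associate's free-text reply, combined into a coaching prompt for a model. Everything here (the rubric text, the function name, the idea of prompting rather than fine-tuning) is my own assumption for illustration, not a detail from the case study.

```python
# Hypothetical sketch of a dynamic, AI-driven refresher simulation.
# The rubric and names below are illustrative, not from the case study.

SALES_PROCESS_RUBRIC = (
    "Greet the customer, ask open-ended questions about their needs, "
    "recommend a product, and confirm next steps."
)

def build_feedback_prompt(customer_line: str, associate_reply: str) -> str:
    """Assemble the coaching prompt: rubric + scenario + free-text answer."""
    return (
        f"You are a retail sales coach. Sales process: {SALES_PROCESS_RUBRIC}\n"
        f"Customer said: {customer_line}\n"
        f"Associate replied: {associate_reply}\n"
        "Give specific, personalized feedback on how well the reply follows "
        "the sales process, without demanding a fixed script."
    )

prompt = build_feedback_prompt(
    "I'm looking for running shoes, but I'm not sure which kind.",
    "Sure! What kind of running do you usually do - roads or trails?",
)
# In production this prompt would go to the trained model for feedback;
# here we only show the assembled input.
print(prompt)
```

The point of the design is that the associate's answer is open-ended text, so the feedback can respond to what they actually wrote instead of scoring a multiple-choice pick.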
Reposted from my LinkedIn
In my experience, feedback is a gift that keeps on giving, and it's great to see that the data supports that.
How Effective Are Learning Strategies in Improving Learning Outcomes?
Reposted from my LinkedIn