When I finally talked to my students about AI, it was a relief that we could figure things out together
Talking History
I waited a couple of semesters before discussing generative AI with my classes, both because I hadn’t yet wrapped my head around it and because it didn’t seem like it would be useful to the students in my courses. But this semester, I decided it was time!
My course, “Mexico Between Fact and Fiction,” an upper-division history elective, has weekly short writing assignments that potentially lend themselves to AI experimentation. I also felt more prepared to tackle the issue, having attended two conversations hosted by CDIL on generative AI in the classroom.
While I think my sentiments generally lean toward AI-skeptical (mainly because I can’t get over the fact that millions of authors have had their work “borrowed” without consent for uses they haven’t agreed to), I decided that I didn’t want to prohibit the use of AI in my class outright. Instead, I thought honesty about my skepticism would not only be more effective but would also give students the chance to experiment a bit with generative AI and assess it for themselves.
I raised the issue on our second day of class during our discussion of a chapter from Hayden White's Metahistory. The chapter, brief and pretty dense, offers a provocative case for how history and fiction are formally indistinguishable. It examines the (European) history of history writing, explaining how things were different before and after the French Revolution. Literary arts and fabulation used to be at the heart of history writing, but by the nineteenth century, historians had embraced "facts" as the basis of good history. Despite this, White argues, fiction and history are both essentially imaginary constructions of the past.
We began the discussion with a question I posed to the class: how did the nineteenth-century division of "history" and "fiction" come about, according to White? The students in this group of twenty-five mostly history majors started out with some tentative suggestions, but after several comments I could see that they had a firm grasp of the chapter's content. They understood that the eighteenth-century model of history writing had become "dangerous" in the eyes of historians; they could see how this style of writing was even blamed for the French Revolution; and they recognized that an embrace of "facts" was the result.
Bringing ChatGPT into the Conversation
After this happily successful discussion, I turned on my screen and showed the students that I’d asked ChatGPT the same question. The response was about three paragraphs long. I read it aloud. Then I asked the students to comment on what they saw in the response. What did they think of it? What stood out to them?
The first comment was this: “Well, it doesn’t really answer the question.” And indeed, it did not! ChatGPT gave a handy three-paragraph summary of Hayden White’s general philosophy, but it did not even come close to answering the more specific question about how, when, or why the division of history and fiction described by White came about.
With this as a launching pad, we had a discussion about what was in the reply and why the reply was sub-standard. I was quick to point out to them that all of their responses had been much better. One student suggested a different, related question about White’s chapter, and I typed that into the ChatGPT conversation; we received an identical three-paragraph response in reply. That raised some eyebrows.
I then broadened the conversation to a more general discussion of generative AI. They had a lot to say. For the most part, they were more skeptical than enthusiastic. One person commented that it was good at coding, and another noted that certain workplaces (like consulting) were embracing its use. But there was also concern that bots would be reading their job applications (something already happening) and not doing it particularly well.
During the conversation, I also expressed my own views, stating my concern about copyright and my impression that questions about history often result in fairly blinkered responses (given who has written the bulk of history tomes in the past couple of centuries). But I also made clear that AI wouldn’t be prohibited in our class. Instead, I encouraged them to think about how various uses of AI would or wouldn’t fit with our learning goals in the course.
Process Over Product
When I was revamping my syllabus over the fall and winter, I had AI in mind (particularly after attending an AI workshop facilitated by CDIL), and this focused my attention on my learning goals in new ways. I began by asking myself, “How can I write paper prompts that discourage students from using AI?” which soon led me to a related question: “Why don’t I want the students to use AI for their papers?”
I know the answer to this might seem obvious, but I discovered that it’s actually not so simple. Apart from my stated copyright concerns, I had to ponder why I didn’t want them to use AI. Was it just that I didn’t want them taking shortcuts? Shortcuts to what?
The answer I came up with is this: I don’t want them skipping the writing because they learn something through the process of writing. If AI does the writing for them, they miss out on critical learning moments in our class. In other words, it’s not the product of the writing that I care about for my students (in this particular class). It’s the learning and thinking and wrestling with ideas that matter.
But this insight led me to identify two immediate questions:
1. How can I get students to think of a paper as a process, not just a product?
In my experience, students are trained to think of paper writing as a results-oriented pursuit. Years of feedback and grade-giving have taught them that the product is what matters. Write a paper and hope that it meets the professor’s idea of excellence! How could I communicate to the students that the goal for me was not producing perfect papers but wrestling with ideas?
One way is simply to say that, of course. But how well would that work if I was still grading the papers based on the result? Was there a way to grade the papers without focusing on the result? Was there a way for my grading to communicate to the students, "What I really care about here is the learning process, not a perfect paper"?
2. What if the process doesn’t have to be a paper at all?
It wasn’t long before I found myself asking an even more basic question: Is writing a paper the only way for students to learn history? Could the process I’m looking for happen in other formats and modes?
I eventually concluded that it could take different forms. For this class, I realized that if the students were truly wrestling with ideas, my goal would be achieved. Their method for doing so would be a secondary concern. Writing is one way to do it, but there are others.
Formative Feedback, Student Choice
Here is where my ongoing work with a Center for Teaching Excellence cohort comes in. In this “More Just Grading” cohort, we have been discussing material by other instructors and teaching experts in order to think about what grades do. I was able to use some of the materials from this cohort to consider how my grading approach could clearly communicate my learning goals to students.
Foregrounding Feedback
As I’ve learned in my cohort, formative feedback is the single most effective way of promoting student learning.
In studies where some students were given only grades, others were given grades and formative feedback, and still others were given only formative feedback, which group do you think improved most? The third group – those who were given only formative feedback. In other words, grades are more of a distraction than a help. (If you're interested in learning more about this, the study is by Ruth Butler, and I read about it in Chapter 7 of Cathy Davidson's The New Education.)
So I decided to embrace this approach of only giving formative feedback. What does that mean in practice? All their assignments are graded as complete/incomplete. On the first day of class, I shared two pieces of writing with them – the chapter by Davidson and an article by Alfie Kohn – and we discussed them together. I explained my grading approach – an effort to focus on learning rather than on the reward conferred by the grade.
Giving Students Options
Once I realized that my goal in the course was about engaging ideas, it also freed me up to see that the students could have their own, complementary goals. If the method (writing or not writing) is secondary to me, then why should I dictate the format? I decided to invite the students to make some choices in the course that would further encourage them to see the assignments less as efforts to meet my standards and more as efforts to meet learning goals: mine and theirs.
In this class, students can elect to do their assignments in writing (if they want to work on their writing, which I’m happy to help with) or in other formats (if they want to work on other goals). Broadly speaking, I’ve set goals for the “what” but have asked the students to make choices about the “how.” So, for example, one of my learning goals for the class is for the students to see both fiction and history differently by the end of the class, but they might choose to write analytic papers or creative pieces for their major assignments.
What We Learned
So what are my takeaways from 1) reflecting on what I really wanted students to learn and 2) talking to my class about AI? What benefits does AI confer in learning history?
Identifying Guidelines for Using AI
In my reflection on my learning goals in the course, I concluded that generative AI might well help a student learn quickly about some background relating to Hayden White, or Mexican history, and I would have no problem with that. But it won't be a great help to them in generating content for assignments if they are actually invested in what they're learning and why they're learning it.
In truth, the class had come to a similar conclusion (mostly without me): that generative AI was not great at answering specific questions related to our material; nor was it entirely reliable—at least in isolation—as a source of general information about our subject.
Talking about AI Matters
On the other hand, there was a palpable sense of relief among the students that we were having the conversation. They agreed that it's hard to know or guess what faculty think about it, and getting any kind of guidance was clearly appreciated. They were deeply invested in talking about AI, even if their general tenor was more skeptical than enthusiastic. I found myself thinking, at one point, "Wow, this is their future." They are going to encounter AI more and more; of course they want to talk about it and think about it.
I also felt that by talking about it together in this way, I’d communicated to them that my role was not to police their use of AI but to think critically with them about the use of AI. I hadn’t fully articulated this to myself as a goal of the conversation, but I was happy with it as a result.
If I can continue to be a thought partner for my students about the role of AI in their learning, I’ll continue to hear about their experiences and opinions, and that will help me understand where they are now and where they might need to go next.
For more on this topic, see the section on Engaging with Generative AI in Communicating with Your Students around AI.