Student Education and Empowerment in the Age of AI

June 2025 | Claire Angus

This report explores the evolving role of generative AI in student learning at Boston College, drawing on insights from a year-long initiative that included a faculty Working Group on Purposeful AI and a parallel Student AI Advisors group. Through guided discussions, collaborative activities, and classroom pilots, the Center for Digital Innovation in Learning (CDIL) at BC sought to learn how students perceive, understand, and engage with AI in an educational context, so that their needs and voices could be considered in the course design process.

Students at BC are eager to move past the dominant institutional narrative that suggests that using AI is primarily an issue of academic integrity. Instead, they expressed a strong desire for more nuanced ethical guidance, practical skill development, and open dialogue with faculty. Rather than using AI to avoid learning, most of the students we spoke to reported using it to reinforce class material, clarify confusing concepts, and study more effectively – often in ways not easily observable by faculty. Meanwhile, a lack of clear communication and psychological safety around the use and broader implications of AI has led to student confusion, anxiety, and missed opportunities to support meaningful learning.

The insights and recommendations offered throughout this report are intended to spark discussion and to support the design and development of further research and innovative teaching practices that integrate AI while upholding BC’s educational principles and Ignatian values. 

The mission of the Center for Digital Innovation in Learning (CDIL) at Boston College is to partner with faculty to create thoughtful technology-enhanced learning experiences. We have supported faculty through the emergence of generative AI over the past few years as part of this mission. At the start of the 2024-2025 academic year, we observed a growing polarization in the perspectives of students and faculty around the use of large language models (LLMs) and other forms of generative AI that we thought warranted a closer look.  

According to a 2025 survey by Educause, the top factor driving AI-related strategic planning in higher education is “the rise of student use of AI in courses,” followed closely by concerns about “risks of inappropriate use.” Taken together, these concerns seem to have prompted many institutions to initially frame discussions about generative AI on campus through the lens of risk mitigation, leading to an emphasis on academic integrity, potential misconduct, and clarifying course policies.

In addition to concerns about student learning and assessment, faculty have also felt the pressure of broader rhetoric and cultural shifts related to AI that threaten to disrupt their own livelihoods and professional identities. Articles like D. Graham Burnett’s piece in the New Yorker, “Will the Humanities Survive Artificial Intelligence?” and James D. Walsh’s “Everyone is Cheating Their Way Through College: ChatGPT Has Unraveled the Entire Academic Project” from New York Magazine have more recently captured the despair some faculty feel about the impact of generative AI on the way they have historically taught. At a recent faculty event, a BC professor lamented, “People don’t want to think about it, because it’s so depressing.” As faculty continue wrestling with these professional and personal concerns, they must also attempt to adjust their approaches to student evaluation to ensure work is being assessed fairly.

In the meantime, students are nervous about being falsely accused of academic dishonesty and have been left largely on their own to navigate the tensions between pursuing their studies and preparing for the ever-changing job market that awaits them upon graduation. Despite widespread concern among faculty about how students use generative AI, students have reported that their professors engage in minimal, if any, constructive discussion in the classroom about the impact of AI on their learning. This void has left students relatively uninformed about the principles underlying their instructors’ policies and lacking a solid grasp of how to use AI constructively. Instead, the centering of academic integrity as the core AI issue in the classroom, exacerbated by the difficulty faculty have in detecting its use, has created an environment of suspicion and distrust, putting faculty-student relationships and effective learning outcomes at risk.

Student AI Advisors

By September 2024, CDIL and BC’s Information Technology Services (ITS) had already put in motion plans to run a year-long Working Group exploring thoughtful and purposeful uses of generative AI for BC faculty. To begin tackling this issue of distrust and bridging the gap between faculty and student perspectives, we decided to launch a parallel Student AI Advisors group as a dedicated forum where BC students could discuss their ideas and concerns with each other.

The goals of the Student AI Advisors Group were threefold:

  1. Cultivating Conversation: Sharing perspectives about AI with professors and peers

  2. Experimenting with AI: Trying educational applications of AI and reflecting on outcomes

  3. Gathering Insights: Surfacing how students were using AI and how it was shaping their learning experiences

We accepted 14 undergraduate students to the Student AI Advisors group in September, aiming for diverse representation across age, gender, and field of study. Ten students ultimately completed the year-long program after two left the group early and two could not continue into the spring semester. The group met monthly for structured discussions on topics such as academic integrity, AI use cases in coursework, and experimentation with emerging AI tools.

Student AI Advisors contributed their perspectives through group discussion, written reflections, and voluntary follow-up interviews. Insights were also gathered from a separate student focus group in May 2025 that evaluated the effectiveness of a custom AI tutor for Spanish grammar practice. Beyond BC, CDIL also looked at emerging themes in national and global research to notice where sentiments and themes emerging in our discussions were being echoed at a broader level. 

This report includes descriptions of several of the methods of engagement we used with students and faculty that illustrate how this dialogue evolved. All students quoted in this report gave their consent to be quoted anonymously. Unless indicated otherwise, all quotes are from Boston College undergraduate students. Some quotes have been lightly edited for clarity.

Building Dialogue and Trust Between Faculty and Students

Faculty-Student “Penpals”

The Purposeful AI Faculty Working Group served as an opportunity for a group of BC faculty to explore ways that AI could augment their teaching practices by creating custom AI assistants for their courses. Our sessions together began with an emphasis on the importance of exploring student needs and experiences prior to designing these assistants, in line with the human-centered design philosophy that CDIL supports. To foster greater empathy with and deeper understanding of undergraduate students, we introduced an asynchronous activity we called “Penpals.” Faculty were invited to write down questions they wished they could ask students directly. Their submissions included questions like:

  • How are you feeling about getting a job after you graduate?

  • What’s an ethical challenge you’ve faced when using AI?

  • What should professors know about how you’d like AI to be used in your classes?

CDIL compiled the responses, identified themes, and used them to shape discussion prompts for the student group. 

Student Answers and Questions for Faculty

When we met with the Student AI Advisors, we began with introductions and then posed a few faculty-submitted questions to the group for open discussion. As students warmed up, we transitioned to a more structured activity: students responded individually to additional questions on a shared Miro board, which allowed for deeper, more nuanced insights. We closed the session by asking students to write down what they would want to ask their professors about AI.

Their questions included:

  • How do you see your class preparing us for life after BC?

  • How has GenAI influenced your views on teaching and your students?

  • How can I use AI ethically?

  • How can students work with you to create a mutually trusting environment regarding AI?

Early Takeaways

When the faculty group heard the students’ responses to their questions, they picked up their discussion from the prior meeting with renewed perspective, wrestling with the potential implications for their teaching. Even though the students weren’t in the room at the time, hearing unfiltered student concerns seemed to create the sense of connection and empathy we had hoped for. As faculty then responded to the students’ questions, several emphasized a need to work together:

  • “I hope that students feel comfortable asking questions about AI use. I hope that I create an atmosphere where asking those questions feels safe and appropriate. I think we need to work together to figure this out.”

  • “I think it is critical to understand we need to work on this together.  Professors need you to push us and help us to ensure we have a safe environment to share best practices — what works and what demotivates. As with most things, it is empathy and listening.” 

  • “This is an awesome question. Trust in AI grows as we openly explore its capabilities and challenges together, ensuring ethical considerations remain at the forefront.”

Although we still had lots of ground to cover, this first step of using guided questions, reflection, and empathy seemed to help students and faculty recognize that they could approach these challenging conversations together as learners. 

Faculty Silence Leaves a Void

One clear consensus to emerge among the Student AI Advisors was that they wanted more open, honest conversations with their professors about how to navigate generative AI responsibly. Many students reported that after an initial cursory discussion about academic integrity and course policy, their instructors rarely, if ever, returned to the topic during the semester. 

There are likely many factors that contribute to this silence – first and foremost, faculty may simply have other priorities for their in-class time with students. Some might feel underprepared to speak about the nuances of generative AI in an informed way. Still others might avoid the topic because the emotional and existential questions it raises could open a can of worms and derail the class.

Regardless of the reasons, their silence often signals to students that the topic is closed for discussion. As one Student AI Advisor shared, “Most professors already have a well-established policy on AI. And when they’re going over it, they’re not really asking students, ‘hey, share with me how you’re using AI.’ But many of them are just like ‘ok, this is a final thing. I don’t need to know about your opinion, this is how I’m gonna do it in my class.’ And this can shut up a lot of discussions among students.”

A Missed Opportunity

In his New Yorker article, Burnett observes that the general reticence of faculty to discuss AI is both widespread and strange (emphasis in original): “On campus, we’re in a bizarre interlude: everyone seems intent on pretending that the most significant revolution in the world of thought in the past century isn’t happening. The approach appears to be: ‘We’ll just tell the kids they can’t use these tools and carry on as before.’ This is, simply, madness. And it won’t hold for long.”

Meanwhile, students remain dissatisfied with leaving the conversation at that point. They are thinking about how AI will affect them in the future, and the silence of those they look to for guidance can be confusing. “AI… will be an important tool for our future careers,” one student advisor shared. “Not teaching us how to properly use [it] can put us at a disadvantage.”

Another student expressed their frustration with the lack of acknowledgement about the benefits of AI. “There has to be a general understanding that we are going to use AI to make our lives more efficient,” they said. Finally, another student framed their perspective in terms of what they expected from their instructors: “AI should be encouraged. You are doing your students a disservice by trending them away from new technologies. It is the teacher’s job to prepare the student for the future.”

While students wanted to go beyond the framing of AI as an academic integrity issue, they did express a need for more specificity in the guidelines they received from their professors on what constituted acceptable and unacceptable academic uses. One of the Student AI Advisors reflected, “All professors have different opinions. Sometimes I feel like… what is right and wrong? Students can be stressed because what’s being taught [as] correct in one setting is suddenly not correct in another setting. And this is like, really murky.”

Another explained that when students are encouraged to cite their usage of AI in some contexts but forbidden to use it at all in others, faculty unintentionally send mixed messages to students. “Students are afraid [to cite AI] because they are afraid [they will] get in trouble [by] admitting that they have used AI in their work [at all].” Burnett observes something similar: “It’s not that [students are] dishonest; it’s that they’re paralyzed. As one quiet young woman explained after class, nearly every syllabus now includes a warning: Use ChatGPT or similar tools, and you’ll be reported to the academic deans. Nobody wants to risk it.” 

Taking a Risk

A Student AI Advisor we will call Alison was dissatisfied with the depth of discussion when her professor initially shared his course’s AI policy, so she went to speak with him outside class to ask some clarifying questions. That conversation was enough to prompt the professor to facilitate a discussion with the whole class in order to learn more.

Alison shared that initially, others in the class were reluctant to volunteer information, afraid that any admission of AI use would be used to punish them. Once their instructor announced that he wouldn’t punish students for answering his questions truthfully, they started sharing more openly. “He was shocked,” she shared. “He didn’t realize [all the ways] students were using AI in his class… and that sometimes it benefits our education.” 

After the class discussion, the professor challenged Alison to submit two versions of her next assignment: one in which she used AI, and one in which she didn’t. Along with both versions, Alison listed all prompts she had used on the AI-assisted paper. 

“He was shocked … he [thought] that I did a really good job with prompting,” she shared. “He’s actually very open-minded, he [just] didn’t know that he should have a conversation with students about AI. His lack of awareness of how students are using AI sort of [acted as] an invisible barrier that really stopped him from communicating actively with students and seeing his students’ perspective.” 

Exploring Responsible & Ethical Use of AI 

Even in the best cases, deciding to open up dialogue about constructive uses of AI for education marks the beginning, not the end, of a complex journey. To help students take the next step toward more nuanced and robust conversations about the ethical use of AI in their coursework, we adapted an activity developed by Laura Roberts at WPI and ran it with the Student AI Advisors group.

In pairs or small groups, students reviewed short written scenarios describing how AI might be used in an assignment—ranging from using ChatGPT to fix grammar to submitting AI-generated content with no changes. Each group ranked their scenario on a six-point spectrum from “clearly inappropriate” to “clearly appropriate,” color-coded accordingly from red to dark green. They then explained and debated their choices with the larger group, surfacing a surprising amount of disagreement even within a like-minded cohort.

Student reflections revealed just how thought-provoking they found the experience:

  • “It highlighted the many differing opinions that we all had about each scenario.”

  • “I would have assumed that we would have been more on the same page.”

  • “Differences came down to what the end value was for our education: some emphasized the acquisition of knowledge, others emphasized the importance of learning how to work, while others emphasized efficiency.”

  • “I wish we had more time to continue [these] discussions.”

Following the success of the student activity, we completed the same activity with the faculty group. 

What We Learned

While the results from both groups were surprisingly similar, the considerations each group used to arrive at their conclusions revealed a different level of contextual understanding.

  • Out of 17 scenarios, students and faculty agreed exactly in their placement of 9, and were within one degree on 4 more.

  • Of the 4 scenarios where the student group and faculty group reached conclusions that were two or more degrees apart, two were rated as “more acceptable” by students and two were rated as “more acceptable” by faculty. 

Faculty rated AI usage as more acceptable than students in the following two scenarios, where AI refined the final output of student writing (e.g. rewriting a paragraph or generating a rough abstract):

  • “A student writes their own paper but uses AI to rewrite a particularly unclear paragraph while keeping the meaning intact.”

  • “A student uses AI to generate a draft abstract for a paper they’ve written and then revises that abstract before submission.”

Students rated AI usage as more acceptable than faculty in the following two scenarios, where AI was used to develop or refine their arguments earlier in the writing process (e.g. outlining or strengthening a thesis):

  • “A student asks AI to suggest an outline for a research paper they plan to write themselves.”

  • “A student drafts their own thesis and asks AI to suggest ways to make it more compelling without changing the core idea.”

How Students (Actually) Use AI 

If instructors do not engage in conversations about AI with their students, it’s possible that their ideas about “how students are using AI” may be disproportionately influenced by what is visible in students’ submitted work. Given that generative AI has been broadly positioned as a tool that maximizes efficiency while minimizing effort, instructors may see overly polished prose or familiar tells that are not typical in undergraduate writing and therefore conclude that the student bypassed the rigor of doing the work themselves. It’s important to acknowledge that at least some of the time, those instructors are right (and justifiably frustrated as a result). Most students agreed that “other students” used ChatGPT to offload work they didn’t want to do themselves.

But what happens if faculty and students don’t always mean the same thing when they refer to “using AI”? 

Using generative AI as a strategy to avoid effort entirely, e.g. by generating a complete paper that is directly submitted for assessment, was not something that the students we spoke to would defend. We have taken to referring to this type of AI usage as outsourcing – delegating the work completely to someone or something else.  If anything, students generally spoke with disdain about those who would use AI in this way. 

“It’s never good to use it just for an answer, because then you don’t learn anything, and there’s no point in that,” one undergraduate in our focus group stated. Another followed up, “It’s not, like, stealthy or anything. I mean, it’s pretty obvious. Plus, some of the time it’s just straight-up wrong, so you have to know what you’re talking about to be able to fact-check it anyway.” The students in the advisors group agreed. “I used to think of [using AI] as a cheating thing, and I was like, ‘I’m too good for this,’” one explained. Many others referenced first learning about generative AI while still in high school, where it seemed simple: “That was the whole stigma in high school… every time you wrote a paper you’d have to put it through an AI checker, and people would get caught, so the whole thing was that people would use it to cheat.”

The good news? Both faculty and high school educators have gotten the message across to students clearly that completely offloading their academic work to generative AI is inappropriate. While there will always be students who attempt to see what they can get away with, we didn’t encounter any pushback from students on the more obvious and egregious examples of using AI to completely outsource their work. 

Instead, the conversation needs to become more nuanced – and to recognize constructive uses of AI that genuinely assist student learning. This next section will look in more detail at some of these augmenting uses of AI for learning. 

Creative and Constructive Uses of AI

Students we spoke to this year were eager to talk about how they used AI to: 

  1. Deepen their understanding of course concepts 

  2. Make use of additional practice opportunities for skill development

  3. Focus their efforts in research and test prep

Deepening Understanding of Course Concepts

During the first Student AI Advisors meeting, we asked the students, “How does your learning in the classroom differ from your learning from AI?” While no one suggested that AI should replace the education they got in class, they were quick to identify the supplemental ways that AI helped them to keep up with course material, stay engaged, and clarify tricky topics. 

Students particularly enjoyed the personalized pacing of being able to slow down or get clarification when they needed more explanation. “With AI, my learning mostly stems from looking for examples for concepts and ideas that need to be reinforced,” one student shared. Another agreed, “[AI is] helpful for better understanding complex concepts that the professor may have glossed over.” This use case for AI was the one that seemed to come up most frequently over the course of the year. “I like using AI to explain topics I don’t understand after class, or to help me study,” we heard during the focus group. Shortening the time it took to understand was also helpful. “If I am confused about a topic from classroom discussion, AI will be able to explain the same thing much quicker and more concisely,” we heard from yet another student. 

Other students used AI to speed up or go deeper on a particular point of interest if they felt they had already grasped the basics. “Sometimes [in-class learning] leads to wasted time on topics I already understand or don’t care about,” one shared. “AI learning is much more engaging and efficient compared to learning in a classroom in which a professor is basically just reading a Powerpoint,” said another. Either way, students saw the control and agency that AI gave them as a strong benefit. 

Not only does AI offer the convenience of always-on supplemental tutoring, but several students also alluded to the idea that it allows them to persist further in mastering a topic, primarily because it removes a social barrier to repeatedly asking another person for help. This may be particularly worth considering at an institution like Boston College, where expectations surrounding students’ standards of achievement are particularly high.

One student from our focus group explained, “I know sometimes if I’m asking a friend or I keep going to a teacher… I’m like, ‘wow,’ from a self-conscious perspective, I’m like, ‘I’m being annoying,’ ‘this makes me feel like an idiot’. [Whereas] I’m not afraid to keep going with AI… ‘what does this mean? Why? What does this mean?’ Versus with a teacher or a friend I’ll kinda feel bad, like, ‘oh, I’m bothering you.’” 

Another student agreed, “With AI…you can ask as many questions as you want and don’t have to be afraid of asking a bunch of clarifying questions – which allows you to really understand the topic at hand.” 

Although this conversation took place toward the end of the academic year with younger undergraduates, their sentiment echoed something that came up at the beginning of the year from a senior. “There is a general fear of being judged, participating in front of people,” the senior reflected. “The fear of judgment is really detrimental to learning openly and confidently and AI gets rid of that fear.”

Focused Skill Development: Ricardo the Chatbot

One member of the faculty working group, Marta, created a custom chatbot named “Ricardo” for her Intermediate Spanish class. Marta wanted to provide her students with more opportunities to practice the grammatical concepts and Spanish verb tenses that they were learning in class, so she carefully crafted a system prompt, tested its output, and ultimately offered Ricardo to her students to guide them in extra practice as they studied. 

The general response from Marta’s students was very positive. Nineteen out of twenty students used Ricardo, and every student who used it reported that they found it helpful. Unlike more general purpose AI tools, Ricardo was grounded in the specific context of the course: Marta carefully instructed it to keep in step with what students were currently learning, rather than introducing ideas that went beyond the scope of what they had covered. This contextualization proved to be very helpful for students, who explained that knowing Ricardo was targeted toward exactly what they were learning in class helped them stay focused and reinforce the core concepts they were trying to learn. 
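
For readers curious about the mechanics, the brief sketch below shows one way a course-grounded assistant in the spirit of Ricardo might be set up with an OpenAI-style chat API. It is not Marta’s actual implementation; the model name, prompt wording, and unit boundaries are illustrative assumptions only.

    # A minimal sketch of a course-grounded tutor in the spirit of "Ricardo."
    # Not Marta's actual implementation; the model, prompt wording, and unit
    # boundaries below are illustrative assumptions.
    from openai import OpenAI

    client = OpenAI()  # assumes an OPENAI_API_KEY environment variable

    # The system prompt anchors the assistant to what the class has covered so far,
    # so it reinforces current material rather than introducing concepts beyond it.
    SYSTEM_PROMPT = (
        "You are Ricardo, a friendly practice partner for an Intermediate Spanish "
        "course. The class has covered the present, preterite, and imperfect tenses "
        "(an example unit boundary). Offer short practice exercises on those topics "
        "only, give gentle corrections, explain grammar in English or Spanish as the "
        "student prefers, and do not introduce vocabulary or tenses beyond this unit."
    )

    def ask_ricardo(history, student_message):
        """Send the conversation so far plus the new student message to the model."""
        messages = [{"role": "system", "content": SYSTEM_PROMPT}]
        messages += history
        messages.append({"role": "user", "content": student_message})
        response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
        return response.choices[0].message.content

    # Example use:
    # print(ask_ricardo([], "¿Me puedes dar un ejercicio sobre el pretérito?"))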

When we asked about their experience with Ricardo, several of Marta’s students were quick to highlight this contextualization as the most valuable aspect of using it to learn. Although they were generally positive about having this kind of support for their learning, they became noticeably more animated when talking about this aspect. “[The best thing about Ricardo was] that it was so in tune with what we were doing in class,” one student explained. Another quickly chimed in, “I totally agree. The best part is it knows where you are.” A classmate who had opened the conversation by admitting an initial reluctance to learn yet another tool said that the reluctance faded once Marta explained what Ricardo could do. “Once I found that it could quiz me on the exact content, that was a big selling point to me,” they said. “Context was important.”

The students in the focus group then continued to speculate about what this might look like in their other classes. “If it knew my calculus textbook, I think that would be a gamechanger,” one shared. “Like, if I could tell it, ‘I’m studying chapter 13.7 – and I’m trying to solve problem 7, can you help me through it?’ Then getting a step-by-step process of how to solve the problem.”

 One student went as far as to say that Ricardo had helped him finally understand grammatical concepts that he had been struggling with for years. “AI has completely changed how I study Spanish,” he said. “I’ve literally been taking Spanish [since middle school]. I’m so bad at Spanish, and I keep restarting …  but I understand the grammar so much better now that I have AI, because it [can go through and help me understand] in English, in Spanish, whatever… honestly I’d say it’s been a game-changer.”

Research and Studying

Along with the benefits of deeper, contextualized understanding and repeated, low stakes practice, AI also provided support to students in the realms of research and studying. 

While aware of the dangers of consulting untrustworthy sources – and at least nominally aware of the potential for LLMs to hallucinate nonexistent studies or sources – several students indicated that they used AI on a regular basis to explore and identify sources for further research. These use cases ranged from using AI to summarize the main arguments in an academic paper to coming up with lists of potential studies to explore. 

One student reflected that research technologies and tools have changed before, and that this change isn’t necessarily a bad thing. “Before the internet, you did research by going to the library and… whatever else you did, I wasn’t born… but when I was learning history and doing research it wasn’t, ‘go to the library,’ it was like, ‘figure out how to find credible sources online, because that’s how we do research now.’ Maybe something similar is going to happen with AI, where now you have this tool that you can’t avoid, so you need to learn how to work with it. And maybe some of that means working in a controlled environment… where you can set boundaries so that the student knows what to use AI for.” 

Students also got creative when it came time to study for exams or test themselves on how well they could recall and apply key course concepts. A student in our AI advisors group reflected on how her study habits had changed: “Last year, I would reread all the readings [to prepare for an exam]… and then do my study guide. This year, I realized that with AI, I can just upload all the readings and it will generate study guides for me. And then … I don’t have to reread all the documents … [I can] figure out what is less clear and then maybe revisit [those] areas. So it saved me so much time on just, you know, manually typing and reading through each line and more on only focusing on the area of weakness that I have.”

These more in-depth reflections give a glimpse into students’ thought processes around AI use that are otherwise not visible or obvious to faculty. By taking the familiar tasks of research and studying and finding creative new ways for AI to assist them, they are clearly augmenting, rather than outsourcing, their own learning. However, the campus rhetoric surrounding AI is still confusing enough that at least one student admitted they weren’t sure whether it was acceptable or ethical to use AI in this way. “What is okay and what is not okay when using AI for studying?” they asked. “Any use at all feels demonized depending on the class.”

Hidden Temptations and Pedagogical Challenges

While we made a strong effort in our roles as facilitators to maintain open communication and good rapport with students, we also noticed concerning themes that warrant additional attention.

Using AI to Structure or Synthesize Thought

One of the more problematic patterns to surface (introduced in the “Exploring Responsible & Ethical Use of AI” section) is the difference between what students and faculty consider appropriate use of AI in the earlier stages of traditional writing assignments. As mentioned earlier, faculty rated AI usage as more acceptable than students in the later stages of the writing process, while students rated AI usage as more acceptable than faculty earlier in the process, such as when creating an outline for a paper or refining a thesis.

The debate over using AI earlier in the writing process came up repeatedly in the faculty group as well as both student groups where we facilitated discussions. One faculty member succinctly summarized the issue: “I think what you’re pointing to… is writing as a form of thinking versus writing as a product.”

The writing-as-product mindset seemed deeply ingrained in some students and ignited fierce debate among others. During the appropriate-uses-of-AI activity, one student argued that at the undergraduate level very little of what students produce would be original thought anyway, and that many students therefore treat assignments as a game of guessing what their instructor wants and aiming to submit that. Others pushed back, emphasizing the importance of developing critical thinking and forming their own arguments. “In introductory courses especially, where AI can most easily replicate students’ work, teachers should help paint the bigger picture for students and really encourage them to not rely on AI for answers,” one suggested.

At another point, a conversation about the ethical use of AI revealed the depth of the disconnect for some students. We asked, “If AI completely disappeared tomorrow, how would that affect your educational experience?” Several students who responded agreed that they would feel the biggest impact in their humanities courses. More specifically, they said, they would miss the structure ChatGPT and other LLMs provide when writing essays and papers. Unlike the way they spoke about using ChatGPT to submit work directly (that was “cheating”), none of the students in this conversation acknowledged or seemed to think that there was anything wrong with using it earlier in the process.

Student 1: “I think it would mostly affect my humanities classes. I’m taking the first year writing seminar, and I’m like, ‘I want to talk about this in my paper, and I also want to talk about this. What’s a good way to connect the two?’ And so not having that safeguard or easy way of saying, ‘how do I connect the two?’…it would still be my own work, and still my own ideas, but it would take a lot longer to think through different connections and like… an outline. So it really helps you structure your ideas more.” 

Student 2: “Yeah, I agree 100%. I would definitely feel it more in philosophy and stuff like that. I’m not all that creative, to be honest with you, so I struggle, especially getting started. So it’s helpful to get that outline, 100%. Get a direction.”

Student 3: “I would say outlines … I think it’s helpful, when I don’t know how to structure an essay, I think that part of [ChatGPT] is really helpful.”

Student 4: “My philosophy class is like big against it which I don’t really understand because I find in other classes it can be very helpful as far as getting your ideas together, especially when you’re doing all these readings every semester and you have all these notes and different ideas about different things.”

The Challenges of Motivation and Engagement

We also heard further evidence from students supporting the hypothesis that they are more likely to use ChatGPT in ways that could be considered “cheating” in courses or on assignments that they found pointless or meaningless. “I do think people are more keen to ask AI to summarize ‘pointless’ or ‘terribly long or difficult’ readings for class, and anything that seems like a waste of time is outsourced to GenAI with much less thought than might have been necessary to plagiarize or cut corners years ago,” one student reflected.

Another student shared, “My professor assigned us a 33 page reading on [topic]. We didn’t get tested, it was a normal reading on it. So of course I’m going to ask ChatGPT to make a two-page summary and come into the class with the same knowledge of someone who read the whole thing for an hour. If you’re going to assign things that you think they don’t want to read, be assured that they’re not going to read it. If you think that people are not going to read it, then maybe don’t assign it.”

Students Will Expect Faculty to Set Higher Standards in Teaching

Faculty have repeatedly expressed frustration at what they perceive as students putting in a minimal level of effort to learn. We may be starting to see evidence of the inverse of this: students expressing their dissatisfaction with instructors who put in minimal effort to teach – or who don’t take the time to design courses and assessments that require more than a cut and paste answer from an LLM. 

When we got to the root of the issue, students started to express themselves more bluntly. “If AI can spit out these answers, then are the things in your course that deep or important? Maybe you should evaluate what you’re teaching in your course,” one said. Another explained, “One of the main reasons why I’m [part of the Student AI Advisors group] is this sense of frustration over the [apparent] uselessness … of a lot of classes that I’ve taken at BC because of AI …  Truthfully, maybe like 30% of my classes that I’ve taken in my four years at this school are now mostly irrelevant.”

These frustrations weren’t just because students were bored, though. They were combined with a sense that they are missing out on an opportunity to prepare for the future that they see emerging around them. “[Friends in banking and consulting] are terrified because … their job is mostly entering things into spreadsheets and making powerpoints … Right now … they can plug their work into AI and it does their work for them … [they’re] not telling their bosses, so they’re getting paid for doing nothing, but that’s obviously a bubble that is not going to last that long.”

Conclusion

This report offers a snapshot of the transitional moment we are in: students are experimenting with powerful new tools, often in isolation, while faculty navigate uncertainty and disruption in their professional roles. What stands in the way may be less a fundamental mismatch in values than a lack of clarity, communication, and opportunities to collaborate.

Integrating generative AI into higher education in a way that upholds BC’s academic mission and Jesuit values will require more than just policy updates or technological solutions. It will demand a cultural shift toward curiosity, empathy, and mutual accountability. It will also likely require institutional support and training for faculty to design learning experiences that are meaningful, human, and resistant to AI shortcuts. And it will mean encouraging students not just to follow rules, but to reflect, question, and grow. 

There is much work yet to be done, even as the broader technological and national landscape continues to shift at an alarming rate. Sometimes the right next step may not be entirely clear. However, faculty and staff who genuinely want to positively influence those under their teaching and care could do worse than to remember this final point of consensus among the Student AI Advisors group: 

“The best classes I’ve taken at this school? Have been the ones that have helped me learn how to live a better life.”

References

Burnett, D. G. (2025). Will the Humanities Survive Artificial Intelligence? The New Yorker. https://www.newyorker.com/culture/the-weekend-essay/will-the-humanities-survive-artificial-intelligence

Walsh, J. D. (2025). Everyone Is Cheating Their Way Through College: ChatGPT Has Unraveled the Entire Academic Project. New York Magazine. https://nymag.com/intelligencer/article/openai-chatgpt-ai-cheating-education-college-students-school.html
