Updated Apr 03, 2026
At Student Voice AI, we are interested in what students actually say when universities introduce a new technology into learning and assessment, not just whether usage goes up. That is why Glenys Oberg, Yifei Liang, Margaret Bearman, Tim Fawns, Michael Henderson and Kelly E. Matthews' Higher Education paper, "Feeling AI: Circulating emotions, institutional climates, and moral boundaries in student use of AI", matters. Drawing on a national Australian survey of 8,021 students and focus groups with 79 students, it shows that universities should treat AI as an emotional and institutional issue as much as a technical one. For UK teams reviewing module feedback, AI pilot comments, and broader student experience data, that is a very useful distinction.
Most higher education debate about AI starts with integrity risk, productivity gains, or teaching design. The problem is that those frames can miss how students actually experience AI in day-to-day study. A tool can be convenient and still feel risky. It can save time and still leave students worried they are crossing a line.
This paper asks how students emotionally engage with AI in higher education, and what those emotions reveal about wider institutional climates, moral expectations, and boundaries around acceptable use. The context was Australia in 2024, when universities were under visible regulatory pressure to respond to generative AI. That matters for UK readers because many institutions here are in a similar phase: policy is moving quickly, local rules vary across modules, and students are trying to work out where sensible support ends and academic misconduct begins.
Methodologically, the study combines breadth with depth. The survey mapped students' emotional responses to AI across four universities, while the focus groups showed how those emotions played out in relation to assessment, to learning, and to creativity and voice. That makes the paper more useful than a simple attitude poll. It explains not only whether students feel positive or negative, but why those feelings shift depending on context.
Student responses to AI were clearly ambivalent rather than cleanly polarised. Around half of surveyed students reported optimism (54.3%) and excitement (50.0%) about AI, but scepticism was slightly more common still, at 55.8%. Worry and gratitude were also both common. That combination matters because institutions can easily misread adoption as trust. A student may use AI frequently while still feeling uneasy about whether it is accurate, legitimate, or safe.
Assessment was the area where institutional anxiety became most visible. The focus groups showed students connecting AI use to surveillance, plagiarism risk, and inconsistent rules. In the paper's terms, AI had become caught up in an "affective climate" of vigilance and suspicion. That is highly transferable to UK higher education. When assessment guidance changes quickly and module-level expectations differ, students often turn to open comments to describe confusion, self-policing, and fear of being judged unfairly.
Students also described AI as emotionally double-edged in learning itself. Some used it for reassurance, drafting help, or quick feedback, then felt worse rather than better because the tool introduced doubt about whether they were learning properly. Others spoke about guilt, laziness, or the sense that relying on AI might undermine hard-won academic habits. One student captured the line many participants were trying to draw:
"Cutting down paragraphs, grammar stuff, that's fine. But research? Do it yourself"
That quote gets to the heart of the paper. Students were not only asking whether AI works; they were asking what kind of learner they become when they use it. The same pattern appeared in the section on creativity and voice. Many participants strongly defended originality and authorial identity, worrying that too much AI would flatten their thinking or make their work feel less like their own. In other words, optimism about AI often coexisted with a desire to protect authenticity, effort, and belonging.
The paper's most useful institutional conclusion is that universities need more than technical AI literacy. They need critical affective literacy. The authors argue that students need spaces to discuss relief, guilt, scepticism, and fear, rather than being pushed into a simple "allowed or banned" frame. That is important because emotional responses are part of the data. They tell universities where policy is unclear, where trust is breaking down, and where students feel they must navigate AI alone.
For UK higher education teams, the first implication is to stop evaluating AI mainly through adoption metrics. Ask what students feel when they use it, where they trust it, where they hesitate, and whether rules feel intelligible across different modules. A short pulse survey that measures usefulness but not worry, guilt, or trust will miss a large part of the picture.
Second, universities should separate AI experience into distinct domains when collecting feedback. This paper shows why a single prompt about "AI in learning" is too blunt. Comments about assessment risk, comments about learning support, and comments about creativity or voice are not the same problem. They need different interventions. For Student Experience and Market Insights teams, this is exactly where systematic open-text analysis becomes useful.
Third, institutions should read emotionally charged comments as early warning signals rather than noise. When students say a tool feels risky, unfair, or confusing, they are often describing a governance problem, not just a personal preference. Student Voice Analytics is relevant here because it can help universities categorise and benchmark themes such as trust, policy clarity, authorship, fairness, and surveillance across large comment sets, so AI policy is shaped by evidence rather than anecdote.
Finally, the paper points towards a more credible policy stance: clearer local guidance, more explicit conversations about acceptable use, and less reliance on fear as the main implementation tool. If universities want students to use AI responsibly, they need to create conditions in which students can ask questions without feeling that uncertainty itself is suspicious.
Q: How should universities ask students about AI if they want actionable evidence rather than a vague temperature check?
A: Use a small set of separate prompts on usefulness, trust, fairness, policy clarity, and emotional response, then pair them with an open-text question such as "What feels most helpful or most risky about AI use in your course?" This paper suggests students often hold positive and negative feelings at the same time, so institutions need questions that let those tensions show up clearly.
Q: What should UK institutions keep in mind when applying findings from an Australian study?
A: The regulatory context in the paper is specifically Australian, and the authors are careful to present the findings as a snapshot from a fast-moving moment. But the underlying issues transfer well: decentralised module rules, integrity concerns, uneven staff messaging, and students trying to judge where AI fits in legitimate study practice. UK universities should use the paper as a guide to what to test locally, not as a one-size-fits-all blueprint.
Q: What does this change about student voice work in higher education?
A: It broadens it. Student voice on AI is not only about whether students approve of a tool. It is about whether they feel safe using it, whether the rules feel coherent, whether trust in assessment is being protected, and whether students still feel like authors of their own work. That means free-text feedback becomes especially valuable, because it captures the emotional and moral dimensions that a headline score will flatten.
[Paper Source]: Glenys Oberg, Yifei Liang, Margaret Bearman, Tim Fawns, Michael Henderson and Kelly E. Matthews, "Feeling AI: Circulating emotions, institutional climates, and moral boundaries in student use of AI", Higher Education. DOI: 10.1007/s10734-026-01658-6