Updated Apr 22, 2026
Student voice on Generative AI is easy to reduce to a usage rate. The more useful question for a university is what students are trying to get at the moment they feel stuck and open the tool. That is why Zara Hooley, Sitira Williams and Beverley Hancock-Smith's Studies in Higher Education paper, "Impact of text-generative artificial intelligence tools on students’ approach to assessment: a case study of a UK institution", matters for universities collecting student voice on assessment, feedback, and AI policy. It suggests that many students are not simply chasing efficiency. They are looking for immediate, private, low-judgement help when they cannot get unstuck quickly enough elsewhere.
The paper sits in a familiar UK higher education context. As the authors note, HEPI data suggested that up to 88% of UK students used GenAI tools in assessments during 2025. The wider sector is already trying to make sense of that shift, as seen in Advance HE's recent spotlight on student experiences of GenAI in UK universities. But a usage rate alone does not tell institutions whether students are seeking speed, privacy, reassurance, or a substitute for support they do not feel able to access.
Hooley, Williams and Hancock-Smith tackle that gap through a qualitative case study at a UK post-92 university with around 25,000 students and a diverse widening-participation intake. They conducted semi-structured interviews with 11 students across different disciplines and levels of study, with a student researcher involved in the interview design and fieldwork. The research question is practical and useful for survey teams: how are students actually using text-generative AI around assessment, what motivates that use, and what does it mean for learner identity?
The first finding is that students often use GenAI to supplement teaching when support feels too generic or too slow to access. Participants described turning to AI after lectures, seminars, or independent study when they still did not understand a concept or wanted the "workings out" that teaching had not made explicit. In large-cohort settings, that matters because a student can leave a session with unanswered questions and find AI easier to approach than a delayed email exchange with staff. The takeaway is simple: some AI use is really a signal of an unmet teaching-support need.
Immediacy and privacy were central, and for some students more important than perfect accuracy. Students repeatedly valued the fact that GenAI responded straight away and let them ask what they saw as basic or embarrassing questions without fear of judgement. One participant described it as:
"a personal lecturer on your phone"
That is a striking line because it explains why institutional concerns about accuracy do not automatically change behaviour. If a student wants help at 7 p.m., or wants a concept explained more simply than a lecture or search engine provides, speed and privacy can outweigh depth. The practical implication is that support availability, not only AI policy, shapes student choices.
Students described three main patterns of use: understanding concepts, getting started, and refining text. They used GenAI to clarify difficult material, sketch plans, generate ideas, and rephrase writing more clearly. The paper argues that the technology's simplifying and homogenising tendencies, often criticised elsewhere, were experienced by these students as helpful when they needed a quick overview or a lower-pressure route into a task. For institutions with large cohorts or widening-participation intakes, that is a useful signal about where students may be compensating for a real learning gap.
At the same time, students were alert to the risks. They talked openly about inaccuracy, superficiality, over-reliance, and the danger of de-skilling. Some participants felt the tool could help them express ideas they genuinely had but struggled to articulate. Others worried that if they outsourced too much thinking, they would weaken their own academic development. That tension fits wider evidence that students use Generative AI for feedback, but trust teachers more when the judgement becomes more consequential. The benefit of recognising this tension is better policy design: students need guidance that reflects both the attraction and the risk, especially where institutions are also weighing privacy and false-positive risks in AI detection.
For UK universities, the first implication is to stop asking only whether students use GenAI. Module evaluations, pulse surveys, and AI-specific questionnaires should separate concept clarification, planning, drafting, confidence, privacy, and trust. Those distinctions make the evidence far more actionable because they show whether the issue is policy clarity, teaching design, feedback access, or something else entirely. The payoff is tighter diagnosis and more precise intervention.
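As one concrete illustration, the sketch below shows how a survey team might tag items by dimension so results can be reported per dimension rather than as a single usage rate. It is a hypothetical item bank, not a validated instrument: the wordings, the dimension names, and the `summarise` helper are our own assumptions for illustration, not taken from the paper.

```python
from collections import Counter, defaultdict

# Illustrative item bank: each prompt is tagged with the dimension it measures,
# so results can be analysed per dimension rather than as one headline rate.
# Wordings and dimension labels are assumptions for illustration only.
AI_USE_ITEMS = [
    {"dimension": "concept_clarification",
     "prompt": "I use GenAI to understand concepts I found unclear in teaching."},
    {"dimension": "planning",
     "prompt": "I use GenAI to plan or structure an assessment before writing."},
    {"dimension": "drafting",
     "prompt": "I use GenAI to improve the wording of text I have written."},
    {"dimension": "confidence",
     "prompt": "GenAI lets me ask questions I would feel embarrassed to ask staff."},
    {"dimension": "privacy",
     "prompt": "I prefer GenAI because no one sees what I ask it."},
    {"dimension": "trust",
     "prompt": "I trust GenAI answers enough to rely on them in assessed work."},
]

LIKERT_SCALE = ["Never", "Rarely", "Sometimes", "Often", "Always"]

def summarise(responses):
    """Tally Likert answers per dimension instead of one headline rate.

    `responses` is a list of dicts like {"dimension": ..., "answer": ...}.
    """
    tallies = defaultdict(Counter)
    for r in responses:
        tallies[r["dimension"]][r["answer"]] += 1
    return dict(tallies)
```

Reporting at this level is what makes the distinction actionable: a cohort that scores high on "confidence" and "privacy" items points to support design, while one that scores high on "drafting" points to assessment and policy clarity.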
The second implication is to treat private AI use as a clue about hidden support needs. If students are using GenAI because they want a shame-free place to ask basic questions, that is not only an AI issue. It is also a teaching and support design issue. Clearer formative checkpoints, more visible office hours, better question channels, and assessment literacy work that aligns expectations earlier could reduce the need to seek that help elsewhere. The payoff is earlier support, lower friction, and fewer students quietly falling behind.
Third, institutions should pay close attention to where GenAI is bridging entry-level gaps and where it may be masking them. The paper is especially relevant for large-cohort and widening-participation settings because students valued simplified explanations that helped them catch up to the level at which teaching was pitched. That can be useful, but it also raises a practical risk: if AI is doing too much of the explanatory work, universities may miss where induction, academic skills support, or assessment guidance needs strengthening. The payoff is more equitable support for students who need it most, and a clearer view of where teaching design needs work.
The final implication is methodological. If universities add open-text prompts about AI and assessment, they need a governed way to interpret the answers, not a high-stakes workflow built around generic LLM comment analysis. Our student comment analysis governance checklist is a useful starting point. Student Voice Analytics can then help teams group comments on immediacy, privacy, trust, de-skilling, and support gaps consistently at scale. That gives Quality and Student Experience teams a clearer basis for deciding whether the right response is better AI guidance, better teaching support, or clearer assessment communication. The payoff is a more defensible evidence trail.
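For teams that want to see what "governed" can mean in practice, here is a minimal sketch of a transparent, rule-based first pass that groups open-text comments into the themes named above. It is illustrative only and is not the Student Voice Analytics pipeline: the keyword lists are assumptions that would need local validation, and any real workflow would keep a human reviewer in the loop.

```python
# Transparent, rule-based first pass: every theme assignment records the
# keyword that triggered it, so the grouping can be audited comment by comment.
# Theme names come from the discussion above; keyword lists are assumptions.
THEME_KEYWORDS = {
    "immediacy": ["straight away", "instant", "quick answer", "waiting"],
    "privacy": ["private", "judgement", "embarrass", "anonymous"],
    "trust": ["trust", "accurate", "wrong answer", "reliable"],
    "de-skilling": ["over-rely", "lazy", "stop thinking", "my own skills"],
    "support_gaps": ["no one to ask", "office hours", "email reply", "too generic"],
}

def group_comment(comment: str) -> dict:
    """Return the themes a comment matches and the keywords that fired."""
    text = comment.lower()
    hits = {
        theme: [kw for kw in keywords if kw in text]
        for theme, keywords in THEME_KEYWORDS.items()
    }
    return {theme: kws for theme, kws in hits.items() if kws}

if __name__ == "__main__":
    example = "It answers straight away and I don't feel judged for basic questions."
    print(group_comment(example))
    # -> {'immediacy': ['straight away']}
```

Because each assignment is traceable to the rule that produced it, this kind of first pass supports exactly the defensible evidence trail described above, in a way that a generic LLM classification cannot.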
Q: How should a university ask students about GenAI use in assessment if it wants useful evidence rather than a headline usage rate?
A: Use separate prompts for different kinds of use. Ask whether students use GenAI to clarify concepts, plan work, improve wording, generate feedback, or complete substantive parts of an assessment. Then add one open-text question such as "What makes GenAI feel helpful or risky on your course?". That gives teams clearer evidence on whether students are seeking convenience, confidence, privacy, or a substitute for missing support, instead of collapsing everything into one headline usage rate.
Q: What should institutions keep in mind about the methodology of this study?
A: This is an exploratory qualitative case study based on interviews with 11 students at one UK post-92 university. That means it offers depth rather than broad generalisability. It is most useful as a strong directional signal about motivation and behaviour, especially because the sample spans different disciplines and study levels. Institutions should use it alongside their own survey and comment data before making broader claims.
Q: What does this change about student voice work more broadly?
A: It suggests that AI-related student comments should not be treated only as misconduct or tech-policy data. They are also evidence about teaching access, confidence, assessment design, and where students feel able to ask for help. That makes GenAI feedback part of the wider student voice in assessment and feedback picture, and a more useful input into course and support design.
[Paper Source]: Zara Hooley, Sitira Williams and Beverley Hancock-Smith, "Impact of text-generative artificial intelligence tools on students’ approach to assessment: a case study of a UK institution", Studies in Higher Education. DOI: 10.1080/03075079.2026.2640092