Updated Feb 20, 2026
At Student Voice AI, we work with universities that want to understand student experience in detail, especially where a single score can hide radically different realities for different groups. In a recent Higher Education paper, Holtzman and colleagues examine how deaf and hard of hearing (DHH) students describe accessibility and support in their institutions. The study is a useful prompt for UK teams working on disabled student experience, because it shows what accessibility looks like from the student perspective, and why the gap between policy and practice often shows up in day-to-day communication and processes.
The context is familiar: universities may have formal obligations, adjustments processes, and specialist support, but students still encounter friction at the points where teaching, administration, and informal learning meet. Holtzman et al. ask what DHH students perceive as the main barriers to participation, and what kinds of support and institutional practices help them succeed. Although the study is based in Israel, the operational lessons travel well to the UK sector, because the same issues appear in student feedback: access to information, predictable processes, and staff capability.
One of the clearest messages is that accessibility is experienced as a system, not as a set of individual accommodations. Students described accessibility problems in lectures and wider academic life, including moments where adjustments were missing, inconsistent, or required repeated follow-up.
"The system utterly fails to provide accessibility in lectures, even in large courses."
The paper also highlights the hidden work of self-advocacy. DHH students often had to explain what they needed, remind staff, and adapt their own study strategies. That labour is easy for universities to overlook, because it can be invisible in standard reporting, yet it is central to how students experience belonging, fairness, and wellbeing.
Another theme is variability: support quality can depend on who teaches a module, how well information flows between teams, and whether assistive technologies are available and reliable. When accessibility depends on ad hoc goodwill rather than predictable design, students can experience each semester as a fresh negotiation.
At the same time, Holtzman et al. show that DHH students are not only describing barriers. They also describe agency and strategies: using technologies, developing routines, and drawing on peer support to navigate complex learning environments. For UK institutions, that is a reminder to recognise student expertise, and to treat accessibility improvements as co-designed changes, not just compliance tasks.
For UK higher education teams, three practical moves follow from this study.
First, design for accessibility by default, then audit it. Captioning and transcripts for recordings, microphone use in teaching spaces, accessible slides and documents, and clear guidance on what students can expect are all basic signals that reduce the need for individual escalation.
Second, reduce process friction. Where adjustments, timetabling changes, or support requests are handled through fragmented channels, students end up repeating themselves and chasing information. A single source of truth, named ownership, and published service standards (for response times and delivery) help turn good intent into a reliable experience.
Third, use student voice data to prioritise and verify fixes. Survey scores can tell you whether disabled students are having a worse experience, but they rarely tell you where the breakpoints are. Open-text comments, module evaluation responses, and pulse surveys can surface concrete failure modes (for example, "captions missing", "interpreter not booked", "staff did not know the process"). Analysed well, that feedback becomes an operational roadmap.
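To make that concrete, here is a minimal sketch of turning open comments into counts of failure modes. It is illustrative only: the failure-mode labels and keyword lists are hypothetical examples, and a production pipeline would use proper text analysis rather than keyword matching.

```python
# Minimal sketch: tag open-text comments with hypothetical accessibility
# failure modes so they can be counted and triaged. Keyword matching is a
# stand-in for richer analysis; labels and keywords are illustrative.
from collections import Counter

FAILURE_MODES = {
    "captions_missing": ["caption", "subtitles", "transcript"],
    "interpreter_not_booked": ["interpreter", "bsl"],
    "process_unclear": ["who to contact", "no response", "did not know the process"],
}

def tag_comment(comment: str) -> list[str]:
    """Return every failure mode whose keywords appear in the comment."""
    text = comment.lower()
    return [mode for mode, keywords in FAILURE_MODES.items()
            if any(k in text for k in keywords)]

def failure_mode_counts(comments: list[str]) -> Counter:
    """Count mentions of each failure mode across all comments."""
    counts = Counter()
    for c in comments:
        counts.update(tag_comment(c))
    return counts

comments = [
    "The lecture recording had no captions again this week.",
    "Interpreter was not booked for the seminar.",
    "Captions were auto-generated and full of errors; no transcript either.",
]
print(failure_mode_counts(comments))
```

Even this crude version shows the shape of the output an accessibility team needs: not an average score, but a ranked list of specific, fixable breakpoints.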
Q: How can universities use student feedback to improve accessibility for DHH students?
A: Start by ensuring DHH students have accessible ways to give feedback (not just end-of-year surveys), then combine structured questions with open-text prompts about teaching delivery, communication, and support processes. Use the comments to identify repeatable issues you can fix institution-wide, such as recording quality, captions, or unclear points of contact, then close the loop by telling students what changed.
Q: What should we be cautious about when applying a qualitative case study to our own institution?
A: A small qualitative study does not provide prevalence rates, and local context matters. Treat the findings as a high-quality map of plausible barriers and mechanisms, then test them against your own evidence, such as student comments, support service data, and disability gap metrics. The goal is to translate insight into hypotheses you can validate and act on.
Q: What does this imply for NSS and internal survey analysis in the UK?
A: It reinforces the need to segment and interpret results carefully, because averages can hide accessibility breakdowns that affect specific groups. It also underlines why free-text matters: for accessibility, the actionable detail is often in the narrative. When you can systematically analyse open comments, you can pinpoint the specific touchpoints that are failing and track whether interventions are improving lived experience.
[Paper Source]: Holtzman, D., Michael, R., Alwakil, S., Karkabi, M., & Hamra, A., "Accessibility and support in higher education: a case study of deaf and hard of hearing students' perceptions", Higher Education. DOI: 10.1007/s10734-025-01586-x