Students disclose AI use when governance feels fair and trustworthy

Updated Apr 23, 2026

Students do not disclose AI use just because a policy tells them to. They disclose when the rules feel fair, consistent, and safe. That is the central lesson of Wanqing Xia and Wei Wei's Innovations in Education and Teaching International paper, "Beyond the syllabus: How the fidelity of GenAI governance implementation shapes student trust and transparency behaviours". For universities collecting AI pilot feedback, module evaluation comments, and wider student voice evidence, the paper moves the debate beyond whether students know the rules: it asks whether students trust the way those rules are enacted, a question that now sits alongside UK evidence on student experiences of GenAI across universities.

Context and research question

Universities have moved quickly to write policies on permitted GenAI use, assessment integrity and academic conduct, and disclosure. The risk is treating policy clarity as policy credibility. A rule can sit in the handbook and still feel inconsistent, punitive, or unsafe when students decide whether to disclose AI assistance in real coursework.

Xia and Wei study that gap through a four-wave longitudinal design involving 739 students across 25 courses. The paper examines how course-level GenAI governance shapes students' transparency behaviours through procedural justice and trust. Using multilevel regression, the authors distinguish policy procedural clarity from implementation fidelity, then test how procedural justice, trust, disclosure, and concealment relate over time. That makes the study useful for UK Student Experience, Quality, and Market Insights teams because it treats AI policy as something students experience through repeated local encounters, not as a static document.
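To make the design concrete, here is a minimal sketch of how a multilevel model of this shape could be specified in Python with statsmodels. The file name and column names (policy_clarity, implementation_fidelity, procedural_justice, wave, course_id) are illustrative assumptions, not the paper's variables or its exact model.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per student per survey wave.
# All column names here are illustrative, not taken from the paper.
df = pd.read_csv("governance_waves.csv")

# Random intercepts for courses capture the multilevel structure:
# students are nested within courses, so course-level governance
# predictors and student-level outcomes sit at different levels.
model = smf.mixedlm(
    "procedural_justice ~ policy_clarity + implementation_fidelity + wave",
    data=df,
    groups=df["course_id"],
)
result = model.fit()
print(result.summary())
```

The point of the nesting is that a course-level predictor such as implementation fidelity can be compared fairly against policy clarity while accounting for students sharing the same course context.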

Key findings

The central finding is that implementation fidelity mattered more consistently than policy procedural clarity. In practical terms, students seemed to respond less to the existence of a written process than to whether governance was enacted consistently and transparently in their course. For universities, the takeaway is blunt: a well-written central policy is not enough. If module-level explanations, staff responses, and assessment workflows send mixed signals, trust can erode anyway.

The abstract captures the point neatly:

"students' disclosure or concealment of AI assistance may depend less on written rules than on how governance is enacted."

Procedural justice predicted trust, and trust was the main pathway to transparency: trust in turn was associated with more disclosure and less concealment. Once trust was included, the direct links from procedural justice to disclosure and concealment were no longer significant. For institutional teams, the message is practical rather than abstract. Fairness matters because it shapes whether students believe disclosure will be handled sensibly.
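As one way to picture that mediation pattern, a simple Baron-Kenny-style check compares the procedural justice coefficient with and without trust in the model. This is an illustrative sketch with the same assumed column names as above, not the authors' analysis.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("governance_waves.csv")  # same hypothetical data as above

# Total association, without the mediator.
direct = smf.ols("disclosure ~ procedural_justice", data=df).fit()

# Adding trust: in the pattern the paper reports, the direct
# procedural_justice coefficient falls to non-significance once
# trust carries the pathway to disclosure.
mediated = smf.ols("disclosure ~ procedural_justice + trust", data=df).fit()

for name, fit in [("without trust", direct), ("with trust", mediated)]:
    print(f"{name}: coef={fit.params['procedural_justice']:.3f}, "
          f"p={fit.pvalues['procedural_justice']:.3f}")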

The findings also challenge a purely compliance-led model of AI governance. If universities focus only on detecting misuse or instructing students to declare AI assistance, they may miss the condition that makes candour more likely: trust. That aligns with recent evidence that students' AI use is shaped by hope, worry, guilt, and institutional climate, not only by access to tools or knowledge of rules. Students can know what a policy says and still hide use if disclosure feels risky.

Direct procedural contact may help, but it was not the whole story. The paper reports a directional pattern suggesting that direct procedural contact may strengthen the justice-trust link, although the interaction was not conventionally significant. More importantly, implementation fidelity predicted procedural justice among both contacted and non-contacted students. The useful takeaway is that governance signals travel beyond formal process touchpoints. Course culture, peer stories, staff messaging, and visible consistency can all shape whether governance feels fair.

The practical implication is that AI transparency is relational. Students are not simply making an individual moral choice in a vacuum. They are reading cues from staff, policy, assessment design, and peer experience. If those cues imply fairness, consistency, and proportionate response, disclosure becomes more plausible. If they imply suspicion or inconsistency, concealment becomes more understandable.

Practical implications

For UK universities, the first implication is to audit enactment, not only policy wording. Review whether AI rules are explained consistently across modules, whether disclosure routes are easy to find, and whether staff responses align with institutional guidance. That gives institutions a governance system students can recognise in practice, not only a policy they are expected to remember.

Second, institutions should collect student voice on procedural justice and trust directly. A generic question such as "Do you understand the AI policy?" is too narrow. Teams should also ask whether rules feel fair, whether disclosure feels safe, whether staff explanations are consistent, and what would make students more willing to be transparent. This connects directly to the UK sector signal that GenAI student voice needs structure, not staff-only guidance. The payoff is feedback that points to policy implementation gaps rather than a vague measure of awareness.

Third, universities should analyse AI-related open comments as governance evidence. Comments about fear of false accusation from AI detectors, inconsistent module rules, disclosure anxiety, unfair access, and unclear permitted use should not be treated as general AI noise. They show where trust is being built or lost. Student Voice Analytics fits naturally here because it can group those themes across module evaluations, pulse surveys, and AI consultations, giving quality teams a clearer view of whether policy is landing differently by school, course, or student group.
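As a toy illustration of that theme grouping, a keyword tagger might look like the sketch below. The theme names and trigger phrases are invented for the example; a production taxonomy, such as the HE-tuned one Student Voice Analytics uses, would be far richer.

```python
from collections import Counter

# Illustrative governance themes and trigger phrases; invented for the example.
THEMES = {
    "false_accusation_fear": ["falsely accused", "ai detector", "flagged my work"],
    "inconsistent_rules": ["different rules", "depends on the module", "each lecturer"],
    "disclosure_anxiety": ["scared to declare", "afraid to disclose", "penalised for admitting"],
}

def tag_comment(comment: str) -> list[str]:
    """Return every theme whose trigger phrases appear in the comment."""
    text = comment.lower()
    return [theme for theme, phrases in THEMES.items()
            if any(phrase in text for phrase in phrases)]

def theme_counts(comments: list[str]) -> Counter:
    """Aggregate theme frequencies across a batch of open comments."""
    counts: Counter = Counter()
    for comment in comments:
        counts.update(tag_comment(comment))
    return counts
```

Aggregating those counts by school, course, or student group is what turns scattered complaints into evidence of where governance is landing inconsistently.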

Finally, institutions need to document the analysis process itself. AI governance comments may include sensitive information about assessment behaviour, staff practice, and student anxiety, so teams need privacy controls, small-cohort rules, and a repeatable open-text analysis method before results are shared. The student comment analysis governance checklist is a practical starting point. That creates a clearer route from student comments to policy improvement without creating avoidable data or trust risks.
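For instance, a small-cohort rule can be encoded as a simple suppression step applied before any theme counts are shared. The threshold of 10 here is an assumed placeholder, not sector guidance; institutions should set it in line with their own disclosure-control policy.

```python
MIN_COHORT = 10  # assumed placeholder threshold; set per institutional policy

def suppress_small_cohorts(
    counts_by_group: dict[str, dict[str, int]],
    group_sizes: dict[str, int],
) -> dict[str, dict[str, int]]:
    """Drop theme breakdowns for any group below the minimum cohort size,
    so sensitive AI-governance comments cannot be traced back to individuals."""
    return {
        group: counts
        for group, counts in counts_by_group.items()
        if group_sizes.get(group, 0) >= MIN_COHORT
    }
```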

If your institution is revising AI guidance this year, do not stop at asking whether the rules are clear. Ask whether students believe disclosure will be treated fairly, because that is the condition this paper suggests universities cannot afford to assume.

FAQ

Q: How should a university apply this paper when reviewing its GenAI policy?

A: Start by testing whether the policy is experienced consistently at course level. Ask students whether rules are clear, whether disclosure feels safe, and whether staff explanations match the written guidance. Then review open comments for recurring trust, fairness, and concealment concerns before changing the policy language alone.

Q: What should institutions keep in mind about the methodology?

A: This is a four-wave longitudinal study of 739 students in 25 courses, analysed with multilevel regression. That gives the paper useful strength because it looks across time and course contexts, but the findings should still be tested locally. UK teams should treat the study as a strong framework for designing local feedback questions and interpreting AI governance comments, not as a direct benchmark for every institution.

Q: What does this change about student voice on AI?

A: It shifts the focus from awareness to trust. Student voice on AI should not only ask whether students use GenAI or know the rules. It should ask whether governance feels fair enough for students to be honest. That makes open-text feedback especially valuable, because comments reveal whether students see AI policy as guidance, protection, surveillance, or risk.

References

[Paper Source]: Xia, W., & Wei, W. "Beyond the syllabus: How the fidelity of GenAI governance implementation shapes student trust and transparency behaviours". Innovations in Education and Teaching International. DOI: 10.1080/14703297.2026.2655907

Request a walkthrough

Book a free Student Voice Analytics demo

See all-comment coverage, sector benchmarks, and reporting designed for OfS quality and NSS requirements.

  • All-comment coverage with HE-tuned taxonomy and sentiment.
  • Versioned outputs with TEF-ready reporting.
  • Benchmarks and BI-ready exports for boards and Senate.
Prefer email? info@studentvoice.ai

UK-hosted · No public LLM APIs · Same-day turnaround
