Updated Mar 29, 2026
Law students lack confidence in marking criteria when expectations are vague or applied unevenly. Across open-text comments on marking criteria in the National Student Survey (NSS) 2018–2025, analysed using our NSS open-text analysis methodology, there are 13,329 comments with a sentiment index of −44.6; in Law, the equivalent figure is −47.2, suggesting that frustration with assessment clarity is entrenched rather than occasional. In Law open-text analysis for 2018–2025, assessment topics dominate the undergraduate comment themes and categories: marking criteria accounts for 4.5% of comments, with an index of −46.7. The practical message is clear: students want rubrics they can decode, marking they can trust, and feedback they can use before the next submission.
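The sentiment index figures above sit on a −100 to +100 scale. As an illustration only (the linked NSS open-text methodology defines the actual calculation; the per-comment polarity values below are hypothetical), one plausible reading is the mean comment polarity scaled to that range:

```python
# Illustrative only: a sentiment index as mean comment polarity scaled
# to -100..+100. Polarity values are hypothetical; the real calculation
# is defined by the NSS open-text analysis methodology referenced above.

def sentiment_index(polarities):
    """Mean per-comment polarity (-1..+1) scaled to a -100..+100 index."""
    if not polarities:
        raise ValueError("no comments to score")
    return 100 * sum(polarities) / len(polarities)

# Hypothetical cohort: mostly negative comments yield a negative index.
comments = [-1, -1, 0, -1, 1, -1, -1, 0, -1, -1]
print(round(sentiment_index(comments), 1))  # -> -60.0
```

On this reading, an index of −47.2 simply means negative comments heavily outweigh positive ones, which matches the tone of the themes discussed below.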
Understanding how work is assessed shapes progression, confidence, and professional readiness. When criteria are communicated clearly and applied consistently, students are better able to improve and more likely to trust the process. By analysing student surveys, text data, and the student voice, teams can target support where fairness and clarity feel most fragile and create an environment where expectations feel predictable and achievable.
Where do law students encounter inconsistency in marking?
Students report variability between markers and modules, which undermines confidence in both grades and the process behind them. Different tutors may interpret the same criteria differently, leaving students unsure what strong work actually looks like. Marker calibration, cross-module alignment where learning outcomes overlap, and transparent “what we agreed” notes reduce this variance, so students spend less time second-guessing and more time improving their work.
How can criteria be made easier to understand?
Overly complex or inaccessible criteria hinder performance because students cannot translate them into practical decisions. Students benefit when staff release criteria with the assessment brief, walk through them in class, and provide exemplars across grade bands. Checklist-style rubrics with weightings and common error notes help students plan earlier, self-evaluate more accurately, and submit with more confidence.
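A checklist-style rubric with weightings can be made concrete as a simple weighted score. The criteria names, weights, and marks below are hypothetical, a sketch of how a student might self-evaluate against published weightings before submitting:

```python
# Hypothetical rubric: criterion -> (weight, self-assessed mark out of 100).
# Criteria names and weights are illustrative, not a real Law rubric.
rubric = {
    "legal analysis":        (0.40, 62),
    "use of authority":      (0.25, 70),
    "structure and clarity": (0.20, 55),
    "referencing":           (0.15, 80),
}

def weighted_mark(rubric):
    """Combine per-criterion marks using the published weightings."""
    total_weight = sum(w for w, _ in rubric.values())
    assert abs(total_weight - 1.0) < 1e-9, "weights must sum to 1.0"
    return sum(w * s for w, s in rubric.values())

print(round(weighted_mark(rubric), 1))  # -> 65.3
```

Publishing the weights is what makes this kind of self-check possible: students can see that strengthening the heavily weighted criterion moves the mark most.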
What is driving perceptions of subjectivity in grading?
Subjectivity often stems from vague descriptors and inconsistent emphasis across markers. When criteria do not show what separates one grade band from another, students assume personal preference is filling the gap. Programmes that use detailed rubrics, run calibration on shared samples, and require markers to reference rubric lines in feedback reduce room for discretion and make judgements easier to understand.
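Calibration on shared samples can also be checked numerically. In the hypothetical sketch below, each shared script is double-marked and any script where the two markers disagree by more than a tolerance is flagged for a moderation conversation:

```python
# Hypothetical double-marking data: script id -> marks from two markers.
shared_samples = {
    "script-01": (62, 65),
    "script-02": (58, 71),  # large gap: criteria likely read differently
    "script-03": (70, 68),
}

def flag_for_moderation(samples, tolerance=5):
    """Return script ids where the marker gap exceeds the tolerance."""
    return [sid for sid, (a, b) in samples.items() if abs(a - b) > tolerance]

print(flag_for_moderation(shared_samples))  # -> ['script-02']
```

The tolerance of 5 marks is an assumption for illustration; the useful point is that disagreement becomes visible and discussable rather than silently averaged away.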
Do students perceive grading as overly harsh?
Yes, some cohorts describe grading as discouraging rather than developmental. When grade distributions or feedback tone feel punitive, motivation drops and students disengage from the improvement process. Programmes that align challenge with attainable standards, explain grade rationales clearly, and provide feed-forward opportunities protect rigour while keeping students engaged.
Why does feedback often fail to support improvement?
Students frequently describe feedback as too brief, generic, or late to inform the next submission. Prioritise timely, specific comments that reference the marking criteria and state what to change next time. A short “how your work was judged” summary tied to the rubric lines can turn feedback into practical guidance rather than post-hoc justification.
How does variability in tutor practice affect assessment?
Inconsistent tutor approaches create confusion and perceived unfairness, especially when students compare experiences across modules. Regular staff development, shared marking samples, and periodic moderation conversations help align expectations before gaps widen. Text analysis of student comments can surface hotspots where criteria are being interpreted variably, allowing targeted coaching and faster course correction.
What would transparency in assessment look like?
Transparency means students understand expectations, how work is judged, and what high performance looks like before they submit. Publish full rubrics and weightings, explain any differences between modules up front, and invite questions through short Q&A sessions. Involving students in assessment design conversations increases ownership and reduces the sense that grades arrive as surprises.
What actions enhance assessment practices now?
Start with changes that make expectations easier to read and easier to apply consistently: release criteria with the assessment brief alongside exemplars across grade bands; calibrate markers on shared samples before marking begins; require feedback to reference rubric lines; and return comments in time to inform the next submission.
These steps improve fairness, reduce avoidable confusion, and make the assessment process easier for students to trust, while strengthening law assessment methods more broadly.
How Student Voice Analytics helps you
If you want to see where law students question fairness in assessment, explore Student Voice Analytics and benchmark those concerns against the wider sector.