No. Student comments point to low confidence in how criteria are expressed and applied. Across Marking criteria open-text comments in the National Student Survey (NSS) 2018–2025 there are around 13,329 comments with a sentiment index of −44.6; Law sits at −47.2 within that dataset, so frustration is widespread across the sector and slightly sharper in Law. In Law open-text analysis for 2018–2025, assessment topics dominate discussion: Marking criteria accounts for 4.5% of comments with an index of −46.7. These patterns explain why students prioritise unambiguous rubrics, exemplars, consistent marking and feedback they can act on.
Understanding how work is assessed shapes progression and professional readiness. Criteria must be communicated and applied consistently to foster fairness and effectiveness. By analysing student surveys, text data and the student voice, staff can target support throughout the learning process and create an inclusive learning environment where expectations feel predictable and achievable.
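As a rough illustration of the kind of analysis described above, the sketch below computes a topic-level sentiment index as the share of positive comments minus the share of negative comments, split by subject. The file name, column names and the index definition are assumptions for illustration; the article does not specify how the NSS comments were coded or how the published figures were calculated.

```python
# Minimal sketch of a topic-level sentiment index over open-text comments,
# assuming a hypothetical comments.csv with columns:
# subject, topic, sentiment ("positive" / "negative" / "neutral").
# The index here is (% positive - % negative); this definition is an assumption.
import pandas as pd

comments = pd.read_csv("comments.csv")  # hypothetical extract of open-text comments

def sentiment_index(df: pd.DataFrame) -> float:
    """Share of positive comments minus share of negative comments, as a percentage."""
    pos = (df["sentiment"] == "positive").mean()
    neg = (df["sentiment"] == "negative").mean()
    return round(100 * (pos - neg), 1)

# Sector-wide index for the Marking criteria topic, then the same figure for Law only.
marking = comments[comments["topic"] == "Marking criteria"]
print("Sector:", sentiment_index(marking))
print("Law:", sentiment_index(marking[marking["subject"] == "Law"]))

# Topic share within Law: what proportion of Law comments mention each topic.
law = comments[comments["subject"] == "Law"]
print(law["topic"].value_counts(normalize=True).mul(100).round(1))
```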
Where do law students encounter inconsistency in marking?
Students report variability between markers and modules, which undermines confidence and the reliability of grades. Different tutors may interpret the same criteria differently, leaving students unsure how to excel. Marker calibration, cross-module alignment where learning outcomes overlap, and transparent “what we agreed” notes reduce this variance and build trust.
How can criteria be made easier to understand?
Overly complex or inaccessible criteria hinder performance. Students benefit when staff release criteria with the assessment brief, walk through them in class, and provide exemplars at grade bands. Checklist-style rubrics with weightings and common error notes help students plan work and self-evaluate against expectations.
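One way to make a checklist-style rubric with weightings concrete is to express it as a simple data structure that students can score themselves against before submitting. The criteria, weights, error notes and scale below are illustrative assumptions, not a rubric taken from this article.

```python
# Minimal sketch of a weighted, checklist-style rubric for student self-evaluation.
# Criteria, weights and common-error notes are hypothetical examples.
RUBRIC = [
    # (criterion, weight %, common error to avoid)
    ("Issue identification and structure", 25, "Listing issues without prioritising them"),
    ("Application of law to facts",        35, "Describing the law without applying it"),
    ("Use of authority and citation",      20, "Relying on secondary sources over cases/statute"),
    ("Clarity and academic style",         20, "Over-long sentences and unexplained jargon"),
]

def self_evaluate(scores: dict[str, int]) -> float:
    """Weighted self-assessment, with each criterion scored out of 100."""
    assert sum(weight for _, weight, _ in RUBRIC) == 100, "weights should sum to 100"
    return sum(scores[name] * weight / 100 for name, weight, _ in RUBRIC)

# Example: a student rates their own draft against each criterion out of 100.
draft = {
    "Issue identification and structure": 70,
    "Application of law to facts": 55,
    "Use of authority and citation": 60,
    "Clarity and academic style": 75,
}
print(f"Estimated weighted mark: {self_evaluate(draft):.1f}")
```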
What is driving perceptions of subjectivity in grading?
Subjectivity often stems from vague descriptors and inconsistent emphasis across markers. Programmes that use detailed rubrics, run calibration on shared samples, and require markers to reference rubric lines in feedback reduce scope for personal discretion and make judgements feel anchored to standards.
Do students perceive grading as overly harsh?
Yes, some cohorts describe grading as discouraging rather than developmental. Where grade distributions or feedback tone feel punitive, motivation drops and students disengage. Programmes that align challenge with attainable standards, explain grade rationales, and provide feed-forward opportunities see better engagement and reduced anxiety.
Why does feedback often fail to support improvement?
Students frequently describe feedback as too brief, generic or late to inform the next submission. Prioritise timely, specific comments that reference the marking criteria and indicate what to change next time. A short “how your work was judged” summary tied to the rubric lines can turn feedback into guidance rather than post-hoc justification.
How does variability in tutor practice affect assessment?
Inconsistent tutor approaches create confusion and perceived unfairness. Regular staff development, shared marking samples, and periodic moderation conversations help align expectations. Text analysis of student comments can surface hotspots where criteria are interpreted variably, allowing targeted coaching.
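A rough sketch of how such hotspots might be surfaced is shown below: group marking-criteria comments by module and flag modules whose sentiment index sits well below the programme-wide figure. The file name, column names and thresholds are assumptions for illustration, not the method used in this article.

```python
# Minimal sketch of surfacing "hotspots" where marking-criteria comments are
# unusually negative, assuming a hypothetical feedback.csv with columns:
# module, topic, sentiment ("positive" / "negative" / "neutral").
import pandas as pd

df = pd.read_csv("feedback.csv")
criteria = df[df["topic"] == "Marking criteria"]

# Per-module comment count and sentiment index (% positive minus % negative).
per_module = criteria.groupby("module")["sentiment"].agg(
    n="size",
    index=lambda s: 100 * ((s == "positive").mean() - (s == "negative").mean()),
)

# Flag modules with enough comments to be meaningful and an index well below
# the programme-wide figure; these are candidates for targeted calibration.
overall = 100 * ((criteria["sentiment"] == "positive").mean()
                 - (criteria["sentiment"] == "negative").mean())
hotspots = per_module[(per_module["n"] >= 20) & (per_module["index"] < overall - 10)]
print(hotspots.sort_values("index"))
```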
What would transparency in assessment look like?
Transparency means students understand expectations, how work is judged, and what high performance looks like. Publish full rubrics and weightings, explain any differences between modules up front, and invite questions via short Q&A sessions. Involving students in assessment design conversations fosters ownership and improves comprehension of standards.
What actions enhance assessment practices now?
Drawing the threads above together: release criteria and graded exemplars with the assessment brief, calibrate markers on shared samples and moderate across modules, reference rubric lines in feedback, and return timely, specific comments students can act on before their next submission.
How Student Voice Analytics helps you
See all-comment coverage, sector benchmarks, and governance packs designed for OfS quality and NSS requirements, or request a walkthrough.