Are law students confident in marking criteria and assessment practices?

Updated Mar 29, 2026

marking criteria · law

Law students are not confident in marking criteria when expectations are vague or applied unevenly. Across the National Student Survey (NSS) 2018–2025, Marking criteria open-text comments, analysed using our NSS open-text analysis methodology, total 13,329 with a sentiment index of −44.6. In Law, the equivalent index is −47.2, suggesting that frustration with assessment clarity is entrenched rather than occasional. Within the Law open-text analysis for 2018–2025, assessment topics dominate the undergraduate comment themes and categories: Marking criteria alone accounts for 4.5% of comments, with an index of −46.7. The practical message is clear: students want rubrics they can decode, marking they can trust, and feedback they can use before the next submission.
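For readers who want to see how a figure like this could be produced, the sketch below computes a simple sentiment index for topic-tagged comments, taking the index as (positive share − negative share) × 100 on a −100 to +100 scale. The comments, topic labels, and index convention are illustrative assumptions rather than the published Student Voice Analytics methodology.

```python
from collections import Counter

def sentiment_index(sentiments):
    """Illustrative index: (positive share - negative share) x 100, range -100..+100.
    This convention is an assumption for the example, not the published methodology."""
    counts = Counter(sentiments)
    total = sum(counts.values())
    if total == 0:
        return 0.0
    return 100 * (counts["positive"] - counts["negative"]) / total

# Hypothetical topic-tagged comments; sentiment labels assigned upstream
comments = [
    {"topic": "Marking criteria", "sentiment": "negative"},
    {"topic": "Marking criteria", "sentiment": "negative"},
    {"topic": "Marking criteria", "sentiment": "positive"},
    {"topic": "Feedback", "sentiment": "neutral"},
]

by_topic = {}
for c in comments:
    by_topic.setdefault(c["topic"], []).append(c["sentiment"])

for topic, sentiments in by_topic.items():
    share = 100 * len(sentiments) / len(comments)
    print(f"{topic}: {share:.1f}% of comments, index {sentiment_index(sentiments):+.1f}")
```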

Understanding how work is assessed shapes progression, confidence, and professional readiness. When criteria are communicated clearly and applied consistently, students are better able to improve and more likely to trust the process. By analysing student surveys, text data, and the student voice, teams can target support where fairness and clarity feel most fragile and create an environment where expectations feel predictable and achievable.

Where do law students encounter inconsistency in marking?

Students report variability between markers and modules, which undermines confidence in both grades and the process behind them. Different tutors may interpret the same criteria differently, leaving students unsure what strong work actually looks like. Marker calibration, cross-module alignment where learning outcomes overlap, and transparent “what we agreed” notes reduce this variance, so students spend less time second-guessing and more time improving their work.

How can criteria be made easier to understand?

Overly complex or inaccessible criteria hinder performance because students cannot translate them into practical decisions. Students benefit when staff release criteria with the assessment brief, walk through them in class, and provide exemplars across grade bands. Checklist-style rubrics with weightings and common error notes help students plan earlier, self-evaluate more accurately, and submit with more confidence.
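To make the idea of weightings concrete, here is a minimal sketch of how a checklist-style rubric with weighted criteria could convert per-criterion scores into an overall mark. The criteria, weights, and scores are hypothetical; actual law rubrics and grade-band descriptors will differ.

```python
# Hypothetical weighted rubric: criterion -> (weight, score out of 100)
rubric = {
    "Issue identification": (0.20, 72),
    "Application of authority": (0.35, 65),
    "Structure and argument": (0.25, 58),
    "Citation and style": (0.20, 80),
}

# Sanity check: weights should sum to 1 so the overall mark stays on the same scale
assert abs(sum(w for w, _ in rubric.values()) - 1.0) < 1e-9, "weights should sum to 1"

overall = sum(weight * score for weight, score in rubric.values())
for criterion, (weight, score) in rubric.items():
    print(f"{criterion}: {score} (weight {weight:.0%})")
print(f"Overall weighted mark: {overall:.1f}")
```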

What is driving perceptions of subjectivity in grading?

Subjectivity often stems from vague descriptors and inconsistent emphasis across markers. When criteria do not show what separates one grade band from another, students assume personal preference is filling the gap. Programmes that use detailed rubrics, run calibration on shared samples, and require markers to reference rubric lines in feedback reduce room for discretion and make judgements easier to understand.

Do students perceive grading as overly harsh?

Yes, some cohorts describe grading as discouraging rather than developmental. When grade distributions or feedback tone feel punitive, motivation drops and students disengage from the improvement process. Programmes that align challenge with attainable standards, explain grade rationales clearly, and provide feed-forward opportunities protect rigour while keeping students engaged.

Why does feedback often fail to support improvement?

Students frequently describe feedback as too brief, generic, or late to inform the next submission. Prioritise timely, specific comments that reference the marking criteria and state what to change next time. A short “how your work was judged” summary tied to the rubric lines can turn feedback into practical guidance rather than post-hoc justification.

How does variability in tutor practice affect assessment?

Inconsistent tutor approaches create confusion and perceived unfairness, especially when students compare experiences across modules. Regular staff development, shared marking samples, and periodic moderation conversations help align expectations before gaps widen. Text analysis of student comments can surface hotspots where criteria are being interpreted variably, allowing targeted coaching and faster course correction.
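As an illustration of how such hotspot analysis might work, the sketch below averages upstream sentiment scores by module and flags modules whose assessment comments skew strongly negative. The module names, scores, and threshold are assumptions made for the example, not outputs from any real dataset.

```python
from statistics import mean

# Hypothetical assessment-related comments, each scored upstream in [-1, 1]
comments = [
    {"module": "LAW101 Contract", "score": -0.6},
    {"module": "LAW101 Contract", "score": -0.4},
    {"module": "LAW205 Tort", "score": 0.2},
    {"module": "LAW205 Tort", "score": -0.1},
    {"module": "LAW310 Equity", "score": -0.8},
    {"module": "LAW310 Equity", "score": -0.7},
]

THRESHOLD = -0.5  # illustrative cut-off for flagging a "hotspot"

by_module = {}
for c in comments:
    by_module.setdefault(c["module"], []).append(c["score"])

for module, scores in sorted(by_module.items()):
    avg = mean(scores)
    flag = "HOTSPOT" if avg <= THRESHOLD else "ok"
    print(f"{module}: mean sentiment {avg:+.2f} over {len(scores)} comments [{flag}]")
```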

What would transparency in assessment look like?

Transparency means students understand expectations, how work is judged, and what high performance looks like before they submit. Publish full rubrics and weightings, explain any differences between modules up front, and invite questions through short Q&A sessions. Involving students in assessment design conversations increases ownership and reduces the sense that grades arrive as surprises.

What actions enhance assessment practices now?

Start with changes that make expectations easier to read and easier to apply consistently:

  • Standardise and simplify criteria where learning outcomes overlap, while explaining intentional differences.
  • Publish annotated exemplars and require markers to reference rubric lines in feedback.
  • Calibrate markers using shared samples and close the loop with students on what was agreed.
  • Offer short feed-forward clinics before submission windows for high-volume modules.
  • Track and respond to recurring queries about criteria via a visible FAQ on the VLE.

These steps improve fairness, reduce avoidable confusion, and make the assessment process easier for students to trust, while strengthening law assessment methods more broadly.

How Student Voice Analytics helps you

  • Track sentiment on Marking criteria over time by cohort, mode, and site, with drill-downs from provider to school, department, and programme.
  • Compare like-for-like across Law and other CAH areas, and by demographics, so you can target where tone is most negative first.
  • Export concise, anonymised summaries for programme teams and boards, including representative comments and year-on-year movement.
  • Evidence impact by monitoring changes in assessment-related topics and sentiment after interventions.

If you want to see where law students question fairness in assessment, explore Student Voice Analytics and benchmark those concerns against the wider sector.

Request a walkthrough

Book a free Student Voice Analytics demo

See all-comment coverage, sector benchmarks, and reporting designed for OfS quality and NSS requirements.

  • All-comment coverage with HE-tuned taxonomy and sentiment.
  • Versioned outputs with TEF-ready reporting.
  • Benchmarks and BI-ready exports for boards and Senate.

Prefer email? info@studentvoice.ai

UK-hosted · No public LLM APIs · Same-day turnaround


The Student Voice Weekly

Research, regulation, and insight on student voice. Every Friday.

© Student Voice Systems Limited, All rights reserved.