Are law students confident in marking criteria and assessment practices?

By Student Voice Analytics
marking criteria | law

No. Student comments point to low confidence in how criteria are expressed and applied. Across Marking criteria open-text responses in the National Student Survey (NSS) 2018–2025 there are ~13,329 comments with a sentiment index of −44.6; Law sits at −47.2, slightly more negative than an already critical sector-wide picture. In Law open-text analysis for 2018–2025, assessment topics dominate discussion: the Marking criteria topic accounts for 4.5% of comments, with a sentiment index of −46.7. These patterns explain why students prioritise unambiguous rubrics, exemplars, consistent marking and feedback they can act on.
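
The sentiment index is not defined in the figures above. One common convention for coded open-text comments, used here purely as an illustration, scores each comment as positive, neutral or negative and reports (positive − negative) as a percentage of all comments; the sketch below (Python, with made-up labels) shows how such an index and a topic's share of comments could be computed under that assumption.

```python
from collections import Counter

def sentiment_index(labels):
    """Illustrative index: (% positive - % negative) across coded comments.

    `labels` is a list of strings: "positive", "neutral" or "negative".
    This formula is an assumption for illustration; the published index
    may be defined differently.
    """
    counts = Counter(labels)
    total = sum(counts.values())
    if total == 0:
        return 0.0
    return 100.0 * (counts["positive"] - counts["negative"]) / total

def topic_share(topic_labels, topic):
    """Share of all comments coded to a given topic, as a percentage."""
    total = len(topic_labels)
    return 100.0 * sum(1 for t in topic_labels if t == topic) / total if total else 0.0

# Hypothetical coded comments for a single programme.
labels = ["negative", "negative", "neutral", "positive", "negative"]
topics = ["Marking criteria", "Feedback", "Marking criteria", "Teaching", "Marking criteria"]

print(round(sentiment_index(labels), 1))                  # -40.0
print(round(topic_share(topics, "Marking criteria"), 1))  # 60.0
```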

Understanding how work is assessed shapes progression and professional readiness. Criteria need to be communicated clearly and applied consistently if assessment is to feel fair and effective. By analysing student surveys and open-text comments, staff can target support throughout the learning process and create an inclusive environment where expectations feel predictable and achievable.

Where do law students encounter inconsistency in marking?

Students report variability between markers and modules, which undermines confidence and the reliability of grades. Different tutors may interpret the same criteria differently, leaving students unsure how to excel. Marker calibration, cross-module alignment where learning outcomes overlap, and transparent “what we agreed” notes reduce this variance and build trust.

How can criteria be made easier to understand?

Overly complex or inaccessible criteria hinder performance. Students benefit when staff release criteria with the assessment brief, walk through them in class, and provide exemplars at grade bands. Checklist-style rubrics with weightings and common error notes help students plan work and self-evaluate against expectations.

What is driving perceptions of subjectivity in grading?

Subjectivity often stems from vague descriptors and inconsistent emphasis across markers. Programmes that use detailed rubrics, run calibration on shared samples, and require markers to reference rubric lines in feedback reduce scope for personal discretion and make judgements feel anchored to standards.

Do students perceive grading as overly harsh?

In some cohorts, yes: students describe grading as discouraging rather than developmental. Where grade distributions or feedback tone feel punitive, motivation drops and students disengage. Programmes that align challenge with attainable standards, explain grade rationales, and provide feed-forward opportunities see better engagement and reduced anxiety.

Why does feedback often fail to support improvement?

Students frequently describe feedback as too brief, generic or late to inform the next submission. Prioritise timely, specific comments that reference the marking criteria and indicate what to change next time. A short “how your work was judged” summary tied to the rubric lines can turn feedback into guidance rather than post-hoc justification.

How does variability in tutor practice affect assessment?

Inconsistent tutor approaches create confusion and perceived unfairness. Regular staff development, shared marking samples, and periodic moderation conversations help align expectations. Text analysis of student comments can surface hotspots where criteria are interpreted variably, allowing targeted coaching.
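
As a rough illustration of that last point (the module codes, scores and threshold below are hypothetical), a hotspot report can be as simple as grouping coded marking-criteria comments by module and flagging modules whose sentiment sits well below the programme average.

```python
from collections import defaultdict

# Hypothetical coded comments: (module, sentiment) pairs, where sentiment is
# +1 (positive), 0 (neutral) or -1 (negative) for marking-criteria comments.
comments = [
    ("LAW101", -1), ("LAW101", -1), ("LAW101", 0),
    ("LAW205", 1), ("LAW205", 0), ("LAW205", -1),
    ("LAW310", -1), ("LAW310", -1), ("LAW310", -1), ("LAW310", 0),
]

def module_indices(rows):
    """Mean sentiment per module, scaled to a -100..100 index."""
    sums, counts = defaultdict(int), defaultdict(int)
    for module, score in rows:
        sums[module] += score
        counts[module] += 1
    return {m: 100.0 * sums[m] / counts[m] for m in counts}

indices = module_indices(comments)
programme_average = sum(indices.values()) / len(indices)

# Flag modules sitting well below the programme average (threshold is arbitrary).
hotspots = [m for m, idx in indices.items() if idx < programme_average - 20]
print(indices)
print(hotspots)
```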

What would transparency in assessment look like?

Transparency means students understand expectations, how work is judged, and what high performance looks like. Publish full rubrics and weightings, explain any differences between modules up front, and invite questions via short Q&A sessions. Involving students in assessment design conversations fosters ownership and improves comprehension of standards.

What actions enhance assessment practices now?

  • Standardise and simplify criteria where learning outcomes overlap, while explaining intentional differences.
  • Publish annotated exemplars and require markers to reference rubric lines in feedback.
  • Calibrate markers using shared samples and close the loop with students on what was agreed.
  • Offer short feed-forward clinics before submission windows for high-volume modules.
  • Track and respond to recurring queries about criteria via a visible FAQ on the VLE.

How Student Voice Analytics helps you

  • Track sentiment on Marking criteria over time by cohort, mode and site, with drill-downs from provider to school, department and programme.
  • Compare like-for-like across Law and other CAH areas, and by demographics, to target where tone is most negative.
  • Export concise, anonymised summaries for programme teams and boards, including representative comments and year-on-year movement.
  • Evidence impact by monitoring changes in assessment-related topics and sentiment after interventions.

Request a walkthrough

Book a Student Voice Analytics demo

See all-comment coverage, sector benchmarks, and governance packs designed for OfS quality and NSS requirements.

  • All-comment coverage with HE-tuned taxonomy and sentiment.
  • Versioned outputs with TEF-ready governance packs.
  • Benchmarks and BI-ready exports for boards and Senate.
