Updated Mar 07, 2026
Law students are not asking for easier assessment; they are asking for assessment they can trust. National Student Survey (NSS) open-text analysis shows that transparency, consistency and programme-level coordination remain the main pressure points in law, even though overall subject sentiment is more positive than the sector view on assessment methods. In sector-wide NSS open-text analysis of assessment methods, 28.0% of comments are Positive and 66.2% Negative, with law sitting at a sentiment index of -14.6 within that category. Marking criteria in law are a particular weak spot, carrying a strongly negative index of about -46.7. Taken together, these two sector lenses, one showing how students experience assessment and one aggregating law programmes across providers, explain why students keep returning to clarity, parity and flexibility.
What do law students want from diverse assessment?
Students want a broader mix of methods beyond high-stakes exams, because variety helps them demonstrate different strengths across modules. They value coursework, presentations and applied projects that mirror legal practice. Given how negative assessment-method comments remain, especially among mature and part-time cohorts, programme teams can act on student voice by publishing a brief for each task, using checklist-style rubrics and sharing exemplars, practices that respond directly to what law students say about marking criteria. Orientation on formats and academic integrity helps students who are not UK domiciled, while accessibility by design and asynchronous alternatives to oral components reduce avoidable friction. Coordinating assessment at programme level avoids duplication and keeps method choice aligned with learning outcomes.
How do students experience the move to online exams?
Online exams can widen flexibility, but only when students trust the process. Students appreciate the reduced stress of sitting assessments in familiar settings, yet concerns about reliability and fairness remain when systems falter. Law schools can protect confidence by providing practice environments, clear guidance on permitted resources and rapid support routes during exams. Where online exams continue, design choices should prioritise academic integrity without over-policing, with contingency plans that preserve parity for those affected by technical issues. Post-exam debriefs at cohort level further improve perceived fairness by explaining common strengths, misunderstandings and how the paper mapped to outcomes.
Do marking criteria and feedback help students improve?
Students improve faster when they can use criteria while drafting, not just after grading. They repeatedly ask for feedback that is specific, timely and usable in the next submission. Law programmes can respond by mapping criteria to learning outcomes, publishing annotated exemplars at key grade boundaries and calibrating markers to reduce variance. Sample-based double marking in larger cohorts and recorded moderation notes build trust. A realistic, clearly communicated feedback turnaround enhances transparency and supports student development.
Why does exam timetabling still create friction?
Better timetabling reduces avoidable stress before it harms performance. Students flag clustered deadlines, overlapping methods and long waits for results as sources of pressure and under-performance. A programme-level assessment calendar prevents deadline pile-ups and method clashes across modules, answering students' calls for earlier, more stable timetables, while a "no surprises" change window steadies communications. Where feasible, early release of assessment briefs and predictable submission windows support students balancing study, work and caring responsibilities. Reducing the lag between exams and results, even by providing a short cohort debrief before marks are finalised, improves confidence.
What works well in assessments?
Students respond best to assessments that feel authentic to legal study and practice. They praise case studies, moots and problem questions that test analysis, judgment and professional communication in realistic scenarios. When tasks are well scaffolded and criteria are explicit, students report deeper learning and a stronger sense of progression. Visible teaching expertise and accessible resources amplify these benefits by making expectations easier to meet.
Where should law schools prioritise improvement?
Law schools should prioritise the changes that make assessment feel clearer, calmer and fairer. Start with clarity and consistency: rubric-based briefs, exemplars and marker calibration reduce ambiguity. Then strengthen operational rhythm: a single assessment calendar and stable communications limit avoidable stress. Finally, close the loop with brief cohort-level debriefs after each assessment, so students can act on feedback and see how fairness is being protected. These steps target the most negative areas in student comments (assessment methods and marking criteria) while preserving the areas students already rate highly.
How Student Voice Analytics helps you
Student Voice Analytics shows where assessment issues concentrate in law, by discipline, demographic and cohort, so teams can act on the highest-friction themes first. It tracks topic sentiment over time, surfaces concise summaries for programme and module teams, and supports like-for-like comparisons by subject mix and cohort profile. Explore Student Voice Analytics to benchmark law assessment feedback, target interventions where they will move sentiment most, and export evidence for boards, TEF and quality reviews.