On balance, students say assessment in law needs to be more transparent, more consistent, and better coordinated across programmes. In National Student Survey (NSS) open-text analysis for assessment methods across the sector, 28.0% of comments are Positive and 66.2% Negative, with law sitting at a sentiment index of −14.6 within that category. In contrast, overall student mood in law trends more positive (51.1% Positive), yet the detail shows pressure points: marking criteria in law carry a strongly negative index of roughly −46.7. These two sector lenses, one grouping comments by how students experience being assessed and the other aggregating law programmes across providers, shape what follows and explain why students prioritise clarity, parity and flexibility.
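For readers wondering how such an index is derived: the exact formula is not given above, but a common convention is to score each topic as the share of Positive comments minus the share of Negative comments, scaled to a −100 to +100 range. The short Python sketch below applies that assumption to made-up comment counts; both the sentiment_index helper and the numbers are illustrative rather than the published methodology.

```python
# Illustrative sketch only: the published indices may use a different formula.
from collections import Counter

def sentiment_index(labels):
    """Hypothetical index: share of Positive minus share of Negative, on a -100..+100 scale."""
    counts = Counter(labels)
    total = sum(counts.values())
    if total == 0:
        return 0.0
    positive_share = counts.get("Positive", 0) / total
    negative_share = counts.get("Negative", 0) / total
    return round(100 * (positive_share - negative_share), 1)

# Made-up labels for comments about one topic within a single subject area.
comments = ["Positive"] * 28 + ["Negative"] * 43 + ["Neutral"] * 29
print(sentiment_index(comments))  # -15.0 with these invented counts
```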
What do law students want from diverse assessment?
Students ask for a broader mix of methods beyond high‑stakes exams so that different strengths can be evidenced across modules. They value coursework, presentations and applied projects that mirror legal practice. Given how critical the sector tone is on assessment methods, especially among mature and part‑time cohorts, programme teams act on student voice by publishing an assessment method brief for each task, using checklist‑style rubrics, and providing exemplars. Orientation on formats and academic integrity helps students domiciled outside the UK, while building accessibility in from the outset and offering asynchronous alternatives for oral components reduces friction. Coordinating assessment at programme level avoids duplication and aligns method choice with learning outcomes.
How do students experience the move to online exams?
Students appreciate flexibility and the reduced stress of sitting assessments in familiar settings, but they remain concerned about reliability and fairness when systems falter. Law schools mitigate this by providing practice environments, clear guidance on permitted resources, and rapid support routes during exams. Where online exams continue, design choices prioritise academic integrity without over‑policing, with contingency plans that maintain parity for those affected by technical issues. Post‑exam debriefs at cohort level improve perceived fairness by explaining common strengths, misunderstandings and how the paper mapped to outcomes.
Do marking criteria and feedback help students improve?
Students repeatedly ask for criteria they can apply while drafting and feedback that arrives in time to use on the next assessment. Law programmes respond by mapping criteria to learning outcomes, publishing annotated exemplars at key grade boundaries, and calibrating markers to reduce variance. Double‑marking a sample of work in larger cohorts, together with recorded moderation notes, builds trust. Committing to a realistic feedback turnaround and communicating progress enhances transparency and supports student development.
Why does exam timetabling still create friction?
Students flag clustered deadlines, overlapping methods, and long waits for results as sources of stress and under‑performance. A programme‑level assessment calendar prevents deadline pile‑ups and method clashes across modules, while a “no surprises” change window steadies communications. Where feasible, early release of assessment briefs and predictable submission windows support students balancing study, work and caring responsibilities. Reducing the lag between exams and results, even by providing a short cohort debrief before marks are finalised, improves confidence.
What works well in assessments?
Students praise assessments that require applying doctrine to realistic scenarios—case studies, moots and problem questions that test analysis, judgment and professional communication. When tasks are well‑scaffolded and criteria are explicit, students report deeper learning and a stronger sense of progression. Visible teaching expertise and accessible resources amplify these benefits by making expectations easier to meet.
Where should law schools prioritise improvement?
Focus first on assessment clarity and consistency: rubric‑based briefs, exemplars, and marker calibration reduce ambiguity. Next, strengthen operational rhythm: a single assessment calendar and stable communications limit avoidable stress. Finally, close the loop: brief cohort‑level debriefs after each assessment help students act on feedback and enhance perceptions of fairness. These steps target the most negative areas in student comments—assessment methods and marking criteria—while preserving areas students rate highly.
How Student Voice Analytics helps you
Student Voice Analytics pinpoints where assessment method issues concentrate for law by cutting data by discipline, demographics and cohort. It tracks topic sentiment over time, surfaces concise summaries for programme and module teams, and supports like‑for‑like comparisons by subject mix and cohort profile. You can target interventions where they move sentiment most, then export evidence for boards, TEF and quality reviews.
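For analytics or planning teams who want to prototype a similar cut from their own survey exports, the sketch below shows one way the segmentation could look. It assumes comment-level records with discipline, cohort, topic, year and sentiment columns; the column names and the topic_sentiment helper are hypothetical, not the Student Voice Analytics implementation.

```python
# Minimal segmentation sketch; column names and data are hypothetical.
import pandas as pd

def topic_sentiment(df: pd.DataFrame) -> pd.DataFrame:
    """Share of Positive minus share of Negative comments per discipline, cohort, topic and year."""
    grouped = df.groupby(["discipline", "cohort", "topic", "year"])["sentiment"]
    index = grouped.apply(
        lambda s: 100 * ((s == "Positive").mean() - (s == "Negative").mean())
    )
    return index.rename("sentiment_index").reset_index()

# Example with invented records.
records = pd.DataFrame({
    "discipline": ["Law", "Law", "Law", "Law"],
    "cohort": ["full-time", "full-time", "part-time", "part-time"],
    "topic": ["marking criteria"] * 4,
    "year": [2023, 2024, 2023, 2024],
    "sentiment": ["Negative", "Positive", "Negative", "Negative"],
})
print(topic_sentiment(records))
```

Tracking the same measure across successive survey years then gives the trend lines that flag where sentiment is moving.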
Request a walkthrough
See all-comment coverage, sector benchmarks, and governance packs designed for OfS quality and NSS requirements.
© Student Voice Systems Limited. All rights reserved.