What needs to change in medical student assessments?

Published May 16, 2024 · Updated Oct 12, 2025

marking criteria · Medicine

Medical assessment improves when programmes publish unambiguous criteria, calibrate markers, and return actionable feedback on time. Sector evidence from the marking criteria conversation in the National Student Survey (NSS) shows only 8.4% of comments are positive and 87.9% are negative (index −44.6), with younger students contributing 72.7% of remarks; within medicine (non-specific), tone on criteria is similarly negative (index −45.1). The category aggregates UK NSS open‑text comments on how criteria are presented and applied, while the Common Aggregation Hierarchy (CAH) subject grouping brings medicine programmes together across providers. These patterns explain why students prioritise legible rubrics, exemplars and consistent OSCE judgements; the sections below translate those signals into practice.
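For programme teams wanting to run a comparable check on their own open-text survey comments, here is a minimal sketch of a net-sentiment calculation. It assumes comments have already been labelled positive, negative or neutral; the weighting behind the published index values (−44.6, −45.1) is not specified in this post, so the simple positive-minus-negative figure below is illustrative only, not a reproduction of the NSS analysis.

```python
from collections import Counter

def sentiment_summary(labels):
    """Share of positive/negative/neutral comments plus a simple net-sentiment index.

    `labels` is an iterable of "positive" / "negative" / "neutral" strings, one per
    open-text comment. The index here is just %positive - %negative; the published
    NSS-derived index may use a different weighting.
    """
    counts = Counter(labels)
    total = sum(counts.values())
    pct = {k: 100 * counts.get(k, 0) / total for k in ("positive", "negative", "neutral")}
    return {**pct, "index": pct["positive"] - pct["negative"]}

# Hypothetical labels for one cohort's marking-criteria comments
print(sentiment_summary(["negative"] * 88 + ["positive"] * 8 + ["neutral"] * 4))
```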

Engaging directly with assessment and feedback drives medical students’ learning and professional growth. Programmes should release criteria with the assessment brief, use checklist-style rubrics, provide annotated exemplars across grade bands, and run marker calibration so students know exactly how work will be judged. Incorporating student voice through structured analysis of feedback and regular surveys supports targeted improvement. Staff should audit and iterate assessment methods so processes remain both informative and formative.

How do delays in feedback and results affect learning and progression?

Delays undermine progression decisions and weaken the feed-forward loop into subsequent placements and modules. In medicine, operational friction around scheduling and communications often sits behind slow returns, so programmes should set realistic turnaround times, publish a single source of truth for updates, and explain any late changes. A brief “how your work was judged” summary with each grade helps students act immediately, even when fuller commentary follows.

What does genuine transparency in the marking process require?

Students need to see how marks are allocated and what quality looks like. Provide rubrics with weightings and common error notes alongside exemplars at key grade bands. Walk through criteria in class or online when releasing the assessment brief and take questions. Standardise criteria where learning outcomes overlap and flag any intentional differences across modules up front. These practices lower ambiguity and reduce disputes.

How can programmes reduce variability in assessment marking?

Variability in OSCEs and written assessments erodes trust. Regular marker calibration against a short bank of shared samples aligns standards, and a recorded “what we agreed” note circulated to students makes the outcome visible. Use moderation strategically and ensure assessors receive concise guidance on interpreting descriptors. Invite student representatives to observe or review the calibration outputs to strengthen legitimacy without compromising integrity.

Where do perceptions of marking bias arise, and how can we mitigate them?

Perceptions often stem from opaque criteria and inconsistent application. Anonymise marking where feasible, constrain subjectivity through checklist-style descriptors, and train assessors to recognise and mitigate bias. Publishing exemplars for diverse response styles and providing rubric-referenced feedback reduce the sense that preference or presentation unduly influences outcomes.

How should staff respond to student concerns about marking?

Treat queries as signals to improve, not challenges to authority. Acknowledge issues, explain decisions with reference to the rubric, and close the loop with a short “you said/we’re doing” update to the cohort. This approach strengthens the student voice partnership and raises confidence in assessment governance.

What drives consistency and style in marking across assessors?

Consistency follows from shared expectations, repeated calibration, and visible artefacts. Use concise assessor guides, align criteria across modules where outcomes coincide, and track recurring questions to refine FAQs on the VLE. Offer a short feed-forward clinic before submission windows so students can test their understanding of the requirements before committing to a final approach.

What should medical schools do next?

Stabilise turnaround, make criteria legible, and calibrate regularly. Focus on predictable assessment operations and transparent communication, and use exemplars and rubric-linked feedback to show how students close the gap. These steps address the sector evidence on criteria and align with how medicine students experience delivery, placements and assessment.

How Student Voice Analytics helps you

  • Track how sentiment on marking criteria moves over time and by cohort, site or mode, with drill‑downs from institution to programme.
  • Compare like‑for‑like across medicine and other disciplines to pinpoint where tone is most negative and why.
  • Export concise, anonymised summaries for programme teams and boards, including representative comments and year‑on‑year movement.
  • Surface recurring queries about criteria and feedback to prioritise calibration, exemplars and communications that improve student understanding.

Request a walkthrough

Book a Student Voice Analytics demo

See all-comment coverage, sector benchmarks, and governance packs designed for OfS quality and NSS requirements.

  • All-comment coverage with HE-tuned taxonomy and sentiment.
  • Versioned outputs with TEF-ready governance packs.
  • Benchmarks and BI-ready exports for boards and Senate.
