Can psychology assessments be made consistent and fair?

By Student Voice Analytics
marking criteria · psychology (non-specific)

Yes. Across the National Student Survey (NSS) open-text responses, the cross-sector topic of marking criteria attracts 87.9% negative comments (index −44.6) because students cannot see how standards are applied. In psychology (non-specific), the broad classification covering most UK Psychology programmes, the tone on marking criteria is similarly adverse (−45.0) within a largely positive discipline corpus of ≈23,488 comments, 97% successfully categorised. The practical fix prioritises visibility and calibration: publish annotated exemplars, use checklist rubrics with weightings, release criteria with the assessment brief, and calibrate markers so standards match.

Why does marking and feedback feel inconsistent?

Subjective readings of criteria and uneven calibration across staff drive inconsistency. Students receive disparate grades and comments that reflect differing interpretations of the same descriptors. Scrutinise and standardise criteria across modules where learning outcomes overlap, publish annotated exemplars at key grade bands, and use checklist-style rubrics with weightings and common error notes. Release criteria with the brief and provide a short “how your work was judged” summary with returned grades. Short, regular calibration using a shared bank of samples keeps markers aligned and sustains trust in the process.
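
To make the weighting concrete, here is a minimal sketch of a checklist rubric with published weights; the criterion names, weights and marks are illustrative rather than taken from any programme:

```python
# Illustrative checklist rubric: each criterion carries a published weight.
# Criteria, weights and marks below are hypothetical examples.
RUBRIC_WEIGHTS = {
    "argument_and_structure": 0.30,
    "use_of_evidence": 0.30,
    "critical_analysis": 0.25,
    "presentation_and_referencing": 0.15,
}

def overall_mark(criterion_marks: dict[str, float]) -> float:
    """Combine per-criterion marks (0-100) using the published weights."""
    assert abs(sum(RUBRIC_WEIGHTS.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(RUBRIC_WEIGHTS[c] * criterion_marks[c] for c in RUBRIC_WEIGHTS)

# A script that is strong on evidence but weaker on presentation.
print(overall_mark({
    "argument_and_structure": 65,
    "use_of_evidence": 72,
    "critical_analysis": 60,
    "presentation_and_referencing": 55,
}))  # 64.35
```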

How do we reduce bias and personal preferences in marking?

Bias surfaces when preferences for particular theories or methods seep into judgement. Use detailed rubrics that map explicitly to learning outcomes and make weighting visible so personal preferences recede. Run marker calibration and second-marking on a small stratified sample, then record and share “what we agreed” notes with staff and students. Ongoing professional development on criterion-referenced judgement, plus anonymised marking where viable, helps staff analyse their own tendencies and moderate them.
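
As one way to draw the small stratified sample for second-marking, the sketch below picks a fixed number of scripts from each grade band; the band labels, sample size and reproducible seed are assumptions, not a prescribed workflow:

```python
import random
from collections import defaultdict

# Hypothetical, anonymised submissions tagged with the first marker's grade band.
submissions = [
    ("sub-001", "first"), ("sub-002", "2:1"), ("sub-003", "2:2"),
    ("sub-004", "2:1"), ("sub-005", "third"), ("sub-006", "first"),
    # ...remainder of the cohort
]

def stratified_sample(items, per_band=2, seed=2024):
    """Select up to `per_band` submissions from each grade band for second-marking."""
    rng = random.Random(seed)  # fixed seed so the draw can be reproduced at moderation
    by_band = defaultdict(list)
    for sub_id, band in items:
        by_band[band].append(sub_id)
    return {band: rng.sample(ids, min(per_band, len(ids))) for band, ids in by_band.items()}

print(stratified_sample(submissions))
```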

How should programmes handle external disruptions to assessment and feedback?

Students experience uncertainty when strikes or other disruptions delay marking and feedback. Set contingency plans with clear service standards for turnaround, maintain a single source of truth on the VLE, and enable remote moderation and approval workflows so assessment quality and timelines hold up under pressure. If timelines slip, communicate the revised schedule and what support is available, and prioritise assessments that feed forward into subsequent tasks.

What changes when assessments move online?

Digital submissions can obscure aspects of structure and argument flow unless criteria and tools are adapted. Use digital marking tools with inline annotation, linked rubrics and reusable comments that point to specific criterion lines. Train staff to apply the same standards online as on paper, and pair grades with brief feed-forward steps that students can action before the next submission. Keep online feedback specific and actionable, recognising that students cannot rely on informal clarifications after class.

Do word limits and essay structure help or hinder fairness?

Word limits teach disciplined argument, but they can undermine fairness when they are misaligned with the criteria. Specify what must be evidenced within the limit and how structure is judged, and reflect this explicitly in rubric weightings. Encourage planning templates that foreground argument, evidence and critical analysis so the student's own voice comes through without padding. Where appropriate, standardise word-count ranges across comparable modules to aid student planning.

Why do delays and vague feedback persist, and how do we fix them?

Slippage often stems from uneven workload allocation and unclear service standards. Set predictable turnaround times, publish them with the brief, and resource moderation accordingly. Use structured feedback templates that reference the rubric, include “what to do next”, and signpost drop-ins for clarification. Provide short feed-forward clinics before submission windows to reduce recurring errors and improve attainment.
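
A small sketch of checking actual turnaround against a published service standard; the 15-working-day target, dates and assessment names are assumptions for illustration:

```python
from datetime import date
import numpy as np  # np.busday_count gives a working-day count between two dates

TARGET_WORKING_DAYS = 15  # assumed service standard published with the brief

# Hypothetical records: (assessment, submitted, feedback returned).
returns = [
    ("Essay 1", date(2024, 11, 4), date(2024, 11, 22)),
    ("Lab report", date(2024, 11, 11), date(2024, 12, 6)),
]

for name, submitted, returned in returns:
    elapsed = int(np.busday_count(submitted, returned))
    status = "within target" if elapsed <= TARGET_WORKING_DAYS else "over target"
    print(f"{name}: {elapsed} working days ({status})")
```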

How do we align lecturer expectations across modules?

Expectation drift narrows when teams calibrate around shared samples and common descriptors. Hold termly workshops that differentiate pass, merit and distinction using exemplars, agree on what good evidence looks like, and standardise criteria for overlapping learning outcomes. Use student survey insights to identify where expectations remain opaque and adjust assessment briefs and guidance to remove ambiguity.
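
One hedged illustration of checking alignment after a calibration workshop: compare two markers' independent marks on the shared samples and flag larger gaps for discussion (the marks and the 5-mark tolerance are invented for the example):

```python
# Hypothetical marks given independently by two markers to the same shared samples.
marker_a = [62, 58, 71, 66, 54]
marker_b = [60, 63, 70, 58, 57]

diffs = [abs(a - b) for a, b in zip(marker_a, marker_b)]
mean_gap = sum(diffs) / len(diffs)

# Assumed tolerance: revisit any sample where the markers are more than 5 marks apart.
to_revisit = [i for i, d in enumerate(diffs) if d > 5]

print(f"Mean absolute difference: {mean_gap:.1f} marks")
print(f"Samples to discuss again: {to_revisit}")
```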

How Student Voice Analytics helps you

  • Track sentiment on marking criteria and related assessment topics over time by provider, school or programme, so teams can target the modules and cohorts where tone is most negative.
  • Compare like-for-like with Psychology peers and by demographics to see where clarity gaps persist, then prioritise interventions such as exemplars, rubric redesign and calibration.
  • Export concise, anonymised summaries for programme teams and boards, with year-on-year movement and representative comments to support action planning and quality reporting.
  • Evidence impact by linking local changes in criteria, calibration and feedback turnaround to subsequent shifts in student tone and attainment.

Request a walkthrough

Book a Student Voice Analytics demo

See all-comment coverage, sector benchmarks, and governance packs designed for OfS quality and NSS requirements.

  • All-comment coverage with HE-tuned taxonomy and sentiment.
  • Versioned outputs with TEF-ready governance packs.
  • Benchmarks and BI-ready exports for boards and Senate.
