Can psychology assessments be made consistent and fair?

Updated Mar 30, 2026

marking criteria · psychology (non-specific)

Psychology students lose confidence quickly when marking criteria feel opaque or inconsistently applied. Across the National Student Survey (NSS) open-text responses, as outlined in our NSS open-text analysis methodology, the cross-sector topic of marking criteria attracts 87.9% negative comments (index −44.6), because students cannot see how standards are applied.

In psychology (non-specific), the broad classification covering most UK psychology programmes, sentiment on marking criteria is similarly negative (−45.0), even within a largely positive discipline corpus of 23,488 comments, 97% of which were successfully categorised. The most practical response is to make standards visible and consistent: publish annotated exemplars, use checklist rubrics with weightings, release criteria with the assessment brief, and calibrate markers so standards are applied in the same way.

Why does marking and feedback feel inconsistent?

Subjective readings of criteria and uneven calibration across staff are the main causes. Students receive different grades and comments because the same descriptors are interpreted differently. Standardise criteria across modules where learning outcomes overlap, publish annotated exemplars at key grade bands, and use checklist-style rubrics with weightings and common error notes. Release criteria with the brief, then return grades with a short “how your work was judged” summary. Short, regular calibration using a shared bank of samples keeps markers aligned, reduces avoidable appeals, and sustains trust in the process.

How do we reduce bias and personal preferences in marking?

Bias surfaces when preferences for particular theories or methods seep into judgement. Detailed rubrics that map explicitly to learning outcomes and show weightings help personal preferences recede. Run marker calibration and second-marking on a small stratified sample, then record and share concise “what we agreed” notes with staff and students. Ongoing staff development in criterion-referenced judgement, alongside anonymised marking where viable, helps markers spot their own patterns and moderate them earlier. The result is a fairer process and a clearer moderation trail when marks are challenged.

How should programmes handle external disruptions to assessment and feedback?

Students feel exposed when strikes or other disruptions delay marking and feedback. Put contingency plans in place with clear service standards for turnaround, maintain a single source of truth on the VLE, and enable remote moderation and approval workflows so that assessment quality and timelines hold up under pressure. If timelines slip, communicate the revised schedule, explain what support is available, and prioritise assessments that feed forward into subsequent tasks. Strong contingency planning protects both academic standards and student confidence.

What changes when assessments move online?

Digital submissions can hide issues in structure and argument flow unless criteria and tools are adapted. Use digital marking tools with inline annotation, linked rubrics, and reusable comments that point to specific criterion lines. Train staff to apply the same standards online as on paper, especially where online delivery of psychology can weaken assessment clarity and interaction, and pair grades with brief feed-forward steps students can use before the next submission. Specific online feedback reduces the need for informal clarification and helps students act sooner.

Do word limits and essay structure help or hinder fairness?

Word limits teach disciplined argument, but they create noise when they are misaligned with criteria. Specify what must be evidenced within the limit and how structure is judged, then reflect this explicitly in rubric weightings. Encourage planning templates that foreground argument, evidence, and critical analysis so students can show clear thinking without padding. Where appropriate, standardise ranges across comparable modules to make planning easier and reduce avoidable variation in staff expectations.

Why do delays and vague feedback persist, and how do we fix them?

Delays often stem from uneven workload allocation and unclear service standards. Set predictable turnaround times, publish them with the brief, and resource moderation accordingly. Use structured feedback templates that reference the rubric, include “what to do next”, and reflect the features of feedback comments students actually judge as fair. Offer short feed-forward clinics before submission windows to reduce recurring errors and improve attainment. Students benefit from faster, clearer feedback, and teams spend less time repeating the same explanations.

How do we align lecturer expectations across modules?

Expectation drift narrows when teams calibrate around shared samples and common descriptors. Hold termly workshops that distinguish pass, merit, and distinction using exemplars, agree what good evidence looks like, and standardise criteria for overlapping learning outcomes. Use psychology students’ own student voice evidence to identify where expectations remain opaque, then revise briefs and guidance to remove ambiguity. When expectations are aligned across modules, students can plan with more confidence and staff can defend decisions more consistently.

How Student Voice Analytics helps you

  • Track sentiment on marking criteria and related assessment topics over time by provider, school, or programme, so teams can target the modules and cohorts where tone is most negative.
  • Compare like-for-like with psychology peers and by demographic to see where clarity gaps persist, then prioritise exemplars, rubric redesign, and calibration.
  • Export concise, anonymised summaries for programme teams and boards, with year-on-year movement and representative comments to support action planning and quality reporting.
  • Measure whether changes in criteria, calibration, and feedback turnaround are improving student confidence over time.

See Student Voice Analytics if you need decision-grade evidence on where assessment fairness is breaking down, or read the buyer's guide if you are comparing approaches.

Request a walkthrough

Book a free Student Voice Analytics demo

See all-comment coverage, sector benchmarks, and reporting designed for OfS quality and NSS requirements.

  • All-comment coverage with HE-tuned taxonomy and sentiment.
  • Versioned outputs with TEF-ready reporting.
  • Benchmarks and BI-ready exports for boards and Senate.
Prefer email? info@studentvoice.ai

UK-hosted · No public LLM APIs · Same-day turnaround

The Student Voice Weekly

Research, regulation, and insight on student voice. Every Friday.

© Student Voice Systems Limited, All rights reserved.