Are psychology assessments working for students?

By Student Voice Analytics
assessment methods · psychology (non-specific)

Not consistently. In the National Student Survey (NSS), comments on assessment methods trend negative overall (28.0% Positive, 66.2% Negative; sentiment index −18.8), and students in psychology particularly flag ambiguity in how work is judged (Marking criteria −45.0; Assessment methods −24.0). The NSS category aggregates students’ views on how assessments are designed and communicated across UK higher education, while psychology here reflects the sector’s standard Common Aggregation Hierarchy grouping. These sector patterns shape the story below: prioritise clarity, consistent standards, and friction‑reducing design for diverse cohorts.

Are expectations and marking standards aligned in psychology?

There is a persistent gap between staff expectations and students’ experiences. Ambiguity in essay-based assignments and opaque criteria undermine confidence and performance. Publish a concise assessment method brief for each task that sets out purpose, marking approach, weighting, allowed resources, and common pitfalls. Use checklist-style rubrics with separated criteria and grade descriptors, and calibrate markers against anonymised exemplars at grade boundaries with short moderation notes. Text analysis helps surface recurring misunderstandings so criteria and exemplars map more tightly to what students submit.
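To make that text-analysis step concrete, here is a minimal sketch that counts recurring word pairs in comments tagged negative, so teams can see which instructions or criteria students stumble over most. The file name, column names, and sentiment labels are illustrative assumptions, not a description of any particular toolchain.

```python
# Minimal sketch: surface recurring phrases in negative assessment comments.
# Assumes a CSV with a free-text "comment" column and a "sentiment" label;
# the file name and column names are illustrative, not a real dataset.
import csv
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "was",
             "it", "for", "on", "that", "this", "we", "i", "not", "with"}

def frequent_bigrams(path: str, top_n: int = 15):
    """Count the most common word pairs in comments tagged negative."""
    counts = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if row.get("sentiment", "").lower() != "negative":
                continue
            words = [w for w in re.findall(r"[a-z']+", row["comment"].lower())
                     if w not in STOPWORDS]
            counts.update(" ".join(pair) for pair in zip(words, words[1:]))
    return counts.most_common(top_n)

if __name__ == "__main__":
    for phrase, n in frequent_bigrams("assessment_comments.csv"):
        print(f"{n:4d}  {phrase}")
```

Reading the top phrases alongside the marking criteria usually shows where a brief or rubric needs rewording before the next cycle.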

How can we make assessment instructions unambiguous?

Ambiguous briefs depress outcomes and satisfaction. Provide plain‑English instructions with a worked exemplar, short “what good looks like” notes, and explicit links between learning outcomes, tasks, and marking criteria. Early release of briefs and predictable submission windows support different modes of study. Encourage quick student feedback on brief clarity before launch, then issue a short post‑assessment debrief summarising common strengths and issues to improve perceived fairness and transparency.

What goes wrong in group work, and how do we fix it?

Uneven contribution and weak accountability make group assessment feel unfair. Require individual components or viva elements alongside the group product, use structured peer evaluation and reflective logs, and state in the brief how peer evidence affects marks. For larger cohorts, sample double‑marking and spot checks where variance is highest. Provide asynchronous options for presentation components to reduce timetabling friction.
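As one illustration of stating in the brief how peer evidence affects marks, the sketch below applies a WebPA-style weighting: each member's share of the peer ratings scales the group mark, with a cap so the adjustment moderates rather than dominates. The rating scale, the cap, and the rounding are assumptions for the example, not a prescribed scheme.

```python
# Minimal sketch of a WebPA-style peer weighting (illustrative assumptions:
# ratings on any consistent scale, adjustment capped at +/- 10 marks).
def individual_marks(group_mark: float,
                     peer_ratings: dict[str, float],
                     cap: float = 10.0) -> dict[str, float]:
    """Scale the group mark by each member's share of the peer ratings."""
    mean_rating = sum(peer_ratings.values()) / len(peer_ratings)
    marks = {}
    for member, rating in peer_ratings.items():
        weighted = group_mark * (rating / mean_rating)
        # Cap the adjustment so peer evidence moderates rather than dominates.
        adjusted = max(group_mark - cap, min(group_mark + cap, weighted))
        marks[member] = round(min(adjusted, 100.0), 1)
    return marks

# Example: a group mark of 68 with uneven contribution evidence.
print(individual_marks(68.0, {"A": 4.5, "B": 4.5, "C": 2.0}))
```

Publishing the formula and the cap in the brief itself is what makes the peer element feel fair rather than arbitrary.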

How do students adapt to different assessment forms?

Format shifts can unsettle students and distort performance. Offer short orientation and mini‑practice tasks for new formats, including online tests and practicals, and explain academic integrity and referencing conventions explicitly. Mock exams reduce anxiety and surface technical barriers early. Programme‑level coordination avoids deadline clusters and method duplication within a term, aligning a balanced mix of assessments to learning outcomes.
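Programme-level coordination is easier with a quick check for deadline clusters. The sketch below flags teaching weeks that carry more submissions than an agreed limit; the module names, weeks, and threshold are illustrative assumptions.

```python
# Minimal sketch: flag teaching weeks where assessment deadlines cluster.
# Module names, weeks, and the per-week limit are illustrative assumptions.
from collections import defaultdict

deadlines = [  # (assessment, teaching week of submission)
    ("PSY101 essay", 7), ("PSY102 lab report", 7), ("PSY103 online test", 7),
    ("PSY104 group presentation", 9), ("PSY105 portfolio", 12),
]

def clustered_weeks(items, max_per_week: int = 2):
    """Return weeks carrying more deadlines than the agreed limit."""
    by_week = defaultdict(list)
    for task, week in items:
        by_week[week].append(task)
    return {week: tasks for week, tasks in sorted(by_week.items())
            if len(tasks) > max_per_week}

for week, tasks in clustered_weeks(deadlines).items():
    print(f"Week {week}: {len(tasks)} deadlines -> {', '.join(tasks)}")
```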

How do we build inclusive assessment for diverse cohorts?

Disabled, mature, part‑time and non‑UK‑domiciled students often report more negative experiences of assessment. Build accessibility in from the start: alternative formats, captioned or oral options, plain‑language instructions, and early signposting to reasonable adjustments. Ensure assistive technologies are supported consistently in both teaching and assessment. Train staff to implement adjustments robustly, and use brief checklists during design to remove common barriers.

How did COVID-19 reshape psychology assessments?

The rapid online pivot exposed weaknesses in format clarity and equity. Open‑book exams and increased coursework diversify how students evidence learning, but they require precise briefs, robust integrity guidance, and reliable access arrangements. Institutions that supported students with technology loans, flexible slots, and clear communication reduced stress and preserved academic integrity. Retaining these practices improves resilience and fairness beyond the pandemic.

What should change now?

Prioritise unambiguous briefs, calibrated standards, and accessible design, then coordinate assessment across the programme to smooth workload. Close the loop with short cohort‑level debriefs before individual marks, and maintain predictable operations that support different learner profiles. These changes directly address the sector signals that assessment clarity and marking transparency most affect psychology students’ experience.

How Student Voice Analytics helps you

Student Voice Analytics segments your NSS open‑text by discipline and demographics to pinpoint where assessment method issues concentrate in psychology. It tracks sentiment over time, surfaces concise anonymised summaries you can share with programme and module teams, and supports like‑for‑like comparisons by subject mix and cohort profile. Export‑ready outputs make it straightforward to evidence progress in boards, TEF submissions, and quality reviews.
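For teams who want to see the shape of this kind of segmentation, the sketch below groups comment-level records by year and demographic flag and reports a simple sentiment index (percentage of positive comments minus percentage of negative). The column names, labels, and index convention are assumptions for illustration; they are not a description of how Student Voice Analytics computes its published figures.

```python
# Minimal sketch, assuming a comment-level table with these illustrative
# columns: year, subject, demographic, sentiment ("positive"/"negative"/...).
# The index convention (% positive minus % negative) is an assumption here.
import pandas as pd

def sentiment_index(df: pd.DataFrame, by: list) -> pd.DataFrame:
    """Percentage-point gap between positive and negative comments per group."""
    share = (df.groupby(by)["sentiment"]
               .value_counts(normalize=True)
               .unstack(fill_value=0.0) * 100)
    share["index"] = share.get("positive", 0.0) - share.get("negative", 0.0)
    return share[["index"]].round(1)

comments = pd.DataFrame({
    "year": [2022, 2022, 2023, 2023, 2023],
    "subject": ["Psychology"] * 5,
    "demographic": ["mature", "under 21", "mature", "under 21", "mature"],
    "sentiment": ["negative", "positive", "negative", "negative", "positive"],
})
print(sentiment_index(comments, by=["year", "demographic"]))
```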

Request a walkthrough

Book a Student Voice Analytics demo

See all-comment coverage, sector benchmarks, and governance packs designed for OfS quality and NSS requirements.

  • All-comment coverage with HE-tuned taxonomy and sentiment.
  • Versioned outputs with TEF-ready governance packs.
  • Benchmarks and BI-ready exports for boards and Senate.
