Mostly not. Across the National Student Survey (NSS), comments about feedback skew negative: 57.3% of comments are negative (index −10.2). Within the Common Academic Hierarchy (CAH) area for politics, students describe feedback tone negatively (−17.3), and concerns about marking criteria (−49.0) and assessment methods (−20.1) are sharper still. The NSS feedback category aggregates sector views on the timeliness, usefulness and clarity of comments; the politics CAH groups politics provision across the UK for like‑for‑like comparison. These signals shape the practical actions below: standardise criteria, provide actionable feed‑forward, and deliver feedback on time.
Politics students experience assessment feedback as the bridge between teaching and learning. They prioritise timeliness, precision and transparency, and they respond when programmes treat feedback as part of teaching rather than an afterthought. Analysing student surveys and open‑text comments at scale surfaces consistent issues across modules: inconsistency in marking, the specificity and usefulness of comments, turnaround times, and how exam feedback contributes to learning. These factors shape engagement with complex political ideas and graduate outcomes, a focus echoed in the Teaching Excellence Framework (TEF).
Where does inconsistency across marking leave politics students?
In political science education, inconsistency in marking among different staff within the same module often leads to confusion and frustration. When politics students receive varying grades for work of similar quality, it undermines their understanding of the expected assessment criteria. Divergent interpretations, with some markers valuing detailed analytical approaches and others concise argument, alter how students prepare. To address this, programmes publish shared rubrics, use annotated exemplars, and run regular calibration sprints that include shared marking of samples. These steps give students a stable target and improve confidence without flattening academic judgement.
How can clarity and detail in feedback guide improvement?
Specific, criteria‑referenced comments help politics students navigate theoretical nuance and applied analysis. Vague statements such as ‘needs improvement’ give little steer on argument structure, use of evidence or engagement with ideology. Many institutions implement structured feedback pro formas that require feed‑forward actions linked to the rubric, alongside brief notes on strengths. This produces a practical plan for the next submission and encourages deeper engagement. Feedback should prompt students to test claims, scrutinise counter‑arguments and situate evidence, not just meet competencies.
Why does timing of feedback matter, and how do we improve it?
Long delays break the learning cycle. If coursework feedback arrives after the next task is submitted, students cannot apply it and motivation dips. Publish a feedback service standard by assessment type, track on‑time rates, and show performance to cohorts. Use digital tools to notify when marking starts, moderation finishes and feedback is released. Co‑design timelines with student representatives so expectations and workload align.
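As a rough illustration of what tracking on‑time rates can look like, the sketch below computes the share of feedback returned within a service standard for each assessment type. The record fields, dates and the 15‑day standard are illustrative assumptions, not part of any NSS or institutional specification.

```python
from collections import defaultdict
from datetime import date

# Hypothetical service standard; real programmes set their own by assessment type.
SERVICE_STANDARD_DAYS = 15

# Hypothetical records: (assessment_type, submission_date, feedback_released_date)
returns = [
    ("essay", date(2024, 11, 1), date(2024, 11, 14)),
    ("essay", date(2024, 11, 1), date(2024, 11, 25)),
    ("exam",  date(2025, 1, 10), date(2025, 1, 22)),
]

def on_time_rates(records, standard_days=SERVICE_STANDARD_DAYS):
    """Share of feedback returned within the service standard, per assessment type."""
    totals, on_time = defaultdict(int), defaultdict(int)
    for assessment_type, submitted, released in records:
        totals[assessment_type] += 1
        if (released - submitted).days <= standard_days:
            on_time[assessment_type] += 1
    return {t: on_time[t] / totals[t] for t in totals}

print(on_time_rates(returns))  # e.g. {'essay': 0.5, 'exam': 1.0}
```

Figures like these, reported each term against the published standard, give cohorts and student representatives a shared view of whether timelines are being met.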
How should feedback on summative assessments and exams work for learning?
Summative feedback often arrives as a mark only, limiting development of critical analysis and argumentation. Politics programmes can provide scalable commentary: brief script annotations on argument structure and evidence use; generic cohort feedback mapped to criteria with exemplars at grade bands; and short debriefs that explain common misconceptions. Text analysis helps module leads spot recurrent issues across large cohorts and focus improvement.
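One minimal way to surface recurrent issues from open‑text comments across a large cohort is simple theme counting, sketched below. The theme lexicon and sample comments are invented for illustration; a production analysis would use a validated coding frame or a trained classifier rather than keyword matching.

```python
from collections import Counter

# Illustrative theme lexicon; adapt keywords to local terminology.
THEMES = {
    "timeliness": {"late", "delay", "slow", "weeks"},
    "clarity": {"vague", "unclear", "generic"},
    "consistency": {"inconsistent", "varies", "different markers"},
}

def theme_counts(comments):
    """Count how many comments mention each theme at least once."""
    counts = Counter()
    for comment in comments:
        text = comment.lower()
        for theme, keywords in THEMES.items():
            if any(k in text for k in keywords):
                counts[theme] += 1
    return counts

sample = [
    "Feedback arrived three weeks late, too slow to use for the next essay.",
    "Comments were vague: 'needs improvement' with no link to the criteria.",
]
print(theme_counts(sample))  # Counter({'timeliness': 1, 'clarity': 1})
```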
What do students expect, and how do we involve them?
Students want to see how their work aligns with module outcomes and marking criteria. Publish criteria in plain language with exemplars, explain what “good” looks like at each band, and show how feedback links to the next task. Build dialogic opportunities—short feedback clinics, small‑group reviews and tutor drop‑ins—so students can query advice and plan actions. Brief ‘how to use your feedback’ guides within modules raise uptake, particularly in large full‑time cohorts.
How do we tackle perceived bias and improve transparency?
Perceived bias undermines trust, especially in interpretative work. Anonymous marking where feasible, second marking and moderation, and routine use of rubrics reduce risk. Provide a concise breakdown showing how each criterion informed the grade, and keep an audit trail of changes after moderation. Invite student panels to review feedback examples each term and comment on clarity and actionability.
What practical strategies lift feedback in politics?
Prioritise standardisation, transparency and constructive engagement. Share rubrics and annotated exemplars across staff; agree a realistic feedback SLA and report performance; require feed‑forward actions in every return; run regular calibration sprints; and spot‑check feedback for specificity, actionability and alignment to criteria. Lift practice from mature and part‑time provision—staged feedback and dialogic sessions—and replicate in high‑volume modules. Close the loop visibly with brief ‘you said → we did’ updates each term.
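To make the spot‑checking step concrete, here is a minimal sketch that flags feedback returns likely to need review because they are very short, reference no marking criterion, or contain no feed‑forward phrasing. The criterion names, cue phrases and word threshold are placeholders to adapt to local rubrics, not a prescribed quality check.

```python
# Placeholder criterion names and feed-forward cues; replace with local rubric terms.
CRITERIA = ("argument", "evidence", "structure", "referencing")
FEED_FORWARD_CUES = ("next time", "in future", "to improve")

def flag_vague_feedback(comment: str, min_words: int = 40) -> list[str]:
    """Return reasons a feedback comment should be reviewed for specificity."""
    text = comment.lower()
    reasons = []
    if len(text.split()) < min_words:
        reasons.append("very short")
    if not any(criterion in text for criterion in CRITERIA):
        reasons.append("no criterion referenced")
    if not any(cue in text for cue in FEED_FORWARD_CUES):
        reasons.append("no feed-forward action")
    return reasons

print(flag_vague_feedback("Good effort, needs improvement."))
# ['very short', 'no criterion referenced', 'no feed-forward action']
```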
How Student Voice Analytics helps you
See all-comment coverage, sector benchmarks, and governance packs designed to support OfS quality and standards requirements and the NSS.