Do teacher training students trust marking criteria in UK higher education?

Updated Mar 15, 2026

marking criteria · teacher training

Teacher training students need marking criteria they can trust, because unclear standards make it harder to judge progress in both written work and placements. Across National Student Survey (NSS) open-text comments, analysed using our NSS open-text analysis methodology, the marking criteria category skews heavily negative, at 87.9% negative with a sentiment index of −44.6. In teacher training, sentiment on criteria is similarly low at −45.6, while students place heavy emphasis on placements (16.1% of comments) and often discuss feedback with a −18.8 tone. The category aggregates sector-wide NSS comments about how criteria are presented and applied; teacher training refers to the UK subject grouping used for like-for-like comparisons. Together, these patterns show why clearer criteria can improve confidence, feedback and readiness for school-based practice.
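As a rough illustration of what a figure like −44.6 means in practice: one common way to define a sentiment index is the share of positive comments minus the share of negative comments, on a −100 to +100 scale. The exact formula behind the NSS figures above is an assumption here, not taken from the methodology; the sketch below is purely illustrative.

```python
# Hypothetical sketch: a simple sentiment index defined as
# (% positive - % negative). The actual NSS open-text methodology
# may weight or scale comments differently; this is an assumption.

def sentiment_index(positive: int, negative: int, neutral: int) -> float:
    """Return (% positive - % negative) on a -100..+100 scale."""
    total = positive + negative + neutral
    if total == 0:
        raise ValueError("no comments to score")
    return round(100 * (positive - negative) / total, 1)

# Example: a cohort with 25 positive, 60 negative and 15 neutral comments.
print(sentiment_index(positive=25, negative=60, neutral=15))  # -35.0
```

On this definition, a heavily negative comment mix pulls the index well below zero, which is the pattern the category-level figures describe.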

When programmes prioritise usable, transparent criteria and foreground the student voice, students spend less time decoding expectations and more time improving their practice. Analysing structured surveys alongside open-text feedback helps staff calibrate criteria, check whether students understand what progress looks like, and adjust assessment design where needed. That keeps marking aligned with programme outcomes and the realities of school-based practice.

What is unique about marking criteria in teacher training?

Marking criteria in teacher training need to span evidence-informed theory and assessed practice. Students must integrate pedagogy, policy and subject knowledge with classroom management, lesson design and reflective practice. Traditional academic metrics alone rarely capture this duality. Programmes that translate learning outcomes into criteria that show what good looks like in both written work and practice-based tasks help students track progress and recognise strengths earlier. Transparent, consistent criteria tailored to teacher education reduce mixed signals about performance and progression.

Where do expectations about criteria diverge from reality?

Students expect criteria to be relevant, fair and actionable, yet they report gaps between the criteria published in assessment briefs and the way those criteria are applied in marking. Variation in marker interpretation undermines confidence and obscures how to improve. Involving students in reviewing criteria and exemplars, and inviting questions before submission windows, surfaces ambiguity early. Programme teams can then refine wording, align expectations across modules and clarify intentional differences where outcomes diverge, which gives students a clearer route to improvement.

How does feedback connect to marking criteria?

Feedback is most useful when it shows how performance maps to the criteria. Formative comments guide learning within modules; summative comments should reference rubric lines and explain judgements. Students say feedback that arrives late, feels generic or lacks alignment to criteria is hard to use. Referencing the rubric directly, signposting the next step, and sequencing feed-forward opportunities before major submissions all increase utility. Given that feedback sentiment in teacher training trends negative in sector data, programmes that standardise turnaround expectations and embed brief feed-forward touchpoints, as discussed in how teacher training programmes can improve feedback, improve both learning and perceived fairness.

How can we improve transparency and clarity?

Students need to see criteria early, alongside the assessment brief, and to explore them in class or online. Checklist-style rubrics with unambiguous descriptors, weightings and common error notes reduce interpretation drift. Annotated exemplars at key grade bands demystify standards and support self-assessment. Where modules share outcomes, standardising criteria and highlighting any intentional differences up front prevents confusion. A short “how your work was judged” summary with each grade helps students connect output to judgement and reduces guesswork before the next submission.

How can programmes improve consistency in marking?

Reliability improves when teams calibrate before marks go back to students. Short calibration sessions using a bank of shared samples, with agreed notes recorded for students, align expectations. Exemplar libraries, moderation and light-touch audits identify patterning in marks and language. These steps do not constrain academic judgement; they support more consistent application of standards across assessors and placements, which protects trust in the process.
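A light-touch calibration audit of the kind described above can be as simple as comparing markers' grades on a shared bank of samples. The sketch below is a hypothetical illustration (marker names and marks are invented, not drawn from the article): the mean signed difference flags a systematic severity or leniency gap between markers, while the mean absolute difference flags overall disagreement.

```python
# Hypothetical calibration check: compare two markers' grades on the
# same bank of sample scripts. All names and numbers are illustrative.
from statistics import mean

marker_a = {"s1": 62, "s2": 58, "s3": 71, "s4": 49}
marker_b = {"s1": 65, "s2": 55, "s3": 70, "s4": 57}

# Signed differences reveal direction (is marker A harsher overall?);
# absolute differences reveal the size of disagreement per script.
diffs = [marker_a[s] - marker_b[s] for s in marker_a]
mean_signed = mean(diffs)
mean_abs = mean(abs(d) for d in diffs)

print(f"mean signed difference: {mean_signed:+.1f}")   # +.1f shows the sign
print(f"mean absolute difference: {mean_abs:.1f}")
```

A near-zero signed difference with a large absolute difference suggests inconsistent interpretation of the criteria rather than one harsh marker, which points the team towards clarifying descriptors rather than adjusting an individual.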

How do criteria shape professional development?

Criteria signal what the profession values. When they point directly to effective lesson planning, adaptive pedagogy and evidence-informed decision-making, students internalise standards they will later apply in the classroom and on teacher training placements. If criteria become rigid or detached from practice, they risk misdirecting effort. Alumni reflections, mentor input from schools, and structured self-assessment can test fit with classroom realities and prompt iterative refinement. That makes criteria a tool for professional formation, not just grading.

What should programmes change next?

Start with changes students will notice quickly and that staff can sustain.

  • Involve students in criteria reviews and Q&A sessions before assessment windows.
  • Publish annotated exemplars for common tasks and map them clearly to rubrics.
  • Calibrate markers and share short “what we agreed” summaries with cohorts.
  • Standardise criteria where learning outcomes overlap; explain differences where they do not.
  • Provide rubric-referenced feedback within agreed timelines, including one clear feed-forward action.

These steps make criteria more legible, reduce inconsistency and strengthen readiness for school-based practice.

How Student Voice Analytics helps you

Student Voice Analytics shows where sentiment on marking criteria deteriorates and why, with drill-downs from provider to programme and cohort. It enables like-for-like comparisons for teacher training against the wider sector, including mode, domicile and age, so teams can target modules where tone is most negative. Exportable, anonymised summaries highlight priority fixes, track the impact of calibration and rubric changes over time, and help programme teams evidence progress to boards and external reviewers. Explore Student Voice Analytics if you need a faster way to pinpoint where assessment clarity is breaking down.

Request a walkthrough

Book a free Student Voice Analytics demo

See all-comment coverage, sector benchmarks, and reporting designed for OfS quality and NSS requirements.

  • All-comment coverage with HE-tuned taxonomy and sentiment.
  • Versioned outputs with TEF-ready reporting.
  • Benchmarks and BI-ready exports for boards and Senate.
Prefer email? info@studentvoice.ai

UK-hosted · No public LLM APIs · Same-day turnaround


© Student Voice Systems Limited, All rights reserved.