Do computer science students trust marking criteria?

Updated Mar 13, 2026

marking criteria · computer science

Computer science students do not always trust marking criteria, and the gap is hard to ignore. Across UK National Student Survey (NSS) open-text analysis, the marking criteria theme records 87.9% negative sentiment and a sentiment index of -44.6 from about 13,329 comments, pointing to widespread concerns about clarity and consistency.

Within computer science (the subject grouping used for sector benchmarking and CAH reporting), marking criteria is one of the most negatively discussed assessment topics (4.7% share; sentiment index -47.6). When students cannot see how criteria connect to learning outcomes, or how markers apply them, trust falls quickly.
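The article does not define how a sentiment index is calculated, and the published figures come from Student Voice Analytics' own methodology. Purely as an illustration of the idea, a minimal sketch might score each comment -1, 0, or +1 and report the mean scaled to the range -100 to +100; the function name and scoring scheme below are hypothetical, not the methodology behind the numbers above.

```python
def sentiment_index(labels: list[str]) -> float:
    """Illustrative sentiment index: mean comment polarity scaled to [-100, 100].

    Hypothetical definition for explanation only; not the Student Voice
    Analytics methodology behind the figures quoted in this article.
    """
    score = {"positive": 1, "neutral": 0, "negative": -1}
    polarities = [score[label] for label in labels]
    return 100 * sum(polarities) / len(polarities)

# Example: 30 positive, 10 neutral, 60 negative comments.
labels = ["positive"] * 30 + ["neutral"] * 10 + ["negative"] * 60
print(sentiment_index(labels))  # -30.0
```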

Computer science departments can respond without diluting standards. Clearer rubrics, earlier briefing, marker calibration, and more actionable feedback make grading easier to understand and easier to trust. The sections below show where friction appears and what teams can do next.

How should group work recognise individual contribution?

Group projects often hide uneven contribution, which makes students who pulled their weight question the mark rather than the task. In a discipline where structured collaboration is central to computer science programmes, the assessment model needs to separate shared output from individual effort.

Publish criteria for both the group product and each student's contribution with the assessment brief. Use checklist-style rubrics, milestone check-ins, and structured peer assessment to show how marks are derived. Short feed-forward clinics before submission help teams calibrate expectations early.
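One widely used way to derive individual marks from a shared group mark is a peer-moderation factor in the style of WebPA. The sketch below is a simplified illustration of that approach, not a description of any particular programme's model; the ratings, names, and cap at 100 are all assumptions.

```python
def moderated_marks(group_mark: float,
                    ratings: dict[str, dict[str, float]]) -> dict[str, float]:
    """Simplified WebPA-style peer moderation (illustrative sketch only).

    ratings maps each rater to the scores they gave every team member,
    including themselves. Each student's factor is their total received
    score divided by the team mean; the group mark is scaled by that
    factor and capped at 100.
    """
    members = list(ratings)
    received = {m: sum(r[m] for r in ratings.values()) for m in members}
    mean_received = sum(received.values()) / len(members)
    return {m: min(100.0, group_mark * received[m] / mean_received)
            for m in members}

# Hypothetical three-person team where Bea carried more of the work.
ratings = {
    "Ada": {"Ada": 3, "Bea": 5, "Cal": 2},
    "Bea": {"Ada": 3, "Bea": 4, "Cal": 3},
    "Cal": {"Ada": 4, "Bea": 5, "Cal": 3},
}
print(moderated_marks(68.0, ratings))
# {'Ada': 63.75, 'Bea': 89.25, 'Cal': 51.0}
```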

When students can see how contribution is recognised, they are more likely to view group assessment as fair rather than arbitrary.

Where does inconsistency arise, and how do teams calibrate?

In computer science, inconsistency often starts when assessors interpret the same rubric differently across modules or markers. Students experience that gap as uncertainty, then unfairness.

Release criteria with the assessment brief, provide annotated exemplars across grade bands, and walk through the rubric in class or online. Run marker calibration with a shared sample set, then publish short "how we apply this rubric" notes so students know what markers agreed.
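The article does not prescribe how to measure calibration, but one simple check some teams use is the mean absolute difference between markers across a shared sample set. The sketch below uses hypothetical marks; more formal statistics such as Cohen's kappa or intraclass correlation exist, and this is only the minimal version.

```python
from itertools import combinations

def pairwise_divergence(marks: dict[str, list[float]]) -> dict[tuple[str, str], float]:
    """Mean absolute mark difference for each pair of markers on the
    same sample set. A simple, illustrative calibration check."""
    return {
        (a, b): sum(abs(x - y) for x, y in zip(marks[a], marks[b])) / len(marks[a])
        for a, b in combinations(marks, 2)
    }

# Hypothetical marks from three markers on the same five scripts.
marks = {
    "marker_1": [62, 71, 55, 80, 48],
    "marker_2": [65, 69, 58, 78, 52],
    "marker_3": [58, 74, 50, 84, 44],
}
for pair, gap in pairwise_divergence(marks).items():
    print(pair, round(gap, 1))
# ('marker_1', 'marker_2') 2.8
# ('marker_1', 'marker_3') 4.0
# ('marker_2', 'marker_3') 6.8
```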

This improves reliability for staff and gives students a clearer picture of what strong work looks like.

What feedback helps students act?

Generic feedback leaves students guessing how to improve, which echoes wider feedback challenges in computer science education and weakens confidence in both the mark and the module. Feedback works best when it explains the decision and shows the next step.

Give each student a short "how your work was judged" summary linked to rubric lines and weightings. Pair that with timely feed-forward on common mistakes, especially in large modules, and use digital workflows to standardise turnaround. Brief conversations in office hours or scheduled feedback slots turn grades into something students can use.
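The article does not specify the shape of such a summary. As a hypothetical sketch, a per-criterion breakdown might pair each rubric line's weight and score with a one-line marker note, with the overall mark as the weighted sum; the rubric, scores, and notes below are invented for illustration.

```python
def judged_summary(rubric: list[tuple[str, float]],
                   scores: dict[str, float],
                   notes: dict[str, str]) -> str:
    """Render a 'how your work was judged' summary from rubric weights,
    per-criterion scores (0-100), and short marker notes. Illustrative
    only; a real summary would come from the marking workflow."""
    lines, total = [], 0.0
    for criterion, weight in rubric:
        total += weight * scores[criterion]
        lines.append(f"{criterion} ({weight:.0%}): {scores[criterion]:.0f} "
                     f"- {notes[criterion]}")
    lines.append(f"Overall mark: {total:.1f}")
    return "\n".join(lines)

# Hypothetical rubric for a programming coursework.
rubric = [("Correctness", 0.4), ("Design", 0.3),
          ("Testing", 0.2), ("Report", 0.1)]
scores = {"Correctness": 70, "Design": 60, "Testing": 55, "Report": 75}
notes = {"Correctness": "passes core cases; edge cases missed",
         "Design": "sound structure; some duplication",
         "Testing": "unit tests present but shallow",
         "Report": "clear rationale for key decisions"}
print(judged_summary(rubric, scores, notes))
```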

The benefit is practical: students spend less time decoding the mark and more time improving the next submission.

Why does marking vary across modules, and what should be standardised?

Variation between modules can reflect real disciplinary differences, but too much local variation makes the whole programme feel incoherent. Students then struggle to judge what good looks like from one module to the next.

Standardise the parts that should travel: core rubric structure, shared terminology, explicit weightings, and exemplar formats where learning outcomes overlap. Signal intentional differences early, and involve student representatives when reviewing draft criteria and examples.

This preserves academic flexibility while making expectations easier to understand across the programme.

How do we account for individual circumstances without undermining rigour?

Health, caring responsibilities, access barriers, and unexpected crises all affect performance. Students are more likely to trust the process when support is consistent, visible, and applied without guesswork.

Use a clear extensions policy, explain it in the handbook and VLE, and keep assessment standards explicit. Where possible, spread assessment points across the term so one setback does not dominate the final outcome. Consistent communication matters as much as the adjustment itself.

This supports equity while protecting the credibility of the assessment process.

How do we close communication gaps about criteria?

Students often do not see one clear explanation of how grades are determined, a pattern that mirrors wider communication barriers in computer science education. Instead, they piece together information from slides, the VLE, verbal comments, and late updates, which creates avoidable confusion.

Publish criteria with the brief, walk through them once in a focused session, and maintain one source of truth on the VLE. Track recurring questions, convert them into a living FAQ, and send timed reminders before key milestones.

That reduces ambiguity at the point students need clarity most, not after submissions are already underway.

How do we remove ambiguities in content and assessment?

Vague learning outcomes and loosely aligned tasks create grey areas for both students and markers. In computer science, that quickly shows up as frustration about fairness rather than healthy challenge.

Review module materials and assessment briefs together, not in isolation. Rewrite criteria with concrete descriptors, common error notes, and clearer links to intended outcomes; then use student feedback to test whether the wording actually makes sense. Short staff development sessions on assessment design can remove a lot of repeated confusion.

The result is simpler: fewer disputes, clearer feedback, and stronger confidence in standards.

What should computer science departments change now?

Departments do not need a wholesale redesign to improve trust in marking criteria. A few visible, repeatable changes have the biggest effect.

  • Publish standardised rubrics and annotated exemplars with every assessment brief, then reinforce them through short Q&As.
  • Run regular marker calibration with shared samples and publish short notes on how criteria were applied.
  • Add a brief "how your work was judged" summary to each grade, and schedule feed-forward clinics for high-volume submissions.
  • Align criteria across modules where learning outcomes overlap, and explain any intentional differences at the start.
  • Use the VLE as the single source of truth for criteria, FAQs, updates, and recurring student queries.

These changes make assessment feel more transparent without lowering expectations, which is exactly where trust is won or lost.

How Student Voice Analytics helps you

  • See where students question clarity, consistency, or fairness in marking criteria, then drill down by provider, school, department, programme, cohort, or mode.
  • Compare computer science against like-for-like providers and demographic segments to find where negative sentiment is concentrated.
  • Export anonymised summaries and benchmark tables for programme teams, assessment leads, and quality committees.
  • Track whether rubric changes, calibration work, or feedback redesign actually shift sentiment year over year.

If you need evidence for where marking criteria is breaking trust, Student Voice Analytics gives you a faster way to find it and act.

Request a walkthrough

Book a free Student Voice Analytics demo

See all-comment coverage, sector benchmarks, and reporting designed for OfS quality and NSS requirements.

  • All-comment coverage with HE-tuned taxonomy and sentiment.
  • Versioned outputs with TEF-ready reporting.
  • Benchmarks and BI-ready exports for boards and Senate.

Prefer email? info@studentvoice.ai

UK-hosted · No public LLM APIs · Same-day turnaround
