Do current assessment methods meet computer science students’ needs?

By Student Voice Analytics
assessment methods · computer science

Mostly not: student evidence points to a need for unambiguous methods, transparent marking, and faster, more developmental feedback. In the National Student Survey (NSS), the assessment methods lens captures how students describe being assessed across UK programmes, showing 28.0% positive and 66.2% negative sentiment across 11,318 comments; computing sits among the most critical subject areas (index −24.5). Within sector benchmarking, computer science stands out for opaque standards, with marking criteria sentiment at −47.6. These patterns frame the changes students prioritise and show where staff can intervene most effectively.

What works well in current assessment methods?

Students value transparency and structured support. Clear briefs, well-defined objectives and exemplars help them target effort. Revision workshops and sample questions provide practical rehearsal. Automated feedback on code enables rapid iteration and helps identify errors early. Continuous assessment sustains engagement and allows staff to evaluate progress more holistically than a high-stakes exam, integrating assessment with learning to build durable understanding.
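
To make the automated-feedback point concrete, here is a minimal sketch in Python of a formative check harness: each check pairs an expected result with a hint, so a failing submission gets direction rather than just a mark. The exercise name sort_records, the example implementation, the checks and the hints are all illustrative assumptions, not details from any particular course.

    def sort_records(values):
        """Example stand-in for a student's submission: a simple selection sort."""
        result = list(values)
        for i in range(len(result)):
            smallest = min(range(i, len(result)), key=result.__getitem__)
            result[i], result[smallest] = result[smallest], result[i]
        return result

    CHECKS = [
        # (description, input, expected output, hint shown on failure)
        ("sorts a shuffled list", [3, 1, 2], [1, 2, 3],
         "Compare elements and swap when they are out of order."),
        ("keeps duplicate values", [2, 2, 1], [1, 2, 2],
         "Equal values should be kept, not dropped."),
        ("handles the empty list", [], [],
         "Guard against empty input before indexing."),
    ]

    def run_feedback(func):
        """Run every check; print pass/fail plus a hint when a check fails."""
        for description, given, expected, hint in CHECKS:
            try:
                result = func(list(given))
                passed = result == expected
            except Exception as exc:  # surface crashes as feedback, not a stack trace
                result, passed = f"raised {type(exc).__name__}", False
            print(f"{'PASS' if passed else 'FAIL'}: {description}")
            if not passed:
                print(f"  expected {expected}, got {result}")
                print(f"  hint: {hint}")

    if __name__ == "__main__":
        run_feedback(sort_records)

In practice such a harness would import the student's submission and run on each upload, which is what lets feedback arrive quickly enough to shape the next attempt.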

Where do assignment clarity and feedback fall short?

Ambiguous requirements and delayed or generic comments undermine learning. Complex programming tasks require precise instructions and explicit marking criteria; when these are missing, students feel judged against shifting standards. Provide checklist-style rubrics, annotated exemplars at grade boundaries, and feedback that arrives in time to influence the next task.

How do subjectivity and inconsistent information affect students?

Subjective judgements on projects and conflicting guidance between lecturers create uncertainty and erode confidence. Standardise the framework across modules: share criteria, run short marker calibration using anonymised exemplars, and record brief moderation notes. Consistency reduces variance, improves perceived fairness, and helps students plan their approach.

How do workload and support shape the experience?

Heavy workloads combined with limited access to staff depress performance and wellbeing. Rigour matters, but it must be predictable and supported. Coordinate a programme-wide assessment calendar to avoid deadline clusters, schedule labs and office hours reliably, and expand teaching assistant and peer-support capacity so help and feedback remain timely.

How do new course structures change assessment?

Greater use of coding simulations, portfolios and other practical tasks aligns assessment with employment outcomes, but students need time to adapt. Build short orientations on formats and academic integrity, release briefs early, and include formative checkpoints so students can adjust before summative assessments. Staff benefit from aligning teaching approaches and feedback rhythms to these structures.

What is the right balance between machine and human marking?

Automated marking provides speed and consistency for syntax and unit tests, but students want humans to evaluate design choices, trade-offs and reasoning. A hybrid approach works best: automate routine checks, add human judgement for architecture and style, and publish concise debriefs to explain boundary decisions.
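
As a concrete illustration of that split, the sketch below shows a marking record that keeps the automated component (unit tests and style warnings) separate from criteria that are always routed to a human marker. The field names, the 80/20 weighting and the criteria list are illustrative assumptions, not a prescribed scheme.

    from dataclasses import dataclass, field

    HUMAN_CRITERIA = ("design", "trade-offs", "readability")  # judged by a person

    @dataclass
    class MarkingRecord:
        student_id: str
        tests_passed: int      # from the automated unit-test run
        tests_total: int
        style_warnings: int    # from the automated style/lint pass
        human_marks: dict = field(default_factory=dict)  # filled in during marking

        def automated_component(self):
            """Routine checks only: correctness weighted 80%, style 20% (assumed)."""
            correctness = self.tests_passed / self.tests_total if self.tests_total else 0.0
            style = max(0.0, 1.0 - 0.1 * self.style_warnings)
            return round(100 * (0.8 * correctness + 0.2 * style), 1)

        def awaiting_human_judgement(self):
            """Criteria automation cannot assess; always routed to a marker."""
            return [c for c in HUMAN_CRITERIA if c not in self.human_marks]

    record = MarkingRecord("s1234567", tests_passed=14, tests_total=16, style_warnings=3)
    print("Automated component:", record.automated_component())          # 84.0
    print("Awaiting human judgement on:", record.awaiting_human_judgement())

Publishing the formula behind the automated component, alongside a short human debrief on the judged criteria, is one way to make boundary decisions legible to students.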

What should we change next?

  • Prioritise unambiguous methods by issuing a one-page assessment method brief per task covering purpose, weighting, allowed resources and common pitfalls.
  • Calibrate markers with 2–3 exemplars and record moderation decisions; sample double-marking where variance is likely.
  • Reduce friction for diverse cohorts by offering predictable submission windows, asynchronous alternatives for oral elements, early brief release and accessible formats, plus short orientations on assessment conventions with mini practice tasks for students domiciled outside the UK.
  • Coordinate at programme level through a single assessment calendar that avoids duplication of methods.
  • Close the loop with a quick post-assessment debrief on common strengths and issues before individual marks, followed by actionable feed-forward.

How Student Voice Analytics helps you

Student Voice Analytics pinpoints where assessment method issues concentrate in computer science by cohort, mode, domicile and disability; tracks sentiment over time for assessment methods, marking criteria and feedback; and surfaces concise, anonymised summaries for programme and module teams. It supports like-for-like comparisons by subject mix and cohort profile, with export-ready outputs for boards and quality reviews.

Request a walkthrough

Book a Student Voice Analytics demo

See all-comment coverage, sector benchmarks, and governance packs designed for OfS quality and NSS requirements.

  • All-comment coverage with HE-tuned taxonomy and sentiment.
  • Versioned outputs with TEF-ready governance packs.
  • Benchmarks and BI-ready exports for boards and Senate.
