What do computer science students need from feedback?
By Student Voice Analytics
They need consistent, timely, and actionable comments aligned to transparent marking criteria and calibrated across modules. In the National Student Survey (NSS), the feedback theme consolidates what students say about timeliness, usefulness, and clarity; across 27,344 comments it registers 57.3% negative with a sentiment index of −10.2. Within computer science (CAH11‑01‑01), feedback remains one of the most‑discussed topics (8.5% by share) and trends negative (index −27.8). These sector patterns explain why students in this discipline keep returning to consistency, turnaround, and marking standards, and they shape the priorities in this case study.
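As a rough illustration of how figures like these are derived, the sketch below computes a topic's share of comments and a simple sentiment index from labelled open text. The data are invented, and the index definition (percentage positive minus percentage negative) is an assumption for illustration; it is not necessarily the method behind the numbers above.

```python
# Minimal sketch: topic share and sentiment index from labelled comments.
# The index formula (% positive minus % negative) is an assumed definition.
from collections import Counter

# Hypothetical labelled comments as (topic, sentiment) pairs.
comments = [
    ("feedback", "negative"),
    ("feedback", "positive"),
    ("feedback", "negative"),
    ("teaching", "positive"),
]

topic = "feedback"
topic_sentiments = [s for t, s in comments if t == topic]
counts = Counter(topic_sentiments)

share = 100 * len(topic_sentiments) / len(comments)
pct_pos = 100 * counts["positive"] / len(topic_sentiments)
pct_neg = 100 * counts["negative"] / len(topic_sentiments)
sentiment_index = pct_pos - pct_neg  # assumed definition

print(f"{topic}: share {share:.1f}%, negative {pct_neg:.1f}%, index {sentiment_index:+.1f}")
```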
Why does inconsistency across modules undermine learning?
A significant issue within computer science education is the inconsistency of feedback across modules. Different staff members apply divergent expectations and standards when grading assignments, which confuses students and can affect learning outcomes and confidence. Some instructors provide detailed comments that guide improvement in coding and problem‑solving; others offer minimal notes with limited utility. Consistent criteria and feedback formats enable students to consolidate complex concepts and track progress. Programmes can reduce variability by running brief calibration exercises on sample scripts, using concise rubrics with annotated exemplars, and checking alignment between feedback and marking criteria. Staff development that prioritises shared standards and a common vocabulary helps sustain fairness and constructive learning.
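Calibration is easier to act on when agreement is measured. The sketch below flags markers whose marks on shared sample scripts drift outside an agreed tolerance; the marks and the five-mark tolerance are hypothetical.

```python
# Illustrative check of marker agreement on shared sample scripts before a
# calibration meeting. All marks and the tolerance below are hypothetical.
from statistics import mean

# marks[script] = {marker: mark out of 100}
marks = {
    "script_A": {"marker_1": 62, "marker_2": 68, "marker_3": 55},
    "script_B": {"marker_1": 74, "marker_2": 75, "marker_3": 71},
}

TOLERANCE = 5  # assumed acceptable spread around the mean, in marks

for script, by_marker in marks.items():
    avg = mean(by_marker.values())
    outliers = {m: v for m, v in by_marker.items() if abs(v - avg) > TOLERANCE}
    status = "discuss at calibration" if outliers else "within tolerance"
    print(f"{script}: mean {avg:.1f}, outliers {outliers or 'none'} -> {status}")
```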
Why does timeliness matter so much in computer science?
Delayed feedback deprives students of the chance to iterate while work is still live in their module sequence, which matters in a fast‑moving field. Quick turnaround supports application of corrections to subsequent projects, whereas delays reduce motivation and break the learning loop. Institutions should publish and track a service‑level agreement for feedback by assessment type, then report on‑time rates to students. Where feasible, automate administrative steps (e.g., release of marking criteria, structured feedback templates) and streamline moderation to improve turnaround without compromising academic judgement.
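As a sketch of what tracking such an agreement could look like, the snippet below computes on-time return rates by assessment type. The calendar-day targets and the records are hypothetical.

```python
# Sketch of reporting on-time feedback rates against a published SLA.
# SLA targets and submission/return records are hypothetical.
from datetime import date

SLA_DAYS = {"coursework": 20, "exam": 25}  # assumed calendar-day targets

# (assessment type, submission date, feedback return date)
records = [
    ("coursework", date(2024, 3, 1), date(2024, 3, 18)),
    ("coursework", date(2024, 3, 1), date(2024, 3, 25)),
    ("exam", date(2024, 5, 20), date(2024, 6, 10)),
]

for a_type, target in SLA_DAYS.items():
    relevant = [(sub, ret) for t, sub, ret in records if t == a_type]
    on_time = sum((ret - sub).days <= target for sub, ret in relevant)
    rate = 100 * on_time / len(relevant)
    print(f"{a_type}: {on_time}/{len(relevant)} on time ({rate:.0f}%)")
```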
How can we make feedback more actionable?
Students need comments that specify what to do next, not just what went wrong. Feedback should function as feed‑forward, giving explicit steps to enhance code quality, testing strategies, analytical justification, and referencing of sources or libraries. Staff can use brief checklists tied to the assessment brief, exemplars with annotations, and short reflective prompts for students to plan how they will use feedback. Spot checks on specificity and utility help maintain standards across a cohort and surface areas for refresher training.
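One low-cost spot check is to scan a sample of comments for forward-looking language. The keyword heuristic below is hypothetical and illustrative, not a validated instrument; real reviews would pair it with human judgement.

```python
# Hypothetical heuristic spot check: flag feedback comments containing no
# forward-looking (feed-forward) language. The cue list is illustrative only.
FEED_FORWARD_CUES = ("next time", "to improve", "consider", "try", "you could")

comments = [
    "Good structure. To improve, add unit tests for the edge cases.",
    "Well done.",
    "Marks lost on complexity analysis.",
]

for comment in comments:
    actionable = any(cue in comment.lower() for cue in FEED_FORWARD_CUES)
    if not actionable:
        print(f"Flag for review (no feed-forward cue found): {comment!r}")
```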
What does fairness and transparency in grading look like here?
Students want to understand how marks are derived and how feedback connects to criteria. Transparent grading links assessment briefs, marking criteria, and the comments students receive. Text analysis tools can support consistency by mapping feedback to criteria and flagging gaps, but staff should guard against reducing complex work to simple metrics. A balanced approach combines calibrated marking, criteria‑referenced comments, and periodic review of samples to ensure nuance is retained while students can see the rationale for judgements.
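To illustrate the kind of criteria-coverage check such a tool might run, the sketch below matches one comment against rubric keywords and flags criteria it never addresses. The rubric, keywords, and substring matching are deliberately simplified assumptions; real tools use far richer matching.

```python
# Sketch of mapping a feedback comment to marking criteria via keyword
# matching and flagging unaddressed criteria. Rubric and keywords are
# hypothetical, and substring matching stands in for real text analysis.
criteria_keywords = {
    "code quality": ["readability", "naming", "structure"],
    "testing": ["test", "coverage", "edge case"],
    "analysis": ["complexity", "justification", "trade-off"],
}

feedback = "Good naming and structure; add tests for edge cases."

covered = {
    criterion
    for criterion, keywords in criteria_keywords.items()
    if any(k in feedback.lower() for k in keywords)
}
gaps = set(criteria_keywords) - covered

print(f"Covered: {sorted(covered)}")
print(f"Gaps to address: {sorted(gaps)}")
```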
How do staff–student communications shape feedback?
Communication drives engagement with assessment. Irregular updates and unclear ownership of changes undermine students’ ability to act on feedback. Programmes can stabilise communications with a single source of truth for changes, weekly “what changed and why” posts, and brief “you said → we did” updates that show how student voice shapes feedback formats and timetabling. Survey results and pulse checks provide rapid evidence on whether students can find, interpret, and use their comments.
How does feedback influence perceived value for money?
Students weigh the usefulness and timeliness of feedback when judging value for money. Specific, prompt, criteria‑aligned comments signal that tuition fees support genuine learning and progression; vague or late feedback has the opposite effect. Some automation can lift consistency, but computer science students also value dialogue—office hours, quick clarifications on marking criteria, or short debriefs—so programmes should balance efficiency with interaction. Practices common in mature and part‑time provision (staged feedback points, structured checklists) can be adapted for large full‑time cohorts to improve perceived value.
What is the impact of external factors on feedback continuity?
Strikes and pandemic‑driven shifts to remote delivery disrupted feedback loops, with consequences for timeliness and quality. Computer science modules often rely on iterative assessment and up‑to‑date tooling; interruptions therefore have outsized effects. Contingency planning should include alternative feedback routes (asynchronous audio/text, group debriefs with exemplars), clear escalation points for queries, and transparent timelines for any catch‑up activity, so feedback continuity is maintained when usual patterns are interrupted.
How Student Voice Analytics helps you
- Turns NSS open‑text into trackable metrics for feedback, showing sentiment over time and by cohort, mode, disability, domicile, and subject, so you can focus where tone is weakest.
- Enables drill‑downs from provider to school/department/programme, with concise, anonymised summaries for module teams and boards.
- Provides like‑for‑like comparisons across CAH areas and demographics for Computer Science, highlighting topics such as feedback and marking criteria where sentiment trends negative, and evidencing improvement with export‑ready outputs.
Request a walkthrough
Book a Student Voice Analytics demo
See all-comment coverage, sector benchmarks, and governance packs designed for OfS quality and NSS requirements.
- All-comment coverage with HE-tuned taxonomy and sentiment.
- Versioned outputs with TEF-ready governance packs.
- Benchmarks and BI-ready exports for boards and Senate.