Students in chemical, process and energy engineering need timely, transparent assessment guidance that shows what good looks like and how to improve next time. Across the UK-wide National Student Survey (NSS), the feedback category trends negative: 57.3% of 27,344 comments are coded as negative (sentiment index −10.2). Within the sector’s subject taxonomy used for benchmarking, students in this subject area concentrate on assessment clarity: the Feedback topic attracts 9.0% of topic share, and sentiment around marking criteria sits at −50.8. These signals shape the practices highlighted below.
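As a rough arithmetic illustration only (the NSS coding scheme and the exact index formula are not specified here, so the snippet below simply converts the cited percentage into an approximate comment count):

```python
# Illustrative arithmetic: turns the cited headline percentage into an
# approximate comment count. The sentiment index itself is a derived
# metric whose exact definition is not given in this article.
total_comments = 27_344   # feedback-category comments (UK-wide NSS)
negative_share = 0.573    # 57.3% coded as negative

negative_comments = round(total_comments * negative_share)
print(f"≈{negative_comments:,} of {total_comments:,} comments coded negative")
# ≈15,668 of 27,344 comments coded negative
```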
What makes chemical, process and energy engineering feedback difficult to get right?
The disciplines’ complexity and the precision required in lab and process work demand feedback that is both technically exact and usable. Students ask for guidance that links theory to practical execution, and the data show that assessment clarity dominates their concerns. Programmes benefit when marking criteria, tolerances, and safety-critical judgements are translated into accessible guidance that still respects the technical standards expected in professional practice.
Communicating feedback consistently is challenging because students’ understanding and application vary across a cohort. Feedback must dissect process design, reaction engineering, and control strategies while staying readable and oriented to the next attempt. Staff strive to balance technical accuracy with straightforward advice, yet marking volume and submission complexity can erode that balance unless teams share exemplars and calibrate their judgements.
How do students judge feedback?
Students judge feedback on whether it arrives in time to use, maps to the assessment brief and marking criteria, and contains actionable feed-forward. In fast‑paced modules where topics compound, late or ambiguous comments stall progress. Across the NSS comments, younger and full‑time cohorts account for most of the negativity, while mature and part‑time students often report better experiences; borrowing staged, dialogic approaches from the latter helps the former. In this subject area, comments highlight the need to show what merit or distinction work looks like in complex calculations, process safety reasoning, and lab technique.
Which types of feedback work best?
Written feedback on lab reports and design coursework anchors specific, criteria‑referenced advice; oral feedback in studios and labs accelerates correction of procedural errors; and peer discussion builds confidence and shared problem‑solving. These modes work best when aligned: concise rubrics, annotated exemplars, and a short feed‑forward note per task reduce ambiguity and cut repeat queries. Calibrated use of each mode supports both conceptual understanding and repeatable technical practice.
How does feedback shape practical and laboratory work?
Immediate verbal guidance during experiments prevents poor habits and reinforces safe, efficient practice. Written notes on data quality, uncertainty, and process choices guide deeper reflection and literature follow‑up. Precision matters, but over‑technical language or delayed comments blunt learning; the most effective lab feedback prioritises a small number of specific, high‑impact changes students can apply in the next session.
How should we use digital and online mechanisms?
Digital platforms extend access and structure: searchable comments, version history, quick polls, and short micro‑rubrics help students act on advice. Online spaces also enable anonymous student voice, informing course teams where explanations or exemplars are missing. A hybrid model works best: keep in‑person clarifications for complex concepts, and use the virtual environment as a single source of truth for assessment briefs, rubrics, exemplars, and turnaround expectations.
What makes providing effective feedback hard?
Workload peaks and complex submissions stretch staff time, and mixed prior knowledge across cohorts necessitates more tailored explanations. Without coordination, students experience timetabling clashes, shifting deadlines, and fragmented communications, which compound dissatisfaction with feedback usefulness. Teams that schedule calibration sprints, share samples, and spot‑check feedback for specificity and actionability show more consistent practice across modules.
What will improve feedback systems now?
Prioritise timeliness and usefulness: publish a feedback turnaround service level for each assessment type, include criteria‑linked feed‑forward on every task, and maintain a bank of annotated exemplars that illustrate why work meets a grade band. Stabilise the operational rhythm by nominating an owner for course communications, issuing a weekly update as the single source of truth, and smoothing workload pinch points to protect turnaround times. Lift practice from settings where tone is stronger by using staged feedback, short “how to use your feedback” guides within modules, and predictable staff availability. Close the loop each term with concise “you said → we did” updates.
How Student Voice Analytics helps you
Student Voice Analytics turns open‑text survey comments into trackable metrics for assessment and feedback in this subject area. It surfaces where tone is weakest, compares like‑for‑like against the wider engineering benchmark, and shows which topics (e.g., marking criteria and assessment methods) drive sentiment. Programme and department leads can drill from provider to module, export concise summaries for staff–student fora, and evidence improvements across cohorts and sites.
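For illustration only, a minimal sketch of the kind of topic-level aggregation described above, not the product’s actual implementation; the field names and sample comments are assumptions:

```python
from collections import defaultdict

# Hypothetical coded comments as (topic, module, sentiment) tuples.
# These values are assumptions for illustration, not the Student Voice
# Analytics schema.
comments = [
    ("marking criteria", "CHE201", "negative"),
    ("marking criteria", "CHE201", "positive"),
    ("assessment methods", "CHE305", "negative"),
    ("marking criteria", "CHE305", "negative"),
]

# Count positive and negative comments per topic.
counts = defaultdict(lambda: {"positive": 0, "negative": 0})
for topic, _module, sentiment in comments:
    counts[topic][sentiment] += 1

# Report a simple net-sentiment figure per topic (positive share minus
# negative share is one common convention; the article's index may differ).
for topic, c in counts.items():
    total = c["positive"] + c["negative"]
    net = 100 * (c["positive"] - c["negative"]) / total
    print(f"{topic}: {total} comments, net sentiment {net:+.1f}")
```

Swapping the grouping key from topic to (module, topic) would give the module-level drill-down the paragraph describes, under the same assumed schema.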
Request a walkthrough
See all-comment coverage, sector benchmarks, and governance packs designed for OfS quality and NSS requirements.