Is feedback in liberal arts studies working for students?

Updated Mar 22, 2026

feedback · liberal arts (non-specific)

Students in liberal arts do not just want more feedback; they want feedback they can use. Across National Student Survey (NSS) open-text comments, analysed using our NSS open-text analysis methodology, feedback attracts 57.3% negative sentiment (index −10.2; terms are defined in our student feedback analysis glossary), and within liberal arts (non-specific) the topic itself carries a sharply negative index of −35.9.
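To make the headline numbers concrete, here is a minimal sketch of one common way a net sentiment index can be derived from classified comments: the share of positive comments minus the share of negative ones. This is an illustrative assumption, not the provider's actual methodology, which is defined in the glossary linked above; the labels and scale here are hypothetical.

```python
from collections import Counter

def net_sentiment_index(labels):
    """Percentage of positive comments minus percentage of negative ones.

    `labels` is a list of per-comment sentiment labels, e.g. "positive",
    "negative", "neutral". Returns a value in [-100, 100].
    """
    counts = Counter(labels)
    total = sum(counts.values())
    if total == 0:
        return 0.0
    pos_pct = counts.get("positive", 0) / total * 100
    neg_pct = counts.get("negative", 0) / total * 100
    return round(pos_pct - neg_pct, 1)

# Illustrative mix: 1 positive, 2 negative, 1 neutral comment
example = ["positive", "negative", "negative", "neutral"]
print(net_sentiment_index(example))  # -25.0
```

Under this toy definition, a deeply negative index such as −35.9 simply means negative comments outnumber positive ones by a wide margin within the topic.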

Students also talk frequently about module choice and variety, which accounts for 16.2% of comments in this subject area. But when feedback arrives late, feels generic, or fails to show the next step, it quickly undermines confidence in the wider course experience. In sector terms, the feedback category captures student views on the usefulness, timeliness and clarity of assessment comments, while the liberal arts CAH grouping covers interdisciplinary programmes within the UK's subject taxonomy. The practical implication is clear: reset turnaround expectations, standardise what good feedback looks like, and make comments useful enough to shape the next piece of work.

Feedback matters in liberal arts because students are often being asked to interpret, argue and synthesise rather than repeat a single right answer. In that context, strong feedback does more than justify a grade. It shows students how to sharpen their thinking, strengthen their evidence and improve the next submission. Student surveys help teams see where that process is breaking down, and text analysis makes those patterns visible at scale so the student voice can inform teaching, assessment and course design.

Why does feedback often feel inadequate?

One of the biggest problems is feedback that identifies weaknesses without showing how to improve them. Vague or overly critical comments can stall progress, especially in subjects where judgement is interpretive. Staff need to reference the marking criteria directly, explain what strong performance would have looked like, and provide feed-forward steps students can use in their next assignment. Short rubrics, annotated exemplars, and two or three concrete next actions make feedback easier to apply, which gives students a clearer route to improvement and makes marking feel fairer.

Why does feedback arrive too late to help?

A common barrier is delay. Many institutions work to a four-week turnaround, but missed deadlines leave students waiting until the next task is already underway. Publishing a feedback service level by assessment type, tracking on-time performance, and telling students when and how comments will arrive reduces uncertainty. When feedback is prompt and usable, students can act on it while the learning is still live, which improves confidence as well as performance.

How can we reduce inconsistent marking?

Variations in marking and feedback across staff quickly erode trust. Even with standardised schemes, interpretive disciplines create room for drift in how criteria are applied. Programme teams can run short calibration sprints with shared samples, add spot checks on feedback quality, and refresh guidance on common liberal arts assessment types. Open conversations around criteria, exemplars at key grade bands, and routine second-marker dialogue create a more consistent experience across modules, so students are less likely to feel that standards change from one assessor to another.

Where are the feedback opportunities across modules?

Courses that rely heavily on final exams or large end-of-module assignments give students too few chances to adjust. Without interim tasks, they receive advice only after the moment to use it has passed. Staged assignments, formative assessment activities such as mini-assessments and reflective seminars, and short dialogic feedback sessions create iterative touchpoints across the term. Those touchpoints help students test their approach earlier, reduce end-point surprises, and make improvement part of the module rather than an afterthought.

What makes guidance in feedback concrete and actionable?

Students often receive comments that name a problem but do not explain the next move. A stronger model links remarks to marking criteria, identifies two strengths and two priorities, and includes a short improvement plan with signposts to relevant resources. Checklist-style rubrics and annotated exemplars reduce ambiguity, while clear verbs such as "define", "compare", or "support with evidence" tell students what to do next. That turns feedback from explanation into a plan.

How should staff handle disagreements with assessment?

Disagreement with grades is common in interpretive disciplines, and silence after release rarely helps. Transparency about criteria, standards, and review routes makes the process easier to trust. Short meetings that walk through the brief, the criteria, and relevant exemplars can resolve confusion before frustration hardens into formal challenge. An accessible queries route and clear explanations of review processes reassure students that concerns will be handled consistently and fairly.

What does a valued feedback experience look like?

Students value detailed comments that engage with their actual argument, not stock phrases that could fit any essay. The strongest feedback balances recognition of what worked with precise advice on what to strengthen next, and it shows how those points connect to the criteria. When comments feel tailored, timely, and clearly linked to improvement, students are more likely to use them and more likely to see assessment as part of an ongoing learning cycle.

How does feedback in liberal arts compare with other courses?

Subjects with continuous assessment often provide more frequent and easier-to-apply feedback because students encounter smaller tasks and tighter cycles of response. In liberal arts, where interpretation and critical thinking are central, comments often address broader ideas and can feel harder to operationalise without exemplars or criteria mapping. The goal is not to copy another discipline's model. It is to design feedback that respects the interpretive nature of liberal arts while still being timely, specific, and easy to act on.

Why do psychology-style modules draw criticism for feedback?

Students often describe feedback in these modules as brief, generic, or disconnected from the work they submitted. That can make comments feel procedural rather than developmental. Training staff to give concrete, criteria-referenced advice, alongside regular opportunities for students to discuss what comments mean in practice, improves perceived fairness and usefulness. It also makes it more likely that students will apply the feedback instead of ignoring it.

What should liberal arts programmes change next?

Start with the changes students will notice fastest and that teams can sustain.

  • Define a feedback service level by assessment type, then monitor and publish on-time rates.
  • Require structured feed-forward in every return, supported by rubrics and annotated exemplars.
  • Calibrate marking with shared samples and short quality checks for specificity, actionability, and alignment to criteria.
  • Add more formative checkpoints and brief dialogic feedback moments within modules.
  • Close the loop each term with concise "you said, we did" updates so students can see what has changed.
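The first recommendation, monitoring on-time rates against a published service level, can be sketched in a few lines. The service levels, field names, and data shape below are hypothetical placeholders; a real implementation would pull from the institution's assessment records.

```python
# Hypothetical service levels in calendar days, by assessment type
SERVICE_LEVELS = {"essay": 28, "presentation": 14}

def on_time_rate(returns, assessment_type):
    """Percentage of feedback returns made within the published service level.

    `returns` is a list of dicts with "type" and "days_to_return" keys
    (an assumed record shape). Returns None if no matching records exist.
    """
    limit = SERVICE_LEVELS[assessment_type]
    relevant = [r for r in returns if r["type"] == assessment_type]
    if not relevant:
        return None
    on_time = sum(1 for r in relevant if r["days_to_return"] <= limit)
    return round(on_time / len(relevant) * 100, 1)

records = [
    {"type": "essay", "days_to_return": 21},
    {"type": "essay", "days_to_return": 35},  # missed the 28-day target
]
print(on_time_rate(records, "essay"))  # 50.0
```

Publishing a figure like this per assessment type, term by term, gives students the visibility the bullet points above call for.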

These steps respond directly to the negative tone around feedback in liberal arts and give students a clearer, more usable assessment experience.

How Student Voice Analytics helps you

  • Turns NSS open-text into trackable metrics for feedback and related assessment themes across liberal arts, with drill-downs by cohort and programme.
  • Benchmarks tone and topics against CAH areas and demographics, so you can see where sentiment is weakest and prioritise the right fixes first.
  • Surfaces recurring action patterns, including structured feed-forward, exemplars and calibration, with export-ready summaries for module teams and boards.
  • Shows change over time with like-for-like comparisons, so you can evidence whether improvements to timing, clarity, and consistency are actually working.

Explore Student Voice Analytics if you need a faster way to see where feedback is breaking down across liberal arts provision.

Request a walkthrough

Book a free Student Voice Analytics demo

See all-comment coverage, sector benchmarks, and reporting designed for OfS quality and NSS requirements.

  • All-comment coverage with HE-tuned taxonomy and sentiment.
  • Versioned outputs with TEF-ready reporting.
  • Benchmarks and BI-ready exports for boards and Senate.

Prefer email? info@studentvoice.ai

UK-hosted · No public LLM APIs · Same-day turnaround

The Student Voice Weekly

Research, regulation, and insight on student voice. Every Friday.

© Student Voice Systems Limited, All rights reserved.