A validated employability scale can sharpen student surveys and careers support

Updated Mar 20, 2026

At Student Voice AI, we often see employability show up in student comments long before it appears in graduate destination data. Students tell universities when careers support feels generic, when placements seem out of reach, or when a course is not building confidence for the workplace. That is why Tran Le Huu Nghia, Pham Lan Anh and Nguyen Thi My Duyen's paper in Higher Education, “Measuring students’ self-perceived employability capital attainment: the development and validation of a scale”, matters. For UK universities using student feedback to improve the student experience, it asks a practical question: how do you measure employability development while students are still studying, rather than waiting for late outcome data?

Context and research question

Universities have spent years building employability into curricula, placements, careers services, and extracurricular activity. The problem is that evaluation often lags behind delivery. Graduate outcomes data arrives late, and local surveys frequently rely on a few improvised questions that do not add up to a robust picture of what students think they have actually developed.

This paper addresses that gap by constructing and validating the Self-Perceived Employability Capital Attainment (SPECA) scale. The core research problem is straightforward: if institutions want to tailor employability provision and help students identify areas for growth, they need a defensible way to assess attainment, not just participation or destination.

Key findings

The paper begins from a measurement problem, not from another employability intervention. The authors note that universities already support employability through many initiatives, but still lack strong instruments for assessing what students believe they have gained from those efforts.

"there remains a lack of robust tools to assess students’ attainment of employability capital"

The authors respond by building a 20-item scale across three studies involving 1,719 participants. That matters because the paper is not presenting a single pilot questionnaire. It reports a staged validation process designed to show that the measure is usable, coherent, and more rigorous than ad hoc employability items added to a survey at the last minute.

Content validity was built deliberately through both evidence and stakeholder input. The abstract reports a systematic literature review plus consultation with career experts and students. That is an important design choice. It means the scale is not only statistically tested, but also grounded in how employability is discussed in research and understood by the people expected to act on the results.

The psychometric testing is broader than many institutional surveys ever receive. Exploratory and confirmatory factor analyses supported the intended structure, and the paper also reports evidence of reliability, convergent validity, and discriminant validity. In practical terms, that suggests the scale is doing more than capturing a vague sense of confidence or satisfaction. It is attempting to measure a distinct construct with internal coherence.
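The paper does not publish its computation code, but a minimal sketch can show what one of these reliability checks looks like in practice. The snippet below computes Cronbach's alpha, a standard internal-consistency statistic, on simulated Likert responses to a hypothetical 20-item scale; the data, sample size, and response model are illustrative assumptions, not the authors' dataset.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of item scores."""
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - sum_item_var / total_var)

# Simulated 1-5 Likert responses for 300 students on a 20-item scale,
# driven by one shared latent factor plus item-level noise (illustrative only).
rng = np.random.default_rng(0)
latent = rng.normal(size=(300, 1))
responses = np.clip(np.round(3 + latent + rng.normal(scale=0.8, size=(300, 20))), 1, 5)

print(f"alpha = {cronbach_alpha(responses):.2f}")
```

A value above roughly 0.7 is conventionally read as acceptable internal consistency; an institution piloting the scale locally would run exactly this kind of check on its own responses before trusting cohort comparisons.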

The paper also tests whether the instrument holds up beyond one-off administration. By examining nomological validity and temporal stability, the authors move the discussion from “does this look sensible?” to “does this behave like a real measurement instrument over time and in relation to other variables?” For Student Experience and Careers teams, that is the difference between an interesting questionnaire and a tool that may support longitudinal tracking.
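Temporal stability is usually assessed as a test-retest correlation: administer the scale twice to the same students and check that scores at time two track scores at time one. The sketch below illustrates the idea on simulated data; the sample size, noise levels, and the assumption of a stable underlying attainment level are all hypothetical, not figures from the paper.

```python
import numpy as np

# Simulated scores for 200 students measured twice (illustrative assumptions).
rng = np.random.default_rng(1)
true_level = rng.normal(3.5, 0.5, size=200)            # stable underlying attainment
time1 = true_level + rng.normal(scale=0.2, size=200)   # first administration
time2 = true_level + rng.normal(scale=0.2, size=200)   # second administration

# Pearson correlation between the two administrations.
r = np.corrcoef(time1, time2)[0, 1]
print(f"test-retest r = {r:.2f}")
```

A high correlation suggests the instrument measures something stable rather than mood on the day, which is the property that makes longitudinal tracking defensible.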

Practical implications

For UK higher education, the first implication is that employability should be measured earlier and more directly than graduate outcomes alone allow. Graduate Outcomes data is useful, but it is a delayed signal. If institutions want to know whether curriculum changes, placement reforms, or careers interventions are working, they need a way to assess perceived development during the course.

Second, a validated scale is most useful when paired with open-text student voice. A measure like SPECA can show where confidence or capability feels weaker, but it cannot explain why. That is where qualitative feedback becomes operationally valuable. Open comments can reveal whether the problem sits in placement access, employer links, unclear skills mapping, weak feedback, or uneven careers support across cohorts. This is a natural fit for Student Voice Analytics: structured text analysis can group employability-related comments at scale, then help institutions compare those themes against survey results.

Third, segmentation matters. An institutional average can hide major differences between commuter and residential students, between widening participation groups, between disciplines, or between students with and without placement experience. If a university adopts a stronger employability measure, it should plan from the outset to review results by cohort and connect them to open-text evidence.
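To make the segmentation point concrete, here is a minimal sketch of how an institutional average can mask cohort differences. The cohort labels and scores below are invented for illustration; any real analysis would use the institution's own survey data.

```python
from statistics import mean
from collections import defaultdict

# Hypothetical per-student mean scale scores, tagged by cohort (illustrative only).
records = [
    ("commuter", 3.1), ("commuter", 3.3),
    ("residential", 3.9), ("residential", 4.1),
    ("placement", 4.4), ("placement", 4.2),
]

# The headline average looks healthy on its own...
overall = mean(score for _, score in records)
print(f"overall mean: {overall:.2f}")

# ...but grouping by cohort exposes the spread the average hides.
by_cohort = defaultdict(list)
for cohort, score in records:
    by_cohort[cohort].append(score)

for cohort, scores in sorted(by_cohort.items()):
    print(f"{cohort}: mean={mean(scores):.2f}, n={len(scores)}")
```

The same grouping logic extends naturally to widening participation flags, disciplines, or placement status, and is the point at which open-text themes can be joined to the numbers.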

Finally, institutions should treat validated instruments carefully rather than as plug-and-play templates. The paper is valuable precisely because it takes measurement seriously. UK teams should do the same: pilot locally, review wording, and check how well the instrument fits their own context before treating results as board-level evidence.

FAQ

Q: How can a university use a scale like SPECA without overloading students with another long survey?

A: The best route is usually integration, not addition. A university could pilot a small employability block within an existing student experience, careers, or placement survey, then compare the scaled responses with one or two open-text prompts about what has most helped or hindered students' work-readiness. That keeps the burden manageable while still producing evidence that teams can act on.

Q: What should institutions be cautious about when adapting a validated employability scale?

A: A validated instrument is a strong starting point, not a licence to stop checking. Institutions should review wording for local relevance, pilot the scale with their own students, and be careful about comparing scores across very different disciplines or cohorts without further testing. The paper's value lies in its rigour, and that rigour should continue in local use.

Q: What is the broader implication for student voice work on employability?

A: It suggests that employability should be treated as part of the student experience, not just as an outcome after graduation. When students comment on placements, feedback, employer contact, or career guidance, they are often describing the conditions that shape employability capital in practice. Combining robust survey instruments with systematic comment analysis gives universities a much clearer basis for improvement.

References

[Paper Source]: Tran Le Huu Nghia, Pham Lan Anh and Nguyen Thi My Duyen, “Measuring students’ self-perceived employability capital attainment: the development and validation of a scale”, Higher Education. DOI: 10.1007/s10734-026-01646-w

Request a walkthrough

Book a free Student Voice Analytics demo

See all-comment coverage, sector benchmarks, and reporting designed for OfS quality and NSS requirements.

  • All-comment coverage with HE-tuned taxonomy and sentiment.
  • Versioned outputs with TEF-ready reporting.
  • Benchmarks and BI-ready exports for boards and Senate.

Prefer email? info@studentvoice.ai

UK-hosted · No public LLM APIs · Same-day turnaround
