Jisc Online Surveys switches Insights to median response time, and why it matters for student feedback surveys

Updated Mar 11, 2026

On 6 March 2026, Jisc updated the Online Surveys change log for version 3.34.2. The release notes say Jisc Online Surveys now shows median response time on the Insights page instead of the mean, and adds new standalone question types for single-choice and multi-choice Choice and Grid questions. At Student Voice AI, we think this matters because small survey-platform changes can affect how Student Experience teams judge survey burden, spot fieldwork problems, and decide whether student feedback data is robust enough to act on.

What has changed in Jisc Online Surveys Insights

The 6 March release contains three user-facing changes. Jisc says it has added standalone question types for single-choice and multi-choice Choice and Grid questions, redesigned the Add item menu to accommodate them, and changed the Average response time on the Insights page to display the median instead of the mean. It also changes the response-time display format to HH MM SS.
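The HH MM SS display is purely a presentation change. As a quick illustration only (this is our own sketch, not Jisc's implementation), a duration held in seconds maps to that format like so:

```python
def to_hh_mm_ss(total_seconds: int) -> str:
    """Format a duration in whole seconds as HH MM SS."""
    hours, remainder = divmod(total_seconds, 3600)
    minutes, seconds = divmod(remainder, 60)
    return f"{hours:02d} {minutes:02d} {seconds:02d}"

# A 12-minute-34-second response renders as "00 12 34".
print(to_hh_mm_ss(754))
```

The underlying value is unchanged; only the rendering differs, so any downstream analysis should keep working from raw seconds rather than the displayed string.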

"Changed the Average response time on the Insights page to display the median instead of the mean."

This sits inside Jisc's wider Insights feature, which Jisc introduced on 19 November 2025 as a way to quickly see how a survey is performing. The scope here is operational, not regulatory. It does not change NSS, PTES, PRES, UKES, or OfS guidance. It affects institutions using Jisc Online Surveys for local module evaluations, pulse surveys, service feedback, or other student experience questionnaires.

Jisc does not explain the rationale for the switch from mean to median in the release note. In practice, the likely effect is a more stable picture of response behaviour, because median response time is less sensitive to a small number of unusually long sessions, paused responses, or abandoned survey windows than the mean.
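The outlier effect is easy to see with made-up numbers. In this sketch (hypothetical completion times, not Jisc data), one respondent leaves the survey tab open for two hours, and the mean jumps while the median barely notices:

```python
import statistics

# Hypothetical completion times in seconds for one survey wave.
# Six typical respondents, plus one who left the tab open for two hours.
times = [240, 300, 270, 255, 310, 285, 7200]

mean_time = statistics.mean(times)      # dragged up to roughly 21 minutes
median_time = statistics.median(times)  # stays at 285s, the typical respondent

print(f"mean:   {mean_time:.0f}s")
print(f"median: {median_time:.0f}s")
```

With the mean, this survey would look far more burdensome than it is for almost every respondent; the median reports what a typical completion actually took.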

What this means for institutions

The first implication is that survey teams should revisit how they interpret platform analytics. If you use response-time metrics to judge whether a questionnaire is too long or confusing, the median is usually a better operational signal than the mean. It will not eliminate poor survey design, but it should reduce the risk that a few outlier responses make a reasonable instrument look more burdensome than it is.

The second implication is comparability. If your institution tracks survey performance across waves, schools, or question sets, log the release date of the platform change and avoid comparing pre-March and post-March response-time figures without context. That is the same basic discipline we recommend in our NSS open-text analysis methodology: keep version control clear so that method changes do not get mistaken for experience changes.
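One lightweight way to keep that discipline is to record metric-affecting platform changes in code rather than in memory. The sketch below is our own hypothetical method log, not a Jisc feature: it tags each observation date with a metric version so that figures straddling the 6 March change are never compared blindly.

```python
from datetime import date

# Hypothetical method log: platform changes that affect metric comparability.
METRIC_CHANGES = {
    "response_time": [
        # (effective_date, note)
        (date(2026, 3, 6), "Insights switched mean -> median (Online Surveys 3.34.2)"),
    ],
}

def metric_version(metric: str, observed: date) -> int:
    """Count how many logged changes precede an observation,
    so figures with different versions are flagged, not merged."""
    changes = METRIC_CHANGES.get(metric, [])
    return sum(1 for effective, _note in changes if observed >= effective)

print(metric_version("response_time", date(2026, 2, 1)))  # pre-change wave: 0
print(metric_version("response_time", date(2026, 4, 1)))  # post-change wave: 1
```

Grouping or filtering by this version number before trend analysis makes the platform change visible in the data itself, which is the point of a method log.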

The third implication is survey design governance. The new standalone question types are a prompt to review shared templates, especially if multiple teams build module evaluations or service surveys. A clearer choice between single-answer and multi-answer formats should help reduce setup mistakes, but only if local guidance is updated as well. For a related methodological lens, our summaries on what gets students to fill in teaching evaluations and non-response bias in student evaluations are useful reminders that response data is only valuable when the instrument is both usable and representative.

How student feedback analysis connects

At Student Voice AI, we see a consistent pattern: when a survey is too long, badly structured, or hard to complete on mobile, the quality of open-text comments tends to fall as well. Comments become shorter, thinner, or vanish entirely. A platform metric such as median response time is not a substitute for response-rate monitoring or text analysis, but it can be a useful early warning that a survey needs simplifying.

The practical next step is to keep the collection and analysis loop joined up. Use platform analytics to refine the instrument, then analyse open comments with a governed method so you can see what students are actually saying and whether changes improved the evidence you collect. Our student comment analysis governance checklist is a good starting point if different teams across the institution manage surveys differently.

FAQ

Q: What should institutions using Jisc Online Surveys do now?

A: Review any guidance or templates used for student surveys, note the 6 March 2026 release in your method log, and brief local survey owners that Insights now reports median response time. If you compare survey burden across waves, keep that change in view.

Q: When does this change apply, and does it affect national surveys such as NSS or PRES?

A: The change was recorded in Jisc Online Surveys version 3.34.2 on 6 March 2026. It affects institutions using Jisc Online Surveys, but it does not change the methodology of NSS, PTES, PRES, UKES, or other national surveys.

Q: What is the broader implication for student voice practice?

A: The update is a reminder that student voice quality depends on survey operations as well as survey questions. Better instrument design, cleaner monitoring, and consistent text analysis all help institutions collect feedback that is easier to trust and act on.

References

Jisc Online Surveys, "Change log", published 6 March 2026. https://onlinesurveys.jisc.ac.uk/change-log/

Jisc Online Surveys, "Introducing Insights: see how your survey is performing", published 19 November 2025.

