ISPOR Glasgow 2025: Conference Overview

The CLINIGMA® team attended ISPOR Europe in Glasgow from 9 to 12 November 2025. Over three days, we joined sessions exploring patient evidence across drug development, regulatory decisions, and Health Technology Assessment (HTA) evaluations—from personalized endpoints and AI in qualitative research to the evolving regulatory landscape. Nearly 40% of the conference content featured patient input, highlighting its growing impact. Here's a summary of the sessions we attended each day:
Day 1: Measuring what matters to patients
The session on personalized endpoints and patient-centric approaches in HTA highlighted a persistent gap in clinical research: We often measure what's convenient rather than what genuinely matters to patients.
The shift toward personalized endpoints acknowledges a basic reality—patients with the same diagnosis don't experience the same disease burden or value the same outcomes. A clinically meaningful improvement for one patient may be inadequate for another with different priorities and life circumstances.
Our perspective: In-trial patient interviews offer crucial insights. Clinical trial endpoints and Clinical Outcome Assessments (COAs) may show changes—but how relevant are those changes to patients? Patients can describe in their own words the changes that substantially affect their daily lives, e.g., returning to work, sleeping through the night, or managing daily tasks that had become impossible. These narratives reveal how even small improvements can translate into meaningful life impacts, providing context that standardized clinical scales cannot fully convey. This is relevant not only to regulatory bodies but also to HTA bodies.
For HTA bodies increasingly focused on patient-relevant evidence, understanding meaningful change requires asking patients directly—not just measuring what we think matters, but learning what actually does.
Patient interviews are thus an important tool for providing additional evidence of treatment benefit and meaningful change, and for supporting COA strategy and regulatory submissions.

Day 2: The AI question in qualitative research
Artificial intelligence has the potential to improve speed, efficiency, and scalability of qualitative patient research—but can we trust machines to understand the nuanced experiences of patients living with cancer? That question was posed at a panel exploring the opportunities and challenges of using AI in qualitative patient research. Christina Silver (University of Surrey/CAQDAS Networking Project), Karen Bailey (Thermo Fisher Scientific), and Jane Wells (Sanofi) presented the results of their pilot study comparing human versus AI coding of concept elicitation and cognitive interviews with patients diagnosed with Non-Hodgkin's Lymphoma (n=30 transcripts). Key findings included:
- Speed: AI coded transcripts in 50 seconds versus 120 minutes for humans.
- Volume: AI generated an excessive number of codes (2,318 codes in 578 categories) compared to human coding (1,298 codes in 40 categories).
- Quality: AI identified some relevant concepts missed by humans but required substantial human quality control (76 codes removed, 41 newly created from one AI-coded transcript).
- Utility: Human coding provided superior final analysis relevant to study objectives. AI struggled with context and accuracy.
While AI can deliver unprecedented speed, the extensive human oversight currently required offsets the efficiency savings. For the time being, manual coding remains the gold standard for qualitative data analysis in regulatory submissions. That said, there are many ways in which qualitative researchers can work with AI to improve both quality and efficiency in data analysis and patient-centered research.

Day 3: The EMA and HTA requirements
A workshop on regulatory and HTA requirements addressed a common challenge: EMA wants disease-specific clinical outcomes in randomized controlled trials, HTA bodies want EQ-5D and real-world utility data, and Joint Clinical Assessments (JCAs) want validated patient-reported outcome measures with conceptual frameworks. How do you satisfy everyone simultaneously?
The draft EMA Reflection Paper on Patient Experience Data, published in September 2025 and currently open for public consultation, establishes that patient experience data should be systematically considered throughout the drug development lifecycle and that patient-reported outcomes, patient preference studies, and data from patient engagement activities "contributes to the totality of evidence" in regulatory assessment.
The workshop explored how pharmaceutical companies can address these EMA requirements alongside HTA and JCA needs. The discussion highlighted that regulatory requirements, HTA evidence needs, and patient research shouldn't be treated as separate challenges. They're different stakeholders asking variations of the same fundamental question: Does this treatment matter to patients in ways that improve their lives?
Qualitative interviews with patients during Phase III trials can generate patient experience evidence that serves:
- EMA's patient experience data requirements
- HTA's need for patient-defined utility evidence
- JCA's conceptual framework validation requirements
- Product labelling narratives

The draft EMA Reflection Paper on Patient Experience Data was also discussed at the joint ISPOR Patient-Centered Research meeting, where the ISPOR Clinical Outcome Assessment, Patient-Centered, and Health Preference Research Special Interest Groups were represented. The purpose of the meeting was to coordinate across the Special Interest Groups and plan a joint response to the Reflection Paper. The deadline for providing input on the draft is 31 January 2026.
Read our overview of the draft EMA Reflection Paper on Patient Experience Data here.
