
QCast Episode 18: Real World Evidence in Clinical Trials

By Marketing Quanticate
October 24, 2025


In this QCast episode, co-hosts Jullia and Tom unpack real-world evidence in clinical development — clarifying how real-world data become decision-ready evidence and where that evidence sits alongside randomised trials. They explore current regulatory expectations for data fitness and methodological transparency, practical uses from feasibility and endpoint selection to hybrid trials and post-authorisation monitoring, and the high bar for external controls. Along the way, they share concrete steps for target trial emulation, data curation, governance, and bias control — helping sponsors turn routine care data into reliable insights.

🎧 Listen to the Episode:

Key Takeaways

What Real-World Evidence Is and Why It Matters
Real-world data are routinely collected records from care settings; real-world evidence is the analysis that answers a defined clinical or regulatory question. RWE complements, not replaces, randomised trials by addressing generalisability, long-term outcomes, rare events, and patterns of care that trials alone may not capture.

Designing Fit-for-Purpose Studies
Start with the decision and emulate the “target trial” you would have run: define eligibility, time zero, treatment strategies, outcomes, and follow-up. Use prespecified protocols and transparent analysis plans. Justify methods for confounding control, report diagnostics such as covariate balance and overlap, and run sensitivity analyses for unmeasured bias.
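
For teams translating this into practice, a minimal sketch in Python of writing the target trial down as a structured protocol object before any analysis begins. The class and all field values are hypothetical illustrations, not content from the episode.

```python
# Hypothetical target-trial protocol specification; entries are illustrative.
from dataclasses import dataclass

@dataclass
class TargetTrialProtocol:
    eligibility: list[str]           # who would have been enrolled
    treatment_strategies: list[str]  # the strategies being compared
    time_zero: str                   # when assignment and follow-up start
    outcomes: list[str]              # prespecified endpoints
    follow_up: str                   # duration and censoring rules

protocol = TargetTrialProtocol(
    eligibility=["adults with condition X", "no prior therapy Y"],
    treatment_strategies=["initiate drug A within 30 days", "do not initiate drug A"],
    time_zero="date all eligibility criteria are first met",
    outcomes=["all-cause hospitalisation", "death from any cause"],
    follow_up="24 months, censoring at disenrolment or data cut-off",
)
```

Writing the protocol down in one place like this makes the prespecification auditable and keeps later analysis code tied to the original question.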

Data Quality and Governance as Non-Negotiables
Prioritise provenance, completeness, timeliness, and traceability. Curate to a common data model with controlled terminology, audit trails, and versioned code. Plan privacy and compliance early: data minimisation, lawful bases, secure linkage, role-based access, and clear cross-border arrangements where relevant.

Where RWE Adds Most Value
Use RWD for feasibility, endpoint selection, and smarter eligibility. Link routine records to streamline follow-up in pragmatic or hybrid trials. When proposing external controls, ensure clinical comparability, strict cohort construction, and robust confounding control so results are credible and decision-useful.

Practical Tips and Common Pitfalls
Anchor time zero correctly to avoid immortal time bias. Validate outcomes using multiple signals or adjudication. Avoid uncontrolled scope creep in cohorts or datasets. Treat RWE as disciplined science: prespecify, document, and make diagnostics a habit so findings are reproducible and inspection-ready.

Full Transcript

Jullia
Welcome to QCast, the show where biometric expertise meets data-driven dialogue. I’m Jullia.

Tom
I’m Tom, and in each episode, we dive into the methodologies, case studies, regulatory shifts, and industry trends shaping modern drug development.

Jullia
Whether you’re in biotech, pharma or life sciences, we’re here to bring you practical insights straight from a leading biometrics CRO. Let’s get started.

Tom
Today, we’re going to be discussing real-world evidence in clinical trials. Jullia, why don’t we begin with a clean definition? People use “real-world data” and “real-world evidence” interchangeably. What do they actually mean, and where do they fit alongside classic randomised trials?

Jullia
Thanks, Tom. So, real-world data, or RWD, are data routinely collected outside traditional trial settings. Think electronic health records, registries, insurance claims, prescribing and dispensing data, and increasingly devices and apps. Real-world evidence, or RWE, is the analysis and inference we derive from those data to address a specific clinical or regulatory question. RWD are the raw ingredients while RWE is the finished dish. They sit beside randomised controlled trials rather than replacing them. Trials remain the primary way to estimate unbiased treatment effects, but RWE helps answer questions trials cannot cover well, such as long-term effectiveness, utilisation in subgroups, rare adverse events, or patterns of care in routine practice.

Tom
So the value proposition is breadth and pragmatism. But regulators can be sceptical if methods are soft. What do current expectations look like?

Jullia
The direction is consistent. Regulators encourage the use of RWE when the question, data, and methods are fit for purpose. Expectations centre on three pillars. First, data fitness: provenance, completeness, timeliness, and traceability, with audit trails and governance. Second, study reliability: prespecified protocols, transparent analytic plans, and appropriate methods to handle confounding, missingness, and measurement error. Third, relevance to the decision: endpoints that map to clinical benefit and clear alignment with the target population. Guidance emphasises quality-by-design, risk management, and strong data integrity controls. In short, agencies welcome RWE that is planned with the same discipline as a trial and executed with reproducibility in mind.

Tom
Let’s turn to where RWE makes a material difference. What are the main applications across the product lifecycle, from discovery to post-authorisation?

Jullia
Several stand out. In early development, RWD define epidemiology, unmet need, and feasibility: event rates, standard of care, and likely enrolment by geography. During pivotal planning, RWE supports endpoint selection and operational design, including eligibility criteria that reflect clinical reality. For execution, you can enable hybrid or pragmatic trials that link trial data to health system records for outcomes follow-up. External controls derived from RWD can contextualise single-arm studies in areas like oncology or rare disease, when built with strict comparability criteria. After approval, RWE monitors safety, effectiveness, utilisation, and adherence, and can inform label expansions or health-technology assessments. The common thread is using routine care data to answer timely, decision-relevant questions.

Tom
You mentioned external controls, which attract scrutiny. What are the non-negotiables if a sponsor wants to propose an external control derived from RWD?

Jullia
Three non-negotiables. First, clinical comparability: match inclusion and exclusion criteria, line of therapy, baseline disease severity, and prior treatments as closely as possible. If the trial protocol changes, the external cohort must adapt or be re-derived. Second, methodological rigour: a prespecified plan for cohort construction, time-anchoring, and follow-up, with robust confounding control. That includes propensity scores or more advanced causal methods, balance diagnostics, and sensitivity analyses to probe unmeasured confounding. Third, data quality and traceability: documented provenance, a reproducible curation pipeline, and transparent handling of missing or conflicting fields. If any of these are weak, the evidence becomes contextual at best rather than pivotal.

Tom
Let’s stay with methods for a moment. Many listeners have heard terms like propensity score, inverse probability weighting, or target trial emulation. How should teams choose and justify an approach?

Jullia
Start with the causal question and emulate the “target trial” you would have run if randomisation were feasible. Define eligibility, treatment strategies, time zero, follow-up, and outcomes. Then choose tools that address the confounding structure and data reality. Propensity scores help balance measured covariates; matching improves interpretability, while weighting uses more data but needs careful stabilisation. Doubly robust estimators combine outcome modelling and weighting for protection against model misspecification. For time-varying treatment or confounding, consider marginal structural models. Always show diagnostics: covariate balance, positivity, overlap, and sensitivity to unmeasured confounding. The method is not the headline; the justification, transparency, and diagnostics are.
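
To make those diagnostics concrete, here is a minimal sketch, assuming a pandas DataFrame with hypothetical covariate names and a binary treated column, that fits a propensity model, builds stabilised inverse-probability weights, and reports standardised mean differences before and after weighting. It is an illustration of the general technique, not Quanticate’s pipeline.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def weighted_smd(x, treated, weights):
    """Standardised mean difference for one covariate under given weights."""
    t, c = treated == 1, treated == 0
    m_t = np.average(x[t], weights=weights[t])
    m_c = np.average(x[c], weights=weights[c])
    v_t = np.average((x[t] - m_t) ** 2, weights=weights[t])
    v_c = np.average((x[c] - m_c) ** 2, weights=weights[c])
    return (m_t - m_c) / np.sqrt((v_t + v_c) / 2)

def balance_table(df, covariates, treatment_col="treated"):
    treated = df[treatment_col].to_numpy()
    X = df[covariates].to_numpy(dtype=float)
    # Propensity model on measured covariates only; unmeasured confounding
    # still needs separate sensitivity analysis, as discussed above.
    ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    p_treat = treated.mean()
    # Stabilised inverse-probability-of-treatment weights.
    w = np.where(treated == 1, p_treat / ps, (1 - p_treat) / (1 - ps))
    unweighted = np.ones(len(df))
    rows = []
    for col in covariates:
        x = df[col].to_numpy(dtype=float)
        rows.append({
            "covariate": col,
            "smd_unweighted": weighted_smd(x, treated, unweighted),
            "smd_weighted": weighted_smd(x, treated, w),
        })
    # Also inspect the distribution of ps by arm for positivity and overlap,
    # and consider trimming extreme scores before weighting.
    return pd.DataFrame(rows), ps
```

A balance table like this, alongside overlap plots and sensitivity analyses, is exactly the kind of diagnostic output Jullia suggests reporting rather than just the headline estimate.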

Tom
Data quality is the bedrock. What practical steps raise the quality of RWD before analysis, especially when sources vary in structure and completeness?

Jullia
Begin with a data management plan tailored to observational data. Map source fields to a common model with controlled terminology. Define admissible values and clinical plausibility rules. Run systematic checks for duplicates, date logic, and discordant records across sources. For baseline measures, set clear look-back windows; for outcomes, specify adjudication logic using multiple signals, such as diagnoses, procedures, and medication patterns. Document all transformations with versioned code and maintain an audit trail. Where appropriate, align to well-adopted standards so variables are consistent across data partners, and implement role-based access and encryption to protect personal data in line with privacy laws. The output should be a curated, analysis-ready dataset with traceable lineage.
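
To ground those checks, a minimal sketch with hypothetical column names and plausibility ranges, showing systematic checks for duplicates, date logic, and clinical plausibility on a curated dataset. Real pipelines would log every flagged record to the audit trail before any correction.

```python
import pandas as pd

# Hypothetical admissible ranges per variable (assumed units).
PLAUSIBLE_RANGES = {
    "age_years": (0, 120),
    "weight_kg": (20, 300),
    "systolic_bp": (50, 260),
}

def run_quality_checks(df: pd.DataFrame) -> dict:
    issues = {}
    # Duplicate patient/visit records, e.g. the same encounter from two sources.
    issues["duplicates"] = df[df.duplicated(subset=["patient_id", "visit_date"], keep=False)]
    # Date logic: events recorded before the index (time-zero) date.
    dates_ok = pd.to_datetime(df["event_date"]) >= pd.to_datetime(df["index_date"])
    issues["date_logic"] = df[~dates_ok]
    # Clinical plausibility rules on measured values.
    for col, (lo, hi) in PLAUSIBLE_RANGES.items():
        if col in df.columns:
            issues[f"implausible_{col}"] = df[(df[col] < lo) | (df[col] > hi)]
    return issues  # record counts per check in the data management report
```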

Tom
Speaking of privacy, how do teams navigate governance, consent, and transparency when linking datasets or working across borders?

Jullia
Plan governance at the protocol stage. Define legal bases for processing, data minimisation, and purpose limitation. Use data sharing agreements that specify roles, retention, and deletion. Pseudonymise data before transfer, segregate keys, and restrict re-identification. For cross-border projects, confirm an adequate transfer mechanism and local requirements. Build a transparent privacy notice for participating sites or data partners and, where consent is used, ensure it is informed and specific to the purpose. Maintain a data protection impact assessment for high-risk processing and keep a record of processing activities. Good governance is not a bolt-on; it is integral to credibility and participant trust.
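
As one illustration of the pseudonymisation step Jullia mentions, a minimal sketch, with hypothetical field names and not a complete privacy solution, in which direct identifiers are replaced by keyed hashes before transfer and the key is held separately from the data.

```python
import hmac
import hashlib

def pseudonymise(record: dict, secret_key: bytes) -> dict:
    """Replace the direct identifier with a keyed hash before transfer."""
    out = dict(record)
    token = hmac.new(secret_key, record["patient_id"].encode(), hashlib.sha256)
    out["patient_token"] = token.hexdigest()
    del out["patient_id"]  # the direct identifier never leaves the source
    return out

# The secret key stays with the data controller (for example in a key vault),
# separate from the transferred dataset, so records can still be linked across
# sources while routine re-identification remains restricted.
```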

Tom
Let’s bring this into study design. How can RWE practically streamline trial conduct without diluting scientific standards?

Jullia
Two examples. First, outcome ascertainment. Instead of frequent on-site visits for every endpoint, link to health records to collect hospitalisations or procedures while reserving site visits for assessments that truly require them. This reduces burden and cost and can extend follow-up for effectiveness endpoints. Second, smarter eligibility. Use RWD to test how proposed criteria affect generalisability and enrolment. You can simulate screen failure rates and adjust criteria to widen access without compromising safety. Across both, predefine the data flows, validate linkage, and run pilot extractions before first patient in. The goal is to keep the rigour of a trial while using routine data to do the heavy lifting.
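
To make the eligibility simulation concrete, a minimal sketch, using hypothetical criteria and column names rather than a validated feasibility tool, that applies protocol criteria sequentially to a curated RWD cohort and reports how many patients remain after each rule.

```python
import pandas as pd

# Hypothetical protocol criteria, applied in order; each returns a boolean mask.
CRITERIA = {
    "adult":            lambda d: d["age_years"] >= 18,
    "egfr_ok":          lambda d: d["egfr"] >= 45,
    "no_prior_therapy": lambda d: ~d["prior_targeted_therapy"],  # boolean column
}

def eligibility_waterfall(cohort: pd.DataFrame) -> pd.DataFrame:
    rows, remaining = [], cohort
    for name, rule in CRITERIA.items():
        remaining = remaining[rule(remaining)]
        rows.append({
            "criterion": name,
            "patients_remaining": len(remaining),
            "fraction_of_source": len(remaining) / len(cohort),
        })
    return pd.DataFrame(rows)

# Re-run the waterfall with relaxed versions of the most restrictive criteria
# to see how eligibility changes affect the likely screen-failure rate.
```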

Tom
Many teams ask for a starter roadmap. If a sponsor is building RWE capability from scratch, what are the first five steps you would recommend?

Jullia
One, set governance and quality frameworks that mirror clinical study standards, including standard operating procedures for observational research. Two, establish a data partnerships strategy with due diligence on provenance, refresh cycles, and linkage potential. Three, build a curated common data model and a reproducible engineering pipeline, with version control and code review. Four, invest in core methods skills across biostatistics, epidemiology, and data science, and embed causal thinking through target trial emulation. Five, create an engagement model with clinical, regulatory, and safety colleagues so RWE questions are aligned to decision points. Start with a high-value use case, measure impact, and scale deliberately.

Tom
That brings us to pitfalls. Where do RWE projects most often stumble, and how can teams avoid those traps?

Jullia
Common pitfalls include immortal time bias from misaligned time zero, inadequate control for confounding, outcome misclassification, and optimistic assumptions about data completeness. The antidotes are well known: emulate the target trial with precise time anchoring; check and report covariate balance; validate outcomes with multiple signals or adjudication; and quantify missingness with sensitivity analyses that explore plausible mechanisms. Another trap is treating RWE as a shortcut. It is not faster if you discover quality problems late. Front-load curation and feasibility checks, and engage regulators early when the evidence will inform labelling or key decisions.
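
As an illustration of the time-zero point, a minimal sketch with hypothetical date fields that anchors follow-up for the treated strategy at treatment initiation, so the gap between diagnosis and initiation is never credited to treatment as event-free time.

```python
import pandas as pd

def anchor_treated_follow_up(df: pd.DataFrame) -> pd.DataFrame:
    df = df.copy()
    # Time zero for the treated strategy is treatment initiation, not diagnosis;
    # anchoring at diagnosis would count the diagnosis-to-initiation interval as
    # "immortal" person-time in which treated patients could not have an event.
    df["time_zero"] = df["treatment_start_date"]
    df["follow_up_days"] = (df["outcome_or_censor_date"] - df["time_zero"]).dt.days
    # Outcomes dated before time zero cannot be attributed to the strategy.
    # The comparator arm needs an equivalently defined index date (not shown).
    return df[df["follow_up_days"] >= 0]
```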

Tom
Before we wrap, could we give listeners a concise takeaways moment they can action this week?

Jullia
Three quick takeaways. First, define the decision and emulate the target trial before you touch code. Methods follow the question. Second, invest in data curation and provenance tracking early. You cannot recover quality later. Third, make diagnostics a habit: show balance, overlap, and sensitivity, and document everything with versioned code and audit trails. If you do those three things, your RWE will carry more weight with reviewers and clinicians.

Jullia
With that, we’ve come to the end of today’s episode on real-world evidence in clinical trials. If you found this discussion useful, don’t forget to subscribe to QCast so you never miss an episode and share it with a colleague. And if you’d like to learn more about how Quanticate supports data-driven solutions in clinical trials, head to our website or get in touch.

Tom
Thanks for tuning in, and we’ll see you in the next episode.

About QCast

QCast by Quanticate is the podcast for biotech, pharma, and life science leaders looking to deepen their understanding of biometrics and modern drug development. Join co-hosts Tom and Jullia as they explore methodologies, case studies, regulatory shifts, and industry trends shaping the future of clinical research. Where biometric expertise meets data-driven dialogue, QCast delivers practical insights and thought leadership to inform your next breakthrough.

Subscribe to QCast on Apple Podcasts or Spotify to never miss an episode.