Quanticate Blog

Clinical Trial Simulation: Models, Workflows, and Common Pitfalls

Written by Statistical Consultancy Team | Mon, Mar 09, 2026

Clinical trial simulation is the structured use of mathematical and statistical models to test study scenarios before a trial begins. Instead of relying on a single assumption about dose, effect size, or variability, teams simulate many plausible realities and examine how a proposed design performs under each. In practice, this often means running many ‘virtual’ versions of the trial (sometimes framed as ‘virtual patients’ or ‘in silico trials’) to understand operating characteristics such as power, bias, precision, and probability of success across uncertainty.

Simulation is decision support, not prediction. Used well, it improves design efficiency, supports rational dose selection, reduces avoidable protocol amendments, and clarifies development risk before patients are enrolled.

Clinical trial simulations vs clinical simulation

The terminology is often confused, so it is important to separate the concepts clearly.

Clinical trial simulation refers to modelling and statistical simulation used to inform trial design, dose selection, analysis strategy, or operational planning.

Clinical simulation, in contrast, typically refers to healthcare training. Hospitals and universities use simulated environments or digital tools to train clinicians in diagnosis or emergency response. It does not relate to trial design or drug development modelling.

In short, trial simulation supports research decisions about design and delivery. Clinical simulation supports clinical education and skills training.

Clinical trial simulations across drug development

The role of simulation evolves as evidence accumulates. Early phases focus on translational assumptions and exposure; later phases focus on robustness and regulatory defensibility.

First-in-human and early phase

Early development integrates preclinical and translational data to inform:

Starting dose justification
This typically translates preclinical and early human evidence into a defensible first dose and escalation plan. It also clarifies which assumptions drive safety margins.

Dose-escalation schemes
Simulations can compare escalation approaches under different toxicity assumptions to understand overdose risk and decision speed. This helps teams choose a scheme that fits the risk profile.

Expected exposure ranges
Simulations explore plausible exposure variability across individuals and dose levels. This supports decisions about escalation steps and monitoring needs.

Sampling design efficiency
Sampling schedules can be stress-tested to ensure they capture enough information without unnecessary burden. This is often used to balance what can be learned against operational feasibility.

Operational and feasibility scenario testing can also matter early, particularly when cohorts are staggered or assessment windows drive pace. Teams may simulate site activation lags, screening failure rates, cohort review timelines, and expected recruitment under different eligibility and visit-burden assumptions to understand schedule risk before patient dosing.

For example, simulation may compare traditional 3+3 escalation with model-based approaches to assess overdose risk and escalation speed under different toxicity assumptions.
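Such a comparison can be sketched in a few lines of Monte Carlo. The code below is a deliberately simplified 3+3 (no cohort re-expansion at the declared MTD), and the toxicity probabilities and target rate in the usage example are illustrative assumptions, not a validated escalation model:

```python
import random

def simulate_3plus3(tox_probs, rng):
    """One simplified 3+3 trial. `tox_probs` are assumed true DLT probabilities
    per dose level. Returns the declared MTD index, or -1 if even the lowest
    dose is judged too toxic."""
    level = 0
    while True:
        dlts = sum(rng.random() < tox_probs[level] for _ in range(3))
        if dlts == 1:  # 1/3 DLT: expand the cohort to 6
            dlts += sum(rng.random() < tox_probs[level] for _ in range(3))
            tolerated = dlts <= 1
        else:
            tolerated = dlts == 0
        if not tolerated:
            return level - 1            # MTD is one level below
        if level == len(tox_probs) - 1:
            return level                # top level tolerated
        level += 1                      # escalate

def overdose_risk(tox_probs, target=0.33, n_sims=5000, seed=1):
    """Fraction of simulated trials whose declared MTD has a true DLT rate
    above the target toxicity level."""
    rng = random.Random(seed)
    over = 0
    for _ in range(n_sims):
        mtd = simulate_3plus3(tox_probs, rng)
        if mtd >= 0 and tox_probs[mtd] > target:
            over += 1
    return over / n_sims
```

For example, `overdose_risk([0.05, 0.10, 0.25, 0.45])` estimates how often the scheme settles on the overly toxic top dose; repeating the calculation under several assumed dose–toxicity curves shows how sensitive the scheme is to those assumptions.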

Phase 2 and proof of concept

In Phase 2, simulation addresses a different question: how likely is the trial to detect a clinically meaningful effect? Typical applications include:

Exploring treatment effect distributions rather than single-point estimates
Rather than fixing one effect size, simulations vary plausible effects and variability to estimate probability of success. This supports more realistic go/no-go planning.

Evaluating continuous vs categorical endpoints
Simulations can compare how endpoint choice affects power and interpretability under plausible data patterns. This is useful when clinical meaning and statistical sensitivity trade off.
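A minimal sketch of this trade-off, assuming normally distributed outcomes, a known-variance z-test for the continuous analysis, and an arbitrary responder threshold (all illustrative choices):

```python
import math
import random
import statistics

def endpoint_power(n_per_arm=60, effect=0.5, threshold=0.0,
                   n_sims=2000, seed=7):
    """Compare power for a continuous endpoint (z-test on means, variance
    assumed known) versus the same data dichotomised at `threshold` and
    analysed as responder proportions. Outcomes are N(0,1), shifted by
    `effect` in the treated arm."""
    rng = random.Random(seed)
    z_crit = 1.96  # two-sided 5% level
    hits_cont = hits_resp = 0
    for _ in range(n_sims):
        ctrl = [rng.gauss(0.0, 1.0) for _ in range(n_per_arm)]
        trt = [rng.gauss(effect, 1.0) for _ in range(n_per_arm)]
        # continuous analysis: difference in means
        z = (statistics.mean(trt) - statistics.mean(ctrl)) / math.sqrt(2.0 / n_per_arm)
        hits_cont += abs(z) > z_crit
        # responder analysis: two-sample proportion test on 'above threshold'
        p1 = sum(x > threshold for x in trt) / n_per_arm
        p0 = sum(x > threshold for x in ctrl) / n_per_arm
        pooled = (p1 + p0) / 2
        se = math.sqrt(max(pooled * (1 - pooled) * 2 / n_per_arm, 1e-12))
        hits_resp += abs(p1 - p0) / se > z_crit
    return hits_cont / n_sims, hits_resp / n_sims
```

Under these assumptions the dichotomised analysis discards information and loses power, which is exactly the kind of quantified trade-off that informs endpoint choice.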

Testing biomarker-enriched populations
Enrichment scenarios can be explored to understand how thresholds affect effect size, variability, and recruitment feasibility. This helps teams quantify the benefit of targeting.

Examining multiplicity strategies across dose arms
Simulations can test how multiplicity control behaves under different true dose–response shapes. This helps avoid strategies that look acceptable only under one assumed pattern.

Here, clinical trial design simulation quantifies the probability of success across uncertainty ranges, rather than relying on a single assumed effect size.
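One common way to implement this is an assurance-style calculation: instead of fixing the effect, each iteration draws a plausible true effect from a distribution before simulating the trial. The sketch below assumes a normal distribution on the effect and a simple one-sided z-test; all numbers are illustrative:

```python
import math
import random

def probability_of_success(n_per_arm, effect_mean, effect_sd,
                           sigma=1.0, n_sims=5000, seed=11):
    """Assurance-style probability of success for a two-arm trial: each
    iteration draws a plausible true effect from N(effect_mean, effect_sd),
    then simulates the trial-level estimate and applies a one-sided z-test."""
    rng = random.Random(seed)
    z_crit = 1.96                                 # one-sided, alpha ~ 0.025
    se = sigma * math.sqrt(2.0 / n_per_arm)       # SE of the mean difference
    successes = 0
    for _ in range(n_sims):
        true_effect = rng.gauss(effect_mean, effect_sd)  # effect uncertainty
        estimate = rng.gauss(true_effect, se)            # simulated trial result
        successes += estimate / se > z_crit
    return successes / n_sims
```

Comparing `probability_of_success(60, 0.6, 0.0)` (classical power at a point estimate) with `probability_of_success(60, 0.6, 0.3)` shows that when power at the point estimate is high, acknowledging effect-size uncertainty typically lowers the probability of success, which supports more realistic go/no-go planning.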

Confirmatory and late phase

Late-phase programmes require statistical robustness and operational realism. Simulations may test:

Time-to-event assumptions in survival trials
Simulations vary event rates and follow-up patterns to assess power and information timing. This helps teams stress-test assumptions that are hard to verify upfront.
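A sketch of this kind of stress test, under the simplifying assumptions of exponential event times, administrative censoring at a fixed follow-up, and a standard log-rank analysis:

```python
import math
import random

def logrank_z(data):
    """Log-rank z-statistic. `data` is a list of (time, had_event, arm) tuples."""
    data = sorted(data)
    at_risk = [sum(1 for _, _, a in data if a == 0),
               sum(1 for _, _, a in data if a == 1)]
    o_minus_e = var = 0.0
    i = 0
    while i < len(data):
        t = data[i][0]
        events = [0, 0]   # events per arm at time t
        leaving = [0, 0]  # subjects leaving the risk set at time t
        while i < len(data) and data[i][0] == t:
            _, had_event, arm = data[i]
            leaving[arm] += 1
            if had_event:
                events[arm] += 1
            i += 1
        n = at_risk[0] + at_risk[1]
        d = events[0] + events[1]
        if d and n > 1:
            o_minus_e += events[1] - d * at_risk[1] / n
            var += d * (at_risk[0] / n) * (at_risk[1] / n) * (n - d) / (n - 1)
        at_risk[0] -= leaving[0]
        at_risk[1] -= leaving[1]
    return o_minus_e / math.sqrt(var) if var > 0 else 0.0

def simulate_logrank_power(n_per_arm=150, hazard_ctrl=0.10, hazard_ratio=0.7,
                           follow_up=24.0, n_sims=500, seed=3):
    """Monte Carlo power: exponential event times per arm, administrative
    censoring at `follow_up`, two-sided log-rank test at the 5% level."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        data = []
        for arm, hazard in ((0, hazard_ctrl), (1, hazard_ctrl * hazard_ratio)):
            for _ in range(n_per_arm):
                t = rng.expovariate(hazard)
                data.append((min(t, follow_up), t <= follow_up, arm))
        hits += abs(logrank_z(data)) > 1.96
    return hits / n_sims
```

Re-running the function across grids of control hazard, hazard ratio, and follow-up duration maps how power and information timing respond to assumptions that are hard to verify upfront.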

Impact of non-proportional hazards
When hazards may change over time, simulations can test how the planned analysis performs under departures from proportional hazards. This supports selecting estimands and analysis approaches that remain interpretable.

Sensitivity to missing data mechanisms
Simulations can impose different missingness patterns to understand bias risk and sensitivity analysis behaviour. This is particularly useful when missingness may be informative.
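The effect of informative missingness can be demonstrated with a toy example. The mechanism below, in which higher outcomes are preferentially deleted, is a deliberately simple stand-in for MNAR dropout:

```python
import random
import statistics

def complete_case_mean(n=20000, mechanism="MCAR", drop_rate=0.3, seed=5):
    """Generate outcomes ~ N(0, 1), delete some under a chosen mechanism,
    and return the complete-case mean (the true mean is 0). MCAR deletes at
    random; 'MNAR' deletes high outcomes preferentially (informative)."""
    rng = random.Random(seed)
    observed = []
    for _ in range(n):
        y = rng.gauss(0.0, 1.0)
        if mechanism == "MCAR":
            missing = rng.random() < drop_rate
        else:  # informative: higher (worse) outcomes more likely to be lost
            missing = rng.random() < drop_rate * (1.0 if y > 0 else 0.2)
        if not missing:
            observed.append(y)
    return statistics.mean(observed)
```

Under MCAR the complete-case mean stays near the truth; under the informative mechanism it is systematically biased, and no amount of additional data repairs it. This is the behaviour that simulated sensitivity analyses are designed to expose.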

Consequences of protocol deviations
Deviation scenarios can be simulated to understand dilution of effect and operational risk. This supports decisions about monitoring focus and analysis strategy.

Operational modelling, such as recruitment variability across regions or dropout patterns, also becomes increasingly relevant at this stage. Teams may also use simulation to generate realistic synthetic datasets to check analysis performance under complexity, including type I error control, multiplicity strategies, and sensitivity to missing data assumptions.

Where designs include adaptive features, simulations can stress-test interim rules such as stopping boundaries, response-adaptive randomisation, or sample size re-estimation, and explore the potential for operational bias introduced by interim decisions.
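As a minimal illustration of stress-testing an interim rule, the two-stage sketch below adds a futility look at half the sample; the futility boundary, final critical value, and normal-outcome assumptions are all illustrative choices rather than a recommended design:

```python
import math
import random
import statistics

def two_stage_operating_chars(n_per_arm=60, effect=0.5, futility_z=0.0,
                              n_sims=2000, seed=9):
    """Operating characteristics of a two-stage design: an interim look at
    half the sample stops for futility if the z-statistic falls below
    `futility_z`; otherwise the full trial tests one-sided at z > 1.96.
    Returns (power, probability of stopping early)."""
    rng = random.Random(seed)
    half = n_per_arm // 2
    wins = stops = 0
    for _ in range(n_sims):
        ctrl = [rng.gauss(0.0, 1.0) for _ in range(n_per_arm)]
        trt = [rng.gauss(effect, 1.0) for _ in range(n_per_arm)]
        z_int = ((statistics.mean(trt[:half]) - statistics.mean(ctrl[:half]))
                 / math.sqrt(2.0 / half))
        if z_int < futility_z:
            stops += 1
            continue
        z_fin = ((statistics.mean(trt) - statistics.mean(ctrl))
                 / math.sqrt(2.0 / n_per_arm))
        wins += z_fin > 1.96
    return wins / n_sims, stops / n_sims
```

Sweeping `futility_z` quantifies the trade-off the rule imposes: stricter boundaries stop more null trials early but also sacrifice some power when the drug works.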

Clinical trial simulation models used in practice

Different modelling families support distinct decisions. Alignment between the question and the model is essential.

Physiologically based pharmacokinetic (PBPK) models
PBPK models represent drug disposition using mechanistic descriptions of physiology. In practice, they are commonly used to:
•    Predict drug–drug interactions with enzyme inhibitors or inducers
•    Explore intrinsic factors such as renal or hepatic impairment
•    Inform paediatric extrapolation scenarios

Regulators often review PBPK outputs when assessing interaction risk or labelling language, provided assumptions are transparent and justified.

Population PK/PD models
Population pharmacokinetic/pharmacodynamic (PopPK/PD) models quantify variability in exposure and response across individuals. They support:
•    Covariate identification (e.g. weight, age, organ function)
•    Dose individualisation strategies
•    Linking biomarker response to clinical endpoints

These models form a backbone for exposure–response analysis in many development programmes.

Exposure–response models
Exposure–response modelling focuses directly on the relationship between concentration and effect. It informs:
•    Dose–response curve shape
•    Therapeutic window characterisation
•    Benefit–risk trade-offs across regimens

This evidence often underpins dose justification arguments in submissions.

Disease progression models
Disease progression models describe how outcomes change over time, with and without treatment, and are often used to:
•    Link short-term biomarkers to longer-term endpoints
•    Explore visit schedules and follow-up duration assumptions
•    Stress-test endpoint sensitivity under realistic trajectories and noise

Modelling can also support formulation and biopharmaceutics decisions when teams need to link dissolution or release assumptions to predicted exposure. This can help teams compare immediate versus modified-release approaches, explore the impact of dissolution variability, and plan bioequivalence expectations under different variability assumptions.

Workflow and common simulation types

A structured workflow ensures simulations are decision-grade rather than exploratory.

A typical workflow

  1. Define the decision context
    Specify the regulatory or development decision to be supported. 
  2. Assemble and clean data
    Integrate clinical, non-clinical, and literature data where relevant. 
  3. Develop and verify the model
    Confirm that the model adequately reproduces observed data patterns. 
  4. Define uncertainty and scenarios
    Explicitly vary effect size, variability, dropout, adherence, or operational assumptions. 
  5. Run virtual trials and summarise decision metrics
    Generate metrics such as probability of success, expected power, bias, or precision.

In many settings, teams implement steps 4–5 via Monte Carlo simulation, repeating the ‘virtual trial’ many times to estimate operating characteristics under each scenario.
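The loop itself is straightforward. The sketch below treats a two-arm trial with a continuous endpoint as the ‘virtual trial’ and sweeps a small scenario grid over effect size and dropout (all values illustrative):

```python
import math
import random
import statistics

def run_virtual_trials(n_per_arm, true_effect, sd=1.0, dropout=0.0,
                       n_sims=2000, seed=42):
    """Steps 4-5 as a Monte Carlo loop: repeat the 'virtual trial' under one
    scenario and summarise an operating characteristic (here, the rejection
    rate of a two-sided z-test, i.e. power)."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(n_sims):
        # completers only: each randomised patient is retained with prob 1-dropout
        ctrl = [rng.gauss(0.0, sd) for _ in range(n_per_arm)
                if rng.random() >= dropout]
        trt = [rng.gauss(true_effect, sd) for _ in range(n_per_arm)
               if rng.random() >= dropout]
        se = sd * math.sqrt(1.0 / len(ctrl) + 1.0 / len(trt))
        z = (statistics.mean(trt) - statistics.mean(ctrl)) / se
        rejections += abs(z) > 1.96
    return rejections / n_sims

# scenario grid: (true effect, dropout rate); the null row checks type I error
scenarios = [(0.0, 0.0), (0.5, 0.0), (0.5, 0.2)]
results = {s: run_virtual_trials(60, s[0], dropout=s[1]) for s in scenarios}
```

Real implementations swap in the planned analysis, realistic data-generating models, and many more scenarios, but the structure, a scenario grid wrapped around a repeated virtual trial, is the same.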

Three broad types of simulation

Across programmes, most simulations fall into three categories:

Design simulations
These focus on statistical performance, such as power, bias, and precision, under different endpoint and sample size assumptions. They help teams understand how sensitive a design is to uncertainty in effect size and variability.

Dose and regimen simulations
These explore exposure, safety, and efficacy across dose levels and schedules, often using PK/PD or exposure–response relationships. They support decisions about which regimens are worth carrying forward and what sampling is sufficient to learn.

Operational simulations
These test delivery assumptions such as recruitment, dropout, adherence, and protocol deviations. They can clarify how operational variability may affect timelines, information at interim looks, and the interpretability of results.

Each addresses a different development risk.

Credibility, documentation, data, and technology constraints

Simulation outputs are only as reliable as their assumptions and documentation. Decision-grade clinical trial simulation in drug development requires:

Transparent assumptions and clearly stated limitations
Document what drives the results, including what is held fixed, what varies, and why scenarios are clinically plausible. State limitations in the same place as conclusions, not as an afterthought.

Verification and validation of models
Check that code and implementation behave as intended, and that the model reproduces key observed patterns at the level needed for the decision. Where evidence is limited, keep conclusions proportionate.

Traceable documentation suitable for regulatory review
Maintain a clear trail from data and model versions to scenario definitions and outputs. Decision-makers should be able to reproduce results and understand how each conclusion was generated.

Alignment with data integrity expectations and privacy regulations such as GDPR
Ensure inputs and outputs follow applicable privacy and data governance requirements. Where simulations support regulated decisions, align handling of electronic records and derived datasets with established data integrity expectations.

Where simulation code, inputs, or outputs sit within regulated workflows, teams also typically consider expectations for validated systems, audit trails, access control, and electronic records governance, for example the principles reflected in 21 CFR Part 11 and comparable GxP data integrity guidance.

In regulatory contexts, modelling and simulation is typically positioned as supportive evidence within the totality of data, with credibility driven by scientific rationale, transparency, reproducibility, and consistency with observed results. This framing often sits within broader ‘model-informed drug development’ discussions rather than as a standalone substitute for clinical evidence.

Practical constraints also matter. Data may sit in separate systems across clinical, biometrics, and pharmacometrics teams. Organisational silos can delay alignment on assumptions. Specialist expertise may be limited, and computational resources must scale with simulation complexity. Clear communication to non-technical stakeholders is equally important to avoid misinterpretation of probabilistic outputs.

Pitfalls, limits, and when not to simulate

Simulation is not always necessary. Common pitfalls include:

Overfitting sparse early-phase data
Early datasets often support multiple explanations, especially when dose levels and outcomes are limited. If the model fits noise rather than signal, simulated ‘certainty’ can look stronger than the evidence.

Testing unrealistic or clinically implausible scenarios
Simulation is only useful if scenarios reflect credible biology and feasible delivery. When assumptions drift from what patients, sites, and protocols can actually support, outputs stop being decision useful.

Ignoring parameter uncertainty
Point estimates can hide how little information sits behind key inputs. Treat uncertainty as an explicit part of the scenario set, not a footnote, so decision metrics reflect what’s unknown as well as what’s expected.

Treating projections as firm predictions
Simulation supports decisions under uncertainty; it does not remove uncertainty. Teams can misread probabilistic outputs as forecasts, which can drive overconfident design choices and under-planned contingency.

Finally, know when not to simulate. If a straightforward analytical calculation answers the question adequately, complex simulation may not add value. The purpose should drive the method, not the other way around.

Conclusion

Clinical trial simulation provides a structured way to examine uncertainty before committing patients and resources to a study. When aligned with a clear decision question, supported by credible models, and communicated transparently, it helps teams choose designs and regimens with a clearer view of risk before patients are enrolled.

Quanticate’s statistical consultancy team can help you use clinical trial simulation to stress-test assumptions, quantify risk, and support dose, endpoint, and analysis decisions. We’ll review your decision question, available data, and practical constraints, then recommend a proportionate approach with clear assumptions and decision-ready outputs. Request a consultation to discuss your programme and next steps.