
QCast Episode 2: Guide to Integrated Summaries of Safety and Effectiveness

By Marketing Quanticate
June 17, 2025


In this QCast episode, co-hosts Tom and Jullia peel back the curtain on how drug developers and medical device companies compile, harmonise, and analyse every clinical trial to create a unified story of safety and efficacy. From early Phase II planning and CDISC compliance to cross-functional collaboration, technical pooling of data, and meticulous quality control, we explore the critical steps that ensure regulators receive a transparent, traceable case for approval. Whether you're a regulatory professional, clinical operations leader, or data manager, this episode offers practical insights into building strong ISS/ISE packages that accelerate review timelines and optimise labelling.

🎧 Listen to the Episode:


Key Takeaways

  • ISS & ISE Purpose: Consolidate all clinical trial data into one narrative, enabling regulators to detect safety signals and confirm efficacy. This is mandatory for US NDAs and aligns with CTD requirements.
  • ISE Essentials: List and justify every trial per indication, pool consistent endpoints, perform subgroup and comparator analyses, and maintain full traceability from each figure to raw data.
  • ISS Essentials: Summarise total patient exposure, demographics, pooled adverse events, lab/vital-sign trends, toxicology highlights, long-term/withdrawal effects, and interaction signals, with each statistic traceable to source data.
  • Integration Challenge: Harmonise varied variable names, coding systems, lab units, and endpoint definitions. Convert legacy data to CDISC-compliant SDTM/ADaM, and build a SAP audit trail to resolve discrepancies.
  • Early Planning & SAP: Draft the integrated SAP in Phase II, defining study inclusion criteria, standardising variable definitions, outlining statistical methods, and ensuring traceability from CRFs through SDTM/ADaM to final outputs.
  • Team & Data Management: Coordinate statisticians, programmers, data managers, medical writers, and clinicians through regular integration meetings to align on data capture, analysis, narrative, and regulatory strategy.
  • Technical Execution: Define raw data standards, map variables into unified analysis datasets, align medical dictionaries, recalculate endpoints as needed, generate ADaM datasets, and run pooled analyses to produce tables and figures.
  • Review & Quality Control: Perform programmer QC (double programming), statistical validation, clinical review, narrative alignment, regulatory strategy simulations, and mock review panels to preempt agency questions.
  • Regulatory Impact: A polished, pre-emptive ISS/ISE minimises review queries, prevents CRLs, supports balanced labelling, and enables global approval under ICH and CDISC standards.

Full Transcript

Jullia
[00.00.01]
Welcome to QCast, the show where biometric expertise meets data-driven dialogue. I'm Jullia.

Tom
[00.00.06]
I’m Thomas, and in each episode, we dive into the methodologies, case studies, regulatory shifts, and industry trends shaping modern drug development.

Jullia
[00.00.16]
Whether you're in biotech, pharma, or life sciences, we're here to bring you practical insights straight from a leading biometrics CRO. Right, let's get started. In today's episode, we're diving into a popular and well-known topic: Integrated Summaries of Safety, known as ISS, and Integrated Summaries of Effectiveness, otherwise known as ISE. These aren't just simple overviews, but rather comprehensive, consolidated accounts of every clinical trial conducted on a product. What I mean by this is, Integrated Summaries combine the results of multiple clinical trials to help demonstrate the efficacy and safety of your investigational drug or device.

Tom
[00.01.01]
Thanks Jullia. Okay, let's start at the very beginning. You could argue that regulators might simply work through each trial report one by one, but that approach risks missing the forest for the trees. Imagine a rare adverse event appearing in one participant out of a hundred in a single small Phase 2 study – something that could easily be dismissed as a fluke. However, if that same event appears 5 times when you pool data across 5 separate trials – so a total of 500 participants – it suddenly shifts from a statistical afterthought to a genuine safety signal. The same logic applies to efficacy. A subtle dose-response relationship or differences in effect across subgroups – say, younger versus older patients – won't really stand out unless you draw from a much larger, combined dataset.
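
To make Tom's point concrete, here is a minimal R sketch using the hypothetical counts from his example; the numbers are illustrative only and show how pooling tightens the uncertainty around a rare adverse event rate.

```r
# Illustrative only: one event in a single 100-patient study versus
# five events across five pooled studies totalling 500 patients.
single_study <- binom.test(x = 1, n = 100)  # exact 95% CI for an observed 1% rate
pooled       <- binom.test(x = 5, n = 500)  # same 1% rate, five times the exposure

single_study$conf.int  # wide interval: a single event is hard to interpret
pooled$conf.int        # tighter interval: the signal becomes credible
```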

Jullia
[00.01.53]
Yeah, it's this integrated perspective that lets regulators see the overall benefit-risk picture. They need to confirm that a drug behaves consistently across diverse populations, dosing regimens, and geographies. In the US, it's absolutely mandatory to include a standalone ISS and ISE for New Drug Applications, or NDAs. But this approach is hardly unique to the US, as it aligns with global standards such as the CTD.

Tom
[00.02.20]
Yeah, the Common Technical Document is the internationally recognised format in Europe, Japan, Canada, Australia, and beyond. Even if a particular agency doesn’t explicitly call for a document labelled ISS or ISE, they still expect integrated analyses in the CTD’s structure. Within the CTD, Module 2 covers high-level summaries – Sections 2.7.3 for effectiveness and 2.7.4 for safety. Meanwhile, Module 5 is where you place the full, exhaustive ISS and ISE files under Section 5.3.5.3. Module 5 provides the space required for enormous tables, figures, appendices, and detailed statistical methods that simply wouldn’t fit into Module 2.

Jullia
[00.03.07]
Alright, now let's take a proper dive into the ISE. What exactly do regulators look for here? The ISE must show regulators that every proposed indication, dose, schedule, and patient group has solid backing. That means listing all trials for each indication, explaining why certain studies carry more weight, and pooling consistent endpoints like overall response or time to progression. The ISE also breaks down efficacy by subgroups such as age, sex, ethnicity, and disease severity, allowing regulators to see whether treatment works uniformly or needs adjustments. If one trial is borderline and another more definitive, the combined analysis clarifies whether the overall benefit meets approval thresholds. Then there are comparator analyses, which pool outcomes against placebos or active controls, and long-term data, showing whether efficacy endures or what happens when treatment stops. Every table and figure must trace back to its original data to preserve transparency.

Tom
[00.04.11]
Moving onto safety, the ISS brings together safety findings from first-in-human through late-stage trials. First, it needs to summarise total patient exposure by dose, duration, and demographics. It'll do so by calculating patient-years, for example, '1000 people on 50 milligrams for six months equals 500 patient-years.' Then, it needs to detail who these patients are. We're talking everything from age, sex, weight, ethnicity, comorbidities, and concomitant meds to possible lifestyle factors. By doing so, any subgroup patterns, like a lab abnormality in older subjects with renal issues, become clear. Afterwards, the ISS will need to pool all adverse events – AEs, SAEs, dropouts, and deaths – to spot dose-related trends or subgroup risks. At this point, it also needs to include lab and vital-sign summaries, animal toxicology highlights, long-term and withdrawal effects, and any drug-drug interaction signals that only a large dataset can reveal. As you would expect, every statistic and listing must remain traceable to raw data because if regulators see an unexpected spike, like a cluster of cardiac AEs, they'll inevitably dive deeper.
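
As a quick illustration of the patient-years arithmetic Tom describes, here is a minimal R sketch; the subject count, dose, and durations are hypothetical and mirror his example.

```r
# Hypothetical exposure data: 1000 subjects on 50 mg, each treated for six months.
exposure <- data.frame(
  subject_id   = seq_len(1000),
  dose_mg      = 50,
  duration_yrs = 0.5
)

# Total exposure by dose: 1000 x 0.5 = 500 patient-years
aggregate(duration_yrs ~ dose_mg, data = exposure, FUN = sum)
```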

Jullia
[00.05.43]
Thanks Tom. Now that we've covered the key components of both an ISE and ISS, we can look at why exactly integrated summaries are such a challenge, and the complexities that can crop up as a result. Broadly speaking, there are three big hurdles to consider when pulling together data from multiple trials. The first is data harmonisation. Trials often use different variable names, coding systems, lab units and endpoint definitions, so the integrated summaries must map each variable carefully and align medical dictionary definitions – for example, MedDRA for AEs and the WHO Drug Dictionary for medications. The next challenge is CDISC compliance. Regulators will insist on SDTM for raw data and ADaM for analysis datasets. Legacy trials might predate CDISC, so entire datasets often have to be reprocessed to meet current standards, and any mistakes made here can delay review. The last hurdle to consider is traceability and audit trails. The integrated Statistical Analysis Plan, or SAP, must spell out which trials go into each pool, which variables to include, how to calculate endpoints, and how to resolve discrepancies, documenting every transformation. If you don't plan by the end of Phase 2, you risk finding out late in the game that you can't harmonise key variables or that you're missing essential data elements, forcing redesigns or new data collection that can significantly push timelines back.
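
To show what the harmonisation hurdle can look like in practice, here is a minimal R sketch that maps two differently named, differently unit-ed weight variables onto a single pooled variable; the study extracts and variable names are invented for illustration and are not a substitute for a full SDTM/ADaM mapping.

```r
# Hypothetical extracts: Study A records weight in kg as WT_KG,
# Study B records weight in pounds as WEIGHT_LB.
study_a <- data.frame(USUBJID = c("A-001", "A-002"), WT_KG = c(70, 82))
study_b <- data.frame(USUBJID = c("B-001", "B-002"), WEIGHT_LB = c(154, 198))

# Map both onto one analysis variable, in one unit, before pooling
pooled_weight <- rbind(
  data.frame(STUDYID = "A", USUBJID = study_a$USUBJID, WEIGHT_KG = study_a$WT_KG),
  data.frame(STUDYID = "B", USUBJID = study_b$USUBJID,
             WEIGHT_KG = study_b$WEIGHT_LB * 0.453592)  # convert lb to kg
)
pooled_weight
```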

Tom
[00.07.17]
Yep. Therefore, right from Phase 2, you should already be drafting the integrated SAP for both the ISS and ISE. As you touched on a little earlier, Jullia, this should define the study selection criteria and why, for instance, you might include Phase 2b and Phase 2 studies for a particular indication but exclude a small exploratory cohort that utilises a different endpoint altogether. The SAP also sets variable definitions, including how you uniformly calculate 'progression-free survival', group adverse events under consistent categories like 'hepatotoxicity', and standardise lab units and visit windows. On top of that, it should also detail statistical methods like covariate adjustments, subgroup models, missing-data handling, multiplicity corrections, and sensitivity analyses, establishing a traceability plan so every table and figure can be traced from raw case report forms through SDTM and ADaM datasets to final outputs. By locking in these decisions early, each trial is designed to flow seamlessly into pooled analyses.
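
As an illustration of the kind of uniform endpoint definition a SAP might lock in, here is a minimal R sketch of a single progression-free survival derivation applied identically to every study; the function name, arguments, and days-per-month constant are assumptions made for the example.

```r
# Hypothetical uniform PFS derivation, applied the same way to every pooled study.
derive_pfs <- function(randomisation_date, event_date, had_event) {
  days <- as.numeric(difftime(event_date, randomisation_date, units = "days"))
  data.frame(
    AVAL = days / 30.4375,            # time in months, same convention everywhere
    CNSR = ifelse(had_event, 0L, 1L)  # ADaM convention: 0 = event, 1 = censored
  )
}

derive_pfs(as.Date("2024-01-01"), as.Date("2024-07-01"), had_event = TRUE)
```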

Jullia
[00.08.31]
Alright, just to quickly recap, we've explored in depth what the ISS and ISE entail but have not yet discussed who's typically involved in constructing these integrated summaries in the first place. As you'd imagine, it's really a matter of cross-functional collaboration between many teams. Just to give you an idea, statisticians will design and run pooled analyses, making judgement calls on inconsistencies and ensuring statistical precision. Programmers will handle extracting, transforming, and loading raw CRF data into CDISC formats, then generating ADaM analysis datasets to produce the tables, listings, and figures the SAP requires. Then, we've got the data managers who oversee the trial data collection, ensuring CRFs capture all necessary elements, cleaning the data and resolving queries across studies. Medical writers craft the narrative, explaining methods, interpreting pooled results, discussing inconsistencies, and contextualising subgroups. Clinicians will then interpret safety signals, weigh benefit against risk in different populations and anticipate regulator questions. Regular communication – say, fortnightly integration meetings – among these teams is important to make sure everyone's on the same page and prevent any last-minute surprises.

Tom
[00.09.53]
Now that we know who's involved, I want to take a moment to expand on the pooled analyses that I mentioned earlier in more detail. First, raw data standards must be defined. Teams must ensure new trials collect data per CDISC guidelines, and for legacy trials, map raw data into SDTM format retrospectively. Next is variable mapping. Teams must build metadata that reconciles each study's variables into unified analysis variables. Then, medical dictionaries need to be aligned. For instance, upgrade all AEs to a single MedDRA version and normalise concomitant meds to a consistent WHO Drug Dictionary version, being sure to avoid misclassifications, like 'elevated liver enzymes' being coded differently across studies. If trials calculate an endpoint differently – say, 'percentage tumour shrinkage' – programmers revisit the raw measurements and recalculate it consistently. If this isn't feasible, they'll need to document the discrepancy and adjust statistical models accordingly. After that, programmers will build ADaM analysis datasets – ADSL for subject-level data, ADAE for adverse events, ADTTE for time-to-event endpoints, and so on. Then, statisticians run pooled analyses such as Kaplan-Meier curves, Cox models, logistic regressions, and subgroup tables. Finally, automated R or SAS code generates overall exposure tables, pooled efficacy tables, subgroup forest plots, AE incidence by dose tables, and lab parameter summaries. And if any analysis cannot strictly follow the SAP, for example an endpoint that can't be recalculated, the deviation is documented in footnotes and in a dedicated 'Deviations from Analysis Plan' section.
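
For readers curious what the pooled analysis step might look like in code, here is a minimal R sketch using the survival package on a tiny, made-up ADTTE-style dataset; the variable names follow common ADaM conventions, but the data and models are purely illustrative.

```r
library(survival)

# Tiny, made-up ADTTE-style dataset pooled across two studies
adtte <- data.frame(
  STUDYID = rep(c("STUDY01", "STUDY02"), each = 6),
  TRT01P  = rep(c("Drug", "Placebo"), times = 6),
  AVAL    = c(5, 8, 12, 3, 9, 15, 7, 4, 11, 6, 10, 2),  # time to event, months
  CNSR    = c(0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1)       # 0 = event, 1 = censored
)

# Pooled Kaplan-Meier estimate by treatment arm
km_fit <- survfit(Surv(AVAL, CNSR == 0) ~ TRT01P, data = adtte)

# Cox model stratified by study, allowing each trial its own baseline hazard
cox_fit <- coxph(Surv(AVAL, CNSR == 0) ~ TRT01P + strata(STUDYID), data = adtte)
summary(cox_fit)
```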

Jullia
[00.11.58]
To add to that, once the data is harmonised and analyses are run, the outputs will undergo a rigorous review. First, programmers conduct QC: an independent coder will re-create a subset of key tables – say, the primary efficacy endpoint – to confirm exact matches. Statisticians then check model specifications, verify assumptions, ensure subgroup definitions align with the SAP and confirm there are no mismatches between raw SDTM data and pooled ADaM datasets. Clinicians examine safety signals and efficacy trends, spotting any unexpected outliers – like a cluster of cardiac AEs at one site – or protocol inconsistencies that could undermine confidence. Medical writers ensure narratives precisely match tables and figures, confirm every study ID and dataset name is correct, and transparently discuss any limitations or missing data. Then the regulatory strategy team, which typically consists of senior clinical and regulatory leads, acts as a panel of agency reviewers, asking, for instance, 'Why did Grade 3 insomnia spike in Study 5?' or 'Why weren't certain subjects, or a particular racial group, included in the pooled safety population?' They often request extra analyses or clarifications. Some organisations even run a mock regulatory review panel, where internal experts systematically challenge the package. The team revises proactively to pre-empt obvious questions. Of course, timing is critical. Any delay, like waiting for a senior physician's feedback, can push back submission dates and jeopardise regulatory milestones. Planning review meetings well in advance and circulating drafts early is essential.
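
To make the double-programming QC step concrete, here is a minimal R sketch comparing a production table against an independently re-derived version; the table contents and object names are hypothetical.

```r
# Hypothetical production output and an independently re-derived copy
production_table  <- data.frame(TRT01P = c("Drug", "Placebo"),
                                RESPONSE_RATE = c(0.42, 0.28))
independent_table <- data.frame(TRT01P = c("Drug", "Placebo"),
                                RESPONSE_RATE = c(0.42, 0.28))

# identical() demands an exact match; any mismatch is reconciled before sign-off
if (identical(production_table, independent_table)) {
  message("QC pass: independent derivation matches the production table")
} else {
  message("QC fail: investigate discrepancies")
  print(all.equal(production_table, independent_table))
}
```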

Tom
[00.13.47]
Alright, we should probably start wrapping things up. Let’s talk about impact on regulatory outcomes. Why is it so essential to get the ISS and ISE right on the first pass?

Jullia
[00.13.58]
This is so important, as a well-crafted ISS and ISE can make or break a submission. Regulators use these summaries to decide between approval and issuing a Complete Response Letter. If they understand exactly how pooled safety signals break down by subgroup, they're less likely to request additional analyses. On the other hand, an inconsistent presentation, like not clarifying that elderly patients had a higher incidence of Grade 3 insomnia, could prompt a CRL, adding months to the timeline. It's also worth noting that the quality of safety analyses shapes the final label. A clear demonstration that 'Grade 3 insomnia occurred in 2% overall, with no serious complications' supports a less restrictive warning. Omit subgroup nuances, and you risk a conservative label that limits commercial uptake. On the global front, if your ISS and ISE follow ICH E3 guidance and CDISC standards, you're less likely to face conflicting requests from different regions. That consistency can accelerate simultaneous approvals in the EU, US, and Japan, for example. In short, a polished, pre-emptive ISS and ISE saves months in review and yields a balanced, patient-friendly label – benefitting both companies and the public.

Tom
[00.15.16]
So, the next time you see a new medicine or device approved, remember it’s not just the headline trial data that matters. There’s a huge, integrated body of evidence which has been painstakingly planned, harmonised, and reviewed that gives regulators the confidence to green-light safe, effective therapies.

Jullia
[00.15.36]
That’s all for today’s episode on Integrated Summaries of Safety and Effectiveness. We hope you now appreciate the hidden engine powering every approval – the detailed safety and effectiveness data that ensure new treatments reach patients safely and efficiently. If you found this discussion useful, don’t forget to subscribe and share. And if you’d like to learn more about how Quanticate supports data-driven solutions in clinical trials, head to our website to get in touch.

Tom
[00.16.04]
Thanks for joining us, and we'll see you in the next episode.

About QCast

QCast by Quanticate is the podcast for biotech, pharma, and life science leaders looking to deepen their understanding of biometrics and modern drug development. Join co-hosts Tom and Jullia as they explore methodologies, case studies, regulatory shifts, and industry trends shaping the future of clinical research. Where biometric expertise meets data-driven dialogue, QCast delivers practical insights and thought leadership to inform your next breakthrough.

Subscribe to QCast on Apple Podcasts, Spotify, or your favourite platform to never miss an episode.