ABSTRACT
The Integrated Summary of Safety (ISS) and Integrated Summary of Efficacy (ISE) are essential components of a successful
submission. In legacy studies where different types of data are frequently collected through diverse systems by various
vendors, programming ISS and ISE analyses can be a daunting job, because all study data need to be converted and
harmonized to the same format before programming and analysis work can begin. Studies that utilize an Electronic Data
Capture (EDC) system share similarly structured views, which can greatly ease the harmonization process. However, even
when the harmonization effort is reduced, many unique challenges remain for programmers performing multi-study data
integration for ISS and ISE. This paper discusses specific tips and techniques for efficiently programming integrated analyses,
focusing on the following areas: (1) data source checking, (2) the "spread and convene" programming approach, and (3)
consistent data and folder structure.
INTRODUCTION
The Integrated Summary of Safety (ISS) and Integrated Summary of Efficacy (ISE) are essential components of a successful
submission. They differ from a regular study in that: (a) there is a much larger amount of data, (b) each component study has
usually been locked and its files frozen before ISS and ISE work begins, and (c) the component studies may use different
folder structures, since some were locked long ago and may have followed different standards. This paper details techniques
to efficiently handle these challenges, which include: (1) data source checking, (2) the "spread and convene" programming
approach, and (3) consistent data and folder structure.
1. DATA SOURCE CHECKING
As a first check, a frequency distribution was run on the adverse event action-taken variable (AEACN) across the pooled data.
Result obtained:
                                            Cumulative    Cumulative
AEACN               Frequency    Percent     Frequency       Percent
---------------------------------------------------------------------
DOSE INCREASED              2       0.01             2          0.01
DOSE NOT CHANGED        32999      95.44         33001         95.44
DOSE REDUCED              136       0.39         33137         95.84
DRUG INTERRUPTED          650       1.88         33787         97.72
DRUG WITHDRAWN            556       1.61         34343         99.33
NOT APPLICABLE            230       0.67         34573         99.99
UNKNOWN                     3       0.01         34576        100.00

                     Frequency Missing = 7
NESUG 2010 Pharmaceutical Applications
In the frequency distribution above, there are seven AE records with a blank AEACN and three with the value
'UNKNOWN'. Rather than going directly to table production, it is recommended to investigate further and report the
findings to the statisticians and database team, consulting them for a final decision.
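The check above can be reproduced with a simple PROC FREQ; the libref, path, and dataset name (adae) below are hypothetical placeholders for the pooled adverse event data:

```sas
/* Data source check: tabulate action taken (AEACN) on the pooled  */
/* adverse event data. The MISSING option includes blank values in */
/* the table so they are not silently dropped.                     */
libname pooled "/ISS2009/OverallISS/dataanalysis";  /* hypothetical */

proc freq data=pooled.adae;
   tables aeacn / missing;
   title "Data source check: AEACN, including blanks";
run;
```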
B). Missing values and blank values are our "friends" in ISS or ISE
Statistical programmers have to deal frequently with missing or blank values, and this is especially important in an
ISS or ISE when dictionary leveling is involved.
Commonly, ISS data is leveled to use the same dictionary version across all studies. This may result in missing data
due to expired terminology. For example, consider the hierarchy in the drug dictionary, in which each coded drug
name (CMDECOD) rolls up to a drug class (CMCLAS).
If a CMDECOD expires and cannot be leveled to the dictionary version used by the ISS or ISE, no corresponding
CMCLAS can be assigned. Such data leveling issues in the resulting ISS or ISE can be identified by running a
simple frequency procedure against the leveled variables (CMDECOD and CMCLAS in the above example) to
ensure no blanks occur. Given the large volume of data in an ISS or ISE, this programmatic check is important,
as such issues are easily overlooked by manual review. In this sense, missing or blank values are our "friends" in
an ISS or ISE: they help to identify harmonization issues in the integrated analyses.
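Such a blank check can be sketched as follows; the libref ana and dataset name adcm are assumptions for the leveled concomitant medication analysis data:

```sas
/* After dictionary leveling, any blank CMDECOD or CMCLAS flags a   */
/* term that could not be mapped to the target dictionary version.  */
proc freq data=ana.adcm;
   tables cmdecod cmclas / missing;
run;

/* List the offending records for follow-up with the database team. */
/* USUBJID and CMTRT are assumed identifier/term variables.         */
proc print data=ana.adcm;
   where cmdecod = ' ' or cmclas = ' ';
   var usubjid cmtrt cmdecod cmclas;
run;
```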
2. PROGRAMMING APPROACH
It is possible to pool the raw data from all studies and write one set of programs for the ISS or ISE, but this approach
makes it difficult to debug and determine the source of problems, especially when there are a large number of
component studies. To save debugging and validation time, the approach adopted for our ISS of 19 studies was to first
program each study individually, reusing code from the clinical study report (CSR) or other existing programs. The results
were then compared with the existing CSR or other published results.
After the programming work is done for each individual component study of the ISS or ISE and all programs for individual
studies have been developed and validated, only simple stacking programs are needed to combine the analysis datasets.
A simple set of stacking programs was written to stack the analysis datasets, which share the same ADaM structure;
further work was then completed on the stacked analysis data. We call this approach "spread and convene".
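A "convene" step can be as simple as a SET statement over per-study librefs; the paths below are assumptions modeled on the folder structure shown later in this paper:

```sas
/* Stack ("convene") the per-study ADaM lab analysis datasets.      */
/* Each study's ADLAB has already been validated against its CSR    */
/* and shares the same structure, so a plain SET statement works.   */
libname p123 "/ISS2009/p123/dataanalysis";
libname p456 "/ISS2009/p456/dataanalysis";
libname iss  "/ISS2009/OverallISS/dataanalysis";

data iss.adlab;
   set p123.adlab
       p456.adlab;    /* ...plus the remaining component studies */
run;
```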
The advantage of this approach can be seen in the laboratory safety (LAB) and predefined limit of change (PDLC)
analyses. In the following example, we produced a PDLC listing table for an ISS consisting of 19 studies, which had
11 columns as follows:
In this example, we show three patients: two from Prot123 with allocation numbers (AN) 10001 and 10030, and one from
Prot456 with AN 1000. Since Prot456 was a Phase IIB study and Prot123 was a Phase III study, the study designs differed
somewhat, and so did the baseline definitions. Therefore, to obtain a table in which the baseline value appears in a single
column, the most efficient approach was to "spread" first, i.e., set up the lab data for Prot456 and Prot123 separately and
compare the results against the original CSR or other exploratory outputs, and then "convene", i.e., stack the analysis
datasets with the baseline value as one column. The same applies to the analysis time point column, where the way weeks
were defined differed by study. Note that the table should contain only those patients who received at least one dose of
study medication. Using the "spread and convene" approach instead of trying to integrate all the data together (in this
case, data from 19 studies), running the programs took considerably less time.
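The baseline harmonization described above can be sketched as follows; the visit rules, variable names, and librefs are hypothetical illustrations of the "spread" step, not the studies' actual baseline definitions:

```sas
/* "Spread": derive each study's baseline with that study's own     */
/* definition (the visit rules here are made up for illustration).  */
data base123;
   set p123.adlab;
   where avisit = 'WEEK 0';           /* Phase III baseline rule    */
   base = aval;
   keep usubjid paramcd base;
run;

proc sort data=p123.adlab out=lab123; by usubjid paramcd; run;
proc sort data=base123;               by usubjid paramcd; run;

/* Attach BASE so the baseline value sits in its own column. */
data lab123;
   merge lab123 base123;
   by usubjid paramcd;
run;

/* Repeat for Prot456 with its own rule, then "convene":            */
/* data adpdlc_all; set lab123 lab456; run;                         */
```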
3. CONSISTENT DATA AND FOLDER STRUCTURE
A consistent folder structure was used for the overall ISS and for each component study:

ISS2009
|  OverallISS
|  |  dataanalysis
|  |  |  adlab.sas7bdat
|  |  |  adpdlc.sas7bdat
|  |  pgmsetup
|  |  pgmanalysis
|  |  utility
|  |  |  startup.sas
|  p456
|  |  sdtmplus
|  |  |  dm.sas7bdat
|  |  |  lb.sas7bdat
|  |  dataanalysis
|  |  |  adlab.sas7bdat
|  |  |  adpdlc.sas7bdat
|  |  pgmsetup
|  |  pgmanalysis
|  |  utility
|  |  |  startup.sas
|  p123
|  |  sdtmplus
|  |  |  dm.sas7bdat
|  |  |  lb.sas7bdat
|  |  dataanalysis
|  |  |  adlab.sas7bdat
|  |  |  adpdlc.sas7bdat
|  |  pgmsetup
|  |  pgmanalysis
|  |  utility
|  |  |  startup.sas
|  p789
|  p012
|  ...
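Because every study follows the same layout, each study's startup.sas can assign its librefs the same way; the sketch below is hypothetical:

```sas
/* utility/startup.sas - sketch of per-study setup. Only the study  */
/* root changes from one component study to the next, so analysis   */
/* programs can be reused across studies without path edits.        */
%let studyroot = /ISS2009/p123;

libname sdtmplus "&studyroot./sdtmplus";      /* source data        */
libname anadata  "&studyroot./dataanalysis";  /* analysis datasets  */
```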
The following is a consistent data structure example for our ADSL dataset within each component study:
CONCLUSION
This paper provides some basic techniques and tips for ISS and ISE programming. These steps help enable the efficient and
accurate creation of multiple ISS and ISE analyses. If all the component study data are collected in SDTM format, further
development can standardize the programs for each component study analysis, where applicable, and further improve
efficiency.
REFERENCES
CDISC Study Data Tabulation Model Implementation Guide: Human Clinical Trials, Version 3.1.1 (SDTM IG 3.1.1)
SAS and all other SAS Institute Inc. product or service names are registered trademarks or trademarks of SAS Institute Inc. in
the USA and other countries. ® indicates USA registration.
ACKNOWLEDGEMENTS
The author would like to thank the management team for their review of this paper.
CONTACT INFORMATION
Your comments and questions are valued and encouraged. Contact the authors at:
Changhong Shi
Merck & Co., Inc.
RY34-A320
P.O. Box 2000
Rahway, NJ 07065
Changhong_shi@merck.com
Qing Xue
Merck & Co., Inc.
RY34-A320
P.O. Box 2000
Rahway, NJ 07065
qing_xue@merck.com