The ICH E6 guideline on Good Clinical Practice was updated on 9 November 2016, its first addendum in 20 years. Sections 5.0, on risk, and 5.18.3, on the extent and nature of monitoring, in particular created an increased need for Risk-Based Monitoring (RBM) and Centralized Monitoring (CM). How this may be addressed, and the evolving approach to risk across the industry, have been discussed previously. The changes required create a number of new team roles, including opportunities for biostatisticians and statistical programmers - players who may not previously have been so directly involved in onsite monitoring, a more traditionally clinical domain.
Onsite Monitoring with Risk-Based Monitoring and Centralized Monitoring
Onsite Monitoring (OM), designed to identify risk and spot data errors, is traditionally built into protocols to occur approximately every 4-6 weeks, or can be triggered by events requiring further investigation, such as high rates of screen failure. This can be inefficient and does not always get to the root cause of a problem. Advances in technology mean that data is now often collected electronically (e.g. eCRFs, tablets, and wearables), making RBM and CM strategies easier to implement and improving OM efficiency. Examining data centrally and in real-time, through computer analysis or formal statistical tests, enables identification of potential risks which individuals might not spot. Problems can then be addressed earlier, reducing time and cost, and increasing overall efficiency.
The RBM and CM team, or monitor, should work with the OM team to help them carry out their roles, informing them when OM is required, at which sites or regions, and to what degree - for example, whether a phone call or a full visit is needed.
The Need for Statisticians and Programmers
So what exactly would the RBM and CM team do, and why are biostatisticians and statistical programmers needed? They look at the bigger picture, using their statistical expertise both to produce appropriate presentations of the trial data and to interpret those presentations and analyses. The main objective of RBM is to accurately identify potential risks, which may come from specific sites, geographic regions, or individual investigators, and could be due to causes such as staff requiring more training, recording errors, or fraud. RBM also allows the efforts of the OM team to be more focused, with the ultimate aim of reducing the number of onsite visits while still assessing and reducing risks.
Examples of Statistical Interpretations for Risk
The following provides some examples of techniques from Quanticate’s online Centralized Statistical Monitoring platform designed to be reviewed by statisticians for dialogue with OM teams.
Demographics: By examining tables and graphs of subject-level information such as sex, race, and date of birth, CM teams can spot trends and anomalies indicating whether sites, regions, and investigators are behaving as expected.
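As a minimal sketch of such a demographic review, a simple cross-tabulation of sex by site can surface anomalies at a glance (all subject data below are invented for illustration):

```python
import pandas as pd

# Hypothetical subject-level demographics (site and sex only, for brevity).
subjects = pd.DataFrame({
    "site": ["A", "A", "A", "B", "B", "B", "B", "C", "C"],
    "sex":  ["F", "M", "F", "M", "M", "M", "M", "F", "M"],
})

# Cross-tabulate sex by site; a site enrolling only one sex, say, would
# stand out immediately in this table.
tab = pd.crosstab(subjects["site"], subjects["sex"], margins=True)
print(tab)
```

In practice the same table would be produced for each demographic variable and reviewed alongside graphical displays.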
Adverse Events: Different sites may have different proportions of subjects experiencing AEs. This could be a true reflection of the situation - a potential safety concern - or due to misrecording. Statistical analysis comparing the proportion of subjects with AEs across sites can help inform OM whether there may be an issue and at which sites it is arising.
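One way to make this comparison, sketched here with invented per-site counts, is a chi-squared test of homogeneity, with standardized residuals pointing at the sites driving any difference:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts: subjects with at least one AE vs. without, per site.
sites = ["Site A", "Site B", "Site C", "Site D"]
with_ae = np.array([12, 15, 34, 11])
without_ae = np.array([48, 45, 26, 49])

# Test whether the AE proportion is homogeneous across sites.
table = np.vstack([with_ae, without_ae])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.4f}")

# Standardized residuals show which sites deviate most from expectation.
residuals = (with_ae - expected[0]) / np.sqrt(expected[0])
for site, r in zip(sites, residuals):
    print(f"{site}: standardized residual {r:+.2f}")
```

A significant result would prompt dialogue with the OM team about the flagged sites, not an automatic conclusion of misconduct.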
Dates: Programmatic checking of dates can identify a variety of potential issues, such as duplicate records, subjects attending on unexpected dates (such as public holidays and weekends), and implausibly perfect attendance of subjects.
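These checks are straightforward to program. A minimal sketch over invented visit records might flag duplicates and weekend attendance like this:

```python
from datetime import date

# Hypothetical visit records: (subject_id, site, visit_date).
visits = [
    ("001", "Site A", date(2023, 3, 6)),
    ("001", "Site A", date(2023, 3, 6)),   # duplicate record
    ("002", "Site A", date(2023, 3, 11)),  # falls on a Saturday
    ("003", "Site B", date(2023, 3, 8)),
]

# Duplicate records: the same subject, site, and date appearing twice.
seen, duplicates = set(), []
for rec in visits:
    if rec in seen:
        duplicates.append(rec)
    seen.add(rec)

# Weekend visits (weekday() >= 5) may warrant follow-up with the site.
weekend = [rec for rec in visits if rec[2].weekday() >= 5]

print("duplicates:", duplicates)
print("weekend visits:", weekend)
```

A production check would also cross-reference a holiday calendar for each site's country.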
Safety Parameters: It is important to observe trends in safety parameters. ANOVA and mixed models for repeated measures (MMRM) can be used to investigate differences and/or similarities across sites for individual lab parameters or sets of lab parameters. Care and statistical awareness are required when interpreting the results of these analyses due to the large-scale multiplicity involved.
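For a single lab parameter, the one-way ANOVA part of this can be sketched as follows, using simulated values in which one site is deliberately shifted (all figures invented for illustration):

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(42)

# Simulated lab values (e.g. ALT, U/L) for three sites; Site C is shifted.
site_a = rng.normal(25, 5, 30)
site_b = rng.normal(26, 5, 30)
site_c = rng.normal(35, 5, 30)

# One-way ANOVA: are the site means plausibly equal?
f_stat, p = f_oneway(site_a, site_b, site_c)
print(f"F={f_stat:.2f}, p={p:.4g}")
```

With dozens of parameters tested across many sites, the p-values would need a multiplicity adjustment (e.g. Bonferroni or a false discovery rate procedure) before flagging sites to OM.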
Digit Preference: Checking the distributions of leading and final digits can help detect misrecording or fabrication. Benford's Law states that in many naturally occurring datasets the leading digit d occurs with probability log10(d+1) - log10(d) = log10(1 + 1/d). The finance industry has used this to identify fraud, by checking whether data adhere to this distribution. For the Law to hold, however, the data must be positively skewed and span at least four orders of magnitude, which is not typically true of clinical trial data. An alternative is to use Chi-squared tests to compare digit distributions across sites.
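A minimal sketch of a Benford check, using an invented table of leading-digit counts, is a chi-squared goodness-of-fit test against the Benford proportions:

```python
import numpy as np
from scipy.stats import chisquare

# Benford expected proportions for leading digits 1-9: log10(1 + 1/d).
digits = np.arange(1, 10)
benford = np.log10(1 + 1 / digits)

# Hypothetical leading-digit counts from a dataset under review.
observed = np.array([110, 60, 46, 35, 30, 24, 20, 18, 17])

# Goodness-of-fit test against the Benford distribution; a small p-value
# would indicate departure from Benford's Law.
expected = benford * observed.sum()
chi2, p = chisquare(observed, expected)
print(f"chi2={chi2:.2f}, p={p:.3f}")
```

For clinical trial data, where Benford's assumptions usually fail, the same chisquare machinery can instead compare each site's digit counts against the pooled counts from all other sites.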
Inliers and Outliers: Safety parameter results which lie too close to (inliers) or too far from (outliers) the mean can be detected using CM. With non-normal data, outliers can be detected using the interquartile range (IQR), investigating those patients with values lying more than a specified distance outside it. With normal data, there are a number of options. Univariate analyses examine each parameter separately, through data visualizations such as box plots and by identifying outlying patients for each parameter using Grubbs' test or similar. Multivariate analyses allow a set of parameters to be examined as a whole using the Euclidean or Mahalanobis distance - essentially the distance of each patient's set of parameters from the average of those parameters, with the latter making use of the correlation between variables. These distances can be used to find outliers, and also inliers by taking the negative of the logarithm. Patients and sites with extreme values can then be flagged for further investigation by OM.
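The multivariate part of this can be sketched as follows on simulated data, with one row planted far from the centre as an outlier and one planted implausibly close to it (all data invented for illustration):

```python
import numpy as np
from scipy.spatial.distance import mahalanobis

rng = np.random.default_rng(0)

# Simulated matrix: rows = patients, columns = standardized lab parameters.
data = rng.normal(size=(50, 3))
data[0] = [4.0, -4.0, 4.0]      # a planted outlier, far from the centre
data[1] = [0.01, -0.01, 0.0]    # an implausibly central record (possible inlier)

# Mahalanobis distance of each patient from the multivariate mean,
# accounting for the correlation between parameters.
mean = data.mean(axis=0)
inv_cov = np.linalg.inv(np.cov(data, rowvar=False))
dists = np.array([mahalanobis(row, mean, inv_cov) for row in data])

# Large distances flag outliers; -log(distance) highlights inliers.
print("most outlying patient:", int(np.argmax(dists)))
print("smallest distances:", np.sort(dists)[:3].round(3))
```

Aggregating these per-patient distances by site then gives a site-level signal for dialogue with the OM team.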
Summary and Conclusion
As can be seen, while OM alone can identify risks and data errors, the process is much improved with RBM and CM. They also make it more consistent, since programmed analyses carry out the same assessments systematically and without fail. The need for statisticians and programmers in carrying this out has been highlighted here: they can use their expertise to help interpret the vast amounts of data involved in a clinical trial, not only by writing the relevant programs but also by devising the necessary techniques and interpreting the results and trends that inform OM teams where risks and errors may lie. This blog has outlined some of the more general statistical and programmatic methods involved. Different methods may be required from trial to trial, and as RBM and CM develop, so there will always be a need for statisticians and programmers in RBM and CM.
Quanticate offers Statistical Monitoring solutions using SAS JMP® and has Risk-Based Monitoring technology available. For more information, submit an RFI and a member of our team will contact you shortly.
Relevant Reading and References
- Application of Methods for Central Statistical Monitoring in Clinical Trials: Kirkwood A., Cox T. and Hackshaw A., 2013
- Reality Checks for a Distributional Assumption: The Case of “Benford’s Law”: Goodman W. M., 2000
- ICH E6 (R2): Guideline Reaches Step 4 of the ICH Process.
- Remote Monitoring During Clinical Trials, a Risk Based Approach
- ICHGCP E6 Addendum R2 - What do you need to know?
- The Evolution of Risk Based & Remote Monitoring
- The Rise of Risk Based Monitoring [Infographic]