RESEARCH METHODOLOGY
Year : 2021  |  Volume : 33  |  Issue : 1  |  Page : 29-32

Adaptive designs for clinical trials


Praveen K Nirmalan
Research Mentor, AMMA Healthcare Research Gurukul, Kochi, Kerala, India

Date of Submission: 16-Dec-2020
Date of Decision: 17-Dec-2020
Date of Acceptance: 18-Dec-2020
Date of Web Publication: 19-Apr-2021

Correspondence Address:
Dr. Praveen K Nirmalan
Research Mentor, AMMA Healthcare Research Gurukul, Kochi, Kerala
India

Source of Support: None, Conflict of Interest: None


DOI: 10.4103/kjo.kjo_202_20

  Abstract 


The gold standard for interventional studies is the randomized controlled trial (RCT). The RCT follows a linear path with prespecified plans and little option for flexibility during its implementation. The RCT is also resource- and time-intensive, which is a limitation in epidemics or pandemics, where more rapid responses can reduce adverse effects. Adaptive designs (ADs) allow continuous, evidence-based modifications to key elements of trial design even while data collection is ongoing. Reduced use of resources, reduced time to complete the trial, flexibility in the allocation of participants to study arms, and an improved likelihood of scientifically valid trial results are added advantages of AD. ADs can be applied from early-phase trials to confirmatory trials. In this paper, we introduce key aspects of AD that ophthalmologists can apply to clinical trials in ophthalmology. We briefly introduce the rationale for AD, commonly used terminology, and design elements.

Keywords: Adaptive designs, clinical trials, design modification, interim analysis, randomization


How to cite this article:
Nirmalan PK. Adaptive designs for clinical trials. Kerala J Ophthalmol 2021;33:29-32

How to cite this URL:
Nirmalan PK. Adaptive designs for clinical trials. Kerala J Ophthalmol [serial online] 2021 [cited 2021 Jun 18];33:29-32. Available from: http://www.kjophthal.com/text.asp?2021/33/1/29/314091




Conventional Fixed Trial Designs


The randomized controlled trial (RCT) is the gold standard design to assess the effectiveness of an intervention in healthcare. Conventional controlled trial designs follow a linear path from the design of the study, through its implementation, to the analysis and interpretation of data after the completion of the trial.[1] The prespecified fixed path has several advantages and may improve the validity and reliability of results. However, it also has major disadvantages. These trials need more resources, time, and personnel, and hence are limited to a smaller number of institutions or trialists.[2],[3] Phase III trials usually take a long time to answer clinically relevant questions, as the answers are analyzed only on completion of the trial. A previous study reported significant delays in patient recruitment and enrolment in clinical trials and an increase in the median time from protocol approval to completion of data collection.[4]

Conventional trial designs need specific clinical or public health inputs to determine the sample size, including outcome variability, treatment intensity, treatment duration, outcomes, and effect sizes, which may not be available at the design stage.[5],[6] The amount of variability in real-life situations compared to the controlled environment of a trial is another factor to consider. Commonly used frequentist statistical methods lead to larger sample sizes, partly because of this additional variability and partly because trials often compare interventions whose effectiveness differs by small margins.[2] Larger sample sizes mean further delays in enrolment and trial completion, and more time and resources to complete the trial and the analysis. Invariably, this results in more noncompleted trials, a longer lead time for interventions, and increased costs to the public as research and development costs are transferred to the consumer.
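As a rough illustration of the sample-size pressure described above, the sketch below applies the standard frequentist normal-approximation formula for comparing two proportions. The response rates (60% vs. 50%) are hypothetical, chosen only to show how a clinically modest margin pushes the per-arm sample size toward several hundred participants.

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(p1, p2, alpha=0.05, power=0.80):
    """Per-arm sample size for a two-sided two-proportion z-test
    (normal approximation, equal allocation to both arms)."""
    z = NormalDist().inv_cdf
    z_a = z(1 - alpha / 2)   # critical value for the two-sided alpha
    z_b = z(power)           # quantile for the desired power
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_a + z_b) ** 2 * var / (p1 - p2) ** 2)

# Hypothetical rates: 60% response vs. 50% -- a 10-point margin.
print(n_per_arm(0.60, 0.50))  # 385 participants per arm
```

Halving the margin to 5 points roughly quadruples the requirement, which is the cost driver the frequentist fixed design cannot escape.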

Fixed trial designs do perform interim analyses at fixed intervals, with interim stopping rules and exit criteria. However, a disadvantage of the fixed trial design is the inability to add a newer, possibly better, intervention to the trial or to drop an underperforming one. A comparison of results at a single endpoint as a dichotomous significant-or-nonsignificant outcome differs from real-world health care, where evidence accumulates dynamically over time. In a fixed design, changes can be made only after completion of the entire trial and through a new trial. The inability to make optimal use of ancillary and new information as it arises delays the transition of trial results into clinical and policy domains. These delays assume greater significance during epidemics and pandemics, where rapid responses are needed. It is therefore essential to improve operational efficiency, analytic efficiency, and the application of results to a larger population within a reasonable time frame and cost.[2],[4],[5],[7],[8],[9],[10]


What are Adaptive Designs?


An adaptive design (AD) is a clinical trial design that modifies aspects of an ongoing trial based on predetermined strategies, using trial data as they are collected, while retaining the validity and integrity of the trial.[11] Adaptive methods have been described since the 1960s, with gradual progress toward wider acceptance by the clinical community and trialists.[11],[12],[13],[14],[15],[16] The United States Congress passed the 21st Century Cures Act in 2016 with instructions to the Food and Drug Administration to provide updated guidance on AD.[17]

ADs offer the possibility to adapt, based on a predetermined plan, one or more of the following [Table 1]: reassessment of the study sample size; allocation to different treatment arms; the number of subjects in each arm, with a focus on recruiting more subjects likely to benefit; abandonment of ineffective treatments or doses; and early stopping of the trial, either for success or for lack of effectiveness.[18],[19],[20],[21] The actual power of the study at any point during the trial is assessed by event-based evaluations and sample size reassessments.[18],[21] The randomization ratio can be changed during the trial using response-adaptive randomization, so that newly enrolled patients are more likely to be assigned to the favorable treatment arm.[19] Trial eligibility criteria and outcome evaluation can be adapted using adaptive enrichment, modifying eligibility criteria and clinical or biochemical outcomes to enroll patients from a subgroup with a more favorable response.[20] Seamless adaptive trial designs allow the trial to continue from one phase to another, usually from Phase II to Phase III, with the Phase II results determining the initial allocation ratio, the planned total sample size, and a potentially enriched population group for the subsequent Phase III.[20],[21]
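One common way to implement response-adaptive randomization is a Bayesian, Thompson-style allocation, in which the probability of assigning the next patient to an arm tracks the posterior probability that the arm is better. The sketch below is a minimal Monte Carlo illustration with hypothetical interim counts; it is not the method of any specific trial, and the Beta(1, 1) priors are an assumption.

```python
import numpy as np

def prob_A_better(succ_a, fail_a, succ_b, fail_b, draws=100_000, seed=0):
    """Posterior probability that arm A's response rate exceeds arm B's,
    under independent Beta(1, 1) priors, estimated by Monte Carlo."""
    rng = np.random.default_rng(seed)
    p_a = rng.beta(succ_a + 1, fail_a + 1, draws)
    p_b = rng.beta(succ_b + 1, fail_b + 1, draws)
    return float((p_a > p_b).mean())

# Hypothetical interim data: arm A = 30/40 responders, arm B = 15/40.
alloc_to_A = prob_A_better(30, 10, 15, 25)
print(alloc_to_A)  # close to 1, so new patients skew toward arm A
```

With no interim evidence either way (equal counts), the same quantity sits near 0.5 and the allocation stays balanced, which is the sense in which the randomization "adapts" to accumulating data.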
Table 1: Some types of adaptive designs


The AD introduces a continuous review-adapt element into the linear path of the fixed trial design [Figure 1], improving the efficiency of the trial.[22] These continuous reviews and adaptations are based on preplanned interim analyses and are not ad hoc or unplanned decisions. Modifications to the trial design based on these interim analyses are implemented without affecting the integrity and validity of the trial. Integrity implies that trial data and processes have not been compromised and that there is no data loss or leakage at the interim analysis stage.[23] Validity implies that the trial will answer the original research question, including treatment effects, P values, and confidence or credible intervals, using appropriate statistical methods.[24],[25],[26],[27],[28],[29]
Figure 1: Adaptive design planning process



Planning and Interpretation of Adaptive Trials


Fixed trial designs and ADs share many similarities, but an AD needs more extensive pretrial planning to determine the interim analyses and possible modifications. Fixed trial designs also have interim analyses to determine whether the trial should be stopped early; however, these analyses are based on monitoring boundaries that are not adaptive.[30] ADs use decision rules that are prespecified before starting the trial. The most common include rules to terminate treatment arms or the trial and to modify the allocation ratio between treatments based on the interim analyses.[9] Other decision rules include predefined quantitative criteria for re-estimating the sample size, narrowing patient eligibility, or selecting new arms for more optimal dose-response escalation. The choice of outcomes, the effect of that choice on the design, the time needed to measure each outcome, and its relevance must all be considered. The outcomes of interest must be clinically relevant and correlate with the primary outcomes of the trial.[9] Shorter times to observe the outcome, and therefore to collect sufficient interim data, can help achieve efficiency and ethical gains from adaptations before the trial is terminated.[9] The patient information sheet and consent forms should include details of all planned adaptations. It is recommended that the analysis plan and decision rules be determined in consultation with a biostatistician to ensure the validity of the results.
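Why interim looks need prespecified monitoring boundaries such as O'Brien and Fleming's[30] can be seen by simulation: testing accumulating data at the unadjusted fixed-design threshold (|z| > 1.96) at every look inflates the overall type 1 error well beyond the nominal 5%. The design numbers below (five looks, 20 patients per arm per look) are arbitrary assumptions chosen only for illustration.

```python
import numpy as np

def naive_rejection_rate(looks=5, n_per_look=20, n_trials=2000, seed=1):
    """Simulate two-arm trials under the null (no treatment effect) and
    reject whenever any interim z-statistic exceeds 1.96 in magnitude."""
    rng = np.random.default_rng(seed)
    rejected = 0
    for _ in range(n_trials):
        a = rng.standard_normal(looks * n_per_look)  # treatment outcomes
        b = rng.standard_normal(looks * n_per_look)  # control outcomes
        for k in range(1, looks + 1):
            n = k * n_per_look
            z = (a[:n].mean() - b[:n].mean()) / np.sqrt(2.0 / n)
            if abs(z) > 1.96:   # unadjusted fixed-design threshold
                rejected += 1
                break
    return rejected / n_trials

# Known result: roughly 14% false positives for 5 unadjusted looks.
print(naive_rejection_rate())
```

Group-sequential boundaries spend the 5% error budget across the looks (O'Brien-Fleming boundaries are very strict early and close to 1.96 at the final look), which is exactly the kind of prespecified decision rule the text describes.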

Fixed trial designs usually report differences in interventions as effect sizes, a difference in means or proportions, a P value, and 95% confidence intervals around the point estimates. These are used to form a dichotomous opinion on whether there is a significant difference between the treatment arms. Other metrics used to interpret trial results include the accuracy of estimation, the probability of identifying the true best treatment, and the best treatment dosage. Data analysis in an AD, however, requires combining data from different stages of the trial. Computing estimated treatment effects, P values, and confidence intervals with the statistical approaches used in a fixed trial design may lead to errors, as the adaptations can affect the estimated treatment effect.[31] Pretrial simulations and appropriate statistical techniques help reduce errors in determining differences in treatment effects. Several statistical methods are useful for estimating treatment effects in AD, including unbiased estimators, bias-corrected maximum likelihood estimators, median unbiased estimators, shrinkage approaches, and bootstrapping.[32],[33],[34],[35],[36],[37] Computation of P values, confidence intervals, and type 1 errors depends on the estimation of treatment effects and needs appropriate statistical techniques with adjustment for multiple testing.[38] Several Bayesian methods offer advantages over the commonly used frequentist methods, as multiple tests of the data are statistically possible without adjustment in a Bayesian framework.[39],[40],[41],[42],[43],[44],[45] We do not go into a detailed explanation of the statistical processes in this paper.
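The warning above, that fixed-design estimators can mislead in an AD, is easy to demonstrate: trials that stop early for efficacy overestimate the treatment effect on average, which is why bias-corrected and median-unbiased estimators are recommended. The sketch below uses hypothetical design numbers (true standardized effect 0.3, early stop if z > 1.96 on the first 30 patients per arm) purely to show the direction and rough size of the bias.

```python
import numpy as np

def mean_effect_when_stopped_early(true_delta=0.3, n1=30,
                                   n_trials=20_000, seed=2):
    """Among simulated two-arm trials that stop at the first interim look
    (z > 1.96 on n1 patients per arm, unit variance outcomes), return the
    mean of the naive effect estimate taken at the moment of stopping."""
    rng = np.random.default_rng(seed)
    estimates = []
    for _ in range(n_trials):
        a = rng.normal(true_delta, 1.0, n1)   # treatment arm
        b = rng.normal(0.0, 1.0, n1)          # control arm
        diff = a.mean() - b.mean()
        z = diff / np.sqrt(2.0 / n1)
        if z > 1.96:                           # early stop for efficacy
            estimates.append(diff)
    return float(np.mean(estimates))

# The naive estimate among early stoppers clearly exceeds the true 0.3.
print(mean_effect_when_stopped_early())
```

The inflation arises by selection: only trials whose random fluctuations happened to favor treatment cross the boundary early, so conditioning on early stopping conditions on an overestimate.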

The methods used for interim analyses can affect the results of the trial. Operational bias can be introduced as knowledge of, or speculation about, interim results can alter investigator and patient behavior.[46],[47] Careful consideration must be given to the confidentiality of the data analysis process, who has access to the data, and the possible role of trial sponsors in decision-making. Operational bias can also be introduced through changes in the care provided and in the assessment of outcomes, undermining the validity and credibility of the trial unless these are carefully planned changes that are analyzed appropriately. Such changes can introduce heterogeneity into the study population (as patient eligibility and enrolment may change) and need exploration of key patient characteristics and results by trial stage and treatment group.[8]


Conclusion


AD offers several advantages over the fixed trial design, including the ability to limit exposure of participants to unnecessary treatments, enrolment of participants into more promising treatment arms, the ability to stop the study early, and a reduced risk of underpowered trials through interim analyses and sample size reassessments. These translate into reduced time to completion of the trial and lower costs, with additional benefits to the end consumers. However, the advantages of AD must be balanced against the more detailed planning and biostatistical support needed compared with a fixed trial design and frequentist methods of analysis.

Financial support and sponsorship

Nil.

Conflicts of interest

There are no conflicts of interest.



 
References

1. Friedman L, Furberg C, DeMets D, Reboussin D, Granger C. Fundamentals of Clinical Trials. 5th ed. Cham, Switzerland: Springer International Publishing; 2010. p. 550.
2. Luce BR, Kramer JM, Goodman SN, Connor JT, Tunis S, Whicher D, et al. Rethinking randomized clinical trials for comparative effectiveness research: The need for transformational change. Ann Intern Med 2009;151:206-9.
3. Bothwell LE, Greene JA, Podolsky SH, Jones DS. Assessing the gold standard-lessons from the history of RCTs. N Engl J Med 2016;374:2175-81.
4. Getz KA, Wenger J, Campo RA, Seguine ES, Kaitin KI. Assessing the impact of protocol design changes on clinical trial performance. Am J Ther 2008;15:450-7.
5. Ryan EG, Bruce J, Metcalfe AJ, Stallard N, Lamb SE, Viele K, et al. Using Bayesian adaptive designs to improve phase III trials: A respiratory care example. BMC Med Res Methodol 2019;19:99.
6. Lewis RJ. The pragmatic clinical trial in a learning health care system. Clin Trials 2016;13:484-92.
7. Bothwell LE, Avorn J, Khan NF, Kesselheim AS. Adaptive design clinical trials: A review of the literature and ClinicalTrials.gov. BMJ Open 2018;8:e018320.
8. Pallmann P, Bedding AW, Choodari-Oskooei B, Dimairo M, Flight L, Hampson LV, et al. Adaptive designs in clinical trials: Why use them, and how to run and report them. BMC Med 2018;16:29.
9. Thorlund K, Haggstrom J, Park JJ, Mills EJ. Key design considerations for adaptive clinical trials: A primer for clinicians. BMJ 2018;360:k698.
10. Bhatt DL, Mehta C. Adaptive designs for clinical trials. N Engl J Med 2016;375:65-74.
11. Dimairo M, Pallmann P, Wason J, Todd S, Jaki T, Julious SA, et al.; ACE Consensus Group. The adaptive designs CONSORT extension (ACE) statement: A checklist with explanation and elaboration guideline for reporting randomised trials that use an adaptive design. BMJ 2020;369:m115.
12. Simon R. Adaptive treatment assignment methods and clinical trials. Biometrics 1977;33:743-9.
13. Pocock SJ. Allocation of patients to treatment in clinical trials. Biometrics 1979;35:183-97.
14. Simon R. A decade of progress in statistical methodology for clinical trials. Stat Med 1991;10:1789-817.
15. Emerson SS. Issues in the use of adaptive clinical trial designs. Stat Med 2006;25:3270-96.
16. Meurer WJ, Lewis RJ, Tagle D, Fetters MD, Legocki L, Berry S, et al. An overview of the adaptive designs accelerating promising trials into treatments (ADAPT-IT) project. Ann Emerg Med 2012;60:451-7.
17. United States Government. Rules Committee Print 114-67, text of House Amendment to the Senate, Amendment to H.R. 34, Tsunami Warning, Education, and Research Act of 2015, 2016:162-3. 114th Congress. Available from: http://docs.house.gov/billsthisweek/20161128/CPRT-114-HPRT-RU00-SAHR34.pdf. [Last accessed on 2020 Dec 02].
18. Guyatt GH, Mills EJ, Elbourne D. In the era of systematic reviews, does the size of an individual trial still matter. PLoS Med 2008;5:e4.
19. Bauer P, Koenig F. The reassessment of trial perspectives from interim data-a critical view. Stat Med 2006;25:23-36.
20. Ning J, Huang X. Response-adaptive randomization for clinical trials with adjustment for covariate imbalance. Stat Med 2010;29:1761-8.
21. Simon N, Simon R. Adaptive enrichment designs for clinical trials. Biostatistics 2013;14:613-25.
22. Lorch U, Berelowitz K, Ozen C, Naseem A, Akuffo E, Taubel J. The practical application of adaptive study design in early phase clinical trials: A retrospective analysis of time savings. Eur J Clin Pharmacol 2012;68:543-51.
23. Fleming TR, Sharples K, McCall J, Moore A, Rodgers A, Stewart R. Maintaining confidentiality of interim data to enhance trial integrity and credibility. Clin Trials 2008;5:157-67.
24. Bauer P, Koenig F, Brannath W, Posch M. Selection and bias-two hostile brothers. Stat Med 2010;29:1-3.
25. Posch M, Maurer W, Bretz F. Type I error rate control in adaptive designs for confirmatory clinical trials with treatment selection at interim. Pharm Stat 2011;10:96-104.
26. Graf AC, Bauer P. Maximum inflation of the type 1 error rate when sample size and allocation rate are adapted in a pre-planned interim look. Stat Med 2011;30:1637-47.
27. Graf AC, Bauer P, Glimm E, Koenig F. Maximum type 1 error rate inflation in multiarmed clinical trials with adaptive interim sample size modifications. Biom J 2014;56:614-30.
28. Magirr D, Jaki T, Posch M, Klinglmueller F. Simultaneous confidence intervals that are compatible with closed testing in adaptive designs. Biometrika 2013;100:985-96.
29. Kimani PK, Todd S, Stallard N. A comparison of methods for constructing confidence intervals after phase II/III clinical trials. Biom J 2014;56:107-28.
30. O'Brien PC, Fleming TR. A multiple testing procedure for clinical trials. Biometrics 1979;35:549-56.
31. Jennison C, Turnbull BW. Analysis following a sequential test. In: Group Sequential Methods with Applications to Clinical Trials. Boca Raton: Chapman & Hall/CRC; 2000. p. 171-87.
32. Whitehead J. On the bias of maximum likelihood estimation following a sequential test. Biometrika 1986;73:573-81.
33. Jovic G, Whitehead J. An exact method for analysis following a two-stage phase II cancer clinical trial. Stat Med 2010;29:3118-25.
34. Carreras M, Brannath W. Shrinkage estimation in two-stage adaptive designs with midtrial treatment selection. Stat Med 2013;32:1677-90.
35. Brückner M, Titman A, Jaki T. Estimation in multi-arm two-stage trials with treatment selection and time-to-event endpoint. Stat Med 2017;36:3137-53.
36. Bowden J, Wason J. Identifying combined design and analysis procedures in two-stage trials with a binary end point. Stat Med 2012;31:3874-84.
37. Choodari-Oskooei B, Parmar MK, Royston P, Bowden J. Impact of lack-of-benefit stopping rules on treatment effect estimates of two-arm multi-stage (TAMS) trials with time to event outcome. Trials 2013;14:23.
38. Posch M, Koenig F, Branson M, Brannath W, Dunger-Baldauf C, Bauer P. Testing and estimation in flexible group sequential designs with adaptive treatment selection. Stat Med 2005;24:3697-714.
39. Thall PF, Wathen JK. Practical Bayesian adaptive randomisation in clinical trials. Eur J Cancer 2007;43:859-66.
40. Jansen JO, Pallmann P, MacLennan G, Campbell MK; UK-REBOA Trial Investigators. Bayesian clinical trial designs: Another option for trauma trials? J Trauma Acute Care Surg 2017;83:736-41.
41. Kimani PK, Glimm E, Maurer W, Hutton JL, Stallard N. Practical guidelines for adaptive seamless phase II/III clinical trials that use Bayesian methods. Stat Med 2012;31:2068-85.
42. Liu S, Lee JJ. An overview of the design and conduct of the BATTLE trials. Chin Clin Oncol 2015;4:33.
43. Cheng Y, Shen Y. Bayesian adaptive designs for clinical trials. Biometrika 2005;92:633-46.
44. Lewis RJ, Lipsky AM, Berry DA. Bayesian decision-theoretic group sequential clinical trial design based on a quadratic loss function: A frequentist evaluation. Clin Trials 2007;4:5-14.
45. Ventz S, Trippa L. Bayesian designs and the control of frequentist characteristics: A practical solution. Biometrics 2015;71:218-26.
46. Gallo P. Confidentiality and trial integrity issues for adaptive designs. Drug Inf J 2006;40:445-50.
47. Broglio KR, Stivers DN, Berry DA. Predicting clinical trial results based on announcements of interim analyses. Trials 2014;15:73.
