Randomized controlled trial

Editor-In-Chief: C. Michael Gibson, M.S., M.D.

Overview

A randomized controlled trial (RCT) is a scientific procedure most commonly used in testing medicines or medical procedures. RCTs are considered the most reliable form of scientific evidence because random allocation minimizes spurious causality and allocation bias. RCTs are mainly used in clinical studies, but are also employed in other sectors such as judicial, educational, and social research. Clinical RCTs involve allocating treatments to subjects at random, which ensures that the different treatment groups are 'statistically equivalent'.

Sellers of medicines throughout the ages have had to convince their consumers that the medicine works. As science has progressed, public expectations have risen, and government health budgets have become ever tighter, pressure has grown for a reliable system to do this. Moreover, the public's concern for the dangers of medical interventions has spurred both legislators and administrators to provide an evidential basis for licensing or paying for new procedures and medications. In most modern health-care systems all new medicines and surgical procedures therefore have to undergo trials before being approved.

Trials are used to establish the average efficacy of a treatment as well as to learn about its most frequently occurring side-effects. This is meant to address the following concerns. First, the effects of a treatment may be small and therefore undetectable except when studied systematically in a large population. Second, biological organisms (including humans) are complex and do not react to the same stimulus in the same way, which makes inference from single clinical reports very unreliable and generally unacceptable as scientific evidence. Third, some conditions will spontaneously go into remission, and many extant reports describe miraculous cures with no discernible cause. Finally, it is well established that the simple process of administering a treatment may have direct psychological effects on the patient, sometimes very powerful ones, known as the placebo effect.

Types of trials

Randomized trials are employed to test efficacy while avoiding these factors. Trials may be open, blind or double-blind.

Open trial

In an open trial, the researcher knows the full details of the treatment, and so does the patient. These trials are open to challenge for bias, and they do nothing to reduce the placebo effect. However, sometimes they are unavoidable, particularly in relation to surgical techniques, where it may not be possible or ethical to hide from the patient which treatment he or she received. Usually this kind of study design is used in bioequivalence studies.

Blind trials

Single-blind trial

In a single-blind trial, the researcher knows the details of the treatment but the patient does not. Because the patient does not know which treatment is being administered (the new treatment or another treatment), any placebo effect should apply equally to both groups. In practice, since the researcher knows, it is possible for them to treat the patient differently or to subconsciously hint at important treatment-related details, thus influencing the outcome of the study.

Double-blind trial

In a double-blind trial, one researcher allocates a series of numbers to 'new treatment' or 'old treatment'. The second researcher is told only the numbers, not what they have been allocated to. Since the second researcher does not know, they cannot tell the patient, directly or otherwise, and cannot give in to patient pressure to provide the new treatment. With random allocation, there is also often a more realistic distribution of the sexes and ages of patients. Therefore, double-blind randomized trials are preferred, as they tend to give the most accurate results.

Triple-blind trial

Some randomized controlled trials are considered triple-blinded, although the meaning of this may vary according to the exact study design. The most common meaning is that the subject, researcher and person administering the treatment (often a pharmacist) are blinded to what is being given. Alternately, it may mean that the patient, researcher and statistician are blinded. These additional precautions are often in place with the more commonly accepted term "double blind trials", and thus the term "triple-blinded" is infrequently used. However, it connotes an additional layer of security to prevent undue influence of study results by anyone directly involved with the study.

Aspects of control in clinical trials

Traditionally, the 'control' in randomized controlled trials refers to studying treated patients not in isolation but in comparison with control groups of patients who do not receive the treatment under study. The comparison gives investigators important clues to the effectiveness of the treatment, its side effects, and the parameters that modify these effects.

Should control groups use placebo or active controls?

Placebo controls are important because the placebo effect can often be strong. The more a subject believes an unknown drug costs, the stronger its placebo effect.[1] The use of historical rather than concurrent controls may lead to exaggerated estimates of effect.[2]

Comparing a new intervention to a placebo control may not be ethical when an accepted, effective treatment exists. In this case, the new intervention should be compared to the active control to establish whether the standard of care should change.[3] The observation that industry-sponsored research may be more likely to produce trials with positive results suggests that industry is not picking the most appropriate comparison groups.[4] However, it is possible that industry is simply better at predicting which new innovations are likely to be successful, discontinuing research on less promising interventions before the trial stage.

There are times when placebo control is appropriate even when there is accepted, effective treatment.[5][6][7]

The placebo effect can be seen in controlled trials of surgical interventions with the control group receiving a sham procedure.[8][9][10] Guidelines by the American Medical Association address the use of placebo surgery.[11]

Control group optimization

Optimization of the delivery of health care in control groups is important. This was illustrated in a randomized controlled trial of extracorporeal membrane oxygenation (ECMO). The trial showed no improvement in survival, although the survival of patients treated with ECMO was similar to previously published results.[12] However, the authors had protocolized the management of mechanical ventilation in the control group and found high adherence to the protocols and higher-than-expected survival in the control group. In a subsequent randomized controlled trial, the authors found that protocolized management of mechanical ventilation was better than usual-care management of mechanical ventilation.[13]

Types of control groups

  • Placebo concurrent control group
  • Dose-response concurrent control group
  • Active concurrent control group
  • No treatment concurrent control group
  • Historical control

Randomization in clinical trials

There are two processes involved in randomizing patients to different interventions. The first is choosing a randomization procedure to generate a random and unpredictable sequence of allocations. This may be a simple random assignment of patients to any of the groups at equal probabilities, or it may be complex and adaptive. The second, more practical issue is allocation concealment: the stringent precautions taken to ensure that the group assignments of patients are not revealed to the study investigators before patients are definitively allocated to their respective groups.

Randomization procedures

There are several statistical issues to consider in generating the randomization sequences:[14]

  • Balance: since most statistical tests are most powerful when the groups being compared have equal sizes, it is desirable for the randomization procedure to generate similarly-sized groups.
  • Selection bias: depending on the amount of structure in the randomization procedure, investigators may be able to infer the next group assignment by guessing which of the groups has been assigned the least up to that point. This breaks allocation concealment (see below) and can lead to bias in the selection of patients for enrollment in the study.
  • Accidental bias: if important covariates that are related to the outcome are ignored in the statistical analysis, estimates arising from that analysis may be biased. The potential magnitude of that bias, if any, will depend on the randomization procedure.

Complete randomization

In this commonly used and intuitive procedure, each patient is effectively randomly assigned to any one of the groups. It is simple and optimal in the sense of robustness against both selection and accidental biases. However, its main drawback is the possibility of imbalances between the groups. In practice, imbalance is only a concern for small sample sizes (n < 200).
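A minimal sketch of complete randomization in Python (the group labels and sample size are illustrative only): each patient is assigned independently and uniformly, so group sizes are not guaranteed to be equal.

```python
import random

def complete_randomization(n_patients, groups=("E", "C"), seed=None):
    """Assign each patient independently and uniformly at random to a group."""
    rng = random.Random(seed)
    return [rng.choice(groups) for _ in range(n_patients)]

assignments = complete_randomization(20, seed=42)
# With a small sample, group sizes can be noticeably unequal:
print(assignments.count("E"), assignments.count("C"))
```

Because each assignment is independent, nothing constrains the two groups to end up the same size, which is exactly the imbalance concern noted above for small trials.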

Permuted block randomization

In this form of restricted randomization, blocks of k patients are created such that balance is enforced within each block. For instance, let E stand for experimental group and C for control group, then a block of k = 4 patients may be assigned to one of EECC, ECEC, ECCE, CEEC, CECE, and CCEE, with equal probabilities of 1/6 each. Note that there are equal numbers of patients assigned to the experiment and the control group in each block.
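The block construction above can be sketched in Python (block size and labels follow the text's k = 4 example; the function name is illustrative): each block is one of the six balanced orderings, chosen with equal probability.

```python
import itertools
import random

def permuted_block_sequence(n_blocks, block_size=4, seed=None):
    """Generate assignments in balanced blocks: each block of k patients
    contains exactly k/2 'E' and k/2 'C' in a random order."""
    rng = random.Random(seed)
    half = block_size // 2
    # The distinct balanced orderings; for k = 4 these are the six
    # arrangements EECC, ECEC, ECCE, CEEC, CECE, CCEE.
    orderings = sorted(set(itertools.permutations("E" * half + "C" * half)))
    sequence = []
    for _ in range(n_blocks):
        sequence.extend(rng.choice(orderings))
    return sequence

seq = permuted_block_sequence(n_blocks=5, seed=7)
```

Every consecutive block of four assignments contains exactly two E and two C, which enforces the periodic balance described below.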

Permuted block randomization has several advantages. In addition to promoting group balance at the end of the trial, it also promotes periodic balance in the sense that sequential patients are distributed equally between groups. This is particularly important because clinical trials enroll patients sequentially, such that there may be systematic differences between patients entering at different times during the study.

Unfortunately, by enforcing within-block balance, permuted block randomization is particularly susceptible to selection bias. Since, toward the end of each block, the investigators know that the group with the fewest assignments so far must receive proportionally more of the remainder, predicting future group assignments becomes progressively easier. The remedy for this bias is to blind investigators to the group assignments and to the randomization procedure itself.

Strictly speaking, permuted block randomization should be followed by statistical analysis that takes the blocking into account. However, for small block sizes this may become infeasible. In practice it is recommended that intra-block correlation be examined as a part of the statistical analysis.

A special case of permuted block randomization is random allocation, in which the entire sample is treated as one block.

Urn randomization

Covariate-adaptive randomization

When a number of variables may influence the outcome of a trial (for example, patient age, gender, or previous treatments), it is desirable to ensure balance across each of these variables. This can be done with a separate list of randomization blocks for each combination of values, although this is only feasible when the number of lists is small compared to the total number of patients. When the number of variables or possible values is large, a statistical method known as minimization can be used to minimize the imbalance within each of the factors.
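A minimal sketch of minimization in Python, under simplifying assumptions: two hypothetical factors ("sex" and "age_band"), a simple sum of marginal counts as the imbalance score, and a random assignment with probability 1 - p to preserve unpredictability. Real minimization schemes vary in how they score imbalance.

```python
import random
from collections import defaultdict

def minimization_assign(patient, counts, factors, rng, p=0.8):
    """Assign the patient to the group with the smallest marginal imbalance
    over that patient's factor levels; with probability 1 - p, assign at
    random instead, so the next assignment stays unpredictable."""
    groups = list(counts)
    score = {g: sum(counts[g][f][patient[f]] for f in factors) for g in groups}
    best = [g for g in groups if score[g] == min(score.values())]
    chosen = rng.choice(best) if rng.random() < p else rng.choice(groups)
    for f in factors:  # update the marginal counts for the chosen group
        counts[chosen][f][patient[f]] += 1
    return chosen

rng = random.Random(0)
factors = ["sex", "age_band"]  # hypothetical covariates
counts = {g: {f: defaultdict(int) for f in factors} for g in ("E", "C")}
patients = [{"sex": s, "age_band": a}
            for s in ("M", "F") for a in ("<50", ">=50")] * 5
assignments = [minimization_assign(pt, counts, factors, rng) for pt in patients]
```

Because each assignment steers toward the less-loaded group within each factor level, both the overall group sizes and the per-factor margins stay close to balanced.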

Outcome-adaptive randomization

For a randomized trial in human subjects to be ethical, the investigator must believe before the trial begins that all treatments under consideration are equally desirable. At the end of the trial, one treatment may be selected as superior if a statistically significant difference was discovered. Between the beginning and end of the trial is an ethical grey zone. As patients are treated, evidence may accumulate that one treatment is superior, and yet patients are still randomized equally between all treatments until the trial ends.

Outcome-adaptive randomization is a variation on traditional randomization designed to address the ethical issue raised above. Randomization probabilities are adjusted continuously throughout the trial in response to the data. The probability of a treatment being assigned increases as the probability of that treatment being superior increases. The statistical advantages of randomization are retained, while on average more patients are assigned to superior treatments.
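One way to sketch outcome-adaptive randomization is Thompson-style sampling, shown here as an illustration (the arm names, response rates, and Beta-posterior scheme are all hypothetical, not a description of any specific trial): each arm's assignment probability rises with the accumulating evidence that it is superior.

```python
import random

def adaptive_assign(successes, failures, rng):
    """Draw once from each arm's Beta(1+s, 1+f) posterior and assign the
    patient to the arm with the highest draw, so better-performing arms
    are chosen with higher probability as evidence accumulates."""
    draws = {arm: rng.betavariate(1 + successes[arm], 1 + failures[arm])
             for arm in successes}
    return max(draws, key=draws.get)

rng = random.Random(0)
successes = {"E": 0, "C": 0}
failures = {"E": 0, "C": 0}
true_response = {"E": 0.6, "C": 0.4}  # hypothetical response rates
for _ in range(200):
    arm = adaptive_assign(successes, failures, rng)
    if rng.random() < true_response[arm]:  # simulated patient outcome
        successes[arm] += 1
    else:
        failures[arm] += 1
```

Early on the two arms are assigned nearly equally; as outcomes accrue, the allocation tilts toward the arm that appears superior, which is the ethical motivation described above.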

Allocation concealment

In practice, in taking care of individual patients, clinical investigators often find it difficult to maintain impartiality. Stories abound of investigators holding sealed envelopes up to the light or ransacking offices to determine group assignments in order to dictate the assignment of their next patient.[15] This introduces selection bias and confounding and distorts the results of the study. Breaking allocation concealment in randomized controlled trials is all the more problematic because, in principle, the randomization should have minimized such biases.

Some standard methods of ensuring allocation concealment include:

  • Sequentially-Numbered, Opaque, Sealed Envelopes (SNOSE)
  • Sequentially-numbered containers
  • Pharmacy controlled
  • Central randomization

Allocation concealment must be planned carefully in the clinical trial protocol and reported in detail in the publication. Recent studies have found that most publications do not report their concealment procedure and, moreover, that most publications that fail to report it also have unclear concealment procedures in their protocols.[16][17]

Difficulties

Biased results are more common, especially in trials with subjective outcomes, in the presence of:[18]

  • Inadequate or unclear random-sequence generation
  • Inadequate or unclear allocation concealment
  • Lack of or unclear double-blinding

A major difficulty in dealing with trial results comes from commercial, political and/or academic pressure. Most trials are expensive to run, and will be the result of significant previous research, which is itself not cheap. There may be a political issue at stake (compare MMR vaccine) or vested interests (compare homeopathy). In such cases there is great pressure to interpret results in a way which suits the viewer, and great care must be taken by researchers to maintain emphasis on clinical facts.

Regarding data analyses of randomized controlled trials, research sponsored by industry may incompletely report or analyze drug toxicity.[19][20] Similarly, industry-sponsored trials may be more likely to omit intention-to-treat analyses.[21] These problems with statistical analyses have led the Journal of the American Medical Association (JAMA) to require independent analysis of data.[22][23] This policy has been associated with a decrease in the number of trials published by JAMA.[24]

Most studies start with a 'null hypothesis' that is being tested (usually along the lines of 'our new treatment x cures as many patients as existing treatment y') and an alternative hypothesis ('x cures more patients than y'). The analysis at the end gives a statistical likelihood, based on the data, of whether the null hypothesis can be safely rejected (saying that the new treatment does, in fact, result in more cures). Nevertheless, this is only a statistical likelihood, so false negatives and false positives are possible. These are generally set at an acceptable level (e.g., a 1% chance of a false result). However, this risk is cumulative: if 200 trials are done (often the case for contentious matters), about two will show contrary results, and there is a tendency for these to be seized upon by those who need such proof for their point of view.
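The cumulative-risk arithmetic above is easy to check directly; a short Python calculation with the numbers from the text (200 independent trials, a 1% per-trial false-positive rate):

```python
# Cumulative risk of false positives over many independent trials.
n_trials = 200
alpha = 0.01  # per-trial chance of a false result

# Expected number of falsely "contrary" trials: about two.
expected_false_positives = n_trials * alpha

# Probability that at least one trial is falsely positive.
p_at_least_one = 1 - (1 - alpha) ** n_trials

print(expected_false_positives)        # 2.0
print(round(p_at_least_one, 3))        # 0.866
```

So even with a stringent 1% threshold per trial, a falsely positive result somewhere among 200 trials is very likely, which is why isolated contrary results deserve skepticism.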

Small study effect

Small trials report stronger effect estimates.[25]

Publication bias

Publication bias refers to the tendency for trials that show a positive, significant effect to be more likely to be published than those that show no effect or are inconclusive.

A variation of publication bias, selective reporting bias (or outcome reporting bias), occurs when several outcomes within a trial are measured but reported selectively depending on the strength and direction of the results.[26] Related terms that have been coined are p-hacking[27] and HARKing (Hypothesizing After the Results are Known).[28]

Omitted-variable bias occurs when an "adjusting variable has an own effect on the dependent variable and is correlated with the variable of interest, excluding this adjusting variable from the regression induces omitted-variable bias".[29]

Trial registration

In September 2004, the International Committee of Medical Journal Editors (ICMJE) announced that all trials starting enrollment after July 1, 2005 must be registered prior to consideration for publication in one of the 12 member journals of the Committee.[30] This move was intended to reduce publication bias, since unpublished negative trials would be more easily discoverable.

Available trial registries include:

Five years after the ICMJE announced its trial registration requirement, registration may still occur late or not at all.[31][32]

Trial registration may have reduced publication bias,[33] although in one study the results did not quite reach significance.[34] More recent assessments of trial registration refute the association.[35][36]

One explanation is that trial registration was associated with study results early on, with the association weakening in later years.

It is not clear how effective trial registration is because many registered trials are never completely published.[37]

Interim analysis - stopping trials early

Trials are increasingly stopped early;[38] however, this may induce a bias that exaggerates results.[39][38] Data safety and monitoring boards that are independent of the trial are commissioned to conduct interim analyses and make decisions about stopping trials early.[40][41]

Reasons to stop a trial early are efficacy, safety, and futility.[42][43]

Regarding efficacy, various rules exist that adjust alpha to decide when to stop a trial early.[44][45][46][47][48] Commonly recommended rules are the O'Brien-Fleming rule, which requires a varying p-value threshold depending on the number of interim analyses, and the Haybittle-Peto rule, which requires p < 0.001 to stop a trial early.[44][45][49]
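The Haybittle-Peto rule is simple enough to sketch directly; a minimal Python illustration (the interim p-values are hypothetical, and the function name is ours):

```python
def haybittle_peto_stop(interim_p, threshold=0.001):
    """Haybittle-Peto rule: recommend stopping early only when an interim
    p-value crosses a very stringent fixed threshold, leaving the final
    analysis to be performed at close to the unadjusted alpha."""
    return interim_p < threshold

# Three hypothetical interim looks at the same trial:
interim_p_values = [0.040, 0.008, 0.0004]
decisions = [haybittle_peto_stop(p) for p in interim_p_values]
print(decisions)  # [False, False, True]: only the third look justifies stopping
```

Note that an interim p of 0.04, nominally "significant" at 0.05, does not trigger early stopping under this rule; that stringency is what keeps the overall Type I error near the planned alpha despite repeated looks.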

Using a more conservative stopping rule reduces the chance of a statistical alpha (Type I) error; however, these rules do not change the fact that the effect size may be exaggerated.[50][45] According to Bassler, "the more stringent the P-value threshold results must cross to justify stopping the trial, the more likely it is that a trial stopped early will overestimate the treatment effect."[43] A review of trials stopped early found that the earlier a trial was stopped, the larger its reported treatment effect,[38] especially if the trial had fewer than 500 total events.[51] Accordingly, examples exist of trials whose interim analyses were significant, but which were continued to a final analysis that was less significant or insignificant.[52][53][54]

Methods to correct for this exaggeration exist.[55][50] A Bayesian approach to interim analysis may help reduce bias and adjust the estimate of effect.[56]

As an alternative to the alpha rules, conditional power can help decide when to stop trials early.[57][58]

Missing data

Several approaches to handling missing data have been reviewed.[59][60] Regarding assigning an outcome to the patient, using a 'last observation carried forward' (LOCF) analysis may introduce biases.[61]
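To make the LOCF idea concrete, here is a minimal Python sketch (the visit scores are invented for illustration): each missing visit is filled with the patient's most recent observed value.

```python
def locf(observations):
    """Last observation carried forward: replace each missing visit (None)
    with the most recent observed value for that patient."""
    filled, last = [], None
    for value in observations:
        if value is not None:
            last = value
        filled.append(last)
    return filled

# A patient who drops out after visit 2: LOCF freezes the score at 6.5.
print(locf([7.0, 6.5, None, None]))  # [7.0, 6.5, 6.5, 6.5]
```

The bias noted in the text arises because the imputed values assume the patient's condition stopped changing at dropout, which can flatter or penalize a treatment depending on the disease's natural course.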

Presentation of results

Results may be presented with misleading "spin".[62]

References

  1. Waber, Rebecca L., Baba Shiv, Ziv Carmon, and Dan Ariely. 2008. Commercial Features of Placebo and Therapeutic Efficacy. JAMA 299, no. 9:1016-1017.
  2. Sacks H, Chalmers TC, Smith H (1982). "Randomized versus historical controls for clinical trials". Am. J. Med. 72 (2): 233–40. PMID 7058834. 
  3. Rothman KJ, Michels KB (1994). "The continuing unethical use of placebo controls". N. Engl. J. Med. 331 (6): 394–8. PMID 8028622. 
  4. Djulbegovic B, Lacevic M, Cantor A; et al. (2000). "The uncertainty principle and industry-sponsored research". Lancet. 356 (9230): 635–8. PMID 10968436. 
  5. Temple R, Ellenberg SS (2000). "Placebo-controlled trials and active-control trials in the evaluation of new treatments. Part 1: ethical and scientific issues". Ann Intern Med. 133: 455–63. PMID 10975964. 
  6. Ellenberg SS, Temple R (2000). "Placebo-controlled trials and active-control trials in the evaluation of new treatments. Part 2: practical issues and specific cases". Ann Intern Med. 133: 464–70. PMID 10975965. 
  7. Emanuel EJ, Miller FG (2001). "The ethics of placebo-controlled trials--a middle ground". N. Engl. J. Med. 345 (12): 915–9. PMID 11565527. 
  8. Cobb LA, Thomas GI, Dillard DH, Merendino KA, Bruce RA (1959). "An evaluation of internal-mammary-artery ligation by a double-blind technic". N. Engl. J. Med. 260 (22): 1115–8. PMID 13657350. 
  9. Dimond EG, Kittle CF, Crockett JE (1960). "Comparison of internal mammary artery ligation and sham operation for angina pectoris". Am. J. Cardiol. 5: 483–6. PMID 13816818. 
  10. Moseley JB, O'Malley K, Petersen NJ; et al. (2002). "A controlled trial of arthroscopic surgery for osteoarthritis of the knee". N. Engl. J. Med. 347 (2): 81–8. PMID 12110735. doi:10.1056/NEJMoa013259. 
  11. Tenery R, Rakatansky H, Riddick FA; et al. (2002). "Surgical "placebo" controls". Ann. Surg. 235 (2): 303–7. PMID 11807373. 
  12. Morris AH, Wallace CJ, Menlove RL, Clemmer TP, Orme JF, Weaver LK; et al. (1994). "Randomized clinical trial of pressure-controlled inverse ratio ventilation and extracorporeal CO2 removal for adult respiratory distress syndrome.". Am J Respir Crit Care Med. 149 (2 Pt 1): 295–305. PMID 8306022. doi:10.1164/ajrccm.149.2.8306022. 
  13. East TD, Heermann LK, Bradshaw RL, Lugo A, Sailors RM, Ershler L; et al. (1999). "Efficacy of computerized decision support for mechanical ventilation: results of a prospective multi-center randomized trial.". Proc AMIA Symp: 251–5. PMC 2232746. PMID 10566359. 
  14. Lachin JM, Matts JP, Wei LJ (1988). "Randomization in Clinical Trials: Conclusions and Recommendations". Controlled Clinical Trials. 9 (4): 365–74. PMID 3203526. 
  15. Schulz KF, Grimes DA (2002). "Allocation concealment in randomised trials: defending against deciphering". Lancet. 359: 614–8. PMID 11867132. 
  16. Pildal J, Chan AW; et al. (2005). "Comparison of descriptions of allocation concealment in trial protocols and the published report: cohort study". BMJ. 330: 1049. PMID 15817527. 
  17. Allocation concealment and blinding: when ignorance is bliss
  18. Savović J, Jones HE, Altman DG, Harris RJ, Jüni P, Pildal J; et al. (2012). "Influence of Reported Study Design Characteristics on Intervention Effect Estimates From Randomized, Controlled Trials.". Ann Intern Med. PMID 22945832. doi:10.7326/0003-4819-157-6-201209180-00537. 
  19. Psaty BM, Kronmal RA (2008). "Reporting mortality findings in trials of rofecoxib for Alzheimer disease or cognitive impairment: a case study based on documents from rofecoxib litigation.". JAMA. 299 (15): 1813–7. PMID 18413875. doi:10.1001/jama.299.15.1813. 
  20. Madigan D, Sigelman DW, Mayer JW, Furberg CD, Avorn J (2012). "Under-reporting of cardiovascular events in the rofecoxib Alzheimer disease studies.". Am Heart J. 164 (2): 186–93. PMID 22877803. doi:10.1016/j.ahj.2012.05.002. 
  21. Melander H; et al. (2003). "Evidence b(i)ased medicine--selective reporting from studies sponsored by pharmaceutical industry: review of studies in new drug applications". BMJ. 326: 1171–3. PMID 12775615. doi:10.1136/bmj.326.7400.1171. 
  22. DeAngelis CD, Fontanarosa PB (2010). "Ensuring integrity in industry-sponsored research: primum non nocere, revisited.". JAMA. 303 (12): 1196–8. PMID 20332409. doi:10.1001/jama.2010.337. 
  23. Fontanarosa PB, Flanagin A, DeAngelis CD (2005). "Reporting conflicts of interest, financial aspects of research, and role of sponsors in funded studies.". JAMA. 294 (1): 110–1. PMID 15998899. doi:10.1001/jama.294.1.110. 
  24. Wager E, Mhaskar R, Warburton S, Djulbegovic B (2010). "JAMA published fewer industry-funded studies after introducing a requirement for independent statistical analysis.". PLoS One. 5 (10): e13591. PMC 2962640. PMID 21042585. doi:10.1371/journal.pone.0013591. 
  25. Dechartres A, Trinquart L, Boutron I, Ravaud P (2013). "Influence of trial sample size on treatment effect estimates: meta-epidemiological study.". BMJ. 346: f2304. PMC 3634626. PMID 23616031. doi:10.1136/bmj.f2304. 
  26. Chang L, Dhruva SS, Chu J, Bero LA, Redberg RF (2015). "Selective reporting in trials of high risk cardiovascular devices: cross sectional comparison between premarket approval summaries and published reports". BMJ. 350: h2613. doi:10.1136/bmj.h2613. 
  27. Simonsohn U, Nelson LD, Simmons JP (2014). "P-curve: a key to the file-drawer.". J Exp Psychol Gen. 143 (2): 534–47. PMID 23855496. doi:10.1037/a0033242. 
  28. Kerr NL (1998). "HARKing: hypothesizing after the results are known.". Pers Soc Psychol Rev. 2 (3): 196–217. PMID 15647155. doi:10.1207/s15327957pspr0203_4. 
  29. Bruns SB, Ioannidis JP (2016). "p-Curve and p-Hacking in Observational Research.". PLoS One. 11 (2): e0149144. PMC 4757561. PMID 26886098. doi:10.1371/journal.pone.0149144. 
  30. De Angelis C, Drazen JM, Frizelle FA; et al. (2004). "Clinical trial registration: a statement from the International Committee of Medical Journal Editors". The New England journal of medicine. 351 (12): 1250–1. PMID 15356289. doi:10.1056/NEJMe048225. 
  31. Law MR, Kawasumi Y, Morgan SG (2011). "Despite law, fewer than one in eight completed studies of drugs and biologics are reported on time on ClinicalTrials.gov.". Health Aff (Millwood). 30 (12): 2338–45. PMID 22147862. doi:10.1377/hlthaff.2011.0172. 
  32. Mathieu S, Boutron I, Moher D, Altman DG, Ravaud P (2009). "Comparison of registered and published primary outcomes in randomized controlled trials.". JAMA. 302 (9): 977–84. PMID 19724045. doi:10.1001/jama.2009.1242. 
  33. Emdin C, Odutayo A, Hsiao A, Shakir M, Hopewell S, Rahimi K; et al. (2015). "Association of cardiovascular trial registration with positive study findings: Epidemiological Study of Randomized Trials (ESORT).". JAMA Intern Med. 175 (2): 304–7. PMID 25545611. doi:10.1001/jamainternmed.2014.6924. 
  34. Dechartres A, Ravaud P, Atal I, Riveros C, Boutron I (2016). "Association between trial registration and treatment effect estimates: a meta-epidemiological study.". BMC Med. 14 (1): 100. PMC 4932748. PMID 27377062. doi:10.1186/s12916-016-0639-x. 
  35. Odutayo A, Emdin CA, Hsiao AJ, Shakir M, Copsey B, Dutton S; et al. (2017). "Association between trial registration and positive study findings: cross sectional study (Epidemiological Study of Randomized Trials-ESORT).". BMJ. 356: j917. PMID 28292744. doi:10.1136/bmj.j917. 
  36. Rasmussen N, Lee K, Bero L (2009). "Association of trial registration with the results and conclusions of published trials of new oncology drugs.". Trials. 10: 116. PMC 2811705. PMID 20015404. doi:10.1186/1745-6215-10-116. 
  37. Ross JS, Mulvey GK, Hines EM, Nissen SE, Krumholz HM (2009). "Trial publication after registration in ClinicalTrials.Gov: a cross-sectional analysis.". PLoS Med. 6 (9): e1000144. PMC 2728480. PMID 19901971. doi:10.1371/journal.pmed.1000144. 
  38. Montori VM, Devereaux PJ, Adhikari NK; et al. "Randomized trials stopped early for benefit: a systematic review". JAMA. 294 (17): 2203–9. PMID 16264162. doi:10.1001/jama.294.17.2203. 
  39. Bassler D, Briel M, Montori VM, Lane M, Glasziou P, Zhou Q; et al. "Stopping randomized trials early for benefit and estimation of treatment effects: systematic review and meta-regression analysis.". JAMA. 303 (12): 1180–7. PMID 20332404. doi:10.1001/jama.2010.310. 
  40. Trotta F, Apolone G, Garattini S, Tafuri G (2008). "Stopping a trial early in oncology: for patients or for industry?". Ann Oncol. doi:10.1093/annonc/mdn042. 
  41. Slutsky AS, Lavery JV. "Data Safety and Monitoring Boards". N. Engl. J. Med. 350 (11): 1143–7. PMID 15014189. doi:10.1056/NEJMsb033476. 
  42. Borer JS, Gordon DJ, Geller NL. "When should data and safety monitoring committees share interim results in cardiovascular trials?". JAMA. 299 (14): 1710–2. PMID 18398083. doi:10.1001/jama.299.14.1710. 
  43. Bassler D, Montori VM, Briel M, Glasziou P, Guyatt G. "Early stopping of randomized clinical trials for overt efficacy is problematic". J Clin Epidemiol. 61 (3): 241–6. PMID 18226746. doi:10.1016/j.jclinepi.2007.07.016. 
  44. Pocock SJ. "When (not) to stop a clinical trial for benefit". JAMA. 294 (17): 2228–30. PMID 16264167. doi:10.1001/jama.294.17.2228. 
  45. Schulz KF, Grimes DA. "Multiplicity in randomised trials II: subgroup and interim analyses". Lancet. 365 (9471): 1657–61. PMID 15885299. doi:10.1016/S0140-6736(05)66516-6. 
  46. Grant A. "Stopping clinical trials early". BMJ. 329 (7465): 525–6. PMID 15345605. doi:10.1136/bmj.329.7465.525. 
  47. O'Brien PC, Fleming TR. "A multiple testing procedure for clinical trials". Biometrics. 35 (3): 549–56. PMID 497341. 
  48. Bauer P, Köhne K. "Evaluation of experiments with adaptive interim analyses". Biometrics. 50 (4): 1029–41. PMID 7786985. This method was used in PMID 18184958. 
  49. DeMets DL, Ellenberg SS, Fleming TJ. Data Monitoring Committees in Clinical Trials: a Practical Perspective. New York: J. Wiley & Sons. ISBN 0-471-48986-7. 
  50. Pocock SJ, Hughes MD (1989). "Practical problems in interim analyses, with particular regard to estimation". Control Clin Trials. 10 (4 Suppl): 209S–221S. PMID 2605969. 
  51. Bassler D; et al. (2010). "Stopping Randomized Trials Early for Benefit and Estimation of Treatment Effects: Systematic Review and Meta-regression Analysis". JAMA. 303 (12): 1180–1187. doi:10.1001/jama.2010.310. 
  52. Pocock S, Wang D, Wilhelmsen L, Hennekens CH (2005). "The data monitoring experience in the Candesartan in Heart Failure Assessment of Reduction in Mortality and morbidity (CHARM) program". Am. Heart J. 149 (5): 939–43. PMID 15894981. doi:10.1016/j.ahj.2004.10.038. 
  53. Wheatley K, Clayton D (2003). "Be skeptical about unexpected large apparent treatment effects: the case of an MRC AML12 randomization". Control Clin Trials. 24 (1): 66–70. PMID 12559643. 
  54. Abraham E, Reinhart K, Opal S; et al. (2003). "Efficacy and safety of tifacogin (recombinant tissue factor pathway inhibitor) in severe sepsis: a randomized controlled trial". JAMA. 290 (2): 238–47. PMID 12851279. doi:10.1001/jama.290.2.238.  Unknown parameter |month= ignored (help)
  55. Hughes MD, Pocock SJ (1988). "Stopping rules and estimation problems in clinical trials". Stat Med. 7 (12): 1231–42. PMID 3231947.  Unknown parameter |month= ignored (help)
  56. Goodman SN (2007). "Stopping at nothing? Some dilemmas of data monitoring in clinical trials". Ann. Intern. Med. 146 (12): 882–7. PMID 17577008.  Unknown parameter |month= ignored (help)
  57. Statistical Monitoring of Clinical Trials: Fundamentals for Investigators. Berlin: Springer. 2005. ISBN 0-387-27781-1. 
  58. Lachin JM (2006). "Operating characteristics of sample size re-estimation with futility stopping based on conditional power". Stat Med. 25 (19): 3348–65. PMID 16345019. doi:10.1002/sim.2455.
  59. Little RJ, D'Agostino R, Cohen ML, Dickersin K, Emerson SS, Farrar JT; et al. (2012). "The prevention and treatment of missing data in clinical trials". N Engl J Med. 367 (14): 1355–60. PMC 3771340. PMID 23034025. doi:10.1056/NEJMsr1203730.
  60. Fleming TR (2011). "Addressing missing data in clinical trials". Ann Intern Med. 154 (2): 113–7. PMID 21242367. doi:10.7326/0003-4819-154-2-201101180-00010.
  61. Hauser WA, Rich SS, Annegers JF, Anderson VE (1990). "Seizure recurrence after a 1st unprovoked seizure: an extended follow-up". Neurology. 40 (8): 1163–70. PMID 2381523. 
  62. Yavchitz A, Boutron I, Bafeta A, Marroun I, Charles P, Mantz J; et al. (2012). "Misrepresentation of randomized controlled trials in press releases and news coverage: a cohort study". PLoS Med. 9 (9): e1001308. PMC 3439420. PMID 22984354. doi:10.1371/journal.pmed.1001308.
