Reanalyses of data from randomized clinical trials (RCTs) are uncommon; once a product has failed, it isn't smart to keep putting more money into it.
Some critics insist that patients should have access to failed trial data and complain that only successful trials get published, but that thinking is simplistic anti-pharmaceutical posturing. Giving people at universities or in advocacy groups who are just looking for problems access to raw data risks trial patient confidentiality, along with the more likely inappropriate use of data sets, resulting in spurious findings. The people most likely to complain about unpublished trials are also the most likely to release sensitive information, and "rogue" reanalysis by non-experts, or by analysts with conflicts of interest, could also result.
But an examination of the reanalyses that have been conducted finds that about one-third reached conclusions different from those of the original article regarding the types and numbers of patients who should be treated, according to a paper in JAMA. Companies can't open up trial data to militants, but it may be worth spending a little money to make sure things did, or did not, work the way they thought.
Shanil Ebrahim, Ph.D., of Stanford University, Stanford, Calif., and colleagues conducted an electronic search of MEDLINE to identify all published studies (through March 9, 2014) that completed a reanalysis of individual patient data from previously published RCTs addressing the same hypothesis as the original RCT. The primary outcomes examined were changes in direction and magnitude of treatment effect, statistical significance, and interpretation about the types or numbers of patients who should be treated.
The researchers identified 37 reanalyses of patient-level data from previously published RCTs (reported in 36 articles). Most of the reanalyses were completed by authors involved in the original trial; five were performed by entirely independent authors. Reanalyses differed most commonly in statistical or analytical approaches (n = 18) and in definitions or measurements of the outcome of interest (n = 12). Four reanalyses changed the direction and two changed the magnitude of the treatment effect; four led to changes in the statistical significance of findings.
Approximately one-third (35 percent) of the reanalyses led to interpretations different from those of the original articles: 3 (8 percent) concluded that different patients should be treated, 1 (3 percent) that fewer patients should be treated, and 9 (24 percent) that more patients should be treated.
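Those percentages follow directly from the reported counts out of 37 total reanalyses. A minimal sketch of the arithmetic (counts taken from the figures above; the variable names are illustrative, not from the paper):

```python
# Reported counts of reanalyses whose interpretation changed,
# out of 37 total reanalyses (figures as summarized above).
TOTAL = 37
changed = {
    "different patients should be treated": 3,
    "fewer patients should be treated": 1,
    "more patients should be treated": 9,
}

for label, count in changed.items():
    print(f"{label}: {count}/{TOTAL} = {count / TOTAL:.0%}")

# Combined: 13/37, roughly one-third (35 percent)
total_changed = sum(changed.values())
print(f"any changed interpretation: {total_changed}/{TOTAL} = {total_changed / TOTAL:.0%}")
```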
"It is difficult to assess whether these changes in trial conclusions led eventually to major changes in clinical practice and, if so, how large these changes were. Clinical practice choices depend only partly on trial evidence, and sometimes multiple additional trials exist that inform the same question. Nevertheless, when contradicting messages exist, it is unclear which of the 2 discrepant articles will have more influence: the original is usually published in more influential journals, but the subsequent reanalysis may be viewed as a more correct appraisal of the data," the authors write.