This post covers another two spin methods: reporting statistically significant secondary endpoints (outcomes) in the abstract when the primary endpoint (outcome) of a randomized controlled trial (RCT) is non-significant, and not reporting the adverse effects of interventions.
RCTs carry the highest strength of evidence among primary research designs. In this design, a new treatment or intervention is compared against an existing one. We do not always find statistically significant results in the experimental arm. In such situations, evidence exists that researchers tend to give undue prominence to secondary outcomes, pushing the primary outcome into the background when it is not statistically significant.
Let us find out how it occurs in context.
A group of Canadian researchers reviewed 164 papers on randomized clinical trials carried out to determine the efficacy of new treatment methods for breast cancer. They extracted the papers from PubMed, published between 1995 and 2011. The breakdown by type of trial was as follows: 148 on new drugs, 11 on new radiation methods, and 5 on new surgical methods. The main outcome (primary endpoint) was survival – overall, disease-free, or progression-free – measured in years. This paper appeared in the journal Annals of Oncology in 2013.
These researchers found that 72 of the 164 trials (43.9 percent) reported statistically significant primary outcomes (endpoints); the remaining 92 trials (56.1 percent) did not.
What did they find with regard to spin?
Reporting the trials as positive based on statistically significant secondary outcomes
They found that 54 of the 92 trials (59 percent) with statistically non-significant primary outcomes (endpoints) were reported as positive in the abstract on the basis of statistically significant secondary outcomes (endpoints). However, the researchers do not elaborate in the paper on what these secondary outcomes were.
Interestingly, this practice did not show any significant association with the journal’s impact factor.
Inaccurate reporting of adverse effects (toxicity) of the interventions
This warrants a little explanation. The researchers used a hierarchical scale from 1 (excellent) to 7 (very poor). If severe and life-threatening toxicities were not mentioned in the abstract, they classified the paper as poor: somewhere between 5 and 7 on the scale.
According to this scale, they had to classify 110 papers (67 percent of all the papers) as "poor"; notably, this includes not only trials with non-significant primary outcomes but also trials with significant primary outcomes.
Another interesting finding was that although most of the trials – 103 (almost two-thirds) – were funded by industry, the researchers could not find a significant association between the funding source and the reporting or non-reporting of toxicities.
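As a quick sanity check, the percentages quoted throughout this post can be reproduced from the raw counts reported in the study (164 trials in total). The variable names below are mine, chosen for readability; the numbers are those given above.

```python
# Arithmetic check of the percentages reported in the review,
# using the counts described in the post.
total_trials = 164
significant_primary = 72        # trials with significant primary endpoints
nonsignificant_primary = 92     # trials with non-significant primary endpoints
spun_abstracts = 54             # non-significant trials reported as positive
poor_toxicity_reporting = 110   # papers rated 5-7 ("poor") on toxicity reporting
industry_funded = 103           # industry-funded trials

# The two primary-endpoint groups should account for every trial.
assert significant_primary + nonsignificant_primary == total_trials

print(round(100 * significant_primary / total_trials, 1))     # 43.9
print(round(100 * nonsignificant_primary / total_trials, 1))  # 56.1
print(round(100 * spun_abstracts / nonsignificant_primary))   # 59
print(round(100 * poor_toxicity_reporting / total_trials))    # 67
print(round(100 * industry_funded / total_trials))            # 63, "almost two-thirds"
```

Note that the 59 percent spin figure is computed against the 92 non-significant trials, while the toxicity and funding figures are computed against all 164 trials; mixing up these denominators is itself a common source of misreporting.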