Most often, we read only the abstracts of research papers. We trust that the researchers report their findings accurately. This does not always happen as we expect, particularly when the research ends with non-significant primary outcomes. Instead, we find only significant secondary outcomes highlighted.
This is also a type of spin and academic mischief.
And, sometimes, the significant adverse effects of the study intervention are not reported at all.
These types of spin occur in randomized trials.
We know that randomized trials carry the highest level of evidence strength in research. In this study design, we compare a new treatment or intervention against an existing one. We do not always find statistically significant results in the experimental arm. In those situations, evidence exists that researchers tend to give undue prominence to secondary outcomes, pushing the primary outcome into the background when it is not statistically significant.
Let us find out how it occurs in context.
A group of Canadian researchers reviewed 164 papers on randomized clinical trials carried out to determine the efficacy of new treatment methods for breast cancer. They extracted the papers, published between 1995 and 2011, from PubMed. The breakdown by type of trial was as follows: 148 on new drugs, 11 on new radiation methods, and 5 on new surgical methods. The main outcome (primary endpoint) was survival – either overall, disease-free, or progression-free – in terms of the number of years. This paper appeared in the journal Annals of Oncology in 2013.
These researchers found that 72 of the trials (43.9 per cent) reported statistically significant primary outcomes (endpoints); the remaining 92 trials (56.1 per cent) did not.
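As a quick sanity check, the quoted percentages follow directly from the raw counts (figures as reported in the paper):

```python
# Raw counts from the review of 164 breast cancer trials.
total = 164
significant = 72       # trials with statistically significant primary outcomes
non_significant = 92   # trials without

assert significant + non_significant == total

pct_sig = round(significant / total * 100, 1)        # 43.9
pct_nonsig = round(non_significant / total * 100, 1) # 56.1
print(pct_sig, pct_nonsig)
```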
What did they find about spin?
Reporting the trials as positive based on statistically significant secondary outcomes
They found that 54 of the 92 trials (59 per cent) with statistically non-significant primary outcomes (endpoints) were reported as positive in the abstract on the basis of statistically significant secondary outcomes (endpoints). However, the paper does not elaborate on what these secondary outcomes were.
Interestingly, this practice did not show any significant association with the journal’s impact factor.
Inaccurate reporting of adverse effects (toxicity) of the interventions
This warrants a little explanation. The researchers used a hierarchical scale here, from 1 (excellent) to 7 (very poor). If severe and life-threatening toxicities were not mentioned in the abstract, they classified that paper as poor: somewhere between 5 and 7 on their scale.
According to this scale, they had to classify 110 papers (67 per cent of all the papers) as “poor”; notably, this includes not only trials with non-significant primary outcomes but also trials with significant primary outcomes.
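The two headline percentages above can likewise be checked against the raw counts quoted in the article:

```python
# Check the spin and toxicity-reporting percentages quoted above.
spin_pct = round(54 / 92 * 100)           # trials spun as positive despite
                                          # non-significant primary outcomes
poor_toxicity_pct = round(110 / 164 * 100)  # papers rated "poor" on toxicity reporting
print(spin_pct, poor_toxicity_pct)          # 59 67
```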
Another interesting finding was that although 103 of the trials (almost two-thirds) were funded by industry, the researchers found no significant association between the funding source and the reporting or non-reporting of toxicities.