Earlier we saw how distorted abstract reporting – a form of spin – occurs in health research. This post dives into a specific subject area: Psychiatry and Psychology.
In 2019, Samuel Jellison and his team published an excellent paper on this exact topic in BMJ Evidence-Based Medicine. They examined the frequency of distorted reporting in the abstracts of randomized controlled trials (RCTs) with non-significant primary endpoints, irrespective of funding source, published in psychiatry and psychology journals between January 2012 and December 2017.
They identified 116 papers and found that 65 of them (56 percent) contained distorted reporting in their abstracts. Very interestingly, they also found no statistically significant association between spin and the funding source, whether industry-funded or otherwise.
How spin occurs
Jellison and his team described how spin occurs in abstracts.
Spin in the results section of the abstracts
- Focusing on statistically significant secondary endpoints while omitting one or more statistically non-significant primary endpoints
- Focusing only on statistically significant primary endpoints while omitting other, statistically non-significant primary endpoints
- Claiming equivalence based on statistically non-significant primary endpoints
- Using phrases like “trending towards significance”
- Focusing on statistically significant sub-group analyses of the primary endpoint
Spin in the abstract conclusions
- Claiming benefit based on statistically significant secondary endpoints
- Claiming equivalence versus comparator for a statistically non-significant endpoint
- Claiming benefit using statistically significant sub-group analysis
In 2017, a Japanese group of researchers published a paper in PLOS ONE on exactly this topic. They compared the conclusions written in the abstracts of 60 papers with the results for the pre-specified primary outcomes. Each of these papers reported an effective intervention in the mental health and psychiatry field.
They determined that twenty of the sixty papers included “overstatements”. Nine papers reported statistically significant results of secondary outcomes or subgroup analyses in their abstracts when none of their primary outcomes showed positive results.
Let us look at a few details as they appeared in the paper.
Not reporting the non-significant results of primary outcomes, but instead reporting significant results of secondary outcomes
This study compared the efficacy of a counselor-assisted problem-solving intervention (web-based, although this was not mentioned in the abstract; n=65) with access to internet resources (n=67). It was a randomized clinical trial involving adolescents aged 12 to 17 years admitted to a hospital with traumatic brain injuries. The interviewers were blinded to the intervention method. The primary outcome was measured using the Child Behavior Checklist (CBCL), as reported by the parents before and after the intervention.
The result for the primary outcome – the CBCL for adolescents aged 12 to 17 years – was not statistically significant. The authors did not report it in the abstract; instead, the abstract includes significant results from its sub-group analyses (late adolescents versus early adolescents).
The second example was a randomized controlled trial aimed at evaluating the effectiveness of a depression intervention for women who screened positive for major depression, dysthymia, or both, compared with usual care. The primary outcomes were change in depression symptoms and functional status 12 months after the intervention. The secondary outcomes were at least a 50% reduction in depressive symptoms, remission, global improvement, treatment satisfaction, and quality of care.
According to the results reported in the main text, of the two primary outcomes, symptom reduction was statistically significant at 12 months but functional status was not; the secondary outcomes, however, did achieve statistically significant results.
In the abstract, the authors mentioned only the positive results.
Relationship with journal impact factor and sample size
The authors of this study reported a very interesting relationship between abstract “overstatements” and the publishing journal’s impact factor and the study’s sample size: they found that a journal impact factor below 10 and a sample size smaller than 300 were more strongly associated with abstract “overstatements”.
Not reporting abstracts in a structured format
As early as 2013, CONSORT recommended that the abstracts of all randomized controlled trials be reported in a structured format; however, the study authors noted that a number of studies had not followed this recommendation.