This post looks at how misreporting (spin) occurs in psychiatry and psychology research, specifically in research abstracts.
In 2019, Samuel Jellison and his team investigated this question. They reviewed research papers published between January 2012 and December 2017. Of the 116 papers they located, 65 (56 per cent) contained distorted reporting in their abstracts. Interestingly, they found no significant association with funding source; in other words, the spin did not vary with whether the funding came from a for-profit industry source or not.
How spin occurs
Jellison and his team described how spin occurred in the abstracts.
Spin in the results section of the abstracts
- Focusing on statistically significant secondary endpoints while omitting one or more statistically non-significant primary endpoints
- Focusing only on statistically significant primary endpoints while omitting other, statistically non-significant primary endpoints
- Claiming equivalence for statistically non-significant primary endpoints
- Using phrases like “trending towards significance”
- Focusing on statistically significant sub-group analyses of the primary endpoint
Spin in the abstract conclusions
- Claiming benefit based on statistically significant secondary endpoints
- Claiming equivalence versus comparator for a statistically non-significant endpoint
- Claiming benefit using statistically significant sub-group analysis
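The patterns above share a statistical root that a small simulation can make concrete. The following sketch is purely illustrative (it uses made-up data, not anything from the papers discussed): when a trial measures one primary endpoint and many secondary endpoints, a treatment with no true effect will often produce at least one "significant" secondary result by chance alone, which is exactly the opening spin exploits.

```python
import random
from statistics import NormalDist, mean, stdev

random.seed(1)

def z_test_p(a, b):
    """Two-sided p-value from a normal (z) approximation to the
    two-sample t-test; adequate for this illustration at n = 100."""
    se = (stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b)) ** 0.5
    z = (mean(a) - mean(b)) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

n = 100
# Treatment has no true effect: both groups drawn from the same distribution.
control = [random.gauss(0, 1) for _ in range(n)]
treatment = [random.gauss(0, 1) for _ in range(n)]

primary_p = z_test_p(treatment, control)

# Ten equally null secondary endpoints measured in the same trial.
secondary_ps = []
for _ in range(10):
    c = [random.gauss(0, 1) for _ in range(n)]
    t = [random.gauss(0, 1) for _ in range(n)]
    secondary_ps.append(z_test_p(t, c))

print(f"primary endpoint p = {primary_p:.3f}")
print(f"smallest of 10 secondary p-values = {min(secondary_ps):.3f}")
# With 10 null endpoints, the chance that at least one dips below 0.05
# is about 1 - 0.95**10, roughly 40 per cent: fertile ground for spin
# if an abstract is free to highlight whichever endpoint "worked".
```

The same arithmetic applies to sub-group analyses: slicing one null endpoint into many sub-groups multiplies the chances of a spurious positive in just the same way.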
In 2017, a Japanese group of researchers published a paper in PLOS One on exactly this topic. They compared the conclusions written in the abstracts of 60 papers with the results of their pre-specified primary outcomes. These papers reported effective interventions in the mental health and psychiatry field.
They determined that twenty of the sixty papers included "overstatements". Nine papers reported statistically significant results of secondary outcomes or subgroup analyses in their abstracts when none of their primary outcomes showed positive results.
Let us see a few details as they appeared in the paper.
Not reporting non-significant results of the primary outcomes, and instead reporting significant results of secondary outcomes
The first example compared the efficacy of a counsellor-assisted problem-solving intervention (which was web-based, a detail not mentioned in the abstract; n = 65) with access to internet resources (n = 67). It was a randomized clinical trial involving adolescents aged 12 to 17 years admitted to hospital with traumatic brain injuries. The interviewers were blinded to the intervention method. The primary outcome was measured using the Child Behaviour Checklist (CBCL), as reported by the parents before and after the intervention.
The result for the primary outcome, the CBCL score for adolescents aged 12 to 17 years, was not statistically significant. The authors did not report it in the abstract; instead, the abstract includes the significant results of its sub-group analyses (late adolescents versus early adolescents).
The second example was a randomized controlled trial evaluating the effectiveness of a depression intervention for women who screened positive for major depression, dysthymia, or both. The primary outcomes were changes in depressive symptoms and in functional status 12 months after the intervention. The secondary outcomes were at least a 50 per cent reduction in depressive symptoms, remission, global improvement, treatment satisfaction, and quality of care. The intervention was compared with usual care.
According to the results reported in the main text, of the two primary outcomes, symptom reduction was statistically significant at 12 months but functional status was not; the secondary outcomes, however, achieved statistically significant results.
In the abstract, the authors mentioned only the positive results.
Relationship with journal impact factor and sample size
The authors of this study reported a very interesting relationship between abstract "overstatements" and both the publishing journal's impact factor and the study's sample size: a journal impact factor below 10 and a sample size below 300 were each associated with more abstract "overstatements".
Not reporting abstracts in the structured format
As early as 2013, the CONSORT guidelines recommended that the abstracts of all randomized controlled trials be reported in a structured format; however, the study authors noted that several studies had not followed this recommendation.