In my previous post, I discussed one way spin occurs in reporting observational studies: the use of causal language when describing their findings.
This post covers another form of spin that appears frequently in reports of observational studies: making clinical recommendations based on their results.
Observational studies are very useful in science; however, we cannot make recommendations for clinical practice based on their findings alone. These study designs allow us only to determine prevalence and incidence, or to demonstrate associations and correlations; they do not allow us to infer causation.
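To see why an association alone cannot support a causal claim, consider a toy simulation (entirely hypothetical data, not from any real study): a hidden confounder drives both the exposure and the outcome, so the two end up strongly associated even though neither causes the other.

```python
import random

random.seed(42)

# Hypothetical toy data: a hidden confounder (e.g. an unmeasured lifestyle
# factor) raises the probability of BOTH the exposure and the outcome.
# There is no causal link between exposure and outcome themselves.
n = 10_000
exposure, outcome = [], []
for _ in range(n):
    confounder = random.random()
    exposure.append(1 if random.random() < confounder else 0)
    outcome.append(1 if random.random() < confounder else 0)

n_exposed = sum(exposure)
rate_exposed = sum(o for e, o in zip(exposure, outcome) if e) / n_exposed
rate_unexposed = sum(o for e, o in zip(exposure, outcome) if not e) / (n - n_exposed)

# The exposed group shows a much higher outcome rate, purely from confounding.
print(f"outcome rate, exposed:   {rate_exposed:.2f}")
print(f"outcome rate, unexposed: {rate_unexposed:.2f}")
```

An observational analysis of these data would report a strong, statistically significant association, yet intervening on the exposure would change nothing, which is exactly the kind of question only a randomized design can settle.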
However, this happens in practice. Researchers who reviewed such studies found that more than half of them (56 per cent) made clinical recommendations without first calling for randomized controlled trials; only 14 per cent suggested that step.
To put this into context, I looked at two of the studies mentioned in their paper.
Case study 1
The title of this study is “Fructose-rich beverages and the risk of gout in women”. It was a prospective cohort study spanning 22 years. The researchers documented 778 new cases of gout during the study period and found a statistically significant association between the consumption of fructose-rich beverages and the occurrence of gout.
Their abstract concludes: “the data suggest that fructose-rich beverages increase the risk of gout in women”.
Can they draw this conclusion? If you remember my previous post, these researchers made the same mistake here: using causal language in an observational study design.
Further, in the main text, they stepped into the other type of spin: in the discussion section, they promote reducing fructose intake, a recommendation the study design does not support. They did not suggest a randomized study either.
Case study 2
In this study, the researchers compared 366 children with ADHD against 1,047 controls for genetic variants. Based on their positive results, they recommended routine screening of children with ADHD for such variants.
Can they make such a recommendation based on this study?
The simple answer is no.
That said, evidence from randomized controlled trials is not the only criterion for making recommendations; it is a far more complex matter. For this reason, experts in the field developed the GRADE approach as a guideline for developing recommendations for practice.
What is the GRADE approach?
GRADE is an acronym for the Grading of Recommendations Assessment, Development, and Evaluation. Its working group presented its first report in 2004.
It categorizes the quality (certainty) of evidence into four levels: high, moderate, low, and very low. Evidence from randomized studies starts as high quality, whereas evidence from observational studies starts as low quality for the purpose of making recommendations.
In any case, before making any recommendation, one should consult the GRADE approach, since several factors beyond the study design must be considered.