Because of research biases, factors that purportedly predict the outcome of various diseases may be less reliable than commonly believed, a new report claims.
These prognostic markers, or indicators, are used to predict how well a patient is likely to fare after being diagnosed with a particular disease, such as cancer.
"The problem is not that the scientific studies that are published and 'visible' are flawed," explained senior study author Dr John Ioannidis, chairman of the department of hygiene and epidemiology at the University of Ioannina's School of Medicine in Greece. "The main issue is that besides these studies, there are probably many other studies addressing the same or similar questions, but these happen to find less strong ['less exciting'] results, and thus remain unpublished or far less visible, even to experts."
Extreme results are more likely to be published
"The ones that are published and give emphasis on the prognostic marker are those that have found the most extreme results [either by chance or by selecting their analyses], and these are not fully representative of the truth," Ioannidis added.
His study appears in the July 20 issue of the Journal of the National Cancer Institute.
‘The published prognostic literature is a serious distortion of the truth’
Indeed, the issue is one that strikes a chord with other experts. An accompanying editorial stated that "this study provides the most compelling evidence yet that the published prognostic literature is a serious distortion of the truth."
Dr Jay Brooks, chairman of haematology/oncology at the Ochsner Clinic Foundation in New Orleans, agreed. "This paper brings home a good point. When we use these new prognostic indicators, most of the studies are done on small numbers of patients and the only way to really validate a new prognostic factor is with large numbers of patients in a standardised, prospective manner."
Despite decades of research into various factors, the number of prognostic indicators that are truly clinically useful "is pitifully small," the editorial noted.
For example, according to the study authors, at least 116 studies have been published on TP53, a tumour-suppressor protein that researchers think may predict outcome in patients with head and neck cancer.
How the research was done
To try to unearth biases, Ioannidis and his colleagues conducted a meta-analysis on studies of TP53 and death or survival rates.
Three levels of information about the studies were collected. First, the researchers searched for the most prominent studies: those indexed in the main scientific databases under the keywords "mortality" or "survival." Next, they looked for published studies that were not indexed under those two keywords. Finally, they sought unpublished data directly from researchers.
When just the 18 published and indexed studies were included, there appeared to be a strong association between TP53 status and mortality from head and neck cancer.
When the analysis was expanded to include another 13 studies published but not similarly indexed, however, the association weakened.
And when data from 11 unpublished studies were added to the mix, the previously robust association vanished completely.
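The pattern the researchers describe can be illustrated with a toy meta-analysis. The numbers below are invented purely for illustration and are not the study's actual data: each tier of hypothetical studies is pooled with a simple inverse-variance fixed-effect estimate, and the apparent hazard ratio shrinks toward 1 (no effect) as the less "exciting" studies are folded in.

```python
import math

# Hypothetical log hazard ratios (TP53-positive vs negative) and variances.
# Illustrative values only, not data from the actual TP53 meta-analysis.
indexed     = [(0.6, 0.10)] * 3   # published and indexed: strong apparent effect
unindexed   = [(0.2, 0.10)] * 3   # published but not indexed: weaker effect
unpublished = [(-0.3, 0.10)] * 3  # unpublished: near-null or opposite direction

def pooled_log_hr(studies):
    """Inverse-variance fixed-effect pooled estimate of the log hazard ratio."""
    weights = [1 / var for _, var in studies]
    return sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)

tiers = [
    ("indexed only", indexed),
    ("plus unindexed", indexed + unindexed),
    ("plus unpublished", indexed + unindexed + unpublished),
]
for label, studies in tiers:
    print(f"{label}: pooled hazard ratio = {math.exp(pooled_log_hr(studies)):.2f}")
```

Each successive tier pulls the pooled hazard ratio closer to 1, mirroring how the association in the published-and-indexed literature weakened and then disappeared once all the data were considered.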
Readily accessible and published data may mislead
Had a patient or doctor searched only the readily accessible, published data, the study stated, they would have received misleading information. And many of those studies included only small numbers of patients.
"This is another very good example of why it's so important to do large, prospective, randomised clinical trials before we rush to judgment on new prognostic indicators," Brooks said. "Individual studies with small numbers of people may have led to some false information."
In a randomised study, participants are randomly assigned to one of two or more treatment arms. Large, randomised trials conducted prospectively (i.e., unfolding over time as opposed to a review of past data) are considered the gold standard of clinical research.
"Prognostic markers are assessed by observational studies," Ioannidis added. "Selective reporting is likely to be an even more prominent problem here. This translates to considerable uncertainty about our knowledge on predictors." – (HealthDayNews)