Much focus is placed on the methodological quality of clinical trials and its impact on the risk of bias at the study level. An often neglected area is the reporting quality of trial manuscripts, which can also substantially affect the interpretation of results. A recent paper in Clinical Transplantation by Samia Hussain (a past research fellow at the CET) highlights this problem in 182 clinical trials of immunosuppression following renal transplantation, published between 2010 and 2014.
Selective outcome reporting can result in “reporting bias” – a skewed perception of the benefits and risks of an intervention that arises when only a subset of the outcomes of interest is reported. This may result from deliberate manipulation – a decision by the authors not to report an outcome because the results are unfavourable. More often, however, it arises from the perceived irrelevance of an outcome: if a study is not powered for a particular secondary outcome and the results do not reach statistical significance, that outcome is less likely to be reported. This is compounded by journal editorial policy – word limits mean that seemingly uninteresting outcomes are often removed from study reports.
On the face of it, this doesn’t seem like much of a problem. If the effect of an intervention on an outcome is not significant, reporting it may not seem important. However, failing to detect an effect due to lack of power (particularly for secondary outcomes and rare events) is not the same as there truly being no effect. The problem becomes clear when we consider what happens if the results of multiple studies are combined in a meta-analysis. If only those studies that found a significant effect on a given outcome report that outcome, the meta-analysis will be biased towards a much larger effect size. If all of the studies report the outcome, whether significant or not, the pooled effect size will be much smaller and closer to the truth. By suppressing results for relevant outcomes, we introduce bias.
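This selection effect is easy to demonstrate with a small simulation (an illustrative sketch, not an analysis from the paper; the true effect size, sample size and significance threshold below are arbitrary assumptions chosen for the demonstration):

```python
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.2   # assumed small true standardised effect
N_PER_ARM = 50      # assumed participants per arm in each trial
N_TRIALS = 500      # number of simulated trials

def run_trial():
    """Simulate one two-arm trial; return (effect estimate, significant?)."""
    control = [random.gauss(0, 1) for _ in range(N_PER_ARM)]
    treated = [random.gauss(TRUE_EFFECT, 1) for _ in range(N_PER_ARM)]
    diff = statistics.mean(treated) - statistics.mean(control)
    # Approximate standard error of the difference in means (SD known to be 1)
    se = (1 / N_PER_ARM + 1 / N_PER_ARM) ** 0.5
    significant = abs(diff / se) > 1.96   # two-sided z-test at alpha = 0.05
    return diff, significant

results = [run_trial() for _ in range(N_TRIALS)]

# Pooling every trial vs pooling only those that "reported" (were significant)
pooled_all = statistics.mean(d for d, _ in results)
pooled_reported = statistics.mean(d for d, sig in results if sig)

print(f"True effect:                       {TRUE_EFFECT:.2f}")
print(f"Pooled estimate, all trials:       {pooled_all:.2f}")
print(f"Pooled estimate, significant only: {pooled_reported:.2f}")
```

Running this shows the pooled estimate from all trials sitting close to the true effect, while the estimate from only the “significant” trials is substantially inflated – the same distortion that selective outcome reporting introduces into real meta-analyses.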
Often, outcomes are reported but data are missing. Insufficient data can also preclude a study’s inclusion in meta-analysis; missing measures of variance for continuous outcomes, for example, are a common problem. Samia’s analysis suggests that nearly 10% of outcomes reported in clinical trials of immunosuppression have missing data that would make inclusion in meta-analysis impossible.
Another issue is inconsistency in outcome definitions. The same outcome may be defined or measured differently in different studies, leading to variability in the reported incidence. Very often, the tool, test or scale used to measure an outcome is not defined at all. In Samia’s analysis, 45% of the outcomes identified did not have a clear definition. Whilst definition and reporting of efficacy outcomes was reasonably good (90% were clearly defined), safety outcomes (29% clearly defined) and patient-reported outcomes (13%) were far less likely to be well defined.
So how do we address these issues? The concept of a “core outcome set” has been discussed here before. If we define a minimum outcome set that should be reported in all trials in a particular area, as well as the tools that should be used to measure these outcomes, then we can improve consistency and completeness. If these outcomes are universally reported, then we can apply the same core outcome set to systematic reviews in the field, reducing the risk of reporting bias. The SONG initiative (SONG-Tx) is an international collaboration attempting to create a core outcome set for trials in renal transplantation, and will be an important step forward in the transparent reporting of transplant trials.