History

Clinical trials seek to evaluate whether an intervention is effective and safe. This is determined by comparing the effects of interventions on outcomes chosen to capture their beneficial and harmful effects, e.g. pain. Outcome domains are constructs that can be used to classify broad aspects of the effects of interventions, e.g. functional status. The careful selection of appropriate outcome domains and outcomes is therefore crucial to the design of randomised controlled trials (RCTs). If the findings of pragmatic RCTs are to influence health-care decision-making, these outcomes also need to be relevant to health service users and other people making decisions about health care. There is growing recognition among clinical researchers that insufficient attention has been paid to the outcomes measured in clinical trials.

The difficulties caused by heterogeneity in outcome measurement are well known to systematic reviewers. For example, the five most accessed and the five most cited Cochrane Reviews in 2009 all reported problems related to the outcomes in eligible trials [Tovey 2010]. Furthermore, empirical research provides strong evidence that outcome reporting bias, defined as the results-based selection for publication of a subset of the originally recorded outcome variables, is a significant problem in RCTs [Dwan et al 2008]. Importantly, the outcomes reported for RCTs may not reflect those that are meaningful to patients and health service users.

These issues could be addressed by the development and application of agreed standardised sets of outcomes, called core outcome sets (COS). This approach reduces heterogeneity between trials and makes it more likely that research measures relevant outcomes. Importantly, it enhances the value of evidence synthesis by reducing the risk of outcome reporting bias and by ensuring that all trials contribute usable information. Examples exist where patients, as a group, have identified an outcome important to them that might not have been considered had the outcome set been developed by health care professionals alone [Sinha et al 2010, Serrano-Aguilar et al 2009, Kirwan et al 2005, Oliver and Gray 2006].

Likewise, the process of clinical audit needs to be strengthened to maximise its benefits for health care. One key part of clinical audit is the monitoring and reporting of outcomes, with data collected to allow comparisons between centres and over time. It is therefore important that the outcomes selected and reported in audits, and in studies other than randomised trials, can also be synthesised or compared. Using core outcome sets in these settings will improve the standards of reporting and data synthesis.

The COMET Initiative was launched at a meeting in Liverpool in January 2010, funded by the MRC North West Hub for Trials Methodology Research (NWHTMR). More than 110 people attended, including trialists, systematic reviewers, health service users, clinical teams, journal editors, trial funders, policy makers, trials registries and regulators. The feedback was uniformly supportive, indicating a strong consensus that the time was right for such an initiative. The meeting was followed by a second conference in Bristol in July 2011, which reinforced the need for core outcome sets across a wide range of areas of health, and the role of COMET in helping to coordinate information about them.