South London and Maudsley NHS Trust, Monks Orchard Road, Beckenham, Kent BR3 3BX, UK
See pp. 8-16, this issue.
Mental health services in England and Wales are undergoing a revolution. The Mental Health National Service Framework and the National Health Service (NHS) Plan demand whole systems change (Department of Health, 2001a). Modernisation of mental health services is to be delivered by local health and social care communities within a tight time-scale. Implementation is being assertively performance-managed by the centre.
The paucity of the evidence base behind the seven standards in the National Service Framework is starkly underlined by a recent scoping review of the effectiveness of mental health services (NHS Centre for Reviews and Dissemination, 2001). Similar concerns apply to the service models already specified within The Mental Health Policy Implementation Guide (Department of Health, 2001a) and in preparation for subsequent addition to the guide. Of the mandatory service elements within current guidance, only assertive outreach has any reasonably robust claim to empirical support. Even this has been questioned when applied in a European context (Burns et al, 2001). The others have, at best, strong face validity, in that they make sense and are persuasive to the constituency of interest.
This frustrating lack of evidence is partly due to the difficulty of conducting evaluations of the complex social interventions typically deployed within mental health services that also satisfy the stringent quality criteria demanded by practitioners of evidence-based medicine. The gold-standard design of a treatment trial within the evidence-based medicine paradigm is the randomised controlled trial (RCT), and the best evidence about a particular form of treatment is provided by a meta-analysis of all methodologically sound RCTs. Quasi-experimental designs, in which allocation to treatment and control conditions is not randomised, come a very poor second to the RCT, and observational studies a distant third.
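As a concrete illustration of the pooling step behind such "best evidence" summaries, the following sketch implements a fixed-effect, inverse-variance meta-analysis. The trial effects and standard errors are invented for illustration; a real synthesis would extract them from each methodologically sound RCT.

```python
# A minimal sketch of fixed-effect, inverse-variance meta-analysis: each
# trial's effect estimate is weighted by the inverse of its variance, so
# larger, more precise trials contribute more to the pooled estimate.
import math

def pool_fixed_effect(effects, std_errors):
    """Combine per-trial effect estimates by inverse-variance weighting."""
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
    return pooled, pooled_se, ci

# Three hypothetical trials: standardised mean differences and their SEs.
effects = [-0.30, -0.15, -0.45]
std_errors = [0.10, 0.20, 0.15]
pooled, se, (lo, hi) = pool_fixed_effect(effects, std_errors)
print(f"Pooled effect {pooled:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```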
The applicability of RCT methodology to the evaluation of socially complex interventions has been called into question. Generalisability is undermined by selection bias (which affects the representativeness of those included in studies) and by the effect of unmeasured contextual variables, such as staffing arrangements, the detailed process of care and the social environment within which the study is carried out (Wolff, 2001). Improvements in RCT methodology, so that within a so-called pragmatic trial real-life questions are addressed in real-life settings, may go some way to addressing these concerns (Hotopf et al, 1999). Gilbody et al (2002, this issue) explore a potentially attractive alternative to the RCT for answering policy-relevant questions: outcomes research. In essence, this involves making use of routinely available real-world data-sets to explore questions of interest.
TAKING OUTCOME MEASUREMENT SERIOUSLY
Following a number of public scandals about poor-quality health care, most notoriously the Bristol Royal Infirmary paediatric cardiac surgery affair, a major objective of health policy in the UK is to eliminate unacceptable variations in clinical practice and ensure uniformly high-quality care. The aim is to end the postcode lottery, within which the interventions offered and the consequent health outcomes for a given condition depend more on where you live than on what is wrong with you. To do so requires a change in emphasis by the commissioners (formerly purchasers) of care and service providers away from a preoccupation with resources, activity levels and service structures and towards a focus on outcomes.
It is often asserted that outcome is more difficult to assess in mental health care than in other areas of medicine. In the UK, the Department of Health has taken a strategic approach to the issue and has, for some years, been funding research into the development of tools that can reliably and (arguably) validly measure outcomes within routine clinical practice. This has resulted in the Health of the Nation Outcome Scales (HoNOS), which provides summary ratings of psychopathology and disability (Wing et al, 1999), measures of quality of life, such as the Lancashire Quality of Life Profile (LQOLP) and Manchester Short Assessment of Quality of Life (MANSA) (Priebe et al, 1999), and of needs, such as the Camberwell Assessment of Need (Slade et al, 1999). A separate research stream is developing outcome measures that can be routinely deployed within psychotherapy services (Margison et al, 2000).
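For readers unfamiliar with how such instruments look in data terms, here is a minimal sketch of holding and summing a set of HoNOS ratings, assuming the published 12-item structure with each item rated from 0 (no problem) to 4 (severe problem); Wing et al (1999) give the authoritative glossary, and the abbreviated item labels used here are illustrative only.

```python
# A minimal sketch of routine HoNOS data, assuming 12 items rated 0-4.
# Item labels are abbreviated and illustrative, not the official wording.
HONOS_ITEMS = [
    "behaviour", "self_harm", "substance_use", "cognition",
    "physical_illness", "hallucinations_delusions", "depressed_mood",
    "other_symptoms", "relationships", "adl", "living_conditions",
    "occupation_activities",
]

def total_score(ratings: dict) -> int:
    """Sum the 12 item ratings, rejecting missing or out-of-range values."""
    for item in HONOS_ITEMS:
        value = ratings.get(item)
        if value is None or not 0 <= value <= 4:
            raise ValueError(f"Invalid or missing rating for {item!r}")
    return sum(ratings[item] for item in HONOS_ITEMS)

example = {item: 1 for item in HONOS_ITEMS}
print(total_score(example))  # 12
```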
MENTAL HEALTH INFORMATICS
One key element of the contemporary NHS reforms is the ambitious programme outlined in the Department of Health's Mental Health Information Strategy (Department of Health, 2001b). This aims to deliver improved mental health information for service users and carers, practitioners and managers at both local and national level. It requires the introduction of an integrated mental health electronic record (which will interface with the electronic patient record, summarising an individual's lifelong contact with health care) and a revised mental health minimum data-set (MHMDS), which is to be rolled out by 2003 (Department of Health, 2001b). The MHMDS, as well as capturing demographic and process data relating to a spell of mental health care, mandates the routine collection of clinical data using HoNOS. The vision is of a national, patient-based system for the collection of clinically focused data about all patients seen by specialist mental health services. When implemented, the MHMDS will offer an extraordinarily rich data-set for outcomes research.
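The shape of the data envisaged can be sketched as follows. The field names below are hypothetical illustrations, not the actual MHMDS specification: the point is simply that demographic and process items for a spell of care would sit alongside serial clinical ratings such as HoNOS.

```python
# An illustrative sketch only: these fields are hypothetical and are NOT
# the actual MHMDS specification. It shows the shape of a patient-based
# record combining process data for a spell of care with serial HoNOS.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class SpellOfCare:
    patient_id: str            # pseudonymised identifier
    start: date
    end: date | None           # None while the spell remains open
    team_type: str             # e.g. "assertive outreach" (illustrative)
    honos_ratings: list = field(default_factory=list)  # serial HoNOS sets

    def add_honos(self, rated_on: date, ratings: dict) -> None:
        """Append a dated set of HoNOS item ratings to the spell."""
        self.honos_ratings.append({"date": rated_on, "items": ratings})

spell = SpellOfCare("ABC123", date(2002, 1, 7), None, "assertive outreach")
spell.add_honos(date(2002, 2, 4), {"depressed_mood": 2})
```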
WHAT MIGHT OUTCOME DATA TELL US?
Routinely collected process and outcome data might potentially answer important questions about services that are not amenable to the RCT at a variety of levels, as in the following examples:
Each of these questions raises further issues about an evaluative strategy. How do we define success: in terms of costs, clinical outcomes, or profile against a template? How do we characterise the population of interest (for example, people with schizophrenia) and measure the demands these people make on services? How do we characterise what is offered to patients and carers? How far is readmission a reliable indicator of relapse? How do we take account of missing data? The sketch below illustrates one simple response to the last two questions.
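A minimal sketch of one way to take missing data seriously when readmission is used as a proxy for relapse: report the observed readmission rate alongside best- and worst-case bounds for the patients lost to follow-up. The figures are invented for illustration.

```python
# Bounding analysis for readmission with missing follow-up data: the
# observed rate is bracketed by assuming that no missing patient was
# readmitted (best case) or that every missing patient was (worst case).
def readmission_bounds(readmitted: int, not_readmitted: int, missing: int):
    total = readmitted + not_readmitted + missing
    observed = readmitted / (readmitted + not_readmitted)
    best_case = readmitted / total                 # no missing patient readmitted
    worst_case = (readmitted + missing) / total    # every missing patient readmitted
    return observed, best_case, worst_case

obs, lo, hi = readmission_bounds(readmitted=40, not_readmitted=140, missing=20)
print(f"Observed rate {obs:.1%}; plausible range {lo:.1%} to {hi:.1%}")
```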
OUTCOMES RESEARCH: PROMISING MUCH, DELIVERING LITTLE?
Will the vision behind the MHMDS bear fruit? Gilbody et al (2002, this issue) provide an important systematic review of the value of outcomes research. The results to date, which emanate exclusively from the USA, are disappointing. Ironically, this is for reasons similar to those limiting the relevance of the traditional RCT to complex interventions. Outcomes research data-sets, which by definition lack masking to treatment condition, must be prone to reporter bias, and they have generally been unable to provide clinically relevant outcome measures, characterise the interventions offered or adequately account for potential confounding variables (Gilbody et al, 2002, this issue). Failure to take adequate account of confounding variables (such as illness severity) can lead to apparently bizarre findings: for example, that depressed people who receive treatment for depression have worse outcomes than those who receive no treatment (a pattern illustrated in the simulation below). Perhaps most worrying is the potential for researchers to trawl through a data-set for findings that support their position while suppressing negative data. Gilbody et al (2002, this issue) provide important pointers on how to read the outcomes research literature, which is set to expand, as critically as we have been taught to read the literature on RCTs.
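The confounding-by-indication problem described above can be made concrete with a small simulation: when sicker patients are both more likely to be treated and more likely to do badly, a naive comparison makes an effective treatment look harmful. This is an illustrative sketch with invented parameters, not an analysis of any real data-set.

```python
# Simulating confounding by indication: severity drives both treatment
# assignment and outcome, so the naive treated-vs-untreated comparison
# is biased against treatment, even though treatment genuinely helps.
import random

random.seed(1)
patients = []
for _ in range(10_000):
    severity = random.random()                # 0 = mild, 1 = severe
    treated = random.random() < severity      # severe cases more often treated
    # True model: severity worsens the outcome score; treatment improves
    # it by 0.2 (higher score = worse outcome).
    outcome = severity - (0.2 if treated else 0.0) + random.gauss(0, 0.1)
    patients.append((treated, severity, outcome))

def mean(xs):
    return sum(xs) / len(xs)

treated_out = mean([o for t, s, o in patients if t])
untreated_out = mean([o for t, s, o in patients if not t])
# Naive comparison: treated patients look worse (positive difference).
print(f"Naive difference (treated - untreated): {treated_out - untreated_out:+.2f}")

# Comparing within a narrow severity stratum recovers the true benefit
# of roughly -0.2.
stratum = [(t, s, o) for t, s, o in patients if 0.4 <= s < 0.6]
t_out = mean([o for t, s, o in stratum if t])
u_out = mean([o for t, s, o in stratum if not t])
print(f"Within-stratum difference: {t_out - u_out:+.2f}")
```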
WHAT WILL BE THE OUTCOME OF A FOCUS ON OUTCOMES?
There is nothing new about the collection of outcomes data. For most of the 100 years it was open, the mental hospital within which I once worked measured its outcomes annually on a simple 4-point scale: discharged relieved, discharged improved, continuing stay and dead. These data had no discernible impact on the life (and eventual death) of the hospital. The institution opened at a time when the evidence already showed that outcomes of mental hospital treatment, on these very parameters, were poor; and despite decades of improvement on the 4-point scale, it was swept away by societal unease about the service model it offered. In social policy, broader societal and political considerations invariably, and for good reasons, trump evidence.
Practitioners should, however, welcome the current focus on outcomes, provided that the measures used are valid and data collection is adequately resourced. It is clearly unethical for clinicians not to be concerned about the outcomes of their interventions. Benchmarking of services against comparators may allow us to see whether our practice achieves reasonable outcomes. Variations may raise important questions about levels of resources or demand. There is, for example, an increasing gap between current practice and best practice in the availability of psychosocial treatments for psychosis: locally poor outcomes might provide an argument for additional resources. Researchers will relish the hypothesis-generating opportunities of large, clinically relevant data-sets. Scientifically important questions about treatments are, however, more likely to be answered by good-quality comparative research: well-conducted, large-scale pragmatic RCTs (Hotopf et al, 1999) and multi-site studies that pay close attention to contextual variables (Wolff, 2001).
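One simple form such benchmarking could take is a funnel-plot-style comparison, flagging services whose rate falls outside an approximate 95% binomial band around the national rate; the band widens as the local caseload shrinks, so small services are not unfairly flagged. The rates and caseloads below are invented for illustration.

```python
# A minimal benchmarking sketch: compare a local event rate against an
# approximate 95% binomial band around the national rate, as on a funnel
# plot. All figures are hypothetical.
import math

def within_limits(local_events: int, local_n: int, national_rate: float) -> bool:
    """Is the local rate within ~95% limits expected from the national rate?"""
    se = math.sqrt(national_rate * (1 - national_rate) / local_n)
    lower, upper = national_rate - 1.96 * se, national_rate + 1.96 * se
    return lower <= local_events / local_n <= upper

# A hypothetical service: 35 readmissions in a caseload of 120, against
# a national readmission rate of 20%.
flagged = not within_limits(35, 120, 0.20)
print("Outlier warranting scrutiny" if flagged else "Within expected variation")
```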
REFERENCES
Burns, T., Fioritti, A., Holloway, F., et al (2001) Case management in Europe. Psychiatric Services, 52, 631-636.
Department of Health (2001a) The Mental Health Policy Implementation Guide. London: Department of Health.
Department of Health (2001b) Mental Health Information Strategy. London: Department of Health.
Gilbody, S. M., House, A. O. & Sheldon, T. A. (2002) Outcomes research in mental health: systematic review. British Journal of Psychiatry, 181, 8-16.
Hotopf, M., Churchill, R. & Lewis, G. (1999) Pragmatic randomised trials in psychiatry. British Journal of Psychiatry, 175, 217-223.
Margison, F. R., Barkham, M., Evans, C., et al (2000) Measurement and psychotherapy: evidence-based practice and practice-based evidence. British Journal of Psychiatry, 177, 123-130.
NHS Centre for Reviews and Dissemination (2001) CRD Report 21: Scoping Review of the Effectiveness of Mental Health Services. York: NHS Centre for Reviews and Dissemination, University of York.
Priebe, S., Huxley, P., Knight, S., et al (1999) Application and results of the Manchester Short Assessment of Quality of Life (MANSA). International Journal of Social Psychiatry, 45, 7-12.
Slade, M., Beck, A., Bindman, J., et al (1999) Routine clinical outcome measures for patients with severe mental illness: CANSAS and HoNOS. British Journal of Psychiatry, 174, 404-408.
Wing, J., Curtis, R. H. & Beevor, A. (1999) Health of the Nation Outcome Scales (HoNOS): glossary for HoNOS score sheet. British Journal of Psychiatry, 174, 432-434.
Wolff, N. (2001) Randomised trials of socially complex interventions: promise or peril? Journal of Health Services Research and Policy, 6, 123-126.
Received for publication July 16, 2001. Revision received November 2, 2001. Accepted for publication November 13, 2001.