Royal College of Psychiatrists' Research Unit, London
Kingston Upon Thames
University of Southampton, Southampton
Royal College of Nursing Institute, Radcliffe Infirmary, Oxford, UK
Correspondence: Professor Judith Lathlean, Nightingale Building, School of Nursing and Midwifery, University of Southampton, University Road, Highfield, Southampton SO17 1BJ, UK. Tel: 023 8059 8234
Declaration of interest None. This work was funded by a Department of Health Policy Research Programme grant under the Outcomes of Social Care for Adults initiative.
See editorial, pp. 9-10, this issue.
ABSTRACT
Aims To develop and test a self-assessment instrument to enable users of mental health services to rate their experience across the range of domains that they consider to be important.
Method Relevant domains were identified and a new instrument was drafted and field tested to examine its psychometric properties.
Results The 16-item, self-rated Carers' and Users' Expectations of Services - User version (CUES-U) appears acceptable to most service users. Its items have reasonable test-retest reliability and a total CUES-U score correlates significantly with a total score of the Health of the Nation Outcome Scales (Spearman's ρ=0.42; P < 0.01).
Conclusions The development and testing of CUES-U suggest that it might be feasible to apply a self-rated measure of the expectations and experience of users of mental health services.
INTRODUCTION
METHOD
The methods and results are described more fully in the final report of the project team to the Department of Health (Lelliott et al, 1999).
Identification of domains
A comprehensive literature search was undertaken to identify domains that
might be included in such a measure. The search had two strands. Published
reports were identified by a systematic search of electronic databases
(MEDLINE, HealthSTAR, PsycLIT, PsycINFO and EMBASE), by cascade
searches through the reference lists of identified papers and by direct
approaches to others working in this field. Unpublished or grey
literature was identified by writing to more than 300 national and local user
and carer organisations in the UK. Two types of material were sought:
instruments that measured needs, problems, quality of life or satisfaction of
service users; and reports of surveys or other research about the views of
service users.
In parallel with the literature search, the research worker employed by the National Schizophrenia Fellowship (J.H.) ran two focus groups of, and conducted seven in-depth semi-structured interviews with, users of mental health services. These users were all people who had experience of working with other service users, for example as advocates, and so had a wider knowledge of the experience of people with a mental illness.
Development of the instrument
The large number of possible domains identified by the literature search and interviews with service users was mapped and grouped into the smallest possible number of items without losing definition or meaning. Because the
instrument was intended to measure states that are relatively enduring, each
item was introduced with a normative statement. This described
what a service user should expect to be the case for the issue if it did not
constitute a problem. The wording of these normative statements was modified
in response to the comments of an advisory group of service users and through
the use of Flesch scores to increase their readability
(Flesch, 1948).
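The readability check can be illustrated with a short sketch. The code below is a minimal Python implementation of the Flesch (1948) Reading Ease formula; the crude syllable counter and the example statement are simplifying assumptions for illustration, not part of the study's procedure.

```python
import re


def count_syllables(word: str) -> int:
    """Rough heuristic: count vowel groups, discounting a silent final 'e'."""
    word = word.lower()
    groups = re.findall(r"[aeiouy]+", word)
    count = len(groups)
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)


def flesch_reading_ease(text: str) -> float:
    """Flesch (1948) Reading Ease: higher scores indicate more readable text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    words_per_sentence = len(words) / max(len(sentences), 1)
    syllables_per_word = syllables / max(len(words), 1)
    return 206.835 - 1.015 * words_per_sentence - 84.6 * syllables_per_word


# Example: score a draft normative statement (invented wording).
draft = ("You should have a comfortable place to live and "
         "a choice about where you live.")
print(round(flesch_reading_ease(draft), 1))
```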
A response scale with many points would risk overprecision and be more difficult to use. Therefore, after reading each normative statement, the person is asked to respond to two simple questions, each with a three-point scale. Part A asks how the person's situation compares with that described by the normative statement (as good as this/worse than this/very much worse than this) and Part B asks whether the person is satisfied with the issues described (yes/unsure/no). There is also space for a free-text response to each item (Part C).
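To illustrate this three-part structure, the sketch below shows one way a single item response could be represented. The response wordings are those given above; the example free-text comment and the Python representation itself are illustrative assumptions rather than part of the published instrument.

```python
from dataclasses import dataclass
from typing import Optional

# Allowed responses to the two structured questions for each item.
PART_A_OPTIONS = ("as good as this", "worse than this", "very much worse than this")
PART_B_OPTIONS = ("yes", "unsure", "no")


@dataclass
class ItemResponse:
    """One service user's response to a single CUES-U item."""
    item: str                      # e.g. "Where you live"
    part_a: str                    # comparison with the normative statement
    part_b: str                    # satisfaction with the situation described
    part_c: Optional[str] = None   # optional free-text comment

    def __post_init__(self) -> None:
        if self.part_a not in PART_A_OPTIONS:
            raise ValueError(f"unexpected Part A response: {self.part_a!r}")
        if self.part_b not in PART_B_OPTIONS:
            raise ValueError(f"unexpected Part B response: {self.part_b!r}")


# Example response (the free-text comment is invented for illustration).
response = ItemResponse(
    item="Where you live",
    part_a="worse than this",
    part_b="no",
    part_c="The flat is damp and I would like to move.",
)
print(response.part_a, "/", response.part_b)
```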
Eighty-two service users completed the first full draft of the instrument and provided structured feedback. All but one reported that the instructions and language used in the normative statements were always or usually clear. Fifty-one (71%) stated that the pilot version covered all the domains that they considered to be important. Twenty-one (29%) took less than 15 min to complete the instrument, twenty-seven (38%) took between 15 and 30 min and five (7%) took more than 45 min. Fifty-four people (75%) thought that the instrument was about the right length and thirteen (18%) thought that it was too long.
The instrument was redrafted to take account of the feedback from those involved in the pilot and an analysis of the inter-relationships between items. The version used in the field tests, Carers' and Users' Expectations of Services - User version (CUES-U), is outlined in the Appendix.
Field testing
Four hundred and forty-nine service users from 32 locations in England,
Northern Ireland and Wales participated in the main field trials. Data
collection was coordinated by people who were working for statutory mental
health services (127 returns) or for local voluntary sector services (322
returns). Although not selected in any random or systematic way, the
participants were all users (mainly long-term) of local mental health services
managed by these local agencies.
The results of the first rating made by all participants (time 1) were included in the analyses of the internal psychometric properties of CUES-U. Ninety-nine service users also made a second rating between 2 and 14 days after the first (time 2). These results were used to examine test-retest reliability. The time interval is that recommended by Streiner & Norman (1995).
A sub-study was conducted separately from the main field trials. In this, a rating of the Health of the Nation Outcome Scales (HoNOS; Wing et al, 1996) was made by a mental health professional who knew the person well at the same time as a service user completed CUES-U. Eighty-four pairs of ratings were collected.
RESULTS
In summary, people who use mental health services often emphasise and value different aspects of their health and social function than do mental health care professionals. They appear to place less emphasis on symptom reduction than they do on improvements in other areas of their lives. These include: work, or other meaningful daytime activity; financial security; suitable and comfortable accommodation; choice and control over where they live; and the establishment and maintenance of relationships.
Certain qualities of mental health services are also important, including: accessibility and availability; the provision of information about services and treatments; continuity, particularly in terms of establishing and maintaining relationships with individual care workers; seamlessness in terms of care provided by different service facilities and agencies; and choice of treatment and care and of who acts as the keyworker. People with a severe mental illness also value access to physical health care services that take their needs seriously.
Service users also value certain attributes in health and social care workers: courtesy; respectfulness; honesty; openness; friendliness; informality; empathy; non-judgemental attitude; caring nature; reliability; punctuality; willingness to share information and decisions and to give practical help and support.
Main field trials
The mean age of the 449 participants was 42 years (range 18-78); 53% were
men and 91% were White. About 5% of the data items were missing from the
schedules returned. Each analysis included all valid cases.
Table 1 summarises the responses to the Part A and Part B questions for the 16 CUES-U items.
The proportion of participants who rated their situation as being as good as the normative statements (Part A questions) ranged from 50.8% to 76.6%. The proportion who expressed satisfaction with their situation ranged from 39.8% to 73.6%. Correlations between Part A and Part B questions were generally high (Spearman's ρ=0.67-0.86). However, for all items there were some service users whose responses to the two questions appeared contradictory. For 15 of the 16 items, more people responded positively to the Part A question than to the Part B question (i.e. some respondents expressed dissatisfaction with their situation despite reporting that it was as good as the normative statement).
Part A questions
A principal components analysis was conducted of the Part A questions for
the 16 items using a covariance matrix extraction method
(Norman & Streiner, 1994).
The Kaiser-Meyer-Olkin measure of sampling adequacy (KMO), for which values were above 0.9, indicated that all 16 items should be included.
Varimax rotation yielded three factors with an eigenvalue greater than unity.
These accounted for 53% of the variance (Factor 1, 20%; Factor 2, 17%; Factor
3, 16%). Table 2 shows the
loadings of the individual items onto each factor where coefficients were
greater than 0.4.
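The extraction and rotation steps described above can be sketched as follows. This is a minimal numpy illustration (covariance-matrix extraction, retention of components with eigenvalues above unity, a generic varimax rotation and suppression of loadings below 0.4) run on synthetic placeholder data, not the study dataset or the authors' software; the KMO statistic is not reproduced here.

```python
import numpy as np


def varimax(loadings: np.ndarray, max_iter: int = 100, tol: float = 1e-6) -> np.ndarray:
    """Generic varimax rotation of a loading matrix (items x factors)."""
    p, k = loadings.shape
    rotation = np.eye(k)
    criterion = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        gradient = loadings.T @ (
            rotated ** 3 - rotated @ np.diag((rotated ** 2).sum(axis=0)) / p
        )
        u, s, vt = np.linalg.svd(gradient)
        rotation = u @ vt
        if s.sum() < criterion * (1 + tol):
            break
        criterion = s.sum()
    return loadings @ rotation


# Placeholder data: 449 respondents x 16 items with a built-in three-factor
# structure, standing in for the coded Part A responses.
rng = np.random.default_rng(0)
scores = rng.normal(size=(449, 3)) @ rng.normal(size=(3, 16)) + 0.5 * rng.normal(size=(449, 16))

cov = np.cov(scores, rowvar=False)               # covariance-matrix extraction
eigenvalues, eigenvectors = np.linalg.eigh(cov)
order = np.argsort(eigenvalues)[::-1]
eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]

keep = eigenvalues > 1.0                         # retain factors with eigenvalue > 1
loadings = eigenvectors[:, keep] * np.sqrt(eigenvalues[keep])
rotated = varimax(loadings)

print("proportion of variance:", (eigenvalues[keep] / eigenvalues.sum()).round(2))
print(np.where(np.abs(rotated) > 0.4, rotated.round(2), 0.0))   # loadings above 0.4
```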
Part B questions
A principal components analysis of Part B questions at time 1 yielded three
rotated factors with eigenvalues greater than unity that accounted for 50% of
the variance (Factor 1, 24%; Factor 2, 15%; Factor 3, 11%). Again, the KMO
indicated that all items should be included. The structure was quite similar
to that of the factors derived from the Part A questions
(Table 3).
Test-retest reliability
Table 4 shows the intraclass
correlation coefficients, for the 16 items for both Part A and Part B
questions, between time 1 and time 2 for the 99 people who made two ratings.
Coefficients are good (0.61-0.80) for nine of the Part A and eleven of the
Part B questions and moderately good (0.41-0.60) for six of the Part A and
five of the Part B questions (Landis &
Koch, 1977). The exception is Part A of the item relating to
medication.
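A brief sketch of how a test-retest coefficient for one item might be computed is given below. The paper does not state which form of intraclass correlation was used, so this example applies a one-way random-effects ICC to synthetic placeholder ratings; it illustrates the statistic rather than reconstructing the study analysis.

```python
import numpy as np


def icc_oneway(ratings: np.ndarray) -> float:
    """One-way random-effects ICC for an array of shape (subjects, occasions)."""
    n, k = ratings.shape
    subject_means = ratings.mean(axis=1)
    grand_mean = ratings.mean()
    ms_between = k * ((subject_means - grand_mean) ** 2).sum() / (n - 1)
    ms_within = ((ratings - subject_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)


# Placeholder Part A scores for one item at time 1 and time 2 (99 respondents).
rng = np.random.default_rng(1)
time1 = rng.integers(0, 3, size=99).astype(float)
noise = rng.integers(-1, 2, size=99) * (rng.random(99) < 0.3)
time2 = np.clip(time1 + noise, 0, 2)

print(round(icc_oneway(np.column_stack([time1, time2])), 2))
```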
Comparison with HoNOS
The HoNOS have 12 items rated 0 (no problem) to 4 (very severe problem).
The items cover a range of problems of behaviour, impairment, symptoms and
social function. The mean total HoNOS score for the 84 service users in this
sub-study was 12.3 (95% CI 11.0-13.7), which is comparable to that reported in
the HoNOS field trial (Wing et
al, 1996).
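The summary statistic reported here is a simple mean with its 95% confidence interval, which could be computed as in the short sketch below; the HoNOS totals are synthetic placeholders rather than the study data.

```python
import numpy as np
from scipy import stats

# Placeholder total HoNOS scores for the 84 service users in the sub-study.
rng = np.random.default_rng(3)
honos_totals = rng.integers(0, 30, size=84)

mean = honos_totals.mean()
sem = stats.sem(honos_totals)                      # standard error of the mean
low, high = stats.t.interval(0.95, len(honos_totals) - 1, loc=mean, scale=sem)
print(f"mean total HoNOS score {mean:.1f} (95% CI {low:.1f}-{high:.1f})")
```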
Although HoNOS and CUES-U are quite different in structure and mode of application, there are three HoNOS items that have approximate counterparts in the Part A question of five of the CUES-U items: HoNOS item 9 (problems with relationships) correlated significantly with CUES-U items 5 (family and friends) (Spearman's ρ=0.27; P < 0.05) and 6 (social life) (0.26; P < 0.05); HoNOS item 11 (problems with living conditions) with CUES-U item 1 (where you live) (0.31; P < 0.01); and HoNOS item 12 (problems with daily occupation) with CUES-U item 4 (how you spend your day) (0.33; P < 0.01). A total CUES-U score for Part A questions, created by adding responses to all 16 items, correlated significantly with the total HoNOS score (Spearman's ρ=0.42; P < 0.01).
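The total-score comparison can be sketched as follows. The example assumes a simple 0-2 numeric coding of the Part A responses (the paper does not specify the coding) and uses synthetic placeholder data in place of the 84 paired ratings.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)

# Placeholder paired ratings: CUES-U Part A responses coded 0-2 across 16 items,
# and HoNOS items coded 0-4 across 12 items, for 84 service users.
cues_part_a = rng.integers(0, 3, size=(84, 16))
honos_items = rng.integers(0, 5, size=(84, 12))

cues_total = cues_part_a.sum(axis=1)      # total CUES-U Part A score
honos_total = honos_items.sum(axis=1)     # total HoNOS score

rho, p_value = spearmanr(cues_total, honos_total)
print(f"Spearman's rho = {rho:.2f}, P = {p_value:.3f}")
```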
Ease of use
Three-quarters of the participants (n=335) stated that they completed CUES-U without help from another person. Common reasons why help was sought were: difficulty in understanding the format, questions or words; difficulty with reading and writing; visual impairment; and lack of confidence.
DISCUSSION
The purpose of developing CUES-U was to produce an instrument that can measure issues considered important by service users from their perspective. It was anticipated that such an instrument might be used as an aid to care planning, as a measure of the outcome of care and in service evaluation.
For an instrument to meet this specification it must be easy for the majority of service users to use, it should have good coverage of the issues that service users consider important and its ratings should not be unduly affected by transient factors, such as short-term changes in mood. The extent to which ratings on such an instrument should be consistent with some other independent and objective measure is debatable. Differences in perception between service users and professionals about what constitutes a desirable state or outcome are one important justification for the development of such a measure.
To what extent does CUES-U meet this specification?
Ease of use and acceptability
The CUES-U is a self-rated measure and so does not commit the time of mental health care professionals, except for the time taken to encourage its use. Feedback from the pilot indicates that the structure, layout and wording of CUES-U are generally clear and acceptable to service users and that it can be completed quite quickly. About one-quarter of people in the field trial sought help with its completion. However, the type of help needed usually could be provided by a friend, carer or advocate.
There is an inevitable tension between the need for a general tool to enable comparisons between the experience of service users interacting with different services, or living in different geographical areas, and for one that measures the perspective of a particular individual or a specific group. Only a few people whose first language is not English participated in the study and CUES-U has not been translated into any other language. The great majority of those participating in the field trials were White and CUES-U needs to be tested further by people from minority ethnic groups.
Coverage of relevant domains
During the development phase, information was gathered from a variety of
sources (literature reviews of surveys and other instruments, focus groups,
interviews and consultation) to ensure that the resulting items covered the
important domains. The results of the piloting suggested that this had been
achieved to a large extent. The factors derived from the principal components analysis, which might be summarised as 'quality of interactions with mental health workers', 'sense of alienation' and 'finance, daytime activities and social relationships', are recognisable clinical concepts.
The most notable omission from CUES-U is of an item (or items) relating to symptoms of mental illness (e.g. depressed mood, hallucinations, delusions). These did not figure prominently in the literature as issues that service users wanted to be addressed, nor during the process of asking service users their views, and so were not included. The CUES-U measure might be used alongside instruments that gather information about symptoms, or items relating to symptoms might be added through further development of the measure.
How stable are CUES-U ratings?
Good test-retest reliability is an important property for any instrument that is intended to measure status or outcomes for service users and carers. It is therefore encouraging that, for all but one of the 32 questions relating to the 16 items, the correlations between ratings at two time points, 2-14 days apart, are moderately good or better. The ratings of CUES-U therefore do appear to measure states that are not influenced unduly by, for example, short-term fluctuations in the raters' emotional state. The exception is the Part A question relating to medication. It appears that people's perceptions of the benefits of their medication and their experience of side-effects are more subject to rapid change or fluctuation.
Do CUES-U ratings reflect severity?
The CUES-U is a new type of scale and there is no gold standard against which to compare it. The sub-study comparing CUES-U with HoNOS suggests that CUES-U scores do reflect severity. As a practitioner-rated measure, HoNOS offers an independent perspective on this.
The CUES-U has not been tested yet for its sensitivity as a measure of the outcome of care.
Do CUES-U items need to have three parts?
Part A questions for each item ask how well the person's situation compares
with a standard descriptive statement. The purpose is first to focus attention
on the specific issues to which the item refers, and second to increase the
consistency of the person's response. Part B questions ask about how satisfied
the person is with the issues to which the item refers. Although there are
strong correlations between Part A and Part B questions for all items, some
people do respond differently to the two questions. The most common situation
is for a person to express dissatisfaction despite having rated the situation
as comparing favourably with the descriptive statement. It is hypothesised
that Part A questions might be more useful as a measure of state (and
therefore possibly of outcome when repeat ratings are made after a period of
care) and Part B questions as a vehicle for aiding communication between
service user and practitioner, especially during care planning.
The third part to each item (Part C) is a free-text section. This proved popular with those who completed CUES-U during both pilot and field trials. It records information about the individual's situation that cannot be captured by the tick-box nature of Parts A and B. It is hoped that this information will support communication with practitioners and inform care planning. The Part C item also might be of value in identifying specific issues, either for a particular group of service users or about a particular service.
In conclusion, the development and testing of CUES-U have demonstrated the feasibility of applying a self-rated measure of the expectations and experience of users of mental health services. However, more work is needed to explore potential uses of the instrument. Testing CUES-U as an outcome measure would require its application to a cohort of service users before and after a substantial period of health and social care. Assessment of its usefulness in service evaluation would require a test of whether CUES-U ratings reflect differences between services, or even developments over time within a service, for instance in the quality and extent of community care.
Clinical Implications and Limitations
APPENDIX
REFERENCES
Department of Health (1997) The New NHS: Modern, Dependable. London: Department of Health.
Department of Health (1998) A First Class Service: Quality in the New NHS. London: Department of Health.
Flesch, R. (1948) A new readability yardstick. Journal of Applied Psychology, 32, 221-223.
Landis, J. R. & Koch, G. G. (1977) The measurement of observer agreement for categorical data. Biometrics, 33, 159-174.
Lehman, A. F. (1996) Measures of quality of life among persons with severe and persistent mental disorders. In Mental Health Outcome Measures (eds G. Thornicroft & M. Tansella). Heidelberg: Springer Verlag.
Lelliott, P., Beevor, A., Hogman, G., et al (1999) The CUES Project: Carers' and Users' Expectations of Services. Final Report. London: Royal College of Psychiatrists' Research Unit.
Norman, G. R. & Streiner, D. L. (1994) Biostatistics: The Bare Essentials. St Louis, MO: Mosby.
Oliver, J., Huxley, P., Bridges, P., et al (1996) Quality of Life and Mental Health Services. London: Routledge.
Phelan, M., Slade, M. & Thornicroft, G. (1995) The Camberwell Assessment of Need: the validity and reliability of an instrument to assess the needs of people with severe mental illness. British Journal of Psychiatry, 167, 589-595.
Ruggeri, M. & Dall'Agnola, R. (1993) The development and use of the Verona Expectations for Care Scale (VECS) and the Verona Satisfaction Scale (VSS) for measuring expectations and satisfaction with community-based psychiatric services in patients, relatives and professionals. Psychological Medicine, 23, 511-523.
Streiner, D. L. & Norman, G. R. (1995) Health Measurement Scales: A Practical Guide to their Development and Use (2nd edn). Oxford: Oxford University Press.
Wing, J. K. (1978) Planning and evaluating services for chronically handicapped psychiatric patients in the United Kingdom. In Alternatives to Mental Hospital Treatment (eds L. I. Stein & M. A. Test). New York: Plenum.
Wing, J. K., Curtis, R. H. & Beevor, A. S. (1996) HoNOS: Health of the Nation Outcome Scales. Report on Research and Development. London: Royal College of Psychiatrists.
Received for publication January 20, 2000. Revision received May 11, 2000. Accepted for publication May 15, 2000.