The publication last year by the Department of Health (DoH) of An organisation with a memory: report of an expert group on learning from adverse events in the NHS chaired by the Chief Medical Officer1 has focused attention on preventable failures in NHS care. It observed that the mechanisms for detecting, reporting and analysing failures were incomplete and frequently flawed. It also noted that, even where important lessons were learned, there was often a failure to embed them in future practice, so that the same failures were repeated. The National Patient Safety Agency2 has recently been created to address these issues.
The observations made in the DoH document apply with particular force to perioperative medicine and critical care, where several factors predispose to failure of care. These factors include the proliferation of high-technology medical devices;3 the low signal provided by individual devices against the noise of clinical and other technological data inputs;4–6 the complex and demanding nature of clinical workloads; and human factors7 acting at the individual, team and organizational levels. Individuals are often expected to master a large number of complex medical devices, which places great demands on memory, recognition and decision-making. Failure of care in the context of medical device usage is a substantial problem: over 6600 adverse incidents involving medical devices were reported to the Medical Devices Agency in 1999, including 87 deaths and 345 serious injuries.1 Mishaps in this context are frequently categorized as human error or equipment malfunction.8 However, there appears to be limited appreciation of the extent to which interactions between medical devices and their human operators affect safety.
This deficiency not only prevents the correct categorization of medical mishaps; it also has a more strategic significance, because it prevents the appropriate application of systems methodology in the analysis of failures (as recommended by the DoH). Such analyses identify mishaps related to device usability and provide the basis for improving device–operator interfaces, thus helping to reduce the frequency of such mishaps. In this context (and in this discussion), the term usability refers to the quality of a device's user interface. It encompasses factors such as ease of learning, error prevention, help and documentation, physical controls, the perception of auditory and visual signals, the style and efficiency of the interaction, and the cognitive load placed on the user. This is an important but under-investigated area, and attention to it may improve patient safety substantially.
Detecting failures attributable to poor usability
Superficially, the detection and categorization of mishaps would appear to be a straightforward process. Failure of care is traditionally described in terms of unexpected patient morbidity and mortality and near misses. However, less obvious end-points that may be affected by suboptimal usability include a decreased margin of patient safety, decreased organizational productivity and staff morbidity, such as stress and low morale. These end-points are more difficult to quantify and may go undetected. A further detection problem arises from the traditional dichotomy between human error and equipment malfunction. The interaction between a medical device and its user can be a source of problems, and failures can occur in the presence of functionally operational equipment and reasonable users. As there is poor appreciation of this class of problem, the detection rate is likely to be low. It is likely that a proportion of these events could be avoided by the availability of improved user interfaces.
Difficulties in reporting and analysing usability problems
Reporting mechanisms capable of highlighting problems with medical devices include critical incident reports, morbidity and mortality surveillance, reports to the Medical Devices Agency, national audits such as the National Confidential Enquiry into Perioperative Deaths, and clinical litigation settlements. In any voluntary reporting system, an individual is less likely to self-report if doing so is likely to focus attention on their own presumed inadequacies. This will continue to be the case while the focus remains on human error.
The existing limitations in detecting and reporting interaction problems with medical devices carry through to the analysis phase. Again, limited understanding of the interaction between humans and devices, and of its importance, will lead to shortcomings in analysing the available information. Even if there were a good understanding of the interaction process, there is no widely accepted method for analysing and subsequently communicating medical device usability problems. The DoH document also makes the point that there is no reliable mechanism for analysing information collected through different reporting channels to distil common themes or lessons.
Implications for active learning
The DoH describes active learning as a process in which lessons learned from failure are embedded in an organization's culture and practices. In the context of medical devices, this notion can be extended in several ways. Strategies to close the learning loop should include the development of manufacturing standards for medical devices that incorporate both functional and usability specifications. Deficiencies in usability design also need to be fed back to manufacturers to allow ongoing improvement. Individuals responsible for procuring medical devices for the NHS should have access to device evaluations that quantify usability, allowing them to choose the safest equipment. At present, however, this information is not easily available, and recently published equipment evaluations have given the area only limited and qualitative treatment.9 10
Strategic approaches to usability analysis: problems and potential solutions
At present, two significant barriers prevent progress. First, there are no standardized usability evaluation methods for medical devices. Secondly, there is no widely recognized protocol for describing and communicating usability problems once they have been identified.
Over the last few decades, the computer industry has developed a significant number of usability evaluation methods for assessing the human–computer interface. Preece and colleagues11 categorize these methods as usage data analysis, experimental evaluation and benchmarking, interpretive evaluation, and predictive evaluation. Usage data analysis involves the collection of usage data (observation of users directly or with video cameras), software logging (recording and analysis of user keystroke chains) and the collection of users' opinions. Experimental evaluation uses traditional hypothesis testing with varying degrees of scientific rigour, while benchmarking is widely used to compare computer hardware performance against a reference point. In interpretive evaluation, evaluators immerse themselves in the domain of the user in order to understand the user's requirements and problems with existing systems; it is principally a product development tool. Predictive evaluation attempts to predict the usability problems that a computer system will exhibit. An example from this last group is heuristic analysis, in which the evaluator employs a checklist to detect the presence of desirable or essential usability features and the absence of known usability problems.
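To illustrate the flavour of a heuristic analysis, a minimal sketch is given below. The checklist items, device and scoring scheme are hypothetical examples invented for this illustration; they are not drawn from any published or validated instrument.

```python
# Illustrative sketch of a heuristic usability evaluation for a medical device.
# The heuristics and the scoring below are hypothetical, not a validated instrument.

HEURISTICS = [
    "Alarm limits are visible without additional keystrokes",
    "Critical actions require confirmation before taking effect",
    "Units of measurement are displayed alongside every numeric value",
    "The device provides an undo or cancel option for data entry",
    "Error messages state the problem and the corrective action",
]

def heuristic_score(findings: dict) -> float:
    """Return the proportion of heuristics satisfied by a device.

    `findings` maps each heuristic to True (satisfied) or False (violated),
    as judged by an evaluator working through the checklist.
    """
    satisfied = sum(1 for h in HEURISTICS if findings.get(h, False))
    return satisfied / len(HEURISTICS)

# Example: an evaluator's findings for a hypothetical infusion pump.
findings = {h: True for h in HEURISTICS}
findings["The device provides an undo or cancel option for data entry"] = False
print(f"Heuristic compliance: {heuristic_score(findings):.0%}")
```

Even this simple form makes clear why heuristic analysis is essentially qualitative: the output depends on the evaluator's judgement for each checklist item rather than on measured performance.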
Whilst some of these evaluation techniques are clearly relevant only to product developers, others may hold significant promise for evaluating and comparing existing medical devices. Collecting users' opinions and heuristic analysis are the only usability evaluation techniques in common use for the after-market assessment of devices. A major drawback of these methods is that they are qualitative and provide limited information. Examples of quantitative measurements that may prove useful include the following:
Time taken to achieve a given speed of task completion (as a measure of ease of learning)
Decrement in error rate over time (as a measure of ease of learning)
Number of keystrokes or other actions required to achieve a particular goal
Time taken to achieve a particular goal
Number of cognitive tasks (such as recollections, recognitions and decisions) required to achieve a particular goal (allowing inferences to be made about the likelihood of human error)
Performance decrement after a period of absence (as a measure of memory loading).
These techniques fall into the categories of usage data analysis, experimental evaluation and benchmarking, and predictive evaluation. Such measurements may demonstrate significant differences between devices from competing manufacturers. Some of these methods may have significant promise in the medical devices domain, but further work is needed to evaluate, adapt and validate the most relevant techniques.
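By way of illustration only, the sketch below shows how some of these measures (keystrokes to goal, time to goal, and an error-rate trend across repeated sessions) might be derived from a timestamped log of user interactions with a device. The log format and field names are assumptions made for the purpose of the example; real devices would need their own logging mechanisms, and any such measures would require validation before use in formal evaluations.

```python
# Illustrative derivation of simple usability metrics from a device usage log.
# The event log format (timestamped keystroke/error/goal events per session) is
# assumed purely for this sketch.

from dataclasses import dataclass

@dataclass
class Event:
    t: float      # seconds since the start of the session
    kind: str     # "keystroke", "error" or "goal_reached"

def keystrokes_to_goal(events: list) -> int:
    """Number of keystrokes made before the task goal is reached."""
    count = 0
    for e in events:
        if e.kind == "goal_reached":
            return count
        if e.kind == "keystroke":
            count += 1
    raise ValueError("goal never reached in this session")

def time_to_goal(events: list) -> float:
    """Elapsed time (s) from the first logged event to goal completion."""
    start = events[0].t
    for e in events:
        if e.kind == "goal_reached":
            return e.t - start
    raise ValueError("goal never reached in this session")

def error_rate_trend(sessions: list) -> list:
    """Errors per keystroke for each successive session.

    A falling trend across sessions can be read as evidence of learnability.
    """
    rates = []
    for events in sessions:
        errors = sum(1 for e in events if e.kind == "error")
        keys = sum(1 for e in events if e.kind == "keystroke")
        rates.append(errors / keys if keys else 0.0)
    return rates
```

Metrics of this kind could, in principle, be collected identically for competing devices performing the same clinical task, giving purchasers a quantitative basis for comparison rather than opinion alone.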
Implications for equipment design
Human error is a frequently cited cause of medical mishaps. Inadequacies in training, experience and supervision are often identified as preventable factors contributing to human failure.12 The fact remains that humans will always be susceptible to error.13 Without diminishing individual or corporate responsibility, it is important to consider ways in which monitoring equipment can be engineered and designed in order to prevent human error. In the context of perioperative medicine and critical care, it should also be remembered that operators of medical equipment work in a difficult environment, often with numerous complex devices, and are frequently subject to factors such as stress, fatigue and fluctuating workloads. These factors further impair the cognitive processing necessary to operate medical devices, and should be factored into the design. The approach to these problems falls within the domain of equipment usability design rather than the functional specification of equipment. Undoubtedly, many manufacturers have made significant progress in improving the user interfaces of their devices. These gains have been offset to some extent by the diversity of approaches to interface design, which reduces the extent to which users can transfer their existing knowledge to different devices. Furthermore, the diversity of interface design has been implicated in failures of care.14
We welcome recent developments aimed at addressing medical device safety, including the creation of a Committee on Safety of Devices.15 However, the specific issue of technology in perioperative medicine and intensive care, and the improvement of human–device interfaces in this setting, may require more focused attention. There is a clear need to recognize the contribution of device usability to good clinical care. We need to develop standardized approaches for the evaluation of medical device usability, and to provide means by which such standardized evaluations can be disseminated to the NHS community at large, avoiding the need for expensive re-evaluation by individual user groups. Such standardized evaluations would also facilitate the comparison of competing devices and contribute to rational procurement and purchasing strategies.
I. A. Bridgland
C/o Mrs M. Benton
Department of Anaesthetics
Box 93, Addenbrooke's Hospital
Cambridge CB2 2QQ, UK
D. K. Menon
University Department of Anaesthesia
Box 93, Addenbrooke's Hospital
Cambridge CB2 2QQ, UK
References
1 Department of Health, UK. An organisation with a memory: report of an expert group on learning from adverse events in the NHS chaired by the Chief Medical Officer. Norwich: HMSO, 2000
2 National Patient Safety Agency web page: www.doh.gov.uk/buildsafenhs
3 Saunders DA. On the dangers of monitoring. Or, primum non nocere revisited. Anaesthesia 1997; 52: 399–400
4 Cropp AJ, Woods LA, Brendle DL. Name that tone. The proliferation of alarms in the intensive care unit. Chest 1994; 105: 1217–20
5 Momtahan K, Hetu R, Tansley B. Audibility and identification of auditory alarms in the operating room and intensive care unit. Ergonomics 1993; 36: 1159–76
6 Hodge B, Thompson JF. Noise pollution in the operating theatre. Lancet 1990; 335: 891–4
7 Cooper JB, Newbower RS, Long CD, McPeek B. Preventable anaesthesia mishaps: a study of human factors. Anesthesiology 1978; 49: 399–406
8 Cooper JB, Newbower RS, Kitz RJ. An analysis of major errors and equipment failures in anesthesia management: considerations for prevention and detection. Anesthesiology 1984; 60: 34–42
9 Menes JA, Bousfield DR, Reay C. Medical Devices Agency Evaluation 400: Patient Monitors. London: Medical Devices Agency, 2000
10 Anon. Physiologic monitoring systems. Health Devices 1999; 28: 677
11 Preece J, Rogers Y, Sharp H, Benyon D, Holland S, Carey T. Human–Computer Interaction. Harlow: Addison Wesley, 1994
12 Williamson JA, Webb RK, Sellen A, Runciman WB, Van Der Walt JH. Human failure: an analysis of 2000 incident reports. Anaesth Intensive Care 1993; 21: 678–83
13 Allnutt MF. Human factors in accidents. Br J Anaesth 1987; 59: 856–64
14 Medical Devices Agency. Equipped to care. The safe use of medical devices in the 21st century. London: Medical Devices Agency, 2000
15 Committee on Safety of Devices web page: www.medical-devices.gov.uk/csd.htm