Affiliations of authors: Center for Outcomes and Policy Research, Department of Adult Oncology, Dana-Farber Cancer Institute, and Department of Medicine, Brigham and Women's Hospital, Harvard Medical School, Boston, MA.
Correspondence to: Stephanie J. Lee, M.D., M.P.H., Center for Outcomes and Policy Research, Dana-Farber Cancer Institute, 44 Binney St., Boston, MA 02115 (e-mail: stephanie_lee{at}dfci.harvard.edu).
Reprint requests to: Jane C. Weeks, M.D., M.Sc., Center for Outcomes and Policy Research, Dana-Farber Cancer Institute, 44 Binney St., Boston, MA 02115.
ABSTRACT

INTRODUCTION
Taxonomic evolution is not unique to outcomes research. Parallels can be made with the biologic sciences where the discovery of new species, techniques, and paradigms frequently extends or rearranges nomenclature. In basic science, researchers could have stated 10 years ago that they worked in molecular biology, and that was explanation enough. Now molecular methods are so ubiquitous that further detail is required to characterize a research effort. Bench scientists describe their fields to specific audiences by the topics (signal transduction), methods (genetic engineering), or model systems (zebra fish) that best describe their pursuits. The same phenomenon is now occurring in outcomes research. For some, outcomes research could be identified by a question or topic (quality of care), a method (decision analysis), or a data source (administrative databases). Indeed, the field has evolved such that the term "outcomes research" no longer describes a particular activity, but rather it describes an array of related yet distinct fields of inquiry.
To achieve some insight into the current state of the science of outcomes research in oncology, we believe that it is helpful to review three separate but related topics. First, we present the background behind the outcomes movement and the forces that shaped the field and its nomenclature. This historic context helps to explain the current diversity of research topics classified under outcomes research. Second, we present a conceptual framework for classifying outcomes research and highlight areas of the field's evolution. We struggled to develop a clear, comprehensive, and cohesive definition of outcomes research. Ultimately, we decided that outcomes studies are distinguished both by what they are and what they are not. This definition is presented graphically and developed within the second section. Finally, we review oncology outcomes studies published in selected general, specialty, and methodology journals during the past 33 years (1966-1998 inclusive). This literature review supports the proposed conceptual framework, examines trends in research emphasis over time, and provides an opportunity to see how outcomes research is being presented to the practicing oncology community.
HISTORIC PERSPECTIVE
Medicine has long been cautioned about the need to measure the results of its interventions. Early proponents of outcomes measurement focused on its value as an indicator of quality. In 1914, Ernest A. Codman, a surgeon, noted that hospitals were reporting the number of patients treated but not how many of the patients benefited from treatment. He argued (2) that all hospitals should produce a report "showing as nearly as possible what are the results of the treatment obtained at different institutions. This report must be made out and published by each hospital in a uniform manner, so that comparison will be possible." Furthermore, he suggested that all worthy products and activities of a hospital, from the physicians trained there, to the research conducted, to the care delivered, depended on the premise that ill patients were deriving benefit from the therapies that they received. Unless results were measured, how could one be sure of benefits conferred?
The field was relatively quiet for the next several decades. According to published histories of outcomes research (3-5), the next major figure was Avedis Donabedian. He returned to Codman's concept of quality in 1966 (6), coining the term "outcome" as part of his structure, process, and outcome paradigm for quality assessment. Donabedian had a broad definition of outcome: "Although some outcomes are generally unmistakable and easy to measure (death, for example), other outcomes, not so clearly defined, can be difficult to measure. These include patient attitudes and satisfactions, social restoration and physical disability, and rehabilitation." He believed that "outcomes, by and large, remain the ultimate validators of the effectiveness and quality of medical care."
However, it was financial, political, and social pressures that pushed the outcomes agenda forward. In the decades leading up to the introduction of Medicare in the 1960s, scientific advances occurred at a rapid rate, and health care spending was relatively unscrutinized. However, from the 1970s to mid-1980s, concerns about lack of oversight grew as advanced technology was applied to a greater number of patients, resulting in exploding health care costs. Many writers noted that the increase in costs did not necessarily bring improved patient outcomes. Archie L. Cochrane warned, in his book Effectiveness and Efficiency: Random Reflections on the Health Services, that "cure is rare while the need for care is widespread, and that the pursuit of cure at all costs may restrict the supply of care" [reprinted in (7)]. He noted (7) the "very widespread belief that for every symptom, or group of symptoms there was a bottle of medicine, a pill, an operation, or some other therapy which would at least help." If all of these therapies were applied without evidence of benefit, the system would become bankrupt. He went on to advocate (7) the randomized clinical trial as the best way of determining effectiveness. In 1973, John Wennberg and Alan Gittelsohn (8) published a seminal article in Science that reported a wide variation in resource utilization, expenditures, and rates of hospitalization and procedures. In particular, they found that rates of tonsillectomy varied dramatically within the state of Vermont without obvious differences in health outcomes. Many people began to question why there were such different patterns of care. Shouldn't physicians know and basically agree on what worked? If not, they asked, could money be saved by eliminating needless medical interventions?
The first large-scale effort to study practice variation in oncology was the radiation therapy-focused Patterns of Care Study (PCS), funded by the National Cancer Institute (NCI), Bethesda, MD, in the early 1970s (9,10). This program was the first systematic attempt in the United States to evaluate the patterns of care, processes, access, and patient outcomes of an entire specialty by surveying radiation therapy practices. Six commonly treated tumors were selected, and outcomes were related to patient and disease characteristics, treatment variables, and practitioner and treatment center information. Other groups, such as the American College of Surgeons, continue similar efforts through the National Cancer Data Base (NCDB), established in 1989 (11). Approximately 50% of U.S. cancer surgical cases are registered in this database, which is used to monitor surgical practice variation and provide data for quality-control assessment.
Although efforts such as the PCS and NCDB have been useful in supporting epidemiologic research as well as clinical research, they have not proven to be sufficiently comprehensive to support true health services research. It was recognized that, to move from observations of small-area variation to clinically and financially meaningful changes in the practice of medical care, a new research paradigm was needed. In 1986, the National Center for Health Services Research and Health Care Technology Assessment (NCHSR) was directed to establish a "patient outcomes assessment research program to promote research with respect to patient outcomes of selected medical treatments and surgical procedures for the purpose of assessing their appropriateness, necessity and effectiveness," although no funds were appropriated that year (12). When funding was later approved, four Patient Outcomes Assessment Research Program projects were established (for prostate disease, cataracts, myocardial infarction, and low back pain). This effort was the precursor to the Agency for Health Care Policy and Research's (AHCPR's) Patient Outcomes Research Teams (PORTs). However, this effort was carried out on a limited scale until 1988, when several key articles in the medical literature again focused public attention on outcomes research as a possible solution to a growing problem.
Calls for Assessment and Accountability
By 1988, government, other third-party payers, patients, and physicians were all seeking improvements in the delivery of health care and objective evidence of value for money. Paul M. Ellwood (13) wrote about an ambitious plan to link treatment and outcome data in a massive database to facilitate "outcomes management." He believed that studying the outcomes of large numbers of patients, including standardized survival, disease status, quality of life, and cost information, could improve patient care and inform national policies. Later that same year, William L. Roper and colleagues (14) published an article calling for collaboration among the private and public sectors to improve medical care through an "Effectiveness Initiative." At the time, Roper was the Administrator of the Health Care Financing Administration (Washington, DC), which oversaw health care coverage for all Medicare beneficiaries. He also had access to a large database of claims data and noted that many results of randomized trials could not be extrapolated to the older persons whom he represented. In an accompanying editorial, Arnold S. Relman (15) labeled "assessment and accountability" the "third revolution in medical care," following the earlier revolutions of health care expansion and cost containment. Thus, the "outcomes movement," as defined in 1990 by Arnold M. Epstein (16), undertook efforts to address "the effectiveness of different interventions, the use of this information to make possible better decision making by physicians and patients, and the development of standards to guide physicians and aid third-party payers in optimizing the use of resources." Mounting concerns over variability in practice patterns, lack of documented benefit from medical interventions, and skyrocketing medical costs had coalesced into a call for standardization of health care based on empirical data.
Patient Outcomes Research Teams: Research Funding
In many ways, the availability of government funding helped to shape the outcomes movement. The Effectiveness Initiative evolved into the Medical Treatment Effectiveness Program (MEDTEP), which was charged with carrying out the research effort. By focusing on "the evaluation of outcomes (i.e., what resulted) of health care services, rather than on the processes (i.e., what was done)," there were echoes of Donabedian's initial formulation of quality (17). In 1989, the NCHSR was dismantled and the AHCPR was created in its place.
The AHCPR implemented MEDTEP by funding PORTs. Appropriate topics for funding were decided on the basis of "the number of individuals affected, the extent of uncertainty or controversy with respect to the use of a procedure or its effectiveness, the level of related expenditure, and the availability of data" (17) (Table 1). Prospective, randomized trials were discouraged because the focus was on alternative management strategies already in clinical practice. The following four components were required in all applications for funding and emphasize the goals of the PORTs: 1) literature review and synthesis (on which to base hypotheses for analyzing associations between practice variations and outcomes), 2) analysis of variations in medical practice and associated patient outcomes, 3) dissemination of findings about effectiveness, and 4) evaluation of the effects of dissemination. Given that the funding became available because of an amendment to the Social Security Act, the first projects reflected the needs of the older, Medicare-covered population. Of the first 14 PORT projects funded, only one addressed a cancer diagnosis (localized prostate cancer), and only as a subsidiary component of the project on benign prostatic hypertrophy.
Although quality of life, costs, and decision making were features of the original PORTs, they were not emphasized until the concept of "appropriate care" was embraced in the early 1990s. In 1993, AHCPR published its second Request for Proposals for PORT studies, the so-called PORT-IIs. These projects differed somewhat from the original PORTs in that they encouraged generation of primary data about the effectiveness of interventions, through either randomized trials or prospective, longitudinal study designs. There was general acknowledgment that administrative databases would only be able to contribute some types of information and that additional data would have to be generated. The format was liberalized so that PORT-IIs could accomplish their goals by whatever means was most efficient, rather than adhering to the more rigidly outlined structure of the original PORTs. For example, some study designs funded by the PORT-II grants included large, simple trials (to enhance the generalizability of findings) and studies evaluating the impact of PORT-II recommendations. Whereas the common theme running through the original PORTs was variation in patterns of care, the PORT-IIs emphasize the influence of patient characteristics and preferences on ideal therapy and cost-effectiveness. A total of 21 PORTs have been funded. Of these, two have addressed prostate cancer and one focuses on breast cancer (18,19) (Table 2).
The most striking shift in outcomes research has been its migration into mainstream clinical research. It is increasingly common to see traditional outcomes research end points and methods used alongside classic trial designs answering biologic questions. In addition, although outcomes research used to be the purview of generalists, specialists and subspecialists are increasingly involved in the conduct of such studies. In oncology, the American Society of Clinical Oncology has a health services research group charged with "the development of practice guidelines, conduct of technology assessment, and the development of standardized parameters and definitions to use in outcomes research. It tries to improve the quality and appropriateness of patient care, enhance the physician-patient relationship, and provide credible information to assist third-party payers determine reimbursement policies" (20). The American Society for Therapeutic Radiology and Oncology (21) likewise has an outcomes research committee. Several of the cooperative oncology groups have outcomes and health services research committees charged with integrating outcomes end points into clinical trials.
This integration is reflected in general funding sources beyond the AHCPR. For example, while the NCI has a long history of funding large-scale efforts in outcomes research, the agency also cooperates with the AHCPR to review and fund outcomes research using the R01 (i.e., investigator-initiated) grant mechanisms. Although the NCI already had a Health Services and Economics Branch within its Applied Research Program, it also recently established an Outcomes Research Branch, which focuses on "multi-dimensional measures of patient function, quality-of-life and health status, preference-based utility measures, and measures of the economic costs of cancer-specific interventions" (22). This Outcomes Research Branch is also responsible for helping to incorporate a broader array of outcome measures, such as quality of life and costs, into NCI-sponsored trials. Most recently, as of December 6, 1999, the AHCPR was renamed the Agency for Healthcare Research and Quality (AHRQ) and will assume additional responsibilities.
CONCEPTUAL FRAMEWORK
Although the field was growing, the terminology of outcomes research was not adapted and clarified as its study questions and methods became more sophisticated. The term "outcomes research" is still commonly used, although many investigators in the United States prefer the term "health services research," and the two often seem to be used interchangeably. The National Library of Medicine, the body responsible for assigning Medical Subject Heading (MeSH) terms to publications, has two separate categories of research that encompass most of the work that is considered to be outcomes research: health services research and outcome assessment (Table 3). The term "health services research" was introduced in 1980 and is defined (23) as "the integration of epidemiologic, sociological, economic, and other analytic sciences in the study of health services. Health services research is usually concerned with relationships between need, demand, supply, use, and outcome of health services. The aim of the research is evaluation, particularly in terms of structure, process, output, and outcome." In contrast, outcome assessment is defined (23) as "research aimed at assessing the quality and effectiveness of health care as measured by the attainment of a specified end result or outcome, improved health, lowered morbidity or mortality, and improvement of abnormal states (such as elevated blood pressure)." This category was introduced in 1992.
Other organizations appear to equate outcomes research and health services research. The 1999 AHCPR strategic plan (25) states, "The field of health outcomes research studies the end results of the structure and processes of health care on the health and well-being of patients and populations. A unique characteristic of this research is the incorporation of the patient's perspective in the assessment of effectiveness." However, the AHCPR (26) also states, "health services research addresses issues of organization, delivery, financing, utilization, patient and provider behavior, quality, outcomes, effectiveness and cost. It evaluates both clinical services and the system in which these services are provided. It provides information about the cost of care, as well as its effectiveness, outcomes, efficiency, and quality. It includes studies of the structure, process, and effects of health services for individuals and populations. It addresses both basic and applied research questions, including fundamental aspects of both individual and system behavior and the application of interventions in practice settings."
Thus, the terms "outcomes research" and "health services research" are increasingly synonymous. Those who make a distinction between these terms regard outcomes research as measuring and addressing clinical issues and health services research as responding to policy questions. However, given the ambiguous way in which the terms are applied, we suggest the use, whenever possible, of more precise terminology that better reflects the diversity of study questions, methods, end points, and target audiences. This conceptual framework is presented below.
Classification of Studies
In the past, outcomes research has often been recognized more by what it is not (phase I, II, or III clinical trials evaluating survival or disease response) than by what it is. As a result, a single defining theme never developed. Classically, the key feature distinguishing clinical research from outcomes research was the emphasis on efficacy (the effect of an intervention measured under controlled circumstances, as in a clinical trial) rather than on effectiveness (the effect of an intervention as applied to broad populations in real practice). However, with the "mainstreaming" of outcomes research during the past decade, many studies, other than those looking at effectiveness, have come to be considered outcomes research. The umbrella term "outcomes research" now loosely covers a broad range of study questions (quality of care, access, decision making, prediction rules, and effectiveness), methods (analysis of administrative databases and decision analysis), and end points (health-related quality of life and costs).
Fig. 1 depicts our conceptualization of the relationship between the study questions, end points, analytic methods, and applications that define outcomes research. It is a complicated diagram that reflects the historic development of outcomes research as contrasted with classical clinical trials. The upper half of Fig. 1 represents areas of research, and the lower half gives examples of applications. The upper left portion of Fig. 1 shows phase I, II, and III clinical trials designed to answer efficacy questions. These trials are not outcomes research. The upper right section of Fig. 1 shows the research topics classically considered to be outcomes or health services research: quality of care, access, decision making, prediction rules, and effectiveness. Sometimes, outcomes studies are defined by the source of data, regardless of the question. For example, studies using large administrative databases are usually identified as outcomes studies. Some study designs, such as large, simple trials, fall in between the categories of clinical trials and outcomes research. These randomized studies are conducted in real practice settings without the rigid controls of clinical trials; thus, they represent something of a hybrid between clinical trials and outcomes research.
Methods of secondary analysis, such as meta-analysis (27) and decision analysis (28,29), often use the same data sources and similar techniques. In practice, meta-analysis is much more likely to be considered a branch of clinical trials, whereas decision analysis is often considered to be outcomes research. This distinction is both substantive and historic. The substantive difference is that meta-analysis results are more often used to inform clinical decisions, while decision analysis is associated with policy decisions. The historic background further separates these methods; meta-analysis is a method developed by biostatisticians who are typically associated with the clinical trials world, whereas decision analysis was developed by practitioners seeking to incorporate many variables, whether taken from the literature or estimated, into making the best clinical and policy decisions with the available information.
The bottom of Fig. 1 depicts examples of clinical and policy decisions that are guided by the research above. The positioning of clinical decisions under clinical trials and policy decisions under outcomes research reflects the major influences, but this is by no means absolute and cannot be used to define branches of research. Examples of cross-fertilization include clinical decisions that are influenced by outcomes research on decision making, quality of life, or reports of cohort studies and policy decisions that are heavily influenced by the results of randomized clinical trials.
It is obvious that this schema will continue to evolve and that others may have developed different, equally effective classification systems. Our goal in presenting this framework is to depict one way of organizing the broad categories of outcomes studies. We hope it also encourages precision in describing one's work. Stating a research question, method, or data source will convey more meaningful information than the overview category of outcomes research. This may be especially true for quality-of-life end points and decision analysis, which have particularly well-developed methodologies.
Definition
An appropriate definition of outcomes research needs to incorporate many nuances. It must capture the diverse branches of a field that are more united by their historic development than by what they now have in common. It should recognize that research topics, study designs, methods, and data sources must be examined together to correctly identify outcomes research. It must allow outcomes research to be defined both by what it is and by what it is not. Thus, our preferred definition is provided by the Association for Health Services Research (30), which favors the term "health services research" and defines it as "a field of inquiry using quantitative or qualitative methodology to examine the impact of the organization, financing, and management of health care services on the access to, delivery, cost, outcomes and quality of services" including studies of health status, quality of life, and effectiveness. Exclusion criteria include "projects that do not involve some component of research or evaluation, clinical trials looking at efficacy, and studies of animals." Although this definition is broad, we believe that it most accurately represents the current spectrum of outcomes/health services research.
LITERATURE REVIEW
Search Paradigm
A MEDLINE® search was performed to retrieve all articles in English published from 1966 (the first year available in MEDLINE) through 1998 with a keyword of "neoplasms" and the following topics: cost, cost-benefit analysis, economics, health service accessibility, health services research, outcome assessment (health care), quality of life, quality of health care, quality assurance (health care), quality indicators (health care), guidelines, practice guidelines, decision making, and decision support techniques (Table 3). The following journals were included: New England Journal of Medicine, Journal of the American Medical Association, Lancet, Annals of Internal Medicine, Journal of the National Cancer Institute, Journal of Clinical Oncology, Blood, British Journal of Haematology, Annals of Oncology, Cancer, Medical Decision Making, Medical Care, Health Economics, Journal of Health Economics, European Journal of Cancer, British Journal of Cancer, and Bone Marrow Transplantation.
Classification
The abstracts were independently reviewed by two of the authors (S. J. Lee and C. C. Earle) and classified according to the research topic, study design, and relevant tumor type. Discrepancies were resolved by consensus or mediation by a third party (J. C. Weeks), as necessary. Articles were retrieved as needed for clarification. Clinical reviews, letters, and editorials were not reviewed further. Articles dealing with nonhumans or primarily nononcologic topics were excluded from the analysis. Only primary reports were considered; secondary reports were not included.
Research topics/end points were classified as survival, disease status, complications, health-related quality of life, patient preferences, quality of decision making, costs, quality of care, access, differences in practice patterns, physician-patient relationships, appropriate use of diagnostic tests, and guidelines. Often, we found that traditional outcomes or health services research questions were secondary end points in trials primarily analyzing survival. In these cases, the end point was designated secondary. Tertiary end points were those that were not quantified according to current research standards or those that were mentioned but minimally addressed. For example, the sole use of Karnofsky performance status, return to employment, or nonstandardized methods to determine quality of life was considered to be a tertiary end point. Short-term and long-term complications of therapy, such as pain, memory problems, or mucositis, were considered to be clinical manifestations if they were reported as symptoms or as quality-of-life dimensions if they were measured with the use of structured patient self-report surveys. Psychologic, psychiatric, and emotional end points were always considered to be quality-of-life studies. Examination of the impact of guidelines on practice patterns was classified under quality of care.
Study design was categorized according to the primary design of the trial, with occasional consideration of the type of statistical analysis performed. Options included the following: phase I, II, or III clinical trial; nonrandomized intervention study; prospective and retrospective cohort studies; case-control study; cross-sectional study; decision analysis; meta-analysis; and systematic review of the literature. Studies that relied on administrative databases, as well as studies with a primary goal of instrument development and assessment, were considered to be distinctive enough to warrant separate classification.
Articles Retrieved
A total of 92 891 oncology references were published in these journals from 1966 through 1998. Of these, 1591 articles contained the MeSH terms of interest and were retrieved by the search; 678 (43%) met the inclusion criteria for the analysis. Of the 913 (57%) articles initially retrieved by the search but excluded from further review, 360 (23% of the original 1591) were editorials, 170 (11%) were letters, 62 (4%) were news items, 2 (0.1%) were comments, 138 (9%) were reviews, and 111 (7%) were phase I, II, or III clinical trials or meta-analyses with survival or disease status end points. An additional 70 (4%) references were excluded because they either did not deal with humans or otherwise could not be classified as having one of the end points of interest.
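As an illustrative consistency check on these tallies (a sketch we added; all counts are copied from the paragraph above, and the variable names and category labels are our shorthand):

```python
# Consistency check for the reported literature-review counts.
# All numbers come from the text; labels and names are ours.
total_retrieved = 1591
included = 678

excluded_by_reason = {
    "editorials": 360,
    "letters": 170,
    "news": 62,
    "comments": 2,
    "reviews": 138,
    "trials or meta-analyses with survival/disease end points": 111,
    "nonhuman or no end point of interest": 70,
}

# The exclusions should account for every retrieved-but-not-included article.
assert sum(excluded_by_reason.values()) == total_retrieved - included == 913

# Reported percentages are fractions of the original 1591 retrieved articles.
for reason, n in excluded_by_reason.items():
    print(f"{reason}: {n} ({100 * n / total_retrieved:.1f}%)")
```

The exclusion counts do sum to 913, and each reported percentage matches its fraction of the 1591 retrieved articles.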
Search Results
The distribution of research topics, journals, study designs, and disease sites among the 678 included references is shown in Table 4. The most common topic of these publications was quality of life (209 [31%] references). Effectiveness studies examining clinical end points (excluding quality of life) outside clinical trials accounted for the next largest group (147 [22%] references). Studies of costs (127 [19%] references), quality of care (101 [15%] references), patient preferences (46 [7%] references), appropriate use of diagnostic tests (40 [6%] references), and guidelines (8 [1%] references) accounted for the remaining studies. General medical journals published 102 (15%) of the articles included in the analysis, specialty journals contributed 497 (73%), and methodology journals contributed 79 (12%). The greatest number of studies were published by Cancer, with 152 (22%) references, and by the Journal of Clinical Oncology, with 112 (17%) references. More articles focused on breast cancer (164 [24%] references) than on any other cancer diagnosis, followed by multiple cancer sites (more than two sites), with 156 (23%) references, and hematologic malignancies, with 74 (11%) references. All other sites accounted for less than 10% each.
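The topic and journal-type distributions reported above can likewise be cross-checked (an illustrative sketch we added; counts are those in the text, and the dictionary labels are our shorthand):

```python
# Topic and journal-type distributions for the 678 included references,
# as reported in the text; labels are our shorthand for the categories.
included = 678

topics = {
    "quality of life": 209,
    "effectiveness (clinical end points outside trials)": 147,
    "costs": 127,
    "quality of care": 101,
    "patient preferences": 46,
    "appropriate use of diagnostic tests": 40,
    "guidelines": 8,
}
journal_types = {"general medical": 102, "specialty": 497, "methodology": 79}

# Both breakdowns should partition the 678 included references exactly.
assert sum(topics.values()) == included
assert sum(journal_types.values()) == included

for topic, n in topics.items():
    print(f"{topic}: {n} ({100 * n / included:.0f}%)")
```

Both breakdowns sum to 678, and the rounded percentages reproduce those given in the text (e.g., 209/678 = 31%).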
The number of outcomes studies has grown faster than the number of oncology articles, leading to an increase in the proportion of all cancer articles devoted to outcomes research (Fig. 2). However, the total number of articles is still small. Our search identified no outcomes studies in the first 4 years of the search period (1966-1969). Beginning in 1970, the number of studies meeting our criteria was fewer than 10 per year until 1982. The number of studies then increased gradually, with a marked acceleration beginning in 1995. Almost half of all of the articles meeting our criteria were published in the last 4 years of the study period (1995-1998; n = 331 references). Despite this dramatic increase, outcomes research continues to represent a very small proportion of clinical research studies in oncology, making up only 2.6% of the articles published in these journals in 1998.
Comparison of the study questions in reports published before and after 1995 did not reflect any clear shifts in attention. The distribution of studies since 1995 has reflected proportional growth across all categories. For example, in 1998, the proportions of studies on each topic resemble the distributions seen across all years combined. However, there have been qualitative shifts in research focus. In the early years of outcomes research, there was confidence that existing databases, especially administrative billing data, could be used to answer many of the primary questions of outcomes research. More recently, there has been a growing recognition that such databases are useful primarily for hypothesis generation and that prospective experimental and longitudinal studies are needed to test these hypotheses, as suggested in the PORT-II Request for Proposals. There has also been an evolution in attitudes toward evidence-based practice guidelines, originally heralded as a panacea that would offer better care at lower costs. Attempts to actually implement clinical practice guidelines have led to greater appreciation for the need to consider patient preferences and to allow for some tailoring of treatment choices to the individual. These considerations are increasingly reflected in published guidelines (31-37).
DISCUSSION
In presenting the historic background of outcomes research, offering a conceptual framework to help classify outcomes studies, and reviewing the literature for trends, we tried to summarize concisely a very complicated, frequently used, but poorly defined concept. Considering ourselves to be outcomes researchers, we were encouraged by some findings, such as the noticeable improvement in the quality of studies over time observed in our literature search and the increasing percentage of the oncology literature representing outcomes work. However, we were surprised by some of our other findings. Although the number of outcomes end points investigated in the context of phase I, II, or III clinical trials seems to be increasing, this increase was not clearly reflected in our literature review, despite the considerable investment in this area by oncology cooperative groups, pharmaceutical companies, and funding agencies. Possible explanations for the small number of publications describing quality of life and costs collected alongside clinical trials include the following: 1) It is too early for these studies to have made their way into the literature, 2) the results of these studies are not being published, or 3) our literature search algorithm did not successfully identify these publications.
The conceptual framework and the definition of health services research put forth by the Association for Health Services Research represent the integration of the historic background and current state of the field. We expect that there will continue to be evolution in the nomenclature. Is a study evaluating quality of life in a phase III randomized trial considered to be outcomes research or not? What about a study reporting the effectiveness of a highly specialized technique, not performed as a part of a clinical trial, but only available in a tertiary care institution? Or what about a decision analysis of survival, using the results of multiple, randomized trials? For the purpose of this review, all of the above studies were considered to be outcomes research. However, as outcomes research continues its diffusion into mainstream clinical investigations, these end points and techniques may no longer automatically classify such studies as outcomes research.
Evidence for the Impact of Outcomes Research
Very little research has looked at whether outcomes research actually affects practice or policy. Fineberg (38) studied the effects of randomized clinical trials on patterns of care and concluded that this information had little influence on general medical practice. Kosecoff et al. (39) studied the effect of the NIH Consensus Development Conferences and concluded that these conferences also had little effect on medical practice. Although "report cards" giving mortality rates for cardiac surgeons may have improved the results of bypass surgery, patients were generally unaware of them (40).
On the other hand, gemcitabine was approved by the U.S. Food and Drug Administration for the treatment of pancreatic cancer largely on the basis of its ability to decrease symptoms. Efforts to assist patients in clarifying their treatment preferences and to assist them in their decision making are proliferating. Cost-minimization studies are used extensively to help organizations respond to budget constraints. Cost-effectiveness studies are used by Australia and the Canadian province of Ontario as part of their drug-approval mechanisms. Algorithms for evaluation of thyroid nodules or abnormal Pap smears, for example, are common in clinical practice. The National Comprehensive Cancer Network (41) has published consensus guidelines for cancer therapy and has created an outcomes database to track adherence to selected guidelines among participating institutions.
The Future
Outcomes research is fundamentally concerned with improving the practice of medicine as applied to patients treated outside clinical trials. Decreasing the variation in practice patterns was seen as the first solution to the problem. This was followed by a recognition of the need for better information about appropriate treatment algorithms. More recently, there is greater acknowledgment of the need for individualization of therapy. The result is that increasing numbers of variables (both patient characteristics and societal needs) have been added to the process of determining what is the most appropriate therapy for individual patients. This shift reflects a new recognition of the complexity of medical decisions and the myriad influences affecting patient outcomes.
As the discipline of outcomes research continues to develop, it is likely that the boundary between this field and classical clinical research will become progressively more blurred. If so, umbrella terms such as "clinical research," "outcomes research," and "health services research" may be replaced by more specific descriptors that reflect the type of question being addressed and the methods used to answer it.
NOTES
We thank Taiye Odedosu for her assistance with the literature review.
REFERENCES
1 Youngs MT, Wingerson L. The 1996 medical outcomes & guidelines sourcebook. New York (NY): Faulkner & Gray, Inc.; 1995.
2 Codman EA. The product of a hospital. Surg Gynecol Obstet 1914;18:491-6.
3 Weeks JC. Outcomes assessment. In: Holland JF, Frei EJ, editors. Cancer medicine. Baltimore (MD): Williams & Wilkins; 1996. p. 1451-8.
4 Weeks J, Pfister DG. Outcomes research studies. Oncology (Huntingt) 1996;10(11 Suppl):29-34.[Medline]
5 Weeks J. Overview of outcomes research and management and its role in oncology practice. Oncology (Huntingt) 1998;12(3 Suppl 4):11-3.
6 Donabedian A. Evaluating the quality of medical care. Milbank Mem Fund Q 1966;44:Suppl:166-206.
7 Cochrane AL. Archie Cochrane in his own words. Selections arranged from his 1972 introduction to "Effectiveness and Efficiency: Random Reflections on the Health Services" 1972. Controlled Clin Trials 1989;10:428-33.[Medline]
8 Wennberg J, Gittelsohn A. Small area variations in health care delivery. Science 1973;182:1102-8.[Medline]
9 Hanks GE, Kramer S, Diamond JJ, Herring DF. Patterns of care outcome survey: national outcome data for six disease sites. Am J Clin Oncol 1982;5:349-53.[Medline]
10 Kramer S, Hanks GE, Diamond JJ, MacLean CJ. The study of the patterns of clinical care in radiation therapy in the United States. CA Cancer J Clin 1984;34:75-85.[Medline]
11 Steele GD Jr, Winchester DP, Menck HR, Murphy GP. Clinical highlights from the National Cancer Data Base: 1993. CA Cancer J Clin 1993;43:71-82.
12 Agency for Health Care Policy and Research. Report to Congress: progress of research on outcomes of health care services and procedures. Rockville (MD): Agency for Health Care Policy and Research; 1991.
13 Ellwood PM. Shattuck lecture: outcomes management. A technology of patient experience. N Engl J Med 1988;318:1549-56.[Medline]
14 Roper WL, Winkenwerder W, Hackbarth GM, Krakauer H. Effectiveness in health care. An initiative to evaluate and improve medical practice. N Engl J Med 1988;319:1197-202.[Medline]
15 Relman AS. Assessment and accountability: the third revolution in medical care [editorial]. N Engl J Med 1988;319:1220-2.[Medline]
16 Epstein AM. The outcomes movement: will it get us where we want to go? N Engl J Med 1990;323:266-70.[Medline]
17 Agency for Health Care Policy and Research. Medical treatment effectiveness research. Rockville (MD): Agency for Health Care Policy and Research; 1990.
18 Agency for Health Care Policy and Research. Patient outcomes research teams (PORTs). Available from: http://www.ahcpr.gov/news/ports99.htm
19 Agency for Health Care Policy and Research. Current patient outcomes research teams (PORT-IIs). Available from: http://www.ahcpr.gov/news/ports99.htm
20 American Society of Clinical Oncology. Committee: Health Services Research (HSR). Available from: http://asco.infostreet.com/prof
21 American Society of Therapeutic Radiation Oncologists. ASTRO Committees. Available from: http://www.astro.org/about
22 National Cancer Institute. Applied research: outcomes research. Available from: http://www-dccps.ims.nci.nih.gov/ARB
23 National Library of Medicine. Medical subject headings. Available from: http://www.nlm.nih.gov/mesh
24 Nathan DG. Clinical research: perceptions, reality, and proposed solutions. National Institutes of Health Director's Panel on Clinical Research. JAMA 1998;280:1427-31.
25 Agency for Health Care Policy and Research. AHCPR strategic plan. Available from: http://www.ahcpr.gov/about/stratpln.htm
26 Eisenberg JM. Health services research in a market-oriented health care system [published erratum appears in Health Aff (Millwood) 1998;17:230]. Health Aff (Millwood) 1998;17:98-108.
27 The University of York NHS Centre for Reviews & Dissemination. Undertaking systematic reviews of research on effectiveness. York (U.K.): York Publishing Services Ltd.; 1996.
28 Weinstein MC, Fineberg HV, Elstein AS, Frazier HS, Neuhauser D, Neutra RR, et al. Clinical decision analysis. Philadelphia (PA): Saunders; 1980.
29 Sonnenberg FA, Beck JR. Markov models in medical decision making: a practical guide. Med Decis Making 1993;13:322-38.[Medline]
30 Association for Health Services Research. Definition of health services research. Available from: http://www.ahsr.org/hsrproj/define.htm
31 Cancer Care Ontario. Program in evidence-based care and practice guidelines initiative. Available from: http://www.cancercare.on.ca/ccopgi
32 National Comprehensive Cancer Network. NCCN Oncology practice guidelines. Oncology 1996;10:47-315.[Medline]
33 National Comprehensive Cancer Network. NCCN Oncology practice guidelines. Oncology 1997;11:25-347.
34 National Comprehensive Cancer Network. NCCN Oncology practice guidelines. Oncology 1998;12:35-271.
35 National Comprehensive Cancer Network. NCCN Oncology practice guidelines. Oncology 1998;12:53-462.[Medline]
36 National Comprehensive Cancer Network. NCCN Oncology practice guidelines. Oncology 1999;13:35-256.
37 Silver RT, Woolf SH, Hehlmann R, Appelbaum FR, Anderson J, Bennett C, et al. An evidence-based analysis of the effect of busulfan, hydroxyurea, interferon, and allogeneic bone marrow transplantation in treating the chronic phase of chronic myeloid leukemia: developed for the American Society of Hematology. Blood 1999;94:1517-36.
38 Fineberg HV. Clinical evaluation: how does it influence medical practice? Bull Cancer 1987;74:333-46.[Medline]
39 Kosecoff J, Kanouse DE, Rogers WH, McCloskey L, Winslow CM, Brook RH. Effects of the National Institutes of Health Consensus Development Program on physician practice. JAMA 1987;258:2708-13.[Abstract]
40 Schneider EC, Epstein AM. Use of public performance reports: a survey of patients undergoing cardiac surgery. JAMA 1998;279:1638-42.
41 Weeks JC. Outcomes assessment in the NCCN. Oncology (Huntingt) 1997;11:137-40.[Medline]
Manuscript received April 13, 1999; revised October 13, 1999; accepted December 6, 1999.