Cleveland Clinic Foundation, Cleveland, OH, USA
Correspondence and offprint requests to: Dr N. S. Kanagasundaram, Section of Dialysis and Extracorporeal Therapy, M82, Cleveland Clinic Foundation, 9500 Euclid Avenue, Cleveland, OH 44195, USA.
Introduction
Dialysis support for the critically-ill patient with acute renal failure (ARF) is now under scrutiny [1], but despite the increasing attention many crucial questions remain unanswered. In this article, we examine some of the reasons for these ambiguities and discuss potential solutions.
The problems
The ESRF-ARF interface
Certain well-defined standards of dialysis support for the patient with end-stage renal failure (ESRF) have become basic tenets of nephrological practice [2]. Any suggestion of establishing similar standards in critical care dialysis is obviously premature, but can we generalize from our experience in ESRF?
Unfortunately, a number of crucial differences between these two quite distinct populations make such extrapolations problematic.
Firstly, the pace of change in the intensive care unit (ICU) ARF population is very different: critical end-points are reached in days to weeks, as opposed to the months-to-years attrition that we are used to in ESRF patients, so dialysis efficacy will have to be judged within a very much shorter time-frame. Secondly, the end-points that concern us may be quite different in the two groups: renal recovery and length of ICU stay [3] as opposed to hypertensive control [4], for instance. Lastly, the negligible impact of G, the urea generation rate, and the predictability of Vurea, the volume of distribution of urea, are assumptions made to implement formal urea kinetic modelling (UKM) in ESRF [5]. Clearly, this may not be the case in critically ill ARF patients, who have the potential for marked hypercatabolism [6-8] and large variations in Vurea [9], with major implications when considering delivered dialysis dose (see later).
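To make the role of these assumptions explicit, a minimal single-pool, fixed-volume sketch of the mass balance underlying formal UKM (neglecting ultrafiltration and residual renal clearance) can be written as

\[ V_{\mathrm{urea}}\,\frac{dC}{dt} = G - K C, \qquad C(t) = \frac{G}{K} + \Bigl(C_{\mathrm{pre}} - \frac{G}{K}\Bigr)\,e^{-Kt/V_{\mathrm{urea}}} \]

which collapses to the familiar Kt/Vurea = ln(Cpre/Cpost) only when G is negligible over the session and Vurea is known and stable. A hypercatabolic patient (large G) or an unexpectedly expanded urea space therefore undermines any dose estimate built on these simplifications.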
High mortality
Will manipulations of dialytic therapy have any real effect on the devastatingly poor outcome of the condition?
The consistently high mortality of ICU ARF over the years hides the worsening face of the patient population [10], suggesting that we are gaining ground overall. Although the kidney is usually but one of several failing organs in this setting, ARF contributes disproportionately to mortality [11,12] and leads to an increased risk of developing `non-renal' complications such as bleeding and sepsis [11]. Of encouragement is the finding that increasing membrane biocompatibility significantly improves rates of renal recovery [13] and reduces rates of lethal sepsis [14], highlighting the potential impact of dialytic manipulations. Van Bommel and co-workers found the ratio of APACHE II at the time of dialysis initiation to that at ICU admission (AP2/AP1) to be highly predictive of death, suggesting that irreversible derangements prior to dialysis initiation were still (probably most) important [15]. Whether these derangements were a consequence of ARF or its precursors is not clear but Levy's work [11] would point towards the first mechanism.
Choice of modality
Intuitively, continuous renal replacement therapy (CRRT) is seen as the modality of choice, with suggested advantages including haemodynamic stability (enhancing fluid removal and hyperalimentation), improved dialysis dose delivery, enhanced removal of inflammatory mediators and an improvement in overall outcome. Recent experience [16,17], however, warns against taking these perceived notions of therapeutic superiority for granted in the absence of a good evidence base.
We will examine each of these potential advantages in turn, but it is worth noting that in Jakob's extensive review [18] of all 67 published studies dealing with CRRT that were available at that time (1996), only 15 were comparative. In only three of these [19-21] were both modality groups studied prospectively, and none had complete randomization. Small numbers hampered many studies. Incomplete descriptions of both patient populations and dialysis therapy were widespread. In the 12 studies in which filter type was recorded, none applied the same filter across modalities: biocompatible membranes tended to be used in CRRT and bioincompatible membranes in intermittent therapies (a difference that we know can impact upon outcome [13,14]). Two of the 15 comparative studies [22,23] used a `conventional dialysis' group as historical controls but included patients receiving both peritoneal dialysis and adjunctive slow continuous ultrafiltration (SCUF), making it difficult to draw conclusions.
Haemodynamic stability.
Haemodynamic stability is, perhaps, the sine qua non for the use of continuous therapies although concrete data to support this assertion is rather scanty.
Although it has been shown that haemodialysis-resistant fluid overload in ARF responds well to SCUF, allowing enhanced nutrition and drug delivery [24], and that fluid removal by continuous arteriovenous haemofiltration (CAVH) improves the cardiac index [25], data from more rigorous, comparative studies leads to varying conclusions. Davenport et al. [19], confirming results of previous work [21], showed better haemodynamic tolerance in unstable patients with hepatic failure during the first 5 h of CAVH/CAVHD treatment than in 4 h intermittent haemofiltration sessions. Oxygen delivery and consumption also appeared to be better maintained with the continuous therapies. Although prospective, this study was not fully randomized, with candidates with raised intracranial pressure being diverted to the CRRT arm (whether full randomization would have actually changed the results is open to debate). Misset and co-workers, in a randomized, cross-over study [26], compared haemodynamic tolerance during 24 h periods of CAVH and 24 h periods encompassing a 4 h IHD treatment. Each period was separated by a 24 h wash-out. No difference could be found in any haemodynamic parameter (MAP, dose of adrenergic drugs employed or change in body weight). There was, however, a high drop-out rate (31%), mostly related to early deaths, which could potentially have selected out patients who might have shown a modality-specific haemodynamic response. In addition, discrete hypotensive episodes were not reported and may have been missed if not picked up in the course of regular monitoring. Other comparative studies, although showing greater haemodynamic instability in intermittent therapy, have been hampered by their retrospective and non-randomized nature [27,28].
A prospective analysis of all intermittent treatments performed at a single institution showed that only 7.9% of treatments were terminated prematurely [29]. Just over half of these terminations (4.9% of the total) were due to hypotension. The authors concluded that contemporary techniques of intermittent treatment could provide `excellent hemodynamic stability', although no indication of intradialytic hypotension, changes in pressor doses or of the timing of pre- and post-therapy blood pressure readings was provided. Hypotension may, of itself, be limited to the intradialytic period [27] although this may not be without its longer term implications for renal recovery [30]. The development of supplementary techniques such as sodium and ultrafiltration profiling, and on-line optical haematocrit monitoring, may further enhance tolerability of intermittent therapy [31] but the definitive modality choice for haemodynamic stability remains to be fully substantiated.
Dialysis dose delivery.
If haemodynamic stability is still a live issue, what of dialysis dosing, a subject that has occupied many column inches of late?
The delivered dialysis dose has been shown to impact upon patient outcome, at least in certain subgroups: using retrospective data, dose seemed to have no effect at either end of the disease severity spectrum but, in those with intermediate severity scores, a URR (urea reduction ratio) of >58% in IHD and a TACurea (time-averaged concentration of urea) of <45 mg/dl in CRRT were both associated with significant reductions in mortality [32]. This analysis did not allow a comparison between modalities: the perennial problem of comparing the delivered dose of a continuous therapy to that of an intermittent therapy remained. Other problems, specific to the ICU ARF population, further confuse the issue.
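For reference, the two dose measures used in that analysis are defined (in their standard forms) as

\[ \mathrm{URR} = \frac{C_{\mathrm{pre}} - C_{\mathrm{post}}}{C_{\mathrm{pre}}} \times 100\%, \qquad \mathrm{TAC}_{\mathrm{urea}} = \frac{1}{T}\int_{0}^{T} C(t)\,dt \]

where C is the blood urea (nitrogen) concentration and T is the period over which the therapy is assessed; the URR describes a single intermittent treatment, whereas the TACurea averages the concentration profile over the whole treatment cycle.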
Clearance-based methods, such as the Kt/V, are fraught with potential problems in this setting. Firstly, `Kt/V' translates quite differently depending on the continuity of therapy [33]. The weekly Kt/V of a continuous modality is equivalent to the ratio of solute mass removed per week to the solute mass actually present within the patient. At steady state, the plasma solute concentration will remain static, with removal matching generation. The Kt/V of an intermittent modality is conceptually different, having to take account (using a logarithmic function) of the declining efficiency of solute removal over the course of a dialysis session, caused by falling solute concentrations and hence falling transmembrane solute concentration gradients. This is compounded by the effects of recirculation and compartmental solute disequilibrium [33]. Broadly speaking, the less intermittent (i.e. the more continuous) a therapy is, the lower the Kt/V needed to remove a given quantity of solute. Cumulative Kt/Vs can represent quite different quantities of solute removed in intermittent schedules of differing frequencies [8].
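A rough, purely illustrative calculation (assumed figures; single-pool, fixed volume, no down-time) shows why the nominal numbers cannot simply be ranked against one another. A CRRT prescription with an effluent urea clearance of 25 ml/min against a Vurea of 40 litres gives a weekly Kt/V of (25 x 10 080)/40 000, or approximately 6.3, delivered against a near-constant blood urea concentration. Three 4 h IHD sessions at K = 200 ml/min against the same volume give a nominal weekly Kt/V of 3 x (200 x 240)/40 000 = 3.6, but much of each session's clearance acts on a rapidly falling urea concentration, and the long interdialytic gaps mean that the resulting metabolic control (the TACurea) is considerably poorer per unit of nominal weekly Kt/V.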
The second potential stumbling block with clearance-based methods is the wide, intra-individual variation in G, which makes both inter- and intra-individual comparisons even less tenable: adequate solute removal in one dialysis session may be inadequate at the next owing to an increase in G. Looked at another way, a given value of Kt/V may reflect widely differing amounts of solute removal in different dialysis sessions in the same patient and may say less about treatment efficiency than about prescription efficiency. A high G may also lead to a significant underestimation of Kt/V in intermittent haemodialysis (IHD) if a two-BUN (pre-/post-dialysis) formula for Kt/V is used. Even the three-BUN method (pre-, post- and subsequent pre-dialysis) of formal UKM may not capture the rapid fluxes in G that may occur.
The third drawback of blood-side kinetics is the use of a single pool model. An expanded urea space (due to fluid overload) and diminished perfusion of urea-rich tissues (due to peripheral vasoconstriction) can cause potentially huge post-dialysis urea rebound, resulting in significant overestimations of delivered dialysis dose.
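The size of this error can be illustrated (in a simplified sketch that again ignores G and ultrafiltration) by comparing the single-pool dose with one recalculated from an equilibrated sample:

\[ \mathrm{spKt/V} = \ln\!\left(\frac{C_{\mathrm{pre}}}{C_{\mathrm{post}}}\right), \qquad \mathrm{eKt/V} = \ln\!\left(\frac{C_{\mathrm{pre}}}{C_{\mathrm{eq}}}\right) \]

where Ceq is the urea concentration some 30-60 min after the end of treatment, once compartmental re-equilibration is complete. Because rebound raises Ceq above the immediate post-dialysis value, the equilibrated dose is always the smaller of the two; the greater the disequilibrium, the larger the gap and the greater the overestimate from blood sampled at the end of the session.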
Theoretically, continuous forms of therapy should be able to provide greater low and middle molecular weight solute removal than intermittent modes [7,8], but statements of the superior metabolic control of CRRT have often been based upon flawed methodologies: comparing the mean serum solute concentration in IHD (based on samples of unspecified timing and number) to absolute serum solute concentrations taken over the course of continuous therapy [27], basing the Kurea in IHD on in vitro filter clearances alone (with its attendant problems [2]) [34], basing the Kurea in CAVHDF on uninterrupted therapy [34] (system clotting and subsequent down-time may be the most frequent problem encountered with CRRT [32]), and simply comparing serum solute levels at initiation of renal replacement therapy to those 24 h later [22,23].
Clark et al. [7,8] developed elegant mathematical models, based on real patients receiving CRRT, to assess theoretical delivered dose in continuous and intermittent therapies. They found that at least five simulated intermittent treatments were needed per week for the peak (pre-IHD) BUN to equal the TACurea in the real CRRT group [7]. Keshaviah's peak concentration hypothesis [35] was invoked, whereby the peak toxin concentration is held to be more important than its time-averaged concentration in terms of toxicity and outcome. It is not clear whether this assumption can be made in the ICU ARF population. Assumptions about G and Vurea in the IHD group, and the use of a single-pool, fixed-volume urea kinetic model, were potential sources of error. In a separate theoretical model [8], Clark et al. confirmed the relative inefficiency of intermittent treatments (an inefficiency that diminishes with increasing treatment frequency) and the theoretical ease of attainment of metabolic control with CRRT. Assumptions about G and Vurea, and the use of a single-pool urea kinetic model, again hampered the clinical validity of this model.
Removal of inflammatory mediators.
The removal of inflammatory mediators during continuous therapies has received increasing attention and has been mooted as a potential advantage over IHD. A recent editorial in this journal [36] dampens enthusiasm, noting both a lack of consistent human data and the disparity between the high endogenous turnover of these mediators and their negligible extracorporeal clearance.
The role of middle-molecular toxicity in ARF is unclear but, with the use of contemporary, high-KUF dialysers, it is not beyond conjecture that beneficial substances may be cleared too. This may not be such an issue with, for instance, anti-inflammatory cytokines, for which endogenous turnover may far outstrip exogenous removal [36], but it may be important for those substances with a low endogenous turnover or those derived from exogenous sources, such as nutrients and drugs. Poor diffusive clearance of β2M, even with a high-flux membrane such as the AN69, has recently been shown [37], emphasizing the role of convective removal of middle molecules, especially at higher ultrafiltration rates when the solute effluent/plasma ratio may actually increase [37]. Although not necessarily a modality-specific issue, the more frequent use of convective techniques in CRRT could represent a potential confounding factor for outcome that may not reveal itself from a simple comparison of small solute clearances.
Outcome.
Individual issues aside, what of the global picture?
In Jakob's comprehensive review [18], three of the 15 comparative studies [20,38,39] showed a significantly lower mortality in the CRRT group, although only one [20] was prospective and none was fully randomized. Renal recovery was addressed in only two studies, with no definite conclusions being drawn. Whilst acknowledging their methodological problems, the authors combined all 15 studies for further analysis. No clear benefit of CRRT could be found despite adjustment for co-morbidities.
What is the rôle of severity scoring systems in predicting outcome in our population?
General severity scoring systems such as the APACHE II, used as predictive models in the context of ARF, can underestimate risk of death [12,40]. In contrast, ARF-specific risk models generally have a much closer fit between predicted and actual mortality [12]. This difference may reflect prognostic factors brought into play in the presence of ARF that are not completely represented in the general ICU models [12]. Additionally, some general models have been designed for use in the first 24 h after ICU admission, whereas models designed to follow subjects throughout their ICU stay (especially from the time of first dialysis) may be more valid [12,41]. Both Parker [42] and van Bommel [15] suggested that the APACHE II score around the time of dialysis initiation may be more predictive of outcome than the same score, taken conventionally, at the time of ICU admission. Even then the score performed suboptimally. Others [32] found that the APACHE II at dialysis initiation did not predict outcome. Van Bommel et al. showed the AP2/AP1 ratio (see above) to be highly discriminatory, with higher ratios predicting poor survival [15]. Even ARF-specific models with good performance within their own institution may perform suboptimally outside it [32,40], suggesting that they describe patient groups rather than individual outcome expectations [32].
The Cleveland Clinic ICU ARF scoring system, based on prospectively collected data on 512 dialysis-requiring ICU ARF patients, has been prospectively validated within our institution [32]. We are using a novel, web-based technique to test both the Cleveland Clinic Foundation scoring system and other models on a global basis, allowing individual physicians to generate severity scores based upon anonymised patient data. Although it necessarily relies on the individual to enter as complete a data set as possible for any number of their ICU ARF patients, the website (http://www.bio.ri.ccf.org/ARF/) will hopefully provide a unique and expanding `living' database that may provide the framework for more rigorous cyberspace-based interactions.
The future
We are still faced with a lack of definitive data to support many of the suspected advantages of CRRT. Interpretation of much of the published data has been hampered by retrospective analysis, the use of historical controls, incomplete randomization, incomplete descriptions of patient populations and dialysis prescriptions, and study group-control group heterogeneity.
As practicing clinicians, we make choices on the best available evidence, but to build on undoubted progress in the acute dialysis field, the establishment of common denominators, such as `who' and `what' we are comparing, seems essential.
The `who' relates to the building up of a comprehensive picture of our population under study using a valid severity scoring system. Some method of risk stratification may allow us to delineate specific patient groups who would benefit more from one particular therapy. The optimal model for ICU ARF is still not clear, but it would need to be reproducible in large numbers across different institutions. It is worth highlighting the potential danger that ARF-specific scoring systems, whilst evolving towards multicentre validity, become so complex as to preclude everyday clinical use [40]. The potential for inter-observer variability has been noted even in general ICU severity scoring models [43].
The `what' relates to dialysis dosing. If clearance-based methods are fraught with such problems in the ICU ARF population, could direct, dialysate-side quantification of solute removal prove more useful? Keshaviah and Star's SRI (solute removal index) provides just such an approach [44]. It would certainly allow meaningful comparison between modalities [45] and could allow estimations of Vurea whilst taking account of G. This approach has, so far, only been utilized in ESRF.
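In outline, and as a minimal sketch of the approach rather than a prescriptive formula, the SRI relates the solute mass actually recovered on the dialysate (or effluent) side to the solute mass present in the patient before treatment:

\[ \mathrm{SRI} = \frac{M_{\mathrm{removed}}}{C_{\mathrm{pre}} \times V_{\mathrm{urea}}} \]

where Mremoved is the total urea mass measured in the collected dialysate or effluent over the treatment period. Because the numerator is measured directly rather than inferred from blood-side kinetics, the index is largely indifferent to recirculation, compartment effects and down-time, and summing it over a week allows a like-for-like comparison between intermittent and continuous schedules.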
There is only one prospective study to date (published, since Jakob's review [18], in abstract form) that has tried to correct for both illness severity and dialysis dose [46]. Seventy-nine ICU ARF patients were first stratified (severe vs less severe) with the CCF ARF severity score, then randomized to either CVVHD or IHD. Filter type (low-flux polysulfone) and dialysate (bicarbonate) were consistent across modalities. Dialysis dose was prescribed to produce a weekly Kt/V of 3.6 (with its attendant problems for cross-modality comparisons) but was subsequently adjusted using the TACurea. The two groups were similar in terms of age, gender, illness severity, and pre-therapy BUN and creatinine. No significant differences were found in any parameter. There were trends towards a lower TACurea and higher renal recovery rates in the CVVHD group, but also towards longer ICU stays and a higher mortality.
The need for a large-scale, multicentre study seems pressing, but validated ICU ARF patient severity scores and unifying dosing methodologies seem pivotal to any future approach. Until we know both `who' and `what' we are comparing, the scope of further study will indeed be limited and this particular Gordian knot, although teased at, will remain uncut.
Acknowledgments
NSK is funded by the Satoru Nakamoto, MD Endowed Clinical Dialysis Fellowship and by NIH grant R01 DK 53411-01A1.
References