DOI: 10.1055/s-2000-9618
Cost-Effectiveness Studies in Endoscopy: Are They Worth it?
A. V. Sahai, M.D., M.Sc. (Epid), F.R.C.P.C.
GI Dept. CHUM-St. Luc
1058 St Denis Montreal
Quebec H2X 3J4 Canada
Fax: +1-514-281-6135
E-mail: anand.sahai@sympatico.ca
Publication History
Publication Date: 31 December 2000 (online)
- Introduction
- Pitfalls in Assessing the Cost-Effectiveness of Endoscopic Procedures
- Optimizing the Value of Cost-Effectiveness Analysis
- Conclusion
- References
Cost-effectiveness analysis is currently in great demand. Evaluating the cost-effectiveness of endoscopy in comparison with surgical and radiological alternatives for diagnosis and therapy would appear particularly important, since the costs they incur may be substantial. However, several technical and practical issues may limit the perceived value of cost-effectiveness studies and the applicability of their results to daily practice. Despite this, they are likely to be used to make decisions regarding health-care resource allocation. A better understanding of their limitations by all parties involved and active participation by physicians (as opposed to health-care administrators) in their conception and execution should help optimize our ability to provide excellent patient care at a reasonable price.
#Introduction
What a silly question! Obviously cost analysis is worth it - otherwise people would not talk about it so much. Right? Since endoscopy certainly appears expensive, is prone to inappropriate use [1], and is not always effective, it would seem to be a good subject for studies assessing costs relative to potential benefits.
Although some may also wonder whether we should concentrate more on optimizing patient well-being than on saving money, there are situations where the “costs” of endoscopy (in terms of dollars and health risks) would clearly seem to outweigh the benefits. However, while endoscopy may often appear to be cost-effective, innovations in minimally invasive surgery and in diagnostic and therapeutic radiology make it increasingly clear that we can no longer assume that endoscopy is the cheapest route. It must be proved (or disproved) by formal cost assessment. The question is, will performing a cost study produce results that will really affect our practice? Physicians are notoriously poor at following guidelines. This is probably not due solely to bull-headedness! When guidelines are aimed specifically at optimizing cost-effectiveness, there are many reasons why physicians may question their relevance to daily practice. Like any study methodology, cost analysis has potential pitfalls and even the best executed studies may still produce conclusions that are difficult to apply confidently in daily life. In a sense, they may suffer from a type of inadequate “external validity” - both technically and practically.
#Pitfalls in Assessing the Cost-Effectiveness of Endoscopic Procedures
#Technical Issues
Measuring costs. Assessing the costs of endoscopy and alternatives to endoscopy (as well as their consequences) is easier said than done. To begin, it should be clear what type of cost analysis is desired. The terminology relating to cost analysis may be confusing and has been misused in the medical literature [2, 3]. Cost-effectiveness analysis does not encompass all studies that use costs as an outcome - it refers to one of many types of cost analysis. To measure the cost-effectiveness of an intervention, its cost must be related to a measure of its effectiveness. In this way, a cost-effectiveness ratio can be calculated (e. g. cost/rebleeding episode prevented). The calculated cost-effectiveness of a single procedure is meaningless if it is not compared with values obtained for alternative interventions. Therefore, to determine if one intervention is more “cost-effective” than another, their respective cost-effectiveness ratios must be calculated and compared. It is also important to remember that, in some cases, a viable option that should be included for comparison is that of no intervention at all. If the measure of effectiveness includes a measure of the intervention's utility from the patient's perspective (e. g. quality-adjusted survival), the term cost-utility analysis may be used. One may also assume interventions are equally effective and compare only their costs; this is known as cost-minimization analysis, since cost-effectiveness ratios cannot be calculated.
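The arithmetic behind these comparisons is simple and worth making explicit. The sketch below (in Python, with entirely hypothetical costs and effectiveness figures, not taken from any study) computes a cost-effectiveness ratio for each strategy and the incremental cost-effectiveness ratio of one strategy relative to a comparator such as no intervention:

```python
from dataclasses import dataclass

@dataclass
class Strategy:
    name: str
    cost: float            # expected cost per patient (dollars)
    effectiveness: float   # e.g. rebleeding episodes prevented per patient

def ce_ratio(s: Strategy) -> float:
    """Cost-effectiveness ratio: cost per unit of effectiveness."""
    return s.cost / s.effectiveness

def icer(new: Strategy, comparator: Strategy) -> float:
    """Incremental cost-effectiveness ratio: extra cost paid per
    extra unit of effectiveness gained over the comparator."""
    return (new.cost - comparator.cost) / (new.effectiveness - comparator.effectiveness)

# Hypothetical figures for illustration only
no_intervention = Strategy("no intervention", cost=500.0, effectiveness=0.10)
endoscopy = Strategy("endoscopic therapy", cost=2000.0, effectiveness=0.60)

print(ce_ratio(endoscopy))               # dollars per rebleeding episode prevented
print(icer(endoscopy, no_intervention))  # dollars per additional episode prevented
```

Note that the two ratios answer different questions: the plain ratio describes a strategy in isolation, while the incremental ratio is what actually informs a choice between alternatives.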
Measuring the true cost of an endoscopic procedure and its consequences is complex. The total cost of an endoscopy may vary substantially depending on what, and how many, components are included in a cost equation. Firstly, it is important to realize that charges are often a poor substitute for costs [4] because they incorporate an additional variable - the profit margin. Costs may vary significantly depending on the type of health-care delivery system (e. g. privatized vs. government-run), on variability in how procedures are performed (e. g. differences in the frequency of reuse of “disposable” accessories), and on how follow-up health care is delivered (e. g. outpatient vs. inpatient post-procedure surveillance). Several variables must therefore be considered including analysis perspective, procedural cost calculation techniques, and the time horizon for the analysis.
The perceived cost of an intervention depends heavily on the perspective of the analysis, be it that of the patient or the provider (e. g. the hospital, the insurance company, the government, or “society” as a whole). The cost to patients may include costs for services not covered by insurance, incidental costs associated with medical visits (e. g. income losses due to time off work, parking, babysitters, etc.), and the more “intangible” costs due to emotional and physical suffering. The latter are difficult, if not impossible, to quantify in terms of dollars. Hospitals are interested primarily in direct and indirect costs for inpatient care and outpatient visits. Insurance companies bear the costs stemming from charges for patient care as well as for financial support provided to patients during periods of invalidity. Finally, the cost to society may include costs for all aspects of governmentally funded health-care delivery as well as potentially enormous indirect costs due to short-term and long-term disability.
Perspective aside, what is the true cost of performing (or not performing) an endoscopic or surgical procedure? Hospital administrators can provide estimates of the direct cost for various interventions based on rough estimates of the quantities of resources used. This includes the cost estimates for equipment, personnel, and procedure-room time, but may also include the most minute details such as the costs for the number of gauze pads used, intravenous tubing sets, and even refreshments in the recovery area! However, despite the apparent precision of these figures, they are only estimates. Inaccurate estimates or omission of a few important variables may have a significant effect on the overall cost of the procedure. For example, costs for sedatives vary substantially depending on the product and quantity used. In our institution, a vial of midazolam is approximately four times the cost of a vial of diazepam. The number of vials used may also differ greatly for procedures of varying complexity (e. g. diagnostic vs. therapeutic). The proportion of the procedural cost due to sedation may differ enough that, over a large number of procedures, the effect on the total cost of the procedure may be substantial. The “per-procedure” cost of endoscopic equipment may also vary because of differences in accounting procedures such as amortization, cost discounting, and budgeting for repairs and maintenance. Purists would also add that the overhead costs of the intervention must also be counted. These include the costs of electricity, procedure-room maintenance, secretarial support, photocopies of reports, etc. If the relative contributions of all the elements used to determine the total cost of the procedure are unclear, it may be difficult to reliably apply reported cost data in different health-care delivery settings.
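To see how accounting choices alone can move the “per-procedure” cost of equipment, consider a minimal amortization sketch (all figures and the discount rate are illustrative assumptions, not data from any institution):

```python
def per_procedure_equipment_cost(
    purchase_price: float,       # up-front cost of the endoscope
    lifespan_years: int,         # useful life before replacement
    procedures_per_year: int,    # utilization
    annual_maintenance: float,   # repairs and service contracts
    discount_rate: float = 0.03, # annual rate for discounting future costs
) -> float:
    """Spread equipment costs over the expected number of procedures,
    discounting maintenance costs incurred in future years to present value."""
    # Present value of the maintenance stream over the equipment's life
    pv_maintenance = sum(
        annual_maintenance / (1 + discount_rate) ** year
        for year in range(1, lifespan_years + 1)
    )
    total_cost = purchase_price + pv_maintenance
    return total_cost / (lifespan_years * procedures_per_year)

# Hypothetical scope: $30,000 purchase, 5-year life, 1000 procedures/year,
# $2,000/year maintenance
print(per_procedure_equipment_cost(30000.0, 5, 1000, 2000.0))
```

Changing only the assumed lifespan, utilization, or discount rate shifts the result, which is precisely why two institutions can report different per-procedure costs for identical equipment.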
The true cost of an intervention is not limited to procedural costs alone. Realistically, some of the greatest economic consequences of an intervention result from events occurring before and after the procedure is finished. If the period during which costs are measured starts too late or ends too early, the true cost-savings of a particular intervention may be underestimated or overestimated. Consequently, an intervention that is cost-effective in the short-term may be less so in the long-term or vice versa. Patients often present without an established diagnosis. Their health-care resource utilization begins as soon as they seek medical care. Studies evaluating the costs of interventions that have both diagnostic and therapeutic potential should therefore allow for patient enrolment as early as possible after presentation. This requires that patients be classified based on their presenting symptoms and signs (i. e. a “clinical context”), rather than by an established diagnosis. Since newer medical, radiological, and minimally invasive surgical techniques also have both diagnostic and therapeutic potential, relatively “expensive”-appearing procedures may actually be less expensive in the end if, when applied early enough, they simplify both diagnosis and therapy. A study that evaluates the costs of therapy alone may therefore provide an inaccurate impression of the cost of a particular intervention. In the case of newer techniques that are not widely available, it may be considered unrealistic to include them very early in the management of common clinical problems. However, it may be equally unrealistic to assume that such procedures will remain experimental or limited to referral centers. Therefore, if it is at all reasonable, the feasibility of applying newer techniques as early as possible should be evaluated.
Measuring most post-procedure costs for therapeutic interventions may be straightforward. However, measuring the true cost of certain outcomes is difficult (e. g. prolonged disability resulting from severe complications) or impossible (e. g. death). Furthermore, if the diagnostic potential of an intervention is also being studied, the boundaries of resource utilization that are related to the diagnostic component of the procedure may be difficult to set. Unfortunately, diagnostic procedures do not always provide a diagnosis! They may be inconclusive or, if they do provide a diagnosis, it may be incorrect. Inconclusive tests, false positives, and false negatives may generate further testing and/or result in mismanagement. The costs associated with inappropriate management decisions may be particularly difficult to quantify. For example, what is the dollar cost associated with missing a small, potentially resectable pancreatic neoplasm that presents later with metastatic disease? Inversely, increased accuracy for tumor staging in esophageal cancer patients may lead to better designed adjuvant chemo-radiotherapy regimens, which may lead to better outcomes. How do we quantify the financial value of this? Since accuracy, in turn, depends on operator expertise, this often overlooked variable may also have an important effect on overall costs.
The reported total cost of an intervention will therefore vary depending on the costing methods used and to what extent pre-procedure and post-procedure costs are included. One can imagine how reported estimates may be substantially different, but all potentially “correct!” Owing to the inherent imprecision in cost estimation, sensitivity analysis is required to verify whether the conclusions of a cost analysis are “robust.” Decision trees can be created to model various management algorithms. By varying the initial model inputs for costs and other variables over wide but clinically relevant ranges, one can determine which variables most affect the total cost of a management strategy. It can also be determined whether the conclusions are likely to change if important variables (e. g. procedural costs, accuracy, length of follow-up etc.) change in value. Currently available computer software programs permit complex sensitivity-analysis calculations, but may limit the number of variables which can be studied simultaneously. Erroneous assumptions made when designing the model may also lead to the inappropriate exclusion of important variables without testing. Therefore, unless a cost analysis suggests that one strategy is clearly preferable to others, it is probably safer not to make formal conclusions as to the “best” management options. Finally, some of the strongest scientific evidence is obtained from randomized controlled trials. Unfortunately, if cost is used as the primary outcome, the inherent variability in cost data may make the required sample sizes too large for practical purposes.
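A one-way sensitivity analysis on a toy two-branch decision tree illustrates the idea: hold everything fixed, vary one uncertain input over a plausible range, and watch for the threshold at which the preferred strategy flips. All numbers below are invented for illustration:

```python
# One-way sensitivity analysis on a simple two-branch decision tree
# (illustrative figures only)

def expected_cost_endoscopy(p_complication: float,
                            procedure_cost: float = 1500.0,
                            complication_cost: float = 8000.0) -> float:
    """Expected cost of the endoscopy arm: procedure cost plus the
    probability-weighted cost of managing a complication."""
    return procedure_cost + p_complication * complication_cost

def expected_cost_surgery() -> float:
    return 5000.0  # assumed fixed comparator cost

# Vary the complication rate over a clinically plausible range and find
# where the preferred strategy flips (here, at p = 3500/8000 = 0.4375).
for p in [0.01, 0.05, 0.10, 0.25, 0.44, 0.50]:
    endo = expected_cost_endoscopy(p)
    preferred = "endoscopy" if endo < expected_cost_surgery() else "surgery"
    print(f"p(complication)={p:.2f}: endoscopy ${endo:,.0f} -> prefer {preferred}")
```

If the flip occurs well outside the clinically plausible range of the input, the conclusion is “robust” to that variable; if it falls inside the range, no firm recommendation should be made.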
Measuring effectiveness. As stated earlier, cost-effectiveness analysis requires that costs be compared with effectiveness. The effectiveness of therapeutic procedures can be measured in terms of procedural success (e. g. adequate hemostasis, relief of jaundice, improvement in dysphagia score, etc.). However, if the ultimate goal is to optimize overall patient well-being or quality of life, these short-term surrogate end points may be inadequate. Endoscopic palliation of malignant diseases may be less costly than surgery. However, patients with limited life expectancy may be willing to accept an effective operation if it saves them having to worry whether their endoscopically deployed device will clog and require replacement. Symptoms may play only a small part in a patient's quality of life. For example, relief of dysphagia in esophageal cancer patients is desirable but may provide less satisfaction than anticipated, because of the underlying anorexia. On the other hand, “generic” (as opposed to “disease-specific”) quality-of-life instruments may miss clinically important changes in quality of life because they ignore symptoms that have an important effect on the overall well-being of patients with a particular disease. There may also be practical problems in applying quality-of-life questionnaires, especially with patients who, in the later stages of terminal illnesses, may find that answering the questions serves only to prove they are getting worse.
Documenting the effectiveness of the diagnostic component of procedures may be particularly challenging. Traditionally, end points such as “increases in physician certainty,” “changes in diagnosis or management,” or “prevention of surgery” have been used. Increases in certainty may be desirable, but home-made non-validated scales may produce unreliable information. Furthermore, certainty may increase, but the diagnosis may be incorrect. Obviously, this is undesirable but is interpreted as a positive outcome. There must therefore be some attempt to verify whether the final diagnosis was actually correct. Unfortunately, since the primary aim of these studies is usually not to assess diagnostic accuracy, a reliable gold standard may be unavailable. The ability of a test to affect management depends heavily on who is using the information. Preferably, the decision-maker should be a referring physician and not the physician who performs the test; and, as stated earlier, the correctness of test-induced changes in management must be verified. It should also be stressed that diagnostic tests need not change management to be useful. They may help confirm pre-test impressions and may provide anatomical information that may help during therapeutic interventions. For example, the documentation of a bile-duct stone by magnetic resonance cholangiography before endoscopic stone extraction may be useful in deciding whether pre-cut is justified if cannulation proves difficult. Quantifying the value of this type of information to physicians, and especially its effects on patient outcomes, may be impossible. As for “preventing surgery”: as stated earlier, this may be desirable for endoscopists, but not necessarily for patients … The effect of interventions on quality of life is currently of interest. However, in the context of diagnostic tests, the accuracy of the diagnosis may be less important than the diagnosis itself. 
In other words, a test that incorrectly suggests benign disease may have a more positive impact on quality of life than a correct diagnosis of malignancy. Results regarding the impact of diagnostic tests on quality of life may therefore be very difficult to interpret.
#Practical Issues
Physicians try to provide optimal care for patients within the limits inherent to the health-care delivery system in which they work. This implies providing the most effective (as opposed to the most cost-effective) care. Therefore, cost-effectiveness analysis may be of little interest to physicians, and irrelevant to patients. Saving money is primarily of interest to hospital administrators, insurance companies, and government agencies. In fact, most cost analyses adopt the perspective of payers other than patients. It should also be noted that reducing the long-term costs of disease-induced disability may be an important outcome for society. However, insurance companies may be interested primarily in reducing costs in the short term, since patients may be unlikely to stay with the same insurance provider for more than a few years.
For physicians and patients, it may be unrealistic to withhold more expensive interventions simply because it is more cost-effective for society or an insurance company. There are also clearly situations where physicians will favor effectiveness over cost-effectiveness without hesitation, either because of a fear of litigation or because of a personal interest in a particular case. In some clinical contexts, mistakes in diagnosis or therapy may be considered so unacceptable (e. g. diagnosis and staging of suspected pancreatic cancer) that a “cost-effective approach” may never be considered reasonable. If this is the case, one wonders whether the existence of “exceptions to the rules” makes it unethical to have rules at all. Physicians working in managed care environments are also well aware of the frustrations that arise when rigid guidelines (possibly based on cost-effectiveness data) get in the way of common sense and appropriate patient management.
Ethical considerations aside, there are other reasons why applying cost-effectiveness data to daily practice may be difficult. As with all studies, inclusion and exclusion criteria and referral-center bias may limit the external validity of some studies. Diagnostic and/or therapeutic algorithms resulting from cost analyses may also be difficult to apply locally owing to locoregional differences in the relative costs of different interventions, in local expertise, and in the availability of technology. Analyses performed in one country may be completely irrelevant in another because of differences in health-care delivery systems and in standards of practice. In other words, no single algorithm will suit everyone. In the case of some procedures, rapidly advancing improvements in diagnostic and therapeutic technology may make study results obsolete by the time they are published. Finally, even if physicians are willing to attempt to apply cost-effectiveness data to daily practice, reading and understanding cost analyses may be tedious. The often numerous assumptions and apparently complex statistical manipulations may make them difficult to digest. Consequently, the conclusions may be considered interesting but unreliable.
#Optimizing the Value of Cost-Effectiveness Analysis
Despite its limitations, the potential value of cost-effectiveness studies should not be ignored. Increasing limitations in health-care budgets will force providers to choose between different diagnostic and therapeutic alternatives. In the absence of objective evidence, third-party providers (who have little or no patient contact) may adopt algorithms that primarily meet their financial objectives - as opposed to those of patients and physicians. However, strategies that, through objective analysis, have shown some ability to cut costs while maintaining acceptable effectiveness will probably be looked upon favorably by administrators. If these strategies are developed and tested by the physicians who use them, the result may be guidelines that are more appropriate for optimizing both economic and clinical outcomes.
As with any study, the problems assessed should be clinically relevant to as many people as possible. However, the results must also be applicable to daily practice in diverse health-care delivery settings and should have some hope of affecting physician behavior. All reasonable strategies should be evaluated. This should include strategies that appear unlikely to be cost-effective as well as the option of no intervention - since they may be useful bench-marks for comparison. Practice standards and economic environments differ worldwide; it is therefore impossible to obtain procedural cost estimates that are applicable universally. However, the variables that may contribute to the total cost for various interventions are well known. Investigators must therefore provide a detailed breakdown of the pre-, peri-, and post-procedural costs used, in order to permit readers to assess their local applicability and make adjustments if possible. The time period for the analysis should also begin as early as possible in the management process and continue long enough to provide a meaningful assessment of the relevant distant consequences of all interventions. If the value of the diagnostic components of the tests is to be assessed, the costs related to diagnostic imprecision should also be accounted for. It may be useful to provide data for both short-term (e. g. in hospital) and long-term costs. Sensitivity analysis must be performed to identify variables that most affect overall costs and to determine how much these variables can change before the conclusions become invalid. If effectiveness measures are included, they too should be subjected to sensitivity analysis. Using sensitivity-analysis results, physicians may verify whether study conclusions still apply to their patients. However, this implies that they must make the effort to obtain local estimates for the relevant variables. 
In this computer age, one may even wonder whether physicians might be able to verify the relevance of study conclusions by obtaining access to published decision models on the World Wide Web. In this way, they could “personalize” the model inputs to test which strategy is best for their practice.
#Conclusion
Cost-effectiveness analysis may provide an insight into the intricacies of providing the best possible health care within the financial constraints of various health-care delivery systems. However, there are several technical and practical limitations that must be considered when interpreting and applying the results - especially when they are being applied to gastrointestinal endoscopy and its alternatives. Despite this, it may be an important determinant of health-care resource allocation for payers and will likely remain in demand. Physicians must play an active role in performing these studies so that patients' interests, as well as their own, are given full consideration. However, whether cost-effectiveness studies “are worth it” depends ultimately on whether we as providers are prepared to choose cost-effectiveness over effectiveness alone.
#References
- 1 Kahn K L, Kosecoff J, Chassin M R, et al. The use and misuse of upper gastrointestinal endoscopy. Ann Intern Med. 1988; 109 664-670
- 2 Doubilet P, Weinstein M C, McNeil B J. Use and misuse of the term “cost effective” in medicine. N Engl J Med. 1986; 314 253-256
- 3 Sahai A V, Pineault R. An assessment of the use of costs and quality of life as outcomes in endoscopic research. Gastrointest Endosc. 1997; 46 113-118
- 4 Finkler S A. The distinction between cost and charges. Ann Intern Med. 1982; 96 102-109