Priority setting in health care: Lessons from the experiences of eight countries

Abstract

All health care systems face problems of justice and efficiency related to setting priorities for allocating a limited pool of resources to a population. Because many of the central issues are the same in all systems, the United States and other countries can learn from the successes and failures of countries that have explicitly addressed the question of health care priorities.

We review explicit priority setting efforts in Norway, Sweden, Israel, the Netherlands, Denmark, New Zealand, the United Kingdom and the state of Oregon in the US. The approaches used can be divided into those centered on outlining principles and those centered on defining practices. To establish the main lessons from these experiences we consider (1) the process each country used, (2) criteria for judging the success of these efforts, (3) which approaches appear to have met these criteria, and (4) how to proceed in setting priorities, using their successes and failures as a guide. We find little evidence that establishing a values framework for priority setting has had any effect on health policy, nor is there evidence that priority setting exercises have led to the envisaged ideal of open and participatory public involvement in decision making.

The challenge

One main challenge for health care systems is that resources are limited, making it impossible to provide everyone with every effective intervention they might need or want. Scarcity raises questions of justice and efficiency: how should limited health care resources be allocated? What health services should be publicly funded? How should indications for particular interventions be defined? [1–6].

Priority setting or rationing in health care continues to be a politically charged topic, but recently its necessity has gained wider recognition [7–11]. Explicitly addressing priority setting is necessary to develop fairer methods of allocation for scarce health care resources [7, 10, 12] and to begin a public dialogue to ensure legitimacy in the process [3, 5].

In this paper we examine seven countries (Israel, Norway, the Netherlands, Sweden, Denmark, New Zealand, and the United Kingdom) and one US state (Oregon) that have made explicit efforts to address health priorities. While their systems differ, many core allocation issues are the same [13–21]. The countries vary in how health care is financed (Table 1). Some, such as the UK and the Scandinavian countries, have tax based national health care systems, whereas others, such as the Netherlands, New Zealand and Israel, rely on various forms of universal social insurance. The priority setting approaches can be broadly grouped into two categories, and we discuss the efforts under these two headings: some countries, such as Norway, the Netherlands, Sweden, and Denmark, decided to develop principles that would guide prioritization efforts, while others, such as the UK, Israel, New Zealand and the state of Oregon, established bodies that would actually recommend what services should be provided within the system. In assessing their efforts we (1) describe the process each country or state used, (2) suggest broad criteria by which to judge the success of these efforts, (3) assess which approaches seem to have met these criteria, and (4) using their successes and failures as a guide, make recommendations for priority setting. In the country descriptions we focus on the structure of each process and the principles and values guiding it. In the evaluation section we assess the actual impact of the priority setting exercises.

Table 1 Health expenditure data for 2003*

Explicit priority setting efforts: the outlining principles approach

Since the late 1980s many governments have instituted transparent and explicit discussions about priorities for health care [14, 22]. These efforts took different forms: all included health care experts, but they differed in the inclusion of government officials and public representatives (Table 2) and in the details of the frameworks they outlined (Table 3). In most, if not all, of these countries the priority setting efforts started in response to political pressure to address the issue. In the UK, and to a certain extent the Scandinavian countries, newspapers continuously reported cases in which patients were denied potentially life saving treatments, such as bone marrow transplantation for certain cancers. In the UK there were, in addition, reports of differential access in different parts of the country, labeled rationing by post-code. In Norway, ever-expanding waiting lists for treatment created political pressure for a system that would prioritize patients on waiting lists. In countries such as the Netherlands and Israel, new legislation regarding health insurance created a need to decide what services should be provided in the package offered to all citizens. In Oregon, although Medicaid would provide expensive treatments for all those covered by the scheme, not everyone was covered, and the effort was launched in part to eliminate high cost, low effectiveness interventions and use the resulting savings to increase the number of people covered by Medicaid.

Table 2 Processes for priority setting discussions
Table 3 Overview of centralized priority setting efforts

Norway

In 1987, in the context of increased demand for health care resources and the question of how to prioritize their use, the Norwegian government convened the Lønning Commission, the first body to set forth principles for prioritization and discuss their implementation [23, 24]. The commission decided to use severity of condition as the exclusive basis for prioritization, outlining five priority groups:

▪ Emergency care for life-threatening diseases

▪ Treatment which prevents catastrophic or very serious long-term consequences (example: cancer)

▪ Treatment which prevents less serious long-term consequences (example: hypertension)

▪ Treatment with some beneficial effects (example: common cold)

▪ Treatment with no documented effects

They believed this division could guide funding decisions for various treatments [24]. Ten years after the first, Norway convened a second commission. This commission acknowledged the need to take into account potential effect and cost-effectiveness as secondary principles to be balanced with severity, and introduced four priority groups: core or fundamental services, supplementary services, low priority services, and services with no priority. The commission attempted to define what services should be included in the first category by providing three criteria:

▪ Risk of dying from disease within five years is more than 5–10% (severity)

▪ Increase in probability of five year survival is greater than 10% (expected treatment outcome)

▪ Costs reasonable in relation to benefits

This second commission focused more on the process of setting priorities than on the principles used to set them [23, 25]. One of its suggestions was that the process of priority setting should start by establishing clinician groups that would set priorities within their specialties.
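To make the logic of the three criteria above concrete, the sketch below applies them as simple thresholds to a candidate intervention. It is an illustration only: the data structure, the example figures and, in particular, the cost ceiling standing in for "reasonable" cost are our own assumptions, not anything specified by the commission.

```python
from dataclasses import dataclass

@dataclass
class Intervention:
    name: str
    five_year_mortality_risk: float  # probability of dying from the disease within 5 years, untreated
    survival_gain: float             # absolute increase in probability of 5-year survival with treatment
    cost_per_survivor: float         # cost per additional survivor, in arbitrary currency units

def is_core_service(iv: Intervention,
                    severity_threshold: float = 0.05,
                    benefit_threshold: float = 0.10,
                    cost_ceiling: float = 500_000) -> bool:
    """Apply the commission's three criteria: severity, expected treatment
    outcome, and cost in relation to benefit. The cost ceiling is invented;
    the commission only required that costs be 'reasonable'."""
    severe_enough = iv.five_year_mortality_risk >= severity_threshold
    effective_enough = iv.survival_gain >= benefit_threshold
    cost_reasonable = iv.cost_per_survivor <= cost_ceiling
    return severe_enough and effective_enough and cost_reasonable

# A hypothetical intervention that meets all three criteria
example = Intervention("hypothetical therapy", five_year_mortality_risk=0.20,
                       survival_gain=0.15, cost_per_survivor=300_000)
print(is_core_service(example))  # True under the assumed thresholds
```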

Netherlands

In 1990, the Netherlands established the Committee on Choices in Health Care, the so-called Dunning Committee, to discuss methods and principles for setting priorities. That year, the Dutch Cabinet decided to include approximately 95% of available health services in the publicly funded package [26, 27]. The committee felt that to ensure all necessary services could be readily provided, non-essential services should be eliminated from the package. It delineated four priority principles: necessity, effectiveness, efficiency, and individual responsibility. It described these principles as forming a sieve for sifting out services that should not be publicly funded ("Dunning's funnel") [27]. The idea was that the principles should be applied successively, beginning with necessity, each one further limiting the number of services provided. The principle of necessity was defined very broadly, covering essentially any intervention that could provide some medical benefit. With regard to effectiveness, the committee distinguished between interventions for which there was evidence of an effect, limited evidence, and no evidence. The set of services was then narrowed to those that gave value for money, by funding only efficient services, and finally, services that were best dealt with by individuals themselves were excluded. This last criterion was not meant to exclude care for the consequences of lifestyle choices, such as illness caused by poor eating habits. Rather, it was meant to exclude services that could easily be paid for by individuals themselves. One example the committee gave was routine adult dental care: although such care is necessary, effective and efficient, adults can easily pay for it out of pocket, and it is therefore best left to individual responsibility. There was also a strong focus on solidarity and an emphasis on approaching macro-level decisions from the community point of view. While Dutch attitudes seemed to be shifting towards greater reliance on personal responsibility, the Dunning Committee did not want changes based on this shift to overlook the needs of vulnerable populations [26].
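Read as an algorithm, the funnel is simply four filters applied in a fixed order, with a service dropped at the first criterion it fails. The sketch below illustrates only that sequential logic; the boolean judgments attached to each example service are invented for illustration, and the committee's actual application of the criteria involved judgment at every step.

```python
def dunning_funnel(services):
    """Apply the four Dunning criteria in order: necessity, effectiveness,
    efficiency, and individual responsibility. Each service carries
    already-made boolean judgments for each criterion."""
    package = []
    for s in services:
        if not s["necessary"]:
            continue  # first sieve: not necessary from the community point of view
        if not s["effective"]:
            continue  # second sieve: no evidence of effect
        if not s["efficient"]:
            continue  # third sieve: does not give value for money
        if s["individual_responsibility"]:
            continue  # fourth sieve: affordable out of pocket, e.g. routine adult dental care
        package.append(s["name"])
    return package

# Hypothetical services with invented judgments
services = [
    {"name": "routine adult dental care", "necessary": True, "effective": True,
     "efficient": True, "individual_responsibility": True},
    {"name": "hip replacement", "necessary": True, "effective": True,
     "efficient": True, "individual_responsibility": False},
]
print(dunning_funnel(services))  # ['hip replacement']
```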

Sweden

In 1992, Sweden convened the Parliamentary Priorities Commission. Much work on substantive issues was left to local health authorities; however, the central government commission outlined three platform principles: human dignity, need and solidarity, and cost-efficiency. The commission defined five priority groups based on the type of disease or treatment in question [17]. While cost-efficiency was listed as a platform principle, the commission was clear that cost should only be considered when comparing treatments for the same condition, and that measures of effectiveness that attempt to quantify quality of life, such as quality adjusted life years (QALYs), should not be used. As in the Netherlands, there was an emphasis on solidarity and the needs of vulnerable populations. In 1994, Sweden convened a second commission, which focused more on eliciting public opinion, a factor that had played virtually no role in the work of the first commission [21].

Denmark

In 1996 the Danish Council of Ethics laid out values that should form the basis of the health service: equality, solidarity, security, and autonomy. From these values, the council derived a general goal for the health service, framed in terms of "opportunity for self-expression...irrespective of...social background and economic ability" [28]. In working towards this general goal, the council considered a number of "partial goals", including equity, quality, cost-effectiveness, and democracy. The council was explicit that these secondary considerations must be "balanced against each other," and that even once goals are agreed upon it can "become difficult when these goals and partial goals are to be translated into decisions with consequences for everyday life in the health service" [28]. However, it did not specify methods for choosing between them.

Explicit priority setting efforts: the defining practices approach

Rather than begin with abstract principles, Israel, New Zealand, the UK and the state of Oregon confronted priority setting in the context of concrete allocation decisions, such as defining a package of publicly funded services or establishing clinical guidelines.

Oregon

The experience of priority setting in Oregon's Medicaid program, starting in the early 1990s, represents the most explicit, as well as one of the most controversial, examples of priority setting in health care in the US. The goal of the program was to extend coverage to all Oregonians below 100% of the federal poverty level (rather than the previous 58% of FPL) by limiting coverage to a basic bundle of services determined by the Medicaid budget and a cost-effectiveness ranking of available medical services [29–32]. The Oregon process was concerned with incorporating public values from the beginning, and the Oregon Health Services Commission sponsored eleven public hearings and forty-seven town meetings and fielded a telephone survey before the initial rankings were decided [29]. The information gathered was used in developing the Quality of Well-Being Scale, which was used to determine the cost-effectiveness ranking of condition-treatment pairs and thereby the services that would be covered by Medicaid [33]. This initial method was, however, abandoned because of public outcry over the resulting ranking of services. In response to this public reaction, as well as criticism of the methodology, the Commission set out to rank health care services on more broadly defined criteria, in which expert knowledge and the Commissioners' intuitive judgments about appropriateness played a much larger role. While the program of rationing Medicaid services in Oregon was successful in covering a larger population and reducing the number of uninsured, it was extremely controversial and faced a number of political and practical roadblocks along the way [32]. The Oregon Health Services Commission continues to this day and continues to update its prioritization methodology and the resulting list of prioritized services. There are several community representatives on the Commission, although its ongoing work is not accompanied by as much public discussion as was the original implementation of the program. Oregon thus illustrates both the potential and the difficulty of implementing explicit priority setting in the US context.
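The initial Oregon method amounted to ranking condition-treatment pairs by expected benefit (on the Quality of Well-Being scale) per dollar and funding pairs from the top of the list down until the Medicaid budget was exhausted. The sketch below shows only that ranking-and-cutoff logic, with entirely invented numbers; the Commission's actual methodology was more elaborate and, as noted above, was later replaced by more judgment-based rankings.

```python
def rank_and_fund(pairs, budget):
    """Rank condition-treatment pairs by benefit per dollar and fund down
    the list until the budget runs out (a simplified Oregon-style cutoff)."""
    ranked = sorted(pairs, key=lambda p: p["qwb_gain"] / p["cost"], reverse=True)
    funded, spent = [], 0.0
    for p in ranked:
        projected = spent + p["cost"] * p["expected_cases"]
        if projected <= budget:
            funded.append(p["pair"])
            spent = projected
    return funded

pairs = [  # hypothetical condition-treatment pairs with invented benefit and cost figures
    {"pair": "appendicitis / appendectomy", "qwb_gain": 0.70, "cost": 6_000, "expected_cases": 500},
    {"pair": "chronic back pain / surgery", "qwb_gain": 0.05, "cost": 20_000, "expected_cases": 800},
    {"pair": "strep throat / antibiotics", "qwb_gain": 0.02, "cost": 40, "expected_cases": 20_000},
]
print(rank_and_fund(pairs, budget=5_000_000))
# ['strep throat / antibiotics', 'appendicitis / appendectomy']
```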

New Zealand

In 1993, New Zealand established its first National Advisory Committee on Core Health and Disability Support Services to evaluate which services should be included in the publicly funded health package. While recognizing that existing practices represented an ad hoc list of priorities [34], the Core Services Committee started with this list and worked to identify: 1) discrepancies in provision and management of services (between Maori and non-Maori, men and women, urban and non-urban populations, populations in different geographic regions, etc.), 2) areas where efficiency could be improved, and 3) preferences of communities regarding health care.

While the committee in New Zealand discussed principles in much the same way as other commissions, the discussion occurred in the context of making recommendations about covered services. The committee rejected the idea of having an "Oregon-like" list of services constitute the basic package, but did define eligibility criteria for coverage of specific services. In order to make appropriate recommendations, the Committee looked at unit cost and volume of treatment data for common conditions and identified areas where efficiency could be improved [34, 35]. They also used information on public values and opinions, gathered through public meetings, to inform their recommendations. Subsequently, the Committee, renamed the National Health Committee (NHC) in 1996, met yearly to reevaluate and recommend changes to publicly funded health services based on new information or developments.

Israel

In 1995 Israel passed a National Health Insurance (NHI) Law, guaranteeing health insurance coverage to all citizens. Insurance is provided by competing private sick funds, with the government acting as the single payer. Under the NHI law, all of the sick funds are required to provide the same basic basket of services to enrollees, with services outside the basic basket available at additional cost to the individual. When the law was first adopted, the extensive list of services offered by the largest existing sick fund at the time was taken as the basic basket covered under the law. Thus, as in New Zealand, no explicit process for deciding on the basic basket was undertaken at the time, though it was recognized that a process was needed for updating the basket of services [36, 37].

The Ministry of Health created a process to undertake technology assessment and make recommendations regarding the updating of the basic basket of services. Under this system new technologies are screened on a regular basis in order to identify those with no existing alternative or with significant clinical efficacy compared with existing technologies. The assessment of these technologies includes the "evaluation of evidence-based clinical, epidemiological and economic aspects" [[38], p 172]. After several ad-hoc teams assess these aspects of the identified set of new technologies, their evaluations are passed on to the Medical Technology Forum, chaired by the Director of Medical Technology and including senior officials, managers, and researchers in technology assessment. Using a set of guiding criteria (including considerations such as the potential of the technology to prevent mortality or morbidity, the number of patients to benefit, the financial burden on society and the individual patient, and whether the net gain to society is higher than the cost) the Forum grades each technology on a scale of 1–10 and categorizes each as high, intermediate, or low priority. The rankings are then passed on to a National Advisory Committee, which, as of 1999, includes public representatives in addition to officials from the Ministries of Health and Finance and the sick funds. The Committee takes into account the assessments and recommends whether new technologies should be included in the basic basket, as well as limitations on their use [38].
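The grading step can be thought of as combining scores on the guiding criteria into a single 1–10 grade and then mapping that grade onto the three priority categories passed to the National Advisory Committee. The sketch below illustrates this two-stage logic with invented weights and cut-points; the published criteria are qualitative, and the Forum's actual weighting is a matter of expert judgment, not a formula.

```python
def grade_technology(scores, weights=None):
    """Combine criterion scores (each 0-10) into a single 1-10 grade using
    a weighted average. Weights and scale are assumptions for illustration."""
    weights = weights or {name: 1.0 for name in scores}
    total_weight = sum(weights.values())
    grade = sum(scores[name] * weights[name] for name in scores) / total_weight
    return max(1.0, min(10.0, grade))

def priority_category(grade, high_cutoff=7.5, low_cutoff=4.0):
    """Map a 1-10 grade onto high/intermediate/low priority (cut-points invented)."""
    if grade >= high_cutoff:
        return "high"
    if grade >= low_cutoff:
        return "intermediate"
    return "low"

scores = {  # a hypothetical new technology scored against the guiding criteria
    "prevents_mortality_or_morbidity": 8,
    "number_of_patients_benefiting": 6,
    "reduces_financial_burden": 5,
    "net_gain_to_society_vs_cost": 7,
}
grade = grade_technology(scores)
print(grade, priority_category(grade))  # 6.5 intermediate
```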

United Kingdom

From its early days, rationing was a contentious political issue in the British National Health Service (NHS) [15, 39]. When other countries began convening commissions on priority setting, some called for the UK to follow [19, 40–43]. Instead, in 1999 the National Institute for Clinical Excellence (NICE, now the National Institute for Health and Clinical Excellence) was established to: 1) appraise new health technologies, 2) develop clinical guidelines, and 3) assess interventional procedures [44, 45]. In conducting these activities, NICE addresses questions including what constitutes necessary and appropriate care, how to incorporate cost-effectiveness considerations, and what interventions should be publicly funded. NICE also includes various pathways for public input.

Criteria for evaluating priority setting efforts

How well did these priority setting efforts succeed? We propose three criteria for evaluating them. The first criterion is adequate public input in the priority setting exercise. The second criterion is appropriate principles, including the incorporation of an evaluation of the costs and benefits of interventions. The third criterion concerns the effect of the prioritization effort: has it actually had an impact on policy and practice, including the establishment of a review process to evaluate performance (Table 4)? Let us briefly justify the selection of these criteria.

Table 4 Criteria by which to judge priority setting efforts

It has often been stressed in the prioritization literature that it is necessary to engage the public in order to gain acceptance for the often painful choices that need to be made. This can be achieved through different mechanisms, including, for example, group exercises in choosing hypothetical health care packages [3, 46–48]. Not only is it necessary to educate the public about the need for prioritization, it is also generally accepted that the public should have a real influence on how these choices are made. Most commentators reject an approach in which these decisions are made by technocrats behind closed doors without public input. While it can be difficult to realize, public engagement with prioritization issues is necessary to ensure fairness and legitimacy [5, 10, 49–51]. Exactly how this is done can vary. One may simply attempt to elicit the views of the public and incorporate these into decision making [52], or one may aim at a more deliberative process in which consensus is reached after public dialogue. Still unresolved are the questions of how extensive public involvement in the priority setting process should be and who best represents the public's views. There are also problems of ensuring that avenues for public input established on paper are implemented and that input reflects broad and relevant community views [52–55].

Principles and procedures are supposed to help ensure that prioritization is consistent with society's values and goals for the health care system. In fact, most commissions were established precisely to articulate shared values on which prioritization decisions can be based. Producing value for money is central to efficiently allocating health care resources and is likely to lead to fairer allocation as well. We have therefore noted specifically how the commissions have dealt with the issue of cost. A successful approach will integrate cost considerations, rather than acknowledging the issue but avoiding the task of addressing it directly. Despite its limitations, cost-effectiveness analysis (CEA) is currently the best developed and most widely used approach for assessing whether interventions produce value for money [56–62].
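As a reminder of the basic arithmetic behind CEA, the sketch below computes the incremental cost-effectiveness ratio (ICER) of a new intervention relative to current care and compares it against a willingness-to-pay threshold. The figures and the threshold are invented for illustration; real CEA also involves discounting, uncertainty analysis and choice of perspective, none of which is shown here.

```python
def icer(cost_new, qalys_new, cost_old, qalys_old):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY gained."""
    return (cost_new - cost_old) / (qalys_new - qalys_old)

# Hypothetical comparison of a new treatment against standard care
ratio = icer(cost_new=30_000, qalys_new=6.0, cost_old=18_000, qalys_old=5.5)
threshold = 50_000  # illustrative willingness-to-pay per QALY, not any agency's official figure
verdict = "acceptable" if ratio <= threshold else "not acceptable"
print(f"{ratio:.0f} per QALY gained: {verdict} at the assumed threshold")
# 24000 per QALY gained: acceptable at the assumed threshold
```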

Discussions about how to fairly and efficiently set priorities serve little purpose if they do not impact how priorities are established. Hence, exercises in priority setting must be concretely linked to policy and practice, and we expect that the work of government commissions will have some influence on what interventions are covered. In some cases, such as for NICE in the UK, the bodies have a direct influence on coverage decisions, and are set up to influence health policy. For commissions given the task of establishing a framework for priority setting, the influence will have to be more indirect. One important result would be if the work has led to an increased awareness among the general public about the need for priority setting, as we noted above. Another result would be if some of the concrete recommendations by the commission were subsequently adopted. Because of the controversial nature of most prioritization decisions, it is particularly important to note what procedures have been established for review and appeal of decisions reached.

Evaluating the eight efforts on priority setting

The eight priority setting efforts we consider vary in how well they fulfilled these criteria.

Solicitation of public input and promotion of public discussion and education

To fully evaluate the extent to which the priority setting bodies have promoted public discussion and education, we would have to distinguish between what the commissions did to encourage discussion during their tenure and any effects the commissions have had on subsequent public involvement. Here we confine ourselves to involvement during the existence of the commissions and the structures set up by the standing bodies that were established. All the priority setting bodies recognized the importance of transparency in setting priorities, emphasizing that public discussion is needed to make the need for priority setting clear. The Swedish commission, for example, emphasized that public discussion of priorities helps clarify the reasons and methods on which priority setting decisions are based [21]. Similarly, the Dutch committee emphasized the importance of introducing the topic of priorities into public discussion, not only to make people aware of the need for prioritization, but also to encourage individuals to make their own choices regarding health care options [27]. The commissions, however, varied in the way the public was involved.

While Norway involved public representatives on its commissions, the Norwegian commissions only discussed the importance of public education, whereas the Danish Council actually held public meetings and distributed materials on priority setting [28]. The commissions in the Netherlands and Sweden actively incorporated feedback from public meetings and surveys into their deliberations [21, 27]. The second Swedish commission referred to four public surveys, two of which it funded, and held five regional conferences [21, 63]. The Dutch committee set out a plan for a long-term program to solicit feedback from various consumer groups, including women, the elderly, the "low-involved," and different patient groups, by beginning a discussion primarily through the media and then soliciting feedback through various meetings [27].

One of the stated goals of New Zealand's Core Services Committee was to ensure that core health services "reflect the diverse needs and values of the population being served" [34]. The first step in this effort was to inform the public and engage them in a discussion of health services. The New Zealand committee held periodic "Best of Health" public meetings based around discussion documents, and the views expressed at these meetings were part of the committee's considerations in making recommendations regarding core services [64]. Further, it recommended that the Health Authorities in charge of funding services continue to consult with communities, including Maori communities, in different regions [34, 65, 66]. The NICE model of public involvement allows for public input at different levels on both broad principles and specific guideline development. Input is solicited by including lay members with relevant experience on guideline committees, posting draft guidelines on NICE's website for public feedback before final guidelines are issued, and convening Citizens Councils, composed of 30 individuals representing the public, to discuss questions such as how to define and evaluate clinical need for treatment or what role age should play in clinical decisions [44].

In Israel, explicit public involvement was not built into the establishment of the NHI Law in 1995, though the importance of involving members of the public in the subsequent decision making about the basic basket was quickly recognized. Thus, beginning in 1999, a quarter of the members of the National Advisory Committee that makes the final recommendation to the Minister of Health regarding which new technologies should be added to the basic basket have been public representatives with no medical background [38].

Establishment of principles

With the exception of the UK, all of the countries considered here started the discussion by establishing a set of principles on which to base priorities [14, 66]. In the UK this discussion has gone on through other avenues, but has not been centralized or systematic [39]. The principles cited by the commissions include a range of medical, philosophical, and economic factors [17, 21, 27, 28, 34].

It seems unreasonable to base prioritization on a single principle. Indeed, when the Norwegian Commission attempted to formulate a system based exclusively on severity, it found that important considerations, such as effectiveness of interventions and cost, were excluded, and it saw the need to add additional principles. With multiple principles, the challenge is determining how to balance them when they conflict. As the Danish council pointed out, balancing "become[s] difficult when [principles] are to be translated into decisions with consequences for everyday life in the health service" [[28], p 57]. While the commissions acknowledged the challenge of balancing, none solved the problems they identified.

In general there was hesitancy to place much weight on CEA, owing to discomfort and uncertainty in dealing with cost. In Norway, the first commission avoided cost [24], and only after negative responses did it add the secondary principles of potential effect and cost-effectiveness. The Danish council discussed problems with using cost related analyses, including uncertainty about what measure of utility to use and the need for more information; rather than addressing these problems, it was hesitant to endorse the use of any economic analyses [28]. The initial Swedish report stated that cost should only be a deciding factor when "all else is equal" [21]. In both Denmark and Sweden, the commissions specified that cost should only be considered when comparing treatments for the same illness, such as comparing a titanium hip prosthesis to a less expensive but less durable steel prosthesis [21]. While Israel's process does not require that cost-effectiveness specifically be taken into account in making decisions, it does include an economic evaluation that considers the overall cost of including a new technology in the basic basket by comparing it with currently available treatments; in a small number of cases, explicit cost-effectiveness or cost-benefit analysis is conducted as well [38]. The Dutch committee in principle allowed a lack of cost-effectiveness to determine that an intervention should not be covered.

Only New Zealand lists cost-effectiveness as a primary consideration [34]. NICE explicitly integrates cost in every case of guideline development and technology assessment [44, 67]. Still, the use of CEA in NICE guidelines has been controversial in the past; its recommendation against using beta-interferon for treatment of multiple sclerosis based on cost-effectiveness grounds, for example, caused an outcry from MS groups, and its final recommendation was controversial among the medical and research communities as well [34, 65, 68–72].

Impact on policy and practice

Because the principles they outlined were so abstract, the government commissions that took this approach have had little direct impact on their countries' policies. For example, the Danish Council of Ethics explicitly noted that its partial goals would sometimes conflict, yet did not set out methods for implementing them in such cases [28]. Even when a commission outlined priority groups for use in practice, the guidelines were so broad as to be useless in resolving difficult prioritization questions [17, 24, 27]. In Norway, for example, the division into fundamental, supplementary, low priority and no priority groups did little to help resolve questions of choice in individual circumstances. Further, the lack of political tension within the Swedish Commission, which included parliamentary representatives of all major parties, was seen by many as a sign that the group avoided controversial issues central to priority setting [23].

After the early rounds of priority setting discussions, the principles set forth by the committees had little direct effect on health care planning and provision. According to Berg and van der Grinten, the criteria that make up Dunning's funnel were difficult to implement because of contention about the definition of "necessary" and the difficulty of making macro level judgments about effectiveness and efficiency in specific cases, or about whether an intervention can be left up to individual responsibility. Even when the Dutch government took the approach of the Dunning committee seriously, "problematic substantive criteria..., financial considerations...and political pressures ...made it very difficult to remove services from the package" of publicly funded health services [[26], p 124]. In general, governments and planning groups continued to make decisions about coverage based on a host of other factors, including political concerns and media pressure [14, 61, 73].

After a decade of discussions and repeat performances by some commissions, little progress had been made. Changes in health services after recommendations were issued did not reflect the extensive discussions and proposals put forth by the commissions [73, 74]. One study found that less than half of the new medical technologies actually implemented in Norway between 1993 and 1997 fit the Lønning definition of core services [74]. "By 2002, few of the recommendations [of the Lønning Commissions in Norway] had been implemented" [24]. In particular, specific priority setting committees were supposed to issue recommendations for different fields of medicine, but as of 2007 this had not been done. While the discussion of priority setting was successfully started, it is not clear that the government commissions had any significant impact on actual practices at the policy, planning, or clinical levels.

The approaches of Israel, the state of Oregon, New Zealand and the UK affected policy and practice most directly. For example, by making specific recommendations regarding covered services in New Zealand and establishing specific clinical guidelines in the UK, the groups in these countries have anchored the discussion of priorities in concrete policy determinations. Yet, within NICE, there is no systematic review of existing health technologies; thus there is a bias towards reviewing only new technologies. While the UK is a step ahead in affecting practice, there are still improvements to be made in the system.

All the countries that have set up bodies that decide on priorities, and the state of Oregon, have ongoing processes that are conducive to iterative reflection on past successes and failures. In Israel, new technologies are screened and assessed for inclusion in the basic basket of services on a regular basis. Similarly, the ongoing nature of NICE creates opportunities for review and evaluation of the process, though the review has been less systematic than in New Zealand. The yearly reevaluation of the health services and discussion of other health care issues by the NHC in New Zealand is the best example of effective review and evaluation [14, 41]. For example, in 1996 the Committee recommended against population screening for prostate cancer using the prostate specific antigen (PSA) test, but recommended the question be kept under review [75]. In 2001 the NHC began a review that culminated in a 2004 report echoing their earlier recommendation against population-wide PSA screening [53, 76, 77]. Rather than attempting to settle questions of prioritization with one discussion, the New Zealand Committee has established an iterative process that allows priority setting to evolve with medical and political changes.

Finally, one should note that the priority setting efforts had little effect on the political problems that actually led to their establishment in the first place. The Norwegian effort did not solve the problem of waiting lists, the Dutch politicians did not use the criteria of the Dunning committee to decide what should be included in the package of health services, and the Oregon effort did not generate, by eliminating interventions from the services provided, substantial savings that could be used to increase the number of people covered by Medicaid [24, 26, 78].

Future directions for priority setting

What can we conclude from this examination of eight priority setting efforts? First, the bodies established to recommend or decide on prioritized interventions have been relatively successful. The key to ensuring impact on policy and practice is therefore to establish bodies with some decision making power over what is actually implemented in the health care system. Although controversial, the policies in the state of Oregon, Israel, New Zealand and the UK have been largely accepted. Second, and in agreement with an apparent consensus in the literature, the formulation by public bodies of abstract priority setting principles will not have much impact on policy. In the words of Søren Holm, "The Danish Council argued that none of the priority-setting systems which had been produced were really operational, and all suffered from one or both of two fatal flaws: .... They were based on a simplistic view about the purpose of the health care system; and/or they did not really give any specific guidance as to how one should prioritize" [[79], p. 31]. None of the commissions given the task of formulating a principled framework for priority setting has had much impact on health policy in its country. This largely negative conclusion has led many, including the Danish Council and the second Norwegian Commission as well as commentators such as Norman Daniels, to advocate a different approach [80]. Again, according to Søren Holm, "If we cannot find rule-based systems which can legitimize the decisions, we will instead have to devise priority-setting processes that can lend legitimacy to the outcome." Holm goes on to quote the Danish Council's description of such a process:

There should be an effort to ensure that decision-makers at different levels be aware – informed – of which priority-setting consequences different decisions entail. The issue is ensuring clearness, the necessary information being available, and that analyses have been executed of which consequences different decisions entail. At the same time the public should also be ensured a higher level of information on which decisions are made at which levels, and which reasons there are for the individual decisions. Such openness is crucial to ensuring that individual decisions can be subjected to criticism and possibly changed on the basis of the public debate. For this reason great importance should also be attached to the health planning in the counties being organized in such a way as to ensure the possibility for common citizens to participate in the decision making process, for instance at hearings and public meetings. [[79], p.34]

Our third conclusion is, however, that such a process, in spite of its attractiveness and in spite of the emphasis placed on public involvement in the prioritization literature, has not really been implemented in any of the priority setting exercises examined here. The two commissions mentioned by Holm as proponents of this approach, in Norway and Denmark, have not, a decade after their reports, implemented any process with even the most rudimentary elements of the public process described above. In fact, the recommendation in Norway was that this process should be expert driven, and not involve much public debate. The processes implemented in the countries with actual priority setting bodies, Israel, New Zealand, the UK and the state of Oregon in the US, also do not fit the description of an open and transparent process with public discussion of, and decisions about, the trade-offs that need to be made. New Zealand, Israel and Oregon have largely delegated decisions about what services should be included in the health care package to a body of experts, with few, if any, possibilities for appeal. NICE in the UK, which perhaps comes closest to the ideal of a process for evaluating new technologies, has developed a structure of public involvement at all levels, and decisions can be appealed. However, the public is not engaged in the envisaged debate about what trade-offs need to be made and how to balance different principles. The basis for appeal is also very narrow; appeals can succeed only if the committee has made obvious mistakes. The three grounds for appeal are:

▪ The Institute has failed to act fairly and in accordance with its procedures;

▪ The Institute has prepared guidance which is perverse in the light of the evidence submitted; and

▪ The Institute has exceeded its powers.

NICE largely limits itself to examining new technologies for their cost-effectiveness, with the decision resting mainly on technical evidence of proven effect and on the costs of the procedure in relation to a more or less pre-set level of cost-effectiveness. In that respect it resembles a traditional technology assessment agency more than a body with a mandate to involve the public in an open dialogue about the difficult moral choices in health care priority setting. In Israel, about a third of the members of the National Advisory Committee have no medical background. In its advisory role, however, the committee has largely relied on the expert judgments of the Medical Technology Forum, and the final decision about what to include for reimbursement has been reached without public discussion or involvement.

Against this, one might argue that the case of Oregon demonstrates the success of a community led process as opposed to a technocrat led process. The initial ranking, based on a mechanical application of cost-benefit calculation, was abandoned in favor of judgments by a panel with a sizable proportion of community members. This revised methodology was accepted and has been used successfully during the subsequent decade. In spite of this, there are two reasons why Oregon is not a counter-example to the position taken in this paper. First, the Health Services Commission has recently recognized that it is necessary to place much greater emphasis on the evidence for the effectiveness of interventions and their cost-effectiveness [81]. One can therefore expect that the recommendations of experts will play a much larger role in deciding the prioritized list. Second, although there were a large number of public hearings and much public input during the initial work of the Commission, the work today is largely carried out without much public debate. Furthermore, the initial discussion probably had more to do with gaining acceptance and legitimacy for the process than with public deliberation about the conflicting values and reasons that would initially lead different people to different conclusions but then, through open public debate, result in consensus about the list of priorities [78].

What does this mean for the future direction of health care priority setting? In our view, it suggests that we should perhaps reevaluate the place for some type of expert led model of implicit rationing and priority setting in health care. On the one hand, the experience of the priority setting commissions of countries such as Norway, the Netherlands, and Denmark suggests that we will never reach agreement about an explicit framework for priority setting. Although these countries did not establish any priority setting bodies with decision-making power, the intention was that the recommendations of the commissions would be implemented. As we discussed above, this has not yet happened. On the other hand, the experience of actual priority setting efforts in Israel, New Zealand, Oregon and the UK suggests that this is best done by a group of experts who consider the evidence for the effectiveness of various interventions, without much public involvement and discussion. The experience in these countries shows that such an expert led process may be accepted by the general public. The acceptance we have seen in these countries could, of course, simply reflect the fact that the public feels powerless to influence the process, and some of the public criticism of specific decisions made by NICE may be a reflection of that fact. The relative success of this approach also does not mean that there should be no public involvement or appeal processes. All the priority setting bodies examined here involve the public and allow for appeal of decisions, but it is quite clear that this involvement is much less than is envisaged by those who advocate open and transparent processes involving the public. The challenge of health care prioritization would therefore seem to be to identify an appropriate balance between an expert led process and a process that emphasizes public involvement in decision making. We recognize that this conclusion is controversial and goes against much of the thinking in the current prioritization literature, where there is much more emphasis on public involvement. The examination of the prioritization efforts discussed in this paper, however, leads us to conclude that open and transparent deliberative processes with public discussion have not in fact been implemented. In spite of this, some countries have been able to achieve some degree of public acceptance of actual rationing. In light of this, one of the main challenges for the priority setting field is to propose appropriate levels of public involvement and appeal that are much less extensive than the usual rhetoric suggests, but that still ensure that the decisions reached are legitimate.

One key element of appropriate public involvement would probably be transparency in providing reasons for decisions. Even though there may not be much possibility of actually appealing a decision and reversing it, the possibility of public discussion and criticism of the justifications for decisions will in all likelihood influence priority setting bodies. Although such influence is largely indirect, in the long run it will probably be more substantial than the formal ability to directly appeal particular decisions.

References

1. Beauchamp TL, Childress JF: Principles of Biomedical Ethics. 5th edition. 2001, New York, Oxford University Press
2. Eddy DM: Clinical Decision-making: From Theory to Practice. The Individual vs Society. Resolving the Conflict. JAMA. 1991, 265: 2405-2396.
3. Eddy DM: Clinical Decision-making: From Theory to Practice. The Individual vs Society. Is There a Conflict?. JAMA. 1991, 265: 1446, 1449-1450.
4. Emanuel EJ: Justice and Managed Care. Four Principles for the Just Allocation of Health Care Resources. Hastings Center Report. 2000, 30: 8-16. 10.2307/3528040.
5. Fleck LM: Healthcare Justice and Rational Democratic Deliberation. American Journal of Bioethics. 2001, 1: 20-21.
6. Rawls J: A Theory of Justice. 1999, Cambridge, Belknap Press of Harvard University
7. Alexander GC, Werner RM, Ubel PA: The Costs of Denying Scarcity. Archives of Internal Medicine. 2004, 164: 593-596. 10.1001/archinte.164.6.593.
8. Anand G: Who Gets Health Care? Rationing in an Age of Rising Costs: Life Support: The Big Secret in Health Care: Rationing is Here; With Little Guidance, Workers On Front Lines Decide Who Gets What Treatment; Nurse Micheletti's Tough Calls. The Wall Street Journal. 2003
9. Bloche MG, Jacobson PD: The Supreme Court and Bedside Rationing. JAMA. 2000, 284: 2776-2779. 10.1001/jama.284.21.2776.
10. Fleck LM: Rationing: Don't Give Up. Hastings Center Report. 2002, 32: 35-36. 10.2307/3528521.
11. Freudenheim M: Broader Health Coverage May Depend on Less. New York Times. 2004
12. Fleck LM: Just Caring: Health Reform and Health Care Rationing. Journal of Medicine and Philosophy. 1994, 19: 435-443.
13. Dixon J, Welch HG: Priority Setting: Lessons From Oregon. Lancet. 1991, 337: 891-894. 10.1016/0140-6736(91)90213-9.
14. Ham C: Priority Setting in Health Care: Learning from International Experience. Health Policy. 1997, 42: 49-66. 10.1016/S0168-8510(97)00054-7.
15. Ham C: Priority Setting in the NHS: Reports from Six Districts. BMJ. 1993, 307: 435-438.
16. Hansson LF, Norheim OF, Ruyter KW: Equality, Explicitness, Severity, and Rigidity: the Oregon Plan Evaluated from a Scandinavian Perspective. Journal of Medicine and Philosophy. 1994, 19: 343-366.
17. Health Care and Medical Priorities Commission: No Easy Choices: The Difficult Priorities of Healthcare. 1993, Stockholm, Ministry of Health and Social Affairs
18. Honigsbaum F, Calltorp J, Ham C, Holmstrom S: Priority Setting Processes for Healthcare. 1995, Oxford, Radcliffe Medical Press
19. Klein R: Puzzling out Priorities. Why We Must Acknowledge that Rationing is a Political Process. BMJ. 1998, 317: 959-960.
20. Street A, Richardson J: The Value of Health Care: What Can We Learn from Oregon?. Australian Health Review. 1992, 15: 124-134.
21. Swedish Parliamentary Priorities Commission: Priorities in Health Care: Ethics, Economy, Implementation. 1995, Stockholm, Ministry of Health and Social Affairs
22. Ham C: What Can We Learn from International Experience?. Rationing Health Care. Edited by: Maxwell R. 1995, Edinburgh, Churchill Livingstone
23. Calltorp J: Priority Setting in Health Policy in Sweden and a Comparison with Norway. Health Policy. 1999, 50: 1-22. 10.1016/S0168-8510(99)00061-5.
24. Norheim OF: Norway. Reasonable Rationing: International Experience of Priority Setting in Health Care. Edited by: Ham C, Robert G. 2003, Philadelphia, Open University Press
25. Holm S: Goodbye to the Simple Solutions: the Second Phase of Priority Setting in Health Care. BMJ. 1998, 317: 1000-1002.
26. Berg M, van der Grinten T: The Netherlands. Reasonable Rationing: International Experience of Priority Setting in Health Care. Edited by: Ham C, Robert G. 2003, Philadelphia, Open University Press
27. Committee on Choices in Health Care: Choices in Health Care. Edited by: Ministry of Welfare HCA. 1992, Rijswijk
28. Danish Council of Ethics: Priority-setting in the Health Service. 1997
29. Brown LD: The National Politics of Oregon's Rationing Plan. Health Affairs. 1991, 10: 28-51. 10.1377/hlthaff.10.2.28.
30. Fox DM, Leichter HM: State Model: Oregon. The Ups and Downs of Oregon's Rationing Plan. Health Affairs. 1993, 12: 66-70. 10.1377/hlthaff.12.2.66.
31. Fox DM, Leichter HM: Rationing Care in Oregon: the New Accountability. Health Affairs. 1991, 10: 7-27. 10.1377/hlthaff.10.2.7.
32. Ham C: Retracing the Oregon Trail: the Experience of Rationing and the Oregon Health Plan. Reasonable Rationing: International Experience of Priority Setting in Health Care. Edited by: Ham C, Robert G. 1998, Buckingham, Open University Press
33. Blumstein JF: The Oregon Experiment: The Role of Cost-Benefit Analysis in the Allocation of Medicaid Funds. Social Science and Medicine. 1997, 45: 545-554. 10.1016/S0277-9536(96)00395-4.
34. National Advisory Committee on Core Health and Disability Support Services: Core Health and Disability Support Services for 1993/94. 1992, Wellington, National Advisory Committee on Core Health and Disability Support Services
35. National Advisory Committee on Core Health and Disability Support Services: Core Services for 1995/96. 1994, Wellington, National Advisory Committee on Core Health and Disability Support Services
36. Chinitz D, Israeli A: Health Reform and Rationing in Israel. Health Affairs. 1997, 16: 205-210. 10.1377/hlthaff.16.5.205.
37. Chinitz D, Shalev C, Galai N, Israeli A: The Second Phase of Priority Setting: Israel's Basic Basket of Health Services: the Importance of Being Explicitly Implicit. BMJ. 1998, 317: 1005-1007.
38. Shani S, Siebzehner MI, Luxenberg O, Shemer J: Setting Priorities for the Adoption of Health Technologies on a National Level - the Israeli Experience. Health Policy. 2000, 54: 169-185. 10.1016/S0168-8510(00)00109-3.
39. Klein R, Day P, Redmayne S: Managing Scarcity: Priority Setting and Rationing in the National Health Service. 1996, Buckingham, Open University Press
40. Ham C: Health Care Rationing. BMJ. 1995, 310: 1483-1484.
41. Klein R: Priorities and Rationing: Pragmatism or Principles?. BMJ. 1995, 311: 761-762.
42. McKee M, Figueras J: Setting Priorities: Can Britain Learn from Sweden?. BMJ. 1996, 312: 691-694.
43. Turnberg L, Lessof M, Watkins P: Physicians Clarify Their Proposal for a National Council for Health Care Priorities. BMJ. 1996, 312: 1604b-1605.
44. National Institute for Clinical Excellence. [http://www.nice.org.uk]
45. Rawlins M: In Pursuit of Quality: the National Institute for Clinical Excellence. Lancet. 1999, 353: 1079-1083. 10.1016/S0140-6736(99)02381-8.
46. Danis M, Biddle AK, Dorr Goold S: Enrollees Choose Priorities for Medicare. Gerontologist. 2004, 44: 58-67. 10.1159/000075618.
47. Danis M, Biddle AK, Dorr Goold S: Insurance Benefit Preferences of the Low-income Uninsured. Journal of General Internal Medicine. 2002, 17: 125-133. 10.1046/j.1525-1497.2002.10609.x.
48. Eddy DM: Clinical Decision-making: From Theory to Practice. Connecting Value and Costs. Whom Do We Ask, and What Do We Ask Them. JAMA. 1990, 264: 1737-1739. 10.1001/jama.264.13.1737.
49. Emanuel EJ: Choice and Representation in Health Care. Medical Care Research and Review. 1999, 56: 113-140. 10.1177/107755899773743909.
50. Emanuel EJ, Emanuel LL: Preserving Community in Health Care. Journal of Health Politics, Policy and Law. 1997, 22: 147-184.
51. Fleck LM: Just Caring: Oregon, Health Care Rationing, and Informed Democratic Deliberation. Journal of Medicine and Philosophy. 1994, 19: 367-388.
52. Ubel PA: The Challenge of Measuring Community Values in Ways Appropriate for Setting Health Care Priorities. Kennedy Institute of Ethics Journal. 1999, 9: 263-284. 10.1353/ken.1999.0021.
53. Abelson J, Eyles J, McLeod C, Collins P, McMullan C, Forest PG: Does Deliberation Make a Difference? Results From a Citizens Panel Study of Health Goals Priority Setting. Health Policy. 2003, 66: 95-106. 10.1016/S0168-8510(03)00048-4.
54. Abelson J, Forest PG, Eyles J, Smith P, Martin E, Gauvin FP: Deliberations About Deliberative Methods: Issues in the Design and Evaluation of Public Participation Processes. Social Science and Medicine. 2003, 57: 239-251. 10.1016/S0277-9536(02)00343-X.
55. Wailoo A, Roberts J, Brazier J, McCabe C: Efficiency, Equity, and NICE Clinical Guidelines. BMJ. 2004, 328: 536-537. 10.1136/bmj.328.7439.536.
56. Donaldson C, Currie G, Mitton C: Cost Effectiveness Analysis in Health Care: Contraindications. BMJ. 2002, 325: 891-894. 10.1136/bmj.325.7369.891.
57. Garber AM: Cost-Effectiveness and Evidence Evaluation as Criteria for Coverage Policy. Health Affairs (web exclusive). 2004
58. Garber AM, Phelps CE: Economic Foundations of Cost-effectiveness Analysis. Journal of Health Economics. 1997, 16: 1-31. 10.1016/S0167-6296(96)00506-1.
59. Gold MR, Siegel JE, Russell LB, Weinstein M: Cost-Effectiveness in Health and Medicine. 1996, New York, Oxford University Press
60. Nord E: Cost-Value Analysis in Health Care: Making Sense out of QALYs. 1999, New York, Cambridge University Press
61. Robinson R: Limits to Rationality: Economics, Economists and Priority Setting. Health Policy. 1999, 49: 13-26. 10.1016/S0168-8510(99)00040-8.
62. Ubel PA: Pricing Life: Why It's Time for Health Care Rationing. 2000, Cambridge, MIT Press
63. Rosen P, Karlberg I: Opinions of Swedish Citizens, Health-care Politicians, Administrators and Doctors on Rationing and Health-care Financing. Health Expectations. 2002, 5: 148-155. 10.1046/j.1369-6513.2002.00169.x.
64. Edgar W: Rationing Health Care in New Zealand - How the Public Has a Say. The Global Challenge of Health Care Rationing. Edited by: Coulter A, Ham C. 2000, Philadelphia, Open University Press
65. Ashton T, Cumming J, Devlin N: Priority-setting in New Zealand: Translating Principles into Practice. Journal of Health Services Research and Policy. 2000, 5: 170-175.
66. Ham C, Robert G: Reasonable Rationing: International Experience of Priority Setting in Health Care. 2003, Buckingham, Open University Press
67. Devlin N, Parkin D: Does NICE Have a Cost-effectiveness Threshold and What Other Factors Influence Its Decisions? A Binary Choice Analysis. Health Economics. 2004, 13: 437-452. 10.1002/hec.864.
68. Birch S, Gafni A: The 'NICE' Approach to Technology Assessment: an Economics Perspective. Health Care Management Science. 2004, 7: 35-41. 10.1023/B:HCMS.0000005396.69890.48.
69. Birch S, Gafni A: On Being NICE in the UK: Guidelines for Technology Appraisal for the NHS in England and Wales. Health Economics. 2002, 11: 185-191. 10.1002/hec.706.
70. Ellis SJ: Bad Decision NICE. Lancet. 2002, 359: 447. 10.1016/S0140-6736(02)07582-7.
71. Howden-Chapman P, Ashton T: Public Purchasing and Private Priorities for Healthcare in New Zealand. Health Policy. 2000, 54: 27-43. 10.1016/S0168-8510(00)00095-6.
72. Mayor S: Health Department to Fund Interferon Beta Despite Institute's Ruling. BMJ. 2001, 323: 1087. 10.1136/bmj.323.7321.1087.
73. Hunter DJ: Desperately Seeking Solutions: Rationing Health Care. 1997, London, Longman
74. Norheim OF, Ekeberg O, Evensen SA, Halvorsen M, Kvernebo K: Adoption of New Health Care Services in Norway (1993-1997): Specialists' Self-assessment According to National Criteria for Priority Setting. Health Policy. 2001, 56: 65-79. 10.1016/S0168-8510(00)00135-4.
75. National Advisory Committee on Core Health and Disability Support Services: Fifth Annual Report. 1996, Wellington, National Advisory Committee on Core Health and Disability Support Services
76. National Advisory Committee on Core Health and Disability Support Services: Tenth Annual Report. 2001, Wellington, National Advisory Committee on Core Health and Disability Support Services
77. National Health Committee: Prostate Cancer Screening in New Zealand. 2004, Wellington, National Health Committee
78. Jacobs L, Marmor T, Oberlander J: Report from the Field. The Oregon Health Plan and the Political Paradox of Rationing: What Advocates and Critics Have Claimed and What Oregon Did. Journal of Health Politics, Policy and Law. 1999, 24
79. Holm S: Developments in the Nordic Countries: Goodbye to the Simple Solutions. The Global Challenge of Health Care Rationing. Edited by: Coulter A, Ham C. 2000, Buckingham, Open University Press, 29-37.
80. Daniels N: Accountability for Reasonableness in Private and Public Health Insurance. The Global Challenge of Health Care Rationing. Edited by: Coulter A, Ham C. 2000, Buckingham, Open University Press, 89-106.
81. Oregon Health Services Commission: Prioritization of Health Care Services. A Report to the Governor and the 73rd Oregon Legislative Assembly. 2005, Office of Oregon Health Policy and Research, Department of Administrative Services


Acknowledgements

We thank John Barton, Anthony Culyer, Marion Danis, Eli Feiring, Lindsay Hampson, Søren Holm, Steven Pearson, Jehanna Peerzada, Larry Temkin and, in particular, Ezekiel Emanuel for their helpful comments on earlier drafts. This research was supported by the Intramural Research Program of the NIH Clinical Center. The opinions expressed are the authors' own. They do not reflect any position or policy of the National Institutes of Health, US Public Health Service, or Department of Health and Human Services.

Author information


Corresponding author

Correspondence to Reidar K Lie.

Additional information

Declaration of competing interests

The author(s) declare that they have no competing interests.

Authors' contributions

Both authors have been involved in initiating the project, carrying out the background research, and the writing of the article. Both authors have read and approved the final manuscript.

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Sabik, L.M., Lie, R.K. Priority setting in health care: Lessons from the experiences of eight countries. Int J Equity Health 7, 4 (2008). https://doi.org/10.1186/1475-9276-7-4
