Negotiated Complexity: Framing Multi-Criteria Decision Support in Environmental Health Practice

Abstract

The complexity we take into account when dealing with complex issues, and the way we deal with that complexity, are not given or self-evident: they are framed and negotiated. Based on two environmental health decision support case studies, we address a set of key methodological choices that were crucial in shaping the multi-criteria decision support, and we illuminate how they followed from transdisciplinary collaboration and negotiation: diversity tolerance, dealing with uncertainty and difference of opinion, the weight of information, and the epistemological divide between the traditional closed and alternative open paradigms. The case studies exemplify the growing conviction amongst methodologists that, especially regarding complex issues, best methods do not exist as such: methods are chosen and tailored in practice, and their quality depends to a large extent on the process in which methodological development is embedded. We hope to contribute to making explicit the importance of methodological decision making regarding environmental health complexity.


1. Introduction

Decision support methods potentially have to live up to quite a diversity of expectations, some of which are not easy to combine or reconcile. Moreover, the methodological approach seems open to debate, as no crystal-clear or undisputed yardsticks for best practice seem to exist [1-5]. The challenge is not only to do justice to the complexity of many decision making issues and processes, but also to do so as pragmatically as possible. Multi-criteria decision analysis (MCDA) is considered a valuable method for decision making concerning complex issues [5-9]. Complexity can be characterized as a phenomenon that cannot be fully described or understood, due to the presence of a large number of (often simple) system components that interact in a manner that cannot be explained by the characteristics of the individual components themselves. Moreover, both the description and the understanding of complexity are open to debate: “more than one description of a complex system is possible. Different descriptions will decompose the system in different ways. Different descriptions may also have different degrees of complexity” [10]. Apart from being scientifically complex, decision making issues are often also of societal importance and socially complex, resulting in “complex problems featuring high uncertainty, conflicting objectives, different forms of data and information, multi interests and perspectives, and the accounting for complex and evolving biophysical and socio-economic systems” [9]. Apart from describing, defining, and interpreting complexity, methodologically dealing with complexity is also far from unambiguous. How can we select the methodological approach that best suits a specific context, with specific challenges, characteristics, and actors?

In this paper we will draw lessons from the application of a multi-criteria group decision support method in two environmental health case studies in Belgium. Multi-criteria methods have been used extensively over the last decades in quite a diversity of fields, such as energy [9,11], forest management [12-14], water management [15,16], e-democracy [17], agricultural management [18,19], natural resource management [6,20] and ecosystem services [21]. Though extensively used in the field of environmental issues [22-24], multi-criteria methods have been used only to a lesser extent in the field of environmental health [25-34]. In the field of health issues, too, the use of multi-criteria methods still seems limited [35-38]. Next to field diversity there is also quite a diversity of multi-criteria methods and of applications of those methods [5,12,15,24].

We will not go into detailed descriptions of the two case studies here as they are extensively described in other publications [39-40]. We will mainly try to make explicit which key methodological choices were negotiated in the process of tailoring the multi-criteria decision support to context specific requirements.

1.1. Two Case Studies

Between 2001 and 2011, human biomonitoring research was carried out in Flanders (the Dutch-speaking part of Belgium), investigating the very complex relation between environmental pollution and human health by measuring pollutants and health effects in human beings, using biomarkers. The project was carried out in the scope of the Flemish Centre of Expertise for Environment and Health (CEH), funded and steered by the Flemish government. In the CEH, environmental health experts from all Flemish universities and from two research institutes cooperate. The CEH combines natural [41,42] and social scientific research [43].

We discuss two decision making case studies in which a multi-criteria group decision support method was applied. First, the action-plan (2005-2007): together with medical and environmental scientific experts and policymakers, an action-plan for setting policy priorities with regard to the biomonitoring results was developed [39]. Second, the hotspot selection procedure (2007-2008): in the CEH we experimented with the input of a diversity of actors with regard to setting research priorities [40] (see Note 2).

2. Method

2.1. Choosing Method

In the field of operations research, a decision/management sciences field related to decision support systems (DSS) and multi-criteria decision analysis (MCDA), Rosenhead [2,6] already in 1989 sketched the need for an alternative methodological paradigm for dealing with issues characterized by “complexity, uncertainty and conflict”. Almost simultaneously, Funtowicz and Ravetz [44-47] presented their critique of normal science, pleading for a post-normal paradigm in cases where “facts are uncertain, values in dispute, stakes high and decisions urgent”, referring mainly to environmental issues. Funtowicz and Ravetz describe normal science [48] as puzzle solving within a scientific paradigm that is not disputed as such and that clearly stipulates how the scientific endeavor should be performed so as to solve problems or, more generally, to define the truth. The alternative of post-normal science, especially applicable to complex issues, focuses on aspects of problem solving that are often neglected in traditional normal science: uncertainty and values. Funtowicz and Ravetz plead for a wider involvement and participation of actors next to scientific experts, and for a more explicit account of scientific uncertainties.

Rosenhead [2] characterizes the dominant paradigm in the field of operations research as “rational comprehensive planning”: planning organized as a mainly centralized (top-down) expert activity, focused on the objective calculation of the best option for management or policy making. It does so by identifying a single yardstick for comparing alternatives and by collecting extensive data sets that allow a scientific analysis in which uncertainty is eliminated as much as possible, prescribing convincingly and uniquely what is in the best interest regarding the issues at stake. Rosenhead [2] points to extensive evidence that such a centralized expert method has (limited) success with respect to issues of limited complexity, but does not work well for complex issues. Rosenhead follows in the footsteps of Lindblom and Cohen [1], who ten years earlier critically discussed the usability of social scientific knowledge for policymaking and social problem solving in the face of complexity, and advocated a necessary alliance with “ordinary knowledge”. Lindblom and Cohen [1] opened the door not only to “other” knowledge, but also to “other” knowledge producers having a role in the process of knowledge production. This tolerance for diversity is echoed in the alternative paradigm propagated by Rosenhead [2]: there is openness to a diversity of solutions (alternatives) and to a diversity of yardsticks for assessing these alternatives, yardsticks that do not necessarily have to be commensurable. Compared to the traditional paradigm, in the alternative paradigm the process is more important than the collection of complete datasets or the (as perfect as possible) analysis of the data. Uncertainties and differences of opinion, or even conflicts, are accepted as a fact of complex life, and the active involvement of a diversity of relevant actors in a bottom-up approach is advocated. The importance of the process also means that the analytical part of the decision support method (data and analysis) should be supportive of the process of interaction and judgment, and should not overload it with too much data and analytical detail. We summarize the traditional and alternative paradigms in Table 1 (after Rosenhead [2]).

Table 1. Overview of key issues of the traditional and alternative paradigms.

Similar distinctions are made by several other authors. Vatn [49] sketches more or less the same epistemological divide with respect to environmental appraisal. Vatn views environmental appraisal methods as institutional structures with “rules concerning 1) who should participate and in which capacity, 2) what is considered data and which form data should take, and 3) rules about how a conclusion is reached”. Vatn distinguishes two underlying assumptions regarding rationality: individual rationality considers preferences to be individual and given, whereas social rationality stresses their context dependency and dynamic character. Similar to Rosenhead with respect to trade-offs, Vatn stresses the methodological importance of how one views value dimensions, from commensurable to incommensurable. Vatn also distinguishes different types of human interaction, ranging from instrumental/strategic behaviour to communicative action, which may have methodological consequences for the involvement of different actors in the process. Finally he discusses the character of the issues at stake: does one consider these to be simple and fit for calculation and trade-off, or complex and in need of what he calls a “forum type value articulating institution”, in other words an issue for deliberation? In the field of decision support systems (DSS), Marakas [4], like Rosenhead and Vatn, distinguishes different views on rationality. He specifically points out that in the field of DSS the traditional rational actor model, in which optimization tells the decision maker the best of all possible solutions, has over the years increasingly been challenged by the bounded rationality paradigm. According to Marakas, many decisions are qualitative in nature and not fit for quantitative analysis; moreover, the search for all possible solutions is too complex an endeavor, let alone effectively comparing them all. Arnott and Pervan [50] distinguish positivist, interpretivist and critical social science paradigms in their review of the DSS literature; the first (positivist) is the more traditional approach, while the latter two can be seen as exponents of an alternative paradigm. Arnott and Pervan show how the alternative paradigms seem to be gaining ground, especially in Europe: whereas in the United States (1990-2004) more than 95% of all DSS papers can be considered positivistic, in Europe this amounts to only some 56%.

According to Gamper and Turcanu [51], multi-criteria decision analysis (MCDA) is deemed to overcome the shortcomings of traditional decision-support tools, such as cost-benefit analysis (CBA) or cost-effectiveness analysis (CEA), as it is better suited to dealing with qualitative information and with uncertainties. This is especially important for the support of complex decision problems, such as environmental or sustainability issues. Still, Mendoza and Martins [6] underline that many of today’s MCDA methods should be considered “hard”, or consistent with the traditional, rational scientific management approach.

Stirling [52,53] points out that regarding both the analytical and the deliberative aspects of decision support methods, the political and institutional contexts, and the power structures within them, largely influence the way the processes are organized and relate to governance. Public participation in issue framing and decision making is not, in itself, a guarantee of openness. Maasen and Lieven [54] characterise some open participatory processes as nothing more than an exercise in public relations. Stirling [53] describes how the increasing acceptance of such participation in scientific and technological assessments, at national and European Union level, has to a large extent been negated by still closed, deterministic and linear approaches to innovation and technological progress. The quality of these processes therefore depends not just on making them more open, but also on genuine respect for what they produce, and on use of the outcomes. The distinction Stirling makes between “opening up” and “closing down” is relevant to methodological choices and their outcomes in practice, and according to Stirling it in many ways transcends the importance of the distinction between participatory (or deliberative) and analytic approaches. The distinction largely resembles the characteristics of Rosenhead’s traditional and alternative paradigms [2]. To our understanding, the question of diversity tolerance is key in this respect and of central importance in methodological choices.

2.2. An Analytical Deliberative Approach

In the development of the multi-criteria decision support procedure used in the two environmental health case studies, two realizations were crucial. The first was that health risk assessment, although very informative to the decision making process, is not the only issue relevant to policy makers when deciding on environmental health issues. It became clear that policymakers in particular also had to take other issues into account, specifically policy aspects (policy relevance) and social aspects (what do stakeholders or the general public think about these issues?). In appreciating this diversity of rather different aspects, the stage was set for a multi-criteria decision support method. The second was that no expert or small group of experts was able or willing to formulate decision making advice based on this diversity of aspects, partly outside their own domain of expertise. Even within their own broader domain, e.g. environmental health, they could not be considered experts on all relevant specifics. As compensation for this lack of broader-than-specialized expertise, organizing a critical mass of experts through expert elicitation was considered a good option. This was considered wise not only because of the diversity of fragmented but relevant (sometimes highly) specialized expertise within one domain, but also because knowledge on environmental health issues is rather limited due to complexity. Moreover, the challenge of taking into account the combination of all relevant aspects (health risk + policy aspects + social aspects) and defining what is of priority importance for policy making was judged to go beyond (specialized) expertise: this really belonged to the domain of politics and social debate. Stakeholder deliberation was considered to be most informative in this respect.

The analytical deliberative approach proposed by Stern and Fineberg [55] offers a beautiful hybrid conceptual framework for this combined challenge of scientific analysis, expert elicitation and social debate: “Analysis uses rigorous, replicable methods, evaluated under the agreed protocols of an expert community (…) to arrive at answers to factual questions. Deliberation is any informal or formal process for communication and collective consideration of issues”. According to Chilvers [56] it is “one of the few evaluative frameworks providing a more symmetrical treatment of “analytic-deliberative” processes, including how science is conducted and relates to participatory processes”. Stern and Fineberg [55] consider deliberation to be of importance not only at the end of the pipeline, when analytical scientific results are available, but also at the start, when setting the research agenda or designing decision making procedures, and at other relevant intermediate steps in the process. We do not need to take this framework as a prescriptive one: “Structuring an effective analytic-deliberative process for informing a risk decision is not a matter for a recipe. Every step involves judgment, and the right choices are situation dependent” [55]. One weakness sometimes attributed to deliberative or participatory processes is a lack of structuring capacity regarding decision making [20]. Linkov et al. [25] make the same point regarding the field of comparative risk assessment. This structuring capacity is often considered to be one of the main strengths of multi-criteria decision analysis (MCDA). Several authors [6,8] plead for a more integrated approach to MCDA, connecting the analytical approach to a more qualitative, soft approach in which social aspects and participatory elements can be included. For example, Proctor and Drechsler [20] have applied such an integrated approach in the field of natural resource management, combining a deliberative method with multi-criteria evaluation.

2.3. The Multi-Criteria Group Decision Support Method

The decision support method applied in the case studies is inspired by the traditions of group decision support systems and multi-criteria decision analysis. Marakas [4] provides a long list of attributes of decision support systems (DSS). DSS are employed in semi-structured or unstructured decision contexts and are meant to support decision making rather than replace it. DSS are meant to be interactive and user-friendly, and are generally developed in an evolutionary, iterative process using relevant data and models. A DSS can provide support to either an individual or a group, and facilitates learning on the part of the decision maker(s). Group decision support systems are one type of DSS, specifically facilitating group interaction and decision making. Within the family of multi-criteria decision analysis (MCDA) methods, the use of group and especially participatory MCDA has been expanding since the 1990s [6,24,57]. Mendoza and Martins [6] (like Stern and Fineberg [55]) stress the importance of involving all relevant actors (experts and stakeholders) in all crucial steps of the process, from start to finish, including methodological choices.

Huang et al. [24] state that different MCDA methods are not that different, and consider method choice to depend on familiarity with specific tools or approaches, or on the opportunities available within specific projects. According to Hajkowicz and Collins [15], different analytical techniques do not have clear comparative advantages or disadvantages; they stress that the selection of criteria and decision alternatives constitutes the most important challenge. Several authors moreover point out that methods can be combined in hybrid approaches, so as to benefit most from the different qualities of different methods [12,14]; in fact, this is noted to be becoming a trend in the field of MCDA use [12]. The importance of interactive use of the methods is also stressed: close collaboration between MCDA experts and decision makers, or other experts and actors active within or relevant to the process, enhances the quality of the methodological application [12,14].

Multi-criteria methods are often rather demanding for participants in the assessment: participants often have to supply a lot of information, or information that is hard to get. AURORA (Aggregating Unicriterion Rankings into One RAnking), the method applied in the prioritization procedure, tries to overcome such shortcomings. AURORA was developed at the University of Antwerp (De Keyser and Springael [58]) and is based on the merging and comparison of rankings. Rankings are used because, with respect to complex issues, it is often difficult for experts to give absolute judgements. Moreover, rankings allow the use of qualitative notions, e.g. a range from very difficult to very easy; the latter in particular would be problematic for most MCDA methods, since they are not fit to work with qualitative data. The advantage of working with qualitative data is that it creates some robustness with respect to expert judgements. A disadvantage, of course, is that some elementary mathematical operations are not allowed. AURORA solves this problem by respecting the ordinal character of the rankings; to this end, Kendall’s tau or an extension of it [59] is used in the procedure. Based on expert judgements, a ranking is produced on each criterion. Pairwise comparison of the rankings on different criteria then generates consensus rankings. The participants (decision makers or stakeholders) in the MCDA attribute relative importance (weight) to the different criteria. We will not elaborate extensively on the mathematical design and solutions of the MCDA method used in the case studies.
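
To make the ranking-based logic concrete, the minimal sketch below shows how per-criterion rankings could be merged into one consensus ranking by minimizing a weighted sum of Kendall tau distances. It is an illustration only, not the actual AURORA implementation (for which we refer to De Keyser and Springael [58]): the brute-force search over permutations, the criterion names and the weights are our own simplifying assumptions, feasible here only because the number of alternatives is tiny.

```python
import itertools

def kendall_tau_distance(r1, r2):
    """Count the pairs of alternatives that two rankings order differently.
    Rankings are dicts mapping alternative -> position (1 = best)."""
    distance = 0
    for a, b in itertools.combinations(list(r1), 2):
        # A pair is discordant if the two rankings order it oppositely.
        if (r1[a] - r1[b]) * (r2[a] - r2[b]) < 0:
            distance += 1
    return distance

def consensus_ranking(criterion_rankings, weights):
    """Brute-force search for the ranking minimizing the weighted sum of
    Kendall tau distances to all per-criterion rankings."""
    alternatives = list(next(iter(criterion_rankings.values())))
    best, best_cost = None, float("inf")
    for perm in itertools.permutations(alternatives):
        candidate = {alt: pos + 1 for pos, alt in enumerate(perm)}
        cost = sum(w * kendall_tau_distance(candidate, criterion_rankings[c])
                   for c, w in weights.items())
        if cost < best_cost:
            best, best_cost = candidate, cost
    return best

# Hypothetical example: three alternatives ranked on the three kinds of
# criteria used in the case studies; weights are stakeholder-attributed.
rankings = {
    "health_risk":    {"A": 1, "B": 2, "C": 3},
    "policy_aspects": {"A": 2, "B": 1, "C": 3},
    "social_aspects": {"A": 3, "B": 1, "C": 2},
}
weights = {"health_risk": 0.5, "policy_aspects": 0.3, "social_aspects": 0.2}
print(consensus_ranking(rankings, weights))  # e.g. {'A': 1, 'B': 2, 'C': 3}
```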

According to Arnott and Pervan [50], one of the most prominent shortcomings in the field of DSS is the widening gap between DSS expertise and practice. They plead for improving DSS research by increasing the number of case studies, especially case studies that can be characterized as interpretive, i.e. approaches that are more judgemental and deliberative than the more traditional approaches focused on expert calculations. The methodological application introduced here can be considered an example of such an interpretive case study approach.

2.4. The Decision Making Process

The decision making processes in the case studies in which the multi-criteria group decision support method was applied have important characteristics in common. First, the close interdisciplinary cooperation between natural and social scientists: this implies that the general approach had to be negotiated between rather different disciplinary backgrounds, and that natural and social scientific data had to be combined in the process. Second, the close cooperation with policy representatives: the research had to be policy relevant, which places demands on the decision making process other than purely scientific ones. Third, the involvement of both external experts and stakeholders.

The processes in both case studies are organised in different analytical and deliberative phases, each consisting of specific procedural steps (Figure 1). First, in a deliberative phase, the decision support procedure is defined, the diversity of decision making criteria is chosen, and the decision alternatives that have to be prioritized are selected. As in both case studies the number of alternatives was rather large, a pre-selection of alternatives was performed for pragmatic reasons.

Figure 1. Decision making procedure.

Second, in an analytical phase, desk research (such as literature and data research) is performed so as to provide the different alternatives with background information concerning the different assessment criteria. The environmental and health information relevant for assessing the public health aspects is collected by natural scientists; the social scientists are responsible for the policy-related and social aspects. Next, based on the desk research information, the alternatives are assessed in an expert elicitation. Experts on environment and health assess the public health criterion, policy experts assess the policy aspects, and social experts assess the social aspects. These assessments result in both quantitative information (priority rankings of alternatives on the different criteria) and qualitative information (arguments, differences of opinion, uncertainties). The outcomes of the expert elicitation are processed in a multi-criteria decision analysis.

Third, in a deliberative phase, the results of both desk research and expert consultation are discussed in a stakeholder deliberation that gives advice on the basis of all information: different from specialized expertise, a societal view deals with the political question of deciding what is important considering all specific aspects together. The procedures used in the case studies aim at well-informed decision making by the final decision making body (the Ministers of Environment and Health decide on the policy priorities presented in the action-plan; the CEH, in close cooperation with policymakers, decides on the research priorities in the hotspot selection procedure).

Finally, external communication. Transparency was considered of the utmost importance, not only at the end, when decisions were taken, but also at important intermediate steps such as defining the procedure and the stakeholder deliberation.

3. Negotiated Complexity in Practice

We now turn to the practice of the two case studies of the Flemish Centre of Expertise for Environment and Health (CEH) in which the multi-criteria group decision support method was applied. First, the action-plan: together with medical and environmental scientific experts and policymakers, an action-plan for setting policy priorities with regard to the biomonitoring results was developed [39]. Second, the hotspot selection procedure: in the CEH we experimented with the input of a diversity of actors with regard to setting research priorities [40].

3.1. Complexity

Scientifically there is little doubt that the relation between the environment and human health is complex. In the European Union, for example, tens of thousands of chemical substances are on the market. For a number of individual toxic substances the health effects of high doses are well established. The effects of small doses of many substances over a longer period are, however, unknown [60]. Also unknown are the effects of different substances in combination, though there are clues in the form of DNA damage, hormone disruptions, loss of sperm quality, and the risk of cancer. It is not always possible to prove unambiguously that a causal relationship exists between environmental pollution and specific health effects. Scientific assessment of environmental health risks is faced with large (partly irreducible) uncertainties, knowledge gaps, and imperfect understanding, out of which deep-seated conflicts and controversies may arise [44,61,62]. Nor is complexity merely a matter of (natural) science; risks are also socially complex because they are interwoven with our way of life, our perceptions and interests, our norms and values [63-65].

The question is to what extent, and how, we take this complexity into account when dealing with it. Do we acknowledge complexity in our approach, or do we drastically simplify and reduce it to relatively simple proportions? In the first case study, regarding the policy relevance of human biomonitoring results, the initial reflex in the CEH was one of reduction: one yardstick (health risk) for assessing alternatives was chosen, to be objectified on the basis of environmental health scientific knowledge and applied by natural scientific experts of the CEH. There seemed no doubt in the early stages that this would work, and the fact that it did not really work came as a bit of a surprise to all involved, mainly the natural scientific environmental health experts of the CEH and the policy representatives, who also mostly have a natural scientific background. As indicated before (see the paragraph on the analytical deliberative approach), next to a growing awareness of underestimated complexity and of the limited problem solving capacity of natural science alone, the complexity considered relevant for the methodological approach step by step became a topic for negotiation, and as such negotiated complexity. In this process of negotiating complexity the social scientists became more and more involved. Gradually the methodological approach under construction opened up to more than one yardstick, more than one type of assessment, and more than one type of actor. Next to the health risk yardstick, the policy and social aspects were also considered relevant, setting the stage for a multi-criteria approach. Instead of focusing mainly on natural scientific CEH experts, gradually social scientific and policy experts were also considered relevant. Step by step it was even realized that the critical mass within the CEH would probably be too limited, scientifically, which opened the door to external experts, and socially, which opened the door to stakeholder involvement. The opening up to other actors also opened the way to other methodological approaches for analysis and judgement, bringing more deliberative, qualitative and judgmental elements into play.

We want to stress that the opening up described above can by no means be explained merely as the result of the inherent complexity of the endeavour as it gradually unfolded, or of environmental health science and policy making in general. It resulted from the interplay of a diversity of actors, approaching the matter from different perspectives, step by step trying to find a way forward and negotiating how to approach the matter methodologically.

3.2. Diversity Tolerance

A key issue in methodological developments was the question of tolerance for diversity. This has constantly been a key topic of (sometimes intense) debate within the CEH, even when running the agreed-upon procedure in practice. The main divide in this discussion has been between the traditional and alternative paradigms as characterized by Rosenhead [2]. To some extent this divide within the CEH was a disciplinary one, with the natural scientists taking a more traditional stance and the social scientists pleading for a more alternative approach. The fact that this divide was largely disciplinary does not, of course, mean that all natural scientists work within the traditional paradigm and all social scientists within the alternative one: both paradigms are potentially present in both disciplinary fields. In the CEH and the case studies, though, the divide characterizes the main differences between the two groups rather well. Still, we should not perceive this as a black and white issue: though mainly traditionally oriented, the CEH clearly grew more open to an alternative approach over time. The policy representatives closely involved in the process seemed to carry the divide as a constant split in their mindset, depending on the topic and on who was making a point (natural scientists or social scientists), open to some extent to both positions on different occasions, never stable, ever searching for a good balance. The combination of a natural scientific background and the policy perspective constantly confronted them with this hybrid position.

In general the methodological development can be characterized as the opening up of a traditional approach to alternatives. This does not mean that all steps in the decision making process can be characterized as opening up. And though the same procedure was used in both case studies, there are clear differences, sometimes opening up to diversity, sometimes closing down. We highlight some illustrative examples; a full overview, following the steps presented in Figure 1, is given in Table 2.

In defining decision alternatives, the case studies used different approaches. In the Action-plan, the alternatives were defined by the CEH (scientists and policy makers): they decided how to interpret the outcomes of the human biomonitoring research so as to come to a pre-selection of alternatives for further inquiry. In the Hotspot Selection Procedure, the definition of alternatives was opened up to a larger group of actors, and was open-ended to some extent: not only was a diversity of actors directly invited to propose hotspots, but the snowball method was also used. The approach was less open, though, regarding the definition of what could potentially be considered a hotspot: this was done by the CEH. The pre-selection that followed the collection of hotspots was also done by the CEH, drastically limiting the scope of potential research from 85 to 9 alternatives. Mainly research aspects were considered in the pre-selection: can the proposed hotspot be investigated by means of human biomonitoring?

Interestingly, one of the alternatives that dropped out in the pre-selection (health problems caused by traffic) was put back on the table at a later stage by some stakeholders in the stakeholder deliberation. In retrospect, the narrowing down of the pre-selection to one criterion (research aspects) was questioned here by the stakeholders, re-opening the discussion by pointing at the importance of another criterion: the health risk of traffic. In fact, in the end traffic was selected as one of the most important candidate hotspots, be it with the proposal that the feasibility of scientific research on it be further investigated first, funded by the government.

In the first case study (Action-plan), parallel to the discussions in the CEH on a more traditional versus a more alternative approach, a similar discussion developed in the team performing the multi-criteria decision analysis (MCDA): an MCDA expert external to the CEH and a social scientist from the CEH. At the start of their collaboration the MCDA expert took more of a traditional stance, while the social scientist preferred opening up to alternative notions. Gradually this developed into more openness towards stakeholder involvement in tailoring the MCDA to the stakeholder deliberation. When we first approached the stakeholders, we thought it wise not to overwhelm them with information: we considered the richness of both the desk research and the expert elicitation to be too much for the stakeholders to handle. At the same time, we thought it important to be transparent: this should not be a black box exercise, and participants had a right to take full notice of relevant information if they wanted to. We thus employed a strategy of supplying limited information at the start, while highlighting that more information could be made available when deemed necessary.

Table 2. Overview of opening up and closing down in both case studies.

3.3. Uncertainty and Difference of Opinion

In individual interviews most stakeholders showed interest in taking all information into account, including detailed information on aspects such as expert assessment uncertainty and the dispersion of expert assessments. During the expert elicitation several types of uncertainty were brought forward by the experts: lack of expertise, lack of knowledge within the scientific domain, lack of information in the desk research, lack of interpretability of the human biomonitoring results, and lack of clear sight on cause-effect relationships. Quite some difference of opinion amongst experts could also be noted; its dispersion was calculated by means of an ordinal dispersion index (Leik [66]): see Figure 2.
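
For readers unfamiliar with the index, the sketch below implements Leik's measure of ordinal dispersion as it is commonly defined: cumulative category proportions are folded around 0.5 and normalized, yielding 0 for full consensus and 1 for maximal polarization. The five-point scale and the expert counts are hypothetical, and the exact variant used in the case studies may differ.

```python
def leik_dispersion(counts):
    """Leik's ordinal dispersion index for a list of category counts,
    ordered from lowest to highest category on the ordinal scale.
    0 = all experts in one category; 1 = experts split over the extremes."""
    n = sum(counts)
    m = len(counts)
    cumulative, total = 0.0, 0.0
    for c in counts[:-1]:  # cumulative proportions F_1 .. F_{m-1}
        cumulative += c / n
        # Fold around 0.5: consensus pushes each F_i towards 0 or 1.
        total += cumulative if cumulative <= 0.5 else 1.0 - cumulative
    return 2.0 * total / (m - 1)

# Hypothetical example: ten experts scoring one alternative on a
# five-point ordinal scale.
print(leik_dispersion([0, 0, 10, 0, 0]))  # full consensus  -> 0.0
print(leik_dispersion([5, 0, 0, 0, 5]))   # fully polarized -> 1.0
```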

We decided to introduce an extra stakeholder questionnaire in which we asked the stakeholders which aspects they considered important and how they wanted these to be taken into account in the MCDA. The results regarding expert uncertainty and the number of experts involved in specific parts of the expert elicitation (here called the knowledge base) are presented in Figure 3.

It shows that, with respect to e.g. lack of knowledge in science, most stakeholders considered this to be an important issue. In how to take it into account, though, quite some differences occur: half of them wanted to give extra weight in the analysis to assessments showing this type of uncertainty, whereas others wanted to give them less weight. In the MCDA we treated this at the level of the experts or of individual alternatives, depending on the level to which the qualification applied. At the level of an expert, e.g., suppressed means we weigh that expert's ranking two times less; at the level of an alternative, e.g., strongly stimulated means the alternative is promoted in the ranking by four positions.
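
As a minimal sketch of how such stakeholder preferences can be operationalized, the code below applies the two rules just described, under simplifying assumptions of our own: a "suppressed" expert's ranking receives half weight, and a "strongly stimulated" alternative moves up four positions in an ordered list of alternatives (best first). The names and data are illustrative, not taken from the case studies.

```python
def promote(order, alternative, positions=4):
    """Move one alternative up a fixed number of places in an ordered
    list (best first), clamped at the top of the list."""
    order = list(order)
    i = order.index(alternative)
    order.insert(max(0, i - positions), order.pop(i))
    return order

def adjust_for_uncertainty(expert_weights, order, suppressed, stimulated):
    """Apply the two adjustment rules described in the text: halve the
    weight of 'suppressed' experts' rankings and promote 'strongly
    stimulated' alternatives by four positions."""
    weights = {e: w / 2.0 if e in suppressed else w
               for e, w in expert_weights.items()}
    for alt in stimulated:
        order = promote(order, alt)
    return weights, order

# Hypothetical example: three experts, six alternatives.
weights, order = adjust_for_uncertainty(
    {"expert1": 1.0, "expert2": 1.0, "expert3": 1.0},
    ["A", "B", "C", "D", "E", "F"],
    suppressed={"expert2"},
    stimulated={"F"},
)
print(weights)  # expert2's ranking now counts two times less
print(order)    # F moves from 6th to 2nd position
```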

3.4. The Weight of Information on Deliberation

The richness of information that came out of the MCDA, presented at the start of the stakeholder deliberation in the Action-plan, appeared to be too overwhelming for several participants. The information clearly needed some time for digestion, and it was quite extensively debated and challenged: the presented outcomes of the MCDA were not taken for granted. This was also reflected in the group advice the stakeholders agreed upon in the end: no ranking was given as group advice; instead, the alternatives were commented on and some general recommendations were given. The main arguments for not giving one or several rankings were complexity (too complex for non-experts), the view that prioritizing was the responsibility of the ministers, and unease with the fact that they were not backed by their organisations (which they could not consult on this). They also stated that normally, when taking part in formal advisory settings, a more crystallized advisory text is presented to them instead of a diversity of options to discuss, and they can comment on it, e.g. by discussing mainly the arguments without in-depth comments on specific technicalities.

Figure 2. Average dispersion amongst experts on main criteria.

Figure 3. Stakeholder preferences regarding taking into account expert uncertainty and number of experts involved.

Strikingly, the stakeholders in the Action-plan thus advised, regarding methodology, that future decision making processes simultaneously close down and open up the approach. Closing down regarding the amount of information and the number of options to be discussed by the stakeholders: they preferred a more crystallized advisory text to comment on. Opening up regarding the outcome and responsibility of the stakeholder deliberation: the open-ended character of the stakeholder deliberation was preferred over a clear-cut group advice behind which the decision makers might hide their own decision making responsibilities. This advice was taken up in the Hotspot Selection Procedure: instead of focusing on full transparency while providing information, we focused the discussion on a concrete advisory proposal, which could be commented upon. Some stakeholders who had also taken part in the Action-plan were more satisfied, because they had to deal with less detailed information. Nevertheless, some “new” stakeholders as well as some “old” ones kept asking questions about more or less the same type of detailed information on toxicological issues, scientific uncertainties, and the diversity in expert opinions.

3.5. Epistemological Divide

While tailoring the decision support method to the case studies, the divide between the more closed, expert-centered approach of the traditional paradigm and the more open approach of the alternative paradigm remained prominent amongst the CEH actors involved. The more traditional experts complain about (from their perspective) nonessential extra complexity of time-consuming open procedures, which put the core of their work under pressure. They also question the relevance of the outcomes of such procedures, which in their belief add nothing to what was already known, and question the necessity of using “so many words” in reporting about the process. Moreover they question the competence of non-experts, and dislike the fact that the value of their own competence can potentially be questioned. Some feel like guinea pigs in a social scientific experiment. The social scientists, keener on alternative approaches, question the stance of the traditional experts that their own expertise is superior and objective whereas that of others is subjective, the claim that their expert knowledge is too complicated for outsiders, their fear of misuse by others, and their fear of panic when members of the general public are involved in deliberations about their knowledge. To a large extent this superiority claim of more traditionally oriented CEH experts is formulated as an issue of sheer pragmatics: understanding, or perhaps even supporting, the merits of a more open social scientific approach, but not at the cost of their own position and work (see also Albert et al. [67]). Thus, even though at early stages in the methodological development a more open approach was agreed upon, also by the more traditional experts, on several occasions when concrete practical method choices had to be made, the opening up features were still criticized as being practically unfeasible.

The policymakers remained in a middle-of-the-road position, seemingly in two minds all the time: on the one hand clinging to the dominant evidence-based policy culture of superior, unambiguous scientific proof; on the other, open to acknowledging scientific uncertainty, the importance of precaution, and public support. A clear sign of this split is noticeable in both internal and external communications of policy representatives: though eagerly embracing the open procedure, and especially the bonus of potential public support, they almost exclusively refer to health risk assessment when it comes to the quality of the whole approach, thus implying that one-criterion expert assessment is, in the end, superior for guiding and legitimating policy choices.

4. Discussion and Conclusions

What can we learn from the case study experiences? The specificity of a context is often too different from that of other contexts to allow general lessons or best practice methods to be defined from a distance. Still, we have learned some lessons that can be an inspiration for other cases. Fundamental epistemological issues relevant to methodological choices proved dynamic in practice and inconclusive: proponents of both the traditional (closed) and alternative (open) paradigms took part in the process and contributed to lively discussions along the way, resulting in mixed (or combined, hybrid) approaches that remained under discussion all the time. The dynamic nature of context-specific negotiation and the resulting creative and discursive process override methodological properties in importance: in practice the process steers the method, not the other way around.

As Arnott and Pervan [50] stipulated for the field of decision support systems in general, experts have created quite a distance between expertise and practical reality. Clearly there is a need for more reality checks of expert knowledge and approaches in order for them to become more practice relevant. The reluctance of traditional experts to open up their expertise to critique and diversity appears as a disease of expertise, one that does not help overcome the gap between expert knowledge and practical relevance in real life complexity. Goetghebeur et al. [38] refer in this respect to “a ‘black box’ syndrome” of closed expert approaches such as cost-effectiveness approaches. Banville et al. [57] point at “the ever present temptation to reduce stakeholders to a few mathematical parameters”. Of course this does not mean that there is no value in expert knowledge as such, or that anything goes. Nor does it mean that we should discuss everything at length with everybody: pragmatic choices have to be made in order to arrive at practical approaches and adequate actions. The choice of what is to be considered a good practical approach or an adequate action, though, no longer necessarily resides only in the ivory tower of traditional academia.

The methodological choices that stood out in the case studies are closely related to complexity and to viewpoints on how to deal with complexity. Even though we may suspect complexity to be an intrinsic property of issues or contexts, methodologically it is to a large extent a topic for debate and negotiation, and as such becomes negotiated complexity. The extent to which complexity is acknowledged and taken into account cannot be fully objectified; largely, it is chosen. Apart from disciplinary or social preferences, pragmatic considerations also play a role here: a balance needs to be found between different actor perspectives and between quality and resources. Diversity tolerance, closely related to negotiated complexity, cannot turn a blind eye to pragmatic considerations either: open approaches may be more demanding on resources and on the actors involved than traditional approaches. Taking into account all potentially relevant information, including disclosure of uncertainties and differences of opinion, may weigh too heavily on the process, for example once it has to be taken into account in deliberations. Yet one can also try to be pragmatic in taking elements of diversity into account, for example by investing less in extensive data collection and analysis, and more in deliberation as such. Moreover, it is a matter of attitude: do we think in terms of superior knowledge and best solutions only, and accept all the limitations of such an approach, or do we (also) think in terms of a diversity of relevant expertise and viewpoints? In other words: do we only make calculations about what can be calculated, or do we (also) discuss what is important?

The epistemological divide between the traditional and alternative paradigms largely sticks to the ambassadors of either paradigm. Without ambassadors of both paradigms at the table where crucial methodological choices are made, especially in practice and under resource constraints such as time pressure, the weight of the dominant paradigm will largely steer the process. This does not mean that there can be no cross-boundary figures, such as the policy makers in the case studies. Nor does it mean that, for example, the traditional experts are not open to alternative approaches or do not see their value. In the CEH there clearly was goodwill regarding opening up, especially at the level of ambition, before concrete practical methodological choices had to be made. Still, along the way, the open arms appeared to be accompanied by closed mindsets amongst the traditional experts [40]. Without the social scientists being concretely involved, the opening up elements would probably have had a hard time surviving in real practice. It is therefore crucial that the diversity considered relevant in the process is in one way or another represented at the methodological decision table, by either ambassadors of diversity as such or representatives of specific (e.g. expert and stakeholder) diversity.

Finally, we may conclude that as a structuring method the MCDA has hardly been questioned along the way; it seems suitable not only for dealing with complexity, but also offers a suitable framework for negotiated complexity. MCDA is capable of embracing the above-mentioned key methodological choices even where there is a fundamental epistemological divide. It also seems capable of structuring the complexity of taking diversity tolerance into account. Making explicit which methodological choices have been made, and by whom, is crucial in illuminating the value of its outcomes. As such, with our case study approach, we hope to have contributed to methodological decision making regarding multi-criteria decision support in real life practice.

5. Acknowledgements

The human biomonitoring study was commissioned, financed and steered by the Ministry of the Flemish Community (Department of Science, Department of Public Health and Department of Environment). The human biomonitoring was performed by the Centre of Expertise for Environment and Health (CEH). We gratefully acknowledge the collaboration of the participants in the human biomonitoring research and of our colleagues of the CEH.

NOTES

2. Readers who wonder why the procedure on research priorities was not, chronologically, instigated first: openness of our natural scientific colleagues and policy representatives to experiments involving “other” actors and elements in the process only developed gradually during the first years of the CEH.

Conflicts of Interest

The authors declare no conflicts of interest.

References

[1] C. E. Lindblom and D. K. Cohen, “Usable Knowledge: Social Science and Social Problem Solving,” Yale University Press, New Haven, 1979.
[2] J. Rosenhead, “Rational Analysis for a Problematic World. Problem Structuring Methods for Complexity, Uncertainty and Conflict,” John Wiley & Sons, Chichester, 1989.
[3] C. H. Weiss, “Policy Research: Data, Ideas, or Arguments?” In: P. Wagner, C. H. Weiss, B. Wittrock and H. Wollmann, Eds., Social Sciences and Modern States: National Experiments and Theoretical Crossroads, Cambridge University Press, Cambridge, 1991, pp. 307-332.
[4] G. M. Marakas, “Decision Support Systems in the 21st Century,” Prentice Hall Pearson Education, New Jersey, 1999.
[5] V. Belton and T. Stewart, “Multiple Criteria Decision Analysis: An Integrated Approach,” Kluwer Academic, Dordrecht, 2002. doi:10.1007/978-1-4615-1495-4
[6] G. A. Mendoza and H. Martins, “Multi-Criteria Decision Analysis in Natural Resource Management: A Critical Review of Methods and New Modelling Paradigms,” Forest Ecology and Management, Vol. 230, No. 1-3, 2006, pp. 1-22. doi:10.1016/j.foreco.2006.03.023
[7] W. Wittmer, F. Rauschmayer and B. Klauer, “How to Select Instruments for the Resolution of Environmental Conflicts?” Land Use Policy, Vol. 23, No. 1, 2006, pp. 1-9. doi:10.1016/j.landusepol.2004.09.003
[8] G. Munda, “Social Multi-Criteria Evaluation for a Sustainable Economy,” Springer-Verlag, Berlin, 2008. doi:10.1007/978-3-540-73703-2
[9] J. Wang, Y. Jing, C. Zhang and J. Zhao, “Review on Multi-Criteria Decision Analysis Aid in Sustainable Energy Decision-Making,” Renewable and Sustainable Energy Reviews, Vol. 13, No. 9, 2009, pp. 2263-2278. doi:10.1016/j.rser.2009.06.021
[10] P. Cilliers, “Complexity, Deconstruction and Relativism,” Theory, Culture and Society, Vol. 22, No. 5, 2005, pp. 255-267. doi:10.1177/0263276405058052
[11] S. D. Pohekar and M. Ramachandran, “Application of Multi-Criteria Decision Making to Sustainable Energy Planning—A Review,” Renewable and Sustainable Energy Reviews, Vol. 8, No. 4, 2004, pp. 365-381. doi:10.1016/j.rser.2003.12.007
[12] J. Kangas and A. Kangas, “Multiple Criteria Decision Support in Forest Management—The Approach, Methods Applied, and Experiences Gained,” Forest Ecology and Management, Vol. 207, No. 1-2, 2005, pp. 133-143. doi:10.1016/j.foreco.2004.10.023
[13] S. R. J. Sheppard and M. Meitner, “Using Multi-Criteria Analysis and Visualisation for Sustainable Forest Management Planning with Stakeholder Groups,” Forest Ecology and Management, Vol. 207, No. 1-2, 2005, pp. 171-187. doi:10.1016/j.foreco.2004.10.032
[14] J. Ananda and G. Herath, “A Critical Review of Multi-Criteria Decision Making Methods with Special Reference to Forest Management and Planning,” Ecological Economics, Vol. 68, No. 10, 2009, pp. 2535-2548. doi:10.1016/j.ecolecon.2009.05.010
[15] S. Hajkowicz and K. Collins, “A Review of Multiple Criteria Analysis for Water Resource Planning and Management,” Water Resources Management, Vol. 21, No. 9, 2007, pp. 1553-1566. doi:10.1007/s11269-006-9112-5
[16] B. A. Bryan and J. M. Kandulu, “Designing a Policy Mix and Sequence for Mitigating Agricultural Non-Point Source Pollution in a Water Supply Catchment,” Water Resources Management, Vol. 25, No. 3, 2011, pp. 875-892. doi:10.1007/s11269-010-9731-8
[17] G. E. Kersten, “e-Democracy and Participatory Decision Processes: Lessons from e-Negotiation Experiments,” Journal of Multi-Criteria Decision Analysis, Vol. 12, No. 2-3, 2004, pp. 127-143. doi:10.1002/mcda.352
[18] K. Hayashi, “Multicriteria Analysis for Agricultural Resource Management: A Critical Survey and Future Perspectives,” European Journal of Operational Research, Vol. 122, No. 2, 2000, pp. 486-500. doi:10.1016/S0377-2217(99)00249-0
[19] C. P. López, J. C. J. Groot, C. Carmona-Torres and W. A. H. Rossing, “An Integrated Approach for Ex-Ante Evaluation of Public Policies for Sustainable Agriculture at Landscape Level,” Land Use Policy, Vol. 26, No. 4, 2009, pp. 1020-1030. doi:10.1016/j.landusepol.2008.12.006
[20] W. Proctor and M. Drechsler, “Deliberative Multicriteria Evaluation,” Environment and Planning C: Government and Policy, Vol. 24, No. 2, 2006, pp. 169-190. doi:10.1068/c22s
[21] V. Oikonomou, P. G. Dimitrakopoulos and A. Y. Troumbis, “Incorporating Ecosystem Function Concept in Environmental Planning and Decision Making by Means of Multi-Criteria Evaluation: The Case-Study of Kalloni, Lesbos, Greece,” Environmental Management, Vol. 47, No. 1, 2011, pp. 77-92. doi:10.1007/s00267-010-9575-2
[22] G. A. Kiker, T. S. Bridges, A. Varghese, T. P. Seager and I. Linkov, “Application of Multicriteria Decision Analysis in Environmental Decision Making,” Integrated Environmental Assessment and Management, Vol. 1, No. 2, 2005, pp. 95-108. doi:10.1897/IEAM_2004a-015.1
[23] K. Steele, Y. Carmel, J. Cross and C. Wilcox, “Uses and Misuses of Multicriteria Decision Analysis (MCDA) in Environmental Decision Making,” Risk Analysis, Vol. 29, No. 1, 2009, pp. 26-33. doi:10.1111/j.1539-6924.2008.01130.x
[24] I. B. Huang, J. Keisler and I. Linkov, “Multi-Criteria Decision Analysis in Environmental Sciences: Ten Years of Applications and Trends,” Science of the Total Environment, Vol. 409, No. 19, 2011, pp. 3578-3594. doi:10.1016/j.scitotenv.2011.06.022
[25] I. Linkov, F. K. Satterstrom, G. Kiker, C. Batchelor, T. Bridges and E. Ferguson, “From Comparative Risk Assessment to Multi-Criteria Decision Analysis and Adaptive Management: Recent Developments and Applications,” Environment International, Vol. 32, No. 8, 2006, pp. 1072-1093. doi:10.1016/j.envint.2006.06.013
[26] I. Linkov, P. Welle, D. Loney, A. Tkachuk, L. Canis, J. B. Kim and T. Bridges, “Use of Multicriteria Decision Analysis to Support Weight of Evidence Evaluation,” Risk Analysis, Vol. 31, No. 8, 2011, pp. 1211-1225. doi:10.1111/j.1539-6924.2011.01585.x
[27] N. Chang, S. Ning and J. Chen, “Multicriteria Relocation Analysis of an Off-Site Radioactive Monitoring Network for a Nuclear Power Plant,” Environmental Management, Vol. 38, No. 2, 2006, pp. 197-217. doi:10.1007/s00267-005-0007-7
[28] A. Critto, S. Torresan, E. Semenzin, S. Giove, M. Mesman, A. J. Schouten, M. Rutgers and A. Marcomini, “Development of a Site-Specific Ecological Risk Assessment for Contaminated Sites: Part I. A Multi-Criteria Based System for the Selection of Ecotoxicological Tests and Ecological Observations,” Science of the Total Environment, Vol. 379, No. 1, 2007, pp. 16-33. doi:10.1016/j.scitotenv.2007.02.035
[29] E. Semenzin, A. Critto, C. Carlon, M. Rutgers, and A. Marcomini, “Development of a Site-Specific Ecological Risk Assessment for Contaminated Sites: Part II. A Multi-Criteria Based System for the Selection of Bioavailability Assessment Tools,” Science of the Total Environment, Vol. 379, No. 1, 2007, pp. 34-45. doi:10.1016/j.scitotenv.2007.02.034
[30] D. Hidalgo, R. Irusta, L. Martinez, D. Fatta and A. Papadopoulos, “Development of a Multi-Function Software Decision Support Tool for the Promotion of the Safe Reuse of Treated Urban Wastewater,” Desalination, Vol. 215, No. 1-3, 2007, pp. 90-103. doi:10.1016/j.desal.2006.09.028
[31] L. Carlsen, “Hierarchical Partial Order Ranking,” Environmental Pollution, Vol. 155, No. 2, 2008, pp. 247-253. doi:10.1016/j.envpol.2007.11.023
[32] M. Alvarez-Guerra, L. Canis, N. Voulvoulis, J. R. Viguri and I. Linkov, “Prioritization of Sediment Management Alternatives Using Stochastic Multicriteria Acceptability Analysis,” Science of the Total Environment, Vol. 408, No. 20, 2010, pp. 4354-4367. doi:10.1016/j.scitotenv.2010.07.016
[33] J. Pan, C. J. Oates, C. Ihlenfeld, J. A. Plant, and N. Voulvoulis, “Screening and Prioritisation of Chemical Risks from Metal Mining Operations, Identifying Exposure Media of Concern,” Environmental Monitoring and Assessment, Vol. 163, No. 1-4, 2010, pp. 555-571. doi:10.1007/s10661-009-0858-0
[34] M. Dursun, E. Karsak and M. Karadayi, “A Fuzzy Multi-Criteria Group Decision Making Framework for Evaluating Health-Care Waste Disposal Alternatives,” Expert Systems with Applications, Vol. 38, No. 9, 2011, pp. 11453-11462. doi:10.1016/j.eswa.2011.03.019
[35] R. Baltussen and L. Niessen, “Priority Setting of Health Interventions: The Need for Multi-Criteria Decision Analysis,” Cost Effectiveness and Resource Allocation, Vol. 4, No. 14, 2006. doi:10.1186/1478-7547-4-14
[36] M. Holdsworth, Y. Kameli and F. Delpeuch, “Stakeholder Views on Policy Options for Responding to the Growing Challenge from Obesity in France: Findings from the PorGrow Project,” Obesity Reviews, Vol. 8, Supplement 2, 2007, pp. 53-61. doi:10.1111/j.1467-789X.2007.00359.x
[37] S. Peacock, C. Mitton, A. Bate, B. McCoy and C. Donaldson, “Overcoming Barriers to Priority Setting Using Interdisciplinary Methods,” Health Policy, Vol. 92, No. 2, 2009, pp. 124-132. doi:10.1016/j.healthpol.2009.02.006
[38] M. Goetghebeur, M. Wagner, H. Khoury, D. Rindress, J. Grégoire and C. Deal, “Combining Multicriteria Decision Analysis, Ethics and Health Technology Assessment: Applying the EVIDEM Decision-Making Framework to Growth Hormone for Turner Syndrome Patients,” Cost Effectiveness and Resource Allocation, Vol. 8, No. 4, 2010. doi:10.1186/1478-7547-8-4
[39] H. Keune, B. Morrens, J. Springael, G. Koppen, A. Colles, I. Loots, K. Van Campenhout, H. Chovanova, M. Bilau, L. Bruckers, V. Nelen, W. Baeyens and N. Van Larebeke, “Policy Interpretation of Human Biomonitoring Research Results in Belgium; Priorities and Complexity, Politics and Science,” Environmental Policy and Governance, Vol. 19, No. 2, 2009, pp. 115-129. doi:10.1002/eet.500
[40] H. Keune, B. Morrens, K. Croes, A. Colles, G. Koppen, J. Springael, I. Loots, K. Van Campenhout, H. Chovanova, G. Schoeters, V. Nelen, W. Baeyens and N. Van Larebeke, “Open the Research Agenda: Participatory Selection of Hot Spots for Human Biomonitoring Research in Belgium,” Environmental Health, Vol. 9, No. 33, 2010. doi:10.1186/1476-069X-9-33
[41] M. Bilau, C. Matthys, W. Baeyens, L. Bruckers, G. de Backer, E. den Hond, H. Keune, G. Koppen, V. Nelen, G. Schoeters, N. van Larebeke, J. Willems and S. de Henauw, “Dietary Exposure to Dioxin-Like Compounds in Three Age Groups: Results from the Flemish Environment and Health Study,” Chemosphere, Vol. 70, No. 4, 2008, pp. 584-592. doi:10.1016/j.chemosphere.2007.07.008
[42] C. Schroijen, W. Baeyens, G. Schoeters, E. Den Hond, G. Koppen, L. Bruckers, V. Nelen, E. Van De Mieroop, M. Bilau, A. Covaci, H. Keune, I. Loots, J. Kleinjans, W. Dhooge and N. Van Larebeke, “Internal Exposure to Pollutants Measured in Blood and Urine of Flemish Adolescents in Function of Area of Residence,” Chemosphere, Vol. 71, No. 7, 2008, pp. 1317-1325. doi:10.1016/j.chemosphere.2007.11.053
[43] H. Keune, I. Loots, L. Bruckers, M. Bilau, G. Koppen, N. Van Larebeke, G. Schoeters, V. Nelen and W. Baeyens, “Monitoring Environment, Health and Perception: An Experimental Survey on Health and Environment in Flanders, Belgium,” International Journal of Global Environmental Issues, Vol. 8, No. 1-2, 2008, pp. 90-111. doi:10.1504/IJGENVI.2008.017262
[44] S. O. Funtowicz and J. R. Ravetz, “Uncertainty and Quality in Science for Policy,” Kluwer Academic, Dordrecht, 1990. doi:10.1007/978-94-009-0621-1
[45] S. O. Funtowicz and J. R. Ravetz, “A New Scientific Methodology for Global Environmental Issues,” In: R. Costanza, Ed., Ecological Economics: The Science and Management of Sustainability, Columbia University Press, New York, 1991, pp. 137-152.
[46] S. O. Funtowicz and J. R. Ravetz, “Science for the Post-Normal Age,” Futures, Vol. 25, No. 7, 1993, pp. 739-755. doi:10.1016/0016-3287(93)90022-L
[47] S. O. Funtowicz and J. R. Ravetz, “The Worth of a Songbird: Ecological Economics as a Post-Normal Science,” Ecological Economics, Vol. 10, No. 3, 1994, pp. 197-207. doi:10.1016/0921-8009(94)90108-2
[48] T. Kuhn, “The Structure of Scientific Revolutions,” University of Chicago Press, Chicago, 1962.
[49] A. Vatn, “An Institutional Analysis of Methods for Environmental Appraisal,” Ecological Economics, Vol. 68, No. 8-9, 2009, pp. 2207-2215. doi:10.1016/j.ecolecon.2009.04.005
[50] D. Arnott and G. Pervan, “Eight Key Issues for the Decision Support Systems Discipline,” Decision Support Systems, Vol. 44, No. 3, 2008, pp. 657-672. doi:10.1016/j.dss.2007.09.003
[51] C. D. Gamper and C. Turcanu, “On the Governmental Use of Multi-Criteria Analysis,” Ecological Economics, Vol. 62, No. 2, 2007, pp. 298-307. doi:10.1016/j.ecolecon.2007.01.010
[52] A. Stirling, “Analysis, Participation and Power: Justification and Closure in Participatory Multi-Criteria Analysis,” Land Use Policy, Vol. 23, No. 1, 2006, pp. 95-107. doi:10.1016/j.landusepol.2004.08.010
[53] A. Stirling, “‘Opening Up’ and ‘Closing Down’: Power, Participation, and Pluralism in the Social Appraisal of Technology,” Science, Technology & Human Values, Vol. 33, No. 2, 2008, pp. 262-294. doi:10.1177/0162243907311265
[54] S. Maasen and O. Lieven, “Transdisciplinarity: A New Mode of Governing Science?” Science and Public Policy, Vol. 33, No. 6, 2006, pp. 399-410. doi:10.3152/147154306781778803
[55] P. Stern and H. Fineberg, “Understanding Risk: Informing Decisions in a Democratic Society,” National Research Council, National Academy Press, Washington, 1996.
[56] J. Chilvers, “Deliberating Competence: Theoretical and Practitioner Perspectives on Effective Participatory Appraisal Practice,” Science, Technology & Human Values, Vol. 33, No. 2, 2008, pp. 155-185. doi:10.1177/0162243907307594
[57] C. Banville, M. Landry, J. Martel and C. Boulaire, “A Stakeholder Approach to MCDA,” Systems Research and Behavioral Science, Vol. 15, No. 1, 1998, pp. 15-32.
[58] W. de Keyser and J. Springael, “Why Don’t We Kiss!?: A Contribution to Close the Gap between Real-World Decision Makers and Theoretical Decision-Model Builders,” University Press, Antwerp, 2009.
[59] M. G. Kendall, “Rank Correlation Methods,” Griffin, London, 1948.
[60] P. Grandjean, “Non-Precautionary Aspects of Toxicology,” Toxicology and Applied Pharmacology, Vol. 207, Supplement 2, 2005, pp. 652-657. doi:10.1016/j.taap.2004.11.029
[61] M. McCally, “Life Support: The Environment and Human Health,” The MIT Press, Cambridge, London, 2002.
[62] P. Harremoes, D. Gee, M. MacGarvin, A. Stirling, J. Keys, B. Wynne and S. Guedes Vaz, “The Precautionary Principle in the 20th Century: Late Lessons from Early Warnings,” Earthscan Publications Ltd, London, 2002.
[63] V. Covello, “Risk Comparisons and Risk Communication: Issues and Problems in Communicating Health and Environmental Risks,” In: R. E. Kasperson and P. J. M. Stallen, Eds., Communicating Risks to the Public: International Perspectives, Kluwer Academic, Dordrecht, 1991, pp. 79-124. doi:10.1007/978-94-009-1952-5_6
[64] P. Slovic, “Perceived Risk, Trust and Democracy,” In: R. Lofstedt and L. Frewer, Eds., Risk and Modern Society (The Earthscan Reader), Earthscan Publications Ltd, London, 1998, pp. 181-192.
[65] O. Renn and B. Rohrmann, “Cross-Cultural Risk Perception: A Survey of Empirical Studies,” Kluwer, Dordrecht, 2000.
[66] R. K. Leik, “A Measure of Ordinal Consensus,” The Pacific Sociological Review, Vol. 9, No. 2, 1966, pp. 85-90. doi:10.2307/1388242
[67] M. Albert, S. Laberge, B. D. Hodges, G. Regehr and L. Lingard, “Biomedical Scientists’ Perception of the Social Sciences in Health Research,” Social Science & Medicine, Vol. 66, No. 12, 2008, pp. 2520-2531. doi:10.1016/j.socscimed.2008.01.052
