Aggregation of Priorities in Multi-Criteria Decision Analysis (MCDA): Connecting Decision Spaces in the Cognitive Space

In Multi-Criteria Decision Analysis, the well-known weighted sum method for aggregating normalised relative priorities ignores the unit of scale, which may vary across the criteria, and thus causes rank reversals. A new aggregation rule that explicitly includes the norms of the priority vectors is derived and shown to remedy this. An algorithmic procedure is presented to demonstrate how the rule can also be used in the Analytic Hierarchy Process, where the norms of the priority vectors are not readily available. In addition, recursion relations connecting two decision spaces with added or deleted alternatives give an opportunity to extend the idea of connectivity to a new concept of cognitive space. Expanded analytic modelling embracing multiple decision spaces or scenarios may assist in detecting deficiencies in analytic models and in grasping the big picture in decision making.


Introduction
In Multi-Criteria Decision Analysis (MCDA), a complex decision problem is often decomposed into a hierarchy of criteria, sub-criteria and alternatives. For such a decomposition, one assumes that the decision about alternatives with respect to one criterion (or sub-criterion) can be made independently of decisions made with respect to other criteria (or sub-criteria). It also becomes easier for a decision maker (DM) to understand and analyse the decision hierarchy in small segments at a time. Finally, results (i.e., criteria and sub-criteria weights and alternative priorities) from different segments are aggregated. Integrated values for the decision alternatives can be presented as aggregate scores, normalised relative priorities or ordinal ranks. In this paper, we consider a simple three-level hierarchy without any loss of generality. The top level represents the overall goal of the decision. The second level represents the criteria, and the third level is for a list of alternatives. The criteria are characterised by weights that may sum to one. The priorities of the alternatives under each criterion may be expressed as un-normalised preference scores or as relative priorities that are normalised such that they sum to one. The preference scores or relative priorities are often derived from a Multi-Criteria Decision Model (MCDM). Finally, the criteria weights and alternative priorities are aggregated to generate an overall rating for the alternatives.
A commonly used aggregation procedure, introduced in Multi-Attribute Value/Utility Theory (MAVT/MAUT), is the weighted sum method (see [1] for a brief overview and references therein). Preference scores or value scores are obtained for each alternative with reference to all attributes or criteria. Then, the weighted sum aggregation can be done in two ways. a) One can obtain the weighted sum of the preference scores (i.e., before normalisation) across the criteria for each alternative and then normalise them to obtain the final relative priorities. b) One can take the weighted sum of the already normalised relative priorities of the alternatives across the criteria to obtain final priorities that may not require further normalisation. The final results obtained from these two approaches are not the same. Moreover, the results obtained by method a) are free from the rank reversal problem, while those obtained by method b) may be affected by the reversal of ranks if another alternative is added or an existing one is deleted from the decision space. Many other types of MCDM techniques and associated aggregation procedures are discussed in the MCDA literature (for example, see [2] and references therein). However, in this paper we focus on additive aggregation approaches such as the weighted sum method (WSM) [3].
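The difference between the two aggregation routes can be seen in a small numerical sketch (the scores, weights and two-alternative setup below are hypothetical, chosen only for illustration):

```python
import numpy as np

# Hypothetical raw preference scores: rows = criteria, columns = alternatives A, B
scores = np.array([[9.0, 1.0],
                   [1.0, 1.0]])
weights = np.array([0.5, 0.5])  # criteria weights, summing to one

# Method a): weighted sum of raw scores, then a single final normalisation
agg_a = weights @ scores
agg_a = agg_a / agg_a.sum()

# Method b): normalise each criterion's scores first, then take the weighted sum
norm_scores = scores / scores.sum(axis=1, keepdims=True)
agg_b = weights @ norm_scores

print(agg_a)  # approximately [0.833, 0.167]
print(agg_b)  # [0.7, 0.3]
```

For the same inputs, method a) yields roughly (0.833, 0.167) while method b) yields (0.7, 0.3); the point at which normalisation is applied changes the aggregate priorities.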
Since method b) may cause unwanted rank reversals in any MCDM technique, it is justified to question the validity of the aggregate values derived using it. This observation brings the Analytic Hierarchy Process (AHP) [4] [5], a familiar MCDM technique, into focus. It initiated vigorous discussions about the rank reversal problem (see [6] for a comprehensive review and references) over the last few decades. The AHP derives normalised relative priorities for alternatives from pairwise comparison matrices under each criterion and uses method b) for aggregation across criteria. However, the unit of scale (which essentially is the norm of the vector representing the priorities) used in normalising the relative priorities of the alternatives may differ across criteria. Aggregation method b) ignores this fact, causing unwanted rank reversals. Therefore, Zahir [7] suggested a modified weighted aggregation formula (based on intuitive reasoning) that incorporated the norms of the priority vectors. This procedure takes into account the variation of the unit of scale across the criteria. Zahir [7] used recursion rules and several numerical examples to illustrate the method for computing correct aggregate relative priorities that are free from rank reversals. In contrast, the well-known simple weighted sum formula used with normalised priorities can give flawed results. In this paper, we derive the proposed aggregation rule from a rigorous and generalised mathematical approach [8], making it more significant for MCDA. We discuss, with numerical examples, how this aggregation rule can be used, albeit utilising additional information, in methodologies that generate only normalised alternative priorities for each criterion (e.g., the AHP).
In addition, based on recursion relations that link two decision spaces differing by an added or deleted alternative, we introduce the concept of a cognitive space spanning multiple decision spaces or scenarios. We present arguments for expanding the usual modelling experience beyond a particular decision space into the cognitive space. Expanded modelling will help us detect shortcomings, if any, of models developed on a particular decision space. In Section 2, we outline the new contributions of this paper. We provide a generalised derivation of a modified aggregation rule, further discuss the recursion relations and review the meaning of weights in this context. Section 3 and Section 4 demonstrate the use of the new aggregation rule in the AHP with numerical examples. Section 5 delineates the concept of expanded analytic modelling in the cognitive space, and conclusions are drawn in Section 6.

Representation of the Generalised Decision Space and Formulation of the Modified Aggregation Rule
In this section, starting from a generalised multi-criteria decision space, a modified aggregation rule is derived and contrasted with the familiar WSM. The calculations involve combining measurements from two different spaces (i.e., the criteria space and the alternative space), each having a different unit of scale and possibly a different dimension. However, no mathematical derivation of the WSM, which represents a complex measurement procedure, is available in the MCDA literature. Perhaps the formula was introduced as a simple extension of the expected value formula, with the normalised weights replacing probabilities. The earlier work [7] also started with the WSM and modified it to incorporate the norms of the priority vectors. The new contributions of this paper include a formal mathematical derivation of the modified weighted sum method, showing how the norms of priority vectors naturally appear in a formal mathematical treatment spanning the criteria and alternative spaces. The other contribution is the discussion of the cognitive space that connects multiple decision spaces, each represented by a set of criteria and alternatives. This approach helps us project the rank reversal problem as a cognitive issue in the context of an expanded analytic modelling framework. The logical connections between decision spaces as an alternative is added or deleted are also analysed later in this section in terms of the recursion rules. In the end, we discuss the impact of the normalisation of criteria weights, whose meanings are further reviewed in reference to earlier MCDA research.

Modified Weighted Sum Aggregation Rule from the Generalised Decision Space
We consider a decision space with n alternatives judged with respect to m criteria as represented by the hierarchy diagram in Figure 1. The criteria and alternatives constitute two distinct spaces, S(C) and S(A) respectively. S(C) is an m-dimensional and S(A) an n-dimensional vector space, spanned by sets of orthonormal basis vectors $\alpha_i$ ($i = 1, \ldots, m$) and $\beta_j$ ($j = 1, \ldots, n$) respectively, where the basis vectors are column vectors of dimension m and n satisfying

$$\alpha_i^{\mathrm T} \alpha_k = \delta_{ik}, \qquad \beta_j^{\mathrm T} \beta_l = \delta_{jl}, \qquad (3)$$

where T denotes transpose and the Kronecker delta is defined as $\delta_{ik} = 1$ when $i = k$ and $\delta_{ik} = 0$ when $i \neq k$. Let us assume that the criteria weights (implying the relative importance of the criteria) can be obtained from any available technique. We develop the formulations of this section independently of any particular model (e.g., the AHP). We represent the weights by a vector w, and for convenience we assume that the weights are normalised such that they sum to one. The rest of the discussion in this section does not change even if the weights are not normalised (in that case the weights are a constant multiple of the normalised ones). So, the weight vector w is

$$\mathbf{w} = \sum_{i=1}^{m} w_i \alpha_i, \qquad \sum_{i=1}^{m} w_i = 1.$$

Similarly, with respect to the i-th criterion, let the priority vector for the n alternatives be

$$\mathbf{v}_i = \sum_{j=1}^{n} v_j^i \beta_j, \qquad i = 1, \ldots, m. \qquad (4)$$

The priority vectors $v_i$ can be obtained in various ways subject to a decision model. For example, they can represent a set of scores obtained from a decision maker's mental model, or they can be obtained from the pairwise comparison matrix $P_i$ as in the AHP. If one employs the eigenvector method in the AHP, then $v_i$ can be just an eigenvector (not necessarily normalised) derived from the eigenvalue equation

$$P_i \mathbf{v}_i = \lambda_{\max} \mathbf{v}_i.$$

Other methods such as minimisation in the logarithmic least-squares approach have been discussed in the AHP literature as well (see [9]). However, our discussion of aggregation is not restricted to any particular method of eliciting the priority vector from the decision maker's preferences. In a generic form, we can represent this decision-making exercise by a generic operation (GO) in Figure 2.
If $u_i$ is the corresponding normalised priority vector, then

$$\mathbf{u}_i = \frac{\mathbf{v}_i}{s_n^i}, \qquad \sum_{j=1}^{n} u_j^i = 1, \qquad (5)$$

where $s_n^i$ is the norm of the unnormalised priority vector for the n alternatives with respect to the i-th criterion. In the scoring method, $s_n^i$ is known, but in the AHP it is unknown. $s_n^i$, the norm of the n-dimensional priority vector, sets the unit of scale for the normalised priorities $u_j^i$ and can vary across criteria. Therefore, the correct aggregation rule should incorporate $s_n^i$. The combined decision space D is a direct product of the criteria space and the alternative space, i.e.,

$$D = S(C) \otimes S(A).$$

Thus, the priority state $\psi$ is derived from a direct product of w and the $v_i$ so that it preserves the hierarchic structure of Figure 1. We compute the sum of the direct products of the orthogonal components of w in the criteria space with the corresponding priority vectors under each criterion. That is,

$$\psi = \sum_{i=1}^{m} w_i \, (\alpha_i \otimes \mathbf{v}_i). \qquad (6)$$

S. Zahir
Please note that the column vectors $\alpha_i$ are defined in the m-dimensional criteria space and the $v_i$ are defined in the n-dimensional alternative space.
It is worth pointing out that, in general, there are arbitrary constants multiplying each term in the expansion of Equation (6). For convenience, we absorbed them in the $v_i$ and thus in the corresponding norms $s_n^i$. Therefore, the norms that set the scale of normalisation are also arbitrary at this stage. It is the decision maker who eventually sets values for the norms, as will be explained later in the text when we discuss the computational algorithm and the numerical example. Now, expanding the $v_i$ in terms of the basis vectors $\beta_j$ given in Equation (3), we finally have

$$\psi = \sum_{p=1}^{m} \sum_{q=1}^{n} w_p v_q^p \, (\alpha_p \otimes \beta_q). \qquad (7)$$

We project out the intensity of priority corresponding to the p-th basis state of the criteria space and the q-th basis state of the alternative space by multiplying on the left by $(\alpha_p \otimes \beta_q)^{\mathrm T}$:

$$(\alpha_p \otimes \beta_q)^{\mathrm T} \psi = w_p v_q^p. \qquad (8)$$

In additive models, we sum over all criteria to obtain the aggregate priority $R_q$ of alternative q,

$$R_q = \sum_{p=1}^{m} w_p v_q^p. \qquad (9)$$

Now, using the relation between the unnormalised local priorities $v_q^p$ and the normalised ones $u_q^p$ of Equation (5), we have

$$R_q = \sum_{p=1}^{m} w_p s_n^p u_q^p. \qquad (10)$$

The normalised aggregate priority $r_q$ (i.e., $\sum_q r_q = 1$) for the q-th alternative is given by a Modified Aggregation Rule (MAR),

$$r_q = \frac{\sum_{p=1}^{m} w_p s_n^p u_q^p}{\sum_{q'=1}^{n} \sum_{p=1}^{m} w_p s_n^p u_{q'}^p}, \qquad q = 1, \ldots, n. \qquad (11)$$

The above expression should be compared with the traditional weighted sum formula, called the Simple Aggregation Rule (SAR),

$$r_q = \sum_{p=1}^{m} w_p u_q^p, \qquad q = 1, \ldots, n. \qquad (12)$$

The MAR was proposed as the correct aggregation procedure for obtaining overall priorities in [7]. It is worth pointing out at this stage that if the weight vector were not normalised (i.e., if in Equation (6) we used un-normalised weights $W_p$), we would have

$$w_p = \frac{W_p}{S_m}; \qquad S_m = \text{norm of the } m\text{-dimensional un-normalised weight vector}. \qquad (13)$$

It is straightforward to check that the expression for the normalised aggregate priority $r_q$ in Equation (11) would remain the same, confirming that we did not sacrifice any generality by considering a normalised weight vector at the very beginning in Equation (6).
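For a computational view of the contrast between the MAR and the SAR, a few lines suffice (the normalised priorities, weights and norms below are hypothetical illustrative values):

```python
import numpy as np

# Hypothetical inputs: normalised local priorities u[p, q]
# (rows: criteria p, columns: alternatives q), plus weights and norms
u = np.array([[0.9, 0.1],
              [0.5, 0.5]])
w = np.array([0.5, 0.5])   # criteria weights w_p, summing to one
s = np.array([10.0, 2.0])  # norms s_n^p of the unnormalised priority vectors

# SAR: r_q = sum_p w_p u_q^p
r_sar = w @ u

# MAR: r_q proportional to sum_p w_p s_n^p u_q^p
r_mar = (w * s) @ u
r_mar = r_mar / r_mar.sum()

print(r_sar)  # [0.7, 0.3]
print(r_mar)  # approximately [0.833, 0.167]
```

With unequal norms the two rules disagree; setting `s` to equal values makes `r_mar` coincide with `r_sar`, in line with the observation below.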
It should be further noted that the proposed aggregation rule (MAR) becomes identical to the familiar weighted sum rule (SAR) only when the norms of all priority vectors are equal (i.e., $s_n^1 = s_n^2 = \cdots = s_n^m$). This implies a common unit of scale used to normalise the local priority vectors under all criteria. A similar general structure of decision space was considered earlier in [8] for developing the AHP in Euclidean space, where a common unit of scale was assumed and which thus lacked a general rule of aggregation.
The above result is independent of the decision model that one uses to derive values for the alternative priority vectors. For example, Saaty [4] [5] developed an ingenious mechanism based on reciprocal pairwise comparison matrices and an eigenvalue equation for deriving the normalised local priorities in the AHP. Values for $s_n^i$ are unavailable there. Thus, the AHP used the familiar weighted sum rule (SAR) for aggregation, assuming the same unit of scale for local priorities under all criteria and causing rank reversals when they should not occur.

Recursion Rules
Two recursion rules used in [7] for the rank reversal analysis are re-stated below (they can be derived easily following the same procedure, with a slight change of notation).

1) When a new alternative (i.e., the (n + 1)-th alternative) is added to the decision space (the primed items refer to the corresponding quantities in the new decision space with n + 1 alternatives), in a fully consistent case,

$$u_j'^{\,i} = u_j^i \left( \frac{s_n^i}{s_{n+1}^i} \right), \qquad \text{for } j = 1, \ldots, n, \qquad (14)$$

where

$$s_{n+1}^i = s_n^i + v_{n+1}^i. \qquad (15)$$

2) When an existing alternative (i.e., the n-th alternative) is deleted from the decision space (the primed items refer to the corresponding quantities in the new decision space with n − 1 alternatives), in a fully consistent case,

$$u_j'^{\,i} = u_j^i \left( \frac{s_n^i}{s_{n-1}^i} \right), \qquad \text{for } j = 1, \ldots, n-1, \qquad (16)$$

where $s_{n-1}^i = s_n^i - v_n^i$. Using the MAR, it can be proved [7] that the ratio of the aggregate priorities of any two alternatives is preserved when a new alternative is added or an existing one is deleted (see Appendix A).
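The recursion rules and the ratio-preservation property can be checked numerically. The sketch below assumes sum-to-one normalisation and a fully consistent case; the weights, priorities and norms are hypothetical, and `v_new` holds the assumed raw scores of the added alternative:

```python
import numpy as np

# Hypothetical two-criteria, two-alternative decision space
w = np.array([0.6, 0.4])          # criteria weights
u = np.array([[0.7, 0.3],         # normalised priorities u_q^p
              [0.2, 0.8]])
s = np.array([5.0, 10.0])         # norms s_2^p

def aggregate(w, s, u):
    # Unnormalised aggregate priorities R_q = sum_p w_p s_n^p u_q^p
    return (w * s) @ u

R = aggregate(w, s, u)

# Add a third alternative with assumed raw scores v_{n+1}^p per criterion
v_new = np.array([2.0, 1.0])
s_new = s + v_new                 # recursion for the norm: s_{n+1} = s_n + v_{n+1}
u_new = np.column_stack([u * (s / s_new)[:, None],  # u'_j = u_j * s_n / s_{n+1}
                         v_new / s_new])
R_new = aggregate(w, s_new, u_new)

# The ratio of aggregate priorities of the original alternatives is preserved
print(R[0] / R[1], R_new[0] / R_new[1])
```

The rescaled priorities still sum to one under each criterion, and the ratio of the existing alternatives' aggregate priorities is unchanged by the addition.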

Meaning of Criteria Weights in the Modified Aggregation Rule
In MCDA research, it has been noted [10] that the most widely used and the simplest aggregation method is the weighted sum method (WSM). Considering n alternatives and m criteria, the aggregate preference $P_i$ of alternative $A_i$ ($i = 1, \ldots, n$) is calculated according to the formula

$$P_i = \sum_{j=1}^{m} W_j a_{ij},$$

where $W_j$ is the weight of the j-th criterion and $a_{ij}$ is the preference score of alternative $A_i$ under that criterion. Here, the $a_{ij}$ can be normalised (i.e., $\sum_i a_{ij} = 1$ for each j), or both the $a_{ij}$ and the $W_j$ can be normalised. In the discussion following Equation (13), it is explained that the normalisation of the weights does not affect the MAR, which is being proposed here as the correct aggregation method since it prevents rank reversals. But rank reversals may occur in the SAR when the $a_{ij}$ are normalised. Rank reversal arguments, often made in the context of the AHP, have not seriously questioned the validity of the SAR itself. Often, the normalisation of alternative priorities has been mentioned as a possible cause (see [1] and [2]). Thus, keeping the SAR as the unquestioned aggregation rule, attempts have been made [11] to find solutions to the rank reversal problem by adding ambiguity to criteria weights. New versions of the AHP called "referenced AHP" and "B-G modified AHP" were introduced. The examples used in [11] required additional information about the absolute preference scores of the alternatives (note the variables $T_{i,k}$ and other quantities derived from them in [11]). But no further discussion was presented to indicate how computations can be done when such additional information is not available, as is commonly the case in the AHP.
Here, in this research, we may be tempted to redefine the criteria weights as follows, absorbing the norms (i.e., the additional information) of the priority vectors:

$$\omega_n^p \equiv w_p s_n^p.$$

But, because of the recursion relations in Equation (14) and Equation (16), the new weights $\omega_n^p$, which depend on the number of alternatives n, may change if a new alternative is added or an existing one is deleted. Thus, $\omega_n^p$ can be called an adjusted weight. Choo, Schoner and Wedley [12] analysed the interpretation of criteria weights with reference to several multi-criteria decision-making models (e.g., linking pin AHP, referenced AHP, Saaty's AHP, interactive MCDM, MAVT, SMART, the outranking method ELECTRE). They concluded that the most common interpretations of criteria weights were: marginal contribution, trade-offs, gradient of the utility function, scaling factor and criteria importance. Here, in this research, the criteria weights $w_p$ represent the importance of the criteria as well, and they get adjusted by the norms of the priority vectors, creating the adjusted weights $\omega_n^p$. $w_p$ is obtained from a DM's inputs, and $\omega_n^p$ is a derived quantity. They appear in the formulations because we propose that aggregation should be done using the modified rule MAR as the correct one, which we derived from a general mathematical approach independent of any particular MCDA tool or method.

Using the Proposed Aggregation Rule: Special Case of the AHP
In the AHP, we can determine the weights $w_p$ and the local normalised alternative priorities $u_q^p$ ($q = 1, \ldots, n$) for each criterion p (p = 1, …, m) through pairwise comparisons, possibly by the eigenvalue method. However, how do we compute $s_n^p$? Zahir [7] discussed this and presented a computational algorithm suggesting the need for extra inputs of m estimates for $s_1^p$ (n = 1 and p = 1, …, m) and the use of the recursion relations of Equation (14). Alternatively, a) one can estimate m values of $s_n^p$ for any particular value of n assuming a stochastic distribution, or b) one can make a direct estimate of the norms along with errors. Then, one can compute an overall error estimate for the aggregate normalised priorities. In cases a) and b), once a set of values is chosen for the $s_n^p$, their successive values (i.e., on addition or deletion of an alternative) are restricted by the recursion relations given in Equation (14) and Equation (16). These equations manifest the reasoning that links the decision spaces in the cognitive space of the decision problem (see Section 5 below for more).
In the following formulations, we will assume that there are n alternatives, so we will suppress the index n and restore it only in the final results. Estimating values for $s_n^p$ directly, as in a) and b) above, may be involved. However, in a fully consistent case, the recursion relation of Equation (14) can be used conveniently to obtain values of $s_n^p$ for any n, starting from n = 1 when we have only one alternative. The preference score of that one alternative is the same as the norm of the priority vector in this case. The normalised criteria weights and the normalised relative priorities for the alternatives can be obtained from the eigenvalue equations of the pairwise comparison matrices. The recursion relation will then generate successive estimates for the norms and thus facilitate aggregation based on the MAR. We describe the procedure in an algorithmic form in Text Box 1 (see [7] for an earlier version) and demonstrate it step by step with a numerical example in Appendix B.
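A minimal sketch of this procedure, under the assumptions of full consistency and sum-to-one normalisation (all numerical values are hypothetical, and `update_norms` is an illustrative helper, not a function from [7]):

```python
import numpy as np

def update_norms(s, u_last):
    """Consistent-case recursion: the raw scores of the existing alternatives
    are unchanged, so s_{n+1}^p = s_n^p / (1 - u'_{n+1}^p), where u'_{n+1}^p
    is the new alternative's normalised priority under criterion p."""
    return s / (1.0 - u_last)

# Step 1: with a single alternative, its estimated preference scores are
# themselves the norms s_1^p (hypothetical values)
s = np.array([4.0, 8.0])
w = np.array([0.5, 0.5])  # normalised criteria weights

# Step 2: a second alternative is added; suppose the pairwise comparisons
# yield these normalised priorities (fully consistent case)
u = np.array([[0.8, 0.2],
              [0.4, 0.6]])
s = update_norms(s, u[:, -1])  # norms s_2^p

# Step 3: aggregate with the MAR
R = (w * s) @ u
r = R / R.sum()
print(r)  # [0.48, 0.52]
```

The helper is algebraically equivalent to adding the new raw score to the norm, since $s_{n+1} (1 - u'_{n+1}) = s_{n+1} - v_{n+1} = s_n$.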

Discussion of Results and Future Research
In the preceding sections, we observed that as we move from one decision space to another (obtained by adding or deleting an alternative), the decision model can generate meaningful results only if the aggregation rule is the correct one. This "correctness" is based on our reasoning, i.e., ranks should not reverse when they are not expected to. We illustrated with numerical examples how the MAR supported our correct expectations. Deficiencies of the SAR were detected only when logical reasoning could not be reconciled with calculated results as an alternative was added to or deleted from a decision model. This observation emphasises the need for testing the validity of analytic modelling over an expanded model space. In the AHP (or MAVT using the SAR), if the early developers had considered the results of two decision spaces (obtained by adding and deleting alternatives), they would have identified the deficiency of the SAR. An improved aggregation rule such as the MAR discussed in this paper, which incorporates the unit of scale used in normalising the alternative priorities, would then have been adopted. Such an expanded "big picture" framework is outlined below.

Cognitive Space and Decision Space: Framework for Avoiding Modelling Deficiencies
In the context of an MCDM, decision making can be discussed in terms of the decision space that is characterised by the number of alternatives and criteria. Alternatives and criteria have numerical or subjective values (derived from a decision maker's cognitive process and its interaction with the external world) in a quantitative or a qualitative decision model respectively. However, since in this paper we are dealing with quantitative decision modelling, we assume that we can assign numerical values to our judgements. In general, there can be a large number of choices for the number of alternatives and criteria, making a decision problem complex. Often the problem is represented by a model that is nothing but an abstraction of reality. However, the analysing capability of humans as rational beings, even with technological assistance, is limited, and thus we are inclined to "satisfice" by setting a boundary to the decision spaces. The boundary can change as we acquire more knowledge about the problem or are subject to different external influences. This effect can be understood in light of Simon's bounded rationality [14].
As more alternatives (or criteria) are added or some are deleted, the decision space expands or contracts, creating a new decision space for the underlying decision model. The set of related decision spaces collectively is the cognitive space of the decision problem, and we try to link them through reasoning and understanding. Each decision space is also a cognitive sub-space where model formulations are defined in terms of logic, reasoning and facts. Often, the links among decision spaces are made in human minds and are left out of the decision models, and thus each model is treated independently in its own decision space. But together they are discussed by researchers in their cognitive domain via reasoning processes. Our memory is a repository of information, and it plays a significant role in linking the analyses of the decision spaces via reasoning. Thus, it helps us connect scenarios in decision spaces logically to the satisfaction of our rational minds.
We can think of a Gedanken experiment in reference to the example in Section 4. We keep the inputs the same as in Section 4. Suppose a DM computes the relative priorities using the SAR for three alternatives. Then she takes a nap and wakes up with no memory of the priorities derived before the nap. Now she computes the relative priorities using the SAR again, for four alternatives, and obtains new results. Since she has no memory of the results she obtained before the nap, does it matter even if the ranks are now reversed? Most probably it does not. However, in reality we have memory, and our cognitive process finds reasoning to link findings and results from related decision spaces. The links of reasoning connecting the results of one decision space to another, as they reside in our memory, are not incorporated in the decision models, or, maybe, the models are deficient in part. We reason in our minds. Sometimes we also fail to satisfy our reasoning, and then we raise the question of the validity of a decision model as it stands. The problem may be due to a flaw in parts of the model, which therefore needs to be corrected. Another approach, not widely pursued in practice, may be to extend the decision model across the cognitive space so that it also models the reasoning links among the decision spaces. That would also help us find a solution to an apparent imperfection in a decision model.
As discussed earlier, when a new alternative is added or an existing alternative is deleted (i.e., when we move from one decision space to another in the cognitive space of the decision problem by expanding or contracting the decision space; see Figure 3), the ranks may be reversed in the SAR-based AHP. We then face a problem in our reasoning, as we find it difficult to understand the reversal of ranks when such a reversal is not supposed to happen. In the MAR, the associated recursion rules demonstrate how the units of scale (set by the norms of the preference vectors) change as we make a transition from one decision space to another (i.e., from DS-i to DS-j in Figure 3). The MAR takes this effect into the formulation and thus preserves the ranks of existing alternatives when they are expected to be preserved. The MAR models the decision spaces DS-i and DS-j and also represents the cognitive link CL-ij, the reasoning that connects DS-i and DS-j in the cognitive space. CL-ij represents the recursion rules, which assert that the norms (setting the scale of normalisation) of the priority vectors change as an alternative is added or deleted; the MAR incorporates this assertion, preserving ranks when they should be preserved. The SAR is deficient because it does not incorporate the scale of normalisation and hence produces unwanted rank reversals.
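A compact numerical illustration of this deficiency (with hypothetical raw scores and equal criteria weights) shows the SAR reversing ranks on the addition of an alternative while the MAR preserves them:

```python
import numpy as np

w = np.array([0.5, 0.5])  # equal criteria weights

def sar(raw):
    # Normalise each criterion's scores, then take the weighted sum
    u = raw / raw.sum(axis=1, keepdims=True)
    return w @ u

def mar(raw):
    # Incorporate the norms s_n^p (sums of the raw scores) before aggregating
    s = raw.sum(axis=1)
    u = raw / s[:, None]
    R = (w * s) @ u
    return R / R.sum()

# Hypothetical raw scores (rows: criteria, columns: alternatives A, B)
two = np.array([[1.0, 2.0],
                [9.0, 1.0]])
# The same scores with an alternative C appended
three = np.array([[1.0, 2.0, 1.0],
                  [9.0, 1.0, 90.0]])

print(sar(two)[:2], sar(three)[:2])  # A > B flips to B > A: rank reversal
print(mar(two)[:2], mar(three)[:2])  # A > B in both cases: ranks preserved
```

Under the SAR, C's large raw score under the second criterion dilutes A's normalised priority there and flips the A-B order; under the MAR, the norms absorb that change and the order is stable.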
Here, we may attempt to relate decision spaces to a simplified form of scenarios as a parallel concept. The idea of a scenario has been discussed for over 40 years (see [15] for some early references) in the context of strategic decision making in many areas of application, including MCDA (see [16] and [17] and references therein). Huss [15] called a scenario "a narrative description of a consistent set of factors that define in a probabilistic sense alternative sets of future business conditions" and discussed its application to forecasting. The concept of a scenario is still an evolving one, but it is clear that each scenario has a context. Tversky and Kahneman [18] realised the difficulties in developing scenarios and stated: "it is exceedingly difficult for the human mind to apprehend sequences of variations of several interacting factors. We suggest that in evaluating the probability of complex events, only the simplest and most likely scenarios are likely to be considered." In this paper, we keep our discussion of scenarios limited to MCDA.
We have stressed the need for linking decision spaces so as to test the validity of mathematical models. The analysis presented in the preceding sections led us to the correct aggregation rule called the MAR. The simplified version of a scenario that we adopt is defined by a set of alternatives and criteria and the preferences assigned to them by a DM, who sets the context of the scenario. The context emerges as the DM assigns to alternatives and criteria subjective and objective judgements that are influenced by his/her value system and sense of uncertainty. Mathematically speaking, in a modelling environment the context is created by a set of variables, operators and constraints formulated as mathematical expressions. Thus, as we navigate from one decision space to another, some of the things that can happen are:
- Parameter values change.
- New parameters enter.
- Old parameters may drop out.
- Expressions are re-evaluated (e.g., the recursion relations in the preceding sections).
- New constraints may emerge.
- And so on.
If we have a valid model, its predictions will satisfactorily track our cognitive expectations not only in a particular decision space but also as we navigate to others. Gomes and Costa [19] concluded that the traditional approach of multi-criteria decision making, factoring in a single vision of scenarios, might result in less than robust suggestions for decision makers. In their research, they considered multiple scenarios where the criteria weights were evaluated independently for each scenario, keeping the number of alternatives unchanged, to obtain robust results. The authors of [17] explored possibilities of integrating MCDA with scenario analysis and illustrated the concept with an agricultural policy planning example in a developing country. In this example, they kept the number of alternatives and criteria across scenarios unchanged, but preferences were assigned in the context of the scenarios. The authors of [16] considered a case where the numbers of alternatives and criteria were unchanged but their preferences were varied across scenarios for improved evaluation of strategic options. In our case, we obtained the correct aggregation rule (MAR) that avoided rank reversals when the number of criteria and their weights were unchanged but an alternative was added or deleted. While others have extended the use of scenario planning and MCDA for improved strategic decisions, we promote a framework for improved decision modelling in terms of the parallel concept of decision spaces (i.e., simplified scenarios) embedded in a cognitive space.

Future Research
While we have discussed the multi-criteria decision problem in light of decision spaces, we can extend the concept to scenarios applicable to other areas of the decision sciences. In management science and operational research, we build models for inventory/supply chains, resource allocation, financial market analysis, portfolio management, etc. based on a particular scenario (analogous to a decision space). Then we change models as scenarios change (often probabilities are assigned to each scenario in scenario analysis). We try to understand the varying results in our reasoning as we connect scenarios utilising our memory and cognitive strength. It may be possible to include inter-scenario reasoning by having cognitive links in models spanning more than one scenario. If a model cannot explain the results from two scenarios, that can be attributed to a deficient component or weakness of the model itself and thus will call for improvement of the model. We can also try to understand the significance of such integrated multi-scenario decision modelling in light of systems thinking [20]. Inter-related reasoning embedded in the expanded analytics is expected to help us understand "the whole" for better decision making. The multiple decision spaces or scenarios will be linked by a network in the cognitive space with the objective of assisting in forming a decision that represents the "big picture".

Conclusion
We identified the deficiency of the traditional weighted sum aggregation rule, used in the AHP and possibly in MAVT with prior normalisation, as the cause of rank reversals, and proposed a modified aggregation rule as a remedy. Utilising recursion rules, we analysed the additive aggregation process in light of the new aggregation rule. We also suggested ways in which it could be used with the AHP or MAVT with prior local normalisation. The ideas are illustrated with numerical examples. We also discussed possibilities for extending the usual modelling techniques, developed in the context of a particular decision space or scenario, to an expanded form in the cognitive space. The concepts of satisficing and bounded rationality were invoked for managing such expansions. Testing the validity of mathematical models in the decision sciences, operations management and possibly economics may need the support of a "big picture" beyond a single scenario.

In Step 3.1 of Text Box 1, m estimates for the preference scores of alternative 1 are required. In MAVT, such values are obtained from value functions. However, the m estimates for the preference scores of alternative 1 can also be obtained assuming a Beta distribution as a possible choice if we want to treat the underlying uncertainty stochastically. Let a decision maker estimate the norm (n = 1), following a Beta distribution, as follows: $b_p$ = high-end estimate for $v_1^p$, $a_p$ = low-end estimate for $v_1^p$, $m_p$ = most likely estimate for $v_1^p$. Then $v_1^p$ can represent the average value of the preference score as computed by the well-known formula

$$v_1^p = \frac{a_p + 4 m_p + b_p}{6}.$$

A technology-based visual interface for estimating the preference scores $v_1^p$ in a comparative mode in reference to the m criteria has also been suggested. Such visual interfaces have been used in some AHP software (e.g., VISA, a software package not currently available, and MCDSS [13]).
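Assuming the well-known three-point (PERT-style) mean of a Beta estimate, the average preference score can be computed as follows (the numerical estimates are hypothetical):

```python
def beta_average(a, m, b):
    """Three-point (PERT-style) mean of a Beta estimate:
    low a, most likely m, high b -> (a + 4m + b) / 6."""
    return (a + 4.0 * m + b) / 6.0

# Hypothetical low / most likely / high estimates for one criterion
print(beta_average(2.0, 5.0, 8.0))  # 5.0
```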

Figure 3. Decision spaces in cognitive space.