From Decision in Risk to Decision in Time (and Return)

Abstract

This paper aims to restate, in a decision-theory framework, the results of some significant contributions to the literature on probability discounting that followed the publication of the pioneering article by Rachlin et al. We provide a restatement of probability discounting, usually limited to the case of 2-issues lotteries, in terms of rank-dependent utility, in which the utilities of the outcomes of n-issues lotteries are weighted by probabilities transformed after their transposition into time-delays. This formalism makes the typical cases of rationality in time and in risk mutually exclusive, but allows looser types of rationality. The resulting attitudes toward probability and toward risk are then determined in relation to the values of the two parameters involved in the procedure of probability discounting: a parameter related to impatience and pessimism, and a parameter related to time-consistency and the separation between non-optimism and non-pessimism. A simulation illustrates these results through the characteristics of the probability weighting function.


Diaye, M., Lapidus, A., & Schmidt, C. (2024). From Decision in Risk to Decision in Time (and Return). Theoretical Economics Letters, 14, 2036-2065. doi: 10.4236/tel.2024.145101

1. Introduction

The existence of significant parallels between decision in time and decision in risk is rather intuitive because of the formal similarities between standard discounted and expected utility. However, the more specific thesis that delayed rewards and probable rewards can be treated in the same way because, contrary to a common view, they refer to the same matter, is less familiar. It seems to have been first explored by psychologists like Rotter (1954), for whom delays of gratification could be regarded as involving risky rewards by their very nature. Later, authors like Prelec & Loewenstein (1991) initiated a large stream of works by arguing, on the basis of anomalies observed in both expected utility and discounted utility models, that a delayed reward and a probable reward could be dealt with in the same way, within a multi-attribute choice model. At the same time, Rachlin and his co-authors ((Rachlin, Raineri, & Cross, 1991), in the continuation of (Rachlin, Logue, Gibbon, & Frankel, 1986)) developed, in a seminal paper which accounts for experiments with college undergraduates, the idea that a probable reward could be viewed as a delayed reward1, discounted to obtain its present value, provided probabilities, regarded as “odds-against”, are transposed into delays. Despite a small initial audience, this approach took hold (see, for instance, (Rachlin & Siegel, 1994; Rachlin, Siegel, & Cross, 1994; Ostaszewski, Green, & Myerson, 1998; Rachlin, Brown, & Cross, 2000; Green & Myerson, 2004; Takahashi, 2005; Yi, de la Piedad, & Bickel, 2006)) and gave rise to what was first called “probabilistic discounting” by Rachlin et al. (1991).

The discounting function which aimed to account for decision under risk was assumed to be of a hyperbolic kind2, on the basis of arguments that were either empirical or pertaining to the shape of the relation between the reward and the rate of reward. From an analytical viewpoint, something new occurred with the publication of a paper by Cajueiro (2006), who first introduced a hyperbolic discounting function based on the deformed algebra inspired by Tsallis’ non-extensive thermodynamics (Tsallis, 1994), the q-exponential function3, especially relevant to account for increasing impatience. In the continuation of Cajueiro (2006), Takahashi, either alone (Takahashi, 2007b, 2011) or with various co-authors (Takahashi et al., 2012, 2013), took over the q-exponential function to account for time discounting as well as probability discounting. Meanwhile, the same authors focused on the nature of the delay associated with probabilities in probability discounting, stressing the distinction between physical and perceived waiting time. A classical approach to the way an external stimulus is scaled into an internal representation of sensation, initiated by Weber and Fechner in the second half of the 19th century in psychophysics, concluded that the relation was logarithmic. Nearly a century later, the issue was revived by Stevens (1957), who discussed the possibility of an alternative (power functions) to the logarithmic relation. More recently, some authors (see (Dehaene, 2003)) have given a neural basis to the view that our mental scaling is logarithmic. In line with this perspective, Takahashi and his co-authors supported the view that the perceived waiting time is logarithmically related to the physical waiting time (Takahashi, 2005, 2011; Takahashi et al., 2012). In Takahashi (2005) and Takahashi et al. (2012), the reward was submitted to an exponential discount, relative not to physical waiting time but to perceived waiting time. Relative to physical waiting time, this resulted in a general hyperbolic discounting function in Takahashi (2005), transposed into a q-exponential discounting function in Takahashi et al. (2012). As a consequence, the outcome of the operation was, like in Kahneman & Tversky (1979) and with similar consequences, a transformation of the decision weight of the probability associated with the reward.

A common point of this literature (with, of course, the notable exception of papers like that of Prelec & Loewenstein (1991)) is that its main concern was to identify a few typical relations consistent with the results of limited experiments related to choices under risk or over time (Somasundaram & Eli, 2022; Scrogin, 2023). From this point of view, it can rightly be considered a success story. But on the other hand, the theoretical support of these experimental results is often limited to what is strictly required and presented in a piecemeal way, according to the needs of the experiments. For instance, the idea of a logarithmic perception of physical time appeared as early as 2005 in Takahashi’s work, in a paper devoted to time discounting, not to decision in risk. Its integration into a wider representation, leading to a probability weighting function of which Prelec (1998) was a special case, only occurred six years later (Takahashi, 2011; see also (Takahashi et al., 2012)). In the same way, the systematic use of 2-issues lotteries in which one loses or wins is appropriate for dealing with issues like the comparison of the respective effects of exponential and hyperbolic discounting on the discounted value of a reward, or the distortion of the probability of obtaining a reward induced by probability discounting. However, this limitation to simple 2-issues lotteries has significant consequences regarding the way probabilities are perceived. Finally, the issue of the desirability of the reward is not addressed head-on. References to the pioneering work of Kahneman & Tversky (1979) are quite frequent, but they usually concern the weighting of probabilities, not the value function that would lead us to consider that our preferences relate not to a state (through a utility function), but to a difference with respect to the status quo (the value function).

In the remainder of this article:

• We provide a restatement of probability discounting in which probabilities are transformed into expected delays before winning, but where i) the usual case of a 2-issues lottery is extended to the more general case of discrete random variables with finite support and ii) a utility function is explicitly introduced in the analysis, so that we come to a rank-dependent utility approach4 (Section 2).

• We show that the resulting formalism makes the typical standard cases of rationality in time and in risk mutually exclusive, but allows looser types of rationality, involved in the axiomatisation of generalised hyperbolic discounting and of rank-dependent utility, like the Thomsen condition of separability and comonotonic tradeoff consistency (Section 3.1).

• At last, we show that the attitude toward probabilities expressed in the probability weighting function depends primarily on the value of the discounting parameter q, giving rise to three alternative situations. When $0 < q < 1$, pessimism toward probabilities prevails, possibly mixed with optimism according to the value of the other parameter k. When $q = 0$, the value of k determines either optimism or pessimism. And when $q < 0$, optimism prevails, possibly mixed with pessimism according to the value of k. It is, therefore, the same discounting function which, according to the values of parameters which can be interpreted in terms of decision in time, displays all possible attitudes vis-à-vis probabilities. In combination with the utility function, such probability discounting gives rise to the various types of attitudes toward risk (aversion or seeking; strong, monotone or weak) (Section 3.2).

We bring together two pieces of literature: the psychophysics literature and the risk and uncertainty literature. We make clear how to go from perceived waiting time to physical waiting time to risk and uncertainty, and vice versa. We also account for how probability discounting determines attitudes toward probabilities and risk. We provide a unifying framework with a full set of properties of the probability weighting function commonly recognized in the literature. Our three theorems show the link between attitudes toward probabilities, attitudes toward outcomes, and attitudes toward risk when the discounting parameter varies on the interval $]-\infty, 1[$.

2. Extending Probability Discounting

As an approach to decision under risk through a specific valuation of lotteries, the probability discounting approach which emerges from the pioneering work of Rachlin et al. (1991) might be viewed as a four-step procedure, involving 1) the transposition of a probability into a physical delay; 2) the transformation of this physical delay into a perceived delay; 3) the assessment of the resulting temporal discounting; and 4) the transformation of a discounted delayed value into the utility of a probable reward. The four steps of this procedure are outlined below.

2.1. From the Probability of Gain to a Physical Delay Before This Gain: A Bernoulli Trial Transposition

The usual framework of probability discounting is, more or less explicitly, that of a representation of decision under risk where the set Λ of probability distributions is typically defined over {0, x}, x being the possible gain, of probability p, of a 2-issues lottery L belonging to Λ. After Rachlin et al. (1991), a common feature of the probability discounting contributions is that L is related to the valuation of a decision through a waiting time l, which can be interpreted as “odds against” in repeated gambles5. Though rather intuitive, this interpretation can be given a firmer basis than the usual one, which draws on the comparison with a gambler betting on a horse race, in terms of repeated Bernoulli trials. It is well known that the expected value of the random variable representing the number of trials before winning (the winning trial included) is 1/p. If we take the interval between two trials as the unit of time, the expected value of the physical delay l before the winning trial is therefore given by:

$$l = \frac{1}{p} - 1 = \frac{1-p}{p} \qquad (1)$$

Such a representation of the link between probability and physical delay has been commonly admitted, at least since Rachlin et al. (1991), as the initial step of a procedure leading to a transformation of probabilities. Insofar as we remain in the framework of 2-issues lotteries, and as the counterpart of the transformation of the probability p of success is a parallel and consistent transformation of the probability 1−p of failure, the immediate link in (1) between probability and delay is not contentious. But the more general case of n-issues lotteries is less simple. Assume these lotteries L are the laws of probability of discrete random variables X with finite support:

$$L = (x_1, \ldots, x_i, \ldots, x_n;\; p_1, \ldots, p_i, \ldots, p_n) \qquad (2)$$

in which the outcomes $x_i$ are ranked in increasing order, $x_1 < \cdots < x_i < \cdots < x_n$, and $\sum_{i=1}^{n} p_i = 1$.

Let G be the decumulative distribution function of the random variable X whose probability law is given by the lottery L: $G(x_i) = \Pr(X \geq x_i)$. It is obvious that $G(x_1) = 1$ and $G(x_n) = p_n$. Consider now not the isolated probability $p_i$ of obtaining $x_i$, but the probability of obtaining at least $x_i$, that is, $G(x_i)$. We can derive from $G(x_i)$ a Bernoulli trial whose issues are either success, with an outcome between $x_i$ and $x_n$ (both included), or failure, with an outcome between $x_1$ and $x_{i-1}$ (also included). $G(x_i)$ is therefore the probability of success, and $F(x_i) = 1 - G(x_i)$ the probability of failure ($F(x_i)$ standing for the usual cumulative distribution function). The expected number of Bernoulli trials to obtain one success (that is, getting at least $x_i$) is $1/G(x_i)$. And going on transposing probability into a physical delay before winning in a repeated gamble like in (1), the average delay $l_i$ before success, that is, before obtaining at least $x_i$, is given by:

$$l_i = \frac{1 - G(x_i)}{G(x_i)} \qquad (i = 1, \ldots, n) \qquad (3)$$
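
As an illustration, the following minimal sketch (ours, not taken from the papers discussed here) computes the decumulative probabilities $G(x_i)$ and the corresponding expected delays $l_i$ of equation (3) for a hypothetical three-outcome lottery; the outcome probabilities are arbitrary.

```python
# Minimal sketch (illustrative, not the authors' code): transposing the
# decumulative probabilities of an n-outcome lottery into expected waiting
# times, as in equations (1) and (3).

def decumulative(probs):
    """G(x_i) = Pr(X >= x_i), outcomes assumed ranked in increasing order."""
    return [sum(probs[i:]) for i in range(len(probs))]

def expected_delays(probs):
    """l_i = (1 - G(x_i)) / G(x_i): expected number of Bernoulli trials
    before the first success (winning at least x_i), winning trial excluded."""
    return [(1.0 - g) / g for g in decumulative(probs)]

# Hypothetical 3-issues lottery with outcomes ranked x1 < x2 < x3.
probs = [0.5, 0.3, 0.2]
print(decumulative(probs))     # [1.0, 0.5, 0.2]
print(expected_delays(probs))  # [0.0, 1.0, 4.0]
```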

2.2. From a Physical to a Perceived Delay: A Logarithmic Treatment

Insofar as p is an objective probability, l can be viewed as “physical waiting time” (Takahashi et al., 2012: p. 13). Drawing on the reintroduction of Fechnerian-like perspectives in psychophysics (Dehaene, 2003), Takahashi and his co-authors assume that in a 2-issues lottery, the subjectively perceived waiting time τ is a logarithmic function of the physical waiting time

$$\tau = a \ln(1 + bl) \qquad (4)$$

with $a, b > 0$6. The same principles hold in the general case of an n-issues lottery: the subjectively perceived waiting time $\tau_i$ before winning at least $x_i$ is logarithmically related to the physical waiting time:

$$\tau_i = a \ln(1 + b l_i) \qquad (i = 1, \ldots, n) \qquad (5)$$

Replacing, as in (3), the physical delay by the decumulative probability (i.e., the probability of winning at least a certain outcome), the probability of winning at least $x_i$ is related as follows to the perceived delay before winning at least $x_i$:

$$\tau_i = a \ln\left(1 + b\, \frac{1 - G(x_i)}{G(x_i)}\right) \qquad (i = 1, \ldots, n) \qquad (6)$$
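
Continuing the hypothetical lottery above, a short sketch (ours) maps the physical delays into perceived delays as in equations (5)-(6); the values a = 1 and b = 1 are purely illustrative.

```python
import math

# Minimal sketch: perceived waiting time as a logarithmic function of the
# physical waiting time, equations (5)-(6). a and b are illustrative values.
def perceived_delay(l, a=1.0, b=1.0):
    return a * math.log(1.0 + b * l)

print([round(perceived_delay(l), 3) for l in (0.0, 1.0, 4.0)])  # [0.0, 0.693, 1.609]
```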

2.3. From Perceived Time Discounting to Physical Time Discounting

The third step provides a separate treatment of time discounting. In the case of the 2-issues lottery explored by the standard literature on probability discounting, things are rather simple. The basic idea is that of an exponential discounting whose argument is the perceived delay τ, instead of the physical delay l:

$$\mu = \exp(-r\tau) \qquad (7)$$

where μ and r stand respectively for the discounting factor and the discount rate7 for an outcome x whose expected perceived delay before winning it is τ. From (4) and (7) we therefore have:

$$\mu = (1 + bl)^{-ra} \qquad (8)$$

which amounts to a generalized hyperbolic discounting factor8. Exponential discounting relative to perceived time has therefore generated hyperbolic discounting relative to physical time.

However, such a determination of the discounting factor would be seriously flawed if extended as such to n-issues lotteries: if, drawing on (5), $\exp(-r\tau_i) = (1 + b l_i)^{-ra}$ can rightly be viewed as a discounting factor, it depends on the expected time (perceived or physical) before winning at least $x_i$, not before winning exactly $x_i$. The discounting factor associated with the outcome $x_i$ is therefore the difference between two discounting factors: the one related to the expected time before winning at least $x_i$ and the one related to the expected time before obtaining strictly more than $x_i$, that is, at least $x_{i+1}$. So that, assuming that $l_{n+1} \to +\infty$:

$$\mu_i = (1 + b l_i)^{-ra} - (1 + b l_{i+1})^{-ra} \qquad (i = 1, \ldots, n) \qquad (9)$$

After the work of Cajueiro (2006), the expression of the discounting factor has commonly been rewritten, through a change of parameters, as a q-exponential discounting based on Tsallis’ statistics. This change leads to a pair of alternative parameters, k and q, defined as $k = rab$ and $q = 1 - 1/(ra)$. Extending this redefinition to the expression of $\mu_i$, (9) can be rewritten as9:

$$\mu_i = \psi(l_i) - \psi(l_{i+1}) \qquad (10)$$

where $\psi(l_i) = \left(1 + k(1-q)\, l_i\right)^{-\frac{1}{1-q}}$ $(i = 1, \ldots, n)$.

Or, using Cajueiro’s notation for q-exponential discounting:

$$\mu_i = \exp_q(-k l_i) - \exp_q(-k l_{i+1}) \qquad (i = 1, \ldots, n) \qquad (11)$$

The discounting factor $\mu_i$ can therefore be equivalently expressed as the difference between the values $\psi(l_i)$ and $\psi(l_{i+1})$ of two generalized hyperbolic discountings (10) or, equivalently, of two q-exponential discountings (11). Cajueiro’s presentation, which introduced q-exponential discounting in 2006, can be found in the literature as early as the following year10. It will be considered that, because of the definition of a, b and r in (4) and (7), the parameters k and q are, by construction, such that $k \geq 0$ and $-\infty < q < 1$. The possibility that q is negative does not appear in the article by Cajueiro (2006), nor in that of Takahashi (2007b). However, when he took up q-discounting again during the same year or the following one, in an intertemporal choice framework, Takahashi (2007a, 2008) explicitly considered the possibility that q is less than 011. The interpretation of the parameters k and q will be discussed in Section 3.
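
A short sketch (ours, with purely illustrative parameter values) computes the discounting factors $\mu_i$ of (9)-(11) and checks numerically that the reparametrization $k = rab$, $q = 1 - 1/(ra)$ leaves the generalized hyperbolic factor of (8) unchanged.

```python
# Minimal sketch: q-exponential (generalized hyperbolic) discounting factors,
# equations (9)-(11). All parameter values below are illustrative assumptions.

def psi(l, k, q):
    """psi(l) = (1 + k(1-q) l)^(-1/(1-q)), i.e. exp_q(-k l) in Cajueiro's notation."""
    return (1.0 + k * (1.0 - q) * l) ** (-1.0 / (1.0 - q))

def discount_factors(delays, k, q):
    """mu_i = psi(l_i) - psi(l_{i+1}), with l_{n+1} -> +infinity (so psi -> 0)."""
    psis = [psi(l, k, q) for l in delays] + [0.0]
    return [psis[i] - psis[i + 1] for i in range(len(delays))]

# Reparametrization check: with r = 0.5, a = 2, b = 1.5 we get k = r*a*b = 1.5
# and q = 1 - 1/(r*a) = 0, so psi(l) should equal (1 + b*l)^(-r*a) as in (8).
r, a, b = 0.5, 2.0, 1.5
k, q = r * a * b, 1.0 - 1.0 / (r * a)
print(abs(psi(3.0, k, q) - (1.0 + b * 3.0) ** (-r * a)) < 1e-12)  # True

# Discounting factors for the delays l_i = 0, 1, 4 of the earlier example,
# with k = 1 and q = 0: they come out equal to the original probabilities.
print(discount_factors([0.0, 1.0, 4.0], k=1.0, q=0.0))  # [0.5, 0.3, 0.2]
```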

2.4. From a Discounted Delayed Value to the Utility of a Probable Reward

The recourse to an explicit representation of the desirability of the reward is lacking in the works on probability discounting cited above. The emphasis placed on the transposition of probabilities into delays, as well as the binary structure of lotteries, justified a minimal treatment allowing it to be ignored. It was sufficient to work with a simple function $V(x,t)$ whose two arguments, the outcome x and the delay t before winning, each had only two possible values: $x = 0$ in case of failure or $x = \bar{x}$ in case of success; $t = 0$ for an immediate (because certain) gain, $t = l$ for a delayed reward (because its probability p is such that $l = (1-p)/p$). Assuming that $V(0,t) = 0$, the immediate or certain value of the reward $\bar{x}$ writes $V(\bar{x}, 0)$, and its delayed value, or value with probability p, writes $V(\bar{x}, l)$. This is enough to get

$$V(x, l) = \mu V(x, 0) \qquad (12)$$

which is all we need to focus on the specification and the discussion of the discounting factor μ. But such simplicity must be abandoned when moving on to the more general case of n-issues lotteries, which requires comparisons between the desirabilities of the various possible outcomes when they are immediate or certain. This desirability can be represented by an increasing utility function u of x, calibrated so that $u(0) = 0$, and defined up to a positive linear transformation. So that the utility of a lottery, $U(L)$, can be given, like for utility in time, as the sum of the undiscounted utilities of each possible outcome $u(x_i)$ weighted by its discounting factor $\mu_i$ defined as in (10):

$$U(L) = \sum_{i=1}^{n} \mu_i\, u(x_i) \qquad (13)$$

Now, because of the probability discounting perspective, $\mu_i$ in (13) can be understood either as a discounting factor, whose expression is given by $\mu_i = \psi(l_i) - \psi(l_{i+1})$ in (10), or as a probability decision weight. Relying on (3) and (10) we get an alternative expression of $\mu_i$, as the decision weight for obtaining the outcome $x_i$: $\mu_i$ is the difference between the transformed probability $G(x_i)$ of winning at least $x_i$ and the transformed probability of winning strictly more than $x_i$, $G(x_{i+1})$:12

$$\mu_i = \varphi(G(x_i)) - \varphi(G(x_{i+1})) \qquad (14)$$

where $\varphi(G(x_i)) = \left(1 + k(1-q)\, \frac{1 - G(x_i)}{G(x_i)}\right)^{-\frac{1}{1-q}}$ $(i = 1, \ldots, n)$.

It can be shown that the probability weighting function φ is an increasing transformation of $[0,1]$ into itself with the following properties:

$$\varphi(0) = 0, \qquad \varphi(1) = 1, \qquad \varphi' > 0 \qquad (15)$$

As a result, what were first perceived as discounting factors, the $\mu_i$’s, now appear as transposed probabilities whose sum is obviously equal to 1.

The combination of a utility function u with decision weights $\mu_i$ (such that $\sum_{i=1}^{n} \mu_i = 1$) determined by a probability weighting function φ, given by (13) and (14), amounts to what is currently known as “rank-dependent utility”13:

$$U(L) = \sum_{i=1}^{n} \mu_i\, u(x_i) = \sum_{i=1}^{n} \left(\varphi(G(x_i)) - \varphi(G(x_{i+1}))\right) u(x_i)$$
$$= \sum_{i=1}^{n} \left(\left(1 + k(1-q)\frac{1 - G(x_i)}{G(x_i)}\right)^{-\frac{1}{1-q}} - \left(1 + k(1-q)\frac{1 - G(x_{i+1})}{G(x_{i+1})}\right)^{-\frac{1}{1-q}}\right) u(x_i)$$
$$= \sum_{i=1}^{n} \left(\exp_q\left(-k\, \frac{1 - G(x_i)}{G(x_i)}\right) - \exp_q\left(-k\, \frac{1 - G(x_{i+1})}{G(x_{i+1})}\right)\right) u(x_i) \qquad (16)$$

It is well known that when rank-dependent utility prevails, the acknowledged drawbacks of a direct transformation of each single probability, like that of the probability of success in a 2-issues lottery (the sum of the decision weights might be different from one, and violations of first-degree stochastic dominance might occur), no longer hold (see, for instance, (Abdellaoui, 2009)). The probability weighting function φ possesses the expected properties (see (15)) of decision weights in rank-dependent utility, but its shape is more specific, since it is generated by the whole process of probability discounting14. Some consequences of the properties of the probability weighting function are discussed in the following section.
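
To make the construction concrete, here is a brief sketch (ours) of the rank-dependent valuation (16) for a hypothetical lottery; the utility function $u(x) = \sqrt{x}$ and the parameter values are illustrative assumptions, not taken from the literature reviewed here.

```python
import math

# Minimal sketch: rank-dependent utility of a lottery, equation (16).
# Outcomes must be ranked in increasing order; u, k and q are illustrative.

def phi(G, k, q):
    """Probability weighting function: phi(G) = (1 + k(1-q)(1-G)/G)^(-1/(1-q))."""
    if G == 0.0:
        return 0.0
    return (1.0 + k * (1.0 - q) * (1.0 - G) / G) ** (-1.0 / (1.0 - q))

def rdu(outcomes, probs, u, k, q):
    """U(L) = sum_i (phi(G(x_i)) - phi(G(x_{i+1}))) u(x_i), with G(x_{n+1}) = 0."""
    n = len(outcomes)
    G = [sum(probs[i:]) for i in range(n)] + [0.0]
    weights = [phi(G[i], k, q) - phi(G[i + 1], k, q) for i in range(n)]
    assert abs(sum(weights) - 1.0) < 1e-12   # decision weights sum to one
    return sum(w * u(x) for w, x in zip(weights, outcomes))

outcomes, probs = [0.0, 4.0, 9.0], [0.5, 0.3, 0.2]
print(round(rdu(outcomes, probs, math.sqrt, k=1.0, q=0.0), 4))  # 1.2 (expected utility is recovered)
print(round(rdu(outcomes, probs, math.sqrt, k=2.0, q=0.5), 4))  # 0.54 (convex phi, pessimistic weighting)
```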

3. Attitudes Conveyed by Probability Discounting

The properties of the probability weighting function in (14) are controlled by the two parameters k and q. The latter were introduced as a recombination of the parameters a and b used in the transformation of physical into perceived delay (4) and of the discount rate in perceived time, r (7), and their main virtue seems to have been to make it possible to express time or probability discounting through q-discounting. However, they also support the discussion of the underlying attitudes toward rationality, probability and risk.

3.1. Time-Rationality and Risk-Rationality in Probability Discounting

A common way to approach time-rationality and risk-rationality is to agree that they rest, respectively, on the fulfillment of axiomatic properties regarding the underlying preferences: stationarity for decision in time, and independence for decision in risk15. Stationarity and independence enter crucially into the axiomatic bases which make preferences in time representable by discounted (exponential) utility and preferences over random variables (lotteries) representable by expected utility, respectively. Both are, in their respective domains, a condition for avoiding preference reversal: stationarity guarantees time-consistency, i.e. the constancy of preferences between two gains at different dates, whether close or remote, provided they are separated by the same interval of time; independence preserves our order of preference between two lotteries, whatever the proportions in which they are combined with a third lottery.

Since the decision weights $\mu_i$ can be viewed equivalently as discounting weights ($\mu_i = \psi(l_i) - \psi(l_{i+1})$; see (10) or the reformulation in n. 13 supra) or as probability weights ($\mu_i = \varphi(G(x_i)) - \varphi(G(x_{i+1}))$; see (14)), a peculiarity of probability discounting is that the issue of rationality is raised simultaneously in relation to time and in relation to risk.

Now, on the one hand, time-rationality is obtained only when q tends to 1, which yields exponential discounting (and therefore stationarity and time-consistency), because the ratio between the first derivative of ψ in (10) and ψ itself is then a constant equal to −k, so that $\mu_i = \exp(-k l_i) - \exp(-k l_{i+1})$. On the other hand, risk-rationality corresponds to a special case of simple hyperbolic discounting, like in Herrnstein (1981) or Mazur (1987), obtained with q = 0 in ψ (10). In this case, $\mu_i = (1 + k l_i)^{-1} - (1 + k l_{i+1})^{-1}$: it occurs under the additional condition that k = 1, which makes φ in (14) such that $\varphi(G(x_i)) = G(x_i)$, whatever $x_i$. As a result, when q = 0 and k = 1, $\mu_i = p_i$, so that probability discounting has generated expected utility (and hence independence).

This sheds light on the relationship between time-rationality and risk-rationality generated by the transposition of a decision in risk into a decision in time. When moving from the first to the second, we lose time-rationality if the parameters are such that they preserve risk-rationality. Conversely, if we reach time-rationality, we have to give up risk-rationality. Such a conclusion might seem disturbing, but it should not be overestimated. The simple fact that $\mu_i$ can be understood at the same time as a discount factor and as a probability weight, referring respectively to a specific case of generalized hyperbolic discounting (10) and of a probability weighting function in rank-dependent utility (14), means that probability discounting should satisfy the obviously weaker criteria of rationality which characterize each of these two approaches: the Thomsen condition of separability (Fishburn & Rubinstein, 1982: pp. 686-687) for time-rationality16, and comonotonic tradeoff consistency (Wakker, 1994: p. 13) for risk-rationality17. Taking seriously the idea on which probability discounting is based, namely that deciding in risk might be viewed as a way of deciding in time, entails that something has to be abandoned in our requirements in terms of rationality: either one of the two types of rationality (in time or in risk), when the parameters k and q are given appropriate values or, in the general case, the strong versions of risk-rationality and time-rationality, in favour of the weaker versions consistent with rank-dependent utility and generalized hyperbolic discounting.
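
A quick numerical sketch (ours, with arbitrary parameter values) illustrates the two limiting cases of this section: as q tends to 1, ψ approaches exponential discounting, while q = 0 together with k = 1 turns the decision weights back into the original probabilities.

```python
import math

# Minimal sketch of the two limiting cases of Section 3.1 (illustrative values).

def psi(l, k, q):
    return (1.0 + k * (1.0 - q) * l) ** (-1.0 / (1.0 - q))

def phi(G, k, q):
    return psi((1.0 - G) / G, k, q) if G > 0.0 else 0.0

# 1) q close to 1: psi(l) approaches exponential discounting exp(-k l).
k, l = 0.8, 2.5
print(psi(l, k, q=0.999999), math.exp(-k * l))  # both close to 0.1353

# 2) q = 0 and k = 1: the decision weights reduce to the probabilities.
probs = [0.5, 0.3, 0.2]
G = [sum(probs[i:]) for i in range(len(probs))] + [0.0]
weights = [phi(G[i], 1.0, 0.0) - phi(G[i + 1], 1.0, 0.0) for i in range(len(probs))]
print(weights)  # [0.5, 0.3, 0.2]
```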

3.2. The Probability Discounting Determination of Attitudes Toward Probabilities and Risk

3.2.1. The Shape of the Probability Weighting Function

Let us start with the properties of the probability weighting function φ defined as in (14). We know that this function is increasing, since its first derivative is positive on $[0,1]$:

$$\varphi'(p) = \frac{k}{p^2}\left(1 + k(1-q)\frac{1-p}{p}\right)^{-\frac{2-q}{1-q}} > 0 \qquad (17)$$

Its second derivative is

$$\varphi''(p) = -\frac{k}{p^4}\left(1 + k(1-q)\frac{1-p}{p}\right)^{-\frac{2-q}{1-q}}\left(2p - k(2-q)\left(1 + k(1-q)\frac{1-p}{p}\right)^{-1}\right) \qquad (18)$$

The part played by the parameters k and q is crucial. According to their values, φ'' is either positive, or negative, or of alternating sign, so that φ is either convex, or concave, or inverse S-shaped (first concave, then convex), or S-shaped (first convex, then concave).

φ'' can be rewritten as $\varphi''(p) = A(p) \times B(p)$,

where $A(p) = -\frac{k}{p^4}\left(1 + k(1-q)\frac{1-p}{p}\right)^{-\frac{2-q}{1-q}}$ and $B(p) = 2p - k(2-q)\left(1 + k(1-q)\frac{1-p}{p}\right)^{-1}$.

$A(p)$ is always negative. Hence, the sign of $\varphi''(p)$ is the opposite of the sign of $B(p)$, which can also be written:

$$B(p) = \frac{2p(1 - k(1-q)) - kq}{p + k(1-q)(1-p)} \qquad (19)$$

Let us analyse the sign of B( p ) with respect to the values of q.

a) If q = 0, then replacing q by 0 in B(p) leads to $B(p) = \frac{2p(1-k)}{p + k(1-p)}$.

Since the denominator is always positive, the sign of B(p) depends on the sign of its numerator. As a consequence, B(p) is nonnegative if and only if $k \leq 1$. Hence, when q = 0, φ is concave if and only if $k \leq 1$, and φ is convex otherwise.

b) If $q \in\, ]0,1[$, then according to equation (19) two cases can occur: the case where $k < \frac{1}{1-q}$ and the case where $k \geq \frac{1}{1-q}$.

• If $k \geq \frac{1}{1-q}$, then B(p) is negative whatever $p \in [0,1]$. This leads to $\varphi''(p) \geq 0$ (φ(p) convex) on the interval $[0,1]$.

• If $k < \frac{1}{1-q}$, then B(p) is negative (see the numerator of B(p)) on the interval $[0, p_0]$ and positive on the interval $[p_0, +\infty[$, where $p_0 = \frac{kq}{2(1 - k(1-q))}$. However, p (a probability) cannot go beyond 1. This means that $p_0$ is either less than 1 or higher than 1. $p_0$ is less than 1 if and only if $k < \frac{1}{1-\frac{q}{2}}$. Hence:

- when $q \in\, ]0,1[$, if $k < \frac{1}{1-q}$ and $k < \frac{1}{1-\frac{q}{2}}$, then B(p) is negative on the interval $[0, p_0]$ and positive on the interval $[p_0, 1]$; that is, $\varphi''(p) \geq 0$ (φ(p) convex) on the interval $[0, p_0]$ and $\varphi''(p) \leq 0$ (φ(p) concave) on the interval $[p_0, 1]$;

- when $q \in\, ]0,1[$, if $k < \frac{1}{1-q}$ and $k \geq \frac{1}{1-\frac{q}{2}}$, then B(p) is negative on the interval $[0,1]$; that is, $\varphi''(p) \geq 0$ (φ(p) convex) on the interval $[0,1]$.

c) If $q < 0$, then according to equation (19) two cases can occur: the case where $k \leq \frac{1}{1-q}$ and the case where $k > \frac{1}{1-q}$.

• If $k \leq \frac{1}{1-q}$, then B(p) is positive whatever $p \in [0,1]$. This leads to $\varphi''(p) \leq 0$ (φ(p) concave) on the interval $[0,1]$.

• If $k > \frac{1}{1-q}$, then B(p) is positive (see the numerator of B(p)) on the interval $[0, p_0]$ and negative on the interval $[p_0, +\infty[$, where $p_0 = \frac{kq}{2(1 - k(1-q))}$. However, p (a probability) cannot go beyond 1. This means that $p_0$ is either less than 1 or higher than 1. $p_0$ is less than 1 if and only if $k > \frac{1}{1-\frac{q}{2}}$. Hence:

- when $q < 0$, if $k > \frac{1}{1-q}$ and $k > \frac{1}{1-\frac{q}{2}}$, then B(p) is positive on the interval $[0, p_0]$ and negative on the interval $[p_0, 1]$; that is, $\varphi''(p) \leq 0$ (φ(p) concave) on the interval $[0, p_0]$ and $\varphi''(p) \geq 0$ (φ(p) convex) on the interval $[p_0, 1]$;

- when $q < 0$, if $k > \frac{1}{1-q}$ and $k \leq \frac{1}{1-\frac{q}{2}}$, then B(p) is positive on the interval $[0,1]$; that is, $\varphi''(p) \leq 0$ (φ(p) concave) on the interval $[0,1]$.

What are the implications of the above results on the shape and properties of the graph of φ?

Since $\varphi(0) = 0$ and $\varphi(1) = 1$, it is obvious that $\varphi(p) \leq p$ for all p (respectively, $\varphi(p) \geq p$ for all p) when φ is (fully) convex (resp., concave) on the interval $[0,1]$.

However, when φ is first convex then concave (S-shaped; the case with $0 < q < 1$ and $k < \frac{1}{1-\frac{q}{2}}$) or when it is first concave then convex (inverse S-shaped; the case with $q < 0$ and $k > \frac{1}{1-\frac{q}{2}}$), it is not straightforward to conclude whether its graph crosses the first bisector, or whether it does not cross it because it lies entirely above or under this bisector. The difference between the two situations (φ crossing the bisector, or not crossing it) amounts to the existence (in the first situation) or to the non-existence (in the second) of $p^* \in\, ]0,1[$ such that

$$\varphi(p^*) = p^* \qquad (20)$$

Recall (see (14)) that $\varphi(p) = \left(1 + k(1-q)\frac{1-p}{p}\right)^{-\frac{1}{1-q}}$. Hence equation (20) writes:

$$\left(1 + k(1-q)\frac{1-p}{p}\right)^{-\frac{1}{1-q}} = p \qquad (21)$$

Since φ(p) is a positive and monotonic function on its domain of definition, (21) writes $1 + k(1-q)\frac{1-p}{p} = p^{q-1}$. That is,

$$\frac{(1 - k(1-q))p + k(1-q) - p^q}{p} = 0$$

As a consequence, we want to know whether equation (22) below admits a root belonging to the interval $]0,1[$, in which case φ crosses the first bisector (otherwise, it does not):

$$-p^q + (1 - k(1-q))p + k(1-q) = 0 \qquad (22)$$

Denote $\eta(p) = -p^q + (1 - k(1-q))p + k(1-q)$.

• Let us take the case of φ S-shaped, with $0 < q < 1$ and $k < \frac{1}{1-\frac{q}{2}}$. We can see that $\eta(0) = k(1-q) > 0$, $\eta(1) = 0$, and $\eta'(p) = -q\, p^{q-1} + (1 - k(1-q))$.

So that $\eta'(p) \geq 0$ if and only if $p \geq \left(\frac{q}{1 - k(1-q)}\right)^{\frac{1}{1-q}}$.

- If $k < 1$, then $\left(\frac{q}{1 - k(1-q)}\right)^{\frac{1}{1-q}} < 1$. As a consequence, η decreases on the interval $\left[0, \left(\frac{q}{1 - k(1-q)}\right)^{\frac{1}{1-q}}\right]$ and increases on the interval $\left[\left(\frac{q}{1 - k(1-q)}\right)^{\frac{1}{1-q}}, 1\right]$. However, since $\eta(0) > 0$ and $\eta(1) = 0$, it is necessarily the case that there exists $p^* < \left(\frac{q}{1 - k(1-q)}\right)^{\frac{1}{1-q}}$ such that $\eta(p^*) = 0$. This proves that when $0 < q < 1$, $k < \frac{1}{1-\frac{q}{2}}$ and $k < 1$, there exists $p^*$ such that $\varphi(p^*) = p^*$. This means that φ is S-shaped and crosses the bisector at $p^* < \left(\frac{q}{1 - k(1-q)}\right)^{\frac{1}{1-q}}$.

- If $k \geq 1$, then $\left(\frac{q}{1 - k(1-q)}\right)^{\frac{1}{1-q}} \geq 1$, and η decreases on the interval $[0,1]$. This means that when $0 < q < 1$, $k < \frac{1}{1-\frac{q}{2}}$ and $k \geq 1$, φ is S-shaped and fully under the bisector.

• Likewise, if we take the case of φ inverse S-shaped, with $q < 0$ and $k > \frac{1}{1-\frac{q}{2}}$:

- if $k > 1$, there exists $p^*$ such that $\varphi(p^*) = p^*$, i.e. φ crosses the bisector;

- if $k \leq 1$, φ is fully above the bisector.
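
The shape analysis above can be checked numerically. The sketch below (ours) computes the inflexion point $p_0$ of (19) and searches for the crossing point $p^*$ of (22) by bisection; the parameter pairs are those used for the illustrative figures of Section 3.2.3.

```python
# Minimal sketch: locating the inflexion point p0 and the bisector-crossing
# point p* of the probability weighting function for given (k, q).

def p0(k, q):
    """Zero of the numerator of B(p) in (19): p0 = kq / (2(1 - k(1-q)))."""
    return k * q / (2.0 * (1.0 - k * (1.0 - q)))

def eta(p, k, q):
    """eta(p) = -p^q + (1 - k(1-q)) p + k(1-q); phi crosses the bisector where eta = 0."""
    return -(p ** q) + (1.0 - k * (1.0 - q)) * p + k * (1.0 - q)

def p_star(k, q, lo=1e-9, hi=1.0 - 1e-9, tol=1e-12):
    """Bisection search for an interior root of eta on ]0,1[ (None if no sign change)."""
    if eta(lo, k, q) * eta(hi, k, q) > 0.0:
        return None
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if eta(lo, k, q) * eta(mid, k, q) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

print(round(p0(0.6, 0.8), 2), round(p_star(0.6, 0.8), 2))    # 0.27 0.32 (S-shaped, crossing)
print(round(p0(3.0, -2.5), 2), round(p_star(3.0, -2.5), 2))  # 0.39 0.5  (inverse S, crossing)
```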

We can therefore fill in, in the following propositions, the first line after the header of each table, from which the rest of the table proceeds (see the comments below, in Section 3.2.3).

3.2.2. Propositions

Proposition 1 The table below indicates the link between attitude toward probabilities, attitude toward outcomes and attitude toward risk when the discounting parameter q lies in the interval $]0,1[$ (Table 1).

Table 1. $0 < q < 1$: attitudes toward probabilities, outcomes and risk.

| k | from 0 to 1 | from 1 to $\frac{1}{1-\frac{q}{2}}$ | from $\frac{1}{1-\frac{q}{2}}$ to +∞ |
|---|---|---|---|
| φ | S-shaped, crossing bisector (see Figure 1) | S-shaped, under bisector (see Figure 2) | Convex (see Figure 3) |
| Attitude toward Probability (Strong) | Local Strong Pessimism and local Strong Optimism (unlikelihood insensitivity) | Local Strong Pessimism and local Strong Optimism (unlikelihood insensitivity) | Strong Pessimism |
| Attitude toward Probability (Weak) | Local Weak Pessimism and local Weak Optimism | Weak Pessimism | Weak Pessimism |
| u concave (decreasing sensitivity) | | | |
| Attitude toward Risk (Strong) | Neither Strong Risk Averse, nor Strong Risk Seeker | Neither Strong Risk Averse, nor Strong Risk Seeker | Strong Risk Averse |
| Attitude toward Risk (Monotone) | Not Monotone Risk Averse | Monotone Risk Averse | Monotone Risk Averse |
| Attitude toward Risk (Weak) | Not Weak Risk Averse | Weak Risk Averse | Weak Risk Averse |
| u convex (increasing sensitivity) | | | |
| Attitude toward Risk (Strong) | Neither Strong Risk Averse, nor Strong Risk Seeker | Neither Strong Risk Averse, nor Strong Risk Seeker | Neither Strong Risk Averse, nor Strong Risk Seeker |
| Attitude toward Risk (Monotone) | Not Monotone Risk Averse | Monotone Risk Averse when $G_u \leq k$; Not Monotone Risk Averse when $G_u > k$ | Monotone Risk Averse when $G_u \leq k$; Not Monotone Risk Averse when $G_u > k$ |
| Attitude toward Risk (Weak) | Not Weak Risk Averse | Weak Risk Averse if $G_u \leq k$, or if there exists $g \geq 1$ (see remarks) | Weak Risk Averse if $G_u \leq k$, or if there exists $g \geq 1$ (see remarks) |

Remarks: • $G_u = \sup_{y<x} \frac{u'(x)}{u'(y)}$; • $g \geq 1$ is such that $u'(x) \leq g\, \frac{u(x)-u(y)}{x-y}$ for $x > y$, and $\varphi(p) \leq p^g$.

Proposition 2 The table below indicates the link between attitude toward probabilities, attitude toward outcomes and attitude toward risk when the discounting parameter $q = 0$ (Table 2).

Table 2. $q = 0$: attitudes toward probabilities, outcomes and risk.

| k | from 0 to $1 = \frac{1}{1-\frac{q}{2}}$ | from 1 to +∞ |
|---|---|---|
| φ | Concave (see Figure 4) | Convex (see Figure 5) |
| Attitude toward Probability (Strong) | Strong Optimism | Strong Pessimism |
| Attitude toward Probability (Weak) | Weak Optimism | Weak Pessimism |
| u concave (decreasing sensitivity) | | |
| Attitude toward Risk (Strong) | Neither Strong Risk Averse, nor Strong Risk Seeker | Strong Risk Averse |
| Attitude toward Risk (Monotone) | Monotone Risk Seeker if $T_u \leq 1/k$; Not Monotone Risk Seeker if $T_u > 1/k$ | Monotone Risk Averse |
| Attitude toward Risk (Weak) | Weak Risk Seeker if $T_u \leq 1/k$, or if there exists $h \geq 1$ (see remarks) | Weak Risk Averse |
| u convex (increasing sensitivity) | | |
| Attitude toward Risk (Strong) | Strong Risk Seeker | Neither Strong Risk Averse, nor Strong Risk Seeker |
| Attitude toward Risk (Monotone) | Monotone Risk Seeker | Monotone Risk Averse when $G_u \leq k$; Not Monotone Risk Averse when $G_u > k$ |
| Attitude toward Risk (Weak) | Weak Risk Seeker | Weak Risk Averse if $G_u \leq k$, or if there exists $g \geq 1$ (see remarks) |

Remarks: • $T_u = \sup_{y<x} \frac{u'(y)}{u'(x)}$; • $h \geq 1$ is such that $u'(y) \geq h\, \frac{u(x)-u(y)}{x-y}$ for $x > y$, and $\varphi(p) \geq 1-(1-p)^h$; • $G_u = \sup_{y<x} \frac{u'(x)}{u'(y)}$; • $g \geq 1$ is such that $u'(x) \leq g\, \frac{u(x)-u(y)}{x-y}$ for $x > y$, and $\varphi(p) \leq p^g$.

Proposition 3 The table below indicates the link between attitude toward probabilities, attitude toward outcomes and attitude toward risk when the discounting parameter q is strictly negative (Table 3).

Table 3. $q < 0$: attitudes toward probabilities, outcomes and risk.

| k | from 0 to $\frac{1}{1-\frac{q}{2}}$ | from $\frac{1}{1-\frac{q}{2}}$ to 1 | from 1 to +∞ |
|---|---|---|---|
| φ | Concave (see Figure 6) | Inverse S-shaped, above bisector (see Figure 7) | Inverse S-shaped, crossing bisector (see Figure 8) |
| Attitude toward Probability (Strong) | Strong Optimism | Local Strong Optimism and local Strong Pessimism (likelihood insensitivity) | Local Strong Optimism and local Strong Pessimism (likelihood insensitivity) |
| Attitude toward Probability (Weak) | Weak Optimism | Weak Optimism | Local Weak Optimism and local Weak Pessimism |
| u concave (decreasing sensitivity) | | | |
| Attitude toward Risk (Strong) | Neither Strong Risk Averse, nor Strong Risk Seeker | Neither Strong Risk Averse, nor Strong Risk Seeker | Neither Strong Risk Averse, nor Strong Risk Seeker |
| Attitude toward Risk (Monotone) | Monotone Risk Seeker if $T_u \leq 1/k$; Not Monotone Risk Seeker if $T_u > 1/k$ | Monotone Risk Seeker if $T_u \leq 1/k$; Not Monotone Risk Seeker if $T_u > 1/k$ | Not Monotone Risk Seeker |
| Attitude toward Risk (Weak) | Weak Risk Seeker if $T_u \leq 1/k$, or if there exists $h \geq 1$ (see remarks) | Weak Risk Seeker if $T_u \leq 1/k$, or if there exists $h \geq 1$ (see remarks) | Not Weak Risk Seeker |
| u convex (increasing sensitivity) | | | |
| Attitude toward Risk (Strong) | Strong Risk Seeker | Neither Strong Risk Averse, nor Strong Risk Seeker | Neither Strong Risk Averse, nor Strong Risk Seeker |
| Attitude toward Risk (Monotone) | Monotone Risk Seeker | Monotone Risk Seeker | Not Monotone Risk Seeker |
| Attitude toward Risk (Weak) | Weak Risk Seeker | Weak Risk Seeker | Not Weak Risk Seeker |

Remarks: • $T_u = \sup_{y<x} \frac{u'(y)}{u'(x)}$; • $h \geq 1$ is such that $u'(y) \geq h\, \frac{u(x)-u(y)}{x-y}$ for $x > y$, and $\varphi(p) \geq 1-(1-p)^h$.

3.2.3. Comments

a) On the attitudes toward probabilities

Drawing on (18), the properties of the probability weighting function in relation to the parameters q and k are listed in the first lines of the tables in Propositions 1, 2 and 3.

The block of lines which immediately follows the header in each table deals with the attitude toward probabilities embodied in the probability weighting function. Generally speaking, it amounts to pessimism or optimism, which can be approached from two different points of view, each linked to a way of considering the generic term in the expression of the rank-dependent utility of a lottery: either a utility multiplied by a difference between transformed probabilities, or a transformed probability multiplied by a difference between utilities.

Figure 1. Probability weighting function: φ S-shaped, crossing the bisector. $q = 0.8$, $k = 0.6$, $p_0 = 0.27$, $p^* = 0.32$.

Figure 2. Probability weighting function: φ S-shaped, under the bisector. $q = 0.8$, $k = 1.05$, $p_0 = 0.53$.

Figure 3. Probability weighting function: φ convex. $q = 0.8$, $k = 2$, $p_0 = 1.33$.

Figure 4. Probability weighting function: φ concave. $q = 0$, $k = 0.3$.

Figure 5. Probability weighting function: φ convex. $q = 0$, $k = 3$.

Figure 6. Probability weighting function: φ concave. $q = -0.5$, $k = 0.4$, $p_0 = -0.25$.

Figure 7. Probability weighting function: φ inverse S-shaped, above the bisector. $q = -2.5$, $k = 0.9$, $p_0 = 0.52$.

Figure 8. Probability weighting function: φ inverse S-shaped, crossing the bisector. $q = -2.5$, $k = 3$, $p_0 = 0.39$, $p^* = 0.5$.

The first point of view (Yaari, 1987; Chateauneuf & Cohen, 1994) contrasts strong pessimism with strong optimism (which meets Wakker (1994)’s distinction between “probabilistic risk aversion” and “probabilistic risk seeking”), associated respectively with the convexity and the concavity of φ. The weight of a typical element $u(x_i)$ in (16) is given by a transformed probability $\mu_i = \varphi(G(x_i)) - \varphi(G(x_{i+1}))$. In particular, a finite variation of G in the neighborhood of 1 or of 0, corresponding to the lowest or to the highest outcomes, indicates its decisional weight $\mu_i$ at the endpoints of the domain of definition of φ by the corresponding variation in ordinate. The convexity (Figure 3 or Figure 5) (resp., the concavity (Figure 4 or Figure 6)) of φ therefore amounts to strong pessimism (resp., strong optimism), insofar as the probability of the lowest outcomes is overweighted (resp., underweighted), whereas the probability of the highest outcomes is underweighted (resp., overweighted). Strong pessimism (resp., strong optimism) can be interpreted as increasing (resp., decreasing) sensitivity to probability changes when moving from the low probabilities of getting at least the higher outcomes to the high probabilities of getting at least the lower outcomes. This makes the interpretation of the intermediate situations of an inverse S-shaped (first concave, then convex; see Figure 7 and Figure 8) or S-shaped (first convex, then concave; see Figure 1 and Figure 2) probability weighting function easier (see the seminal paper of Gonzalez & Wu (1999)). In the case of an inverse S-shaped function (Figure 7 and Figure 8), the probabilities of the lowest and of the highest outcomes are overweighted relative to those of the medium outcomes (in the neighbourhood of the inflexion point $p_0$), which are underweighted. This boils down to strong optimism toward medium to high outcomes (the concave part of φ), and strong pessimism toward low to medium outcomes (its convex part). Commonly used in cumulative prospect theory (see (Tversky & Kahneman, 1992)), the inverse S-shaped probability weighting function is interpreted in terms of cognitive ability after Wakker (2010: pp. 203 sqq), who called it “likelihood insensitivity”, in the sense that people fail to distinguish sufficiently between variations of probabilities for medium, usual outcomes, but are overly sensitive when these changes concern best-ranked and worst-ranked unusual outcomes. Obviously, a symmetrical interpretation can be given to the less common S-shaped probability weighting function (Figure 1 and Figure 2), which can be viewed as an expression of what might be called “unlikelihood insensitivity”.

The second point of view makes a distinction between what is usually referred to as weak pessimism and weak optimism (Cohen, 1995). Unlike strong pessimism and strong optimism, weak pessimism and weak optimism are implicitly based on the interpretation of $\varphi(G(x_i))$ as the transformed probability which we associate with a minimum additional utility $u(x_i) - u(x_{i-1})$ (see supra n. 13). In an expected utility framework, we know that $\varphi(G(x_i)) = G(x_i)$ for each i. So that pessimism can be seen as doing worse than expected utility, and optimism as doing better than it. Weak pessimism (resp., weak optimism) therefore occurs when $\varphi(G(x_i)) \leq G(x_i)$ (resp., $\varphi(G(x_i)) \geq G(x_i)$), the probabilities of additional utilities being underweighted (resp., overweighted). It is obvious that strong pessimism implies weak pessimism (and strong optimism implies weak optimism), whereas the reverse is not true. The previous issue of the convexity or concavity of the probability weighting function is here replaced by the question of knowing whether φ lies below the bisector (weak pessimism) (see Figure 2, Figure 3, Figure 5) or above it (weak optimism) (see Figure 4, Figure 6, Figure 7). Consequently, S-shaped or inverse S-shaped probability weighting functions are now significant only when they cross the bisector. When φ is inverse S-shaped and crosses the bisector (Figure 8), weak optimism prevails locally for relatively high outcomes (with probabilities of winning at least this outcome belonging to the interval between 0 and the abscissa $p^*$ of the point of intersection of φ and the bisector), because the corresponding part of φ lies above the bisector; and weak pessimism prevails locally for relatively low outcomes (with probabilities of winning at least this outcome belonging to the interval between $p^*$ and 1), because the corresponding part of φ lies below the bisector. Of course, an S-shaped φ crossing the bisector (Figure 1) is interpreted in a symmetrical way.

b) On the attitudes toward risk

Following Rothschild & Stiglitz (1970)’s seminal paper, we are used to distinguishing weak and strong risk-aversion (resp., risk-seeking; risk-neutrality amounting to the conjunction of risk-aversion and risk-seeking). They provide answers to different questions. A decision-maker is said to be weakly risk-averse (resp., weakly risk-seeking) if he or she prefers the expected value $E_L(x)$ of a lottery L to this lottery (resp., the lottery L to its expected value $E_L(x)$). By contrast, a decision-maker is strongly risk-averse (resp., strongly risk-seeking) when, given a pair of lotteries L1 and L2 with equal means such that L1 stochastically dominates L2 at the second degree18, L1 (resp., L2) is preferred to L2 (resp., L1). A weak risk attitude is the result of a comparison between a risky distribution and a certain outcome, whereas a strong risk attitude denotes a comparison between two risky distributions. An intermediate concept was introduced by Quiggin (1992) in relation to what was to become known as rank-dependent utility: monotone risk-aversion (resp., monotone risk-seeking) denotes a situation where a decision-maker prefers L1 to a lottery L2 (resp., L2 to L1) when L2 is a monotone increase in risk19 of L1. Strong, monotone and weak risk attitudes are equivalent in standard expected utility, when the decision weights are equal to the corresponding probabilities, since they all depend on the concavity (risk-aversion) or the convexity (risk-seeking) of the utility function, which incorporates the whole relevant information on the attitude toward risk. Such is the case when q = 0 and k = 1, so that the decision weights $\mu_i$ are equal to the corresponding probabilities $p_i$. Because his or her behaviour boils down to expected utility when k = 1, a simple hyperbolic probability discounter (q = 0) who is weakly risk-averse (weakly risk-seeking) is also strongly risk-averse (strongly risk-seeking) and monotonely risk-averse (monotonely risk-seeking).

But in all other cases, when the utility of a lottery is given by (16), the properties of the utility function u alone are not sufficient to determine the attitude toward risk: it now depends on the properties of both the utility function u and the probability weighting function φ. Let us therefore turn to the properties of the utility function. Assume, for the sake of simplicity, that it is twice differentiable, and either concave or convex. The concavity and the convexity of u are commonly interpreted as, respectively, a decreasing sensitivity and an increasing sensitivity to outcomes. In a probability discounting framework like the one of (16), the risk attitude carried by the utility function can be either reinforced or thwarted by the attitude toward probabilities carried by the probability weighting function. We rely explicitly on some results concerning rank-dependent utility, adapted to q-discounting, in order to account for the effects on risk attitude of the interaction between the sensitivity to outcomes (u) and the attitude toward probability (φ).

The first result is from Quiggin (1992) and Cohen (1995). It shows that strong risk aversion implies monotone risk aversion, which implies weak risk aversion; in the same way, strong risk seeking implies monotone risk seeking, which implies weak risk seeking. The second result, from Hong, Karni, & Safra (1987), states on the one hand that the combination of decreasing sensitivity and strong pessimism is equivalent to strong risk aversion, and on the other hand that the combination of increasing sensitivity and strong optimism is equivalent to strong risk seeking. The third result is due to Chateauneuf & Cohen (1994). It highlights the link between weak attitude toward risk and weak attitude toward probability, in the sense that weak risk aversion implies weak pessimism and weak risk seeking implies weak optimism. The fourth result is also from Chateauneuf & Cohen (1994). It aims at finding the extent of weak pessimism (resp., weak optimism) which can overcome increasing sensitivity (resp., decreasing sensitivity), so that weak risk aversion (resp., weak risk seeking) is made possible. It states that, whatever x, y with $x > y$ and whatever $p \in [0,1]$, if there exists $g \geq 1$ such that $u'(x) \leq g\, \frac{u(x)-u(y)}{x-y}$ and $\varphi(p) \leq p^g$, then weak risk aversion is satisfied. Likewise, whatever x, y with $x > y$ and whatever $p \in [0,1]$, if there exists $h \geq 1$ such that $u'(y) \geq h\, \frac{u(x)-u(y)}{x-y}$ and $\varphi(p) \geq 1-(1-p)^h$, then weak risk seeking is satisfied. The fifth result is from Quiggin (1982, 1992); see also (Chateauneuf & Cohen, 1994). It says that when u is concave (resp., convex), monotone risk aversion, weak risk aversion and weak pessimism are equivalent (resp., monotone risk seeking, weak risk seeking and weak optimism are equivalent). Finally, the last result that we use is due to Chateauneuf, Cohen, & Meilijson (2005). It improves on Chateauneuf & Cohen (1994) by relying on indexes of pessimism or optimism on the one hand, and on indexes of non-concavity or non-convexity of the utility function on the other hand. This result states that monotone risk aversion is equivalent to $G_u \leq P_\varphi$, and monotone risk seeking is equivalent to $T_u \leq O_\varphi$, where $G_u = \sup_{y<x} \frac{u'(x)}{u'(y)}$ is an index of non-concavity ($G_u \geq 1$, equal to 1 when u is concave), $T_u = \sup_{y<x} \frac{u'(y)}{u'(x)}$ is an index of non-convexity ($T_u \geq 1$, equal to 1 when u is convex), $P_\varphi = \inf_{0<p<1} \frac{(1-\varphi(p))/\varphi(p)}{(1-p)/p}$ is an index of pessimism, and $O_\varphi = \inf_{0<p<1} \frac{\varphi(p)/(1-\varphi(p))}{p/(1-p)}$ an index of optimism. The result of Chateauneuf, Cohen, & Meilijson (2005) therefore expresses situations where pessimism (resp., optimism) compensates the convexity (resp., concavity) of the utility function. It can be shown that when q-discounting occurs, $P_\varphi = k$ and $O_\varphi = 1/k$, both obtained when p tends to 1. So that the result of Chateauneuf, Cohen, & Meilijson (2005) can be reformulated as:

$$\begin{cases} \text{Monotone Risk Aversion} \iff G_u \leq k \\ \text{Monotone Risk Seeking} \iff T_u \leq 1/k \end{cases}$$
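
The reformulated criterion can be checked numerically. The sketch below (ours) approximates $P_\varphi$ and $O_\varphi$ on a grid for two illustrative parameter pairs and finds values close to $k$ and $1/k$, with the infimum approached as p tends to 1.

```python
# Minimal sketch: grid approximation of the indexes of pessimism and optimism
# of the q-discounting probability weighting function (illustrative values).

def phi(p, k, q):
    return (1.0 + k * (1.0 - q) * (1.0 - p) / p) ** (-1.0 / (1.0 - q))

def pessimism_index(k, q, grid=100000):
    ps = (i / (grid + 1.0) for i in range(1, grid + 1))
    return min(((1.0 - phi(p, k, q)) / phi(p, k, q)) / ((1.0 - p) / p) for p in ps)

def optimism_index(k, q, grid=100000):
    ps = (i / (grid + 1.0) for i in range(1, grid + 1))
    return min((phi(p, k, q) / (1.0 - phi(p, k, q))) / (p / (1.0 - p)) for p in ps)

print(round(pessimism_index(2.0, 0.5), 3))   # close to k = 2
print(round(optimism_index(0.4, -0.5), 3))   # close to 1/k = 2.5
```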

Drawing on the above results from the literature, it has become possible to determine, in Propositions 1, 2 and 3, the various types of attitudes toward risk generated by the combination of an attitude toward probabilities, expressed by the properties of φ, and an attitude toward outcomes, which comes from the properties of u.

It is commonsense to claim that all this depends on the action of the two parameters, k and q. In the case of decision in time, their respective roles seem rather clear (see, for instance, (Takahashi, 2007a: pp. 639-640) and (Munoz Torrecillas et al., 2018: pp. 191-192)). k is usually perceived as a parameter of “impulsivity”, which we can understand as “impatience”, since it increases the discounting weight of physical waiting time. And q is a parameter of (time-)consistency, since when it moves away from 1, it also makes exponential discounting more and more distant. Regarding decision in risk, q separates situations of non-optimism (in which global risk-seeking of any type is impossible), when it is greater than 0 and smaller than 1 (Proposition 1), from situations of non-pessimism (in which global risk-aversion, also of any type, is impossible), when it is less than 0 (Proposition 3). Rather than a parameter of “risk-aversion”, as Takahashi et al. (2013: p. 877) first called it, k plays a crucial part as a sophisticated parameter of pessimism: it constitutes the upper bound for the index of non-concavity $G_u$ in order to obtain monotone risk-aversion; or it represents, through 1/k, the upper bound for the index of non-convexity $T_u$ to produce monotone risk-seeking. This shows that appropriate values of k can compensate either the concavity or the convexity of the utility function, producing monotone risk-seeking in the first case, or monotone risk-aversion in the second. And if k is either too large or too small for this, it remains possible to have at least sufficient conditions to obtain weak risk-aversion or weak risk-seeking (Chateauneuf & Cohen, 1994). When it is smaller than 1 (when $0 < q < 1$) or greater than 1 (when $q < 0$), k generates S-shaped or inverse S-shaped probability weighting functions φ which cross the bisector, so that none of the basic global attitudes toward risk can exist. In all other cases, at least weak optimism or weak pessimism occurs, so that the necessary condition for any conception of risk aversion or risk seeking is satisfied (Chateauneuf & Cohen, 1994). Finally, the relation between both parameters, k and q, allows determining the range of their relative values for which strong risk attitudes are possible: if q lies between 0 and 1, $k \geq \frac{1}{1-\frac{q}{2}}$ generates strong pessimism, thus determining strong risk-aversion with u concave; symmetrically, if q is less than 0, $k \leq \frac{1}{1-\frac{q}{2}}$ generates strong optimism, and strong risk-seeking with u convex (Hong, Karni, & Safra, 1987).

4. Concluding Remarks

Emerging from the intuition that probability entails a more or less long delay before winning, probability discounting has proved fruitful. Though usually avoiding the use of an explicit utility function, it can integrate one and give rise to a more complete representation of risky choices. Originally presented in the framework of 2-issues lotteries, its extension to the case of n-issues lotteries would, if carried out incautiously, face the now well-known drawbacks associated with a one-to-one transformation of probabilities, like the violation of first-degree stochastic dominance. This is why the same kind of transformation as the one in use for rank-dependent utility has been employed. The transformation therefore concerns not a single delay or a single probability before winning, but the average delay before obtaining at least a certain reward, or the (decumulative) probability of getting at least this reward. The effects of this transformation on the rationality of behaviour and the attitude towards risk depend on the shape of the q-discounting function, which applies to both time and probability.

An immediate conclusion can be drawn regarding rationality both in time and in risk. Whereas appropriate values of the parameters of the q-discounting function allow reaching the standard criteria of time-rationality (stationarity, through exponential discounted utility) and of risk-rationality (independence, through expected utility), they cannot be fulfilled together, the latter being a particular case of hyperbolic discounted utility. The attitude toward risk depends both on the attitude toward outcomes, embedded in the utility function, and on the attitude toward probabilities, expressed in extended probability discounting. In a trivial way, the concavity or convexity of the utility function brings respectively risk-aversion or risk-seeking. But these have to be combined with the attitude toward probabilities shown by the q-discounting function in a rank-dependent utility framework. Now, in this paper, we provide a unifying framework in which, according to the values of its parameters, we obtain the whole range of properties of the probability weighting function usually acknowledged in the literature. This allows us to distinguish between the different types (weak and strong) of pessimism and optimism toward probabilities, and to determine the various attitudes toward risk generated by the combination of a utility function and probability discounting.

Over the last thirty years or so, probability discounting has shown itself to be, in a large variety of cases, an experimentally relevant procedure for accounting for behaviour under risk. From a theoretical point of view, our generalisation leads to extending its scope and clarifying its meaning in terms of rationality and attitude toward risk. The main limitation of our work is that it does not explain how external factors (such as social interactions or an exogenous shock) can modify the way risk and time interact (Bergeot & Jusot, 2024).

Ethical Approval

This article does not contain any studies with human participants or animals performed by any of the authors.

NOTES

1Such relation between probability and delay was already formally in use as early as 1713 in what we know as “Bernoulli trials”, named after Jacob Bernoulli in Ars conjectandi.

2Rachlin, Raineri, & Cross (1991), for instance, explicitly refer to the discounting function $(1+\alpha t)^{-1}$ (with t denoting time and α a discounting parameter) proposed by Mazur (1987). The same function was previously introduced in 1981 by Herrnstein. Rachlin, Siegel, & Cross (1994) proposed a general hyperbolic discounting function of the type $(1+\alpha t)^{-\beta}$, which may be thought rather close to the function introduced by Loewenstein & Prelec (1992).

3The q-exponential function $\exp_q$ is defined by $\exp_q(-\alpha t) = \left(1+(1-q)\alpha t\right)^{-\frac{1}{1-q}}$. See Cajueiro (2006: p. 386).

4A value function, measuring the differences with a state of reference, could have been used instead of a utility function. The result would have been a variant of Kahneman and Tversky’s cumulative prospect theory. We have preferred the methodologically simpler representation of rank-dependent utility, whose transposition to cumulative prospect theory can be easily performed.

5As already noted by Rachlin et al. (1986: p. 36).

6See, for instance, (Takahashi, 2005: p. 692).

7A separate discount rate r related to perceived time is generally missing in the usual literature on probability discounting (see, for instance, (Takahashi, 2005); but Takahashi et al. (2012: p. 12) seem to have made a choice similar to ours). This might be explained by the integration of the relevant information into the parameter a in the relation between perceived and physical time. The drawback of such a way of proceeding is that it does not make any distinction between discounting in time and perceiving time. This is why we have chosen to make the discount rate explicit.

8The generalized hyperbolic discounting factor in Loewenstein & Prelec (1992) writes $(1+\alpha l)^{-\frac{\beta}{\alpha}}$. Setting $b = \alpha$ and $ra = \beta/\alpha$ enables one to recover the formulation in (8).

9Faced with a 2-issues lottery, we find, as a special case, the usual results from the literature on q-discounting (see, for instance, (Takahashi, 2007b)), with a discounting factor for the outcome in case of success $\mu = \left(1+k(1-q)l\right)^{-\frac{1}{1-q}} = \exp_q(-kl)$.

10See, for instance, Takahashi (2007b) and the colleagues with whom he had partnered (Takahashi, 2010; Takahashi, 2011; Takahashi et al., 2012; Takahashi, 2013; Takahashi et al., 2013).

11Cruz Rambaud & Muñoz Torrecillas (2013) went so far as to propose that q is greater than 1 (see also (Munoz Torrecillas et al., 2018)). Nonetheless, since this would result in the negativity of r or a, and in the negativity of b if we want to keep k positive, this possibility is excluded in the remainder of this paper.

12Note that in the case where $i=n$, the probability of obtaining strictly more than $x_n$ is zero, so that $G(x_{n+1})=0$.

13Rank-dependent utility continues the pioneering work by Quiggin (1982). For an introduction focusing on associated risk perceptions see, among others, Diecidue & Wakker (2001), Abdellaoui (2009), and Cohen (2015). With some qualifications, more recent versions of prospect theory also belong to this kind of models, at least since Tversky & Kahneman's (1992) paper (see (Wakker, 2010)). In several rank-dependent utility models, $U(L)$ is usually written as the (discrete) Choquet integral $U(L)=\sum_{i=0}^{n-1}\varphi\left(G(x_{i+1})\right)\left(u(x_{i+1})-u(x_i)\right)$, rather than as its equivalent in (16).
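
Since this Choquet writing is central to the argument, the following sketch computes the rank-dependent utility of a discrete lottery in both ways, as the Choquet sum and as a sum of utilities multiplied by decision weights, and checks numerically that the two coincide. The functional forms (a concave u and a convex weighting φ) are arbitrary illustrations, not the weighting function derived in the paper.

```python
def rdu(outcomes, probs, u, phi):
    """Rank-dependent utility of a discrete lottery, computed both as the Choquet sum
    of note 13 and as a sum of decision weights, with G(x_j) the probability of
    obtaining at least x_j and the normalisation u(x_0) = 0."""
    ranked = sorted(zip(outcomes, probs))                 # x_1 <= ... <= x_n
    xs = [x for x, _ in ranked]
    ps = [p for _, p in ranked]
    n = len(xs)
    G = [sum(ps[j:]) for j in range(n)] + [0.0]           # G(x_1), ..., G(x_n), G(x_{n+1}) = 0
    # Choquet form: phi(G(x_{i+1})) times the utility increment u(x_{i+1}) - u(x_i)
    choquet = sum(phi(G[i]) * (u(xs[i]) - (u(xs[i - 1]) if i > 0 else 0.0))
                  for i in range(n))
    # Decision-weight form: [phi(G(x_i)) - phi(G(x_{i+1}))] times u(x_i)
    weights = sum((phi(G[i]) - phi(G[i + 1])) * u(xs[i]) for i in range(n))
    assert abs(choquet - weights) < 1e-9                  # the two writings coincide
    return choquet

u = lambda x: x ** 0.5                                    # arbitrary concave utility
phi = lambda p: p ** 2                                    # arbitrary convex (pessimistic) weighting
print(rdu([0.0, 50.0, 100.0], [0.2, 0.5, 0.3], u, phi))
```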

14The above analysis reveals an equivalence between the decision weights $\mu_i$ expressing time discounting (10) and probability discounting (14). Nonetheless, whereas the interpretation of the latter in terms of a probability weighting function in a rank-dependent utility framework is quite intuitive, that of the former is far less obvious: in (10), the utility $u(x_i)$ of each possible gain is discounted by a difference between the discounting factors $\psi_i$ and $\psi_{i+1}$. The difficulty comes not only from the meaning of this difference, but also from the spontaneous interpretation of the sequence of $x_i$'s from $x_1$ to $x_n$, as if, after obtaining $x_1$ immediately ($l_1=0$), we would also obtain $x_2$ provided we wait $l_2$, etc., till $x_n$ after a delay $l_n$. This difficulty vanishes once (10) is rewritten equivalently as a standard discrete Choquet integral (see supra n. 13): $U(L)=\sum_{i=1}^{n}\psi(l_i)\left(u(x_i)-u(x_{i-1})\right)$. Such an expression makes clear that what is obtained after $l_i$ is not $x_i$, whose discounted utility would come in addition to the discounted utilities of $x_{i-1},\ldots$, and $x_1$: the decision maker is supposed to get $x_i$ after $l_i$, but not $x_1+\cdots+x_i$. So that he or she obtains an increase in gain $x_i-x_{i-1}$ and the corresponding additional utility $u(x_i)-u(x_{i-1})$, weighted by the discount factor $\psi(l_i)$.
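
The equivalence invoked in this note can also be checked numerically. The sketch below (with arbitrary utilities and discount factors) computes (10) once with each $u(x_i)$ weighted by the difference between $\psi_i$ and $\psi_{i+1}$, and once as the Choquet sum above, under the conventions $u(x_0)=0$ and $\psi_{n+1}=0$ (the latter being an assumption here, corresponding to $G(x_{n+1})=0$, i.e. an infinitely delayed increment).

```python
def two_writings(utilities, psis):
    """Both writings of (10): u(x_1) <= ... <= u(x_n), psi(l_1) >= ... >= psi(l_n),
    with the conventions u(x_0) = 0 and psi_{n+1} = 0."""
    n = len(utilities)
    psis_ext = list(psis) + [0.0]
    # Decision-weight form: each u(x_i) weighted by psi_i - psi_{i+1}
    weight_form = sum((psis_ext[i] - psis_ext[i + 1]) * utilities[i] for i in range(n))
    # Choquet form: each utility increment u(x_i) - u(x_{i-1}) discounted by psi(l_i)
    u_ext = [0.0] + list(utilities)
    choquet_form = sum(psis[i] * (u_ext[i + 1] - u_ext[i]) for i in range(n))
    return weight_form, choquet_form

# Three ranked gains and the discount factors attached to their (increasing) delays:
print(two_writings([2.0, 5.0, 9.0], [1.0, 0.7, 0.4]))   # the two values coincide
```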

15Stationarity and independence read as follows. Stationarity: assume x and y are two outcomes respectively available at dates $t_1$ and $t_1+s$. If $(x,t_1)$ and $(y,t_1+s)$ are indifferent, $(x,t_2)$ and $(y,t_2+s)$ are also indifferent for any $t_2\geq t_1$. Independence: assume three lotteries $L_1$, $L_2$ and $L_3$, and any $\lambda\in[0,1]$. If $L_1$ is preferred to $L_2$, then $\lambda L_1+(1-\lambda)L_3$ is also preferred to $\lambda L_2+(1-\lambda)L_3$. An intuitive interpretation is that $\lambda$ is the probability of obtaining either $L_1$ or $L_2$, and $1-\lambda$ the probability of obtaining $L_3$.
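
A small numerical check may clarify both axioms: under exponential discounting, an indifference between $(x,t_1)$ and $(y,t_1+s)$ is preserved when both dates are shifted, whereas it is not under Mazur's hyperbola; and expected utility, being linear in probabilities, preserves the ranking of two lotteries after mixing each with a third one. Parameter values and functional forms below are arbitrary illustrations.

```python
import math

def exp_value(x, t, r=0.1):       # exponential discounting: stationarity holds
    return x * math.exp(-r * t)

def hyp_value(x, t, k=0.5):       # Mazur's hyperbola: stationarity fails
    return x / (1 + k * t)

x, t1, s = 100.0, 0.0, 2.0
y_exp = x * math.exp(0.1 * s)                  # calibrated so that (x, t1) ~ (y_exp, t1 + s)
y_hyp = x * (1 + 0.5 * (t1 + s)) / (1 + 0.5 * t1)
for t2 in (t1, t1 + 5.0):                      # shift both dates by the same amount
    print(round(exp_value(x, t2) - exp_value(y_exp, t2 + s), 6),   # stays 0
          round(hyp_value(x, t2) - hyp_value(y_hyp, t2 + s), 6))   # no longer 0

# Independence under expected utility: mixing with a third lottery preserves preference.
EU = lambda lot: sum(p * math.sqrt(z) for z, p in lot)
L1, L2, L3 = [(100.0, 1.0)], [(140.0, 0.6), (0.0, 0.4)], [(25.0, 1.0)]
mix = lambda La, Lb, lam: [(z, lam * p) for z, p in La] + [(z, (1 - lam) * p) for z, p in Lb]
print(EU(L1) > EU(L2), EU(mix(L1, L3, 0.3)) > EU(mix(L2, L3, 0.3)))   # same ranking: True, True
```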

16The Thomsen condition of separability (Fishburn & Rubinstein, 1982) is based on the idea that when deciding in time, we compensate differences in outcomes by differences in dates, and that these differences are additive. So that, given three outcomes x, y and z and three dates r, s and t, if $(x,t)$ and $(y,s)$ are indifferent to a decision-maker, as well as $(y,r)$ and $(z,t)$, it means that $x-y$ is compensated by $t-s$ and, on the other hand, $y-z$ by $r-t$. This means that $x-z=(x-y)+(y-z)$ is compensated by $r-s=(r-t)+(t-s)$. And therefore, $(x,r)$ is also indifferent to $(z,s)$.
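
The additivity argument of this note can be made concrete with a toy additive representation $V(x,t)=x-ct$, in which an outcome difference is compensated exactly by $c$ times a date difference; the numbers below are chosen only so that the two premises of the note hold, and the conclusion then follows.

```python
# Toy additive representation (purely illustrative): V(x, t) = x - c * t, so that an
# indifference (x, t) ~ (y, s) amounts to x - y being compensated by c * (t - s).
c = 2.0
V = lambda x, t: x - c * t

r_, s_, t_ = 1.0, 3.0, 5.0      # three dates
y = 10.0
x = y + c * (t_ - s_)           # chosen so that (x, t) ~ (y, s)
z = y - c * (r_ - t_)           # chosen so that (y, r) ~ (z, t)

# The two premises hold, and the Thomsen condition delivers (x, r) ~ (z, s):
print(V(x, t_) == V(y, s_), V(y, r_) == V(z, t_), V(x, r_) == V(z, s_))  # True True True
```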

17Comonotonic tradeoff consistency (Wakker, 1994) reads as follows. Assume two sets of pairwise lotteries defined as $L_\alpha=(x_1,\ldots,\alpha,\ldots,x_n;p_1,\ldots,p_i,\ldots,p_n)$, $L_\beta=(y_1,\ldots,\beta,\ldots,y_n;p_1,\ldots,p_i,\ldots,p_n)$ and as $L_\gamma=(x_1,\ldots,\gamma,\ldots,x_n;p_1,\ldots,p_i,\ldots,p_n)$, $L_\delta=(y_1,\ldots,\delta,\ldots,y_n;p_1,\ldots,p_i,\ldots,p_n)$. If for some i there exist outcomes $\alpha,\beta,\gamma,\delta$ such that $L_\alpha$ is preferred to $L_\beta$ and $L_\delta$ is preferred to $L_\gamma$, then for two other alternative sets of lotteries defined in the same way, there is no i for which $L_\alpha$ is preferred to $L_\beta$ and, contrary to the previous case, $L_\gamma$ is strictly preferred to $L_\delta$. Alternative key axioms are given by Chateauneuf (1999: pp. 25-27).

18A lottery $L_1$ (whose cumulative distribution function is $F_1$) stochastically dominates another lottery $L_2$ (whose cumulative distribution function is $F_2$) at degree 2 when, for all x belonging to $[x_1,x_n]$, $\int_{x_1}^{x}\left(F_1(s)-F_2(s)\right)\mathrm{d}s\leq 0$.
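
For concreteness, the following sketch (a crude grid approximation, for illustration only) checks this integral condition for two discrete lotteries: a sure gain of 50 dominates, at degree 2, the mean-preserving spread paying 0 or 100 with equal probabilities, but not conversely.

```python
def dominates_second_degree(lot1, lot2, grid_step=0.01):
    """Check whether lot1 second-degree stochastically dominates lot2 (note 18):
    the integral of F1 - F2 from the lowest outcome up to any x must be <= 0.
    Lotteries are lists of (outcome, probability) pairs; the integral is approximated
    on a coarse grid, which is enough for this illustration."""
    cdf = lambda lot, x: sum(p for z, p in lot if z <= x)
    lo = min(z for z, _ in lot1 + lot2)
    hi = max(z for z, _ in lot1 + lot2)
    integral, x = 0.0, lo
    while x <= hi:
        integral += (cdf(lot1, x) - cdf(lot2, x)) * grid_step
        if integral > 1e-9:
            return False
        x += grid_step
    return True

L1 = [(50.0, 1.0)]                       # sure gain
L2 = [(0.0, 0.5), (100.0, 0.5)]          # mean-preserving spread of L1
print(dominates_second_degree(L1, L2), dominates_second_degree(L2, L1))  # True, False
```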

19$L_2$ is a monotone increase in risk of $L_1$ if $L_2=L_1+Z$, with Z being comonotone to $L_1$ and $E(Z)=0$. On the different concepts of attitude toward risk, see (Cohen, 1995).
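
A minimal two-state illustration of this definition (with arbitrary payoffs): Z takes its low value in the state where $L_1$ is low and its high value where $L_1$ is high, so it is comonotone with $L_1$, and its mean is zero.

```python
# Two equiprobable states; Z is comonotone with L1 (low with low, high with high) and E(Z) = 0.
probs = [0.5, 0.5]
L1 = [10.0, 30.0]                       # payoffs of L1 in the two states
Z  = [-5.0, 5.0]
L2 = [a + b for a, b in zip(L1, Z)]     # L2 = L1 + Z = [5.0, 35.0]

mean = lambda payoffs: sum(p * z for p, z in zip(probs, payoffs))
print(mean(Z), mean(L1), mean(L2))      # 0.0, 20.0, 20.0: same mean, wider spread
```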

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Abdellaoui, M. (2009). Rank-Dependent Utility. In P. Anand, P. K. Pattanaik, & C. Puppe (Eds.), Rational and Social Choice: An Overview of New Foundations and Applications (pp. 69-89). Oxford University Press.
[2] Bergeot, J., & Jusot, F. (2024). Risk, Time Preferences, Trustworthiness and COVID-19 Preventive Behavior: Evidence from France. The European Journal of Health Economics, 25, 91-101.
https://doi.org/10.1007/s10198-023-01573-y
[3] Cajueiro, D. O. (2006). A Note on the Relevance of the Q-Exponential Function in the Context of Intertemporal Choices. Physica A: Statistical Mechanics and its Applications, 364, 385-388.
https://doi.org/10.1016/j.physa.2005.08.056
[4] Chateauneuf, A. (1999). Comonotonicity Axioms and Rank-Dependent Expected Utility Theory for Arbitrary Consequences. Journal of Mathematical Economics, 32, 21-45.
https://doi.org/10.1016/s0304-4068(98)00032-9
[5] Chateauneuf, A., & Cohen, M. (1994). Risk Seeking with Diminishing Marginal Utility in a Non-Expected Utility Model. Journal of Risk and Uncertainty, 9, 77-91.
https://doi.org/10.1007/bf01073404
[6] Chateauneuf, A., Cohen, M., & Meilijson, I. (2005). More Pessimism than Greediness: A Characterization of Monotone Risk Aversion in the Rank-Dependent Expected Utility Model. Economic Theory, 25, 649-667.
https://doi.org/10.1007/s00199-003-0451-7
[7] Cohen, M. (2015). Risk Perception, Risk Attitude, and Decision: A Rank-Dependent Analysis. Mathematical Population Studies, 22, 53-70.
https://doi.org/10.1080/08898480.2013.836425
[8] Cohen, M. D. (1995). Risk-Aversion Concepts in Expected- and Non-Expected-Utility Models. The Geneva Papers on Risk and Insurance Theory, 20, 73-91.
https://doi.org/10.1007/bf01098959
[9] Cruz Rambaud, S., & Muñoz Torrecillas, M. J. (2013). A Generalization of the q-Exponential Discounting Function. Physica A: Statistical Mechanics and its Applications, 392, 3045-3050.
https://doi.org/10.1016/j.physa.2013.03.009
[10] Dehaene, S. (2003). The Neural Basis of the Weber-Fechner Law: A Logarithmic Mental Number Line. Trends in Cognitive Sciences, 7, 145-147.
https://doi.org/10.1016/s1364-6613(03)00055-x
[11] Diecidue, E., & Wakker, P. P. (2001). On the Intuition of Rank-Dependent Utility. Journal of Risk and Uncertainty, 23, 281-298.
https://doi.org/10.1023/a:1011877808366
[12] Fishburn, P. C., & Rubinstein, A. (1982). Time Preference. International Economic Review, 23, 677-694.
https://doi.org/10.2307/2526382
[13] Gonzalez, R., & Wu, G. (1999). On the Shape of the Probability Weighting Function. Cognitive Psychology, 38, 129-166.
https://doi.org/10.1006/cogp.1998.0710
[14] Green, L., & Myerson, J. (2004). A Discounting Framework for Choice with Delayed and Probabilistic Rewards. Psychological Bulletin, 130, 769-792.
https://doi.org/10.1037/0033-2909.130.5.769
[15] Herrnstein, R. (1981). Self-Control as Response Strength. In C. M. Bradshaw, E. Szabadi, & C. F. Lowe (Eds.), Quantification of Steady-State Operant Behavior. Elsevier/North-Holland.
[16] Hong, C. S., Karni, E., & Safra, Z. (1987). Risk Aversion in the Theory of Expected Utility with Rank Dependent Probabilities. Journal of Economic Theory, 42, 370-381.
https://doi.org/10.1016/0022-0531(87)90093-7
[17] Kahneman, D., & Tversky, A. (1979). Prospect Theory: An Analysis of Decision under Risk. Econometrica, 47, 263-291.
https://doi.org/10.2307/1914185
[18] Loewenstein, G., & Prelec, D. (1992). Anomalies in Intertemporal Choice: Evidence and an Interpretation. The Quarterly Journal of Economics, 107, 573-597.
https://doi.org/10.2307/2118482
[19] Mazur, J. E. (1987). An Adjusting Procedure for Studying Delayed Reinforcement. In M. L. Commons, J. E. Mazur, J. A. Nevin, & H. Rachlin (Eds.), Quantitative Analyses of Behavior (Vol. 5): The Effect of Delay and of Intervening Events on Reinforcement Value (pp. 55-73). Erlbaum.
[20] Munoz Torrecillas, M. J., Takahashi, T., Gil Roales-Nieto, J., Cruz Rambaud, S., Callejón Ruiz, Z., & Torrecillas Jover, B. (2018). Impatience and Inconsistency in Intertemporal Choice: An Experimental Analysis. Journal of Behavioral Finance, 19, 190-198.
https://doi.org/10.1080/15427560.2017.1374274
[21] Ostaszewski, P., Green, L., & Myerson, J. (1998). Effects of Inflation on the Subjective Value of Delayed and Probabilistic Rewards. Psychonomic Bulletin & Review, 5, 324-333.
https://doi.org/10.3758/bf03212959
[22] Prelec, D. (1998). The Probability Weighting Function. Econometrica, 66, 497-527.
https://doi.org/10.2307/2998573
[23] Prelec, D., & Loewenstein, G. (1991). Decision Making over Time and under Uncertainty: A Common Approach. Management Science, 37, 770-786.
https://doi.org/10.1287/mnsc.37.7.770
[24] Quiggin, J. (1982). A Theory of Anticipated Utility. Journal of Economic Behavior & Organization, 3, 323-343.
https://doi.org/10.1016/0167-2681(82)90008-7
[25] Quiggin, J. (1992). Increasing Risk: Another Definition. In A. Chikan (Ed.), Progress in Decision, Utility and Risk Theory (pp. 239-248). Springer Netherlands.
https://doi.org/10.1007/978-94-011-3146-9_21
[26] Rachlin, H., & Siegel, E. (1994). Temporal Patterning in Probabilistic Choice. Organizational Behavior and Human Decision Processes, 59, 161-176.
https://doi.org/10.1006/obhd.1994.1054
[27] Rachlin, H., Brown, J., & Cross, D. (2000). Discounting in Judgments of Delay and Probability. Journal of Behavioral Decision Making, 13, 145-159.
https://doi.org/10.1002/(sici)1099-0771(200004/06)13:2<145::aid-bdm320>3.0.co;2-4
[28] Rachlin, H., Logue, A. W., Gibbon, J., & Frankel, M. (1986). Cognition and Behavior in Studies of Choice. Psychological Review, 93, 33-45.
https://doi.org/10.1037//0033-295x.93.1.33
[29] Rachlin, H., Raineri, A., & Cross, D. (1991). Subjective Probability and Delay. Journal of the Experimental Analysis of Behavior, 55, 233-244.
https://doi.org/10.1901/jeab.1991.55-233
[30] Rachlin, H., Siegel, E., & Cross, D. (1994). Lotteries and the Time Horizon. Psychological Science, 5, 390-393.
https://doi.org/10.1111/j.1467-9280.1994.tb00291.x
[31] Rothschild, M., & Stiglitz, J. E. (1970). Increasing Risk: I. A Definition. Journal of Economic Theory, 2, 225-243.
https://doi.org/10.1016/0022-0531(70)90038-4
[32] Rotter, J. B. (1954). Social Learning and Clinical Psychology. Prentice-Hall.
[33] Scrogin, D. (2023). Estimating Risk and Time Preferences over Public Lotteries: Findings from the Field and Stream. Journal of Risk and Uncertainty, 67, 73-106.
https://doi.org/10.1007/s11166-023-09404-4
[34] Somasundaram, J., & Eli, V. (2022). Risk and Time Preferences Interaction: An Experimental Measurement. Journal of Risk and Uncertainty, 65, 215-238.
https://doi.org/10.1007/s11166-022-09394-9
[35] Stevens, S. S. (1957). On the Psychophysical Law. Psychological Review, 64, 153-181.
https://doi.org/10.1037/h0046162
[36] Takahashi, T. (2005). Loss of Self-Control in Intertemporal Choice May Be Attributable to Logarithmic Time-perception. Medical Hypotheses, 65, 691-693.
https://doi.org/10.1016/j.mehy.2005.04.040
[37] Takahashi, T. (2007a). A Comparison of Intertemporal Choices for Oneself versus Someone Else Based on Tsallis’ Statistics. Physica A: Statistical Mechanics and Its Applications, 385, 637-644.
https://doi.org/10.1016/j.physa.2007.07.020
[38] Takahashi, T. (2007b). A Probabilistic Choice Model Based on Tsallis’ Statistics. Physica A: Statistical Mechanics and Its Applications, 386, 335-338.
https://doi.org/10.1016/j.physa.2007.07.005
[39] Takahashi, T. (2008). A Comparison between Tsallis’s Statistics-Based and Generalized Quasi-Hyperbolic Discount Models in Humans. Physica A: Statistical Mechanics and its Applications, 387, 551-556.
https://doi.org/10.1016/j.physa.2007.09.007
[40] Takahashi, T. (2010). A Social Discounting Model Based on Tsallis’ Statistics. Physica A: Statistical Mechanics and Its Applications, 389, 3600-3603.
https://doi.org/10.1016/j.physa.2010.04.020
[41] Takahashi, T. (2011). Psychophysics of the Probability Weighting Function. Physica A: Statistical Mechanics and its Applications, 390, 902-905.
https://doi.org/10.1016/j.physa.2010.10.004
[42] Takahashi, T. (2013). The q-Exponential Social Discounting Functions of Gain and Loss. Applied Mathematics, 4, 445-448.
https://doi.org/10.4236/am.2013.43066
[43] Takahashi, T., Han, R., & Nakamura, F. (2012). Time Discounting: Psychophysics of Intertemporal and Probabilistic Choices. Journal of Behavioral Economics and Finance, 5, 10-14.
[44] Takahashi, T., Han, R., Nishinaka, H., Makino, T., & Fukui, H. (2013). The q-Exponential Probability Discounting of Gain and Loss. Applied Mathematics, 4, 876-881.
https://doi.org/10.4236/am.2013.46120
[45] Tsallis, C. (1994). What Are the Numbers that Experiments Provide? Quimica Nova, 17, 468-471.
[46] Tversky, A., & Kahneman, D. (1992). Advances in Prospect Theory: Cumulative Representation of Uncertainty. Journal of Risk and Uncertainty, 5, 297-323.
https://doi.org/10.1007/bf00122574
[47] Wakker, P. (1994). Separating Marginal Utility and Probabilistic Risk Aversion. Theory and Decision, 36, 1-44.
https://doi.org/10.1007/bf01075296
[48] Wakker, P. P. (2010). Prospect Theory: For Risk and Ambiguity. Cambridge University Press.
https://doi.org/10.1017/cbo9780511779329
[49] Yaari, M. E. (1987). The Dual Theory of Choice under Risk. Econometrica, 55, 95-105.
https://doi.org/10.2307/1911158
[50] Yi, R., de la Piedad, X., & Bickel, W. K. (2006). The Combined Effects of Delay and Probability in Discounting. Behavioural Processes, 73, 149-155.
https://doi.org/10.1016/j.beproc.2006.05.001
