Decentralized Policy in Resolving Environmental Disputes with Private Information

We design a private-information game that incorporates the assistance of independent experts. With the better information experts provide, the mistrust of the uninformed party may be dispelled, yielding an effective and efficient resolution outcome. We investigate the conditions under which the experts' information can help the economy reach an efficient outcome or an effective resolution.


Introduction
For the last few decades, environmental regulation has been one of the major issues in government policy and international collaboration. Countless conferences have been held to address the global warming (or climate change) problem since the adoption of the Kyoto Protocol in 1997. The protocol has the enforcement power to commit signatory countries to reducing greenhouse gas (GHG) emissions. However, it did not enter into force until February 16, 2005, and, unfortunately, it expires at the end of 2012. A new framework for GHG reduction needs to be negotiated before 2012, but after the failed attempt at the 2009 Copenhagen Summit, a gap between the Kyoto Protocol and any new commitment appears inevitable. Reaching agreement is difficult and time-consuming in almost every international conference. There are conflicts between economic interests and concerns about the environment, global or otherwise, as well as political struggles between different interest groups within each country.
Climate change has devastating effects on many aspects of our economic and political life. In addition to rising sea levels, glacier retreat and disappearance, and life-threatening storms and heat waves, extreme weather also causes damage not only to the agricultural sector but also to other industries that depend on a stable supply of natural resources, such as water for processing and cleaning. Apart from climate change, many countries also face civil disputes over resource redistribution and local environmental issues, such as who is entitled to which rights, or who should get more.
Environmental disputes may even cause political instability, depending on their scope and scale, and consequently create economic problems. A dispute is inherently inefficient because of the time, energy and resources it wastes; if it remains unresolved, the loss can be large. Many disputes go unresolved simply because of the distrust created by private information. Even when agreement is possible, distrust may lengthen the process and cause inefficiency.
To eliminate this type of inefficiency, Moore and Repullo [1] implement subgame perfect outcomes, and Abreu and Sen [2] modify and extend the Moore-Repullo model to a wider class of subgame perfect implementation problems. Subgame perfect implementation, however, loses its power when information is incomplete, and disputes caused by private information usually fall under the incomplete-information framework. Within that framework, Baliga [3] and Bergin and Sen [4] provide necessary and sufficient conditions for sequential equilibrium implementation. However, their models are rather restrictive in application and do not address the specific informational restrictions (e.g., highly asymmetric and hard-to-verify information) that characterize environmental disputes and conflict resolution.
We modify the analytical framework and conditions, extending them to conflict resolution that uses independent experts as a verification mechanism and as a deterrent to cheating. In reality, civil courts resolve many such disputes, and distrust is the main reason the parties file suit in the first place. In court, expert witnesses verify the information and evidence, and the case settles either in court or out of court. Since environmental disputes involve complicated technical details, the resolution outcome requires expert verification. We devise a mechanism that closely resembles court practice and uses a third-party expert as a credible threat to force a would-be deceiver to reveal the true information, resolving the dispute "out of court" without losing efficiency. The mechanism may also apply to international disputes, including the negotiation of GHG reduction commitments.

The Notation and Definitions
Suppose that, in the dispute, only one involved party has private information (for example, he can afford to abate more pollution) that he tries to conceal in order to gain a more favorable outcome. We denote this private information as the type of nature, θ. Let Θ = {θ_1, ..., θ_k} denote the finite feasible type profile, where k is the total number of mutually exclusive possible types. We assume only player j knows the type; that is, only player j can distinguish θ_l from θ_m for all l ≠ m and all θ_l, θ_m ∈ Θ. In this one-sided incomplete-information game, the sequential equilibrium is defined through the following definitions.
Definition PR: In a game of perfect recall (PR), no player ever forgets information he knew earlier or the actions he has chosen previously.
Definition SR: In an extensive-form game with perfect recall, an assessment (σ, μ) is sequentially rational (SR) if, at every information set h, the prescribed continuation strategy maximizes the player's expected payoff given μ, i.e., u_i(σ | h, μ) ≥ u_i(σ'_i, σ_-i | h, μ) for every alternative strategy σ'_i, where u is the payoff function.
Definition BR: The belief system is updated by Bayes' rule (BR): for all i, j ∈ N, whenever an information set is reached with positive probability under σ, the posterior μ(θ | h) is derived from the prior and σ by Bayes' rule.
Definition: An assessment (σ, μ) is consistent if there is a sequence of assessments (σ^n, μ^n) that converges to (σ, μ), where each σ^n is a perfectly mixed behavior strategy, i.e., σ^n(a | h) > 0 for all h and a, and each μ^n is (uniquely) defined from σ^n by Bayes' rule.
Definition SE: An assessment (σ, μ) is a sequential equilibrium (SE) of a finite extensive-form game with PR if it is SR and consistent.
In most environmental dispute cases, the information is too technical and complex for the uninformed parties to grasp. However, with an expert's trustworthy verification or the voluntary disclosure of player j, uninformed parties may rule out some types and then update their beliefs. A partitional information structure describes such cases exactly.
Definition PI: Information is partitional (PI) if player j's type profile can be partitioned by player i into mutually disjoint subsets, which can be ruled in or out as evidence accumulates. Information is renewed when new evidence is introduced by the expert's report. A credence probability is the subjective belief that a player puts on the expert's report. If a player believes the expert's report is sincere, she will make her decision based on the report. A proper reward structure can influence the sincerity of the expert's report; the expert may also care too much about his reputation to be insincere even with little reward. Suppose the reward for the expert's report is r(m, e), where m is the report and e denotes the effort level, and suppose the expert can make a sincere (accurate) report at cost c. A risk-neutral expert chooses an effort level to maximize his expected payoff.
Definition S1: A report is called "sincere" if it matches the true type.
Suppose player i puts a credence probability η_i on the expert's report; her expected payoff is then the η_i-weighted average of her payoffs under the report and under her prior belief. Based on the initial information, the uninformed player forms a prior belief about the opponent's strategy. When new information is released, either by the expert's report or by the opponent's voluntary disclosure, the uninformed party uses it to form a better strategy against her opponent. This new information is a finer partitional information (FPI), which rules out more improbable type profiles. An updating rule should incorporate this new information: with a new FPI, updating starts from a new prior (based on the new FPI), while the old prior is discarded.
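The expert's incentive to be sincere can be sketched numerically. In this minimal sketch, the reward schedule, the cost parameters, and the bounded effort range are all illustrative assumptions, not the paper's specification: a risk-neutral expert picks an effort level e in [0, 1] to maximize the reward r(m, e) minus the cost c(e) of producing a sincere report.

```python
# A sketch of the expert's effort choice; all functional forms and numbers
# below are illustrative assumptions, not taken from the paper.

def reward(effort, bonus=10.0):
    """Assumed reward schedule: pay rises with verifiable effort."""
    return bonus * effort

def cost(effort, c=4.0):
    """Assumed convex cost of producing a sincere (accurate) report."""
    return c * effort ** 2

def best_effort(steps=1000):
    """Grid-search the effort level that maximizes the expert's net payoff."""
    grid = [i / steps for i in range(steps + 1)]
    return max(grid, key=lambda e: reward(e) - cost(e))

print(best_effort())  # 1.0 -> under this reward structure, full effort
                      #        (and hence a sincere report) is optimal
```

With this (assumed) reward structure, marginal reward exceeds marginal cost over the whole feasible range, so the expert exerts full effort; a weaker reward would lower the chosen effort and, with it, the credibility of the report.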
Considering the refinement of Grossman and Perry [5], we can assign positive probabilities to test the credibility of off-the-equilibrium-path (OTEP) strategies. When the uninformed parties obtain a new FPI with a new prior belief, the old information partition and prior belief system must be discarded; otherwise a credible OTEP deviation might be found, supported by positive probability on profiles that no longer exist in the new information set. The new updating rule (NUR) can be constructed as follows: when the new partition is at least as fine as the old one, the probability distribution over the partition, μ, changes to a new prior μ', whose support is contained in the support of μ.
If an unexpected (OTEP) move a occurs, the belief is revised onto the set of types consistent with that move; if the move is on the equilibrium path, beliefs follow Bayes' rule for all subsequent histories of h. Our SE is henceforth defined with this NUR. Under the NUR, the SE is stronger: when a player updates her belief, she deals not only with zero-probability events but also with a new information partition.
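The NUR can be sketched as a simple renormalization step. The type names and probabilities below are illustrative assumptions; the point is only that the old prior is discarded and the new prior's support is contained in the surviving cell of the finer partition.

```python
# A minimal sketch of the new updating rule (NUR): when a finer partitional
# information (FPI) arrives, the old prior is discarded and a new prior is
# formed whose support is contained in the set of surviving types.

def nur_update(old_prior, surviving_types):
    """Restrict the prior to the surviving cell of the new partition and
    renormalize; ruled-out types receive probability zero."""
    mass = sum(p for t, p in old_prior.items() if t in surviving_types)
    return {t: (p / mass if t in surviving_types else 0.0)
            for t, p in old_prior.items()}

old_prior = {"theta1": 0.5, "theta2": 0.3, "theta3": 0.2}
# Suppose the expert's report rules out theta3.
new_prior = nur_update(old_prior, {"theta1", "theta2"})
print(round(new_prior["theta1"], 3), round(new_prior["theta2"], 3))  # 0.625 0.375
```

Note that theta3 gets probability zero rather than being kept at its stale prior weight, which is exactly why a credible OTEP deviation cannot be supported on profiles outside the new information set.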

Necessary and Sufficient Conditions
Necessary and sufficient conditions are required before a working mechanism can be constructed to implement the truth-revealing SE outcome.

Necessary Condition
For the truth-revealing purpose, our necessary condition is a revised version of condition C in [1], condition α in [2], and condition B in [3]. We need some definitions before constructing our necessary condition.
Definition (N1): An allocation x is a function x: Θ → X, where X is a finite set of possible allocations.
Definition (N2): A choice function f selects a subset of all possible allocations.
Definition (N3): A deception for player i is a strategy α: Θ → Θ that misreports the type, i.e., α(θ) ≠ θ for some θ ∈ Θ.

Definition (IC): A choice function f satisfies the incentive compatibility (IC) condition if and only if, for all θ, θ' ∈ Θ, u_j(f(θ), θ) ≥ u_j(f(θ'), θ).

Now we can define the necessary condition and its associated proposition:
Condition A: Let f be the choice function and α a deception such that f∘α ≠ f, where f is the choice set of truth-revealing outcomes and f∘α is the choice set of deception outcomes. For each deception there exist 1) an information partition for each player i ∈ N, 2) a finite sequence of strategies, and 3) a sequence of probability measures, i.e., belief systems supporting the deviation. Let k be the first point at which agent j(k) deviates from the equilibrium path. Condition (A1) says that a deviation from an equilibrium strategy to the next stage is no more profitable than staying on the equilibrium path: the expected payoff from the deviation is less than or equal to the expected payoff from staying on the path. This must hold for all k for (σ, μ) to be a SE. Now suppose some agents play a deception α that implements an outcome x ∈ f∘α. For this deception to be non-optimal, it must be profitable for some type of agent, say j(L), to defect from the deception, thus upsetting it. This is the "preference reversal" part (A2) of condition A: a deviation from the deception generates an expected payoff greater than or equal to the expected payoff of the deception, ensuring that no deception is profitable in a sequential game.
Suppose there exists some assessment under which the deception is an optimal outcome. Then, by (A1), f∘α would be implemented as the equilibrium outcome, so f could not be an outcome in SE(g, σ, μ), which contradicts the initial assumption that f is implemented in SE(g, σ, μ). Q.E.D.
The necessary condition eliminates the deception α played by the informed player. For the deception to be non-optimal, it must be worthwhile for some type of agent to defect from it. Condition A allows a sequence of strategies in which some agent plays the deception until stage L and then faces a preference reversal at the next stage. Condition A is only a necessary condition because it does not consider the effect on the posterior belief and the associated strategies when the deception has been played in a previous stage. If a deception is played and there are consistent beliefs supporting it, the deception outcome may not be ruled out.

Sufficient Condition
To achieve a truth-revealing dispute resolution, we need a sufficient condition that allows us to design a mechanism using the expert's report as a credible threat to implement the truth-revealing SE outcome. We use a weak domain restriction and a restriction on the posterior beliefs under deception to rule out the possibility of deception.
The following definition and condition are the necessary parts for our sufficient condition.
Definition DR: A choice function f satisfies the domain restriction if not all agents have the same ranking over all outcomes.
Condition B follows the same reasoning of the "posterior reversal" condition in [4].
Condition B: Let f be the choice set of the truth-revealing outcomes, which satisfies the domain restriction. For each deception α ∈ D, there exist an associated outcome set f∘α and a supporting posterior belief μ_α. Suppose x ∈ f, and that σ_-i is the truth-telling strategy set for all players other than player i.
Suppose there exist two constant allocations y, z ∈ X, and suppose that (B1-1) there exists a consistent belief μ' that supports truthful reporting with a new information partition. For all i, j ∈ N, if truth-telling was the strategy in the previous stage, then (B1-2) for all consistent beliefs that support a deception (reporting θ' instead of θ), if the deception occurs, there exist some type θ and a supporting consistent belief under which the deceiver is worse off. Under this condition, the posterior-reversal condition identifies the properties of the posterior distribution that separate the beliefs of truth-telling from those of deception. The posterior distribution translates variations in beliefs into variations in the distribution over outcomes. At some point, the belief under truth-telling, μ', is separated from the beliefs under deception, μ_α. Condition (B1) says that if player i challenges and pushes the game into the second stage, then y will be the equilibrium outcome under truth-telling, and z if player j plays the deception. Condition (B2) says that the challenger will change her counter-offer to x* after the private information has been elicited under condition (B1).
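The separation of posteriors can be illustrated numerically. In this sketch the prior, the announced type, and the report's accuracy are all assumptions; the point is that a sincere expert report pushes the uninformed player's posterior far enough to distinguish truth-telling from deception.

```python
# Posterior separation after a challenge: a sincere expert report shifts
# the uninformed player's belief so that truth-telling and deception
# become distinguishable. All probabilities are illustrative assumptions.

def posterior(prior, likelihood):
    """Bayes' rule over a finite type set."""
    joint = {t: prior[t] * likelihood[t] for t in prior}
    total = sum(joint.values())
    return {t: p / total for t, p in joint.items()}

prior = {"low_cost": 0.5, "high_cost": 0.5}

# P(report says "low_cost" | true type), assuming the expert's sincere
# report is accurate 90% of the time.
report_says_low = {"low_cost": 0.9, "high_cost": 0.1}

# Player j announced "high_cost", but the expert's report says "low_cost":
post = posterior(prior, report_says_low)
print(round(post["low_cost"], 3))  # 0.9 -> belief concentrates on "low_cost"
```

A posterior this far from the announced type supports the worse outcome z for the deceiver, which is the "posterior reversal" at work.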
Condition B can be simplified when we introduce a new player, the expert. Although the expert has perfect information about the type profile, this information structure is similar to, but not truly, a NEI structure, because uninformed players become informed only after a sincere report or a voluntary revelation by the informed player. However, with a proper reward structure, the expert is more likely to make a sincere report, which serves as a formidable and credible threat to deter deception. Our sufficient condition is constructed as condition D.
Condition D: Let f be the choice set of truth-revealing outcomes. Let i denote the uninformed player and j the informed player. For each deception α ∈ D, there exist an associated outcome set f∘α and a posterior belief μ_α. Suppose x ∈ f for all i ∈ N and θ ∈ Θ, where σ_-i is the truth-telling strategy set for all players other than player i, and α is the deception played by the informed player j.
If the truth is disputed in the previous stage, and there exist two constant allocations as the final outcomes, i.e., y, z ∈ X, then suppose that (D2) there exists a consistent probability η for the uninformed player to believe and rely on the expert's report (η > 0). Suppose there also exists a new information partition associated with a consistent belief μ' that supports truthful reporting, such that:
D2.1) for all s ∈ N and x* ∈ X, if truth-telling was the strategy in the previous stage, then challenging the truth yields no gain;
D2.2) for all consistent beliefs μ_α that support a deception, if the deception occurs, there exists some type profile under which the deceiver receives the worse outcome z;
D2.3) once the updated belief becomes degenerate, it remains degenerate.
With condition D, we can derive Proposition 2.
Proposition 2: If a truth-revealing choice function f satisfies condition D and the domain restriction in definition DR, then f can be implemented as a SE.
Remark: The proof of Proposition 2 is quite similar to the proof of Proposition 1, except for the restrictions on the posterior belief, which depend on whether a deception was played previously. With the domain restriction, a dispute is probable, so a SE resolution is desirable. Suppose f is a choice function supported by (σ, μ), where η is the probability of obtaining an expert's report that adds new information to the updating of a new consistent belief μ'. If for each deception α there exists an outcome set f∘α with a supporting posterior belief μ_α, then condition (D1) prescribes that truth-telling is preferable to any other strategy, i.e., it is the IC condition for truth-telling. If a deception is suspected and the game is pushed to the next stage, condition (D2) ensures that when the probability of posterior verification is positive, i.e., η > 0, a deception results in a preference reversal with some possibly worse outcomes. Thus deception is not profitable for the informed player. If truth-telling was the strategy in the previous stage, challenging the truth is not profitable either, as described in part 1) of (D2). Hence neither deception nor challenge is ever an equilibrium strategy in a sequential game if condition D is satisfied.
Proof: Suppose f is a truth-telling SE outcome, i.e., f ∈ SE(g, σ, μ), and f satisfies condition D, which means, by (D1), that truth-telling is the equilibrium strategy for all players. Now suppose the deception outcome set f∘α also satisfies condition D and is a SE outcome. Then condition (D1) would require truth-telling to be preferred, which contradicts the assumption that f∘α is implemented in SE. So f and f∘α cannot both satisfy condition D and lie in the same equilibrium set. This partially provides the contradiction toward proving Proposition 2.
Next we need to eliminate the possibility of a deviation from the equilibrium strategies for both the informed and the uninformed players.
Suppose there exists a SE strategy supported by a belief μ_α formed after a deception in the previous stage; the game then ends with the final outcome (z, μ_α). Condition D2.2) shows that (z, μ_α) cannot be the SE outcome, because z is worse for the deceiver than some x ∈ X. So condition D2.2) contradicts the assumption that a SE supported by a previous deception could exist. This is the second contradiction showing that only the truth-telling strategy is a SE.
So, to gain the higher payoff v(x, θ), the informed player will not deviate from the equilibrium strategy of truth-telling in the first stage. Since the equilibrium strategy is truth-telling, could "challenging the truth" be a SE strategy? Suppose, in equilibrium, a challenge is issued by the uninformed player while no deception was played in the previous stage. The final outcome would then be (y, μ'), which cannot be the SE outcome because, by condition D2.1), for all s ∈ N, challenging the truth does not benefit the challenger. This provides the final contradiction: when the equilibrium strategy in the previous stage is truth-telling, condition D2.1) contradicts the assumption that a challenge could be a SE strategy.
So neither deception nor challenge can ever be a SE strategy in the first stage, and f (a truth-revealing SE) will be implemented. Q.E.D.

Mechanism: An Example
We can construct many types of mechanisms to resolve environmental conflicts and disputes. Suppose game G consists of the following stages:
Stage j.0: Player j's private information is to be elicited.
Player i forms her prior belief μ over Θ.
Stage j.i: Player j announces his type, θ_j, and player i simultaneously announces either "agree" or "challenge". If player i announces "agree", f(θ_j) is chosen and implemented; the game ends here, no more information is extracted, and no more sunk costs are incurred. If player i issues a "challenge", she suspects player j's announcement. Player i then has two options: (1) By announcing a credence probability η as a mixed-strategy measure, she randomizes between eliciting and believing the expert's report to make her final offer (i.e., proceeding to stage j.p) and breaking off the negotiation (i.e., ending the game here).
(2) Player j is allowed to make another announcement, θ'_j ≠ θ_j. Player i can either "agree" and implement the resolution outcome according to the new announcement and the new supporting beliefs, or "challenge" again. If she agrees, the game ends here; if she challenges, player j pays a larger share toward hiring the expert, and the game proceeds to stage j.p.
Stage j.p: The expert is hired to reveal the true state and to choose a pair of worse outcomes (y, z), selected according to condition (D2). Players i and j share the cost of hiring the expert. At this stage, player j can be punished for the deception made in stage j.i, and player i can also pay a penalty for issuing unnecessary challenges. The game ends here.
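The stages above can be sketched as a toy game. The payoffs, the penalty sizes, and the assumption of a perfectly sincere expert at stage j.p are all illustrative; the sketch shows only why a credible expert makes lying unprofitable.

```python
# Toy run of the mechanism: stage j.i (announce, then agree-or-challenge)
# followed, after a challenge, by stage j.p (expert reveals the state).
# Payoffs, penalty sizes and the expert's perfect accuracy are assumptions.

PENALTY = 4          # punishment for a deception caught at stage j.p
CHALLENGE_COST = 1   # penalty for an unnecessary challenge by player i

allocation = {"low_cost": "abate_more", "high_cost": "abate_less"}  # f
payoff_j = {("abate_more", "low_cost"): 2, ("abate_less", "low_cost"): 5,
            ("abate_more", "high_cost"): 1, ("abate_less", "high_cost"): 3}

def play(true_type, announced, challenged):
    """Return (player j's payoff, player i's penalty) when the game ends."""
    if not challenged:                       # stage j.i: player i agrees
        return payoff_j[(allocation[announced], true_type)], 0
    # stage j.p: the expert reveals the true state
    base = payoff_j[(allocation[true_type], true_type)]
    if announced != true_type:               # deception caught: worse outcome z
        return base - PENALTY, 0
    return base, CHALLENGE_COST              # truthful j: challenger pays

# A low-cost type would gain by announcing "high_cost" (5 instead of 2),
# but only if never challenged; a credible expert makes lying unprofitable.
print(play("low_cost", "high_cost", challenged=False)[0])  # 5 (lie, unchallenged)
print(play("low_cost", "high_cost", challenged=True)[0])   # -2 (lie, caught)
print(play("low_cost", "low_cost", challenged=True)[0])    # 2 (truth survives)
```

With any sufficiently high probability of a challenge backed by the expert, the expected payoff from lying falls below the truthful payoff, which is the role condition (D2) plays in the mechanism; the challenge cost, in turn, deters frivolous challenges by player i.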
This mechanism can be applied to resolve environmental disputes as well as other kinds of conflict, as long as the dispute is caused by deception alone and the only route to resolution is revealing the truth. There are also case studies on similar types of conflict resolution and mechanism suggestions in Lin [6]. We now discuss some environmental dispute cases that could be resolved by our mechanism.

The Applicable Example Explained
One example of a possible application is the dispute between Formosa Plastics (FP) and Texas' local environmental watchdog, the Calhoun County Resource Watch (CCRW), which has run since the late 1980s. Texas needed FP to boost its plummeting economy in the 1980s, but the waste water discharged by FP caused a huge degradation in the quality and quantity of shrimp in Lavaca Bay (the third-largest fishing ground in the United States at the time). CCRW's president took extreme measures to stop FP's operations: she went on a hunger strike for more than 40 days and sank her boat on the spot where FP discharged its effluent. After the news exposure, FP had to pay huge fines for the violations. CCRW suspected FP of covering up spills, silencing workers, flouting the EPA, and dumping highly toxic chemicals into the air, land and sea. FP claimed it was willing to put forward a plan for further abatement, but CCRW did not trust the company enough to negotiate. So the war between them went on for years before they had to sit down and talk in order to solve the problem. Our mechanism would have made the information available to all parties involved, and the resolution could have started from there. This actually happened in 1993, when an outside expert trusted by CCRW joined the negotiation process and an agreement was signed. However, there was no law or legal mechanism for the disputing parties to conduct such a resolution process, so the 1993 agreement was really accidental. Thus, when another dispute started in 2002, the local activist had to chain herself to one of the plant's towers.
Our mechanism could be applied in the following fashion: a law is enacted requiring all disputing parties to form a resolution committee (acting as an arbitrator), and the law gives this committee the legal right to put forward a set of rewards and penalties according to our necessary and sufficient conditions; the resolution process starts from there. When the true information is undoubtedly revealed, an agreement will be signed, just like the 1993 agreement between CCRW and FP, and the dispute is resolved.
Global disputes such as climate change and GHG reduction are too complex for a single mechanism to resolve, but as long as the true information is revealed to all parties concerned, no one can morally condemn a country that truly cannot afford the costs of abatement. When there is no deception and no private agenda across the negotiation table, the parties might find a solution that reduces GHG emissions while preserving each country's economy. The contribution of our theory is to eliminate the possible deception that may worsen the dispute and make everyone worse off in the end.

Conclusions
We have presented the basic model and the necessary and sufficient conditions for a proper mechanism to implement a perfect-information (truth-revealing) SE outcome and to resolve disputes caused by information asymmetry. We have also shown some possible applications.
Our model can be applied to a wide variety of interesting cases, such as externality and compensation mechanisms, conflict resolution, negotiation, and bargaining problems, provided the conflict is caused by deception alone. The independent third-party experts serve as an option to catch deception if necessary, even though they may never actually be called upon in SE implementation, since the essence of our model is to get the information revealed in the first stage so that the dispute is resolved then and there.


We can derive Proposition 1 by using condition A.
Proposition 1: If a truth-revealing choice function f is implemented as the SE outcome, then it satisfies condition A.
Proof: Assume that f is implemented as a SE by an extensive mechanism g. Let SE(g, σ, μ) denote the sequential equilibrium of the game g with associated equilibrium assessment (σ, μ). Thus, for all θ ∈ Θ, f is implemented in SE(g, σ, μ) with the support of (σ, μ, h), where μ is the prior probability distribution. The inequality in (A1) of condition A is then quite straightforward: at every stage, staying on the equilibrium path is weakly preferred to deviating.