A Statistical Analysis of Games with No Certain Nash Equilibrium Makes Many Results Doubtful
1. Introduction
Game theory is most often connected with the prisoner’s dilemma, though it is of course just one of many games. In this game there are two inmates, each having two choices: confess or stay quiet. The penalties are summarized in Figure 1.
As one easily sees, both staying quiet is the best possible outcome in terms of the total prison sentence. However, each prisoner can personally improve by changing strategy. Therefore, both staying quiet is not a stable result, leading to the sad fact that eventually both will confess, getting ten years each. This result is stable because any individual change would lead to a life sentence for that individual.
Figure 1. Penalty depending on each prisoner’s strategy.
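To make the stability check concrete, here is a minimal sketch in Python. Only the ten-year and life-sentence figures come from the text above; the sentence for mutual silence (1 year), the sentence for the lone confessor (0 years), and the numeric stand-in for "life" (25 years) are assumptions made purely for illustration.

```python
# Minimal sketch: checking pure-strategy Nash equilibria of the prisoner's dilemma.
# Only "ten years each" and "life sentence" come from the text; the remaining
# sentence lengths and the numeric stand-in for "life" (25 years) are assumptions.

YEARS = {  # (strategy of A, strategy of B) -> (sentence of A, sentence of B)
    ("quiet", "quiet"): (1, 1),
    ("quiet", "confess"): (25, 0),   # 25 stands in for a life sentence
    ("confess", "quiet"): (0, 25),
    ("confess", "confess"): (10, 10),
}
STRATEGIES = ("quiet", "confess")

def is_nash(a, b):
    """True if neither prisoner can reduce their own sentence by deviating alone."""
    sa, sb = YEARS[(a, b)]
    a_ok = all(YEARS[(alt, b)][0] >= sa for alt in STRATEGIES)
    b_ok = all(YEARS[(a, alt)][1] >= sb for alt in STRATEGIES)
    return a_ok and b_ok

for a in STRATEGIES:
    for b in STRATEGIES:
        print(a, b, "Nash equilibrium" if is_nash(a, b) else "unstable")
# Only (confess, confess) survives the check, as argued above.
```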
Because the first to prove the existence of a stable equilibrium in a certain class of games was (arguably) the late Nobel laureate John Forbes Nash, such a stable equilibrium is called a Nash equilibrium. Besides being a mathematical toy, game theory is widely applied in business and economics. There the strategies (here confess or stay quiet) are e.g. investment strategies such as investing in country A, B, or C. Please note that game theory delivers a statistical result. Playing a game just once may lead to any outcome; playing it very often will typically lead to the predicted Nash equilibrium. If it is possible (and sensible) to mix strategies, game theory will suggest something like investing 20% in A, 50% in B, and 30% in C.
In order to prove the existence of an equilibrium one needs this “mixing of strategies”, as we will show in Section 4. Because confess and stay quiet cannot be mixed, the existence of a Nash equilibrium is not guaranteed for the prisoner’s dilemma per se; it is a lucky coincidence that it has one. In the game rock, scissors, paper there is (with pure strategies) no Nash equilibrium. However, the strategies can be mixed, and in doing so there is an equilibrium at 1/3 rock, 1/3 scissors, and 1/3 paper.
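As a small illustration of why the uniform mix is stable, the following sketch verifies that every pure reply earns the same expected payoff against the 1/3, 1/3, 1/3 mix, so no player can gain by deviating. The win = +1, loss = -1, draw = 0 payoffs are an assumption not spelled out above.

```python
# Small sketch: against the uniform mix in rock, scissors, paper every pure
# reply earns the same expected payoff (0 with win = +1, loss = -1, draw = 0),
# which is exactly why the uniform mix is a Nash equilibrium.
from fractions import Fraction

MOVES = ("rock", "scissors", "paper")
BEATS = {"rock": "scissors", "scissors": "paper", "paper": "rock"}

def payoff(mine, theirs):
    if mine == theirs:
        return 0
    return 1 if BEATS[mine] == theirs else -1

uniform = {m: Fraction(1, 3) for m in MOVES}
for reply in MOVES:
    expected = sum(prob * payoff(reply, m) for m, prob in uniform.items())
    print(reply, expected)  # prints 0 for every reply
```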
In this publication we will focus on a particular game inspired by [1] and [2]. In penalty shooting in soccer, player and keeper may each choose between three strategies: right, left, or middle. Taking the probabilities to score a goal (or to prevent it) under these two times three strategies, one may use game theory to calculate the best strategy mix for keeper and player. Taking numbers from 434 individual penalties in 44 World Cup and European Championship games [2], one gets the following result: for player and keeper it is (approximately) best to choose right in 43%, left in 38%, and middle in 19% of the cases. This leads to a success rate (from the player’s point of view) of 80% on average. The details can be found in Sections 2 and 3.
Please note that this game is not limited to constructing soccer strategies; its use there may even be pretty limited. But it may be considered a stand-in for many business situations in which two competitors must choose strategies and/or counter-strategies in order to be successful in the end.
As we can calculate the best strategy, a Nash equilibrium obviously exists. However, it is easy to see that this is not always the case; it depends on input parameters such as the probability to score a goal if player and keeper choose the same corner. As stated above, applying game theory is only sensible if one considers it a statistical result. A particular pair of keeper and player will not have the same success or failure rate under a chosen strategy over time. Even more so, changing keeper and/or player will lead to different probabilities of success or failure. So we have a distribution of probabilities with an average. In professional soccer (or investment) one may rightly assume that this distribution is pretty narrow, as professionals optimize their behavior in rigorous training. But even with an extremely narrow distribution, part of it may lie in an area where there is no Nash equilibrium, although the average lies in a regime showing a Nash equilibrium. As we can use statistical results only, we may find a Nash equilibrium in a calculation although it does not exist in reality. This is scrutinized in Section 3. It is the main result of this publication.
As shown in Section 3, the areas of Nash equilibrium have a sharp border. So an extremely small change in the input parameters will lead from an existing Nash equilibrium not only to no equilibrium but to a completely useless result. It is a special form of chaos [3], caused by a wrong use of averages [4] which cannot be avoided here.
Though this publication focuses on an example, it is interesting to scrutinize why the Nash equilibrium vanishes here. Such theoretical considerations are the main point of Section 4. At first glance, our suggested game falls into a category where a Nash equilibrium must always exist. Using topology, it is almost trivial to prove its existence by constructing a continuous Nash function and using Brouwer’s fixed point theorem. In Section 4 we show that one needs “continuous” input factors in order to do so, which is not the case here; this is assumed quite implicitly in standard proofs.
In Section 5 we draw conclusions from the published results and suggest further research.
A rigorous literature review is not reasonable here because our main finding (no equilibrium although a calculation yields one) is new. There are games showing sometimes an equilibrium and sometimes not, e.g. [5] [6]. However, the focus here is not to scrutinize such partly very difficult problems. It is also not a problem of uncertainty: within an example, we can calculate the Nash equilibrium by solving Equations (1) to (6). Though the results (7) to (10) with the constraints (11) and (12) are easily obtained, it is impossible to decide whether they make any sense, as explained in Section 3.
2. Suggested Game and Its Stability
In soccer, there is penalty shooting, where only two people are involved. The player stands twelve yards from the goal. The keeper stands in front of the goal and tries to prevent the ball from getting in. There are only two possible outcomes: goal (success for the player) or no goal (success for the keeper). To keep it simple, the player has the choice to shoot to the left, to the middle, or to the right. The keeper may jump to the left (his right side), stand straight, or jump to the right (his left side). Of course, the model can easily be extended to include something like upper left, lower left, etc. Figure 2 summarizes the possible payoffs.
Figure 2. Payoff matrix in penalty shooting.
$p_{ll}$ is the probability for a goal if keeper and player are both betting on left. As another example, $p_{mr}$ is the probability for a goal if the keeper stands straight and the player shoots to the right. So the first index in $p_{ij}$ (with $i, j \in \{l, m, r\}$ for left, middle, and right) refers to the keeper, the second to the player. The tuple $(p_{ij},\, 1 - p_{ij})$ is the payoff. The first position gives the success rate for the player, the second one the success rate for the keeper. So e.g. $p_{lr}$ refers to a situation where the keeper expects the player to shoot to the left while the player unfortunately shoots to the right. Therefore $p_{lr}$ is presumably close to one. However, the player may miss the goal in some cases, leading to a success for the keeper.
The $p_{ij}$ are the input parameters or payoffs of the game. The realized strategies of the game are the probabilities $s_l$, $s_m$, and $s_r$. The player shoots with $s_l$ to the left, with $s_m$ to the middle, or with $s_r$ to the right, respectively, in order to have the biggest success (most goals). We can set e.g. $s_r = 1 - s_l - s_m$, which eliminates one variable. The probabilities $q_l$, $q_m$, and $q_r$ (with $q_r = 1 - q_l - q_m$) are the corresponding probabilities for the keeper. The player’s expected payoff is as follows if shooting to the left, to the middle, or to the right, respectively:
$q_l\,p_{ll} + q_m\,p_{ml} + q_r\,p_{rl}$ (1)
$q_l\,p_{lm} + q_m\,p_{mm} + q_r\,p_{rm}$ (2)
$q_l\,p_{lr} + q_m\,p_{mr} + q_r\,p_{rr}$ (3)
Completely analogously, we get the keeper’s expected payoff if jumping to the left, standing straight, or jumping to the right, respectively:
$s_l\,(1 - p_{ll}) + s_m\,(1 - p_{lm}) + s_r\,(1 - p_{lr})$ (4)
$s_l\,(1 - p_{ml}) + s_m\,(1 - p_{mm}) + s_r\,(1 - p_{mr})$ (5)
$s_l\,(1 - p_{rl}) + s_m\,(1 - p_{rm}) + s_r\,(1 - p_{rr})$ (6)
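As a concrete illustration of expressions (1) to (6), the following short sketch evaluates them for arbitrary payoff probabilities and strategy mixes. The data layout, function names, and all sample numbers are free choices made for this illustration only.

```python
# Sketch: evaluating expressions (1)-(6) for payoff probabilities p[i][j]
# (first index keeper, second index player) and strategy mixes s (player)
# and q (keeper). All concrete numbers below are made up for illustration.

DIRS = ("l", "m", "r")

def player_payoffs(p, q):
    """Expected goal probability for the player when shooting l, m, r -- (1)-(3)."""
    return {j: sum(q[i] * p[i][j] for i in DIRS) for j in DIRS}

def keeper_payoffs(p, s):
    """Expected save probability for the keeper when choosing l, m, r -- (4)-(6)."""
    return {i: sum(s[j] * (1 - p[i][j]) for j in DIRS) for i in DIRS}

if __name__ == "__main__":
    p = {"l": {"l": 0.6, "m": 0.95, "r": 0.95},
         "m": {"l": 0.9, "m": 0.3, "r": 0.9},
         "r": {"l": 0.95, "m": 0.95, "r": 0.6}}
    s = {"l": 0.38, "m": 0.19, "r": 0.43}   # player's mix
    q = {"l": 0.38, "m": 0.19, "r": 0.43}   # keeper's mix
    print(player_payoffs(p, q))
    print(keeper_payoffs(p, s))
```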
The player cannot improve his strategy if expression (1) equals (2) equals (3). By the same token, the keeper has the biggest success if expression (4) equals (5) equals (6). This leads to four linear equations for $s_l$, $s_m$, $q_l$, and $q_m$, which are easily solved to:
(7)
(8)
(9)
(10)
In (7) to (10), an abbreviation has been used for the denominator. As expected, we have $s_l + s_m + s_r = 1$ and $q_l + q_m + q_r = 1$. In addition, the following inequalities must hold:
(11)
(12)
They guarantee both the solubility of the linear equations (no denominator equals zero) and that all probabilities lie between zero and one. If these conditions are violated, there is no Nash equilibrium. Because of $s_l + s_m + s_r = 1$ and $q_l + q_m + q_r = 1$, the inequalities (11) and (12) are identical.
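Because the closed-form expressions (7) to (10) are lengthy, it may help to see the same computation done numerically. The sketch below (with freely chosen function names and data layout; it is an illustration, not the derivation used above) builds the two 2x2 linear systems implied by the indifference conditions (1) = (2) = (3) and (4) = (5) = (6), solves them, and then checks conditions (11) and (12) in the form "all six probabilities lie between zero and one and no denominator vanishes".

```python
# Sketch: solve the indifference conditions built from (1)-(6) numerically and
# test conditions (11)/(12) as "all six probabilities lie in [0, 1]".
# The representation of p and the helper names are free choices for this sketch.
import numpy as np

DIRS = ("l", "m", "r")

def mixed_equilibrium(p):
    """Return (s, q) as dicts over l/m/r, or None if no Nash equilibrium exists."""
    P = np.array([[p[i][j] for j in DIRS] for i in DIRS])  # rows: keeper, cols: player

    # Keeper's mix q makes the player indifferent between shooting l, m, r:
    # E_j(q) = sum_i q_i * P[i, j] with q_r = 1 - q_l - q_m.
    A_q = np.array([
        [(P[0, 0] - P[2, 0]) - (P[0, 1] - P[2, 1]), (P[1, 0] - P[2, 0]) - (P[1, 1] - P[2, 1])],
        [(P[0, 1] - P[2, 1]) - (P[0, 2] - P[2, 2]), (P[1, 1] - P[2, 1]) - (P[1, 2] - P[2, 2])],
    ])
    b_q = np.array([P[2, 1] - P[2, 0], P[2, 2] - P[2, 1]])

    # Player's mix s makes the keeper indifferent between l, m, r:
    # K_i(s) = sum_j s_j * (1 - P[i, j]) with s_r = 1 - s_l - s_m.
    A_s = np.array([
        [(P[1, 0] - P[0, 0]) - (P[1, 2] - P[0, 2]), (P[1, 1] - P[0, 1]) - (P[1, 2] - P[0, 2])],
        [(P[2, 0] - P[1, 0]) - (P[2, 2] - P[1, 2]), (P[2, 1] - P[1, 1]) - (P[2, 2] - P[1, 2])],
    ])
    b_s = np.array([P[0, 2] - P[1, 2], P[1, 2] - P[2, 2]])

    try:
        q_l, q_m = np.linalg.solve(A_q, b_q)
        s_l, s_m = np.linalg.solve(A_s, b_s)
    except np.linalg.LinAlgError:        # a denominator equals zero
        return None

    s = {"l": s_l, "m": s_m, "r": 1 - s_l - s_m}
    q = {"l": q_l, "m": q_m, "r": 1 - q_l - q_m}
    if all(0 <= v <= 1 for v in list(s.values()) + list(q.values())):
        return s, q
    return None                          # inequalities (11)/(12) violated
```

Called with a complete payoff dictionary p, the function returns the equilibrium mixes (s, q) or None when no Nash equilibrium exists.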
The conditions (11) and (12) for the existence of a Nash equilibrium are pretty clumsy because (7) to (10) must be inserted. Though we must assume $0 \le p_{ij} \le 1$, a simplification appears not to be feasible. If one assumes that the player never misses the goal, we have $p_{ij} = 1$ for all $i \neq j$. This leads to a drastic simplification of the result:
$s_l = q_l = \dfrac{(1 - p_{mm})(1 - p_{rr})}{N}$ (13)
$s_m = q_m = \dfrac{(1 - p_{ll})(1 - p_{rr})}{N}$ (14)
$s_r = q_r = \dfrac{(1 - p_{ll})(1 - p_{mm})}{N}$ (15)
$N = (1 - p_{ll})(1 - p_{mm}) + (1 - p_{ll})(1 - p_{rr}) + (1 - p_{mm})(1 - p_{rr})$ (16)
It is straightforward to show that with this simple result the inequalities (11) and (12) are always fulfilled as long as $p_{ll}, p_{mm}, p_{rr} < 1$. With this simplification (never missing the goal) the Nash equilibrium will always exist. This will be important in the discussion of Section 4.
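A quick numerical check of this special case, under the expressions reconstructed above: with all off-diagonal $p_{ij}$ set to one, the mix returned by the general solver sketched earlier coincides with the simple expressions (13) to (16). The helper below assumes the mixed_equilibrium function from that sketch, and the diagonal values are arbitrary illustration numbers below one.

```python
# Quick check of the "player never misses" special case (p_ij = 1 for i != j),
# reusing the mixed_equilibrium sketch from above. Diagonal values are
# arbitrary illustration numbers below one.
def simplified_mix(p_ll, p_mm, p_rr):
    """Closed-form mix of (13)-(16); identical for player and keeper."""
    n = ((1 - p_ll) * (1 - p_mm) + (1 - p_ll) * (1 - p_rr)
         + (1 - p_mm) * (1 - p_rr))
    return {"l": (1 - p_mm) * (1 - p_rr) / n,
            "m": (1 - p_ll) * (1 - p_rr) / n,
            "r": (1 - p_ll) * (1 - p_mm) / n}

p = {"l": {"l": 0.6, "m": 1.0, "r": 1.0},
     "m": {"l": 1.0, "m": 0.3, "r": 1.0},
     "r": {"l": 1.0, "m": 1.0, "r": 0.7}}
s, q = mixed_equilibrium(p)              # always exists in this special case
print({k: round(v, 4) for k, v in s.items()})
print({k: round(v, 4) for k, v in simplified_mix(0.6, 0.3, 0.7).items()})
```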
3. Statistical Analysis of Equilibrium
As a general discussion of stability is pretty tedious, we now turn to the detailed numbers from penalty shootings in real championship games. Though the numbers given in [2] are not complete, one may realistically estimate the individual $p_{ij}$ from them. Please note that the chosen numbers are just taken for simplicity; the findings are identical for any chosen numbers. With these estimates one easily gets the strategy mix already quoted in the introduction:
$s_r \approx q_r \approx 0.43,\quad s_l \approx q_l \approx 0.38,\quad s_m \approx q_m \approx 0.19$ (17)
So we have a Nash equilibrium, as the inequalities (11) and (12) are obviously fulfilled. Even when varying the $p_{ij}$ not too drastically, the Nash equilibrium still exists. However, the given $p_{ij}$ are averages, and their distributions are unknown. More importantly, the suggested game is not limited to penalty shootings. In particular business situations one may assume any $p_{ij}$ with $0 \le p_{ij} \le 1$. As stated, the $p_{ij}$ are averages. Even if these averages show a Nash equilibrium, their certainly existing (narrow?) distribution may or may not fall into regimes of non-existent Nash equilibria.
In order not to stay too theoretical, we come back to the penalty shooting. In general, the $p_{ij}$ span a nine-dimensional space with areas of existing and non-existing Nash equilibrium. In order to see what we mean, we will keep seven of the above given $p_{ij}$ fixed and vary the remaining two between zero and one. Then we can make three-dimensional plots showing the areas of Nash equilibrium. We will not show all 36 possibilities, which would be pretty repetitive.
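Region plots of this kind can in principle be reproduced by a brute-force scan: fix seven of the $p_{ij}$, sweep the remaining two over a grid, and record where a valid equilibrium is found. The sketch below assumes the mixed_equilibrium function from the earlier sketch; the fixed payoff values are placeholders, not the estimates used in this paper.

```python
# Sketch: map the region of existing Nash equilibria while two payoff entries
# are swept over [0, 1]. Relies on mixed_equilibrium from the earlier sketch.
# The fixed payoff values below are placeholders for illustration only.
import copy

def equilibrium_region(p_fixed, vary, steps=101):
    """vary = ((i1, j1), (i2, j2)); returns a dict {(x, y): True/False}."""
    region = {}
    for a in range(steps):
        for b in range(steps):
            x, y = a / (steps - 1), b / (steps - 1)
            p = copy.deepcopy(p_fixed)
            (i1, j1), (i2, j2) = vary
            p[i1][j1], p[i2][j2] = x, y
            region[(x, y)] = mixed_equilibrium(p) is not None
    return region

p_fixed = {"l": {"l": 0.6, "m": 0.95, "r": 0.95},
           "m": {"l": 0.9, "m": 0.3, "r": 0.9},
           "r": {"l": 0.95, "m": 0.95, "r": 0.6}}
region = equilibrium_region(p_fixed, (("l", "l"), ("r", "r")))
share = sum(region.values()) / len(region)
print(f"Nash equilibrium exists on {share:.0%} of the grid")
```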
As a first example, we vary $p_{ll}$ and $p_{rr}$, the probabilities for a goal if player and keeper bet on the same corner. In Figure 3 the corresponding regions are plotted over $p_{ll}$ and $p_{rr}$ for $0 \le p_{ll} \le 1$ and $0 \le p_{rr} \le 1$. In the orange meshed areas the inequalities (11) and (12) are fulfilled and the Nash equilibrium exists. As one sees, a Nash equilibrium exists in less than 50% of the $p_{ll}$-$p_{rr}$-space. In the grey and uncolored areas at least one of the conditions (11) and (12) is violated, so there is no Nash equilibrium. In real penalty shootings, $p_{ll}$ and $p_{rr}$ will both be not too far below a value for which a Nash equilibrium exists, as Figure 3 shows.
Figure 3. Regime of Nash equilibrium varying $p_{ll}$ and $p_{rr}$.
As stated, $p_{ll}$ and $p_{rr}$ are averages. The values for $p_{ll}$ may be in the area of 0.6 to 0.9. Let us suppose there are just two pairs of player and keeper, one with a rather high and one with a rather low $p_{ll}$, so that the average $p_{ll}$ is about 0.53. From this average one would conclude that there is a Nash equilibrium though there is none. The same argumentation is possible for $p_{rr}$.
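The remedy implicit in this argument is to test every member of the underlying distribution rather than only its mean. A small sketch (again assuming the mixed_equilibrium function and the p_fixed example from the previous sketches) could look as follows; the sample values are invented for illustration.

```python
# Sketch: an average payoff entry can admit an equilibrium even though the
# individual values behind it do not. Reuses mixed_equilibrium and p_fixed
# from the earlier sketches; the sample values are invented for illustration.
def equilibrium_report(p_base, entry, samples):
    """Check equilibrium existence for each sampled value of one payoff entry
    and for the mean of the samples."""
    i, j = entry
    def exists(value):
        p = {k: dict(v) for k, v in p_base.items()}
        p[i][j] = value
        return mixed_equilibrium(p) is not None
    mean = sum(samples) / len(samples)
    return {"each": {v: exists(v) for v in samples}, "mean": (mean, exists(mean))}

print(equilibrium_report(p_fixed, ("l", "l"), samples=[0.65, 0.40]))
```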
As a second example, we consider a variation of $p_{mm}$ and $p_{mr}$, with the other $p_{ij}$ as given at the beginning of this section. The result is displayed in Figure 4. The argumentation is the same as for Figure 3: the orange meshed area is the area where a Nash equilibrium exists. A Nash equilibrium does not exist if $p_{mm}$ is large (a goal is likely although player and keeper are shooting and standing in the middle) and $p_{mr}$ is small (no goal although the player shoots to the right and the keeper stands straight). This is not a likely situation in penalty shooting, but it is a possible one. Furthermore, our example could also fit investment strategies where we have a priori no typical values for the $p_{ij}$.
Again, the most interesting discussion follows if one considers the fact that the $p_{ij}$ are averages. Let us assume a fixed $p_{mr}$ and an average $p_{mm}$ of about 0.4. At first glance, one might assume a Nash equilibrium for these values. However, the average 0.4 of $p_{mm}$ may be built of values around 0.6 and 0.2, respectively. As the values around 0.6 show no Nash equilibrium, the average is meaningless.
Figure 4. Regime of Nash equilibrium varying $p_{mm}$ and $p_{mr}$.
In both cases (Figure 3 and Figure 4) we see that the border between the areas of existing and non-existing Nash equilibrium is sharp. So an arbitrarily small change in the $p_{ij}$ can create a crossover from Nash equilibrium to no Nash equilibrium. This is of course also a chaos effect, though not the one considered in classic textbooks [3]. Here a topology ([7] to [10]) changes from solution to no-solution. This is closely related to the transition from integer Hausdorff dimension (no chaos) to non-integer Hausdorff dimension (chaos). For an example of Hausdorff dimension and chaos see e.g. [4]. Topological tools are used to investigate or describe chaos [11]; the Hausdorff dimension is one example. But it appears worthwhile to investigate a chaotic change of a topology itself. In [12] one finds an example of a chaotic change in warehouse topology.
The example here can also be understood as the problem of solving a system of linear equations (constructed from (1) to (6)) with the constraints (11) and (12). Proving the existence of a solution is a trivial problem, see e.g. [10]. The solution (especially with the constraints (11) and (12)) is straightforward to obtain but highly non-linear in the input parameters (here the $p_{ij}$). As shown in [4] for another example (differential equations), using averaged inputs does generally not lead to an averaged output. Here even the existence of a solution (Nash equilibrium) may disappear, which can also be the case in the example of differential equations [4].
4. Theoretical Discussion
In the last section we have seen that a game with two participants, each having a choice of three different strategies, does not always show a Nash equilibrium. This is, at least at first glance, surprising. The existence proof of at least one equilibrium in a certain class of games was the central part of John Forbes Nash’s PhD thesis of 1950. To see the point, we will summarize the proof. There are many ways to prove it; we give here a sketch of a proof using topology.
A game is defined by a map
$p: S_1 \times S_2 \times \cdots \times S_n \to \mathbb{R}^n$ (18)
The $S_i$ are the sets of possible strategies. In our example, keeper and player have three choices each, leading to $3 \times 3 = 9$ strategy combinations in total. Let $s^* = (s_1^*, \cdots, s_n^*)$ be a particular strategy bundle. If for every player $i$ and every strategy $s_i \in S_i$
$p_i(s_1^*, \cdots, s_i^*, \cdots, s_n^*) \ge p_i(s_1^*, \cdots, s_i, \cdots, s_n^*)$ (19)
then $s^*$ is called a Nash equilibrium. $p$ is the “profit” of a certain strategy; in our example, $p$ is goal for the player and no-goal for the keeper. So far for the definition of a Nash equilibrium. The number of strategies is finite. One can now “randomize” the strategies. In our example we also did this by using the probabilities $s_l$, $s_m$, and $s_r$ or $q_l$, $q_m$, and $q_r$ for shooting or jumping into a certain direction. As the randomized strategy space is continuous, so is the corresponding new map. Now the profit is an expected value. With this, we can construct a continuous Nash function. Applying Brouwer’s fixed point theorem (proven in 1910 for any finite dimension) proves that any game which can be extended to mixed strategies in this way has a Nash equilibrium.
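One standard way to make the “continuous Nash function” explicit (a common textbook construction in the spirit of Nash’s proof, not necessarily the exact map used in the thesis) works on mixed-strategy profiles $\sigma$ as follows, where $u_i$ is player $i$’s expected profit and $a$ runs over player $i$’s pure strategies:
$g_{i,a}(\sigma) = \max\{0,\ u_i(a, \sigma_{-i}) - u_i(\sigma)\}$
$T(\sigma)_i(a) = \dfrac{\sigma_i(a) + g_{i,a}(\sigma)}{1 + \sum_{a'} g_{i,a'}(\sigma)}$
$T$ is continuous, maps the compact and convex set of mixed-strategy profiles to itself, and its fixed points are exactly the Nash equilibria; Brouwer’s fixed point theorem then guarantees that such a fixed point exists.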
So far for the summary of the proof of Nash’s famous theorem. So why does our game defined in Section 2 (sometimes) have no Nash equilibrium? The problem is that our goal has ends (goalposts). Shooting slightly more or less to the left implies a continuous change. However, missing the goal makes the probability for a goal discontinuously zero. With this, a continuous Nash function cannot be constructed, and Brouwer’s fixed point theorem is not necessarily applicable. From this it becomes clear that under the assumption that the player never misses the goal (cf. (13) to (16)), a Nash equilibrium always exists.
Though our game does not necessarily have a Nash equilibrium, it sometimes exists. This is no contradiction because Nash’s theorem has no reverse validity, as e.g. the prisoner’s dilemma mentioned in the introduction shows. Its strategies cannot be randomized, because confessing and staying quiet are not mixable; nevertheless, the prisoner’s dilemma shows a Nash equilibrium. The game rock, scissors, paper is also discontinuous, with no randomized strategy between the three pure strategies. So a Nash equilibrium does not necessarily exist, and unlike in the prisoner’s dilemma, there is indeed none in pure strategies. Randomizing the strategies will lead to the Nash equilibrium mentioned in the introduction.
The main point of this publication is stated in Section 3. We had a game that, depending on the parameters of the payoff matrix (here the $p_{ij}$), sometimes shows a Nash equilibrium and sometimes does not. The input parameters are generally averages of a distribution. If not all of the distribution lies in the subspace where a Nash equilibrium exists, the entire calculation does not make any sense. This may happen in any game where the pure strategies cannot be randomized completely into a continuous function.
It is clear that the Nash equilibrium always exists if the player never misses the goal. It is intriguing that the Nash equilibrium still exists if missing the goal is sufficiently rare. In this particular example, these thresholds can be calculated easily; it would be interesting to generalize this. Continuity was the key for applying Brouwer’s fixed point theorem and with it proving the existence of a Nash equilibrium. Discontinuity does not exclude a Nash equilibrium. Is it possible to quantify discontinuity in order to say which discontinuity will still lead to a Nash equilibrium? It is an interesting question for basic research in topology.
The question of the previous paragraph is the question for a reversal of Brouwer’s fixed point theorem, which states: if $f: K \to K$ is continuous and $K$ is a compact, convex set of finite dimension, then an $x \in K$ exists with $f(x) = x$. If $f$ is not continuous, a fixed point may or may not exist. For what “forms” of discontinuity does a fixed point exist?
5. Discussion and Conclusions
The main result of this publication is given in Section 3. As an example, we have a game in which two participants can choose between three different strategies each. The main point is that this game only sometimes shows a Nash equilibrium. Whether or not a Nash equilibrium exists depends on the probabilities of success (payoff values, here called $p_{ij}$) of the different strategies. However, in almost all games in reality the payoff values are not fixed, given numbers. They are the result of observing similar situations in the past. So we have distributions, and the average payoff values are used. If this average lies in an area where a Nash equilibrium exists, but some values of the distribution lie outside it, then the entire calculation does not make any sense.
An immediate suggestion for further research is to find such a situation in games of the real world. It must be a game where the discrete strategies cannot be randomized, cf. Section 4. One example would be auctions where the best strategy can be to bid as late as possible; however, going past the deadline will discontinuously change the outcome.
Financial products and their “auctions” at the stock markets are also good candidates. Prices of financial products vary chaotically, see e.g. [12] to [15]. And more importantly, they are discrete: they are determined at certain points in time $t_i$ only. However, it would be incorrect to assume a price within an interval $t_i < t < t_{i+1}$. Mathematically it is always possible to create a continuous price, but it has no meaning. Prices of e.g. stocks are not conserved quantities [13], so they may take any value in between. Even worse, a price within such an interval does not exist, because the price is a result of the bids.
As displayed in Figure 3 and Figure 4, there is a sharp transition between areas where a Nash equilibrium exists and where it does not. Such discontinuous transition areas can always be a source of chaos. But this sort of chaos is a sudden change from equilibrium to no equilibrium. It is like the chaotic transition within the optimal number of warehouses [12]. Though chaos can be scrutinized by using topology [11], one should investigate this form of chaotic transition in structure itself. We suggest the term topological chaos.
As a simple starting point, consider a closed line, e.g. a regular triangle. It has a (Hausdorff) dimension of one. Topologically this line is identical to any closed line, be it e.g. a square or a circle, because there is a continuous transformation between all these lines. Naturally, all these lines have a Hausdorff dimension of one. However, such a line is topologically also identical to a Koch curve (see e.g. [3]), which has a Hausdorff dimension of $\log 4 / \log 3 \approx 1.26$, which indicates chaos. Though there is a continuous transformation from the regular triangle to the Koch curve, this transformation is non-analytic at any point. Whether this is a general result or just a coincidence may be a starting point for further research.
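For completeness, the quoted value follows from the usual self-similarity argument (a standard calculation, not taken from [3]): each construction step of the Koch curve replaces a segment by $N = 4$ copies scaled by a factor $r = 1/3$, so
$d_H = \dfrac{\ln N}{\ln(1/r)} = \dfrac{\ln 4}{\ln 3} \approx 1.26,$
which is strictly larger than the topological dimension one of the smooth closed lines above.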
Authors’ Contributions
All authors (GK and MG) contributed to conceptualization, formal analysis, investigation, methodology, writing the original draft, and writing, review, and editing. All authors have read and agreed to the published version of the manuscript.