Artificial Intelligence Differs Strikingly from Human Thinking Due to Quantitative Reasons

Abstract

Artificial intelligence (AI) is hailed as a new revolution, especially in business and economics, with all the opportunities and fears of a revolution. However, AI is based on trial and error learning. As recently proven in a Science article (Jeong et al., 2022), humans do not learn by trial and error. In this article, we examine the difference between human learning and trial and error learning quantitatively. The progress of trial and error learning is given by learning curves derived from a random walk. Though real human learning is far from being understood, its progress is given by human learning curves derived much earlier than 2022, which are in accordance with the new findings of Jeong et al. (2022). This allows a quantitative analysis of how AI differs from human learning. The greatest risk of AI is that one confuses it with human intelligence.

Share and Cite:

Grabinski, M. and Klinkova, G. (2024) Artificial Intelligence Differs Strikingly from Human Thinking Due to Quantitative Reasons. Theoretical Economics Letters, 14, 1095-1110. doi: 10.4236/tel.2024.143057.

1. Introduction

Artificial intelligence (AI) has been a top buzzword for at least the last ten years, be it in science, business, or economics. It is next to impossible to list even 1% of the relevant literature. As a substitute, consider an interview with a Stanford professor (Thrun, 2023) in a daily newspaper.

Naturally, most people involved in AI are optimistic that AI is a panacea for most businesses and even entire economies. Politicians are almost competing over who supports AI the most. Needless to say, almost nobody is working on the limitations of AI.

So, it comes as a big surprise (or as no surprise, considering the last paragraph) that there has been very little response to an article in Science (Jeong et al., 2022) clearly proving the difference between human intelligence and AI. For a simple (non-scientific) summary of Jeong et al. (2022), please see The Economist (2023).

The essence of Jeong et al. (2022) is as follows. As early as 1904, Ivan P. Pavlov received the Nobel prize for physiology; for a more modern summary of his work, please see e.g. Windholz (1997). His famous dog experiment laid the foundation for the idea that learning is done by trial and error. A cue (the chime of a bell in the case of Pavlov’s dog) is followed by a reward (food in the case of Pavlov’s dog). Once the cue appears, the reward is expected (Pavlov’s dog started to salivate). The actual reward is then analyzed, leading to a verification, a falsification, or a different expectation the next time. The latter is called “learning” by trial and error. “Belief in this approach [= trial and error learning] was itself reinforced in the late 20th century by two things. One of these was the discovery that it is also good at solving engineering problems related to artificial intelligence (AI). Deep neural networks learn by minimising the error in their predictions” (cited from The Economist, 2023). So, it was not too difficult to develop programs simulating trial and error learning. Eventually, apps like ChatGPT came out of it. Contrary to this, Jeong et al. (2022) showed by methods of physiology that humans (in most cases) do not learn by trial and error. That makes AI fundamentally different from real (human) intelligence.

Assuming trial and error learning, learning curves in business and especially in production were developed and used successfully. It is hard to say who invented learning curves, but they have been used for many decades (see e.g. Schieferer, 1957). They are derived from a random walk. Learning curves describe e.g. the cost decrease due to learning over time as:

$$\mathrm{cost}(t) \propto t^{-\alpha} \quad \text{with } \alpha > 0 \tag{1}$$

It is not easy to find a derivation of Equation (1). As a convenience to the readers, we give one in Chapter 2. However, in a much more complex situation, a derivation of an equation like Equation (1) can be found in Johnson et al. (2011).

Hardly noticed by the scientific community, there is a severe criticism of Equation (1). In Grabinski (2007) and also Klinkova and Grabinski (2012), a learning curve was developed without assuming trial and error learning. These authors developed a quantitative model based on finding errors in previous situations by analytic thinking. The details are given in Chapter 3. Jeong et al. (2022) verified the theory of Grabinski (2007) and Klinkova and Grabinski (2012) by their experiments. Instead of Equation (1), one will e.g. get:

$$\mathrm{cost}(t) = (c_0 - c_\infty)\, e^{-t/\tau} + c_\infty \quad \text{with } \tau > 0 \tag{2}$$

For more details of Equation (2), please see Chapter 3. It should be noted that Equation (2) was essentially developed in 1994 by the first author though not (scientifically) published (Grabinski, 1994).

With the above, we can formulate the main result of this publication: AI is based on trial and error learning. As proven by Jeong et al. (2022), human learning is not based on trial and error. In contrast to e.g. ants, for which trial and error learning works quite well, humans learn by analyzing errors of the past. This difference was already mentioned in Klinkova and Grabinski (2012). Completely independently of Klinkova and Grabinski (2012), Hassan and Fadhel (2018) reported part of it, though without any derivation of the model functions used there.

Therefore, AI differs from human intelligence in principle. Equations (1) and (2) show the difference between AI and the human brain quantitatively for the first time. In Chapter 4, we will pinpoint the difference and give two examples. One is a complicated learning curve and the other is an example from ChatGPT. Both examples show the shortcomings of trial and error “learning”.

We close with conclusions and further work in Chapter 5. The main point is that AI can be helpful with time-consuming but simple tasks. However, even if AI gives results coinciding with reality perfectly, these results will not constitute a theory in a scientific sense. An AI 2.0 really performing human thinking appears to be illusory for at least many years to come.

2. Trial and Error Learning

Learning by trial and error is normally modelled by a random walk. In Subsection 2.1, we will explain how a random walk leads to learning curves. Even within this framework of trial and error learning, there are severe limitations to the random walk approach. This will be discussed in Subsection 2.2.

2.1. From Random Walk to Trial and Error Learning

There are many versions of a random walk. As a simple example, consider a person only able to take a step to the left (with probability $p_l$) or a step to the right (with probability $1 - p_l$). The probability $p_n(a)$ of going $a$ of $n$ steps to the left (and $n - a$ steps to the right) is:

$$p_n(a) = p_l^{\,a}\,(1 - p_l)^{n-a}\,\frac{n!}{a!\,(n-a)!} \tag{3}$$

Equation (3) is displayed in Figure 1. There the random walk starts at the origin. After $n = 30$ steps with a probability of $p_l = 1/3$ to the left and $2/3$ to the right, the highest probability is for a net position of 10 steps to the right of the origin; on average, one will end up 10 steps to the right. With a sufficient number of steps $n$, one will reach any position if only integer positions are considered.

In general, the mean number of left steps from the distribution of Equation (3) is:

$$\sum_{a=0}^{n} p_l^{\,a}\,(1 - p_l)^{n-a}\,\frac{n!}{a!\,(n-a)!}\; a \;=\; p_l\, n \tag{4}$$

The standard deviation is therefore:

Figure 1. Probability distribution of the steps from the origin ($n = 30$ and $p_l = 1/3$).

$$\sqrt{\sum_{a=0}^{n} p_l^{\,a}\,(1 - p_l)^{n-a}\,\frac{n!}{a!\,(n-a)!}\,(a - p_l n)^2} \;=\; \sqrt{(p_l - p_l^2)\, n} \tag{5}$$

As the standard deviation is proportional to $\sqrt{n}$ (and the variance proportional to $n$), this simple random walk yields $\alpha = 1/2$ in Equation (1).
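As a concrete check of Equations (3)-(5), the following minimal Python sketch (our addition; the parameter values are those of Figure 1) evaluates the distribution numerically and compares mean and standard deviation with the closed forms:

import math

n, p_l = 30, 1 / 3  # parameters of Figure 1

def p(a):
    """Probability of a steps to the left out of n, Equation (3)."""
    return p_l ** a * (1 - p_l) ** (n - a) * math.comb(n, a)

mean = sum(p(a) * a for a in range(n + 1))               # left-hand side of Equation (4)
var = sum(p(a) * (a - mean) ** 2 for a in range(n + 1))  # square of Equation (5)

print(mean, p_l * n)                                     # 10.0  10.0
print(math.sqrt(var), math.sqrt((p_l - p_l ** 2) * n))   # 2.581...  2.581...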

Using two dimensions is almost identical to the one-dimensional case. However, it is not so easy to give an analytic distribution like that of Equation (3). But it is always possible to simulate the resulting path. Of course, a simulation of a random process never gives the same result twice, even in a quite long run. In Figure 2 and Figure 3, a random walk with an absolute step length of one and a random angle per step has been displayed. The two simulations of 500,000 steps each look quite different. Without going into the details, everything (e.g. the variance) stays the same. From this it is clear that the 2d random walk is ideal to model the way e.g. ants find sugar. Please note that the success rate is essentially given by the number of steps or, even more importantly, the number of ants trying it in parallel. If only 100 ants take 100 steps each, the result is a “completely” filled inner area (say $x \in [-10, 10]$ and $y \in [-10, 10]$). A simulation is displayed in Figure 4. In real ant colonies, maybe 100,000 ants perform 100,000 steps per day each. This leads to an almost identical picture to Figure 4, except that an area of roughly 300 by 300 is almost completely covered. Comparing Figure 2 and Figure 3 with Figure 4 clearly shows that a team of ants is vastly more efficient than a hard-working single one. For a much more advanced use of a random walk in ant colonies, please see e.g. Strotz (2018).

Though this is in excellent accordance with experiments in ant colonies, it would be foolish to say that ants learn to find the sugar, as their behavior is perfectly described by a random process. With this in mind, it is almost ludicrous to speak of swarm intelligence, though a big group of ants is much more efficient.
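For readers who want to reproduce pictures like Figures 2-4, here is a minimal simulation sketch (our own illustration in Python; plotting the paths is left out):

import math
import random

def random_walk_2d(steps, seed=None):
    """Random walk with step length one and a uniformly random direction per step."""
    rng = random.Random(seed)
    x = y = 0.0
    path = [(x, y)]
    for _ in range(steps):
        phi = rng.uniform(0.0, 2.0 * math.pi)
        x += math.cos(phi)
        y += math.sin(phi)
        path.append((x, y))
    return path

# Two long runs look quite different (cf. Figure 2 vs Figure 3) ...
walk_a = random_walk_2d(500_000, seed=1)
walk_b = random_walk_2d(500_000, seed=2)
print(walk_a[-1], walk_b[-1])  # end points differ although the statistics are identical

# ... while 100 "ants" taking 100 steps each densely cover the inner area (cf. Figure 4).
colony = [random_walk_2d(100, seed=k) for k in range(100)]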

It is straightforward, though maybe puzzling, to go to three or more dimensions. One can also add a varying step length or some counteraction. More can be found in Rudnick and Gaspari (2010) or other standard textbooks like

Figure 2. Random walk in x-y-plane, step length one, random direction, 500,000 steps; first simulation by authors.

Figure 3. Random walk in x-y-plane, step length one, random direction, 500,000 steps; second simulation by authors.

Figure 4. Random walk of 100 ants performing 100 steps each; own simulation.

Bronshtein et al. (2007).

There is, however, an important note on all random walks. According to Pólya’s theorem (Pólya, 1921), every point is reached after a sufficient number of steps in one or two dimensions (if a grid is considered). This is not the case in three or more dimensions. This leads to the sloppy remark of Shizuo Kakutani (Wikipedia, 2024): “A drunk man will find his way home, but a drunk bird may get lost forever”.

2.2. Limitations of the Random Walk Approach

The most obvious weakness of random walk learning can be seen by taking the limits in Equation (1). The real cost does not go to infinity at $t = 0$ and (unfortunately) generally not to zero for $t \to \infty$. Of course, one may modify Equation (1) into:

$$\mathrm{cost}(t) = \left(t + \left(\frac{1}{c_0 - c_\infty}\right)^{1/\alpha}\right)^{-\alpha} + c_\infty \quad \text{with } \alpha > 0 \tag{6}$$

with $c_0 > c_\infty$ being the costs at $t = 0$ and $t \to \infty$, respectively. Equation (6) shows the same limits as Equation (2), but the modification in Equation (6) looks arbitrary. And Equation (6) is by no means the result of any kind of random walk. Furthermore, Equation (6) (and Equation (1)!) is not scale invariant. As cost and time have dimensions, the result will depend on the units in which time and costs are measured. Therefore, Equation (1) and Equation (6) cannot describe reality.
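The limits of Equation (6) are easily checked numerically; a small sketch with illustrative values $c_0 = 100$, $c_\infty = 20$, and $\alpha = 1/2$ (our choices, not fitted to anything):

import math

c0, cinf, alpha = 100.0, 20.0, 0.5  # illustrative values

def cost_power(t):
    """Modified power-law learning curve, Equation (6)."""
    return (t + (1.0 / (c0 - cinf)) ** (1.0 / alpha)) ** (-alpha) + cinf

def cost_exp(t, tau=5.0):
    """Learning curve of Equation (2), for comparison."""
    return (c0 - cinf) * math.exp(-t / tau) + cinf

print(cost_power(0.0), cost_exp(0.0))  # both yield c0 = 100.0 at t = 0
print(cost_power(1e9), cost_exp(1e9))  # both approach cinf = 20.0 for large t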

The other problem with learning described by a random walk is dimensionality. For ants finding nutrition, it is fine. There we have a two-dimensional Euclidean space. According to Pólya’s theorem (Pólya, 1921), it is clear that the source of nutrition will be found eventually. Trial and error may or may not be the most efficient way to find the food, but it is possible.

Going to higher dimensions than two, success is by no means guaranteed. This is very plausible, as complex problems cannot be solved by trial and error. For ants, finding the sugar essentially depends on how many ants perform the job. Finding an explanation for e.g. the dark matter in the universe will for sure not be successful even if one assigns millions of people with an IQ of around 100.

Another problem in higher dimensions, or better in non-Euclidean spaces, is even more severe. Finding an object in three-dimensional space is a pretty rare problem for humans. In business or economics, the problem is e.g. finding the best location for production, having the optimal make-or-buy strategy, setting the optimal interest rate, using subsidies wisely, making laws that protect without hindering the market economy, … The list is of course almost endless. And even in a not very complex problem, five or more factors must be optimized simultaneously. The number of factors is the dimension of the random walk in trial and error learning. The resulting n-dimensional space will almost never be isomorphic to an n-dimensional Euclidean space. Even if one assigns a number to each factor, these numbers will not necessarily build a vector in some mathematically defined space. This problem is of course well-known in pure mathematics, and it rarely causes wrong results in physics. In business and economics, however, it is more often ignored than recognized. This is the main finding of Lunkenheimer et al. (2022). And there is no easy fix for it.

3. Model for Human Learning

Up to now we have shown that learning via trial and error is far from being useful for problems more complex than finding food around an ant colony. Furthermore, Jeong et al. (2022) have shown that humans are generally not learning via trial and error. As AI is based on trial and error, it cannot be useful for complex problems. As a random walk is not adequate to describe human learning, what is a correct model? Is there an “AI 2.0” possibly using this model? The first question will be answered in this chapter. The answer to the second question is a clear NO. This is at least the firm belief of the authors.

In order to get a learning curve for humans, one has to define human learning beforehand. First, there is a task to do. In production, it may be drilling holes in a metal sheet and bending and cutting it in order to eventually get some metal case. For a sculptor, the task may be getting something like the Venus de Milo out of a big rock using hammer and chisel. For the average person, it may be building a closet from a big box of parts bought from IKEA.

It is important to note that no human will start by trial and error. The production worker will not drill holes randomly and bend the metal sheet arbitrarily, hoping to get something useful in the end. The sculptor will not hammer in a random pattern, and the customer of IKEA will not try to put an arbitrary screw in a randomly chosen hole. The customer of IKEA will probably study the assembly instructions first. The sculptor and the production worker will both have had some training or formal education before they start.

So each process is already quite optimal even if performed for the first time. As the old Latin proverb states: “Repetitio mater sapientiae est”. [Repetition is the mother of wisdom.] In doing at least similar things more and more often, one will avoid mistakes and become more efficient. It will take less time or, economically speaking, the cost will shrink.

As a measure for the effort it takes to perform something (e.g. assembling an IKEA closet), we take the variable $c$. Economically, one may always think of $c$ as the cost. Doing it for the first time, the effort will be $c_0$. In the long run (after all mistakes are eliminated), the effort is $c_\infty$. Please note that $c_0$ and $c_\infty$ will not only depend on what is built but also on who is building it. Unlike the above-mentioned ants, $c_0 \to \infty$ for some people if the task is too difficult. By performing the process more and more often, one will find shortcomings or mistakes. To describe it mathematically, we have to assume a continuum. Finding some mistakes will bring a change of $\Delta c < 0$. If something is done very inefficiently, improvement is easy as there is a vast number of mistakes. Finding the last mistakes is probably most difficult; if it were not, the mistake would have been found earlier. This is in contrast to trial and error, where the mistakes are found by pure luck. Putting this mathematically, we have:

$$\Delta c \propto -(c - c_\infty) \tag{7}$$

Translated into a differential equation, we have:

$$\frac{dc}{dn} = -\frac{1}{n_0}\,(c - c_\infty) \tag{8}$$

where $n$ is the number of trials and $1/n_0$ is a proportionality constant describing the speed of learning. $1/n_0$ is typically a monotonically increasing function of the IQ of the performing person. The differential Equation (8) is easily solved to:

$$c(n) = (c_0 - c_\infty)\, e^{-n/n_0} + c_\infty \tag{9}$$

In accordance with the continuum limit, one will not learn in discrete attempts but continuously during each arbitrarily small time interval $t$. Letting $n \to t$ and, for convenience, $n_0 \to \tau$, we finally have:

$$c(t) = (c_0 - c_\infty)\, e^{-t/\tau} + c_\infty \tag{10}$$

So, we have just derived Equation (2) of the introduction.
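As a cross-check of this derivation, one can integrate Equation (8) numerically and compare the result with the closed form of Equation (10); a brief sketch with illustrative values for $c_0$, $c_\infty$, and $\tau$ (our choices):

import math

c0, cinf, tau = 100.0, 20.0, 5.0  # illustrative values

# Forward-Euler integration of dc/dt = -(c - cinf)/tau, i.e. Equation (8)
# after the replacements n -> t and n0 -> tau.
steps, dt = 10_000, 0.001         # integrate up to t = 10
c = c0
for _ in range(steps):
    c += dt * (-(c - cinf) / tau)

analytic = (c0 - cinf) * math.exp(-10.0 / tau) + cinf  # Equation (10) at t = 10
print(c, analytic)                # ~30.82 vs ~30.83: the numerics match the solution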

As mentioned in the introduction, Equations (10) and (2) originated in Grabinski (1994), have appeared in various lecture notes of the first author for over 20 years, were formally published in Grabinski (2007), and have been used and extended in Klinkova and Grabinski (2012). Though neither citing these sources nor giving a derivation, something close to Equation (10) can be found in Hassan and Fadhel (2018).

4. Difference of AI-Learning and Human Learning

AI learning is based on trial and error. Quantitatively, it can be described by one of the random walk models; the simplest form is given by the learning curve of Equation (6). As already assumed in Grabinski (1994) and explicitly stated in Klinkova and Grabinski (2012) (“Therefore, the random walk approach is fine for non-thinking structures such as animals or plants.”), humans (and more developed animals) do not learn by trial and error. Humans learn by finding mistakes via analytic thinking, which one may also call creativity. This has been explicitly proven in Jeong et al. (2022). Without going into the details, Jeong et al. (2022) showed a dopamine increase when one of the above-mentioned mistakes is found during problem solving, rather than essentially at the end as in trial and error.

One fundamental difference between trial and error and human learning is that the former needs many eager participants (see Figure 4 of the 100 ants finding the sugar), whereas the latter depends on a high IQ, which essentially determines the speed of learning $1/\tau$ in Equation (10). This corresponds to the fact that trial-and-error-based AI needs powerful computers. But even a supercomputer is not intelligent. It does not have the creativity to explain e.g. dark matter in the universe.

Summarized, AI is helpful for simple problems which do not need a high IQ but nevertheless require many hours of human brain power. Pinpointing the difference between human learning and AI quantitatively is more difficult. The simplest version of a trial and error learning curve (Equation (6)) and the human learning curve (Equation (10)) appear strikingly different from a purely mathematical point of view. However, when fitting data of e.g. the cost reduction in a new production line during the first year (and making a prognosis for the second year), Equation (10) will most likely fit as well as Equation (6). In what follows, we will give two examples where the difference becomes more obvious. Firstly, we consider a more complicated learning curve, and secondly, we give an example from ChatGPT, the archetypal example of AI.
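To illustrate that both functional forms can describe such data comparably well, the sketch below generates noisy “first year” cost data from the power law of Equation (6) and fits the human learning curve of Equation (10) to it. All numbers are our own illustrative choices, and numpy/scipy are assumed to be available:

import numpy as np
from scipy.optimize import curve_fit

def power_curve(t, c0, cinf, alpha):
    """Trial and error learning curve, Equation (6)."""
    return (t + (1.0 / (c0 - cinf)) ** (1.0 / alpha)) ** (-alpha) + cinf

def exp_curve(t, c0, cinf, tau):
    """Human learning curve, Equation (10)."""
    return (c0 - cinf) * np.exp(-t / tau) + cinf

rng = np.random.default_rng(0)
t = np.linspace(0.5, 12.0, 24)  # "first year", in months
data = power_curve(t, 100.0, 20.0, 0.6) + rng.normal(0.0, 0.2, t.size)

params, _ = curve_fit(exp_curve, t, data, p0=(25.0, 20.0, 2.0))
rms = float(np.sqrt(np.mean((data - exp_curve(t, *params)) ** 2)))
print(np.round(params, 2), f"rms residual = {rms:.2f}")  # residual close to the noise level

Which of the two laws generated the data can hardly be decided from such a fit alone; only the limits and the underlying derivation distinguish them.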

In the Afghan war, there were many terrorist attacks on American troops and sadly many fatalities. The American troops learnt how to defend themselves, and luckily the fatalities decreased in subsequent attacks. Johnson et al. (2011) fitted the number of fatalities in a renowned article in Science. They assumed trial and error learning within the American troops and used a quite sophisticated dynamic Red Queen random walk. The result is displayed in Figure 5. The blue squares are the actual numbers of casualties. The first attack had roughly 600 casualties, and within 23 attacks the total number was roughly 1800, fortunately far below 23 × 600 = 13,800. Obviously, the Americans learned to counteract. The red line is the fit of Johnson et al. (2011). Though a nice fit, there are two remarks: 1) Not only the Americans learned but also the terrorists. 2) Humans do not learn by trial and error as assumed there. The first remark can be partly overcome by saying that the assumed learning of the Americans in Figure 5 is an effective learning, meaning how much faster they learned in comparison to the terrorists. As the learning curve implied here is non-linear, this can be an approximation at most. The second remark, that humans do not learn by trial and error, is much more severe. This was the starting point of Klinkova and Grabinski (2012). The learning curve of Equation (10) was the solution of a differential equation. In Klinkova and Grabinski (2012), two coupled differential equations were used in accordance with the necessary two-party learning. The result is displayed in Figure 6. The blue bullets are the actual casualties, identical to the blue squares in Figure 5. The blue line is the fit of Klinkova and Grabinski (2012), which is almost perfect.

Figure 5. Cumulative number of fatalities $T_n$ over the number of attacks $n$.

Figure 6. Fit of casualties by Klinkova and Grabinski (2012).

Our second example shows how ChatGPT fails because it relies on AI and with it on trial and error learning. First, we have to find a really new and previously unanswered question, at least in the “opinion” of ChatGPT. An integral calculates a continuous arithmetic average of the functional values (of course, ChatGPT “knows” this, as it can be found on the internet and in many textbooks). In Grabinski and Klinkova (2023a), the question was whether it is possible to do the same but, instead of building a sum, building a product of the functional values. If successful, this leads to a continuous geometric mean. It was successful, and the construction has been called an inteduct. We have chosen this example because it is not (yet) a topic of mathematics textbooks. First, we gave ChatGPT a pure knowledge question:

What is an inteduct?

ChatGPT answered (on March 24, 2024):

“‘Indeduct’ doesn’t appear to be a standard term in mathematics or any other field that I’m aware of. It’s possible that it might be a typographical error or a term specific to a certain context or specialized field. If you can provide more context or clarify where you encountered the term, I might be able to offer more assistance”.

This was a pure knowledge question (having nothing to do with thinking or intelligence), and ChatGPT essentially said: “I don’t know”. This is in some sense correct. Please note that e.g. Google gives three correct references in top position, though this is not an answer to the question.

Then, we asked a question which requires some thinking:

“Is there a continuous version of a product, like integrals for sums?”

ChatGPT answered (on March 24, 2024):

“Yes, there is indeed a continuous analogue of multiplication, analogous to integration for sums. This concept is known as integration. Integration can be thought of as a continuous version of multiplication in the sense that it combines or accumulates infinitesimal values over a given interval.

Just as how summation is used to find the total of a series of values, integration is used to find the accumulated total of a function over a given interval. This process involves dividing the interval into infinitesimally small pieces, evaluating the function at each piece, and summing up these values. The result is the integral of the function over that interval.

Mathematically, integration is denoted using the integral symbol and is defined as the limit of a sum:

$$\int f(x)\, dx$$

Here, $f(x)$ is the function being integrated, $dx$ represents an infinitesimal change in the variable $x$, and the integral is taken over a certain interval.

Integration has many applications in mathematics, physics, engineering, and other fields, just as multiplication and summation do. It plays a fundamental role in calculus and is used to solve problems involving areas, volumes, rates of change, and many other quantities that can be modeled with functions”.

This is nice English, and sentence by sentence the phrases are correct. However, the text as a whole is complete nonsense. And even worse, it is by no means an answer to the original question.

So, we come to the sad result that AI-based ChatGPT cannot answer a question that requires some thinking, even though the question was not too difficult. Again, Google gives much more relevant information. Probably every first-year mathematics student will give a much smarter answer.
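For completeness, the continuous geometric mean the question aims at is easy to compute numerically. The sketch below uses the exp-of-averaged-logarithm construction, which is our simplified paraphrase of the idea; see Grabinski and Klinkova (2023a) for the actual definition and properties of the inteduct:

import math

def continuous_geometric_mean(f, a, b, n=100_000):
    """exp of the arithmetic average of ln f over [a, b], via a midpoint rule.
    This paraphrases the continuous geometric mean; f must be positive on [a, b]."""
    h = (b - a) / n
    log_avg = sum(math.log(f(a + (k + 0.5) * h)) for k in range(n)) * h / (b - a)
    return math.exp(log_avg)

# Sanity check with f(x) = x on [0, 1]: the continuous arithmetic mean is 1/2,
# whereas the geometric mean is exp(-1), since the integral of ln x over [0, 1] is -1.
print(continuous_geometric_mean(lambda x: x, 0.0, 1.0))  # ~0.3679
print(math.exp(-1.0))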

On the good side, AI will not make thinking humans jobless. And ChatGPT is no threat to theses and exams as long as questions are asked that require thinking and some intelligence. And such types of questions are the only reasonable ones anyway.

5. Conclusion and Future Work

We have shown that AI can automate only simple jobs requiring little intelligence. And this is not due to a lack of more sophisticated AI systems. It has a reason of principle, as AI is based on trial and error “learning” in contrast to human thinking. Therefore, AI will never be a substitute for human thinking. From the point of view of a linguist, artificial intelligence is a pretty misleading term, as it has nothing to do with intelligence in the sense of IQ (intelligence in the usage of the secret services may largely benefit from AI). From the marketing and sales point of view, the term AI deserves a Nobel prize.

On the positive side, AI is good for simple but time-consuming tasks. This is comparable to the industrial revolution, where severe muscle work was taken over by steam-powered and later electrically powered machines. But there is no hint that AI will be a revolution. During the industrial revolution, two things happened. Firstly, things became possible that were illusory before (e.g. producing steel on a mass scale). Secondly, these things were of clear economic value (e.g. for transportation). With AI, it is clear that things can be done which were almost impossible before (e.g. analyzing vast amounts of customer data). Whether these new achievements are of any economic value is far from clear. The ultimate proof will come much later, but the chances are dim. All economic “revolutions” of the last 40 to 50 years are based on IT (data processing, internet, internet of things, and AI). Though many people speak of an improvement due to IT, there is no proof. Sometimes the use of computers is taken as a synonym for innovation. Quite often, investment in innovation is measured by investment in IT. The famous article of Michael Hammer on reengineering (Hammer, 1990) proves otherwise. It clearly states: “In particular, heavy investments in information technology have delivered disappointing results, largely because companies tend to use technology to mechanize old ways of doing business”. Though Hammer’s article intended to change this, there is no hint that it happened; see also Grabinski (2007). (At least, the first author was a management consultant in the 1990s who participated in, led, and sold various projects having reengineering in their topics. Partly they brought a big success for the client, but none had anything to do with reengineering in the sense of Michael Hammer.) A recent article in NZZ (Keusch, 2024) made an even more ludicrous judgement about AI, probably without intending it. There, the fear that AI will create unemployment was discussed. The AI industry reported that there is no reason to worry: some jobs may be lost because of AI, but a similar number [but better paid!] will be created.

Equating AI with human intelligence also supports the often-made but wrong assumption that IQ can be trained or that a lack of IQ can be overcome by effort. But IQ is fixed at birth, or at least in early life, as Fergusson (2019) summarized from a psychological point of view and Grabinski and Klinkova (2020) proved mathematically. If that were not the case, paying somebody permanently more because of a higher IQ would be as unjust as paying somebody more for bigger biceps, though machines overcome the problems of a weak biceps easily. Something especially trade unions would love to have.

Trial and error is often used where the theory is not known (at least to the decision-maker). In such circumstances, AI can be very helpful, as AI automates trial and error, and it can take into account a number of factors impossible for even a large group of humans. However, the result of trial and error (or nowadays AI) is no substitute for a theory. Especially economists are infamous for ignoring this. As a small example, consider NAIRU (Tobin, 1980). NAIRU by itself is correct. There is a certain connection between unemployment and inflation, and inflation can be corrected by setting interest rates. Therefore, setting the correct interest rate is essential to keep unemployment at bay. But exactly how much a change in the interest rate affects inflation, and with it unemployment, is unknown. Trial and error (and maybe AI) seems to be a good approach. It is the method of choice of all central banks, at least in the USA, EU, UK, and Japan. Nevertheless, there are sometimes surprises where a previously reasonable interest rate setting completely fails. As trial and error is no theory, there may be e.g. instability, which makes trial and error completely useless. In this case, the instability has been explicitly proven by Schädler and Grabinski (2015). If trial and error results were taken as a theory, the partly different results would appear as simultaneously valid contradicting theories, which is of course logical nonsense. At least one “theory” must be wrong. But even at the Nobel prize level, this is provably ignored in economics. Just consider the recently deceased Daniel Kahneman, who shared the Nobel prize in 2002 with Vernon Smith, whose theory contradicts Kahneman’s. The same was repeated in 2013 with Shiller and Fama. (The authors of this publication have a theoretical reasoning that Kahneman and Shiller are correct.)

Below Equation (6) (the learning curve of trial and error), we remarked that Equation (6) is not scale invariant. Furthermore, the exponent $\alpha$ (if it is not a positive integer) makes Equation (6) non-analytic. This is typical for results of a random walk and an indicator that it does not describe reality. It is like the puzzling fact that the Cobb-Douglas production function (Cobb & Douglas, 1928) is non-analytic, though this conundrum has been explained recently (Grabinski & Klinkova, 2023b). Just as a note of caution, such non-analytic behavior does appear in statistical physics at e.g. phase transitions; see e.g. Grabinski (1990). There it is, however, by no means a conundrum. Scale invariance is of course generally valid in physics. But at e.g. phase transitions, the correlation length goes to infinity, and scale invariance holds for every finite length only. This fact is explicitly used in renormalization group theory to calculate such critical exponents.

As already said, an AI that really mimics human thinking or intelligence will not be feasible for the time being. This has to do with the fact that human thinking is far from being understood. Even if the results of IT, and especially AI, become partly equal to or even better than those of humans, this does not mean that these systems work as humans do. Consider e.g. computer-based face recognition, which is meanwhile by no means worse than human performance and vastly better than that of even mildly face-blind people. However, it works by analytic measurements like the distance between the eyes, the position of the nose, etc. Face-blind people are normally better than average at such analytic recognition, which clearly proves that humans do not make analytic measurements to recognize faces. At Stanford University (Kosinski & Wang, 2018), a neural network system has been used to detect sexual orientation from male faces with a hit rate of 81%. Humans, at 61%, are just slightly above the guessing rate of 50%. Though this is a big achievement in computer science and psychology, it does not mean that this neural network can think like a human. Again, this neural network uses analytic measures rather than the gut feeling (maybe in connection with hormones) humans use.

Our further work should focus on failures of AI. Two examples are given in Chapter 4. Such research should not be done to discredit AI but to avoid much more severe mistakes like those given in the examples of Chapter 4. As the hype around AI grows bigger and bigger, it could be quite dangerous to rely on AI completely.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Bronshtein, I. N., Semendyayev, K. A., Musiol, G., & Muehlig, H. (2007). Handbook of Mathematics (5th English ed.). Springer.
[2] Cobb, C. W., & Douglas, P. H. (1928). A Theory of Production. The American Economic Review, 18, 139-165.
[3] Fergusson, M. (2019). The Curse of Genius. The Economist.
https://www.economist.com/1843/2019/04/29/the-curse-of-genius
[4] Grabinski, M. (1990). Explanation of the Discontinuity in the Spin-Relaxation Time of 3He-A1. Physical Review Letters, 63, 814.
[5] Grabinski, M. (1994). Stockpiling Spare Parts for 25 Years. Consulting Project for the Eidgenössisches Militärdepartement (EMD), Swiss Ministry of Defense.
[6] Grabinski, M. (2007). Management Methods and Tools. Gabler.
[7] Grabinski, M., & Klinkova, G. (2020). Scrutinizing Distributions Proves That IQ Is Inherited and Explains the Fat Tail. Applied Mathematics, 11, 279-289.
https://doi.org/10.4236/am.2020.1110063
[8] Grabinski, M., & Klinkova, G. (2023a). Like a Sum Is Generalized into an Integral, a Product May Be Generalized into an Inteduct. Applied Mathematics, 14, 957-984.
https://doi.org/10.4236/am.2023.145017
[9] Grabinski, M., & Klinkova, G. (2023b). Explaining Cobb-Douglas with the New Mathematics of Inteduct. Theoretical Economics Letters, 13, 1383-1391.
https://doi.org/10.4236/tel.2023.136077
[10] Hammer, M. (1990). Reengineering Work: Don’t Automate, Obliterate. HBR.
https://hbr.org/1990/07/reengineering-work-dont-automate-obliterate
[11] Hassan, M. H., & Fadhel, B. T. (2018). Comparative Analytical Study Considering the Analogy of Learning Creativity Quantification versus Ant Colony Intelligence. Advances in Social Sciences Research Journal, 5, 51-71.
https://doi.org/10.14738/assrj.53.4259
[12] Jeong, H. et al. (2022). Mesolimbic Dopamine Release Conveys Causal Associations. Science, 378, eabq6740.
https://www.science.org/doi/10.1126/science.abq6740
[13] Johnson, N. et al. (2011). Pattern in Escalations in Insurgent and Terrorist Activity. Science, 333, 81-84.
https://www.science.org/doi/10.1126/science.1205068
[14] Keusch, N. (2024). Künstliche Intelligenz ersetzt nicht nur Jobs, sondern schafft auch neue (AI Will Not Only Eliminate Jobs. It Will Also Create New Ones). NZZ.
https://www.nzz.ch/wirtschaft/kuenstliche-intelligenz-ersetzt-nicht-nur-jobs-sondern-schafft-auch-neue-doch-wie-wird-man-zum-ki-experten-ld.1822323
[15] Klinkova, G., & Grabinski, M. (2012). Learning Curves with Two Frequencies for Analyzing All Kinds of Operations. Yasar University Publication.
http://www.h-n-u.de/Veroeffentlichungen/learning.pdf
[16] Kosinski, M., & Wang, Y. (2018). Deep Neural Networks Are More Accurate than Humans at Detecting Sexual Orientation from Facial Images. Journal of Personality and Social Psychology, 114, 246-257.
[17] Lunkenheimer, M., Kracklauer, A., Klinkova, G., & Grabinski, M. (2022). Homo Economicus to Model Human Behavior Is Ethically Doubtful and Mathematically Inconsistent. arXiv: 2207.02902.
https://arxiv.org/abs/2207.02902v1
[18] Pólya, G. (1921). Über eine Aufgabe der Wahrscheinlichkeitsrechnung betreffend die Irrfahrten im Straßennetz. Mathematische Annalen, 84, 1-2.
[19] Rudnick, J., & Gaspari, G. (2010). Elements of the Random Walk. Cambridge University Press.
[20] Schädler, T., & Grabinski, M. (2015). Income from Speculative Financial Transactions Will Always Lead to Macro-Economic Instability. International Journal of Finance, Insurance and Risk Management, 5, 922-932.
https://www.journalfirm.com/journal/128#
[21] Schieferer, G. (1957). Die Vorplanung des Anlaufs einer Serienfertigung. Ph.D. Thesis, Stuttgart University.
[22] Strotz, L. C. (2018). Getting Somewhere with the Red Queen: Chasing a Biologically Modern Definition of the Hypothesis. Biology Letters, 14, Article ID: 20170734.
http://dx.doi.org/10.1098/rsbl.2017.0734
[23] The Economist (2023). A Decades-Old Model of Animal (and Human) Learning Is under Fire.
https://www.economist.com/science-and-technology/2023/01/18/a-decades-old-model-of-animal-and-human-learning-is-under-fire
[24] Thrun, S. (2023). Herr Thrun, wird Künstliche Intelligenz den menschlichen Verstand am Ende ablösen? (Mr. Thrun, Will AI Replace the Human Mind Eventually?) Interview in HAZ.
https://www.hildesheimer-allgemeine.de/meldung/ki-experte-mit-hildesheimer-wurzeln-welche-zukunft-hat-der-mensch-in-einer-welt-in-der-kuenstliche-intelligenz-regie-fuehrt.html
[25] Tobin, J. (1980). Stabilization Policy Ten Years after. Brookings Papers on Economic Activity, 1, 19-71.
https://doi.org/10.2307/2534285
https://www.brookings.edu/wp-content/uploads/1980/01/1980a_bpea_tobin.pdf
[26] Wikipedia (2024). Random Walk.
https://en.wikipedia.org/wiki/Random_walk
[27] Windholz, G. (1997). Ivan P. Pavlov: An Overview of His Life and Psychological Work. American Psychologist, 52, 941-946.
https://doi.org/10.1037/0003-066X.52.9.941
