From Algorithms to Revolution 5.0: What Drives the Innovations?
1. Introduction
Is there a direction, that is, a goal or an objective that unifies the innovations and new concepts that have been used in society?
Based on an inductive methodology, the article analyzes some concepts related to current innovations to try to establish some general aspect that unites these innovations.
Companies always seek to cut costs and maximize profit. But profit cannot be the only motivation for adopting innovations. Proof of this is that such innovations have also been adopted by the Public Administration (Sarai, Zockun, & Cabral, 2023; Cabral & Sarai, 2024: pp. 1049-1062). Indeed, it was precisely their adoption by the Public Administration that served as the criterion for selecting the innovations addressed in this article.
Some classic works used the method of making an analogy with the human being to understand the State. This analogy can also be made here to understand the motivation for the use of innovations by social organizations.
Nature seems to have as its norm the search for efficiency or optimization, as indicated by Pierre Louis Moreau de Maupertuis who, following a suggestion of René Descartes, proposed the Principe de la Moindre Action, or Principle of Least Action, applied in the Essai sur la formation des Corps Organisés of 1754 (Ricieri, 2019). This principle can also be called the Law of Least Effort, something that makes perfect sense when one thinks, for example, of energy savings. In nature, the level of energy expenditure can be the difference between life and death.
Human beings are part of nature, and there are indications that they are also subject to this natural law. Recently, Kahneman (2012) demonstrated that the human brain also seeks to conserve energy. To do this, the brain works with two systems: one used in activities that require more effort and, therefore, more energy; the other responsible for most daily activities, automatic and repetitive in nature, with lower energy consumption.
Thus, an attempt to answer the question posed initially would be that human beings always seek to accomplish more activities with minimal effort. To be more precise, human beings expect more activities to be done while, for their part, seeking to perform as few as possible in order to save energy. They therefore try to transfer the effort, and hence the energy consumption, to other people or to machines. When humans manage to transfer activities to machines, these activities become automatic. But this transfer is not limited to making activities more efficient: it also harnesses the power of machines and computer programs to perform them better.
Just as each human being seeks to optimize their own activities, society would also be subject to this natural law, pressured mainly by the scarcity of resources, which makes the incessant search for efficiency the common element of the innovations it has been adopting.
To develop these ideas, the article is divided into seven parts, the first part being this introduction.
The second part explores the concepts of science and technology, the basic concepts of all innovations.
The third part defines algorithm and digital, elements present in computer programs.
Part four studies the concept of artificial intelligence, the pinnacle of automating activities.
The fifth part analyzes the concept of decentralization present mainly in Blockchain technology and the concept of an application made possible by this technology, which would be smart contracts.
The sixth part provides an overview of the so-called Fourth Industrial Revolution and also of the Industrial Revolution 5.0.
The last part brings together the conclusions found.
2. Science and Technology
For Abbagnano, science would be the knowledge that guarantees its own validity and maximum degree of certainty, as opposed to the concept of opinion (or δόξα, in Greek). There would be three conceptions of science, according to the way this validity is guaranteed. The first, traditional, starts from demonstration within a single, closed system in which propositions can be deduced as necessary from other propositions of the system. Science, then, would lie not in the accidental or the contingent, but only in what is necessary, in that which cannot be different from what it is. This ideal system is exemplified by Euclid’s geometry (Abbagnano, 2007: pp. 157-158).
The second conception of science is based on description, inaugurating a perspective founded on induction, that is, on the observation of particular phenomena in order to extract, through experiments, general laws. Such experiments would be the instruments for proving the validity of those laws. These laws would be relations between phenomena; that is, science would lie not so much in the facts or objects observed as in the relations between them, whose description allows us to anticipate and predict future facts or phenomena. Newton’s physics would be a good example of this second conception (Abbagnano, 2007: pp. 158-159).
The third conception of science would guarantee the validity of knowledge based on the possibility of correction, that is, self-correction. Science, in this way, would not be endowed with absolute truths, but would offer the best knowledge obtained so far, while admitting the correction or modification of this knowledge in the light of new evidence. Among the authors of this conception mentioned by Abbagnano is Karl Popper, according to whom the scientific method is intended to prove the falsity of scientific propositions, in order to improve them continually. From another perspective, still according to Abbagnano, and at the forefront of what would count as scientific knowledge, Thomas Kuhn suggested that science would lie in a consensus in force at a certain time and place regarding certain paradigms (Abbagnano, 2007: pp. 159-161).
Karl Popper expressly stated that “we do not know: we can only guess” (Popper, 2002: p. 278). For him, absolutely certain and demonstrable knowledge would be a myth (Popper, 2002: p. 280). The goal of science, then, would not be to give final or even probable answers. It would be, rather, the pursuit of an infinite yet attainable aim: that of “ever discovering new, deeper, and more general problems, and of subjecting our ever tentative answers to ever renewed and ever more rigorous tests” (Popper, 2002: p. 281).
Thomas Kuhn, on the other hand, shows that the development of scientific knowledge would not be a continuous and incremental line; that is, current knowledge would not be a simple sum of the knowledge accumulated so far. For him, there would be true revolutions, in which certain theories are replaced by others and new professional commitments, incompatible with the previous ones, are adopted, without the previous theories ceasing to be considered scientific (Kuhn, 1998: pp. 21-25).
The different ways of conceiving science over time are also reflected in different ways of thinking. These forms are classified into four groups by Bernardo Carlos S. C. M. de Oliveira and Luís Miguel Luzio dos Santos. Classical thought would be represented by the modern scientific method and based on the pillars of order, separability, and reason. Quantum thinking, based on quantum physics, would break with the determinism of classical thinking and with the principle of non-contradiction, showing that, more than in objects, reality would lie in the connections between their parts. Systems thinking would have contributed to overcoming the segregation of science and the lack of an overall vision, which hinders the understanding of phenomena that interrelate reciprocally with others. Finally, complex thinking, represented mainly by Edgar Morin, would try to grasp reality more comprehensively and coherently, admitting the incompleteness of knowledge, the unpredictability of phenomena, the fact that the parts of reality carry within themselves elements of the whole, and the fact that antagonism is not necessarily resolved by a standardizing synthesis (Oliveira & Luzio dos Santos, 2021: pp. 25-55).
Having stated these notions about science, it is now necessary to clarify the concepts of technique and technology. According to Abbagnano, in addition to the fact that the term technique has multiple meanings, it has aroused different concerns in the thinkers who dealt with it. Technique, in a more general sense, would simply be art, that is, a set of rules that guide any activity. This broad meaning would be divided into two groups: one covering religious technique, in the sense of rite; the other, rational technique, divided into three types: 1) symbolic technique; 2) behavior technique; and 3) production technique (Abbagnano, 2007: p. 1106).
Symbolic techniques would also be called cognitive, aesthetic, or artistic, and would be used in science and fine art, with manipulation of symbols to explain, predict, and communicate. The technique of behavior would encompass a vast field of possibilities, such as moral, economic, and social ones. The technique of production relates man to nature, that is, the ways in which, by intervening in nature, he can survive and satisfy his desires (Abbagnano, 2007: p. 1106).
The great concern with this last type of technique is the harmful effects it would have produced, such as environmental degradation, dehumanization, alienation, and even the subordination of man to machines. Moreover, in a world subject to technology, science itself would run the risk of submitting to it. Despite the criticisms, the solution to these problems lies in technique itself (Abbagnano, 2007: p. 1106). To give just one example corroborating this statement: without current agricultural techniques there would not be enough productivity to meet the planet’s food demand.
Technology, in turn, a term originating from the Greek word τεχνολογία, would be the treatise or dissertation on an art, or the exposition of the rules of an art (Bailly, 2000: p. 1924). The Greek suffix λογία originates from the Greek word λόγος (“logos”), which is polysemic, but whose most appropriate meaning here would be study, reason, or exercise of thought. Technology would then be the study aimed at improving existing techniques and inventing new ones. It would thus be a branch of science focused on the practical application of knowledge. Commonly, it is the set of knowledge, techniques, and methods used to achieve certain results. Usually, technology is employed mainly for the production of goods or services (Abbagnano, 2007: p. 1109).
The division between science and technique, or between science and technology, has existed since the ancient Greeks. Plato’s “Ion” already spoke of τέχνη καὶ ἐπιστήμη1, as Anatole Bailly (2000: p. 1923) recalls. “Technē” would be technique, an activity related to art or to manual or mechanical skill. “Epistēmē”, or science, would be knowledge. There is obviously a connection between the two, with technique usually involving the practical employment of science.
This division still exists today, even in the legal system. To give an example of the Brazilian legal system, article 37, XVI, of the Constitution, when dealing with the prohibition of paid accumulation of public positions, excepts in paragraph “b” the accumulation of a teaching position with another technical or scientific one. There is no unanimity as to the meaning of these expressions in the legal environment. Some attribute the adjective of scientific to positions whose performance requires higher education, while technician would be the position that would require professional training (TJDF, 2019a; 2019b).
An expression that has been used recently is “disruptive technology.” In an attempt to conceptualize this expression, it can first be noted that it is not a discrete concept but a continuous one. To put it another way, one cannot simply say that something is or is not disruptive; at most, it can be said that something is more or less disruptive. Thus, there are technologies that cause more or less disruption, that is, that interrupt more or less abruptly the normal course of customs, practices, processes, standards, in short, paradigms. Transportation apps, for example, have significantly disrupted the taxi service, yet they have made it much easier for users to find available vehicles. Pay-TV services have practically extinguished video rental services. Internet sales have disrupted the operation of many brick-and-mortar stores. These are just a few examples of how technology ends up changing even people’s way of life.
The main aspect of this topic concerns a fundamental characteristic of science. If scientific knowledge seeks to find patterns or regularities, this search makes more sense, at least from the point of view of technique, in reproducible, that is, repeatable, phenomena. Without the prospect that a phenomenon will recur, that is, that a certain activity will need to be repeated in the future, human beings may prioritize other issues among their concerns.
Imagine, for example, that it turned out that there was one more planet in the solar system, destroyed millions of years ago. It might make perfect sense to understand its orbit and its effects on the system from the scientific perspective of disinterested, merely contemplative knowledge. In fact, this understanding might eventually reveal the cause of that planet’s destruction, knowledge that could perhaps contribute to protecting Earth itself. But if this aspect is not part of the shared perception among scientists, they will end up directing their efforts to understanding objects that still exist, such as asteroids, which may eventually collide with the planet.
Returning to what was said above about human beings’ search to optimize their activities, there will only be motivation to discover optimizing solutions if those activities need to be repeated. Disinterested and contemplative knowledge may be an ideal, but the voice of necessity still speaks loudly enough to dictate directions. And it remains fully compatible with the dignity of the human person to attempt to automate routine activities, delegating them to machines and leaving to human beings the higher activities from a spiritual and intellectual point of view (Fresco, 2018). The great challenge is to carry out this transition without some human beings being treated as mere instruments and without the process being guided only by the pursuit of profit by a few.
Without ignoring this critical aspect, the fact is that much of this automation today is possible thanks to computers. To understand how instructions are passed to these machines, it is necessary to have some notions of algorithm.
3. The Algorithm and the Digital
An algorithm is a finite set of instructions, usually organized in sequence, constituting a mechanical, that is, non-creative, procedure in which each instruction is carried out in order so that a certain objective is achieved. The origin of the word algorithm is associated with the name of the mathematician Mohammed ibn Musa Al-Khowârizmî (780-850 A.D.), considered the creator of algebra, a word said to derive from his book Kitâb Hisab Al-Jebr Wa’l Muqâbalah (Book of the Calculation of Restoration and Balancing) (Ricieri, 2019).
A simple example of an algorithm is a gastronomic recipe. It contains the necessary ingredients and the actions that need to be taken for the preparation. There are more complex algorithms, such as the work processes of organizations or the production lines of factories. And a very common use of algorithms, which will be of interest for this article, is the one used in computers.
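Purely as an illustration (the sketch below, in Python, is not drawn from any of the cited sources), a classic mathematical algorithm, Euclid’s method for the greatest common divisor, can be expressed as just such a finite sequence of mechanical instructions:

def gcd(a: int, b: int) -> int:
    # Each step is mechanical and non-creative: replace (a, b) with
    # (b, a % b) until the remainder is zero; the objective, the
    # greatest common divisor, is then reached in a finite number of steps.
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(48, 36))  # prints 12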
The word “Computer” comes from the verb “to compute,” which means to calculate. Although computers today can perform an immense diversity of activities, their initial idea was to perform mere mathematical operations. They thus used simple algorithms to perform basic arithmetic operations.
The first computers were humans. Subsequently, humans began to use technical and instrumental inventions to facilitate mathematical operations. Historically, the most important of these was the recording of operations through writing. Writing has allowed us to expand our memory capacity immensely. To further facilitate mathematical operations, the abacus was invented (Figure 1), considered the first calculating mechanism, probably appearing in the 300s BC in Babylon (Sousa Filho & Alexandre, 2014: p. 2):
In addition to instruments like this, humans have also developed techniques to enhance and facilitate their activities. A classic example of a technique is logarithms, which allow complicated multiplication operations to be replaced by simple addition operations, due to the property according to which the logarithm of the product of two numbers is equal to the sum of their logarithms: log(a·b) = log a + log b. The discovery is attributed to John Napier (1550-1617) in his 1614 book Mirifici Logarithmorum Canonis Descriptio (Description of the Wonderful Canon of Logarithms) (Abbagnano, 2007: p. 27; Ricieri, 2019; Medina & Fertig, 2006: p. 13).
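The property can be checked numerically in a few lines (a purely didactic sketch, not part of the cited sources):

import math

a, b = 123.0, 456.0
# The property that made logarithm tables so useful: a multiplication
# is replaced by an addition of logarithms.
left = math.log(a * b)
right = math.log(a) + math.log(b)
print(math.isclose(left, right))  # True, up to floating-point error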
These examples allow us to see that inventions can consist of both simple techniques and simple instruments. Although every instrument requires a technique of use (as in the case of the abacus), not every technique depends on an instrument to be employed (as in the example of logarithms). Writing, for example, is an invention that mixes technique (how to record) and instrument (materials used for recording).
Figure 1. Abacus (Source: Sousa Filho & Alexandre, 2014: p. 3).
In this evolution, human beings have become increasingly able to transfer algorithms to machines so that the machines carry out their activities. And the algorithms become more and more complex. Returning to the idea of the search for optimization, it is worth noting that human satisfaction does not end with the transfer of activities to machines. Algorithms can be more or less efficient; they can consume more or less (electrical) energy and time. Thus, there is a parallel evolution of the content of algorithms (what they can do) and the form of algorithms (the way in which that content can be executed).
Abbagnano lists the following characteristic properties of an algorithm: 1) finiteness of instructions; 2) uniformity of the instructions in relation to the possible arguments, i.e., in relation to the inputs that will be processed by the algorithm; 3) availability of an agent or instrument for mechanical execution; and 4) effectiveness, i.e., obtaining the result in finite time from finite steps (Abbagnano, 2007: p. 27) or, better, the possibility of its implementation in a computer or in a Turing Machine2.
A concept linked to that of algorithm and that deserves some consideration is the concept of digital. The search for automation is carried out mainly through computers, which make use of algorithms to process data in a digital way.
The algorithms are inserted into programs that are converted to digital format, the format of machine language.
The word digital derives from digit, which can mean either a sign representing a number or a finger. The relationship between the two meanings lies in the fact that primitive calculations were made with the fingers, whether to count pebbles or to manipulate the abacus. Incidentally, the word calculus comes from the Latin calculus, which means pebble.
The digits the computer operates on are only two, zero (0) and one (1), that is, a binary number system. Hence the English expression binary digit, from which the word bit originates. To the computer, zeros and ones are electrical signals of lower or higher voltage within the range of 0 to 5 volts. To be more exact, a voltage from 0 to 2.5 volts would be equivalent to the digit 0 and a voltage from 2.5 V to 5 V would be equivalent to the digit 1. Thus, zeros and ones can also be represented as off and on signals.
The conception of the binary number system is attributed to Gottfried Wilhelm Leibniz. It is based on the idea that information can be converted into any ordered set of symbols. From this system, George Boole devised a logical system for performing operations using zeros and ones, Boolean logic. This logic was implemented in electrical circuits by Claude Shannon in 1937 (Kaplan, 2016: p. 5; Sousa Filho & Alexandre, 2014: p. 6; Santos, 2020). There are other important names in this evolution, such as Charles Babbage and Ada Byron (Sousa Filho & Alexandre, 2014: pp. 7-9), but it is not the objective here to delve into these details.
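A small, purely illustrative Python sketch shows the two layers just described: digits reduced to zeros and ones, and Boolean operations over them:

n = 13
print(bin(n))            # '0b1101': the number as a string of binary digits

# Boolean (Boole) operations on bits, the kind of logic Shannon showed
# could be implemented with electrical circuits:
print(0b1101 & 0b1011)   # AND -> 0b1001 -> 9
print(0b1101 | 0b1011)   # OR  -> 0b1111 -> 15
print(~0b0001 & 0b1111)  # NOT (restricted to 4 bits) -> 0b1110 -> 14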
The evolution described above and these characteristics pose two fundamental questions that intertwine and involve science and technology: what can be algorithmized, and what is the limit of what a computer can do?
This is where the topic of “artificial intelligence” comes in, an expression that would have been used for the first time by John McCarthy in 1956 at a conference at Dartmouth College in New Hampshire. The name would have been chosen to differentiate the objective of the research from those carried out in the field of cybernetics (Kaplan, 2016: p. 13).
4. Artificial Intelligence
The use of instruments to facilitate human work is something very old. This facilitation is nothing more than the realization of the initial idea exposed above, the search for optimization and efficiency.
The invention of machines is nothing more than a natural evolution of this process, associated with the perception that it would be possible to delegate activities in order to make them automatic. But if the use of slaves, then serfs, and now employees already makes this delegation possible, what would justify the use of machines? According to Marx, the reason was that it was more advantageous for the capitalist to use machines, because they could work uninterruptedly, more quickly, and at lower cost. Paradoxically, the replacement of people by machines would, according to him, reduce the proportion of human labor (variable capital) in relation to machines and equipment (constant capital). Since, for him, wealth would be generated only by human labor, that is, since surplus value would result from human labor, the replacement of this labor by machines would lead to a decrease in the general rate of profit (Marx, 1984: pp. 163-176).
Whether or not Marx was right, it is undeniable that human beings are increasingly replaced by machines, which can only be explained by the fact that machines are, in the view of the people who employ them, more advantageous from the point of view of optimization, based on the premise stated at the beginning of the text. This trend is so real that, in the case of Brazil, the Constitution expressly provided, in item XXVII of article 7, among the rights of workers, the “protection against automation, in the form of the law”.
There is also no denying that machines can perform some processes much more accurately and quickly than humans. Examples include the speed with which machines perform calculations and assemble vehicles, as well as their memory capacity and the security they offer regarding the integrity of stored data. It is also essential to remember the immeasurable value of replacing humans with machines in dangerous activities, such as the maintenance of equipment in places with radiation, under high pressure (underwater environments, for example), and in inhospitable places, such as outer space and the surface of other planets.
Machines have been around for a long time, but can any machine be called intelligent? In any serious philosophical discussion, one encounters the difficulty of defining what intelligence is and what artificial means. But for the purposes of the present text, conceptual rigor can be relaxed a little so as to at least convey a notion of artificial intelligence.
Along these lines, it can be said, as has been said in relation to disruptive technology, that machines can be more or less intelligent, that is, there would be varying degrees of intelligence.
Artificial intelligence begins to be born when human beings begin to transfer to machines not only mechanical tasks but relatively intellectual or abstract ones. Perhaps the ability to decide autonomously is a good representation of the sense of intelligence, although in principle it would not encompass everything that intelligence means. The closer a machine’s behavior is to human behavior, or the more appropriate a machine’s reaction is to a given situation, the more the machine is said to be intelligent. Artificial intelligence would be nothing more than the ability of machines to perform intellectual activities that humans perform. It could, therefore, simply be called automation of cognitive activities (Kaplan, 2016: pp. 1-4; Whittaker, 2019: pp. 7, 27) or simply automation (Kaplan, 2016: p. 17; Freitas & Freitas, 2020).
However, Jerry Kaplan argues that it doesn’t matter whether machines perform these activities in the same way as humans or whether they are self-aware. For him, artificial intelligence would be “the ability to make appropriate generalizations at an appropriate time and based on limited data.” Also according to this author, the faster conclusions are produced from minimal information and the larger the field of application, the more intelligent the behavior is considered (Kaplan, 2016: pp. 5-6).
In fact, it is from this comparison of similarities between machines and humans that the distinction between strong artificial intelligence and weak artificial intelligence arises. Weak artificial intelligence, for Gellers (2021), is a system designed to achieve certain stipulated goals, or a set of goals, in a way or using techniques that qualify it as intelligent. In weak artificial intelligence, he says, the computer is merely a tool that appears to have intelligence. In strong artificial intelligence, on the other hand, computers would have the ability to understand and to possess other cognitive states; that is, they would apparently have a mind with its own internal states (Gellers, 2021: p. 6).
From another perspective, Jerry Kaplan summarizes the debate by stating that the concept of strong artificial intelligence would be related to the existence or duplication of minds in computers while weak artificial intelligence would only be a simulation of real intelligence (Kaplan, 2016: p. 68). The problem with this debate is that it requires a comparison between the machine and humans. However, in order to make this comparison, conceptual clarity and measurement criteria are essential, something that, in principle, would not have been developed yet. Just think, for example, about what would be the meaning of thinking, feeling, having consciousness or free will (Kaplan, 2016: pp. 67-68).
This debate is reminiscent of the Aristotelian distinction between the activities of government and those of subordinates, or between free people and slaves. In Book I of the Politics, Aristotle argues that there would be people with an apparently natural vocation for intellectual pursuits and, therefore, for commanding, and people with a natural vocation to obey (Aristotle, 1985, Book I). Slavery was transformed into serfdom under feudalism and into subordinate labor (employment) under the present capitalist system. At present, a person can work as an entrepreneur or as an employee. In principle, no one would dare to say that human beings who work as employees, or even as contractors providing services, lack intelligence; that is, the mere fact that a person is available to carry out another person’s orders does not make them less intelligent than the person who commands. If intelligence is required even to carry out orders, then it cannot be said that computers are not intelligent solely on the grounds that they merely carry out orders or perform previously programmed activities. The distinguishing criterion must be different.
The fact is that so-called weak or applied artificial intelligence is advancing at an extraordinary pace and meeting the demands imposed on it, causing a loss of interest in strong artificial intelligence, at least from the point of view of the market, as seen in the 2018 announcement by researchers at Carnegie Mellon that research on this type of intelligence would cease (Whittaker, 2019: p. 24).
The degree of intelligence of a machine is directly proportional to its level of ability to process possible inputs or situations and to give an appropriate response or decision. Decision is merely the choice of a response or reaction to a situation.
On a more primitive level, humans program the possible inputs, situations, or commands, i.e., a menu of options. These algorithms are then limited to the commands that have been programmed. At a more advanced stage, more possible options are programmed, but the algorithm is still limited to those options. Any option or situation outside the menu will be met with no response from the machine, or perhaps a voice message saying, “I don’t understand.” The last stage of development would be for the machine to “learn” to recognize and react to initially unprogrammed situations, no differently from what a human in an organization would do when faced with situations not foreseen in its work process, even if only to respond that the requested activity is not within what it can offer. It is at this last stage that the above discussion about the existence of strong artificial intelligence, or artificial general intelligence, is located, a discussion that also encompasses the existence or not of consciousness in machines (Whittaker, 2019: p. 24).
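The primitive level just described can be made concrete with a deliberately simple, hypothetical sketch: every acceptable input is pre-programmed, and anything outside the menu receives a fixed fallback.

# Hypothetical menu of pre-programmed commands; nothing here is learned.
MENU = {
    "balance": "Your balance is shown in the statement section.",
    "hours": "We are open from 9 a.m. to 5 p.m.",
}

def respond(command: str) -> str:
    # Inputs outside the programmed options get the machine's
    # equivalent of "I don't understand."
    return MENU.get(command.strip().lower(), "I don't understand.")

print(respond("balance"))  # programmed option
print(respond("weather"))  # outside the menu -> "I don't understand."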
If what drives the advancement of technology is the search for widespread automation, it can be agreed that the ideal would be for machines not only to perform activities for humans, but to perform them without humans having to command or even “teach” them all the time. This process involves three approaches: 1) the concept of learning; 2) the attempt to simulate the processing of information as it occurs in the human mind; and 3) the prediction of what behaviors would be expected of a human being, and therefore also of a machine, in the most diverse situations.
These three approaches refer, respectively, to the themes of machine learning, neural networks, and Big Data.
Do Machines Learn to Predict the Future?
The concept of Machine Learning has as its first problem the concept of learning itself. In any case, it is recognized that the learning process or capacity is an important element of the concept of intelligence (Kaplan, 2016: p. 27).
The general idea of learning seems to relate to experience, storing or remembering the experience, and employing that experience to solve problems in similar situations. It also involves perceiving patterns in the data obtained from the experiment (Kaplan, 2016: pp. 27-28).
In simple terms, in line with what has been exposed above, humans seek to automate more and more of their routine activities and, when possible, non-routine ones as well. Programs designed for pre-defined situations tend to become obsolete when unforeseen situations that also need to be solved begin to appear. The search to automate this growing development, so as to cover more and more situations, is at the heart of what is intended with machine learning (Figueiredo & Cabral, 2020: p. 85).
Machine learning comes with a change of approach regarding what learning or intelligence would be. At first, the approach to intelligence was related to the analysis and logical processing of symbols, something very close to what humans did with mathematical problems. But there is a shift in this approach when researchers try to understand how the human brain works and to replicate that system in machines. This shift in perspective ushers in what is called Machine Learning, and the branch of this approach that seeks to reproduce the functioning of the brain is called the Neural Network (Kaplan, 2016: pp. 20-28; Mattielo, 2012: p. 42).
One application of machine learning, within the idea of pattern recognition, is image recognition. For example, one can have the computer process various images, containing cats or not, so that it “learns” what a cat is, that is, at least grasps the “visual” pattern or shape of a cat. This process can be done with a human indicating which images have cats and which do not. In this case, it is called Supervised Learning. But there is also the possibility of simply providing the images and letting the computer find the patterns relating to the cat on its own, which is called Unsupervised Learning (Kaplan, 2016: p. 30).
Another classification of machine learning also includes Reinforcement Learning. According to this classification, Supervised Learning would be the one capable of making inferences of classification and regression, i.e., statistical forecasting procedures, from labeled data used for training; that is, there would be a table determining corresponding inputs and outputs. Unsupervised Learning would be the process of learning from raw data without given labels. Finally, Reinforcement Learning would be a system of punishments and rewards that leads the agent to seek to progressively maximize rewards through interaction with the environment (Bochie et al., 2020: pp. 4-5; François-Lavet, Henderson, Islam, Bellemare, & Pineau, 2018: p. 6).
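The essence of Supervised Learning, labeled inputs and outputs from which a rule generalizes, can be shown with a deliberately tiny sketch (the data, features, and nearest-neighbour rule below are hypothetical choices made only for illustration):

# Labeled examples: (weight in kg, ear length in cm) -> species.
labeled_data = [
    ((4.0, 6.0), "cat"), ((3.5, 5.5), "cat"),
    ((25.0, 10.0), "dog"), ((30.0, 12.0), "dog"),
]

def predict(x):
    # Classify a new input by the label of its closest labeled example.
    def dist(a, b):
        return sum((i - j) ** 2 for i, j in zip(a, b))
    return min(labeled_data, key=lambda pair: dist(pair[0], x))[1]

print(predict((4.2, 5.8)))    # 'cat'
print(predict((28.0, 11.0)))  # 'dog'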
Carrying out activities of this type has become increasingly feasible thanks to the evolution of technology, which has expanded the memory and processing capacity of computers. In fact, within the idea of neural networks, computers began to use, instead of a single processor, several processors that divide the required tasks among themselves, something similar to the functioning of the brain. It is in this context that so-called Deep Learning arises, which uses neural networks with many internal layers (Kaplan, 2016: p. 30), an approach based directly on data provided by reality and not just on pre-defined inputs with pre-programmed responses (Sejnowski, 2020: p. 1).
This direct and automatic contact with immense amounts of data of various types and formats, whose treatment only became feasible with the expansion of the storage capacity and processing speed of computers, combined with the cheapening and miniaturization of this equipment, leads to the idea of Big Data. According to Fernando Amaral (2016: pp. 7-12), the idea of Big Data is linked not only to the large amount of data, but also to speed, variety, veracity, and value. For him, Big Data would not be a technology, but a phenomenon involving several combined aspects (Amaral, 2016: pp. 7-12). This phenomenon is possible because access to technology now allows electronic records of practically any activity, providing input for the treatment of this raw data. To give a few examples, cell phones record the places a person has been, the purchases they have made, the websites they have accessed, and the people they have contacted, among other data. Based on such data, programs can both design marketing campaigns directed at people’s needs and predict their possible actions. Simple everyday actions, such as sending e-mail messages or typing certain words, can be aided by the computer, for example by suggesting an e-mail address after only the first letter of that address has been typed, or by suggesting a word or phrase after only a few letters. These simple functions can save a great deal of time when one considers how many times people perform them daily.
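The e-mail suggestion mentioned above reduces, in essence, to a very simple pattern search over previously recorded data, as in this sketch with made-up addresses:

history = ["ana@example.com", "bruno@example.com", "beatriz@example.com"]

def suggest(prefix: str) -> list:
    # Return every recorded address that starts with what was typed so far.
    return [addr for addr in history if addr.startswith(prefix)]

print(suggest("b"))  # ['bruno@example.com', 'beatriz@example.com']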
Depending on the volume of data, some procedure for obtaining samples may be necessary. On the other hand, there may be situations where all the data will need to be analyzed. Everything will depend on a relationship of costs and benefits, as well as on available needs and resources (Amaral, 2016: p. 10). Finally, according to Fernando Amaral, Big Data could be defined as the “phenomenon of massification of elements of production and storage of data, as well as the processes and technologies to extract and analyze them.” (Amaral, 2016: p. 12)
If there are patterns in this data that humans are often not able to detect, machines, on the other hand, due to their processing and storage power, are able to perform this activity very easily. The detection of these patterns makes it possible to make predictions that apparently refer to the idea of being able to predict the future.
It is also worth mentioning that the fields of application of artificial intelligence are being increasingly expanded, such as robotics, computer vision, voice recognition, and natural language processing (Kaplan, 2016: p. 49).
If, on the one hand, computers allow processing large amounts of data and storing them, it is necessary to guarantee the integrity of this data, especially when it is intended that this data is not altered. This is one of the reasons for the emergence of Blockchain.
5. Decentralization and Automation: Blockchain and Smart Contracts
To explain Blockchain, it is advisable to clarify at the same time the operation of the Bitcoin system, which aimed to create a kind of digital monetary system with its own currency, the bitcoin (Nakamoto, 2008; Sarai, 2016: pp. 133-192). The Bitcoin system is based on the Blockchain; in fact, it was Bitcoin that made Blockchain technology popular (Wikipedia, 2022a). This technology is a species of the genus Distributed Ledger Technologies (DLTs).
With the 2008 financial crisis, some people unhappy with the official financial system sought to create a digital monetary system independent of governments and banks. This system should guarantee the anonymity of its operators, eliminate intermediaries, and avoid double spending, in addition to obviously being secure. One system that has been relatively successful is Bitcoin, whose basis is described in a 2008 paper by Satoshi Nakamoto (Nakamoto, 2008). However, it is not known whether Satoshi Nakamoto is a person or a group of people using a pseudonym (Wikipedia, 2022d).
The system was based on the technologies available at the time, such as the internet, cryptography, and digital data transfer and storage, as well as on the perception that, even in the official monetary system, most transactions were already carried out digitally as mere accounting records of credit and debit, without the circulation of paper currency (Sarai, 2021: pp. 253-272).
To ensure anonymity, the system discloses all transactions carried out in it, but without disclosing the personal data of those who carried them out, which is done by keeping the cryptographic public keys anonymous. Intermediaries are eliminated because users carry out transactions directly with each other, with the system itself serving as the mechanism that guarantees the authenticity of operations. The system avoids double spending by giving primacy to the first operation performed in chronological order, associated with the acceptance of that transaction by the other participants. Finally, the security of the system rests on the premise that the computational power of the set of honest participants is greater than that of the dishonest ones (Nakamoto, 2008).
Each transaction in the Bitcoin system is validated by other participants. The first to perform the validation integrates the transaction into the system’s transaction history and discloses this integration for all other participants to verify. Once it is verified, participants move on to validating other transactions. The included transactions form blocks. All blocks are interconnected and interdependent, since each block generates a code that builds on the previous set of blocks, so that any change invalidates the codes that follow. Because all blocks are interconnected in this way, the name Blockchain arises, which can be understood as a chain of blocks (Ribeiro & Mendizabal, 2019: pp. 21-27).
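The dependence of each block on its predecessors can be sketched in a few lines of Python (a didactic simplification; the real protocol also involves proof-of-work, Merkle trees, and a peer-to-peer network):

import hashlib

def block_code(transactions: str, previous_code: str) -> str:
    # Each block's code (hash) covers its transactions PLUS the previous
    # block's code, so altering any block invalidates all that follow.
    return hashlib.sha256((previous_code + transactions).encode()).hexdigest()

h0 = block_code("Alice pays Bob 5", "0" * 64)  # first block, no predecessor
h1 = block_code("Bob pays Carol 2", h0)
h2 = block_code("Carol pays Dan 1", h1)

# Tampering with the first block changes its code, so the stored h1 and h2
# no longer match and the whole chain is exposed as invalid.
tampered = block_code("Alice pays Bob 500", "0" * 64)
print(tampered != h0)  # True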
Due to the suppression of the figure of the intermediary and the transfer of responsibility for keeping the system functioning to its own participants, the system is said to be decentralized. And since it is not people but their equipment that carries out the requested operations, the system to some extent allows anonymity.
Because Blockchain technology can evolve and change, it is difficult to conceptualize it. At most, as has been done here, it is possible to point out some of its current characteristics, such as the fact that it functions as a distributed registration system, shared by all participants, which records all the operations carried out in it and guarantees the authenticity and security of those records (Alharby & Van Moorsel, 2017: p. 126; Schwab, 2016: pp. 30-31).
Blockchain technology is not restricted to monetary systems. More and more applications are proposed, such as public registry services (notary services), network management, carbon credit management, decentralized energy exchange, and supply chain certification. In the public sector, some initiatives include the management of public data, the registration of votes, the management of public transport, and health management (Ribeiro & Mendizabal, 2019: pp. 14-19). It can be said that, in principle, any control system through registries can benefit from this technology, such as, for example, civil and business registries, copyright registries (with guaranteed priority) and rights registries in general.
From the moment it is possible to guarantee the authenticity of transactions carried out electronically and in an automated way, the idea of the so-called Smart Contracts is a natural consequence. There are those who conceptualize them as computerized transaction protocols that execute the terms of a contract with the aim of reducing malicious or accidental exceptions, as well as the need for intermediaries to ensure business security (Szabo, 1994). There are also those who define them as a system that distributes digital goods to certain parties when certain pre-defined requirements are met (Alharby & Van Moorsel, 2017: p. 125).
Maher Alharby and Aad van Moorsel refer to Josh Stark’s classification, according to which, despite the diversity of definitions, they can all be organized into two groups (Stark, 2021). One group would contain the definitions that view smart contracts from a technological point of view or, more specifically, from that of the programming code executed on the Blockchain. The other group would contain the legal definitions, which analyze smart contracts only as instruments for legal purposes, that is, as mere digital substitutes for, or complements of, traditional practices, such as the digital record of a contract previously made on paper (Stark, 2021).
The use of Blockchain technology for operations with smart contracts is common, and there are those who restrict the concept of smart contracts to contracts executed on the Blockchain. This is because both technologies are based on the same assumption: allowing parties who do not know and do not trust each other to carry out transactions without the need for a third party to provide security for those transactions (Alharby & Van Moorsel, 2017: pp. 126-127). To give an example that clarifies this aspect, consider a simple traditional export contract, in which the seller is in Brazil and the buyer is in the United States. The two arrange the deal at a distance. The first difficulty is to establish who will take the initiative of fulfilling their part first. If the seller ships the purchased item, there is no guarantee that they will receive the price. If the buyer pays first, there is no guarantee that they will receive the purchased item. A solution traditionally found was to entrust the execution of the deal to a third party, usually a bank with representation at both the buyer’s and the seller’s locations. This bank then has a way of verifying that the seller has fulfilled its part by shipping the item and that the buyer has fulfilled its part by making the payment. But what guarantees that the bank will fulfill its part? There is no guarantee, only a custom that is formed and a trust that is gained over time.
In any case, despite the name “smart contracts”, it would be better to call them “automatic contracts”; that is, these contracts are nothing more than the result of the automation of activities related to contract formation and execution. Anything beyond that is mere sophistication and, often, marketing to sell services.
At most, one can expand the idea to legal facts in general, but the idea is always that of automating activities, something that is not new. Just think of the practice of the post-dated cheque. Although the cheque is a credit instrument for payment on sight, a custom was created, based on an agreement between the parties, of writing a future date on the cheque so that the bearer would only cash it on that agreed date. However, nothing would prevent the bearer from presenting the cheque before that date. Currently, many operations can be scheduled on banks’ internet portals. A more sophisticated example is the scheduling of operations on the stock exchange, in which the user can pre-program an order to buy a certain stock if its market price reaches a certain level.
This last example is interesting because there is not just one order of contractual execution. The order comprises the actual formation of the purchase contract as well as the execution of the corresponding payment. The novelties that have begun to emerge more recently concern only the possibility of automating other activities.
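In code, such a pre-programmed order is nothing more than a condition agreed in advance, as in this hypothetical sketch of the stock order example (prices and parameters are made up):

def limit_buy_order(target_price: float):
    # The human will is expressed once, here; execution is then automatic.
    def on_price_update(current_price: float) -> str:
        if current_price <= target_price:
            return "BUY"   # forms the contract and triggers the payment
        return "WAIT"
    return on_price_update

order = limit_buy_order(target_price=10.00)
print(order(10.50))  # 'WAIT'
print(order(9.90))   # 'BUY', executed without further human intervention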
What is important to note is that there will always be pre-programming for the automatic execution of certain activities. The machines only appear to enter into the contract: a manifestation of human will accepting this form of contracting has already taken place beforehand.
There is then a prior agreement between the parties as to how to interpret the activities that will be performed by the machines. Such activities will have legal effects because the parties have so agreed.
A major challenge for the advancement of contracts, acts, and legal facts in general is the fact that the execution of activities often depends on some empirical verification to provide data to the computers. In this case, the difficulty lies in ensuring that the sensors will not make mistakes. For example, how does one differentiate rain from a situation in which water is thrown from a hose or a watering can?
Another challenge seems to have always existed: it appears unfeasible to predict all possible hypotheses. That is why, for such cases, Aristotle affirms that it is necessary to apply equity, which would be an adjustment of the general rules of justice to the concrete case3. This activity may require the most complex use of interpretation, and if such an activity cannot be defined in a general way even for human beings, it is even more difficult to define it for machines. It is humans who will need to pre-determine the solutions to each hypothesis. If something cannot be expressed objectively, it cannot be programmed.
While these restrictions may give the impression that the scope of smart contracts is limited, this impression can be misleading. Some existing applications of smart contracts include the internet of things, automatic ownership, copyright management, and digital commerce (Alharby & Van Moorsel, 2017: pp. 128-129).
As for Blockchain, it is not the only technology that has sought to develop a network guaranteeing the security of digitally recorded data without the intervention of an intermediary. There is, for example, the Tangle, whose technology is based on Directed Acyclic Graphs (DAGs), which for some is a mere evolution of Blockchain (Ribeiro & Mendizabal, 2019: p. 13) and for others a new technology (Rosa, Silva, Marcellin, & Gruber, 2020: pp. 237-239). Just as Bitcoin is based on the Blockchain, Iota intends to be a digital currency based on the Tangle. The latter technology would have been developed to overcome the difficulties of Blockchain and to be applied mainly in the Internet of Things (IoT). The DAG is based on graphs, mathematical structures with multiple applications (Feofiloff, Kohayakawa, & Wakabayashi, 2011; Wikipedia, 2022b). In a DAG, each transaction is represented by a point, which is added to the existing transaction history after validating other transactions; the operation of this network consumes less energy and becomes faster as the number of transactions increases, thus allowing a high degree of scalability (Rosa, Silva, Marcellin, & Gruber, 2020: p. 238).
The set of innovations is so large and their impacts can be so significant that there are those who already claim that this set is part of a new Industrial Revolution.
6. Industrial Revolution 4.0 (and 5.0?)
The use of automated contracts interacts with some innovations that are part of the so-called “Industry 4.0” or “Industrial Revolution 4.0”. The term “4.0” indicates that this Industrial Revolution would be the fourth. The term “Industrial Revolution” refers to a profound transformation in the form of production. This form is always evolving, but some innovations have a more important impact, even changing people’s way of life (Wikipedia, 2022c). They would have a non-linear character, that is, they would be abrupt. According to Klaus Schwab (2016: pp. 18-19), an author usually cited when the expression “Fourth Industrial Revolution” is used, the First Industrial Revolution would be mainly associated with the mechanization of human labor through the steam engine between 1760 and 1840. The second would result from the use of electricity and the production line, which allowed mass production from the end of the nineteenth century to the beginning of the twentieth. The third, according to him, would be characterized by the use of computers and the internet, covering the period from 1960 to 1990.
What would mark the Fourth Industrial Revolution would be the advancement of digital technologies and their integration with other technological innovations, not only physical but also biological. There would thus be a convergence of discoveries at a speed never seen before (Schwab, 2016: pp. 19-20). These innovations indicate three megatrends, in Klaus Schwab’s view (2016: pp. 26-35): 1) physical; 2) digital; and 3) biological. The physical megatrend would be present in the tangible world, represented by autonomous vehicles, 3D printing, advanced robotics, and new materials (including nanomaterials). The digital megatrend would appear in the internet of things (which is linked to the physical megatrend), in collaborative technologies such as Blockchain, and in the technological platforms that shape the so-called on-demand or sharing economy (Uber, AirBnB, Amazon, etc.). Finally, the third, biological megatrend would lie in genetics, including synthetic biology and genetic engineering (using Clustered Regularly Interspaced Short Palindromic Repeats - CRISPR4, for example), the manufacture of living tissues, and the incorporation of devices into the body.
There are already those who mention a Fifth Industrial Revolution, or Revolution 5.0. If Revolution 4.0 apparently placed humanity in a position of dependence on technology, losing its role to automation, Revolution 5.0 seeks to regain human control without disregarding the potential of technology. In other words, the idea is to put humans and technology to work collaboratively to make processes more efficient and intelligent. This new stage would include: 1) the use of virtual and augmented reality; 2) sophisticated safety and work equipment; 3) computer recognition of voice and gestures; 4) virtual models of physical objects (digital twins) and simulations; 5) robots that work together; 6) integration between human and machine; 7) high adaptability of machines; 8) reduction of prices and sizes; 9) easy and intuitive use; 10) expansion of automated activities; and 11) increased security (Musarat, Irfan, Alaloul, Maqsoom, & Ghufran, 2023).
One aspect of this new Industrial Revolution has been noticeable for some time: the phenomenon of convergence, that is, the integration of utilities. This phenomenon is well represented by the mobile phone. This device was born simply as a telephone; over time, other functions were integrated into it, such as the agenda, clock, alarm, camera, video camera, radio, and TV, to name just a few. This convergence is advancing further, coming ever closer to the human body so as to become portable or wearable. This is possible thanks to the discovery of new, lighter, more resistant, and more efficient materials, as well as the miniaturization of devices, that is, nanotechnology. Many of these devices are mere helpers that expand human capacities: diaries expand memory capacity, calculators increase efficiency in performing calculations, and means of communication make contact between people instantaneous.
Taking only the equipment, the combination of automation and connection allows, as already mentioned above, the conclusion of relatively automatic contracts. It is possible, for example, to program a refrigerator to purchase a certain product when a sensor detects that the product is running low. In the same way, sensors can be installed on a farm to detect rain or drought so that measures can be adopted automatically: sensors can trigger irrigation equipment in the event of drought, or contract certain inputs that can only be applied in case of rain. A vehicle can submit a battery purchase order over the internet when it detects that the installed battery is at the end of its life.
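The farm example can be sketched as follows (the threshold and actions are hypothetical, chosen only to illustrate the sensor-triggered automation just described):

SOIL_MOISTURE_MIN = 0.30  # assumed threshold, for illustration only

def on_sensor_reading(moisture: float) -> str:
    # A sensor reading automatically triggers an action, with no human
    # command; the same pattern could submit a purchase order instead.
    if moisture < SOIL_MOISTURE_MIN:
        return "START_IRRIGATION"
    return "IDLE"

print(on_sensor_reading(0.25))  # 'START_IRRIGATION'
print(on_sensor_reading(0.45))  # 'IDLE'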
These examples give only a small sense of what is called the “Internet of Things” (IoT): the combination of equipment, automation, and connection, which can be associated with other technologies, such as Big Data, to name just one (Magrani, 2018: pp. 19-24).
At the same time that many things begin to carry out activities that were previously unimaginable, a new phenomenon has been emerging in a somewhat paradoxical way: people come to realize that they only need the utilities provided by some things, and not the things themselves; that is, ownership of the thing used ceases to be essential.
This phenomenon is noticed, for example, in the use of public transport, but has become more evident in the use of transport by apps. It is even more important when it comes to goods subject to a high degree of obsolescence, such as computers.
In fact, in the field of computers, a new service has emerged to offer large processing and storage capacity in a relatively more secure way than that provided by personal computers. It is the concept of “Cloud Computing”. The idea is simple: instead of owning expensive equipment with a lot of processing and storage power, you pay only for the processing and storage service. Hence, the company that provides the service is responsible for updating equipment and programs to keep state-of-the-art processing and storage capacity always up to date and available, as well as protection services against intrusion and data loss (Taurion, 2009: pp. 1-13).
By the way, there is a computing technology that, in the short term, will probably be offered only as a service and not through the supply of equipment: Quantum Computing. It arises from two phenomena. On the one hand, following so-called Moore’s Law, it has been possible to manufacture ever smaller processors, with greater processing capacity and fewer and fewer atoms needed to represent a single bit. On the other hand, when processors become very small, at the limit with a single atom representing a bit, the traditional laws of physics used by ordinary computers to record the binary system (with zeros and ones) give way to the laws of quantum physics, in which a single bit, the quantum bit or qubit, can, in a way, represent zero and one superimposed at the same time, owing to the probability of both outcomes at the atomic level (Mattielo, 2012). The notion is somewhat complex, but the important point is that this phenomenon multiplies computational capacity immensely.
What are the impacts of this? Two can be cited. In a classical optimization problem, the problem is described by a mathematical function and, analytically, through differential calculus, the optimal point of the function is sought, for example, to save materials or energy. Often the function requires many calculations, and the computer is used to perform them faster. A quantum computer would not only perform such calculations immensely faster, but could dispense with much of the work of analysis and mathematical modeling: it would be enough to enter the expected result into the computer, which, by brute force, that is, by mere trial and error, would find that result extremely quickly. The second example is quite worrying. All password-based cybersecurity today rests on probabilities and on the processing time that a current average computer would take to break such a lock. Quantum computers would pierce such security systems without any difficulty. To give a comparative idea, while a classical computer would take 100,000 years to perform a certain factoring task, a quantum computer would take only 4.5 minutes (Mattielo, 2012: p. 37).
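For reference, the superposition just mentioned is usually written, in standard textbook notation (included here only to fix the idea; it is not drawn from the cited source), as |ψ⟩ = α|0⟩ + β|1⟩, with |α|² + |β|² = 1, where a measurement yields 0 with probability |α|² and 1 with probability |β|². A classical bit corresponds to the special cases α = 1 or β = 1.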
A final concept that deserves mention to close these conceptual aspects of the innovations dealt with here is that of the Non-Fungible Token (NFT) (Fairfield, 2021; Wang et al., 2021). There are several translations for the word token, such as sign, proof and pass. The idea is of one thing that indicates or proves something else. On the surface, it can be said that a token is proof that someone is the owner of something. It so happens that the token can be the titled thing itself, in digitized form. To better understand this idea, a good example is a work of art or, more specifically, a painting. Usually, paintings are unique, and it is this uniqueness or rarity that, in addition to the talent of the creator, guarantees the value of the work. This same work can be photographed and digitized, or eventually even created directly on the computer. It so happens that, from the moment it is digitized, such a work is easily copied, which destroys its rarity. In fact, it is this ease of copying that has generated the piracy market for books, movies, music, video games and computer programs, to give a few examples. These intellectual creations continue to have use value and economic value, but piracy, precisely because copying costs virtually nothing, often strips such goods of all monetary value.
As seen above, Blockchain was the first advance in the digital medium to ensure the inalterability of digital records. But, taking Bitcoin as an example, although a transaction, and therefore a balance of bitcoins held by a certain person, can be kept securely unchanged until that person decides to make a transfer, the fact is that each bitcoin is fungible in relation to any other and has the same value. It was necessary, then, to go further and, in addition to ensuring inalterability, also guarantee uniqueness, exclusivity and, therefore, rarity in the digital environment. This, too, was made possible by Blockchain technology. Thus, when digitizing an artistic creation, such as a song, it becomes possible to keep this record not only unalterable, but unique and exclusive. It is in this context that NFTs were born, which initially found application precisely in the field of works of art. NFTs can either be a fully digital asset or merely a record that refers to something in the physical world (Fairfield, 2021: pp. 14-24).
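The contrast between fungible and non-fungible records can be illustrated with a minimal sketch (hypothetical code, not the API of any real blockchain): a fungible ledger tracks only quantities, while an NFT ledger tracks the identity and ownership of each individual token.

```python
import hashlib

# Hypothetical, simplified ledgers -- not a real blockchain API.

# Fungible (Bitcoin-like): only quantities matter; any unit equals any other.
fungible_balances = {"alice": 2.5, "bob": 1.0}

def transfer_fungible(sender: str, receiver: str, amount: float) -> None:
    """Move an amount; which 'units' move is meaningless, as all are identical."""
    assert fungible_balances.get(sender, 0.0) >= amount, "insufficient funds"
    fungible_balances[sender] -= amount
    fungible_balances[receiver] = fungible_balances.get(receiver, 0.0) + amount

# Non-fungible: each token is a unique record, here identified by the hash
# of the digitized work (or of a reference to a physical object).
nft_owners: dict[str, str] = {}

def mint_nft(digital_work: bytes, owner: str) -> str:
    """Register a unique token for a digitized work; the same bytes always
    yield the same identifier, so the record stays unique."""
    token_id = hashlib.sha256(digital_work).hexdigest()
    assert token_id not in nft_owners, "this work is already tokenized"
    nft_owners[token_id] = owner
    return token_id

token = mint_nft(b"score and recording of a song", "alice")
nft_owners[token] = "bob"  # transferring the token transfers this specific work
print(token[:16], "->", nft_owners[token])
```

The design point of the sketch is that deriving the token identifier from the digitized content itself is one simple way to make the record unique: the same work cannot be tokenized twice.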
NFTs point not only to a new way of registering property, but also to new objects of property rights, which can generate new reflections on public goods and public procurement.
7. Final Thoughts: Optimization Is the Link among the Innovations
The concepts treated above lead to reflection on how society will be affected and, especially, on how it can benefit from these innovations.
One institution that perhaps provides the best laboratory for demonstrating these effects is the Public Administration. There, the scarcity of material and human resources, on the one hand, and the growing demand for public services, on the other, show that only through innovation is it possible to satisfy this demand.
Most importantly, the concepts analyzed seem to share a common point: the search for optimization.
Looking back at the topics covered, it was seen that Industrial Revolutions 4.0 and even 5.0 seek to integrate humans and tools to perform more activities, faster and better.
With people interacting more frequently and in the digital medium, Blockchain ensures that the records of their acts are not altered, allowing their agreements to be executed automatically through smart contracts.
Computer programs even make it possible to pre-determine when machines will sign and execute these contracts.
And there are programs that allow humans to make better decisions based on more information, when it is not the machine itself that already makes those decisions.
These tasks, performed by increasingly “intelligent” machines, are only possible thanks to digitized algorithms, which provide the basic instructions for their processing.
The technology and science that underpin these innovations allow for their continuous improvement.
Since the Public Administration has been used here as an example, it also provides inputs for further reflection and research.
The relationship between Public Administration and new technologies is not limited to their mere use by the Administration. These technologies have the capacity to change the very concept of administration; they can lead to the creation of new public services and even to the extinction of traditional bodies and services. The extinction of jobs in the private market may generate new social demands on the Government. Questions that were not even asked before are now knocking at the door and will need to be faced. The sharing of personal data will allow for more appropriate and efficient services, but it also poses great risks to data subjects. Digitally available services facilitate people’s access, but they also enable cyberattacks. The regulatory role of the State will need to be fundamentally improved, seeking to ensure security and development at the same time. New boundaries will be drawn for the roles of society, the individual and the State (Schwab, 2016: pp. 75-110).
But one should not fail to remember the dark side that these tools can bring. In the same way that they allow the Government to analyze preferences and to predict and improve public policies, they can also improperly probe personal information, invading citizens’ privacy, and can manipulate people with misleading information (Margolis, 2017: p. 12).
But these questions are left for future research.
NOTES
1. There are some imperfections in the spelling due to the limitations of the computer fonts used.
2. A Turing machine is a mathematical object, an idealized mathematical construction of a generic character for problem solving (Porto Editora, 2022; Fonseca Filho, 2007: pp. 80-84).
3. “What gives rise to the problem is the fact that what is equitable is just, but not what is legally just, but a correction of legal justice. The reason for this is that every law is universal, but it is not possible to make a universal statement that is correct in certain particular cases. In cases, therefore, where it is necessary to speak universally, but it is not possible to do so correctly, the law takes into account the most frequent case, although it does not ignore the possibility of error in consequence of this circumstance. And this procedure is not without correctness, for the error is not in the law or in the legislator, but in the nature of the particular case, since practical matters are by nature of this kind.
Wherefore, when the law lays down a general law, and a case arises which is not covered by that rule, then it is right (since the legislator has failed and erred by excess of simplicity) to correct the omission by saying what the legislator himself would have said if he had been present, and which he would have included in the law if he had foreseen the case in question.
Therefore what is equitable is just and superior to a kind of justice, although it is not superior to absolute justice, but to the error arising from the absolute character of the legal provision. Thus, the nature of the equitable is a correction of the law when it is deficient by reason of its universality. That is why not all things are determined by law: it is impossible to lay down a law concerning some of them, so that a decree is necessary. In fact, when a situation is indefinite, the rule is also indefinite, as is the case with the lead ruler used by the builders of Lesbos to adjust the frames; the ruler adapts to the shape of the stone and is not rigid, in the same way as the decree adapts to the facts.” (Aristotle, 2003: p. 125)
4. About CRISPR, see: Du & Qi, 2016; Vyas & Bernstein, 2019; Barbosa & Cavalcante, 2019.