On Knowledge: Proposing a Knowing-Understanding Dialectic in the Context of Finding More Cohesive Epistemic Approaches of Truth

Abstract

Given the major advances in science, especially in the quantum fields of study, the epistemic approach to doing science has changed drastically. It is therefore reasonable and necessary to construct a model of thought capable of explaining possible solutions. Such a solution can be articulated through a new dialectical method that offers a descriptive account of how knowledge is processed and handled cognitively, and of the roles that synthetic apriorisms and intuition play in this process.

1. Introduction

In the context of learning, epistemic approaches have developed both from the modern cognitive sciences and from philosophy, notably Kant and Hegel, and these have aided the development of new ways of thought. In the modern era, the influences of both philosophy and neuroscience are embedded as building blocks of science itself. The scientific approach is an epistemic methodology through which hypotheses are formed with the aid of empiricism, through experiments, and through the way we understand understanding itself, via neuroscience and philosophy. Because of this, it is necessary for scientists to understand a concept across time and through multiple perspectives (Parrini, 2012; Solomon, 1974).

Even though it is scattered and lacks a collectively accepted structure and methodology, the scientific method has proved itself to be the best epistemic approach to understanding how the world works (Russ, 2014). During the development of classical physics, it was indispensable and far more structured. However, with the rise of relativity and the quantum sciences, it became harder and harder to apply the same methodology and to express the same certainty when explaining phenomena. To better understand these claims, it is best to explain how the scientific method works in each case.

As stated before, during the period in which classical physics was developed, much of the work revolved around explaining concepts through specific empirical methods, such as practical experiments. Based on the outcome of the experiments, a law was formulated from the scientists’ understanding. From an epistemic perspective, however, such a “law” can be seen in this context as a mere aggregate of practical cases, and thus not a true law, since it is impossible to grasp through empirical experimentation all the cases in which a phenomenon can be understood. This is because all the contexts and cases through which a phenomenon may manifest cannot be practically experienced within a finite timeframe (Bird, 2004).

However, with the development of quantum mechanics, the approach of the scientific method changed drastically, because this field revolves around understanding the building blocks that constitute matter. The process itself is difficult to complete, as empirical experimentation requires a level of technological complexity that is hard to attain. Another reason is the impossibility of understanding a process through the typical parametrisation of a physical body, because of the uncertainty principle, which will be discussed in the following chapters. Because of these limitations, results have to be reached by inverting the steps through which the scientific method works. If the classical sciences require an empirical approach for the purpose of creating a theory of nature, the quantum sciences require a mathematical theory through which empirical processes are evaluated, where this is even possible. With this in mind, one must take into account the uncertainty principle, which is nothing more than an extrapolation of mathematical laws. In general, uncertainty principles refer to a meta-theorem in Fourier analysis stating that a nonzero function and its Fourier transform cannot both be localized to arbitrary precision, an aspect that will be approached in a dedicated chapter of this paper (Cohen, 2001).

Given the circularity between the need to define truth and the need to create a methodology for defining truth, it becomes necessary to spell out the relationship between truth as a concept and the other mental processes that might play a role in defining it. Thus, it is important to grasp the degree of interaction between the conception of knowledge and the process of understanding.

The following chapters explain how truth can be understood in the current world, set out a thesis of knowledge and its corresponding antithesis, and argue how a synthesis can be formed from the previous inferences, concluding with how this way of thought can make artificial intelligence and the human mind more easily understood through an original insight into the context discussed above.

2. The Mathematical Principles of Science and Synthetic Apriorisms in Grasping Universal Truths

As already discussed, the quantum era produced a shift in perspective with regard to understanding the natural world, which drastically altered the scientific method. Furthermore, one consequence of this dramatic shift was a complex scattering of scientific methodology.

This statement can be further explained given the epistemic nature of science itself. A quantum process is hard to understand using an empirical approach, since observing subatomic particles directly is, to an extent, impossible. Furthermore, even observing subsequent subatomic behavior requires solid theoretical grounds, which were initially supplied by emphasizing the theoretical consequences of already established physical laws rather than empirical evidence. For example, understanding wave-particle duality, the behavior of particles, became possible through Einstein’s photoelectric effect and Young’s double-slit experiment.

The conclusion that light can be a wave was primarily based on the results of interference and diffraction experiments, with Thomas Young’s double-slit experiment being a pivotal demonstration built on Huygens’ principle. Christiaan Huygens proposed that light travels as a wave. His principle suggested that every point on a wavefront acts as a source of secondary spherical wavelets, and the sum of these wavelets forms the next wavefront. This wave model could explain reflection and refraction, but not, at that time, interference or diffraction. Against this background, in Young’s experiment light was shone through two closely spaced slits onto a screen. Instead of two bright spots corresponding to the slits, an interference pattern of alternating bright and dark fringes was observed on the screen. The conclusion was that this pattern could only be explained if light behaved as a wave, with waves from the two slits interfering constructively and destructively.
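To make the fringe pattern concrete, the following minimal numerical sketch (with illustrative values for the wavelength, slit separation, and screen distance that are assumed here and do not come from Young’s original setup) evaluates the standard two-slit intensity relation, in which bright fringes appear wherever the path difference between the slits is a whole number of wavelengths:

```python
# Two-slit interference intensity, I/I_max = cos^2(pi * d * y / (lambda * L)).
# All numerical values below are illustrative assumptions, not historical data.
import numpy as np

wavelength = 600e-9        # assumed wavelength of the light, 600 nm
slit_separation = 0.25e-3  # assumed distance between the two slits, 0.25 mm
screen_distance = 1.0      # assumed slit-to-screen distance, 1 m

y = np.linspace(-5e-3, 5e-3, 9)                          # positions on the screen (m)
path_difference = slit_separation * y / screen_distance
intensity = np.cos(np.pi * path_difference / wavelength) ** 2

for yi, Ii in zip(y, intensity):
    # maxima (I/I_max = 1) occur where the path difference is m * wavelength
    print(f"y = {yi * 1e3:+.2f} mm   I/I_max = {Ii:.2f}")
```

The alternating maxima and minima printed by the sketch are the numerical counterpart of the bright and dark fringes Young observed, which a purely corpuscular model of light cannot produce.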

Einstein’s photoelectric effect was also a pivotal point through which scientists arrived at an initial understanding of the quantum world. Its objective was to explain the emission of electrons from metal surfaces when exposed to light. The observation was that the kinetic energy of the emitted electrons depended on the frequency of the incident light, not on its intensity. It was concluded that light must also have particle-like properties (quanta of energy called photons) to explain how light energy is transferred to electrons.
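As a brief numerical illustration of this frequency dependence, the sketch below applies Einstein’s relation for the maximum kinetic energy of an emitted electron, E_k = hf − φ. The work function value is an assumption chosen to resemble a sodium-like metal; none of the numbers are taken from the original experiments:

```python
# Photoelectric effect: E_k = h*f - phi. Below the threshold frequency no
# electrons are emitted, regardless of the light's intensity.
h = 6.626e-34            # Planck constant, J*s
J_per_eV = 1.602e-19     # joules per electronvolt
work_function_eV = 2.3   # assumed, sodium-like work function (eV)

for freq in (4.0e14, 6.0e14, 8.0e14):                 # incident light frequencies, Hz
    kinetic_eV = h * freq / J_per_eV - work_function_eV
    emitted = kinetic_eV > 0
    print(f"f = {freq:.1e} Hz -> E_k = {max(kinetic_eV, 0.0):.2f} eV, emission: {emitted}")
```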

Because of this, the study of subatomic particles’ behavior rested on experimental approaches that tested a property during an interaction rather than the nature of the particle in itself. When it came to demonstrating these properties, the approaches were grounded in what was made plausible by the conclusions of earlier studies that could not fully explain a phenomenon with what was known so far, rather than in an empirical form of experimentation from which existence itself was concluded.

Rather than speaking exclusively about the properties of subatomic particles, scientists concluded their existence (the nature of the particles in themselves) in the same manner. For example, J. J. Thomson discovered the electron through experiments with cathode rays, showing that atoms contain smaller, negatively charged particles: a cathode ray tube showed that cathode rays (streams of electrons) were deflected by electric and magnetic fields, indicating that they were particles with mass and charge. Likewise, Rutherford proposed the nuclear model of the atom based on the gold foil experiment, in which alpha particles were deflected by a dense, positively charged nucleus. In that experiment, alpha particles were directed at a thin gold foil and the scattering patterns were observed, leading to the conclusion that most of the atom’s mass is concentrated in a small, central nucleus. James Chadwick discovered the neutron, a neutral particle within the nucleus, explaining the mass of the nucleus and leading to a more complete understanding of atomic structure. In his experiment, beryllium was bombarded with alpha particles and neutral radiation was detected, which was later identified as neutrons. Most of the remaining subatomic particles were observed using particle accelerators, after their development.

All of the above experiments were developed out of the need to test a specific postulate drawn from conclusions identified through previous research. Rather than a particle being discovered in itself, its nature was described indirectly through macroscopic experiments. This pattern changed drastically against the background of technological limitations. Thus, the epistemic accounts of the quantum sciences are more scattered than classical scientific methods, because scientists need a theoretical conclusion that can then be tested, rather than an empirical experiment that is theoretically modelled afterward.

Paul Dirac developed a relativistic equation for the electron, which implied the existence of a particle with the same mass as the electron but the opposite charge (the positron). The positron was later discovered by Carl Anderson in 1932 during cosmic ray experiments. This discovery was driven by Dirac’s theoretical prediction, not by an initial empirical anomaly.

Peter Higgs and others proposed the existence of the Higgs boson as part of the mechanism that gives mass to elementary particles within the framework of the Standard Model of particle physics. Decades later, in 2012, the Higgs boson was discovered at the Large Hadron Collider (LHC). The experiments at the LHC were designed specifically to test the theoretical predictions regarding the Higgs boson.

Quantum science therefore frequently starts with theoretical models that predict new phenomena, which are then tested empirically. Examples like the uncertainty principle, the positron, and the Higgs boson show how theoretical conclusions necessitate the design of specific experiments for validation. This epistemic difference arises partly because quantum phenomena are often not directly observable in the same way classical phenomena are, requiring more abstract theoretical frameworks to guide empirical investigation.

Most quantum scientists came up with a mathematical description of a phenomenon first, since it was impossible to reproduce the empirical character of the experiments that science was accustomed to. As an example, Einstein relied largely on thought experiments to argue for relativity, because empirical methods were out of reach given the technological means of his time. A century later, however, it has been shown that almost all aspects of his theory were correct (Barish & Weiss, 1999). In other words, Einstein was able to mathematically model the true behavior of various celestial objects and of the space-time continuum alike. Most quantum scientists did the same thing, in the same context of technological limitations.

One of the most pervasive contexts, and probably the most important, was the understanding of uncertainty. The Uncertainty Principle, formulated by Werner Heisenberg in 1927, is a fundamental concept in quantum mechanics that revolutionized our understanding of the physical world. At its core, the Uncertainty Principle asserts a fundamental limit to the precision with which certain pairs of physical properties of a particle, such as position and momentum, can be simultaneously known (Busch et al., 2007).

This principle challenges the understanding of classical physics, where the position and momentum of a particle can be precisely determined. The mathematical foundation of the Uncertainty Principle stems from the wave-particle duality inherent in quantum mechanics. In quantum mechanics, particles such as electrons are described not only as point-like objects with well-defined positions and momenta but also as waves with characteristic wavelengths. This dual nature is encapsulated in the wave function, which represents the probability amplitude of finding a particle at a given position (Busch et al., 2007).

The position operator, denoted by $\hat{x}$, acts on the wave function to extract the position information of a particle. Similarly, the momentum operator, denoted by $\hat{p}$, extracts the momentum information. These operators do not commute, which means their order of application matters. Mathematically, their commutator is given by (Busch et al., 2007):

$[\hat{x}, \hat{p}] = \hat{x}\hat{p} - \hat{p}\hat{x} \neq 0$ (1)

This non-zero commutator is at the heart of the Uncertainty Principle. According to the principles of quantum mechanics, the uncertainty in measuring the position of a particle (Δx) and the uncertainty in measuring its momentum (Δp) are related by the following inequality, known as the Heisenberg Uncertainty Principle (Busch et al., 2007):

$\Delta x \times \Delta p \geq \hbar/2$ (2)

where ℏ is the reduced Planck constant, a fundamental constant of nature. This inequality implies that the product of the uncertainties in position and momentum cannot be smaller than a certain value, and the more precisely one property is measured, the less precisely the other can be determined (Busch et al., 2007).

The reduced Planck constant can also be written in terms of the Planck constant, as follows (Busch et al., 2007):

$\hbar = h/2\pi$ (3)

where h is the Planck constant.

Thus, Equation (2) becomes:

$\Delta x \times \Delta p \geq h/4\pi$ (4)
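Equation (1) can be checked symbolically. The short sketch below (written with the sympy library purely as an illustration; it is not part of the cited derivations) applies $\hat{x}\hat{p} - \hat{p}\hat{x}$ to an arbitrary wave function $\psi(x)$ and recovers $i\hbar\,\psi(x)$, confirming that the commutator is nonzero:

```python
# Symbolic check of [x, p] acting on an arbitrary wave function psi(x),
# with x as multiplication by x and p as -i*hbar*d/dx.
import sympy as sp

x, hbar = sp.symbols('x hbar', real=True)
psi = sp.Function('psi')(x)

x_op = lambda f: x * f                          # position operator
p_op = lambda f: -sp.I * hbar * sp.diff(f, x)   # momentum operator

commutator = x_op(p_op(psi)) - p_op(x_op(psi))  # (x p - p x) psi
print(sp.simplify(commutator))                  # I*hbar*psi(x), so [x, p] = i*hbar != 0
```

Since the result is $i\hbar\,\psi(x)$ for every $\psi$, the inequalities in Equations (2) and (4) follow from the general operator form of the uncertainty relation.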

To understand this concept intuitively, consider a wave packet representing a particle. A wave packet localized in space corresponds to a well-defined position but contains a range of wavelengths, leading to uncertainty in momentum. Conversely, a wave packet with a well-defined momentum consists of a range of positions, leading to uncertainty in position. This inherent trade-off between position and momentum uncertainties lies at the heart of the Uncertainty Principle.

In wave mechanics, the uncertainty principle governing the relationship between position and momentum emerges from the Fourier transforms of the wavefunction in the respective orthonormal bases in Hilbert space (Busch et al., 2007). This arises due to the conjugate nature of position and momentum, whereby a nonzero function and its Fourier transform cannot both be sharply localized simultaneously. This trade-off in localization is a fundamental characteristic of systems analyzed through Fourier analysis, such as sound waves (de Bruijn, 1967). For instance, a pure tone is a sharply defined spike at a single frequency, while its Fourier transform depicts the wave’s shape in the time domain as a completely spread-out sine wave. In the realm of quantum mechanics, the uncertainty principle is deeply rooted in the wave-particle duality. The particle’s position is described by a matter wave, and its momentum serves as its Fourier conjugate, as indicated by the de Broglie relation (Busch et al., 2007):

$p = \hbar k$ (5)

where k represents the wavenumber.
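The Fourier trade-off described above can also be observed numerically. The sketch below (grid sizes and packet widths are assumptions chosen only for illustration) builds Gaussian wave packets of increasing spatial width, takes their discrete Fourier transforms, and shows that the product of the position spread and the wavenumber spread stays at the minimum value of 1/2, which through Equation (5) corresponds to $\Delta x \, \Delta p = \hbar/2$:

```python
# Numerical illustration of the position/wavenumber trade-off for Gaussian
# wave packets: sigma_x * sigma_k stays at the minimum value 1/2.
import numpy as np

x = np.linspace(-50.0, 50.0, 4096)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(x.size, d=dx)     # wavenumber grid conjugate to x

for sigma_x in (0.5, 1.0, 2.0):                  # progressively wider packets in position space
    psi = np.exp(-x**2 / (4 * sigma_x**2))       # Gaussian wave packet (unnormalized)
    prob_x = np.abs(psi)**2 / np.sum(np.abs(psi)**2)
    phi = np.fft.fft(psi)                        # momentum-space amplitude
    prob_k = np.abs(phi)**2 / np.sum(np.abs(phi)**2)
    spread_x = np.sqrt(np.sum(prob_x * x**2))    # standard deviation in position
    spread_k = np.sqrt(np.sum(prob_k * k**2))    # standard deviation in wavenumber
    print(f"sigma_x = {sigma_x}: spread_x * spread_k = {spread_x * spread_k:.3f}")
```

Narrowing the packet in position (smaller sigma_x) broadens it in wavenumber by exactly the compensating amount, which is the quantitative content of the localization trade-off invoked above.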

In the formalism of matrix mechanics, which is a mathematical formulation of quantum mechanics, any pair of non-commuting self-adjoint operators representing observables exhibits similar uncertainty constraints. An eigenstate of an observable corresponds to a specific measurement value (the eigenvalue) of the wave function. For example, if a measurement of observable A is performed, the system assumes a particular eigenstate ψ of that observable. However, this eigenstate of observable A may not necessarily be an eigenstate of another observable B. Consequently, it lacks a uniquely associated measurement value for observable B, as the system does not reside in an eigenstate of that observable.
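A small matrix-mechanics illustration of this point is given below, using the Pauli spin matrices as an assumed example (they are not discussed in the text above): a state with a definite value of one observable has no definite value for a second, non-commuting observable.

```python
# Two non-commuting observables: an eigenstate of A (sigma_z) is not an
# eigenstate of B (sigma_x), so a measurement of B has no definite value.
import numpy as np

sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)   # observable A
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)    # observable B

print("[A, B] =\n", sigma_z @ sigma_x - sigma_x @ sigma_z)   # nonzero commutator

psi = np.array([1, 0], dtype=complex)     # eigenstate of A with eigenvalue +1
print("A psi =", sigma_z @ psi)           # proportional to psi: definite value for A
print("B psi =", sigma_x @ psi)           # not proportional to psi: no definite value for B

mean_B = np.real(psi.conj() @ sigma_x @ psi)
var_B = np.real(psi.conj() @ (sigma_x @ sigma_x) @ psi) - mean_B**2
print("Delta B =", np.sqrt(var_B))        # maximal spread (1.0) for observable B in this state
```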

Throughout the history of the quantum sciences, cases are highly common in which mathematical theorems helped scientists formalize rules without the capacity for experimentation, and those rules were later taken to exemplify theories of natural phenomena, as in the example described above, which is firmly embedded in mathematical rules. Because of this, it is better to dive deeper into how this can be understood from a philosophical perspective in the context of attaining knowledge.

The interplay between mathematical theorems and empirical observation has sparked profound philosophical contemplation. Within this realm, a Kantian perspective offers a lens through which to explore the role of apriorisms (a priori truths independent of experience) in shaping our understanding of natural phenomena. Kant’s epistemology, as delineated in his seminal work “Critique of Pure Reason,” posits the existence of a priori forms of knowledge that structure our perceptual experience and underpin scientific inquiry (Kant, 1908).

Central to Kant’s philosophy is the notion of synthetic apriori knowledge, propositions that are both informative and necessary, yet not derived from experience. These apriori truths serve as the framework through which we interpret and organize sensory data. From a Kantian standpoint, the mathematical theorems employed in quantum sciences can be regarded as exemplifying such synthetic apriori knowledge (Kant, 1908). They provide the conceptual scaffolding upon which scientific theories are constructed, guiding our understanding of the natural world.

In the context of quantum phenomena, where traditional modes of empirical observation often falter due to the inherent indeterminacy of quantum systems, reliance on apriori principles becomes particularly salient. Mathematical formalisms, such as those encapsulated in the equations of quantum mechanics, offer a means of grappling with the elusive behavior of subatomic particles and, subsequently, the behavior of the most fundamental forms of manifestation of reality. These formalisms, rooted in abstract mathematical structures, enable scientists to model and predict the behavior of quantum systems with remarkable accuracy, despite the absence of direct empirical verification.

Moreover, Kant’s transcendental idealism underscores the role of the mind in shaping our conception of reality. According to Kant, phenomena—the objects of empirical experience—are apprehended through the categories of the understanding, which structure our perception of the world. In the realm of quantum sciences, mathematical frameworks serve as the medium through which these categories are instantiated, allowing us to grasp the underlying structure of quantum reality (Kant, 1908). From a Kantian perspective, this highlights the inherent limitations of human cognition in grasping the ultimate nature of reality. While apriori principles furnish us with indispensable tools for navigating the complexities of the quantum world, they also underscore the contingent nature of our knowledge. As we venture further into the realm of quantum science, we are compelled to confront the epistemological boundaries that constrain our understanding, prompting us to reassess our philosophical presuppositions and refine our conceptual frameworks in pursuit of a deeper comprehension of the cosmos.

This being said, we can safely affirm that the quantum sciences require apriori insight through mathematical theorems. Given the apriori properties of the mind, intuition can be seen as the engine of understanding processes and attributes in the absence of direct experience; thus, novel scientific inquiries require processes of intuition in order to comprehend the meaning behind a mathematical proposition. This is due to the necessity of using abstract properties, which define mathematical operations, to understand concrete processes, instead of experimenting empirically with concrete natural processes and only afterwards implementing mathematical modeling to describe a phenomenon.

To fully grasp the polarity of these statements, the concept of truth must be explored, given the major advances in the epistemic methodologies through which science is done.

3. Understanding the Concept of Truth in the Modern Epistemic Worldview

In our contemporary intellectual landscape, the concept of truth has undergone a profound evolution, reflecting shifts in our epistemic frameworks and modes of inquiry.

The quest for truth has long been a central preoccupation of human thought, permeating the realms of philosophy, science, and everyday discourse. Traditional notions of truth, rooted in correspondence theories or absolute metaphysical certainties, have gradually given way to a more nuanced and dynamic understanding. In the modern epistemic worldview, truth emerges as a multifaceted and elusive concept, shaped by the complexities of human cognition, cultural contexts, and the iterative nature of inquiry.

At the heart of this modern understanding lies a recognition of the fallibility inherent in human cognitive faculties. As finite beings with limited perspectives, our grasp of reality is necessarily partial and provisional. The recognition of this epistemic humility underscores the need for a more nuanced conception of truth—one that acknowledges the contingent nature of our knowledge and the inherent uncertainties that accompany it.

In contemporary epistemology, truth is often conceived not as an absolute state to be definitively grasped but as a process of approximation and refinement (Williams, 1976; Schantz, 2002; Jago, 2018; Soames, 1999). This perspective finds resonance in the pragmatic theories of truth, which emphasize the practical efficacy of beliefs in guiding action and achieving desirable outcomes. According to this view, truth is not so much a static endpoint as it is a dynamic and context-dependent construct, shaped by the pragmatic exigencies of human inquiry.

Moreover, the advent of postmodern thought has further problematized traditional notions of truth, highlighting the role of power dynamics, language games, and discursive practices in shaping our understanding of reality (Smith, 1996). From this vantage point, truth emerges not as a transcendent ideal but as a contingent and socially constructed phenomenon, subject to the vicissitudes of historical context and cultural interpretation.

Thus, truth can be understood as what is less false than everything else. This conveys the fact that science is a work in progress, and the scientific method can be seen as an open-source endeavor that evolves over time to comprehend increasingly accurate truth values.

Given these conclusions, it is necessary to define a subtle difference between understanding and knowing with regard to how human minds approach different processes and attributes by which interaction with reality and language is possible.

4. Knowledge vs. Understanding

To better understand the point of the dialectic implemented, it is safe to propose a few ways through which the concepts of knowledge and understanding can be alternatively understood to form a cohesive foundation of the problem.

It is desirable to understand the cognitive aspects of how words are used in the context of learning and communication. When using a word, the human mind points to a specific concept through the properties of a group. For example, when talking about apples, the human mind creates an abstract, low-level definition from the properties that are common to all apples, such as shape and its hierarchical relationship with other concepts (being a fruit, or a natural object that grows on trees). Put together, these create a group that defines the concept precisely enough not to be confused with another, similar concept, but loosely enough to include all apples in its definition.

Language learning is first processed in the human mind in early childhood, as a necessary developmental stage. Early childhood is the period in which the human mind hears words and intuitively learns how to use them in specific contexts that are themselves presented through words; thus, both the words and their contexts are initially unknown to the child. Because of this, the process of learning a native language can be equated with a task of self-supervised learning in the context of artificial intelligence.

At their core, self-supervised learning tasks are designed to generate supervisory signals directly from the input data, without requiring explicit annotations. This is achieved through the formulation of pretext tasks, where the model is tasked with predicting certain aspects of the input data that are inherently present but not explicitly labeled. These pretext tasks serve as a form of self-imposed supervision, guiding the learning process by providing informative signals for representation learning. Through iterative optimization, the model gradually learns to extract relevant features that are useful for downstream tasks, such as classification, regression, or generation.

One of the key advantages of self-supervised learning lies in its ability to leverage large amounts of unlabeled data, which are often more abundant and easier to obtain than meticulously annotated datasets. By tapping into this vast reservoir of unlabeled information, self-supervised models can learn rich and nuanced representations that capture the underlying structure of the data. This not only enables the models to achieve state-of-the-art performance on various tasks but also fosters a deeper understanding of the underlying data distribution, thereby enhancing generalization capabilities across different domains and tasks.

Several strategies have been proposed to formulate pretext tasks in self-supervised learning, each tailored to the specific characteristics of the data and the learning objectives. For instance, in image data, pretext tasks such as image inpainting, rotation prediction, and colorization have been widely explored to encourage the model to capture spatial and semantic relationships (Jaiswal et al., 2020). Similarly, in natural language processing, tasks like language modeling, masked language modeling, and next sentence prediction have proven effective in learning contextualized representations from raw text data. By carefully designing pretext tasks that align with the inherent structure of the data, self-supervised learning enables models to learn robust and transferable representations across different modalities and domains.

Moreover, self-supervised learning fosters a more autonomous approach to machine learning, where models can progressively refine their representations through continual learning from unlabeled data streams. This self-directed learning paradigm not only reduces the burden of manual annotation but also empowers machines to adapt to dynamic environments and evolving datasets without human intervention. As a result, self-supervised models exhibit greater flexibility and scalability, making them well-suited for real-world applications in domains such as computer vision, natural language understanding, and reinforcement learning.
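To make the notion of a pretext task concrete, the following sketch is a minimal, hypothetical masked-token prediction setup (written here in PyTorch as an assumed illustration, not a description of any particular system): a fraction of the tokens in otherwise unlabeled sequences is hidden, and the supervisory signal is simply the original tokens themselves.

```python
# Minimal masked-token pretext task: the model sees a corrupted sequence and
# is trained to recover the hidden tokens from their context. No external
# labels are used; the data supervises itself.
import torch
import torch.nn as nn

torch.manual_seed(0)
VOCAB, DIM, MASK_ID = 100, 32, 0   # assumed vocabulary size, embedding width, mask token id

class TinyMaskedLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        # one self-attention layer so each position can use its unmasked neighbours
        # (positional encodings omitted for brevity)
        self.encoder = nn.TransformerEncoderLayer(
            d_model=DIM, nhead=4, dim_feedforward=64, batch_first=True)
        self.head = nn.Linear(DIM, VOCAB)  # predicts the identity of every token

    def forward(self, tokens):
        return self.head(self.encoder(self.embed(tokens)))

model = TinyMaskedLM()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    tokens = torch.randint(1, VOCAB, (8, 16))      # stand-in for a batch of unlabeled text
    mask = torch.rand(tokens.shape) < 0.15         # hide roughly 15% of the positions
    corrupted = tokens.masked_fill(mask, MASK_ID)  # the model only ever sees the corrupted input
    logits = model(corrupted)
    # the supervisory signal comes from the data itself: the original tokens at masked positions
    loss = loss_fn(logits[mask], tokens[mask])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The point of the sketch is only that no external annotation is involved: the corrupted input and the original data jointly provide the training signal, in the same spirit in which, as argued above, a child infers word usage from linguistic context alone.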

However, self-supervised learning is not without its challenges and limitations. Designing effective pretext tasks that facilitate meaningful representation learning remains an active area of research, requiring careful consideration of the data characteristics and the learning objectives. Additionally, self-supervised models may still suffer from issues such as dataset bias, domain shift, and catastrophic forgetting, which can hinder their generalization performance in real-world settings. Addressing these challenges requires interdisciplinary efforts spanning machine learning, statistics, and cognitive science to develop robust and adaptive self-supervised learning algorithms.

However, there is a dramatic difference between self-supervised learning and the way the brain learns language during early childhood: intuitive learning. As already noted in previous chapters, intuition can be seen as the generator and main powerhouse of the understanding of apriori thought. Intelligent systems largely lack the property of intuition, because apriori ideas are not trivial given their mechanism of functioning.

A child is not fully capable of consciously deconstructing a concept to pinpoint all the properties that form it, but is fully capable of learning to use a word as a pointer to a concept when talking or performing a task. This process of intuitively grasping a concept through the meaning behind it, by forming a low-resolution image of what it represents, can be seen, in the context of this paper, as the mechanism of understanding. Thus, “understanding” will be used from now on to denote the intuitive knowledge of reality that does not require consciously defining the concept, or that operates outside this mental property in its absence.

Based on this, knowledge can be classified into two contingent aspects: intuitive knowledge, which has an apriori characteristic because of its triviality, and conscious knowing, which will be described in more detail below as the process of knowing.

Knowing is usually seen as the process of explaining the meaning of a concept in a conscious way. To do such a thing, it is imperative to mentally deconstruct a concept into all the component parts that form it, which are represented by its properties. This mental process requires awareness of one’s thoughts, which is conscious thinking.

Given these aspects, we can adopt the idea that knowing and understanding are the same process regulated by different mental properties. This helps in further understanding the dialectic of knowledge presented below and its consequences for understanding emerging artificial intelligence processes and the human mind in a convergent manner, as well as in identifying the similarities and differences between these two processes, given that neural networks were created to imitate mental properties through mathematical modeling.

5. The Thesis of the Dialectic of Knowledge

Given the ideas already mentioned, it can be concluded that knowing is a process that requires understanding as a building block. It can be affirmed that the knowledge of an object, be it physical or mental, is formed on the basis of the understanding of the object. However, understanding the object requires an intuitive apriori abstraction, which is why the process of understanding always takes place within the self. Knowledge can then be seen as a relationship between the understanding of a concept and the understanding of the nature of subjectivity within one’s self, which is the understanding of one’s intuition. It is safe to consider apriori inferences as understanding processes that are the foundation of the process of knowing. However, this relationship between understanding an object and understanding intuition can be formed on an empirical basis exclusively. Understanding one’s intuition is the process of being aware of, and capable of synthesizing, knowledge that the subject perceives in a trivial manner, while understanding an object is a matter of creating a low-level resolution of all the objects that can be classified as belonging to the same category, without doing so in a conscious manner. Because of this, the idea of a double intuition can be integrated: intuition can be seen as a mental object, embedded in an abstract low-level-resolution conceptualization of what all intuitions have in common, outside conscious thought, while also being seen as the engine of triviality with regard to what it synthesizes. Because of this empirical link between understanding an object and projecting triviality onto the process, a thesis and an antithesis can be formed with regard to how knowledge can be attained.

This essay states that, because of the apriori nature of understanding itself, both the concept that ought to be understood and the unconscious capacity of understanding triviality are apriori processes themselves, born from intuition. However, knowing an object is an iterative process of empirically forming relationships between apriori understandings, which is an aposteriori process in itself.

However, not all understanding has an apriori property. The understanding of one’s own identity, in a Hegelian sense, can be seen as an iterative process of affirming or negating a previously understood concept, with the purpose of rejecting it or integrating it into the concept of the self. Understanding a previous object is an apriori process. Because of this, the understanding of identity (or the self) is always based on empirical grounds, which makes it an aposteriori process.

To form a synthesis of knowledge, it is necessary to formulate what an understanding of understanding itself should mean. The understanding of understanding an object is the process of being aware of the process of understanding an object and knowing intuitively what that process is about. This can also be said about identity: while formulating an aposteriori hierarchical link between understanding an object and understanding the previous properties the self has, through an iterative process of updating one’s own understanding of their self, it is also perfectly correct to consider the capacity of defining one’s self as apriori, because it is not a taught process, but one already known by all people, intuitively. Because of this, the understanding of understanding is always an apriori process. To further explain this affirmation, it is necessary to point out that understanding is also an apriori form of understanding and that the only form of understanding that can be seen as aposteriori is the understanding of the current identity one adopts. However, the process of attributing a property to an identity and the process of knowing which properties or attributes should be replaced by the knowledge of others are apriori properties. Thus, understanding knowing is an apriori process. However, knowing and understanding is not fully possible.

6. Conclusions and Implications

Based on these notions, it can be concluded that most of the implications and hypotheses of the quantum sciences rest on intuitive understanding rather than on forming, through knowledge, a cohesive accumulation of an informational corpus describing a physical process, since the object of knowing cannot be studied directly through experiments. Furthermore, the process of developing language-based communication and understanding the world during the first phases of an individual’s life can also be seen as driven by an intuitive basis, which is the foundational definition of understanding. Thus, it can be affirmed that understanding is the basis of forming a model of the world, and knowledge a form of mechanistically working in the world to produce a set of processes. This dialectic creates a weighted sum of the two processes, through the synthesis, by endorsing intuition as the primary engine through which a mental modeling of the world and the self can be achieved. However, this affirmation implies that the self has an apriori connotation, as it can be understood without necessarily negating or affirming a mutable property or a set of properties extrapolated from the object of attention in the world. By “mutable” is meant a property that can be transferred to the self from an outside object.

Given the approach explained above, a case can be made with regard to what an artificial intelligence model lacks in terms of human cognition. This paper proposes that forming a general artificial intelligence requires the capacity to understand apriori thought in a trivial way, just as humans do through intuition. This pivotal aspect provides the insight that artificial intelligence models need to be embedded with an authentic Dasein to attain general intelligence in the world of machines. So far, it has been concluded that generative methods, such as ChatGPT, possess an inauthentic Dasein, which is why they cannot be considered general intelligence. This step is crucial in developing emergent technologies that can provide more beingness with regard to immersion into the world (Floroiu & Timisică, 2024).

Authentic Dasein, a sufficient condition for rendering a system a general intelligence, can be fully obtained, with regard to modern technologies, by imprinting a mathematical mechanism that revolves around subjectivity, such as immersion coupled with a capacity for creative endeavors, with the weighted value of triviality as the most prominent computationally emergent process.

Conflicts of Interest

The author declares no conflicts of interest regarding the publication of this paper.

References

[1] Barish, B. C., & Weiss, R. (1999). LIGO and the Detection of Gravitational Waves. Physics Today, 52, 44-50.
https://doi.org/10.1063/1.882861
[2] Bird, A. (2004). Philosophy of Science. In J. Shand (Ed.), Fundamentals of Philosophy (pp. 297-325). Routledge.
[3] Busch, P., Heinonen, T., & Lahti, P. (2007). Heisenberg’s Uncertainty Principle. Physics Reports, 452, 155-176.
https://doi.org/10.1016/j.physrep.2007.05.006
[4] Cohen, L. (2001). The Uncertainty Principle for the Short-Time Fourier Transform and Wavelet Transform. In L. Debnath (Ed.), Wavelet Transforms and Time-Frequency Signal Analysis (pp. 217-232). Birkhäuser.
https://doi.org/10.1007/978-1-4612-0137-3_8
[5] de Bruijn, N. G. (1967). Uncertainty Principles in Fourier Analysis. In Inequalities: Proceedings of a Symposium Held at Wright-Patterson Air Force Base (pp. 57-71). Academic Press.
[6] Floroiu, I., & Timisică, D. (2024). A Heideggerian Analysis of Generative Pre-Trained Transformer Models. Romanian Journal of Information Technology and Automatic Control, 34, 13-22.
https://doi.org/10.33436/v34i1y202402
[7] Jago, M. (2018). What Truth Is. Oxford University Press.
https://doi.org/10.1093/oso/9780198823810.001.0001
[8] Jaiswal, A., Babu, A. R., Zadeh, M. Z., Banerjee, D., & Makedon, F. (2020). A Survey on Contrastive Self-Supervised Learning. Technologies, 9, Article 2.
https://doi.org/10.3390/technologies9010002
[9] Kant, I. (1908). Critique of Pure Reason. In R. Schacht (Ed.), Modern Classical Philosophers (pp. 370-456). Houghton Mifflin.
[10] Parrini, P. (2012). Kant and Contemporary Epistemology (Vol. 54). Springer Science & Business Media.
[11] Russ, R. S. (2014). Epistemology of Science vs. Epistemology for Science. Science Education, 98, 388-396.
https://doi.org/10.1002/sce.21106
[12] Schantz, R. (2002). What Is Truth? (Vol. 1). Walter de Gruyter.
https://doi.org/10.1515/9783110886665
[13] Smith, D. E. (1996). Telling the Truth after Postmodernism. Symbolic Interaction, 19, 171-202.
https://doi.org/10.1525/si.1996.19.3.171
[14] Soames, S. (1999). Understanding Truth. Oxford University Press.
https://doi.org/10.1093/0195123352.001.0001
[15] Solomon, R. C. (1974). Hegel’s Epistemology. American Philosophical Quarterly, 11, 277-289.
[16] Williams, C. J. F. (1976). What Is Truth? Cambridge University Press.
https://doi.org/10.1017/CBO9780511753527

Copyright © 2025 by authors and Scientific Research Publishing Inc.


This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.