Understanding Factors of Bias in the Way Information Is Processed Through a Metamemetic and Multilogical Approach

Training in complex thinking is required in fields like computer science and in discussing sensitive topics that can easily polarize internet users. Multilogicality and metamemetic reasoning are strongly suggested as an approach to identifying and analyzing factors related to AI bias and human biases. This approach entails identifying problems and detecting invalid premises, distinguishing them from premises that are valid or that we are uncertain about. The paper focuses on four groups of people: curators, developers, businesses, and users (the fourth group being the main focus). This approach offers a new way to apply critical-thinking strategies in the context of living in a digital age.


Introduction
The context of this paper is essential. Humans have evolved through three replicator points (genes, memes, and techno memes). Memes are essentially meaning we apply to thoughts such as beliefs, values, and traditions [1]. Memes travel through memeplexes (complexes of memes, like tribes, groups, and communities, for instance). They replicate themselves, when successful, and pass their contents with variations from group to group and mind to mind, either vertically (generationally) or horizontally (through direct person-to-person communication, information from the internet, or mass media, online trends/movements, etc.).
The third replicator point has arguably been built over the last three decades. Susan Blackmore coined the term "technomemes," referring to the digital and technological evolution of ordinary memetic patterns, which have more diverse options for replication, prolongation, and fecundity [2] [3]. Memes are pieces of culture containing information associated with a particular meaning [2]. They travel through memeplexes, replicating themselves over millions of years. With this extension of ourselves and of information, we can manipulate our identities.
The contents draw on what Diego Fontanive proposes as "Metamemetic Thinking" [3]. The first section reviews relevant literature on AI bias.
The second offers an approach to understanding information-seeking and information-processing. The third section focuses on identity and explores some attitudes that can be observed and inquired about online. An inquiry could be made, for example, into agnotology, a factor in the fabrication of ignorance online through the production of fake or misleading data, including false social media posts and micro/macro blogging. How do we interact with AI, and how might it influence us exponentially over the next few decades?

Examples of How Memes Might Work
Most of the factors (perception, memory, creativity, etc.) that inform our awareness past the memetic replicator point, and how we respond to stimuli, are acquired, while only a few are informed by genes, resulting in responses like basic instincts and symbiotic proclivities [2]. Although controversial, subjects like love, freedom, the psychological structure of the sense of self, and the sense of faith consist of cognitive contents that we acquire throughout our lives. These memes derive from the ontological (regarding the nature of being), epistemological (regarding the scope and extent of the validity of knowledge), and axiological (regarding how things get valued) structures that solidified the concepts in our thinking. What we think about ourselves is an example of a structure informed by others throughout our lifetime [2]. A relevant inquiry here would be whether it is possible to sustain a form of thinking that is not interfered with by one's conditioning. If we achieved it, the human condition would improve over time (D.F., 2022). Table 1 gives examples of roles and functions of memes that may be observable online and offline.

Reviewing Literature on AI Bias: Development Sector
This literature review covers statistics related to biases entering AI technology, from code to training to algorithms and systems. To be clear, the point of this section is that we do not know the reasons why there is a discrepancy in the statistics. Societal, cultural, and memetic influences shape the responses generated in data collection related to these findings. One reality may be being in a group and not having people inquire about ideas that may need criticism [4]. There is a sense of convenience and belonging.
Thoughts are based on beliefs and confirmation bias.
Memeplexes seem to operate infectiously this way.
Rituals, symbols, behaviors, and attitudes may be repeated to maintain group structure.

Identity
We might be sure of something because we were previously able to label it as something; this can make us feel safe.
An identity creates a sense of who we are (for ourselves and from others), and therefore we assume we cannot be or think in any other way than how the identity is described.
Associating with something, and becoming it, seems to be a way to memetically secure our sense of safety. A set of expectations travels with us no matter where we go, from group to group.
Identifying with something does not mean we are personally associated with it. Authority, semantically, refers to authorship: writing something, such as a rule, which originates from ideas.

Systems and rituals
A certain authority or a certain person has a role and a set of expectations, and anything outside that role isn't expected of them.
There is confusion, functional or dysfunctional stupidity, disorder, behavioral evil, etc. (cite previous paper).
We should be consistent with the reality of roles, why they are there, and their functions, if they are rational. We can't rebel against something just because we disagree with it.
What is useful can be distinguished from what is not useful, and what counts as useful may differ in another time, place, or situation (referring to vertical and horizontal influences).
The literature on bias-reduction strategies (which seems to neglect memetics) is reviewed in the following sections.

Technical Factors in Bias-Reduction
Developers should decide when it is better to implement automated decision-making or human decisions, and if it is human, we must reduce bias in those decisions. Depending on the culture, bias in algorithms might be unlearned or prevented [5]. To check that a dataset is limited in biases, we would need to be aware of our own biases as well as those in the algorithm, which is an arduous task.
A useful approach is using multidisciplinary research, which offers different points of view and cultural knowledge within the AI community [6]. To limit bias in the algorithms, we need to have skilled AI creators from many cultures and walks of life, to vary the perspectives in the creation of the algorithms. Varying the population working on the development of AI will vary the results from search engines.
AI is the ability of a computer to accomplish tasks normally associated with humans. Algorithms are used to mimic the intellectual processes associated with human thought and have attained the levels of human experts and professionals on certain platforms, such as computer search engines. However, algorithms are created by people and this may lead to bias in the selection of information that comes up in a search query. The links that relate to the keywords typed by the user appear on the search results page. Although AI may assist with the speed of attaining information, it is not without bias encoded in algorithms that activate retrieval from data repositories and search engines [7].
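As a toy illustration of how developer choices can encode bias in retrieval, consider a minimal keyword ranker. This is a hypothetical sketch, not any real engine's algorithm: the documents, sources, and weight table are invented, and the point is only that a hand-tuned parameter can silently shape which links appear first.

```python
# Toy illustration: a keyword-match ranker where a developer-chosen
# per-source weight silently shapes the ordering of results.

def rank(documents, query_terms, source_weight):
    """Score each document by keyword overlap with the query,
    scaled by a hand-tuned per-source weight -- the place where
    bias can be encoded."""
    scored = []
    for doc in documents:
        words = set(doc["text"].lower().split())
        overlap = len(words & set(t.lower() for t in query_terms))
        scored.append((overlap * source_weight.get(doc["source"], 1.0), doc))
    return [d for s, d in sorted(scored, key=lambda p: p[0], reverse=True)]

docs = [
    {"source": "outlet_a", "text": "ai bias in hiring"},
    {"source": "outlet_b", "text": "ai bias in hiring and credit"},
]
# Both documents match the query equally, but the weight table
# decides who appears first.
weights = {"outlet_a": 2.0, "outlet_b": 1.0}
results = rank(docs, ["ai", "bias"], weights)
print(results[0]["source"])  # outlet_a ranks first despite equal overlap
```

The user sees only the ordered links, not the weight table, which is why such encoded preferences are hard to detect from the outside.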

Operational Factors in Bias-Reduction
Developers of AI may be unaware that AI query results can depend on input shaped by human limbic-system bias or on a lack of available data in a search engine or repository [5]. Poor Google results stemming from a lack of data arise from limited variables, such as not having enough time to find all possible answers to a query. Another varia-

Organizational Factors of AI Bias
Court decisions, employment, and access to credit are among the many organizational processes that depend on AI-driven systems [9]. We don't know which variable is crucial to look for until we test the algorithm before and after its development. We need to evaluate the results of the algorithm to see which variables are crucial to limiting discriminatory outcomes [8]. By evaluating outcomes in this way, a new algorithm can be established that includes women and minorities.
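The outcome evaluation described above can be made concrete. Below is a hedged sketch: the group names, hiring data, and numbers are invented for illustration, and the "four-fifths" threshold is a common auditing convention rather than a claim about any specific system.

```python
# A minimal sketch of outcome evaluation: compute selection rates
# per group and flag disparate impact with the conventional
# "four-fifths" (0.8) rule. All data below is invented.

def selection_rates(outcomes):
    """outcomes: list of (group, selected_bool) pairs.
    Returns the selection rate per group."""
    totals, selected = {}, {}
    for group, ok in outcomes:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + (1 if ok else 0)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of the lowest to the highest selection rate; values
    below 0.8 are conventionally treated as evidence of adverse
    impact and a reason to re-examine the algorithm."""
    return min(rates.values()) / max(rates.values())

data = [("group_a", True)] * 6 + [("group_a", False)] * 4 \
     + [("group_b", True)] * 3 + [("group_b", False)] * 7
rates = selection_rates(data)
print(rates)                    # {'group_a': 0.6, 'group_b': 0.3}
print(disparate_impact(rates))  # 0.5 -> below 0.8, flag for review
```

Evaluating the same metric before and after a change to the algorithm is what lets developers see which variable was driving the discriminatory outcome.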

Bias Identification, Understanding, and Reduction in the Information-Organization Sector of AI
Search engines look at key terms in a search query and collect links that apply to those terms. Hypothetically, users of search engines should diversify their search terms, which would maximize queries as well as the precision and quantity of results [11]. Additionally, the Boolean system can narrow down the topic or provide breadth to it by using and/or/not in the search. This saves time and provides greater independence to the researcher when searching for information relevant to their question. Saving time and having control over the retrieval of information is needed in the education and employment of individuals. However, some factors play a role in the equity of information retrieved via search engines. Curators utilize Artificial Intelligence (AI) algorithms when they organize and create filters for archival databases. They analyze how information is associated, its validity, and its relevance to generate meaningful interpretations, communicate them to visitors, and probe discussion [12]. Algorithmic filtering is more efficient and accurate, while human judgment is more subjective. This also matters in the physical organization of books: a selection might be labeled incorrectly by mistake, and there might also be errors in the indexer's cognitive filter.
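The Boolean narrowing described above can be sketched as a small retrieval function. This is a minimal illustration, not how production engines work: the corpus and search terms are invented, and real systems use inverted indexes and far richer query parsing.

```python
# A minimal sketch of Boolean retrieval (AND / OR / NOT), the
# mechanism described above. Corpus and terms are invented.

def terms(doc):
    """Split a document into a set of lowercase word tokens."""
    return set(doc.lower().split())

def boolean_search(docs, must=(), any_of=(), must_not=()):
    """Return docs containing all `must` terms (AND), at least one
    `any_of` term if given (OR), and none of `must_not` (NOT)."""
    hits = []
    for doc in docs:
        t = terms(doc)
        if not all(m in t for m in must):
            continue
        if any_of and not any(a in t for a in any_of):
            continue
        if any(n in t for n in must_not):
            continue
        hits.append(doc)
    return hits

corpus = [
    "bias in ai hiring systems",
    "bias in human decisions",
    "ai art generators",
]
# "bias AND ai NOT art" narrows the topic, as described above.
print(boolean_search(corpus, must=("bias", "ai"), must_not=("art",)))
# → ['bias in ai hiring systems']
```

Adding `any_of` terms instead broadens the result set, which is the "breadth" use of OR mentioned above.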
The researcher could be unaware of the algorithm's problems in the filters or agents [12]. For developers, some inquiries are offered here. What expectations do we have for AI? If predictions are made, what is a realistic outcome, and can it be known how AI attains an outcome or result? Research suggests we don't always know how AI reaches an outcome. Knowing this could be crucial to preventing something dangerous.

Users' Information-Seeking Behavior
A formula for memetic information-seeking proclivity might be: Conditioned interpretation of what we observe = unnecessary data × information overload + conditioning + psychological pictures (horizontal + vertical changes)/inattentiveness × duration neglect.

Biases in User Search Behaviors
In this section, a metamemetic analysis is applied to a research study on biases in user behavior.
White and Horvitz (2015) examine the relationship of a searcher with reality and with search results [13]. Assessment of beliefs through the attainment of information can help theorize new methods of ranking and adaptive search algorithms. More assessments are needed in the outcomes of beliefs and in deciding which sources are more reliable than others.
One observation was that a user's confirmation bias remained even after a problematic variable in the dataset's algorithm was removed and a new search was conducted. Another was overconfidence, which also affected the degree of belief. One change could be giving the user more control over the filters in the search engine. Personal interest might influence filtering, as it also influences where links rank in Google results [13].

Factors of Biased Behaviors Online

Agnotology
Three studies revolving around the mechanisms of ignorance are identified. One concerns a native state, the mind before it has, or is aware of, knowledge. Another concerns a lost realm or selective choice (Frazier, 2016): either the information was never seen because it never came up in the search, or the person distributing information did not see it before adding it to a database. This is an unintentional consequence of the degree of attentiveness of the role one takes in the search process (Frazier, 2016) [14]. The third classification is a strategic ploy, which implies that information was deliberately censored or manipulated (as happens frequently in magazines and newspapers, and with fake technology, where AI-generated content looks real but is not).

Apophenia
Online, apophenia is especially a problem with conspiracy theories, and it is important to understand in the era of cognitive overload. We make connections and assign meanings to irrelevant pieces of information, informed by the knowledge we already have, resulting in cognitive overload and neurotic apophenia. Feelings are mental associations that occur when we are unsure how to think about something. Apophenia becomes problematic when the memory of a feeling and the creation of a narrative are not reevaluated for validity, creating and reinforcing biases. Biased memes can also shape the fear of difference, creating discriminatory behavior. Fear of losing the sense of familiarity is influenced by our idea of psychological security. There is also an illusion of knowing that we turn into authority: for example, many may assume communism is dangerous without understanding what it is.

Psychological time
Memory and biases like the halo effect or duration neglect shape our sense of things as we experience them. Computers let people time travel, and online details may serve more as distraction. Can we observe and make responsible use of free will? The past impacts our current information processing. The duration of an experience is not remembered, which affects the weight of meaning, or lack of it, that we assign [15].

Punishment and reward modalities
This can be observed beneath the narrative of "publish or perish," which deprives a person of the innovation, time, and space needed to inquire and think clearly. It may also cause functional stupidity.
Decrease in attentiveness and multiplied variables of distraction
This can lead to, and be caused by, information overload and the inability to manage it.

Extended mind thesis
Information stored in a source outside ourselves can help our memory, serving as a cognitive tool or as part of the idea of self, a psychological factor. Having a digital self is something completely new (Book of Abstracts), and less fragile than, for example, a sketch or a diary [16]. In response to criticism, the extended-mind theorists refined and defended their thesis. Their points include: when a device is paired with cognitive tools, it must play a functional role, or it cannot be an extension of the mind; what is considered an extension is the perceived system as a whole, not just part of it (a computer itself, for example, differs from the information stored in it); and lastly, there are non-neural ways to attain and replicate information [16]. An individual is prone to deliberately avoid any search, information, or media material suspected of being able to disrupt the beliefs, and therefore the drives, that person wishes to prove valid. This is the psychological mechanism through which what's known as confirmation bias takes place, and it is fueled by the drive and the need to seek a sense of psychological security.
Therefore, as long as the matter of one's psychological security remains unanalyzed, the mechanism of confirmation bias will remain, and with it what shapes one's selective reasoning. For instance, if one has a theory, he or she will always be more prone to seek elements that validate that theory, instead of deliberately searching for the opposite, which is what is required to disprove and dismantle the theory (and move on to forming a better one). Theories, unfortunately, do not have to be sound or supported to replicate online. There is also more potential to attract interest through biases like the bandwagon effect, narratives, and the creation of irrational fears. An example is the narrative, conditioned by Judeo-Christian values, that we must be good. The implicit desire to be good people, as well as to be seen as good people, often prevents us from being willing to comprehend and even challenge, if necessary, the actual meaning of goodness, leaving us with a mere surface understanding and epistemic acceptance [4]. Table 2 exemplifies factors of behaviors associated with the creation and propagation of biases.
A final inquiry is distinguishing memetic from non-memetic biases, such as those that arise when dealing with uncertainty, information overload, processing limits, and reliance on previous knowledge. Any construct of thoughts that does not get critically inquired about can easily become an implicit bias in our cognition. The implicit desire to be good, for instance, can function as an implicit bias that makes us unable to detect the cultural influences and contradictions that eventually inform the individual and collective preservation of that bias (personal communication with D.F.).

Conclusions
To conclude, observations were made on the literature about AI bias-reduction strategies, biases in search behavior, and information-seeking behavior, gathered as a basis for future research studies and questions. Future research could lie partly in acknowledging the gap between what is understood and what is not, or cannot yet be (completely), understood. Another consideration is the use of innovative and critical-thinking methods to counter a decline in general critical-thinking skills.
One combined approach is metamemetics. Countering issues related to dysregulation, such as the harmful use of AI, is being researched and might usefully be explored with this approach.

Conflicts of Interest
The author declares no conflicts of interest regarding the publication of this paper.