Understanding Factors of Bias in the Way Information Is Processed Through a Metamemetic and Multilogical Approach


Training in complex thinking is required in fields like computer science and in discussing sensitive topics that can easily polarize internet users. Multilogical and metamemetic reasoning are suggested as an approach to identifying and analyzing factors behind AI bias and human biases. This approach entails identifying problems and detecting invalid premises, distinguishing them from premises that are valid or that we are uncertain about. The paper focuses on four groups of people: curators, developers, businesses, and users (the fourth group being the main focus). This approach offers a new way to apply critical thinking strategies in the context of living in a digital age.

Share and Cite:

Cardinal, M. (2022) Understanding Factors of Bias in the Way Information Is Processed Through a Metamemetic and Multilogical Approach. Intelligent Information Management, 14, 80-91. doi: 10.4236/iim.2022.142006.

1. Introduction

The context of this paper is essential. Humans have evolved through three replicator points (genes, memes, and technomemes). Memes are essentially the meaning we apply to thoughts such as beliefs, values, and traditions [1]. Memes travel through memeplexes (complexes of memes, such as tribes, groups, and communities). When successful, they replicate and pass on their contents with variations from group to group and mind to mind, either vertically (generationally) or horizontally (through direct person-to-person communication, information from the internet or mass media, online trends/movements, etc.).

Humanity entered the third replicator point three decades ago, and it will intensify over the next five to ten years with the metaverse, Web 3.0, augmented reality, nanotechnology, and biotechnology. Susan Blackmore coined the term “technomemes,” referring to the digital and technological evolution of ordinary memetic patterns, which have more diverse options for replication [2]. Memes are pieces of culture containing information and associated with a particular meaning [2]. They travel through memeplexes, replicating themselves over millions of years; the third replicator point has arguably been built over the last three decades.

Blackmore notes that these technomemes have more diverse options for replication, longevity, and fecundity [3]. With this extension of ourselves and of information, we can manipulate our identities.

The contents build on what Diego Fontanive proposes as “Metamemetic Thinking” [3]. The first section reviews relevant literature on AI bias. The second offers an approach to understanding information-seeking and information-processing. The third section focuses on identity and explores some attitudes that could be observed and inquired about online. An inquiry could be made, for example, about agnotology, a factor in the fabrication of ignorance online through examples such as fake or misleading data, including false social media posts and micro/macro blogging: how do we interact with AI, and how might it influence us exponentially over the next few decades?

2. Examples of How Memes Might Work

Most of the factors (perception, memory, creativity, etc.) that inform our awareness past the memetic replicator point, and how we respond to stimuli, are acquired, while only a few are informed by genes, resulting in responses like basic instincts and symbiotic proclivities [2]. Although controversial, matters such as love, freedom, the psychological structure of the sense of self, and the sense of faith consist of cognitive contents that we acquire throughout our lives. These memes are derived from the ontological (regarding the nature of being), epistemological (regarding the scope and extent of the validity of knowledge), and axiological (regarding how things get valued) structures that solidified the concepts in our thinking. What we think about ourselves is an example of a structure informed by others throughout our lifetime [2]. A relevant inquiry here would be whether it is possible to sustain a form of thinking that is not interfered with by one’s conditioning. If we achieved it, the human condition would improve over time (personal communication with D.F., 2022). Table 1 gives examples of roles and functions of memes that may be observable online and offline.

3. Reviewing Literature on AI Bias: Development Sector

This literature review covers statistics related to biases entering AI technology, from code to training to algorithms and systems. To be clear, the point of this section is that we do not know the reasons for the discrepancies in these statistics. There are societal, cultural, and memetic influences behind the responses generated in the data collection related to these findings. Reality may be

Table 1. Roles, function, and possible results of memes (examples).

represented differently in another time or place, or depending on the situation (referring to vertical and horizontal influences).

According to the literature on bias-reduction strategies (which seems to neglect memetic influences), identifying and reducing biases should involve three factors: technical (tools to identify traits that impact a model’s accuracy), operational (enhancing the processes by which data is collected), and organizational (transparent communication of metrics and procedures). To observe the points in this section metamemetically, it could be beneficial to look for real examples of the cultural factors behind the development and management of algorithms and platforms, as well as factors behind online behaviors.

3.1. Technical Factors in Bias-Reduction

Developers should decide when it is better to implement automated decision-making or human decisions; if human, we must reduce bias in those decisions. Depending on the culture, bias in algorithms might be unlearned or prevented [5]. To check that a dataset is limited in biases, we would need to be aware of our own biases as well as those in the algorithm, which is an arduous task.

A useful approach is using multidisciplinary research, which offers different points of view and cultural knowledge within the AI community [6]. To limit bias in the algorithms, we need to have skilled AI creators from many cultures and walks of life, to vary the perspectives in the creation of the algorithms. Varying the population working on the development of AI will vary the results from search engines.

AI is the ability of a computer to accomplish tasks normally associated with humans. Algorithms are used to mimic the intellectual processes associated with human thought and have attained the levels of human experts and professionals on certain platforms, such as computer search engines. However, algorithms are created by people and this may lead to bias in the selection of information that comes up in a search query. The links that relate to the keywords typed by the user appear on the search results page. Although AI may assist with the speed of attaining information, it is not without bias encoded in algorithms that activate retrieval from data repositories and search engines [7].

3.2. Operational Factors in Bias-Reduction

Developers of AI may be unaware that query results can depend on input shaped by human limbic-system bias, or on a lack of available data in a search engine or repository [5]. Results suffering from a lack of data stem from limited variables, such as not having enough time to find all possible answers to a query. Another variable is that development and training experiments might not have incorporated enough data representing more people and more scenarios that match reality consistently. Input into AI systems based on opinions, and on constraints like ignorance or censorship, impacts the organization of results. Ultimately, inadequate results stem from many psychological factors sneaking in through training or development. This leads to biases, and we cannot yet trust the results from AI systems [5].

Can algorithms be trained to consider the digital behaviors behind information-seeking, such as the induced ignorance of humans when making decisions? An algorithm does not recognize its own biases (which come from human input) when making decisions, nor the limitations of the developer’s decision-making. Questions we must ask ourselves include: how and why does an algorithm reach a conclusion? What is the significant difference from how humans come to a decision, such as through a punishment-and-reward modality? As a further example of meta-ignorance (not knowing one’s own ignorance) in AI, algorithms could be trained to detect and question why someone may respond a certain way to a game about love interests, marriage, or soulmate searching. Love is a meme that tends to carry ambiguity; where there is love, there is conflict. If a developer applies metamemetic training to a program, it could possibly identify and prevent some issues that would arise without that training.

An indirect factor in finding bias-reduction strategies is socio-economic analysis of memetic factors. Access to technology and high-speed internet is necessary to access information in education, employment, etc. Almost 17 million American children under the age of 18 lack high-speed home internet [8]. With broader access, more people could get involved with coding and future AI-based technologies to create algorithms.

Other socioeconomic factors include policies that interfere with the development and use of AI. Operational factors that result in an inability to access information include information that is not digitized or indexed; the copyright system, which impacts people who cannot purchase an article; and censorship for cultural (or other) reasons. An example of cultural reasons involves the Indigenous peoples of the US, who are excluded from freedom of access to information due to cultural and historical reasons. LIS and Indigenous cultures both value equity in access to information, but they interpret access differently [6]. Indigenous librarians have different definitions for LIS terminology such as freedom; permissions must be granted before accessing a culture’s materials. There are initiatives worldwide to allow Indigenous expression in their own creative spaces without permission from the Elders. Input from these communities can help LIS processes to identify, draft, and promulgate definitions and differences.

The past becomes an authority when it dictates how we will think about things. Why is information a meme in need of protection? And yet, simultaneously, why do people want control over things? How does it benefit us when we identify with it? Is this an extension of conflicting ideas in the mind or in collective history?

3.3. Organizational Factors of AI Bias

Court decisions, employment, and access to credit are among the many organizational processes that depend on AI-driven systems [9]. We do not know which variable is crucial to look for until we test the algorithm before and after its development; we need to evaluate the algorithm’s results to see which variable is crucial to limiting discriminatory outcomes.

Bias in algorithms created a real-life example of unintentional exclusion in the Amazon organization’s search for employees. Lecher found that women and people of color were underrepresented in the development of AI; of note, 80% of AI professors are men [10]. Additionally, women make up 15% of AI research staff at Facebook and 10% at Google [10]. What does this objectively suggest about the relationship to bias reduction?

When the hiring system evaluated resumes, the coded algorithm treated men as the “Yes” and women as the “No” for employment. The creators of the algorithm at the time were presumably male, and the algorithm disqualified women from these jobs. The algorithm was changed to include women and minorities, and since this change, 2020 demographics have shifted: the split of male to female employees is now 53.1% to 46.9%. Employees by race/ethnicity in the US are White (32.1%), Black (26.5%), Latino (22.8%), Asian (13.6%), Indigenous (1.5%), and multiracial (3.6%) [8]. Therefore, by evaluating outcomes, a new algorithm was established to include women and minorities.

3.4. Bias Identification, Understanding, and Reduction in the Information-Organization Sector of AI

Search engines look at the key terms in a search query and collect links that apply to those terms. Hypothetically, users of search engines should diversify their search terms, which would maximize the precision and quantity of results [11]. Additionally, the Boolean system can narrow a topic down or provide breadth to it by using AND/OR/NOT in the search. This saves time and gives the researcher greater independence when searching for information relevant to their question. Saving time and having control over the retrieval of information are needed in education and employment. However, some factors play a role in the equity of information retrieved via search engines.
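The Boolean narrowing described above can be sketched in a few lines. The documents and query terms below are hypothetical examples (not from any real search engine), and the crude substring matching stands in for a real index:

```python
# A minimal sketch of Boolean AND/OR/NOT filtering over a toy corpus.
# Documents and terms are hypothetical; matching is naive substring search.
docs = [
    "AI bias in search engines",
    "memetics and cultural evolution",
    "bias reduction strategies for AI developers",
    "search engine optimization tips",
]

def matches(doc, must=(), any_of=(), none_of=()):
    """True if doc satisfies AND (must), OR (any_of), and NOT (none_of)."""
    text = doc.lower()
    if not all(term in text for term in must):      # AND: every term required
        return False
    if any_of and not any(term in text for term in any_of):  # OR: at least one
        return False
    if any(term in text for term in none_of):       # NOT: exclude these
        return False
    return True

# "bias AND ai NOT optimization" narrows the result set:
results = [d for d in docs
           if matches(d, must=("bias", "ai"), none_of=("optimization",))]
```

Here the NOT clause drops the off-topic SEO page, while AND keeps only the two documents mentioning both “bias” and “AI”.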

Curators utilize Artificial Intelligence (AI) algorithms when they organize and create filters for archival databases. They analyze how information is associated, its validity, and its relevance to generate meaningful interpretations, communicate them to visitors, and prompt discussion [12]. Algorithmic filtering is more efficient and accurate, while human judgment is more subjective. This also matters in the physical organization of books: while a selection might be mislabeled by mistake, there might also be errors in the indexer’s cognitive filter. The researcher could be unaware of problems in the algorithm’s filters or agents [12].

For developers, some inquiries are offered here. What expectations do we have for AI? If predictions are made, what is a realistic outcome, and can it be known how AI attains an outcome/result? Research suggests we do not always know how AI arrives at an outcome. Knowing this could be crucial to preventing something dangerous.

4. Users’ Information-Seeking Behavior

A formula for memetic information-seeking proclivity might be:

Conditioned interpretation of what we observe = unnecessary data × information overload + conditioning + psychological pictures (horizontal + vertical changes)/inattentiveness × duration neglect.

In making sense of the information we come across online, we rely on what we already know to make a decision. This formula briefly describes what can happen when we inquire about the validity behind results before making decisions with that information. Duration neglect (Table 2) is an example of interpreting things while distracted or in a rush when searching for information. Memories and experiences contain psychological pictures, a term theorized by Diego Fontanive to suggest that memes act like filters that manipulate how information is interpreted. They get activated by interactions and experiences, as well as by internal actions like reflection and abstraction. Filters are created by memeplexes, such as laws, values, norms, problems, beliefs, and information.
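The formula above is qualitative; as a toy illustration only, it can be given a numeric reading. All scores (on an arbitrary 0–1 scale) and the operator precedence chosen here are illustrative assumptions, not measurements or the author’s specification:

```python
# A toy numeric reading of the qualitative formula for memetic
# information-seeking proclivity. Inputs are hypothetical 0..1 scores;
# the grouping of terms is an assumed interpretation of the prose formula.
def conditioned_interpretation(unnecessary_data, overload, conditioning,
                               pictures, inattentiveness, duration_neglect):
    # pictures stands for psychological pictures (horizontal + vertical changes)
    return (unnecessary_data * overload
            + conditioning
            + pictures / inattentiveness * duration_neglect)

# Example: moderate overload and distraction yield a nonzero bias score.
score = conditioned_interpretation(0.5, 0.8, 0.3, 0.6, 0.4, 0.2)
```

The point of the sketch is only that the factors compound: raising information overload or duration neglect, or lowering attentiveness, each pushes the conditioned-interpretation score upward.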

Biases in User Search Behaviors

In this section, a metamemetic analysis is applied to a research study on biases in user behavior.

White and Horvitz (2015) examine the relationship of a searcher with reality and with search results [13]. Assessing beliefs through the attainment of information can help theorize new ranking methods and adaptive search algorithms. More assessment is needed of the outcomes of beliefs and of deciding which sources are more reliable than others.

One observation was that someone’s confirmation bias remained after a problematic variable in the algorithm of the dataset was removed and a search was conducted afterward. Another was overconfidence, which also affected the degree of belief. One change could be the user having more control while using the filters in the search engine. Personal interest might influence filtering as it also influences where links rank in Google results [13].

They suggested steps for search engine providers. First, note the beliefs behind behaviors and what users believe while interpreting results. Second, label the content of search results. Third, identify and note the location of reliable content as opposed to less reliable or unreliable content in the information retrieved. Fourth, create a model of how beliefs affect the filtering process and the selective decision-making of searchers. Jeffrey’s rule of conditioning should be considered here: interference in deducing valid from invalid information during search (what is deduced from the properties of an observation to make a sound argument) influences how we attain valid information under uncertainty. If a belief is strong, the attitude will not change even when the searcher is introduced to other viewpoints. Self-diagnoses, for example, might happen based on the ranking of results, some prioritized higher than others (perceived, perhaps, as more relevant). Searchers may misinterpret symptoms and associate them with a diagnosis; common and benign symptoms in search results may be misinterpreted even if there is a low chance of having the condition.
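Jeffrey’s rule of conditioning updates a belief when the evidence itself is uncertain: P′(A) = Σᵢ P(A|Eᵢ) · P′(Eᵢ). The probabilities below are hypothetical, chosen only to show how shifted confidence in evidence propagates to a belief such as “this source is reliable”:

```python
# Jeffrey's rule of conditioning: P'(A) = sum_i P(A|E_i) * P'(E_i).
# All probabilities are hypothetical illustration values.
def jeffrey_update(p_a_given_e, p_new_e):
    """Update belief in A given revised probabilities over a partition E_i."""
    assert abs(sum(p_new_e) - 1.0) < 1e-9  # partition weights must sum to 1
    return sum(pa * pe for pa, pe in zip(p_a_given_e, p_new_e))

# A = "this source is reliable"; E1/E2 = the evidence is trustworthy / not.
p_a_given_e = [0.9, 0.2]   # P(A|E1), P(A|E2)
p_new_e     = [0.7, 0.3]   # searcher's revised confidence in E1 vs E2
p_a = jeffrey_update(p_a_given_e, p_new_e)  # 0.9*0.7 + 0.2*0.3 = 0.69
```

Unlike ordinary Bayesian conditioning, the searcher never learns E₁ with certainty; the belief shifts only as far as the revised evidence weights warrant, which is why strong priors can survive exposure to contrary viewpoints.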

Clicking on links might encode bias into an algorithm and rank links higher

Table 2. Examples of factors of biased behaviors online.

in search results based on similar queries made by the same searcher. The more clicks and browsing a result receives, the higher the results page ranks that source. New models elaborating on search behavior should be theorized. Inferences could also be made from the outcomes of using recommended and related content in search results.
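The click-feedback loop described above can be sketched minimally. The URLs and the click counts are hypothetical, and real ranking systems combine many more signals; the sketch shows only how raw click counts, used as a popularity prior, entrench early preferences:

```python
# A minimal sketch of click-feedback ranking bias: each click boosts a
# result's score, raising its rank and making further clicks more likely.
# URLs and counts are hypothetical.
from collections import Counter

clicks = Counter()

def record_click(url):
    clicks[url] += 1

def rank(urls):
    # Higher click count -> higher position (a crude popularity prior).
    return sorted(urls, key=lambda u: clicks[u], reverse=True)

urls = ["a.example", "b.example", "c.example"]
for _ in range(3):
    record_click("b.example")   # a few early clicks accumulate...
record_click("c.example")
ranked = rank(urls)             # ...and b.example now outranks the rest
```

Once a result climbs the ranking this way, its visibility generates further clicks, which is the self-reinforcing mechanism that encodes searchers’ biases back into the algorithm.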

This research study was on attitudes and belief dynamics. Despite being thorough, it does not seem to consider biological factors explaining the mode used to filter, select, discard, and process information. Considerations for researching behavior and beliefs include: biological factors interfering with selective reasoning; the state of modern algorithmic AI in the field of information ecology; and the issue of psychological security (personal communication with D.F., 2022). Pleasure (dopamine production) and security are natural inclinations: “The brain’s inclination to seek anything that can give it a sense of security. The sense of security, when found, slowly becomes familiarity, and that’s precisely where the selective reasoning gets shaped by the memes the sense of familiarity contains. At that point the memetic process of replication begins, and that greatly influences the way the individual goes through and processes information” (personal communication with D.F., 2022). This memetic process happens throughout the formation of the dynamics of the individual’s selective reasoning, even before the intervention/interference of the beliefs the individual holds.

The second analysis also draws on the details in the first section. The digital “translation” of this process takes place in the way the entirety of modern, algorithmically shaped artificial intelligence operates on search engines, social media, and any online platform that is self-regulated by its algorithms. AI is completely informed by the human limbic system (meaning: emotional drives), and quite unregulated (personal communication with D.F., 2022).

The third analysis highlights the problem of psychological security and how it interferes with the shaping of algorithms and with the cognitive selective reasoning adopted to select information, sources, and internet searches, which needs to be approached and investigated accurately (personal communication with D.F., 2022).

An individual is prone to deliberately avoid any search, information, or media material that is suspected of being able to disrupt the beliefs, and therefore the drives, that the person wishes to prove valid. This is the psychological mechanism through which what is known as confirmation bias takes place, and it is fueled by the drive and the need to seek a sense of psychological security.

Therefore, as long as the matter of one’s psychological security remains unanalyzed, the mechanism of confirmation bias will remain, and it will keep shaping one’s selective reasoning. For instance: if one has a theory, he or she will always be more prone to seek elements aimed at validating that theory, instead of deliberately searching for the opposite, which is what is required to disprove and therefore dismantle the theory (and move on to forming a better one). Theories, unfortunately, do not have to be sound or supported to replicate online. There is also more potential to attract interest through biases like the bandwagon effect, narratives, and the creation of irrational fears. An example is a narrative created by unnatural fears, such as anxieties produced by concatenations of thought processes built on objectively unsound premises (for instance, states of mind resulting from comparing ourselves with other people online), where we may even feel guilty for not sharing other people’s anxiety.

The idea that we must be good is a narrative conditioned by Judeo-Christian values, for example. The implicit desire to be good people, as well as to be seen as good people, often prevents us from being willing to comprehend and even challenge, if necessary, the actual meaning of goodness, leaving us to make do with its mere surface and epistemic acceptance [4]. Table 2 exemplifies factors of behaviors associated with the creation and propagation of biases.

A final inquiry is distinguishing memetic from non-memetic biases (such as those arising when dealing with uncertainty, information overload, processing, and basing things on previous knowledge). Any construct of thoughts that does not get critically inquired about can easily become an implicit bias in our cognition. The implicit desire to be good, for instance, can function as an implicit bias that makes us unable to detect the cultural influences and contradictions that eventually inform the individual and collective preservation of that bias (personal communication with D.F., 2022).

5. Conclusions

To conclude, observations were made on the literature about AI bias-reduction strategies, biases in search behavior, and information-seeking behavior, gathered as a basis for future research studies and questions. Future research could lie partly in acknowledging the gap between what is understood and what is not, or cannot yet be (completely) understood. Another consideration is innovative critical thinking methods to counter a decline in general critical thinking skills; one combined approach is metamemetics. Countering issues related to dysregulation, such as the harmful use of AI, is being researched and might be usefully explored with this approach.


Agnotology: The study of how ignorance works and comes about through cultural conditioning as well as a natural state. A problem with ignorance is when people do not care about attaining knowledge to change their perspective [14].

Algorithm: A set of steps to perform a task to meet a predetermined goal.

Apophenia: The tendency to perceive meaningful connections between objectively unrelated pieces of information.

Groupthink: The inability of people to make decisions and innovate due to agreeability and the entropic thinking and discussion it produces.

Extended Mind Thesis (Andy Clark and David Chalmers, “The Extended Mind”, 1998): A way that may further probe understanding of memetics. To properly understand minds, they claimed, we need to recognize not just how they interact with the outside world; we must also recognize that aspects of the non-biological world are, in certain situations, actually constitutive of them (Ramsey, 2010). This includes rituals, art, movies, magazines, etc.

Memes: Units of culture replicating in the mind that twist ideas and apply meaning to information. Memes are also abstractions of objects, like avatars, and shape the perception of technology and how we utilize it. There are helpful memes, such as art, pharmacy symbols, egg cartons, and paying for goods, and unhelpful memes, such as indoctrination, common sense, assumptions, and expectations. This psychological phenomenon will likely be around for as long as humans exist.

Multilogical Thinking: This involves different ways of understanding, such as lateral, analytical, dynamic, negative, and critical. Models of awareness are created through these examples of thinking (Glossary of Critical Thinking terms).

Metamemetics: Diego Fontanive’s adaptation of memetics, studying the viral memes and violence caused by devotion to them. “It also involves the decoding of the premises behind the meaning of the meaning to understand whether the premises are valid or not (Fontanive, 2020)”.

Psychological pictures: Continuously operating filters that we use to interpret the world. Certain filters persist for certain things because we were not taught how to identify them and interact without them; examples include laws, values, norms, problems, beliefs, and information.

Temes (or tremes): Susan Blackmore theorized technological memes as extensions of ourselves, although it seems we are extensions of technology, given the freedom to be anything we want to show to others (see The Meme Machine by Susan Blackmore).

Conflicts of Interest

The author declares no conflicts of interest regarding the publication of this paper.


[1] Fontanive, D. (2019) From Memes to Technomemes and Digital Selves. 2019 Congreso Internacional de Educación y Aprendizaje, Porto, 19-21 June 2019.
[2] Dawkins, R. (1999) The Selfish Meme. Time, 153, 52-53.
[3] Blackmore, S. (1998) Imitation and the Definition of a Meme. Journal of Memetics-Evolutionary Models of Information Transmission, 2, 159-170.
[4] Groupthink (n.d.) Psychology Today.
[5] Dilmegani, C. (2021) Bias in AI: What It Is, Types and Examples of Bias, and Tools to Fix it. Science and Technology, 52, 1-3.
[6] Roy, L. (2015) Advancing Indigenous Ecology within LIS Education. Library Trends, 64, 384-414.
[7] Copland, B.J. (2021) Artificial Intelligence. Britannica.
[8] Ujifusa, A. (2020) 1 in 3 American Indian, Black, and Latino Children Fall into the Digital Divide, a Study Says. Education Week.
[9] Uzzi, B. (2020) A Simple Tactic that Could Help Reduce Bias in AI. Harvard Business Review.
[10] Lecher, C. (2019) The Artificial Intelligence Field Is Too White and Too Male, Researchers Say: A New Report Explores AI’s ‘Diversity Crisis’. The Verge.
[11] Budd, J.M. (1996) The Complexity of Information Retrieval: A Hypothetical Example. The Journal of Academic Librarianship, 22, 111-117.
[12] Khan, S. and Bhatt, I. (2018) Curation. In: Hobbs, R. and Mihailidis, P., Eds., The International Encyclopedia of Media Literacy.
[13] White, R.W. and Horvitz, E. (2015) Belief Dynamics and Biases in Search. ACM Transactions on Information Systems, 33, 1-46.
[14] Frazier, T.Z. (2015) Agnotology and Information. Proceedings of the Association for Information Science and Technology, 52, 1-3.
[15] Closer to Truth (2022) Video: “Jeff Tollaksen—Physics of Observation.” Closer to Truth. YouTube.
[16] Menary, R. (2010) The Extended Mind. The MIT Press, Cambridge.

Copyright © 2023 by authors and Scientific Research Publishing Inc.

Creative Commons License

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.