Artificial Intelligence Law through the Lens of Michel Foucault: Biopower, Surveillance, and the Reconfiguration of Legal Normativity
1. Introduction
The rise of artificial intelligence (AI) has been accompanied by significant legal and ethical challenges, particularly concerning individual rights and freedoms. AI systems are increasingly embedded in decision-making processes in various sectors, such as healthcare, finance, law enforcement, and governance. For instance, Brynjolfsson and McAfee (2017) in Machine, Platform, Crowd: Harnessing Our Digital Future provide a comprehensive analysis of how AI-driven platforms are reshaping industries by automating complex decision-making tasks and enhancing efficiency. Similarly, Pasquale (2015) in The Black Box Society: The Secret Algorithms That Control Money and Information delves into the opaque nature of AI algorithms and their growing influence over critical decisions, such as financial credit scoring, law enforcement predictions, and access to healthcare services. This lack of transparency often raises ethical and legal concerns regarding accountability and fairness. Furthermore, Zuboff (2019) in The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power highlights how AI technologies not only optimize decision-making but also facilitate unprecedented levels of data collection and surveillance, thereby influencing governance and individual autonomy. Together, these works underscore the pervasive integration of AI into essential societal functions, while also addressing the implications for privacy, ethics, and the balance of power between individuals and institutions. This integration raises questions about the extent to which these technologies impact human autonomy, privacy, and control. To address these questions, this paper draws on the philosophical insights of Michel Foucault, focusing on his concepts of power, surveillance, and “technologies of the self”.
Michel Foucault’s work on power, particularly his analysis of disciplinary mechanisms and biopower, provides a crucial theoretical framework for understanding how AI technologies can function as instruments of social control. His concept of biopower describes the strategies by which modern states regulate subjects through institutions, norms, and practices that shape and manage populations (Foucault, 1977). This perspective helps us to explore whether AI can be seen as a tool of biopower, capable of defining and controlling individuals through data collection and predictive analytics.
Moreover, Foucault’s concept of “technologies of the self” provides insights into how individuals interact with AI, shaping their behaviors and self-perceptions. As AI systems increasingly guide choices and influence personal decisions, questions arise regarding the nature of autonomy and how legal systems might adapt to protect individuals from being merely “managed” by these technologies.
The central hypothesis of this paper is that AI has the potential to deconstruct traditional legal categories of responsibility and morality, which necessitates a revaluation of the axiological aspects of the existing legal framework. This hypothesis will be explored through a detailed analysis of Foucault’s theoretical concepts applied to current AI practices, specifically focusing on surveillance and control mechanisms.
The objectives of this paper are threefold:
1) To explore how Michel Foucault’s theories of power and surveillance can be applied to understand the impact of AI on individual rights and freedoms.
2) To examine whether AI, as a contemporary technological apparatus, functions as a tool of biopower that extends state and corporate control over individuals.
3) To analyze how the law might adapt to mitigate the risks of objectification and loss of agency resulting from the widespread use of AI technologies.
1.1. Limitations in Traditional Notions of Legal and Moral Responsibility
Traditional notions of legal and moral responsibility are rooted in clear attributions of agency, causation, and intent. Legal frameworks typically assign accountability to identifiable actors—whether individuals, corporations, or institutions—based on their intentional or negligent actions. Similarly, moral responsibility relies on the ability to associate a decision or consequence with a conscious choice made by a moral agent. However, the rise of AI systems has disrupted these paradigms in several ways.
Firstly, the opacity of AI systems, often referred to as the “black box” phenomenon, complicates the identification of causation. AI algorithms process vast amounts of data through intricate and non-transparent computational models, making it difficult to determine how specific decisions are reached. This challenges the assignment of accountability, as it is unclear whether responsibility lies with the developer, the deployer, or the AI itself. Secondly, AI systems often function autonomously, adapting and learning from new data. This autonomy undermines the traditional legal emphasis on intent, as the outcomes produced by AI systems may not align with the intentions of their creators or users. Finally, the distributed nature of AI development and deployment disperses responsibility across multiple actors, further complicating the attribution of blame in cases of harm or misuse.
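To make the “black box” point concrete, consider a minimal, hypothetical sketch in Python. A model trained on synthetic loan data issues a binary decision, yet the only explanation the library offers by default is an aggregate statistic rather than a causal account of the individual case. The data, features, and model choice are illustrative assumptions, not a claim about any deployed system.

```python
# A minimal, hypothetical sketch of the "black box" problem: a model issues
# a consequential decision, but nothing in its default output explains how
# this particular case was decided. Data and features are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Invented applicant features: income, debt ratio, years employed
X = rng.normal(size=(500, 3))
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

applicant = np.array([[0.1, 0.4, -0.2]])
print("approved" if model.predict(applicant)[0] else "denied")

# The readily available "explanation" is a global aggregate, not a causal
# account of this individual's outcome:
print(model.feature_importances_)
```

Even the developer of such a system can state only which inputs matter on average, not why a given applicant was refused, which is precisely the gap that complicates legal attribution of causation.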
1.2. Relevance of Foucault’s Philosophy to AI Law
Michel Foucault’s exploration of disciplinary power and biopower is highly relevant to the discourse on AI regulation. Foucault (1977) describes disciplinary power as a mechanism through which individuals are subjected to constant surveillance, thus regulating their behavior. This is paralleled in the use of AI-driven surveillance systems, such as facial recognition technologies and predictive policing tools, which enable a level of oversight and control unprecedented in human history.
Foucault’s concept of biopower, which refers to the regulation of populations through an ensemble of institutions and practices, also finds a modern counterpart in AI. AI systems that gather and analyze vast amounts of data about individuals are fundamentally biopolitical tools—they categorize, manage, and predict human behavior on a mass scale (Zuboff, 2019). This biopolitical aspect of AI raises critical concerns about privacy, consent, and the limits of state and corporate power.
The legal implications of such technologies are profound. Current regulatory frameworks, including data protection laws like the General Data Protection Regulation (GDPR), aim to protect individuals from invasive data practices, but they may be insufficient to address the broader issue of biopolitical control. This paper, therefore, proposes a revaluation of legal principles to better align with the ethical challenges posed by AI, guided by the insights from Foucault’s philosophy.
1.3. Key Features of Michel Foucault’s Philosophy
Michel Foucault’s philosophy offers valuable insights into the dynamics of power, control, and individual autonomy. Key features of his work include the concepts of disciplinary power, biopower, and technologies of the self:
1) Disciplinary Power: Foucault’s (1977) notion of disciplinary power, elaborated in Discipline and Punish, describes the mechanisms through which individuals are regulated and normalized. Using the metaphor of the panopticon—a prison design enabling constant surveillance—Foucault illustrates how modern societies exercise control by making individuals internalize norms. This creates a system where behavior is modified even in the absence of direct observation.
2) Biopower: This form of power extends beyond individual discipline to the regulation of populations, governing life itself through an ensemble of institutions and practices focused on health, productivity, and social norms. Biopower is particularly relevant in understanding how contemporary technologies, including AI, manage and categorize populations on a large scale.
3) Technologies of the Self: Foucault’s concept of technologies of the self highlights how individuals actively shape their identities and behaviors in response to societal norms and power structures. These technologies encourage self-regulation, often mediated by external pressures. In the context of AI, such technologies influence how individuals interact with systems that guide choices and decisions, raising questions about autonomy and agency.
2. Theoretical Framework
The theoretical framework of this paper draws primarily on the work of Michel Foucault, specifically focusing on his concepts of disciplinary power, biopower, and technologies of the self. This section aims to provide a comprehensive understanding of these concepts in their original philosophical context and then apply them to the challenges posed by artificial intelligence in the legal realm.
2.1. Michel Foucault: Power, Surveillance, and Biopower
Michel Foucault’s exploration of power dynamics forms the backbone of his philosophical inquiry. Foucault’s concept of power is not merely repressive but productive; it operates through social structures to shape knowledge, norms, and individual behavior (Foucault, 1977). His analysis of power relations is particularly significant in the context of AI, as it provides a lens through which to examine how AI systems might function as tools of control and influence in contemporary society.
2.1.1. Disciplinary Power
Disciplinary power, as described in Foucault’s (1977) Discipline and Punish, involves the control of individuals through subtle, continuous surveillance and normalization processes. Foucault uses the metaphor of the panopticon—a prison designed so that a single guard can observe every inmate while the inmates never know when they are being watched—to describe the pervasive nature of disciplinary power in modern societies. The concept of the panopticon is particularly relevant to AI, especially with technologies such as facial recognition, data tracking, and predictive analytics, which enable continuous surveillance of individuals.
For instance, the use of AI in public spaces for surveillance purposes mirrors Foucault’s panopticon. Facial recognition systems employed in cities across the globe are capable of monitoring large populations, effectively creating an environment where individuals are constantly visible to the state or corporate entities, thus modifying behavior through the anticipation of being watched.
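The mechanics behind such systems can be sketched schematically: a face image is reduced to an embedding vector and compared against a watchlist by similarity. In the snippet below, random vectors stand in for a real recognition model’s output, and the match threshold is an assumed operating point, not a value taken from any actual deployment.

```python
# A schematic sketch of watchlist matching in face-recognition surveillance:
# faces become embedding vectors compared by similarity. The embeddings here
# are random stand-ins for a real model's output; the threshold is assumed.
import numpy as np

rng = np.random.default_rng(1)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

watchlist = {f"person_{i}": rng.normal(size=128) for i in range(3)}
# A camera frame that happens to resemble person_1, plus sensor noise
frame_embedding = watchlist["person_1"] + rng.normal(scale=0.1, size=128)

MATCH_THRESHOLD = 0.9  # an invented operating point; real systems tune this
for name, reference in watchlist.items():
    score = cosine_similarity(frame_embedding, reference)
    if score > MATCH_THRESHOLD:
        print(f"alert: {name} matched (similarity {score:.2f})")
```

The structural point is that every passerby is compared against the list on every frame; the surveillance is continuous whether or not any alert fires, which is what gives the panopticon metaphor its force here.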
2.1.2. Biopower
Biopower refers to the regulation and control of populations through a wide array of techniques and institutions. According to Foucault (1977), biopower operates at the level of the population, managing birth rates, health, mortality, and productivity. In the age of AI, this form of power extends to the regulation of digital identities and behaviors, often without explicit consent from those being governed.
AI technologies such as health monitoring apps, social media algorithms, and credit scoring systems are examples of biopolitical tools. These systems collect and analyze data to categorize individuals, predict behavior, and influence decisions, effectively extending the reach of biopower into the digital realm (Rouvroy & Berns, 2013). The implications for legal responsibility are significant, as traditional privacy laws may not fully capture the scope of biopolitical control exerted by AI systems.
2.1.3. Technologies of the Self
Technologies of the self, another critical concept introduced by Foucault, refer to the practices through which individuals act upon themselves to transform their identity and behavior (Foucault, 1977). In the context of AI, these technologies are evident in the way individuals interact with AI-driven systems that track and optimize personal behavior. Wearable devices, personalized recommendations, and health applications represent technologies through which individuals are encouraged to self-regulate in line with data-driven insights (Lupton, 2016).
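The self-regulation loop these devices mediate can be reduced to three steps: measure, compare against a norm, nudge. Everything in the sketch below is invented for illustration, but it captures the structural point that the standard against which users measure themselves is supplied by the system rather than chosen by them.

```python
# A minimal sketch of the self-regulation loop that self-tracking devices
# mediate: measure, compare to a norm, nudge. The norm, messages, and data
# are invented; the point is that the standard comes from the platform.
import random

DAILY_STEP_NORM = 10000  # set by the platform, not by the user

def nudge(steps_today: int) -> str:
    gap = DAILY_STEP_NORM - steps_today
    if gap <= 0:
        return "Goal met! You are ahead of people like you."
    return f"You are {gap} steps behind people like you. A short walk now?"

for day in range(1, 4):
    steps = random.randint(4000, 12000)  # stand-in for sensor readings
    print(f"day {day}: {steps} steps -> {nudge(steps)}")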
AI’s role in shaping individual self-perception and decision-making aligns with Foucault’s idea of individuals becoming subjects through power relations. From a legal perspective, this raises questions about autonomy and informed consent, as individuals may increasingly rely on AI to guide their actions, potentially undermining personal agency.
2.2. Application of Foucault’s Concepts to AI Law
Applying Foucault’s concepts to AI law reveals the complexities of regulating technologies that inherently exercise power over individuals and populations. The notion of disciplinary power is relevant to legal debates surrounding surveillance and privacy. AI systems used in predictive policing, for instance, can perpetuate systemic biases, thus requiring a reexamination of legal standards around discrimination and accountability (Pasquale, 2015).
Biopower’s manifestation in AI presents challenges for data protection and the right to privacy. Current frameworks, such as the GDPR, focus on individual data rights, but may not fully address the broader implications of biopolitical control. This highlights the need for a more comprehensive legal approach that takes into account the collective impact of AI on society (Zuboff, 2019).
Technologies of the self prompt legal discussions on the nature of autonomy and the extent to which individuals can meaningfully consent to the use of AI technologies that influence their behavior. The concept of informed consent may need to be reevaluated in light of the pervasive influence of AI on personal decision-making processes (Crawford & Calo, 2016).
3. Development of the Topic
3.1. Philosophical Implications of AI in Law: Power, Control, and Responsibility
The application of Foucault’s philosophy to AI sheds light on how these technologies contribute to new forms of power relations and control within legal systems. The increasing use of AI for surveillance, decision-making, and behavioral prediction raises concerns regarding how power is exerted over individuals and populations.
3.1.1. AI and the Expansion of Surveillance
AI technologies such as facial recognition and predictive analytics extend the state’s ability to surveil and control populations. Foucault’s idea of the panopticon serves as a metaphor for understanding the omnipresence of AI in our daily lives. Surveillance in the digital age increasingly resembles Foucault’s panopticon, wherein the possibility of constant observation influences behavior even if the observation is not active at all times.
AI-based surveillance systems have a profound and multifaceted influence on social stability, with both potential benefits and significant risks. On the one hand, these systems contribute positively to societal order and safety by enabling real-time monitoring, predictive analytics, and improved resource allocation. For instance, technologies like facial recognition and behavioral analysis can help prevent criminal activities, support law enforcement in identifying suspects, and streamline emergency responses. By enhancing the efficiency of public safety measures, such systems can create a perception of security and stability among the population, particularly in areas with high crime rates or limited resources for traditional policing.
However, the integration of AI into surveillance systems also introduces substantial risks that can undermine trust, equity, and fundamental freedoms—key pillars of social stability. One of the primary concerns is the erosion of individual privacy. AI-driven tools collect vast amounts of data from individuals, often without their explicit consent, enabling constant monitoring of behaviors, movements, and interactions. The social credit system in China serves as a prominent example, where AI technologies are used to monitor and score citizens’ actions, influencing their access to public services, financial products, and even freedom of movement. While intended to promote societal harmony, this system effectively enforces behavioral conformity and suppresses dissent, fostering an environment where individuals fear repercussions for deviating from state-defined norms.
Additionally, AI-based surveillance systems often operate on opaque algorithms and datasets that can embed and amplify biases. For example, predictive policing algorithms may disproportionately target marginalized communities due to biased historical crime data, exacerbating existing inequalities and perpetuating cycles of discrimination. This not only harms the affected communities but also undermines the social fabric by fostering resentment and distrust toward governing institutions. Furthermore, the anticipation of being constantly observed creates a self-regulating society, where individuals alter their behavior to align with perceived expectations. This “panoptic” effect, as described by Michel Foucault, stifles individuality and creativity—essential drivers of social progress—and can lead to a homogenized society overly focused on compliance rather than innovation or critical discourse.
The legal and ethical challenges posed by AI-based surveillance systems are significant. While these systems aim to enhance security, they must be balanced against the protection of human rights, such as privacy, freedom of expression, and equality. Without robust legal frameworks and oversight mechanisms, these technologies risk becoming tools of authoritarian control, destabilizing trust in institutions and eroding democratic values. Effective governance must include measures like algorithmic transparency, accountability for bias mitigation, and strict regulations on data collection and use. Public awareness and stakeholder engagement are also crucial in ensuring that these systems are deployed ethically and with adequate safeguards to maintain social stability without sacrificing fundamental freedoms.
The legal response to such technologies often focuses on data protection, but the broader issue of behavioral manipulation and its impact on autonomy is under-addressed. Laws like the GDPR attempt to safeguard personal data, but they do not fully capture the consequences of a surveillance society driven by AI (Zuboff, 2019). To adequately protect citizens, legal frameworks must evolve to consider the influence of AI on freedom and agency.
3.1.2. Responsibility and the Deconstruction of Traditional Legal Categories
One of the core issues raised by AI is the question of responsibility. In traditional legal frameworks, accountability is often assigned to a specific actor—whether an individual or a corporation. However, the nature of AI systems complicates this attribution of responsibility. AI’s decision-making processes can be opaque, leading to what Pasquale (2015) describes as the “black box society”, where the rationale behind decisions is not transparent.
Foucault’s concept of biopower provides a lens to understand how responsibility is distributed across multiple entities—developers, users, and regulators—who collectively contribute to the outcomes produced by AI systems (Rouvroy & Berns, 2013). This necessitates a rethinking of the legal notion of accountability. If an AI system discriminates in employment decisions, for example, who is responsible—the developer, the company using the system, or the AI itself? Such questions challenge traditional legal categories and highlight the need for new frameworks that address the distributed nature of AI responsibility.
The concept of biopower also raises ethical concerns regarding who has the right to make decisions about populations. AI systems that determine eligibility for welfare benefits or healthcare interventions are examples of how biopower operates in the digital age. These systems have the potential to reinforce existing inequalities, as they often rely on biased data, thereby perpetuating discrimination against marginalized groups.
3.2. Case Studies and Comparative Analysis
To further illustrate the philosophical and legal implications of AI, this section provides a comparative analysis of two key examples: predictive policing in the United States and China’s social credit system. These case studies highlight how Foucault’s concepts of surveillance, biopower, and disciplinary power manifest in real-world AI applications.
3.2.1. Predictive Policing in the United States
Predictive policing refers to the use of AI algorithms to predict where crimes are likely to occur and who might be involved in criminal activities. This approach, employed in several cities across the United States, relies on historical crime data to forecast future incidents. However, predictive policing systems often perpetuate systemic biases present in the data, leading to the over-policing of certain communities—typically those that are already marginalized.
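The bias dynamic this case study describes can be illustrated with a toy simulation. Two districts are given identical underlying offending rates; one starts with a single extra recorded incident, patrols follow recorded crime, and patrols raise detection. The detection rates and starting figures are invented, but the compounding pattern is the point.

```python
# A toy simulation of the predictive-policing feedback loop: identical true
# offending, a one-incident head start in recorded crime, patrols allocated
# to the recorded "hotspot", and patrols doubling detection. All invented.
true_offenses_per_year = {"district_A": 10, "district_B": 10}
recorded = {"district_A": 6, "district_B": 5}  # A begins one incident ahead

for year in range(1, 6):
    hotspot = max(recorded, key=recorded.get)  # the algorithm flags a hotspot
    for d in true_offenses_per_year:
        detection_rate = 0.8 if d == hotspot else 0.4  # patrols double detection
        recorded[d] += round(true_offenses_per_year[d] * detection_rate)
    print(year, recorded, "patrolled:", hotspot)

# district_A's recorded crime pulls further ahead every year, "justifying"
# ever more patrols there, even though true offending never differed.
```

The simulation makes visible why “the data” cannot vindicate the system: recorded crime measures surveillance intensity as much as offending, so an initial disparity reproduces itself indefinitely.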
Foucault’s concept of disciplinary power is evident in predictive policing. The mere presence of such surveillance alters behavior, with communities under constant watch being more likely to self-regulate to avoid interaction with law enforcement. This dynamic creates a cycle where marginalized groups are disproportionately subjected to surveillance and control, effectively criminalizing poverty and race.
Legally, predictive policing raises critical issues regarding due process, equality before the law, and the presumption of innocence. The use of biased data in AI algorithms challenges the fairness of the criminal justice system and underscores the need for robust oversight and accountability measures to prevent discrimination.
3.2.2. Social Credit System in China
The social credit system in China offers a comprehensive example of biopower facilitated by AI. Under this system, citizens are assigned scores based on their behavior, which is tracked using a range of data sources, including online activity, financial transactions, and public surveillance. High scores result in rewards such as easier access to loans, while low scores can lead to penalties like travel restrictions.
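A schematic sketch of such score aggregation shows how heterogeneous behavioral data can be collapsed into a single number that gates access to services. The weights, signals, and thresholds below are entirely invented; the sketch illustrates the logic only and makes no claim about how any actual system computes its scores.

```python
# A schematic, hypothetical sketch of behavioral score aggregation: weighted
# signals collapse into one number that determines treatment. All weights,
# signals, and thresholds are invented for illustration.
WEIGHTS = {
    "on_time_payments": 2.0,
    "traffic_violations": -1.5,
    "flagged_online_posts": -3.0,
    "volunteer_hours": 0.5,
}
BASE_SCORE = 600

def social_score(signals: dict) -> float:
    return BASE_SCORE + sum(WEIGHTS[k] * v for k, v in signals.items())

citizen = {"on_time_payments": 24, "traffic_violations": 2,
           "flagged_online_posts": 1, "volunteer_hours": 10}
score = social_score(citizen)

# The score, not any individualized judgment, determines treatment:
if score >= 650:
    print(score, "-> fast-track loans, unrestricted travel")
elif score >= 600:
    print(score, "-> neutral standing")
else:
    print(score, "-> loan denial, possible travel restrictions")
```

Note how the weighting table silently encodes a normative judgment (a flagged post costs twice a traffic violation); contesting a score would require contesting weights that are typically neither published nor appealable.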
Foucault’s notion of biopower is particularly pertinent here, as the social credit system functions as a state apparatus to manage and regulate the population. It uses AI to impose norms and expectations on citizens, thereby disciplining behavior in a manner aligned with state objectives. This level of control raises significant legal questions regarding privacy, freedom of expression, and the right to dissent. The absence of transparency and accountability in how scores are calculated further complicates the legal landscape, making it challenging to contest or appeal decisions (Chen & Cheung, 2020).
Comparing the social credit system with Western data protection frameworks, such as the GDPR, reveals stark differences in the regulatory approach to AI. While the GDPR emphasizes individual rights and informed consent, the Chinese system is focused on collective compliance and social harmony. This divergence highlights the importance of cultural and political contexts in shaping AI regulation.
3.3. The Role of Law in Addressing AI-Induced Power Dynamics
Given the implications discussed, it is evident that current legal frameworks may be inadequate to address the power dynamics introduced by AI. The pervasive nature of AI surveillance and its capacity to influence behavior necessitates a reevaluation of privacy, autonomy, and accountability in the legal domain.
One potential approach is the introduction of “algorithmic impact assessments”, similar to environmental impact assessments, which could evaluate the societal and ethical implications of AI systems before their deployment (Crawford & Calo, 2016). Such assessments could help ensure that AI technologies align with fundamental rights and freedoms, thereby mitigating the risks associated with biopower and disciplinary control.
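What such an assessment might record can be sketched as a simple data structure. The fields, the example entries, and the flagging logic below are illustrative assumptions only; no standardized assessment format is implied.

```python
# A sketch of what an algorithmic impact assessment might record before
# deployment, by analogy with environmental impact assessments. Fields and
# flagging logic are illustrative assumptions, not a standardized format.
from dataclasses import dataclass, field

@dataclass
class AlgorithmicImpactAssessment:
    system_name: str
    purpose: str
    affected_groups: list[str]
    data_sources: list[str]
    known_bias_risks: list[str]
    appeal_mechanism_exists: bool
    human_review_of_decisions: bool
    mitigations: list[str] = field(default_factory=list)

    def deployment_concerns(self) -> list[str]:
        concerns = list(self.known_bias_risks)
        if not self.appeal_mechanism_exists:
            concerns.append("no way for affected persons to contest decisions")
        if not self.human_review_of_decisions:
            concerns.append("fully automated decisions without human oversight")
        return concerns

assessment = AlgorithmicImpactAssessment(
    system_name="predictive-patrol-allocator",  # a hypothetical system
    purpose="allocate police patrols from historical incident data",
    affected_groups=["residents of heavily policed districts"],
    data_sources=["historical arrest records"],
    known_bias_risks=["arrest data reflects past policing intensity, not crime"],
    appeal_mechanism_exists=False,
    human_review_of_decisions=True,
)
print(assessment.deployment_concerns())
```

The value of such a record lies less in the code than in the obligation it creates: the deploying party must state, in advance and on the record, who is affected, what data is used, and what can go wrong.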
Additionally, the legal concept of informed consent must evolve to address the complexities of AI-driven decision-making. Given the opacity of AI systems and their influence on personal decisions, individuals may not fully understand the consequences of consenting to the use of these technologies. This calls for enhanced transparency measures and the establishment of clear standards for meaningful consent.
4. Conclusions and Future Reflections
The examination of artificial intelligence through the lens of Michel Foucault reveals the profound implications of AI for power dynamics, individual autonomy, and legal responsibility. AI technologies, much like the mechanisms of surveillance and control that Foucault describes, operate as instruments of biopower, extending state and corporate influence over populations in ways that challenge traditional legal norms and ethical principles.
4.1. AI as a Tool of Biopower and Disciplinary Power
AI systems are increasingly employed as tools of biopower, capable of regulating individuals and populations through predictive analytics, data collection, and decision-making processes. This paper has demonstrated how AI, in contexts such as predictive policing and social credit systems, embodies the disciplinary power that Foucault (1977) describes—altering behavior through the anticipation of surveillance and the imposition of norms.
The implications for legal frameworks are significant. As AI technologies gain the capacity to influence behavior and control access to resources, the need for a regulatory response that addresses the broader impact of these systems on autonomy and agency becomes paramount. The current focus on data protection and privacy, exemplified by regulations like the GDPR, must expand to encompass issues of biopower and the socio-political consequences of AI-driven surveillance.
4.2. The Challenge of Legal Accountability
AI’s capacity to make autonomous decisions complicates the attribution of legal responsibility. Foucault’s analysis of power relations helps to illuminate how responsibility for AI decisions is distributed among developers, users, and other stakeholders, which blurs the lines of accountability. The black-box nature of AI further exacerbates this issue, making it difficult to determine who is responsible when AI systems produce harmful outcomes (Pasquale, 2015).
To address these challenges, the legal system must move towards a model of distributed responsibility that acknowledges the multiplicity of actors involved in the development and deployment of AI. This might involve holding developers liable for algorithmic transparency, establishing regulatory oversight for AI use, and creating mechanisms for accountability that extend beyond individual culpability (Rouvroy & Berns, 2013).
4.3. Protecting Autonomy: The Role of Technologies of the Self
Foucault’s concept of technologies of the self emphasizes the importance of individual autonomy and the capacity for self-determination. In the context of AI, individuals increasingly interact with systems that influence their behavior, whether through social media algorithms, health tracking devices, or personalized advertisements (Lupton, 2016). These technologies, while providing benefits, also pose risks to autonomy by shaping choices in ways that individuals may not fully understand or consent to.
Legal frameworks must therefore evolve to ensure that individuals retain control over their interactions with AI. This includes enhancing transparency and providing meaningful ways for individuals to opt out of data-driven decision-making processes that may limit their autonomy. Proposals such as algorithmic impact assessments, which evaluate the societal effects of AI systems, represent a step towards safeguarding individual rights in an increasingly automated world (Crawford & Calo, 2016).
4.4. Towards a Revaluation of Legal Normativity
The central hypothesis of this paper—that AI has the potential to deconstruct traditional notions of legal and moral responsibility—calls for a revaluation of legal normativity in light of AI’s impact on society. Current legal concepts of privacy, consent, and accountability must be redefined to address the complexities introduced by AI. Moreover, the law must be adaptive, recognizing that AI, as a form of biopower, can shift the balance of power in society, potentially to the detriment of individual rights and freedoms (Zuboff, 2019).
The introduction of the “will to regulate” as a conceptual innovation underscores the need for a proactive legislative approach to AI. Just as Foucault’s work highlights the interplay between power and resistance, the law must serve as a mechanism for resisting the excesses of AI-driven biopower. This involves not only regulating the use of AI but also ensuring that its deployment aligns with core human rights principles and promotes equity and justice.
4.5. Future Directions for AI Regulation
Looking forward, it is essential that regulatory approaches to AI consider the socio-political context in which these technologies are deployed. The comparative analysis of the GDPR and China’s social credit system illustrates the diversity of approaches to AI regulation and highlights the importance of aligning these frameworks with societal values and ethical norms (Chen & Cheung, 2020).
There is also a need for international cooperation in AI regulation to ensure consistency and prevent the proliferation of fragmented standards that could undermine human rights protections. Organizations such as the OECD¹ and the United Nations² have begun to address these issues, but a more concerted effort is required to establish global norms for the ethical use of AI (Floridi et al., 2018).

¹The OECD has established AI Principles advocating for the responsible use of AI. These principles emphasize human-centric development, transparency, accountability, and fairness. They call for international cooperation to harmonize AI governance and ensure equitable access to AI technologies.

²On March 21, 2024, the UN General Assembly adopted a landmark resolution aimed at regulating the emerging field of AI. This resolution, passed by acclamation and co-sponsored by over 120 member states, marked the first time the Assembly has adopted measures specifically tailored to AI governance. The resolution calls on all states to abstain from using AI systems that cannot operate in compliance with international human rights standards or that pose undue risks to the enjoyment of these rights. It underscores that the same rights individuals have offline must be equally protected online, spanning the entire lifecycle of AI systems. The Assembly also highlighted the need to integrate respect for, protection of, and promotion of human rights into the design, development, deployment, and usage of AI technologies. This approach aims to ensure that AI serves as a tool to advance human dignity and freedoms, aligning with the principles of equality and justice. The resolution acknowledges AI’s potential to accelerate progress toward achieving the 17 Sustainable Development Goals (SDGs). At the same time, it recognizes the disparities in technological development between and within nations, urging member states and stakeholders to foster inclusive and equitable access to AI. It emphasizes closing the digital divide and increasing digital literacy, particularly in developing countries, to ensure that all nations can benefit from the advancements in AI. The resolution also encourages states, private sector actors, civil society, research organizations, and the media to collaborate in creating robust regulatory and governance frameworks for the safe and reliable use of AI. Linda Thomas-Greenfield, the U.S. Ambassador to the UN, introduced the resolution, describing it as a “historic step” in promoting the secure use of AI. She expressed hope that the inclusive dialogue leading to this resolution would serve as a model for addressing other AI-related challenges, such as those involving peace, security, and responsible military use of AI. She emphasized the importance of governing AI proactively to prevent it from governing humanity, advocating for a human-centered approach grounded in dignity, safety, and human rights. The resolution complements ongoing initiatives within the UN, such as UNESCO’s Recommendation on the Ethics of Artificial Intelligence, the work of the International Telecommunication Union (ITU), and efforts by the Human Rights Council. It also aligns with future global projects, including negotiations for a digital global compact and contributions from the Secretary-General’s High-Level Advisory Body on Artificial Intelligence. Collectively, these initiatives reflect a commitment to creating AI governance structures that prioritize sustainable development, inclusivity, and the ethical use of technology to advance global priorities.
Finally, the law must be future-oriented, anticipating the ways in which AI will continue to evolve and shape society. This involves not only addressing current challenges but also preparing for the ethical dilemmas that will arise as AI becomes increasingly sophisticated. The integration of philosophical insights, such as those provided by Foucault, into the legal discourse on AI can help to ensure that regulation is not merely reactive but also reflective of deeper questions about power, autonomy, and the nature of human agency.