Legislative Frameworks for the Ethical Deployment of Artificial Intelligence: Safeguarding Human Rights in the Age of Technology
1. Introduction
Artificial intelligence (AI) is challenging international law, and the need for new legislation is undeniable. Although AI can improve efficiency and accuracy in many areas of human life, it has also raised significant concerns about human rights and international responsibility as governments increasingly deploy the technology in contexts ranging from data analysis and security to international conflicts.
Despite its high accuracy, artificial intelligence can make serious mistakes when it receives incorrect information, and governments may deliberately use it to gain unauthorized access to individuals' personal data or to analyze and review such data. Questions therefore arise about potential human rights violations and the responsibility of states in such instances.
This article employs an analytical and descriptive methodology to examine the legal frameworks of international law governing the ethical use of artificial intelligence in the context of human rights violations.
To provide a comprehensive analysis of this topic, the discussion first explores the relationship between technology and human rights. It then examines the potential violation of human rights through AI deployment and, finally, considers the international responsibility of states on this issue by focusing on their obligations.
2. Technology and Human Rights
New technologies have established their own space in today's society and are progressing day by day. Adopting a legal framework at the national and international levels is therefore essential, and that framework must be continuously updated.
Looking at new technologies from a legal perspective requires a preliminary understanding that such a Copernican revolution has not only produced socio-economic consequences but has also created problems and had inevitable repercussions on the extensive catalog of human rights (Coccoli, 2017).
Technology has clear advantages in promoting human rights, such as enhanced communication and awareness of violations: activists and organizations can quickly publicize abuses, and individuals can educate themselves about their rights and the legal frameworks that protect them. Advanced technologies such as satellite internet also enable organizations to monitor human rights and to provide accurate evidence of violations such as war crimes and environmental destruction (Susskind, 1998). Yet while technology has the potential to enhance human rights, it can also pose significant threats, such as the violation of privacy through surveillance (Oseni et al., 2021).
The collection of personal data by corporations and governments can lead to a loss of individual autonomy and the erosion of privacy rights (Zuboff, 2023). Furthermore, the use of technology in warfare presents another critical issue. The development of autonomous weapons can lead to violations of international humanitarian law, particularly in terms of proportionality and distinction (Horowitz & Scharre, 2015).
In summary, technology plays a role in advancing human rights by providing access to information and facilitating communication. However, it also presents significant challenges, such as privacy violations, which we will address in more detail in the upcoming section.
3. Violations of Human Rights and Challenges
Human rights violations have occurred throughout history. In the past, these violations primarily concerned rights such as the right to life, the right to health, and the right to freedom. With the advancement of technology, however, new forms of human rights violations have emerged, such as violations of privacy and inequality. Today, proving human rights violations has become challenging for several reasons, including the lack of legislation and insufficient awareness of wrongful actions. One contemporary challenge is the use of AI, which, if not controlled through appropriate legislation and measures, can lead to human rights violations.
The concept of human rights is examined first; the discussion then addresses the principal human rights that are being violated.
3.1. Definition of Human Rights and Challenges
Human rights are defined as the fundamental rights and freedoms to which all persons are entitled, without distinction of any kind, such as race, colour, sex, language, religion, political or other opinion, national or social origin, property, birth or other status. Human rights protect the individual from the abusive or arbitrary exercise of power by State authorities, and their scope extends to the territorial jurisdiction of states (Melzer & Kuster, 2016). Human rights are defined in various international documents and are often included in the constitutions of many countries. They can be divided into categories such as civil and political rights and economic, social, and cultural rights. These rights therefore touch all aspects of life and are inherent to all humans (Van Boven, 2010).
The violation of human rights can create grounds for international responsibility. Such violations can lead to various forms of liability, including state responsibility, which obligates the wrongdoing state to make reparations for the harm caused.
However, determining international responsibility for human rights violations is not always easy. For example, in violations caused by new technologies, such as artificial intelligence, there are challenges that must be carefully examined.
Human rights violations can occur in various forms through artificial intelligence, which we address in detail below.
3.2. Violations of Human Rights Resulting from AI Deployment
The deployment of artificial intelligence (AI) has given rise to many human rights violations.
AI poses specific challenges that distinguish it from other technologies. AI concerns intelligent systems and machines that can operate autonomously and act like humans, whereas other technologies depend on dedicated infrastructure and specific systems (Malik, 2023). AI involves machines that think and learn like humans, helping them to solve problems and make decisions, while other technologies deal with managing and processing information. AI systems focus on analyzing data and creating smart machines that need human-like intelligence to perform tasks such as reasoning, learning, perception, and problem-solving.
Other technologies, such as Information Technology (IT), focus on creating and protecting reliable and secure systems; robotics, for instance, focuses on building machines that can perform physical tasks on their own or with some assistance.
Therefore, owing to these features, AI poses human rights challenges that other technologies do not.
Since these issues are developing and multiplying day by day, we highlight some of the most important ones below.
3.2.1. The Impact of AI on Discrimination and Vulnerable Groups
The prohibition of discrimination means that everyone is equal before the law and is entitled to equal protection. Discrimination challenges associated with AI can arise from unequal access of certain groups to these technologies, bias in the data, and algorithmic bias (Greenstein, 2022). In this context, one study showed that COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) was twice as likely to label Black offenders as high-risk as white offenders (Angwin et al., 2016). The New York Times also reported the first known case of a wrongful arrest, of a Black man detained in January 2020, due to racial bias in AI-based facial recognition technologies.
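To make the notion of algorithmic bias more concrete, the following minimal Python sketch shows how a disparity in false-positive rates between demographic groups can be measured. The records, group labels, and rates are entirely hypothetical; the sketch illustrates the kind of audit underlying the COMPAS findings, not the COMPAS system itself.

```python
from collections import defaultdict

# Each record: (group, predicted_high_risk, reoffended). Hypothetical data only.
records = [
    ("group_a", True, False), ("group_a", True, False), ("group_a", False, False),
    ("group_a", True, True),
    ("group_b", True, False), ("group_b", False, False), ("group_b", False, False),
    ("group_b", False, True),
]

def false_positive_rates(rows):
    """False-positive rate per group: share of non-reoffenders flagged as high-risk."""
    flagged = defaultdict(int)
    negatives = defaultdict(int)
    for group, predicted_high_risk, reoffended in rows:
        if not reoffended:
            negatives[group] += 1
            if predicted_high_risk:
                flagged[group] += 1
    return {g: flagged[g] / negatives[g] for g in negatives}

for group, rate in false_positive_rates(records).items():
    print(f"{group}: false-positive rate = {rate:.2f}")  # group_a ~0.67, group_b ~0.33
```

Run on real predictions and outcomes, such an audit would reveal whether one group is disproportionately flagged as high-risk despite not reoffending.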
Women are among the vulnerable groups most affected by AI systems. The prevailing gender gap in mobile internet use and unequal access to digital devices compounds the problem: studies have shown that job search engines systematically present women with advertisements for lower-paying jobs (Devillers et al., 2021), while high-paying jobs are displayed to men (Stanila, 2018).
3.2.2. Civil and Political Rights
AI can violate many civil and political rights. Notable examples include the right to privacy, rights in criminal justice, and political liberties.
AI has changed the traditional meaning of privacy (Affonso et al., 2021). The challenges include the use of AI to identify individuals, AI profiling of individuals based on population-level data, AI-generated inferences of information and identity from non-sensitive data, and AI decision-making. Furthermore, automatic facial recognition, another AI system, causes both individual harms (to dignity, privacy, and autonomy) and collective harms, for instance to trust and transparency (Ashraf, 2022). China offers an alarming example: it has established a social credit system that rewards or punishes citizens on the basis of compliance or non-compliance with social norms, facilitated by an extensive biometric surveillance network (Smith & Miller, 2022).
AI systems also often aggregate data from various sources, such as social media and online games. This extensive data collection can amount to a real intrusion, often occurring without individuals' consent. Using AI systems, for example, governments can identify behavioral patterns and predict what actions a person may take (Lami et al., 2024).
Moreover, AI systems often operate without sufficient accountability: individuals cannot be sure how their data is being used, and this lack of accountability can lead to violations of privacy rights.
In the context of criminal justice, AI can violate the right to an effective remedy, since decisions in this field must be individualized and reasoned. Automated decision-making processes and algorithmic data processing techniques raise several challenges, including "the ambiguity of the decision itself and its basis and the difficulty in assigning responsibility for the decision", which also complicates individuals' understanding of whom to turn to in order to contest the decision. Such techniques admittedly have the advantage of a more effective collection of information, which can be used in the investigation phase and/or as evidence at the trial stage. However, they may violate the right to privacy and the principle of equality of arms when the accused has no chance to challenge the correctness or the selection of the automatically generated evidence used against him (Quattrocolo, 2020).
AI might also affect various liberties, such as freedom of expression and freedom of religion, through AI-driven personalization. It can shape how and where individuals assemble online and what types of associations can be formed, and, via content moderation, AI can influence assembly by removing conversations or events from social media (Ashraf, 2020). The stakes are considerable: the MeToo movement brought millions of women together online to share their stories of sexual harassment, and over 6 million people assembled to sign a petition to annul Brexit, apparently the largest petition ever delivered to Parliament (Ashraf, 2022).
3.2.3. Social and Cultural Rights
According to recent reports, AI technologies bear on pressing social issues such as poverty, racism, and inequality (Alston, 2019). AI is already widely present in health care services, for example in automatic acute care triaging and chronic illness management, including remote monitoring, preventative treatment, patient intake, referral assistance via AI-enabled telehealth, and personalized and precision medical practices (Anshari et al., 2023).
The effect of AI on the right to take part in cultural life and to enjoy the benefits of scientific progress is most pronounced where cultural expression is criminalized, but the right to education is another example in this field directly affected by AI technologies, for instance through automated grading and essay scoring in high-stakes standardized testing environments (Raso et al., 2018). A machine learning system analyzing video or photographic footage could learn to associate certain types of dress, manners of speaking, or gestures with criminal activity and could be used to justify the targeting of these groups under the guise of preventing crime. AI technologies and surveillance may therefore inspire a "fear of being identified or suffering reprisals for cultural identity, leading people to avoid cultural expressions altogether."
3.2.4. Labor Rights
The right to work is recognized as a fundamental human right, and this right has also been referenced in many international documents, including the Universal Declaration of Human Rights and the International Covenant on Economic, Social and Cultural Rights.
The Universal Declaration of Human Rights states that everyone has the right to work without any distinction. However, advancements in AI have raised many concerns in this regard. They have, for instance, fuelled new fears about large-scale job losses stemming from AI's ability to increasingly automate not only repetitive but also non-repetitive tasks. There are also concerns about autonomous decision-making in the workplace, particularly in HR and management processes, linked to excessive surveillance, intrusive practices, and the protection of fundamental workers' rights (OECD & Deal, 2021). Through social media platforms, for instance, AI can deliver job advertisements to targeted audiences and enable businesses to personalize recruitment; in these advertisements, however, "search engines may deliver job postings on well-paying technical jobs that are targeted at men only, possibly discriminating against women job-seekers" (Chan, 2022).
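As a purely illustrative sketch of the kind of disparity Chan (2022) describes, the following Python fragment audits how often a single well-paying job advertisement is delivered to men and to women in a hypothetical delivery log; the data, field names, and threshold are assumptions for illustration, not any platform's real records or API.

```python
from collections import Counter

# Hypothetical delivery log for one high-paying job ad: (viewer_gender, ad_shown).
delivery_log = [
    ("men", True), ("men", True), ("men", False), ("men", True),
    ("women", False), ("women", True), ("women", False), ("women", False),
]

def delivery_rates(log):
    """Share of opportunities in which the ad was actually shown, per group."""
    shown, total = Counter(), Counter()
    for gender, ad_shown in log:
        total[gender] += 1
        shown[gender] += int(ad_shown)
    return {g: shown[g] / total[g] for g in total}

rates = delivery_rates(delivery_log)
print(rates)  # {'men': 0.75, 'women': 0.25} with the hypothetical log above
# A common rule of thumb in employment contexts (the "four-fifths rule") treats
# a ratio below 0.8 as a warning sign of disparate impact.
print("disparity ratio:", rates["women"] / rates["men"])  # 0.33 here
```

Auditing delivery outcomes in this way is one practical route by which regulators or platforms could detect the discriminatory targeting described above.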
4. Human Rights and State Responsibilities: The Challenge of Employing Artificial Intelligence
Governments, as sovereign entities, have a fundamental obligation to protect human rights. This responsibility is mentioned in various declarations and treaties, which state that human rights protection is essential for the dignity of individuals (Bird, 2010).
International law is designed to make each state responsible for the human rights protection of its own population (Gibney et al., 1999). The Universal Declaration of Human Rights, one of the most important international human rights documents, emphasizes equality and non-discrimination and obliges states to protect these principles. In contrast, the use of automated systems can lead to discriminatory behavior, particularly digital discrimination (Ferrer et al., 2021). Furthermore, Articles 2 and 7 of the International Covenant on Civil and Political Rights state that states are obligated to ensure the implementation of human rights. In general, international documents indicate that states have a duty to use all available resources to prevent human rights violations caused by artificial intelligence and other technologies.
The international responsibility of states regarding the use of artificial intelligence can be divided into two main areas. First, states should establish laws and regulations to limit the use of AI and prevent potential abuses. Second, an international organization should be established to oversee domestic government policies on the deployment of artificial intelligence and other automated technologies. In the next section, we review the actions taken and explore each of these responsibilities in detail.
4.1. International and Domestic Legislation on AI Deployment, Inaction or Disagreement about Establishing Limits and Standards
Government regulation of AI deployment can be examined in two main areas: domestic and international legislation (global and regional). In addition to examining each of them, we will also discuss the approaches governments take.
4.1.1. International and Regional Legislation on AI Deployment, Success or Mere Recommendation
Up until now, no international treaty on the use of AI has been concluded. However, there have been international efforts, which are highlighted below.
In its proposal for a regulation on artificial intelligence, the EU chose a horizontal regulatory approach, despite the European Parliament's adoption of resolutions on artificial intelligence in relation to specific issues such as ethical aspects, liability, and copyright.
The EU has been trying to be the first region to set standards for the digital age to present itself as a leader in the field of rulemaking and to ensure that the European model becomes a global standard and can be adopted within other parts of the world (Finocchiaro, 2024). However, the aim is not to compete with China and the United States in terms of technological production. The aim is, on the one hand, to establish a new model and, on the other hand, to avoid fragmentation.
The EU model presents its legislation in four key areas: first, data protection, through Regulation (EU) 2016/679 of the European Parliament and the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data (commonly known as the "GDPR") and the rules on the exploitation of data provided for under the Data Act; second, digital services and the digital market, through the Digital Services Act; third, digital identity, through the review of the eIDAS Regulation of 2014; and fourth, artificial intelligence, through the proposal for a regulation.
This framework safeguards not only fundamental rights but also European "values". The European efforts certainly represent a significant advance, but some critical issues are unavoidable. First, the system sketched out by the proposal for a regulation appears quite inflexible, because the model adopted by the European Commission is based on risk management (Abriani & Schneider, 2021). In addition, the classification systems for AI will need to be revised and updated as AI constantly develops (Finocchiaro, 2024). If the EU wishes to assert leadership on the global stage, it will have to go beyond an organizational and managerial approach and engage with the core, genuinely unresolved issues.
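As a purely illustrative sketch of the risk-management logic discussed above, the following Python fragment classifies hypothetical AI use cases into risk tiers. The tier names follow the commonly described unacceptable/high/limited/minimal structure, but the mapping and obligations shown are assumptions for illustration, not the text of the EU proposal.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "strict obligations (e.g. conformity assessment, human oversight)"
    LIMITED = "transparency obligations"
    MINIMAL = "no specific obligations"

# Hypothetical mapping from use case to tier, for illustration only.
EXAMPLE_TIERS = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "recidivism risk scoring": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Return the assumed risk tier for a use case, defaulting to MINIMAL."""
    return EXAMPLE_TIERS.get(use_case, RiskTier.MINIMAL)

print(classify("recidivism risk scoring").value)
```

The rigidity the critics point to is visible even in this toy version: a fixed mapping must be revised by hand whenever a new use case or risk emerges.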
In summary, the EU's efforts have made progress on human rights and AI, particularly in data protection. They are nevertheless insufficient in themselves: they are regional rather than international, they leave many other aspects of AI deployment unaddressed, and they take an onerous, undifferentiated approach that is ill-suited to liability for losses caused by artificial intelligence applications.
On 11 March 2024, the UN General Assembly adopted a resolution on the use of artificial intelligence, the first time the Assembly has adopted a resolution to regulate this emerging field. The purpose of the resolution is to ensure that artificial intelligence is used in non-military activities, including pre-design, design, development, evaluation, testing, deployment, use, sale, procurement, operation, and decommissioning, in a way that is reliable, explainable, ethical, inclusive, in full respect, promotion and protection of human rights and international law, privacy-preserving, sustainable-development-oriented, and responsible.
The difference between this resolution and previous ones, such as the December 2022 resolution on the right to privacy in the digital age, is that artificial intelligence is addressed explicitly. The resolution treats artificial intelligence as a matter requiring a universal governance framework and calls upon states to create reliable regulatory frameworks based on international principles.
Although this resolution has an international character, it cannot adequately respond to the current needs related to the use of AI at the international level since the UN General Assembly resolutions are advisory in nature and not legally binding under international law.
The OECD (Organisation for Economic Co-operation and Development) was established as an international organization to improve the economic and social well-being of people worldwide (OECD, 2004). Since AI can be helpful in the economy, especially in business and production (Qin et al., 2024), the OECD has released reports and created platforms to support the use of AI. The organization has recognized AI as a potential tool to improve education and healthcare, but it has also warned about the risks AI poses for human rights.
The OECD's efforts, however, have taken the form of recommendations and cannot be considered regulation.
4.1.2. Domestic Legislation on AI Deployment
Some countries, such as China and the United States of America, have tried to bring AI deployment under a regulatory framework. The model adopted in the USA is a self-regulatory model based on antitrust law, whereas the Chinese model appears to be a dirigiste model based on state capitalism (Finocchiaro, 2024). In the 2023 United States legislative session, at least 25 states, along with Puerto Rico and the District of Columbia, introduced artificial intelligence bills, and 18 states and Puerto Rico adopted resolutions or enacted legislation. Although international legislation on AI deployment is harder to achieve because of disagreement among states, it could be much more useful than domestic legislation in resolving disputes over AI deployment (Koskenniemi, 2005). It could also increase international cooperation among states in setting new standards, since AI is advancing day by day (Arinez et al., 2020).
Additionally, governments face practical challenges when trying to implement AI-related human rights legislation domestically. For instance, many national AI strategies lack clear goals and commitments, making it difficult to ensure accountability and effective implementation. Each government legislates within its own legal system, and some of these systems have difficulty keeping up with the fast pace of change in AI; in other words, the speed of AI innovation outstrips legislative processes, leaving gaps in regulation and oversight (Bakiner, 2023). Furthermore, different political landscapes and priorities among states can impede the development of unified approaches to AI regulation, further complicating the implementation of effective human rights protections.
International cooperation among governments can, however, help overcome many of these challenges.
Governments can share experiences and challenges by collaborating with each other. This enables them to consult each other to overcome obstacles (Thomas, 2012). Also, cooperation can establish shared principles like transparency, accountability, and fairness to create consistent global guidelines for AI use. Setting common rules for cross-border data sharing protects privacy and ensures ethical AI applications while respecting national laws and addressing the global challenges posed by AI. Without it, governments will be less able to address growing inequalities in wealth, power, and access to new technologies.
In short, it seems that states have not yet been able to agree on AI legislation, since many aspects of the technology remain uncertain for many countries. The EU, as the first region to legislate on AI deployment, has its own critical shortcomings and cannot by itself meet international human rights needs. States nevertheless remain responsible and should agree on AI legislation to control AI deployment.
4.2. The Regulatory Body for Artificial Intelligence, Independence or Dependence on International Law
The second way to organize AI deployment is to create an international body to oversee its performance and regulation. So far, no specific organization has been established for AI deployment. According to some opinions, the regulation of the International Telecommunication Union (ITU) could, in some respects, govern artificial intelligence (Ryan, 2012). However, this perspective may not be entirely appropriate, because artificial intelligence differs from the subjects the ITU deals with, such as the internet: AI functions as an automated tool designed to enhance decision-making and actively engage with users (Russell & Norvig, 2016). In addition, AI is an interdisciplinary technology, and its regulation at the international level requires expertise from multiple fields (Saghiri et al., 2022). The absence of a specialized international body for artificial intelligence has led countries toward regionalism, similar to the approach taken by the European Union (Salajan et al., 2024). Yet AI, as a cross-border phenomenon, requires multilateralism.
As a result, it seems that the regulation of AI deployment should be addressed, in relation to human rights, within the framework of the United Nations. Moreover, since AI has specific characteristics and is rapidly evolving, collaboration among states to establish a multilateral treaty and to create a specialized international organization overseeing the domestic and international policies of its contracting states should be regarded as an obligation flowing from human rights documents such as the Universal Declaration of Human Rights.
5. Creating a Specialized International Organization as a Solution
By entering into a founding legislative treaty, governments would agree to exercise their sovereignty and to legislate on AI deployment in accordance with the commitments outlined in the organization's agreements. The treaty would be the establishing document of the international organization and would guarantee the agreements and standards it sets. In other words, by ratifying the treaty, governments would commit to aligning their policies with the organization's agreements.
Such an organization could effectively establish a legal framework and set international standards, creating a uniform procedure worldwide for employing AI in matters related to human rights. It could also serve as a center for resolving disputes in these areas. Furthermore, its existence could promote international cooperation among governments, helping to reduce discrimination between industrialized and developing countries in access to artificial intelligence. Joining such an organization implies that governments have accepted it as a regulatory body overseeing the deployment of AI in their domestic practices; they therefore cannot simultaneously invoke principles such as sovereignty while disregarding the standards the organization sets. This does not mean that the organization is necessarily superior to governments; rather, it signifies that governments are committed to following its policies and to refraining from any actions that contradict them.
6. Conclusion
In the digital age, artificial intelligence has become a powerful tool in the hands of states. While this technology can improve the quality of life and facilitate public services, it also brings significant challenges for human rights protection, including breaches of privacy and unlawful surveillance. Under international human rights documents, states are obliged to use all available resources to prevent human rights violations. The most important means to this end are legislation and the establishment of restrictions on the use of the technology. It seems that states' lack of familiarity with AI and the absence of agreement on its definition have made them reluctant to move toward a unified regulatory approach. The EU has gone beyond purely internal legislation, but critical issues, such as a lack of flexibility, prevent its approach from being sufficient for human rights protection. As a solution, this article suggests that, given the differences between AI and other technologies, particularly its capacity for autonomous decision-making and its rapid evolution, states should establish a specialized organization to oversee the regulation of AI deployment and ensure legal unity among countries. Through this organization, states could also hold annual meetings to discuss regulations on the use of artificial intelligence and its developments and changes.
Appendixes
Documents
Articles 2 and 7 of the International Covenant on Civil and Political Rights.
European Commission (2022a). Proposal for a Regulation of the European Parliament and of the Council on Harmonised Rules on Fair Access to and Use of Data (Data Act). COM (2022) 68 Final. EURLex.
https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=COM:2022:68:FIN
European Commission (2022b). Proposal for a Regulation of the European Parliament and of the Council on the European Health Data Space. COM (2022)197 Final. EUR-Lex.
https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=COM:2021:197:FIN
European Parliament (2020a). European Parliament Resolution of 20 October 2020 with Recommendations to the Commission on a Framework of Ethical Aspects of Artificial Intelligence, Robotics and Related Technologies (2020/2012(INL)). OJ C 404, 6.10.2021, pp. 63-106. EUR-Lex.
https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52020IP0275
European Parliament (2020b). European Parliament Resolution of 20 October 2020 with Recommendations to the Commission on a Civil Liability Regime for Artificial Intelligence (2020/2014(INL)). OJ C 404, 6.10.2021, pp. 107-128. EUR-Lex.
https://eur-lex.europa.eu/legal-content/EN/T
European Parliament (2020c). European Parliament resolution of 20 October 2020 on Intellectual Property Rights for the Development of Artificial Intelligence Technologies (2020c/2015(INI)). OJ C 404, 6.10.2021, pp. 129-135. EUR-Lex.
https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52020IP0277
Regulation (EU) (2022). 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market for Digital Services and amending Directive 2000/31/EC (Digital Services Act). OJ L 277, 27.10.2022, pp. 1-102. EUR-Lex.
https://eur-lex.europa.eu/eli/reg/2022/2065
The Equal Rights of Men and Women Are Affirmed in the Preamble of the UDHR; Art. 3 of ICCPR; Art. 3 and Art. 7. (a) (i) of ICESCR; and Art. 18(3) of ACHPR.
The Following Rights Enshrined in the Charter of Fundamental Rights of the European Union Are Expressly Referred to: Human Dignity (Article 1), Respect for Private and Family Life and Protection of Personal Data (Articles 7 and 8), Non-Discrimination (Article 21) and Equality Between Men and Women (Article 23).
UDHR, Art. 7; ICCPR, Arts. 2 and 26; ICESCR, Art. 2; ECHR, Art. 14. and Protocol No. 12 to the ECHR; ACHR, Arts. 1 and 24; ACHPR, Art. 2.
UDHR, Art. 8; ICCPR, Art 2.3; ECHR, Art. 13; ACHR, Art. 25; ACHPR, Art. 7.1.
Universal Declaration of Human Rights.
Websites
OECD. Artificial Intelligence. https://www.oecd.org/en/topics/artificial-intelligence.html
Artificial Intelligence 2023 Legislation.
Sunstone (2023). Artificial Intelligence vs Information Technology: A Student Guide.
https://sunstone.in/blog/artificial-intelligence-vs-information-technology
Mishra, V. (2024). General Assembly Adopts Landmark Resolution on Artificial Intelligence. UN News.
https://news.un.org/en/story/2024/03/1147831
Hill, K. (2020). Wrongfully Accused by an Algorithm. The New York Times.
https://www.nytimes.com/2020/06/24/technology/facial-recognition-arrest.html
https://undocs.org/A/78/L.49
https://undocs.org/en/A/RES/77/211
https://www.coe.int/en/web/human-rights-and-biomedicine/common-ethical-challenges-in-ai
https://www.datacamp.com/blog/ai-regulation
https://www.slrconsulting.com/insights/understanding-the-human-rights-issues-associated-with-artificial-intelligence
https://www.un.org/digital-emerging-technologies/sites/www.un.org.techenvoy/files/GDC-submission_ART-AI_University-of-Bath.pdf
Access Now (2018). Human Rights in the Age of Artificial Intelligence.
https://www.accessnow.org/wp-content/uploads/2018/11/AI-and-Human-Rights.pdf
OECD (2023). The Impact of AI on the Workplace: Main Findings from the OECD AI Surveys of Employers and Workers.
https://www.oecd.org/en/publications/the-impact-of-ai-on-the-workplace-main-findings-from-the-oecd-ai-surveys-of-employers-and-workers_ea0a0fe1-en.html
United Nations Human Rights Council (2021). The Right to Privacy in the Digital Age.
https://www.ohchr.org/en/calls-for-input/2021/right-privacy-digital-age-report-2021