Artificial Intelligence in Law: Utilisation by Brazilian Legal Practitioners and Regulatory Challenges
1. Introduction
Technological evolution is driven by human needs. According to Basalla (1988: p. 6), technologies tend to evolve in response to pressing and immediate human needs, such as agriculture, water supply, and transportation, with the aim of facilitating the fulfilment of these basic needs. Artificial Intelligence (AI) is said to have emerged in the 1940s as an attempt to replicate the human brain, following the understanding that the brain is a neural network whose neurons either emit pulses or remain inactive, which led the scientific community to study the possibility of simulating the human brain through a combination of neural networks and mathematical models (Shao, Zhao, Yuan, Ding, & Wang, 2022: p. 2).
The 1950s and 1960s were essential for the development of this technology, encompassing milestones from the Turing test to the famous meeting at Dartmouth College, which aimed at fostering discussions on how to use machines to simulate human intelligence (Georgiou, 2020: p. 138). Turing did not develop an AI programme; rather, he fostered a philosophical discussion about the potential of machine usage and machine thinking, which was essential for the advancement of technology (Terrones, 2018: p. 148). The development of AI progressed smoothly, but slowly, mainly due to the time required to develop all necessary technologies and algorithms. Most of these were achieved only in the 1990s, when AI also began to gain broader popularity (Georgiou, 2020).
The evolution has been so significant that AI has been introduced into various business models to enhance efficiency. The ability of AI to process and analyse data on a scale and at a speed that exceeds human capabilities is praised in this sense (Colther & Doussolin, 2024: p. 5). AI, however, lacks cognitive power. It operates by gathering and classifying available data, and it has been significantly boosted by machine-learning systems, which use patterns in data to produce intelligent results (Surden, 2019: p. 1312).
In this context, one useful application of AI in the legal field is the automation of activities through the modelling of logical and knowledge-based rules (Surden, 2019: p. 1312). Thus, AI could be used as a legal tool, functioning by transforming norms into rules that can be processed by computers, thereby employing legal logic. According to a Bloomberg Law report (2024a), the use of AI by legal practitioners is already a reality, with the technology being employed to automate routine tasks such as document review, research, or generative legal writing.
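As a toy illustration of this rule-modelling idea, a procedural norm can be encoded as a machine-checkable rule. The sketch below is illustrative only; the 15-day deadline and the `appeal_is_timely` helper are invented for the example and are not drawn from any system cited in this article.

```python
# Minimal sketch (illustrative only): encoding a hypothetical procedural norm
# as a machine-readable rule, in the spirit of the rule-based legal AI
# described by Surden. The deadline value is invented for the example.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Filing:
    served_on: date   # date the party was served
    filed_on: date    # date the appeal was actually filed

APPEAL_WINDOW_DAYS = 15  # hypothetical statutory deadline

def appeal_is_timely(f: Filing) -> bool:
    """Apply the norm 'an appeal must be filed within N days of service'."""
    deadline = f.served_on + timedelta(days=APPEAL_WINDOW_DAYS)
    return f.filed_on <= deadline

print(appeal_is_timely(Filing(date(2024, 3, 1), date(2024, 3, 10))))  # True
print(appeal_is_timely(Filing(date(2024, 3, 1), date(2024, 3, 20))))  # False
```

Real rule-based systems model far richer norms (exceptions, conflicting provisions), but the principle is the same: the legal condition becomes an explicit, executable test.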
However, this technology is not free from bias, raising concerns related to discrimination and privacy. Since AI lacks cognitive power and is an automatic system that reads data and identifies patterns to generate results, its algorithms can perpetuate biases and discrimination present in the data analysed (Saeidnia, 2023: p. 1). For instance, a study published on ScienceDirect examined the presence of gender bias in AI-generated results, concluding that AI tends to reflect discriminatory patterns rooted in society. Because AI has achieved such high market penetration that its use is unlikely to be reversed, there is a need for education that teaches consumers to approach its results with caution and critical thinking (Newstead, Eager, & Wilson, 2023). Gender discrimination has been alleged, for instance, in the use of AI by Amazon’s HR department (Valeri, 2023). The system was trained on curricula vitae (CVs) received by the company over the preceding 10 years. Since these CVs came predominantly from men, the system began assigning lower scores to CVs that made any reference to the candidate’s gender, resulting in discrimination against women. The discrimination is alleged to have arisen from pattern recognition combined with the failure to correct this error.
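The mechanism behind this kind of bias can be sketched in a few lines. The toy scorer below is not Amazon's actual system; it merely shows how a naive frequency model trained on an imbalanced set of past hires reproduces that imbalance (all data and names are invented).

```python
# Toy illustration (not Amazon's actual system): a naive word-frequency
# scorer trained on an imbalanced set of "hired" CVs reproduces the
# imbalance present in its training data.
from collections import Counter

# Hypothetical training data: past hires were predominantly men, so tokens
# correlated with women appear rarely among the positive examples.
hired_cvs = ["engineer java men_chess_club"] * 9 + ["engineer java women_chess_club"]

word_counts = Counter(w for cv in hired_cvs for w in cv.split())
total = sum(word_counts.values())

def score(cv: str) -> float:
    # Average frequency of the CV's words among historical hires.
    words = cv.split()
    return sum(word_counts[w] / total for w in words) / len(words)

# Identical qualifications; only the gendered marker differs.
print(score("engineer java men_chess_club") > score("engineer java women_chess_club"))  # True
```

The model never "decides" to discriminate; it simply ranks whatever patterns dominated the historical data, which is why uncorrected training sets propagate past bias into new decisions.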
AI bias and discrimination are also often alleged in results generated through public surveillance. According to Heaven (2020), discrimination in such systems stems from the database used by the AI, with a person with “black” skin being approximately five times more likely to be stopped by the police than a person with “white” skin. Another example of AI-generated flaws arising from unsupervised use and inadequate databases occurred in the Brazilian state of Sergipe (DataPrivacy BR Research, 2023), which employs facial recognition technology in public security cameras. A young woman was identified by the AI as a fugitive from the police, leading to an aggressive approach, including the use of handcuffs and detention at a police station. After the woman was released following clarification of the incident, it was revealed that the only characteristic she shared with the wanted individual was her skin colour.
The lack of regulation and clear guidelines on ethics and the preservation of rights in AI development has led to a series of open letters in the United States. In 2023, the Future of Life Institute (2023) called for a six-month moratorium on the development of new AI systems to draw attention to the indiscriminate use of AI, which, in its view, does not necessarily benefit humanity, advocating for more stringent protocols and the use of external auditors. In the same year, the U.S. Chamber of Commerce (2023) urged the government to assess the American legal framework rather than adopting regulations hastily, to ensure that excessive regulation would not hinder the development of new technologies that could benefit the country’s economy and the public’s well-being. Finally, the academic community has also expressed its concerns. The computer science professor Stuart Russell (2015) wrote a series of open letters calling on the global community of scientists, engineers, and technologists to develop guidelines for AI research; his main concern is the development of autonomous weapons using AI.
Several countries have begun discussing the importance of regulating AI. The first legislation was adopted in 2024 by the European Union (Regulation 2024/1689), known as the EU AI Act. The EU AI Act is ambitious, aiming to regulate the development and use of AI within the EU. It applies to various operators within the supply chain and adopts a risk-based approach. Since the Act applies to all operators regardless of their location, provided the AI operates within the European market, its effects extend beyond the EU.
Given that the United States is one of the primary countries for technological development, hosting some of the world’s leading “tech companies” in Silicon Valley, it is relevant to note that there is no comprehensive federal law in force there. This does not mean that the subject is entirely neglected. AI is addressed in some sector-specific regulations, such as aviation, through the Federal Aviation Administration Reauthorization Act (White & Case, 2024). The US has also succeeded in adopting certain guidelines aiming to steer AI regulation, such as The White House Blueprint for an AI Bill of Rights.
Brazil is still in the process of developing its legal framework for AI. Discussions are taking place within the Federal Senate, following the introduction of Bill No. 2338/2023. These discussions began in 2020 in the Chamber of Deputies with Bill No. 21/2020. After its approval, the bill was sent to the Senate, where, in 2022, a commission of legal experts was formed to draft a substitute bill consolidating the various proposals under consideration, including the one received from the Chamber of Deputies. The final report from this commission resulted in a draft bill that was converted into Bill No. 2338/2023, presented by Senator Rodrigo Pacheco, then President of the Brazilian Senate.
Through a systematic, but non-exhaustive, bibliographic review, this article aims to address the evolution of AI usage in legal practice both locally and internationally, exploring, in particular, how this technology has been implemented in the Brazilian context and analysing potential challenges in regulating the subject in the country. The main purpose of the article is to critically analyse the application of artificial intelligence in the Brazilian legal sector, exploring its benefits, risks, and ethical implications, comparing international regulatory approaches (European Union, United States, and China), and assessing how existing and proposed regulatory frameworks in Brazil can be enhanced to balance technological advancement with the protection of fundamental rights. The article is divided into three parts: the first addresses the evolution of technology use in the legal sector; the second focuses on understanding the use of these technologies within the national framework; and the third seeks to identify and analyse the challenges of regulating the subject through a comparative analysis.
2. The Evolution of AI and Its Impacts on Law
AI was first mentioned in science fiction books, such as those written by Isaac Asimov, and entered the scientific field from the 1950s onwards. The scientific milestone was Turing’s paper, Computing Machinery and Intelligence, in which he analysed how to build intelligent machines and how to test their intelligence. According to Turing (1950: p. 451), technological advancements would, in the future, enable the development of machine learning with the possibility of storing information, allowing machines to be programmed to mimic the human brain. Turing (1950) believed that the key lay in adopting a developmental and educational approach similar to that of a child, feeding the machine with data and programming it to classify this information.
Six years after Turing’s publication, the Dartmouth Summer Research Project on Artificial Intelligence was held. The idea proposed by professors McCarthy, Minsky, Rochester, and Shannon (1955: p. 2) was to gather 10 scientists specialising in artificial intelligence with the aim of significantly advancing the field, based on the assumption that every aspect of learning or any other feature of intelligence could be described so precisely that a machine could be made to simulate it. The event is celebrated as a landmark for AI as a scientific discipline.
According to Moor (2006: p. 87), the event did not unfold exactly as planned. The scientists reportedly did not achieve the level of interaction initially envisioned for the summer school project. Nevertheless, it was significant for the development of the Logic Theory Machine project. The Logic Theory Machine was also presented by scientists Newell and Simon (1956) in a publication whose purpose was to describe an information-processing system capable of discovering proofs for theorems in symbolic logic. According to Gugerty (2006: p. 881), the Logic Theory Machine was developed based on studies of people’s heuristics, working backwards from the theorem to be proved and using heuristics to perform valid inferences until reaching the axioms. The relevance of the Dartmouth event and the studies of Newell and Simon for technological development is clearer in Figure 1 below:
Figure 1. Artificial Intelligence Timeline (Adapted from The History of Artificial Intelligence, 2017).
The graph shows a clear exponential leap in AI development during the 1980s. According to Giancaglia (2021), there was an AI boom during this period due to the increase in expert systems and available funding. Expert systems are knowledge-based systems designed to separate factual statements from abductions, imitating decision-making processes (Szolovits, 1987: p. 48). At the same time, the commercialisation of these AI systems became increasingly common. According to Szolovits (1987: p. 43), the scale of commercialisation was impressive, despite being concentrated in a small number of projects that managed to cross the line from science to routine commercial application.
Despite significant investments, AI development projects did not achieve all their desired outcomes, leading to a reduction in funding (Anyoha, 2017). However, the 1990s and 2000s marked the realisation of key objectives. The 1990s saw advancements in machine learning and the popularisation of Internet use. Machine learning technology, initially developed in the 1950s, was particularly influenced by the work of psychologist Frank Rosenblatt (Fradkov, 2020: p. 1385).
According to Fradkov (2020: p. 1387), innovations in machine learning during the 1990s and 2000s can be attributed to three key developments: Big Data, which made technological advancement a practical necessity; the reduction in the costs of parallel computing and memory; and the development of new deep machine learning algorithms. As machine learning advanced, the term AI came to be popularly used as a synonym for machine learning technology by the general public (Dimiduk, Holm, & Niezgoda, 2018: p. 159). Dimiduk, Holm, and Niezgoda (2018: p. 159) explain that the original idea of developing machines that could behave like humans would be better represented by the term artificial general intelligence.
Figure 2 below shows the evolution of various AI systems in relation to human capabilities, highlighting the rapid development of image and voice recognition technologies:
Figure 2. Test scores of AI systems on various capabilities relative to human performance (Adapted from Dynabench: Rethinking Benchmarking in NLP, 2021).
With the rapid development of these technologies, AI has become integral to various business models. According to Vazques and Goodwin (2024), the implementation of AI solutions in businesses aims to optimise business functions, boost employee productivity, and drive business value. The goal is to promote greater efficiency in business operations. Mongan and Taylor (2023) highlight that AI amplifies human capabilities, particularly in designing corporate strategies.
In the legal field, AI is publicised as a powerful tool for automating manual processes and promoting efficient work (Bloomberg Law, 2024b). According to Bloomberg Law (2024b), the AI technology implemented in the legal market primarily consists of supervised machine learning tools. A report by the British Institute of International and Comparative Law indicates that lawyers are implementing AI in at least the following capacities: as a search and discovery tool, document automation, predictive legal analysis, legal review, case management, legal advice and expertise automation, and information and marketing tools (Pietropaoli, 2023: pp. 5-11).
AI is also being used in courts. According to Reiling (2020: p. 2), judges’ work can be summarised as processing information from various sources to draft decisions. Given the complexity of cases, it can be argued that a significant portion of routine cases has a predictable outcome, making AI a powerful tool for improving access to Justice. In this context, Reiling (2020: p. 4) points to a project by Tilburg University, Eindhoven University of Technology, and the Jheronimus Academy of Data Science to implement AI in traffic violation cases.
Finally, it is worth noting that the implementation of AI in the legal market is not without criticism. A common concern is the ethical implications of the algorithms used. This criticism arises from the way the technology functions, identifying patterns in the data it analyses. There is a general concern that AI could replicate existing societal prejudices, producing decisions that increase the vulnerability of certain social groups (Surden, 2020: p. 727). Also, there is a legitimate concern regarding the need to incorporate ethical responsibility into AI development, balancing economic commitments that prioritise the profit of certain sectors with the imperative to ensure that AI is developed for the common good of humanity (Terrones, 2018: p. 154). This raises valid concerns about whether the technology is ready to be applied in a way that respects fundamental rights, ensuring equal treatment and data security.
3. The Use of AI by Brazilian Law Practitioners
Brazil is not isolated and follows the global trend of using technologies to increase work efficiency. In this context, Juliano Maranhão (2024) explained that research on Law and AI is concentrated on large language models and machine learning methods. These solutions do not include legal reasoning, meaning that the technology lacks the capacity to replace human interpretation, the construction of legal concepts, and the proposal of new solutions for complex cases.
Another relevant point in analysing the use of AI in activities requiring technical expertise is the fact that large language models are being studied cautiously due to the phenomenon known as “hallucination”. As explained by Maleki, Padmanabhan, and Dutta (2024: p. 135), there is no standard definition for the term, which generally conveys the idea of inconsistent results. These inconsistencies can take various forms, from references to non-existent facts (such as creating precedents or bibliographies) to conceptual inaccuracies and errors.
Understanding how AI is being utilised within the Brazilian context is therefore crucial. In 2023, for example, a Brazilian lawyer used ChatGPT to draft a petition requesting participation in an electoral case as amicus curiae; as this intervention is not permitted under national law, the lawyer was fined for acting in bad faith. In another case, a federal judge used the same program to issue a ruling, but the ChatGPT-based decision was grounded in non-existent precedents from the Brazilian Supreme Court. The losing party’s lawyer noticed the fabricated citations and reported them to the Internal Affairs Division of the Brazilian Federal Justice of the 1st Region, and the case will also be reviewed by the National Justice Council.
Unfortunately, the incident in Brazil cannot be considered an isolated case. The inability to analyse and identify false information provided by technology is a global trend. The OECD (2024) published the results of the Truth Quest Survey, which aimed to assess people’s ability to identify false or misleading online content. The study was conducted across 21 countries (including Brazil), with Finland being the only nation to surpass 80% accuracy in identifying false or misleading information generated by artificial intelligence, as shown in Figure 3 below:
Figure 3. Truth Quest scores for AI- and human-generated disinformation (Adapted from The OECD Truth Quest Survey, 2024).
In 2024, Brazil’s National Council of Justice (CNJ) issued a report on the use of generative artificial intelligence by the Brazilian judiciary. Although the study focused on judges and civil servants in Brazilian courts, the research was not restricted to their professional activities (CNJ, 2024a: pp. 48-52). Among the participants, at least half reported having used AI in their lives, with approximately 30% utilising it professionally (CNJ, 2024a: pp. 52-58).
One striking finding in the research is the heavy reliance on free or open versions of AI technologies (61%). These tools have been employed for text refinement in legal documents, drafting suggestions for legal pleadings, and summarising videos (CNJ, 2024a: pp. 57-59). This raises potential concerns about information security, particularly in cases involving confidentiality or judicial secrecy. However, no conclusive findings can be made on this point at this time due to a lack of available data.
In another study, the CNJ (2024b) identified 140 AI projects developed or under development in Brazilian courts and justice councils. Not all of these projects have been implemented so far: at least eleven have been completed but not implemented, while sixty-three are ready for use by the Brazilian judiciary. The main reasons for launching these projects were to promote efficiency and agility, enhance precision and consistency in repetitive tasks, and foster innovation in internal processes (CNJ, 2024c). However, the CNJ does not provide data on the efficiency of these projects; there is not even a requirement for transparency in their disclosure.
The dissemination of news regarding the efficiency of these programs is often done by the courts themselves. In this regard, the Appellate Court of the State of Paraná recently announced that the program developed by that Court, JurisprudênciaGPT, was recognised in an international competition: it won second place at the 2024 Gartner Eye on Innovation Awards for Government in the Americas. According to the CNJ (2024d), this generative AI tool significantly optimises legal research by enabling judges and court staff to query a vast database of over 4.9 million court rulings. The tool provides precise responses supported by references, facilitating decision-making and enhancing judicial efficiency.
The adoption of new technologies, such as AI programs, by the Brazilian judiciary is not inherently negative. According to Almgren (2023: pp. 23-25), the implementation of such technologies assists the judiciary in fulfilling the constitutional right to a reasonable duration of legal proceedings. Brazil faces an excessive volume of litigation, and the judiciary has a longstanding issue with ensuring procedural efficiency. In this context, the development of Project Victor by the Brazilian Supreme Court and the University of Brasília stands out. Launched in 2018, the project addresses critical challenges such as the excessive volume of litigation and the need for faster document processing. Among its key features, the AI system was designed to convert images into text, classify and separate documents, and identify recurrent legal themes for faster resolution.
Project Victor is reported to have reduced task analysis time from forty-four minutes to just five seconds, contributing to the decrease in pending cases, as evidenced by the reduction from 7,409 Extraordinary Appeals in 2018 to 5,219 in 2019 (Prado & Andrade, 2022: pp. 72-73). Furthermore, the automated screening initially achieved an accuracy rate of 84%, with expectations of continuous improvement (Prado & Andrade, 2022: p. 68). However, the system still faces limitations due to its developmental and calibration stage, which prevents conclusive analyses of its full impact. Prospects include expanding Victor’s application to other courts and investing in technical training, crucial elements for consolidating digital transformation in the Brazilian Judiciary (Prado & Andrade, 2022: pp. 71-74).
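Project Victor's actual pipeline relies on trained machine-learning models and is not public. The hypothetical keyword triage below only sketches the shape of the task it performs, routing filings to recurrent legal themes; the theme names and keywords are invented for illustration.

```python
# Illustrative sketch only: Project Victor's real system uses trained ML
# models; this keyword triage merely shows the *shape* of the task
# (routing filings to recurrent legal themes). All labels are invented.

THEME_KEYWORDS = {
    "tax": ["tribute", "tax", "fiscal"],
    "social_security": ["pension", "benefit", "retirement"],
    "consumer": ["consumer", "supplier", "product defect"],
}

def triage(document_text: str) -> str:
    text = document_text.lower()
    # Pick the theme whose keywords appear most often; otherwise send the
    # document to a human for manual review.
    best, hits = "manual_review", 0
    for theme, keywords in THEME_KEYWORDS.items():
        n = sum(text.count(k) for k in keywords)
        if n > hits:
            best, hits = theme, n
    return best

print(triage("Appeal concerning retirement benefit calculation"))  # social_security
```

A learned classifier replaces the hand-written keyword lists with statistical patterns, which is what makes both the speed gains and the accuracy ceiling (84% initially, per Prado & Andrade) possible.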
According to Maranhão, Junquilho and Tasso (2023: p. 151), the main current issue with how AI is used in the judiciary relates to governance. The authors highlight transparency as a key factor in deploying this technology in a public service with high social impact. Despite a CNJ resolution on the topic, it is neither clear nor transparent how the Brazilian judiciary interacts with AI systems. The research’s purpose was to propose a governance solution based on transparency, analysing transparency in use, operation, and data management (Maranhão, Junquilho, & Tasso, 2023: pp. 156-157).
The use of AI in Brazil is not restricted to the judiciary. Within public advocacy, the Office of the Attorney General of Brazil announced plans to implement AI in the management and production of legal and administrative documents as of 2024. Notably, the suggested petition models would be based on the institution’s own database. Another example comes from João Pessoa, Paraíba, a city in Brazil where municipal attorney offices have invested in automated systems for managing judicial and administrative cases (including municipal debt recovery) to improve revenue collection efficiency (Oliveira, 2024: p. 8). However, there is no information on transparency in these cases.
AI practices are also widespread among Brazilian lawyers. According to the São Paulo Bar Association (OABSP, 2024), there are at least 32,000 AI tools available to address lawyers’ demands. In this context, the Brazilian Bar Association adopted guidelines in 2024 to regulate the use of generative AI in legal practice.
These guidelines emphasise information security, adherence to Brazil’s General Data Protection Law, and the need to handle client data with confidentiality and privacy. Additionally, the document reinforces that AI should not replace lawyers. Its use must align with ethical standards, requiring human judgement for data evaluation, as mandated by law, and AI cannot perform tasks exclusively reserved for lawyers.
The concerns addressed in the document are justified in Brazil’s national context. According to Junquilho (2023: p. 18), the technology has been implemented in Brazil without fully understanding its potential effects or ethical regulation. The ethical application of AI arises from the need to use it to benefit society and mitigate potential negative impacts, such as discriminatory outcomes (Junquilho, 2023: p. 35). Thus, understanding how AI is regulated and Brazil’s goals on this topic is essential.
4. Regulation of AI Usage in Law
It is evident from the preceding topics that AI employed in Brazil primarily focuses on data analysis and pattern identification to suggest outcomes. In addition to being used in various business models, AI is widely disseminated within the national legal market. Studies even point to AI being used to draft judicial decisions. In this context, regulating AI usage is essential to ensure computer models treat individuals fairly and legally, avoiding violations of fundamental rights and freedoms due to potential biases in the employed algorithms.
It is important to note that the issue lies not in the use of technology but in the lack of transparency and training of databases. For example, the introduction of “Race Blind Charging” guidelines in California in 2025 illustrates how technical adjustments can address biases (Weivoda, 2024). Under California Penal Code Section 741, prosecution agencies must use redacted reports and criminal histories to remove demographic information from charging decisions. However, discretion still allows limited application of these measures to specific cases. Another example is the use of AI systems to present judicial information without revealing the defendant’s physical characteristics, inspired by the format of “The Voice”, where decisions are based only on merit. Additionally, as noted by Ho et al. (2023: pp. 4-5), the adoption of AI in sentencing decisions in states like Virginia has helped reduce gender differences in sentencing. These examples show that structural adjustments and ongoing training are necessary steps to address the ethical challenges of discrimination and promote decisions based on merit.
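The redaction idea behind race-blind charging can be sketched as a simple text filter applied before a report reaches the charging decision. This is a hedged illustration, not the statutory specification: the term list and the `redact` helper are placeholders invented for the example.

```python
# Hedged sketch of the redaction idea behind "race blind charging":
# strip demographic cues from a report before it informs a charging
# decision. The term list is a toy placeholder, not the legal standard.
import re

DEMOGRAPHIC_TERMS = ["black", "white", "hispanic", "asian", "male", "female"]

def redact(report: str) -> str:
    pattern = re.compile("|".join(DEMOGRAPHIC_TERMS), flags=re.IGNORECASE)
    return pattern.sub("[REDACTED]", report)

print(redact("Suspect described as a Black male, approx. 30 years old."))
# Suspect described as a [REDACTED] [REDACTED], approx. 30 years old.
```

Production systems must go further (names, neighbourhoods, and officer details can all proxy for race), which is why the Californian guidelines pair redaction with human review and case-specific discretion.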
It is also advocated that the regulation of AI used in the Judiciary needs to guarantee the right to due process. Pasquale (2021: p. 42) highlighted several errors made by autonomous systems in the judiciary of Australia and Michigan, which caused harm to the population by denying benefits to which individuals were entitled or by charging non-existent debts. According to Pasquale (2021: p. 50), the processes are complex, and the simplification needed to allow AI to read and operate on them can lead to harm for the parties involved.
According to Maranhão, Florêncio and Almada (2021: pp. 161-162), regulating artificial intelligence is a challenging matter, as it may lead to over- or under-utilisation of the technology. Moreover, it is difficult to determine the level of generality required to balance the specificity needed for effective application with the abstraction necessary to support continuous technological development. Despite being a novel topic, there are existing regulations and initiatives from other countries that can serve as case studies for Brazil.
In this context, this part of the article will examine some of the regulations adopted or discussed in the European Union, the United States, and China. This geographical selection is based on legislative pioneering (European Union) and the role of major technology-exporting countries (United States and China). Equally important, these regions are classified as current digital empires (Bradford, 2023: p. 6).
According to Bradford (2023: p. 7), there is a marked difference among the three: the United States follows a market-driven regulatory approach, China a state-driven approach, and the European Union a rights-driven approach. The purpose of this analysis is to understand how these regions balance the protection of fundamental rights with entrepreneurial freedom, seeking to avoid relegating Brazil to the role of a mere technology importer as a result of its chosen legislative model. The Governor of the State of São Paulo, Tarcísio de Freitas, advocates for the adoption of a regulation that attracts investments and creates jobs (São Paulo State, 2024).
The EU
The European Union has positioned itself as a pioneer in the discussion and defence of AI regulation. Unsurprisingly, it adopted the first regulation on the subject: the EU AI Act. The regulation aims at establishing standards for the development, marketing, and use of AI within the European Union. This discussion is not new in the European context, as the EU AI Act is part of the broader packages announced by the EU regarding technology, including the Digital Services Act and the Digital Governance Act (European Parliament, 2023).
The EU AI Act introduces a risk-based framework for AI regulation, categorising AI systems by their risk levels and assigning corresponding regulatory requirements. Such Act addresses key ethical and legal concerns surrounding AI through several prohibitions (Ren & Du, 2024). It bans AI systems that covertly manipulate behaviour, exploit vulnerable groups, or enable social scoring based on behaviour or personal characteristics. It also imposes strict regulations on the use of real-time biometric identification by law enforcement in public spaces to safeguard privacy and civil liberties. These prohibitions stem from risks classified as unacceptable.
AI systems used in the administration of justice and democratic processes are classified as high-risk under Point 8 of Annex III, necessitating compliance with the Act’s provisions. This classification is qualified, however, by Article 6(3) of the EU AI Act, under which an Annex III system is not considered high-risk if it does not materially influence the outcome of decision-making. As a result, not all AI solutions in the legal field are automatically classified as high-risk, and only those that are must bear the financial burden of meeting the Act’s compliance obligations.
The U.S.
In contrast, the regulatory agenda of the United States differs clearly in its lack of prohibitions or obligations for AI developers. To date, the United States has no federal legislation on AI ethics. However, it adopted the non-binding Blueprint for an AI Bill of Rights in 2022 and the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence in 2023 (White House, 2023).
This US stance does not imply the absence of national efforts to establish legislation. Since 2022, the Algorithmic Accountability Act has been under consideration in Congress. While not as stringent as the European rules, it aims to establish minimum requirements and accountability through reporting and impact assessments.
In this context, US regulations remain general and do not specifically address technologies applicable to the legal market. Instead, they promote equity and civil rights by mandating clear AI guidance for landlords, federal programmes, and contractors while addressing algorithmic discrimination and enhancing fairness in criminal justice (European Parliament, 2024: p. 2). Thus, the US government adopts a monitoring and research-based approach to AI development, without proactively imposing prohibitions or sanctions to mitigate known risks.
China
China has regulated the use of algorithms by companies in online recommendation systems, which must operate in a moral, ethical, and accountable manner, with transparency and a commitment to promoting positive values and disseminating “positive energy”. A translated version of this regulation was made available by Stanford University. Similar to the European stance, China has also opted to impose sanctions on companies that violate the legislation (Creemers, Webster, & Toner, 2022). China adopts a mixed approach that combines European-style regulation for social harmony with competitive local markets to foster innovation (Chun, Wittm, & Elkins, 2024: p. 9). However, a comprehensive national AI law is not yet in force.
Brazil
Brazil has been moving towards a similar alternative, seeking to blend legislative experiences from Europe and the United States (Castro, 2024). Brazil does not yet have a general AI regulation, although discussions about a bill are underway. The Federal Senate recently approved the bill, and discussions are expected to continue in Brazil’s House of Representatives. According to the Senate’s news portal (2024), these discussions aim to balance the guarantee of fundamental rights with the freedom to research and innovate, thereby promoting Brazil’s economic development. Striking this balance is not simple.
According to Kubota and Rosa (2024: p. 20), there is a need to study AI’s impact across various sectors to avoid unnecessary and excessive restrictions on entrepreneurial freedom. In a study conducted by Fundação Getúlio Vargas (FGV) to analyse the use of technology by law firms in Brazil, it was revealed that, while seventy-seven per cent of firms used basic tools for organisation and information management, only twenty-six per cent reported using software for automated document generation (FGV, 2018: pp. 18-27). These figures are significant as they indicate that the Brazilian legal market has not yet adopted AI technologies to the same extent as the other countries studied.
According to Shi et al. (2021: pp. 2-3), there is a national policy in China aimed at modernising the legal sector through the implementation of smart courts. The digitalisation and use of technology in the judicial system was incorporated into China’s National Strategy for Informatization Development in 2016. Furthermore, the authors noted surprising results: performing routine activities with AI reduced the time needed to complete proceedings by up to half (Shi et al., 2021: p. 11).
According to Laptev and Feyzrakhmanova (2024: pp. 396-397), the United States demonstrates a high level of adoption and development of AI technologies in the legal sector compared to other countries. In this context, the authors explained that the National Artificial Intelligence Initiative Act of 2020 highlights the importance of maintaining the country’s leadership in AI research and development, as well as preparing the workforce to integrate these systems across various sectors. In the judicial sphere, several initiatives have been implemented, such as the use of the Public Safety Assessment (PSA), which assists judges in making decisions on preventive measures, and COMPAS, which assesses the risk of reoffending based on personal and social factors.
Regarding the European Union, Laptev and Feyzrakhmanova (2024: pp. 398-399) argued that the use of artificial intelligence in the judicial sector follows a more cautious approach compared to countries like the United States and China. The focus has been on creating a robust ethical and regulatory framework, such as the Ethical Charter on the Use of Artificial Intelligence in Judicial Systems (2018) and the Ethics Guidelines for Trustworthy AI (2019). Examples of implementation include France, which uses systems such as Case Law Analytics and Predictive to analyse precedents and legal risks, but prohibits fully automated decisions, ensuring that a human judge is ultimately responsible for the decisions. In the United Kingdom, technologies such as HART and PredPol are used to assess risks of reoffending and predict crime locations. However, there is scepticism regarding full automation, and final decisions are made by humans. Thus, while there are ongoing projects, the European Union prioritises adopting guidelines and principles to ensure that the use of AI is ethical, safe, and controlled.
Therefore, the research indicated that the degree of AI technology use differs across the countries analysed. While China and the United States apply AI widely in their judicial systems, actively promoted through national policy, the European Union has adopted a more cautious approach, with projects in early stages and no systems deeply integrated into the core of judicial processes. In this context, Brazil appears to be at an initial stage, where discussions are so far more theoretical than based on practical experience. The main purpose of the Brazilian AI Draft Bill currently under discussion in the House of Representatives seems to be the establishment of a risk-based framework, similar to the EU model. The proposal includes criteria to define excessive and unacceptable risks and prohibits the use of technologies so classified. As with the EU AI Act, technologies used in the judiciary would be classified as high-risk. However, the Brazilian proposal has yet to discuss the adoption of additional, more specific criteria to distinguish technologies that could be classified as supplementary or incapable of materially influencing decision-making outcomes (Almgren, 2023: p. 36).
According to Almgren (2023: p. 33), there is an ongoing tension between two main schools of thought. On one side, some argue that AI could be used in Brazil for unsupervised decision-making. On the other, there is a push to prohibit the use of algorithmic decision-making processes without human intervention, with the primary argument centred on the risk of discriminatory bias. It is important to highlight that this matter is not entirely overlooked by Brazilian law. The Brazilian General Data Protection Law (LGPD) establishes the right of data subjects to request a review of decisions made exclusively through automated processing (Article 20). This includes the right to an explanation of the criteria used in the decision-making process, aiming at mitigating potential discrimination. However, this provision is still subject to specific regulation by the National Data Protection Authority (ANPD), as required under its legal mandate.
For instance, a classic case of discrimination resulting from automated decision-making occurred in 2015, when a person uploaded photos to Google Photos and the service automatically categorised them into a folder named “gorillas”. National courts have thus been issuing decisions imposing a duty of information about the algorithms used by companies, in order to mitigate the possibility of AI being used to violate constitutional principles and national law, including the Civil Rights Framework for the Internet.
In this context, Maranhão, Vainzof, and Fico (2024) argue that the national market would benefit from the ANPD’s regulation of Article 20. According to the authors, the lack of criteria and regulatory requirements in Article 20 has resulted in a proliferation of judicial decisions discussing source code disclosure, algorithmic subordination, and decisions to remove individuals from registries, which could lead to legal uncertainty regarding the development and application of automated systems and AI in Brazil, primarily due to uncertainties concerning the protection of intellectual property rights.
The discussions on AI regulation in Brazil have some peculiarities, such as the fact that the Brazilian experience is closer to issues related to personal data protection, which is still in the process of consolidation in the country. In this context, there is a discussion about whether the ANPD should also have jurisdiction to oversee the implementation of AI regulations (Schmidt, 2023). Another national challenge is adopting a regulation that both protects fundamental rights and promotes technological development, with a national demand for the inclusion of rules on the promotion and development of research (Schmidt, 2023). Finally, national legislation is also under pressure from the artistic community to include rules on copyright, in order to prevent copyright-protected content from being used in the development of AI systems without proper permission (Brazil, 2024). The lack of legislation does not imply an absence of guidance from the Brazilian government. In 2021, the Ministry of Science, Technology, and Innovation adopted Ordinances 4617 and 4979, establishing the Brazilian Artificial Intelligence Strategy. While the policy aimed at guiding the Brazilian State’s actions in promoting AI-related research and innovation, it also sought to establish guidelines for ethical and conscious use.
Regarding AI use in the legal market, there are two regulatory efforts to fill the legal gap. Concerning lawyers’ activities, the Brazilian Bar Association recently issued federal recommendations with guidelines for using AI in legal practice. This document is significant as it ties AI use to the Brazilian Bar Association’s Code of Ethics, emphasising the need for ethical, confidential, honest, and good-faith conduct.
Lawyers must not use AI without human supervision, adhering to the Civil Procedure Code’s requirement to respect the truthfulness of information (especially when using AI for precedent identification). Additionally, lawyers must act transparently with clients, informing them about AI use in their work. In theory, lawyers are now subject to the sanctions of the Brazilian Bar Association’s Code of Ethics for misuse of AI, linking ethical technology use to professional conduct.
AI use by the judiciary has also been regulated by the CNJ. Since 2020, the CNJ has issued resolutions on the topic, creating research and data collection committees and establishing a national AI strategy for the judiciary. Resolution 363/2021 mandates that systems used must be transparent and justifiable, with periodic evaluations. However, there is no clear indication of what constitutes transparency or justification.
5. Conclusion
The article demonstrated the rapid pace at which AI technology has been developing globally in recent years. Big Data, combined with the need to make businesses more efficient, has been a significant driver of technological advancement. Among the various types of AI, machine learning has gained prominence in commercial use. The technology has recently progressed in mimicking human features far more quickly than in the past. In this context, many members of civil society have raised concerns about the dangers of unregulated AI.
AI does not function in an exclusively positive or beneficial way. Because it operates by storing data for pattern classification, unregulated or misused AI has the potential to exacerbate inequalities and promote discriminatory practices. The most extreme and noticeable cases so far involve gender bias or racial discrimination. The research also revealed that many people lack adequate digital or technological education and struggle to identify false or misleading information created by AI.
Nonetheless, AI is being implemented across a wide range of sectors. In Brazil, AI is heavily used in the legal market, with machine learning and large language models being the primary technologies employed in this field. The research also showed that the indiscriminate use of AI without proper human supervision has already caused “hallucination” cases within the national Judiciary. These include lawyers submitting petitions based on non-existent precedents and judges issuing erroneous decisions due to false information generated by AI. These examples confirm the potential for violations of due process. Consequently, some Brazilian scholars have advocated for regulating AI use in the judiciary to fulfil the duty of transparency.
In examining regulatory models adopted or under discussion worldwide, this study focused on the major digital empires that could serve as examples for Brazil. Brazil’s ambition to adopt legislation that guarantees fundamental rights should be aligned with promoting national technological development, attracting investment, and creating jobs, not only in this sector but across the economy, by making Brazilian industries, their companies, and the public administration as a whole more efficient and competitive.
In this context, the research highlighted three distinct legislative models. The EU has adopted a rights-based approach, implementing a risk matrix that includes the possibility of technology bans and the potential for fines against developers. The US has taken a more cautious stance, aiming at incorporating ethical principles into AI development while maintaining entrepreneurial freedom. As such, there is no binding general regulation in effect, but the government issued non-binding guidelines on AI and ethics. The US also does not have prohibitions or provisions for fines and sanctions. Finally, China does not have a general AI law in force but has introduced regulation on the use of algorithms by companies in online recommendation systems. Analysis of this regulation suggests that China aims at combining European and US-style approaches by adopting a state-driven model. While the country seeks to protect rights, it does not intend to overly restrict its domestic industry.
From the analysis of these regulations, the study concluded that Brazil: (i) appears to favour an approach similar to China’s, blending elements of European and American-style regulations; (ii) is conducting discussions on its proposed legislation based on the EU’s risk-based model; (iii) so far, lacks clarity on the risk levels applied to AI technologies in the Judiciary, as there is no distinction between AI assisting decision-making and AI used for routine tasks; and (iv) faces debate over whether to allow the use of algorithmic decision-making processes without human intervention.
Lastly, the research also showed that the Brazilian judiciary has been motivated to adopt and develop AI technologies institutionally. This process of AI development and implementation has been monitored by the CNJ. Many projects were developed in partnership with national universities. In the legal field, both the CNJ and the Brazilian Bar Association have sought to mitigate the risks posed by the legal gap by issuing regulations with limited application to the Judiciary and the general legal profession.
Overall, the article demonstrated that Brazil could benefit from regulating the use, marketing, and development of AI, particularly given the experience gained through the interpretation of data protection laws by national courts. Regulation should promote legal certainty and clarify issues such as intellectual property rights. With due care taken not to over-regulate this new technology, such regulation could create a much better environment for attracting technology investment to the country and give it a competitive advantage through greater efficiency in both the business sector and the public administration.