Opportunities to Improve the Quality of Environmental Reports and the Effectiveness of Environmental Impact Assessment: A Case of Electric Power Transmission Systems in Brazil

Abstract

A good quality Environmental Impact Statement (EIS) is key to the effectiveness of Environmental Impact Assessment (EIA) processes and, consequently, to the acceptability of projects subject to EIA. The international literature has contributed to the understanding of the essential aspects to be verified regarding the quality of EIS, offering a wide spectrum of good practice examples related to the content of the studies. Even so, there is a need for empirical studies that allow the identification of aspects specific to the context in which the EIS is applied, which could lead to the identification of opportunities to improve both the quality of the reports and the effectiveness of EIA. Therefore, the present paper is focused on the quality review of a number of EIS submitted to the Brazilian Federal Environmental Agency (Ibama) to instruct the assessment of electric power transmission systems. Based on the application of the EIS quality review package proposed by Lee and Colley (1992), the outcomes reveal opportunities for improving the scope of EIA, the analysis of alternatives, the prediction of magnitude and the assessment of impact significance. Finally, the development and/or adaptation of a similar tool for the systematic review of the quality of EIA reports is recommended.

Citation:

Demori, V. and Montaño, M. (2024) Opportunities to Improve the Quality of Environmental Reports and the Effectiveness of Environmental Impact Assessment: A Case of Electric Power Transmission Systems in Brazil. Journal of Environmental Protection, 15, 124-140. doi: 10.4236/jep.2024.152009.

1. Introduction

The quality of environmental studies is a key factor for an effective Environmental Impact Assessment (EIA) [1]. The low quality of information provided by studies negatively affects the ability to influence decision-making on projects and, therefore, may affect the EIA's ability to achieve substantive results [2], such as influencing projects towards a better integration of environmental issues (substantive effectiveness), contributing to a more time- and cost-efficient EIA process (transactive effectiveness), or internalizing international best practice procedures and approaches (procedural effectiveness).

The literature points out that there is high variability in the quality of information produced in EIA processes in different contexts, ranging from low quality [2] [3] to good results in systematic evaluations [4] [5]. Research conducted in Brazil has highlighted deficiencies in EIA [6] [7] [8] [9], though the systematic review of the quality of environmental studies in Brazil can still be considered incipient when compared globally [9] [10].

In this context, this paper sought to verify whether the EIA processes conducted by the Brazilian Federal Environmental Agency (Ibama) to instruct the assessment of electric power transmission systems have adequately informed decision-making, considering the quality of the environmental reports, and to identify systemic gaps whose correction could contribute to the effectiveness of the EIA system, considering international principles of best practice.

2. Methods

The research was carried out on the federal EIA system dedicated to the assessment of electrical energy transmission systems, coordinated by the Brazilian Institute of the Environment and Renewable Natural Resources (Ibama). The agency was created in 1989 and is currently linked to the Ministry of the Environment [11] .

Electrical Energy Transmission Systems (EETSs) are projects responsible for transporting electrical energy between generating units, such as hydroelectric and wind power plants, and distribution systems, which are responsible for delivering energy to consumers [12] . EETS can contain linear components (transmission lines and road/service accesses) and others (substations, telecommunications repeater stations, ground electrodes, construction sites for installation and support bases for operation), and therefore must be carefully assessed considering the likely significant impacts on flora, fauna and affected communities. According to Brazilian legislation, the environmental licensing of EETS projects must be guided by the prior assessment of the potential environmental impacts, and must be supported by the preparation of Environmental Impact Studies (EISs) or by Simplified Environmental Reports (SERs) in cases where the legislation allows the preparation of simplified studies [12] [13] [14] .

It is important to mention that the electricity sector has been expanding over the last decade [15], with growing demand for new projects and pressure from project proponents for greater efficiency in the EIA system and federal environmental licensing. Furthermore, due to the competences attributed to the federal sphere of EIA and licensing by Complementary Law 140/2011 [16], there is a tendency for the federal level to deal with projects of greater complexity for decision-making, as they can affect other countries, indigenous lands and federal conservation units, or involve more than one state, as well as politically strategic projects [17].

Three criteria were considered for the selection of environmental studies to be evaluated: 1) the licensing processes were conducted from the beginning under MMA Ordinance No. 421/2011 [12], to encompass the current regulatory panorama; 2) the processes had at least the Preliminary License (LP) issued by the date established for data collection (April 2018), in order to ensure that the environmental studies had supported decision-making; and 3) the studies were available in full for public access.

The documentation of thirty-eight processes was located, 21 guided by an Environmental Impact Study (EIS) and Environmental Impact Report (Rima) and 17 by a Simplified Environmental Report (SER), as shown in Table 1. According to Ordinance No. 421/2011 from the Ministry of the Environment, the difference between the two types of environmental study lies basically in the stage of defining the scope of the study and in the demand for primary data, i.e., data collected specifically for the study [18]. In the case of the SER, which supports decision-making in the simplified licensing procedure, there is no stage for defining the scope of the study, its content being previously defined in its own standard, and the diagnosis that supports the assessment of impacts can be prepared based on secondary data, if available. As for the EIS, which supports decision-making in the other cases, the definition of the scope of the study is based on a Term of Reference (TR) pre-established in a standard, but it is the responsibility of the environmental body to define a specific TR for the scope of the impact study, and there are restrictions on the use of secondary data in preparing the diagnosis of the area of direct influence.

Table 1. EETS processes analyzed.

The environmental studies were accessed through the Ibama repository. For cases of incomplete or missing studies in the repository, the agency's analysts were contacted to check the possibility of making them available. Even so, some studies remained incomplete, with missing annexes and appendices, and were discarded. Thus, of the 38 studies initially identified, 15 EIS (71% of the initial set) and 6 SER (35% of the set) were accessed in full, as shown in Table 1.

Environmental Reports’ Quality Analysis

The paper is based on the application of a protocol to assess the quality of environmental reports, as proposed by Veronez [19], which is further explained below.

Among the EIS quality analysis tools found in the literature, we chose to use the Lee and Colley Review Package—LCRP [20] , which stands out for being considered one of the most robust and widely used in different contexts (United Kingdom, Ireland, South Africa, Germany, Bangladesh, Nigeria, Zimbabwe) and different types of projects (see, for example, [10] ). In the present work, the LCRP was applied with the adaptations proposed by Veronez [19] .

The protocol that guides the application of the LCRP was initially developed to assess the quality and completeness of environmental studies by any interested party (such as authorities, technicians, non-governmental organizations and affected populations) in the context of the United Kingdom, and the authors indicate that little adaptation is needed for its use in other contexts [20]. The review strategy consists of identifying weaknesses, omissions and concealment of information in the study [20].

The list of review topics is organized in a 4-level hierarchical format: subcategories, categories, areas and an overall grade. There are 4 main areas: 1) Description of the project, the environment and baseline conditions; 2) Identification and evaluation of the main impacts; 3) Alternatives and mitigation; and 4) Communication of results. The categories are based on the EIS activities to be carried out within each area, and the subcategories detail their corresponding categories.

The originally proposed methodology consists of evaluating each of the subcategories, considering the context of the category in which they are inserted, in terms of the sufficiency of the information contained in the environmental study, according to the concepts and criteria presented in Table 2.
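
For reference, the grading scale commonly reported for the LCRP in the literature can be sketched as follows; the exact criteria adopted in Table 2 by Lee and Colley [20] and Veronez [19] may differ in wording, so the descriptions below are a paraphrase rather than a reproduction of the table.

```python
# Hedged sketch: paraphrase of the A-F grading scale commonly attributed to
# Lee and Colley (1992). The exact criteria of Table 2 are not reproduced here.
LCRP_GRADES = {
    "A": "Task well performed, no important omissions",
    "B": "Generally satisfactory and complete, only minor omissions and inadequacies",
    "C": "Just satisfactory despite some omissions and/or inadequacies",
    "D": "Parts well attempted, but the whole is just unsatisfactory because of omissions or inadequacies",
    "E": "Not satisfactory, with significant omissions or inadequacies",
    "F": "Very unsatisfactory, with important tasks poorly done or not attempted",
    "N/A": "Task not applicable in the context of the study",
}

SATISFACTORY = {"A", "B", "C"}    # grades treated as satisfactory in this paper
UNSATISFACTORY = {"D", "E", "F"}  # grades treated as unsatisfactory
```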

Table 2. Grades and grading criteria. Source: [19] [20].

Upon completing the judgment of all subcategories of a category, the evaluator must, based on their grades and on any other information deemed important, evaluate the category. The same procedure is carried out for the higher levels: once all the grades in the categories of an area have been obtained, the area is evaluated, and after all the grades in the areas have been obtained, the study as a whole is evaluated.

A simple average of grades from lower levels should not be used to judge higher levels, as each subcategory may have a different weight when determining the value of the category [20]. In this context, Veronez [19], based on consultation with experts, helped to reduce the subjectivity of the method by assigning relative weights to each lower level, to be considered in the judgment of the corresponding superior level (Figure 1).

The aforementioned author also proposes conversion factors, as presented in Table 2. Conversion 1 is used to transform a non-numeric grade (concept) into a numerical value for the purpose of the weighting calculations that determine the higher-level grade. Through Conversion 2, the resulting numeric grade is transformed back into a non-numeric grade.
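
As an illustration, a minimal sketch of the two conversions is given below, assuming hypothetical numeric values and cut-offs; the actual factors are those defined by Veronez [19] and are not reproduced in this text.

```python
# Hypothetical Conversion 1 values (concept -> number); the real factors
# are those proposed by Veronez [19].
CONVERSION_1 = {"A": 5.0, "B": 4.0, "C": 3.0, "D": 2.0, "E": 1.0, "F": 0.0}

def to_number(grade: str) -> float:
    """Conversion 1: transform a concept (letter grade) into a numeric value."""
    return CONVERSION_1[grade]

def to_grade(value: float) -> str:
    """Conversion 2: transform a weighted numeric result back into a concept,
    using hypothetical cut-off points."""
    for cutoff, grade in [(4.5, "A"), (3.5, "B"), (2.5, "C"), (1.5, "D"), (0.5, "E")]:
        if value >= cutoff:
            return grade
    return "F"

# Example: two subcategories graded B and D, with equal weights of 0.5 each.
print(to_grade(0.5 * to_number("B") + 0.5 * to_number("D")))  # -> "C"
```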

The protocol provides that the reviewer must be familiar with the EIA system and have at least basic knowledge of international EIA methodologies and good practices. Furthermore, it is recommended that the evaluation be carried out separately by at least two reviewers who, after comparing individual results, must decide together on non-matching grades, in order to reduce the possibility of bias in the review. In the present case, the review was carried out by a reviewer with 6 years of experience in federal environmental licensing of EETS and 240 hours of training in EIA, supervised by a professional with more than 20 years of experience in applying the tool.

The review of environmental studies included three strategies: 1) familiarization of the reviewer and supervisor with the tool, through a training stage based on the review of three EIS of different types together with two other reviewers, which was useful to properly understand the categories and subcategories of the protocol; 2) revision of the grades of studies already evaluated, owing to the learning and refinement of evaluation criteria resulting from the repeated application of the evaluation protocol, as initially highlighted by McGrath and Bond [21]; and 3) application of the criteria and weights proposed by Veronez [19] to determine the grades for the categories (composed of groups of subcategories) and areas (composed of groups of categories), as well as the overall grade for the environmental study.

Figure 1. Hierarchy between reviewing categories/subcategories and weights applied. Source: [19].

The review procedure consisted of the following steps:

· Full reading of the environmental study to understand the layout of essential information;

· Reading each category and its subcategories, and assessing each subcategory with a grade (concept) and a brief comment on the strengths and weaknesses that determined it (to this end, the study must be revisited as many times as necessary);

· Once all subcategories of the same category have been evaluated, the weighting indicates the grade for the category;

· Once all categories have been evaluated, the weighting indicates the area's grade;

· Once all areas have been evaluated, the weighting indicates the overall grade.
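
The bottom-up weighting described in the steps above can be sketched as follows. The hierarchy, weights and conversion values shown are illustrative placeholders rather than those of Figure 1 or Table 2, and subcategories judged not applicable are simply excluded from the calculation, with the remaining weights renormalized.

```python
from typing import Dict, Optional

# Illustrative concept <-> number conversion (hypothetical values).
TO_NUMBER = {"A": 5.0, "B": 4.0, "C": 3.0, "D": 2.0, "E": 1.0, "F": 0.0}
CUTOFFS = [(4.5, "A"), (3.5, "B"), (2.5, "C"), (1.5, "D"), (0.5, "E")]

def weighted_grade(grades: Dict[str, Optional[str]], weights: Dict[str, float]) -> str:
    """Aggregate lower-level grades into the grade of the next level up.

    `grades` maps an item id to its concept (or None if not applicable);
    `weights` maps an item id to its relative weight. Not-applicable items
    are excluded and the remaining weights are renormalized.
    """
    applicable = {k: g for k, g in grades.items() if g is not None}
    total = sum(weights[k] for k in applicable)
    score = sum(weights[k] * TO_NUMBER[g] for k, g in applicable.items()) / total
    for cutoff, grade in CUTOFFS:
        if score >= cutoff:
            return grade
    return "F"

# Example: one category with three subcategories, one of them not applicable
# (hypothetical ids, grades and weights).
subcategory_grades = {"x.1.1": "B", "x.1.2": "D", "x.1.3": None}
subcategory_weights = {"x.1.1": 0.4, "x.1.2": 0.4, "x.1.3": 0.2}
print(weighted_grade(subcategory_grades, subcategory_weights))  # -> "C"

# The same function is then applied to the category grades to obtain each
# area's grade, and to the area grades to obtain the overall grade.
```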

Subcategories 1.1.4 (nature of processes and production rates) and 1.2.5 (means of transport of raw materials and products) were considered not applicable to the context, as they are typically related to industrial activities. Subcategory 3.1.3 was also considered not applicable in all cases analyzed, since no identification of significant unexpected adverse impacts was reported during the studies; the evaluation of alternatives was therefore judged through Subcategories 3.1.1 and 3.1.2. Also, Category 4.4 and its subcategories were not evaluated for the SER, since the regulations do not require the presentation of a non-technical summary for this licensing procedure.

3. Outcomes

The outcomes are presented synthetically in Table 3 and in more detail in the Appendix. In general, the vast majority of studies (90% of cases) were considered satisfactory (they presented an overall grade of A-C), as also reported elsewhere for the energy sector: energy and fuels (100% of 11 cases) in South Africa [5] and wind energy (90% of 20 studies) in the United Kingdom and Germany [4].

However, it is worth highlighting that 100% of the cases are at the limit of acceptance, presenting an overall grade of C or D, which shows that there are omissions and inadequacies to be addressed; therefore, the detailed analysis of the grades at the various levels indicates aspects of the EIA system that could or should be improved.

Table 3. Quality review analysis. A - C: Satisfactory (highlighting indicates the prevalence of satisfactory grades); D - F: Unsatisfactory (highlighting indicates the prevalence of unsatisfactory grades); A - B: highlighting indicates strengths; C - D: highlighting indicates the border between satisfactory and unsatisfactory; E - F: highlighting indicates weaknesses.

Moving down from the overall grade to the areas, there are satisfactory results (A - C grades) for all four areas, with 100%, 76%, 81% and 100% of the studies, respectively, for areas 1 to 4 (Table 3). The lower quality of the areas that require greater analytical effort (2 and 3) in relation to the descriptive areas (1 and 4) is recurrent in the Brazilian context [7] [9] and in research in other countries [5] [21]. Furthermore, as in the overall assessment, despite satisfactory results for all areas, all of them are also at the limit of acceptance (C - D grades > 50%), which again reinforces that, despite the satisfactory quality, there are omissions and inadequacies to be addressed.

3.1. Opportunities to Improve the Quality of Environmental Studies

Given the outcomes of the EIS quality review, it is possible to identify the aspects that could be subject to further improvement by analyzing the weaknesses (E - F grades > 50%) and the performance at the limit of acceptance (C - D grades > 50%) across the different categories and subcategories.
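
A simple way to operationalize this screening, assuming a table of grades per review topic across the reviewed studies (the topic ids and grades below are hypothetical), is to compute the share of E-F and of C-D grades for each topic and flag those above 50%:

```python
from collections import Counter
from typing import Dict, List

def classify_topic(grades: List[str]) -> str:
    """Flag a review topic based on the grades it received across all studies.

    'weakness'            -> E-F grades in more than 50% of the studies;
    'limit of acceptance' -> C-D grades in more than 50% of the studies;
    'adequate'            -> otherwise. The > 50% thresholds follow the
    criterion stated in the paper.
    """
    counts = Counter(grades)
    n = len(grades)
    if (counts["E"] + counts["F"]) / n > 0.5:
        return "weakness"
    if (counts["C"] + counts["D"]) / n > 0.5:
        return "limit of acceptance"
    return "adequate"

# Hypothetical grades for two subcategories across five reviewed studies.
review_grades: Dict[str, List[str]] = {
    "2.4.2": ["E", "F", "E", "D", "E"],  # flagged as a weakness
    "2.3.1": ["C", "D", "C", "B", "D"],  # flagged as at the limit of acceptance
}
for topic, grades in review_grades.items():
    print(topic, "->", classify_topic(grades))
```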

3.2. Weaknesses

Subcategory 1.1.5 deals with the nature and quantities of raw materials for construction and operation. Although every EIS addressed the nature of the raw materials, mainly metals and concrete for the construction stage, none of them addressed the quantities, and very little information was presented about the logistics between suppliers and construction sites and from there to the service fronts. Such information might be relevant, for example, to understand the impact the installation of the project will have on the quality of roads and on the daily lives of the population in the affected region.

Category 1.3 evaluates waste in the broad sense of the activity. The following were considered most significant in this case: solid waste and effluents from construction sites and service fronts; drainage of beds; and audible noise, radio interference and electric and magnetic emissions arising from operation. Regarding the types, quantities and production rates (Subcategory 1.3.1), most of the studies (57%) did not present all the types mentioned above or did not present any of this information, and the vast majority (76%) did not describe the methods and uncertainties of the estimates (Subcategory 1.3.3). In general, the studies are limited, with varying sufficiency of information, to indicating that solid waste and effluents will be treated according to a management program, and few of them address audible noise, radio interference and electric and magnetic emissions arising from operation, limiting themselves to a simple estimate.

Category 2.4, which evaluates the prediction of impact magnitude, was negatively affected by the lack of description and justification of the methods used for prediction (Subcategory 2.4.2) and by the lack of definition and justification of the parameters used for evaluation (Subcategory 2.4.3): qualitative methods are widely used, even in situations in which it would be possible to use a quantitative method.

Category 2.5, which assesses the significance of impacts, was also considered a weakness of EIS quality, as was the case for the environmental reports assessed by Veronez and Montaño [9]. The main reason was the lack of justification for the parameters used to assess significance (Subcategory 2.5.3) in most cases. Furthermore, the other subcategories (2.5.1, which assesses whether the significance of all impacts and residual impacts was correctly identified, and 2.5.2, on the methods used), despite not being classified as weak points, are on the edge between acceptance and non-acceptance.

3.3. Scores at the Limit of Acceptance

Subcategory 1.2.3 regards the information on the time spent on the different project stages. For the evaluation, the decommissioning stage was not considered, given that the projects are planned to operate under concessions of at least 30 years, extendable for an equal period. The majority of D grades are justified by the lack of information about the duration of the operation of the activity.

Category 2.2 (Identification of impacts). Subcategory 2.2.2 was responsible for the borderline grade, since the majority of environmental studies failed to justify the choice of method for identifying impacts.

Category 2.3 (Scope). The studies fail to report on the involvement of affected groups during their preparation (Subcategory 2.3.1), which raises doubts as to whether these groups actually participated, and lack detail on significant impacts and the corresponding justifications (Subcategory 2.3.3): in general, the studies briefly characterize all impacts and do not examine the significant ones in greater depth, or do so in the same superficial way for all impacts.

Category 3.1 (Alternatives). The studies fail to discuss technological alternatives and, in some cases, locational ones, probably due to the restrictions imposed by the current life cycle model of this project typology, in which the details of the project are decided before its impact assessment is carried out [22].

Category 3.2 (Effectiveness of mitigating measures). Even though environmental control measures are widely known by the sector (Subcategory 3.2.2, considered a strong point), perhaps due to years of application of EIA, the lack of discussion of residual impacts (Subcategory 3.2.1) and the absence of a clear approach to the effectiveness of the measures (Subcategory 3.2.3) bring the grade of this category to the threshold of information sufficiency.

Category 4.3 (Emphasis). The studies place the same emphasis on all impacts (Subcategory 4.3.1).

Category 4.4 (Non-technical summary). Although these documents are generally written in adequate language and form (Subcategory 4.4.1, a strong point), their content is lacking: for the most part, they do not indicate the confidence that can be placed in the methods used, nor the residual impacts.

4. Conclusions

The quality review of environmental reports revealed satisfactory overall scores, but at the limit of acceptance. At the same time, the review package provides an opportunity to identify particular aspects related to the quality of the environmental reports, especially when similar scores cluster systematically. The activities/information with grades at the limit of acceptance and, mainly, those with a greater recurrence of low scores were then identified, thus providing evidence of the need for measures to address them.

It is recommended that efforts be made to improve the quality of environmental studies, through better treatment of the activities/information identified as weak points or borderline, mainly scope and emphasis, examination of alternatives, prediction of magnitude and assessment of the significance of impacts.

Furthermore, it is recommended that the actors develop and/or adapt methodological tools for the systematic monitoring of the quality of studies, such as the one used in the present work. For consultants and proponents, the tool could be useful to identify whether an environmental study is suitable for submission to Ibama. For the environmental agency, such a tool could be used in the checking stage of the study, identifying whether the available information is sufficient for decision-making and, therefore, whether the study is suitable for dissemination, thus bringing less subjectivity to the quality control of the process at this stage and offering greater predictability to other interested parties (mainly project proponents).

More comprehensively, the systematic evaluation of the quality of studies could be used by the environmental agency to identify weaknesses in the EIS, so that it can work with other actors on improvements, for example through methodological guides and guidance offered to those involved in the EIA process, and also to rank the quality of consultancies, which could result in positive competition to improve the studies.

Acknowledgements

The authors acknowledge the support from Ibama and the National Council for Scientific and Technological Development (CNPq) for funding the research (processes SEI/Ibama nº 02001.000291/2017-16 and CNPq 315609/2021-4), as well as the São Paulo Research Foundation (Fapesp) for grant #2021/14412-5.

Appendix—Detailed Quality Review

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Zhang, J., Kørnøv, L. and Christensen, P. (2013) Critical Factors for EIA Implementation: Literature Review and Research Options. Journal of Environmental Management, 114, 148-157.
https://doi.org/10.1016/j.jenvman.2012.10.030
[2] Sadler, B. (1996) International Study of the Effectiveness of Environmental Assessment: Final Report: Environmental Assessment in a Changing World: Evaluating Practice to Improve Performance. Canadian Environmental Assessment Agency and IAIA, Ottawa.
https://publications.gc.ca/site/eng/9.834209/publication.html
[3] Morgan, R.K. (2012) Environmental Impact Assessment: The State of the Art. Impact Assessment and Project Appraisal, 30, 5-14.
https://doi.org/10.1080/14615517.2012.661557
[4] Phylip-Jones, J. and Fischer, T.B. (2013) EIA for Wind Farms in the United Kingdom and Germany. Journal of Environmental Assessment Policy and Management, 15, 1340008-1-1340008-30.
https://doi.org/10.1142/S1464333213400085
[5] Sandham, L.A. and Pretorius, H.M. (2008) A Review of EIA Report Quality in the North West Province of South Africa. Environmental Impact Assessment Review, 28, 229-240.
https://doi.org/10.1016/j.eiar.2007.07.002
[6] de Almeida, A.N., et al. (2016) Principais deficiências dos Estudos de Impacto Ambiental. Revista Brasileira de Gestão Ambiental e Sustentabilidade, 3, 3-14.
https://doi.org/10.21438/rbgas.030401
[7] Aversa, I.C. (2018) Avaliação de impacto ambiental aplicada a projetos de geração de energia eólica: O caso do Estado do Ceará. Master’s Dissertation, University of São Paulo, São Carlos.
[8] Ministério Público Federal (2004) Deficiências em estudos de impacto ambiental: Síntese de uma experiência.
https://biblioteca.mpf.mp.br/repositorio/items/9fe8ce42-5f3f-4acc-b6e7-5a416006bc1e
[9] Veronez, F. and Montaño, M. (2017) Análise da qualidade dos estudos de impacto ambiental no estado do Espírito Santo (2007-2013). Desenvolvimento e Meio Ambiente, 43, 6-21.
https://doi.org/10.5380/dma.v43i0.54180
[10] Anifowose, B., Lawler, D.M., van der Horst, D. and Chapman, L. (2016) A Systematic Quality Assessment of Environmental Impact Statements in the Oil and Gas Industry. Science of the Total Environment, 572, 570-585.
https://doi.org/10.1016/j.scitotenv.2016.07.083
[11] Ministério do Meio Ambiente (2020) Organograma.
https://antigo.mma.gov.br/o-ministerio/organograma.html
[12] Ministério do Meio Ambiente (2011) Portaria 421, de 26 de Outubro de 2011.
https://www.ibama.gov.br/component/legislacao/?view=legislacao&legislacao=124563
[13] CONAMA (1986) Resolução Conama no 001, de 23 de Janeiro de 1986. Diário Oficial da União, Section I, 2548-2549.
https://www.ibama.gov.br/sophia/cnia/legislacao/MMA/RE0001-230186.PDF
[14] CONAMA (1997) Resolução Conama no 237, de 19 de Dezembro de 1997. Diário Oficial da União, Section I, 644-652.
https://conama.mma.gov.br/?option=com_sisconama&task=arquivo.download&id=237
[15] ONS (2016) Plano da operação energética 2016-2020.
https://www.ons.org.br/AcervoDigitalDocumentosEPublicacoes/PEN2016_SumarioExecutivo.pdf
[16] Presidência da República (2011) Lei Complementar no 140, de 8 de Dezembro de 2011. https://www.planalto.gov.br/ccivil_03/leis/lcp/lcp140.htm
[17] Ministério de Minas e Energia (2015) Plano Decenal de Expansão de Energia (Ten-Year Energy Expansion Plan).
https://www.epe.gov.br/sites-pt/publicacoes-dados-abertos/publicacoes/PublicacoesArquivos/publicacao-45/topico-79/Sum%C3%A1rio%20Executivo%20do%20PDE%202024.pdf
[18] Sánchez, L.E. (2013) Avaliação de Impacto Ambiental: Conceitos e métodos. 2nd Edition, Oficina de Textos, São Paulo.
[19] Veronez, F.A. (2018) Efetividade da avaliação de impacto ambiental de projetos no Estado do Espírito Santo. Ph.D. Thesis, University of São Paulo, São Carlos.
[20] Lee, N. and Colley, R. (1992) Reviewing the Quality of Environmental Statements. EIA Centre, Manchester, Occasional Paper No. 24.
[21] McGrath, C. and Bond, A. (1997) The Quality of Environmental Impact Statements: A Review of Those Submitted in Cork, Eire from 1988-1993. Project Appraisal, 12, 43-52.
https://doi.org/10.1080/02688867.1997.9727037
[22] Demori, V.A., Almeida, M.R.R.E. and Montaño, M. (2018) Alterações no Licenciamento Ambiental Federal de Sistemas de Transmissão de Energia Elétrica. 4o Congresso Brasileiro de Avaliação de Impacto, Associação Brasileira de Avaliação de Impacto, Fortaleza, 492-498.
https://avaliacaodeimpacto.org.br/anais-do-congresso-brasileiro-de-avaliacao-de-impacto-cbai18/
