Empirical Analysis of Object-Oriented Design Metrics for Predicting Unit Testing Effort of Classes

Abstract

In this paper, we investigate empirically the relationship between object-oriented design metrics and the testability of classes. We address testability from the point of view of unit testing effort. We collected data from three open source Java software systems for which JUnit test cases exist. To capture the testing effort of classes, we used metrics to quantify the corresponding JUnit test cases. Classes were classified, according to the required unit testing effort, into two categories: high and low. In order to evaluate the relationship between object-oriented design metrics and the unit testing effort of classes, we used logistic regression methods. We used univariate logistic regression analysis to evaluate the individual effect of each metric on the unit testing effort of classes. Multivariate logistic regression analysis was used to explore the combined effect of the metrics. The performance of the prediction models was evaluated using Receiver Operating Characteristic analysis. The results indicate that: 1) complexity, size, cohesion and (to some extent) coupling were found to be significant predictors of the unit testing effort of classes and 2) multivariate regression models based on object-oriented design metrics are able to accurately predict the unit testing effort of classes.


1. Introduction

Software testability is an important software quality attribute. IEEE [1] defines testability as the degree to which a system or component facilitates the establishment of test criteria and the performance of tests to determine whether those criteria have been met. ISO [2] defines testability (a sub-characteristic of maintainability) as the attributes of software that bear on the effort needed to validate the software product.

Software testability is, in fact, a complex notion. Indeed, software testability is not an intrinsic property of a software artifact and cannot be measured as simply as size, complexity or coupling. According to Baudry et al. [3,4], software testability is influenced by many factors, including controllability, observability and the global test cost. Yeh et al. [5] also argue that diverse factors such as control flow, data flow, complexity and size contribute to testability. Zhao [6] states that testability is an elusive concept, and that it is difficult to get a clear view of all the potential factors that can affect it. Dealing with software testability raises several questions, such as [7,8]: Why is one class easier to test than another? What makes a class hard to test? What contributes to the testability of a class? How can we quantify this notion? In addition, according to Baudry et al., testability becomes crucial in the case of object-oriented (OO) software systems, where control flows are generally not hierarchical, but diffuse and distributed over the whole architecture [3,4].

Software metrics can be useful in assessing software quality attributes and supporting various software engineering activities [9-12]. In particular, metrics can be used to assess (predict) software testability and to better manage the testing effort. Quantitative data on the testability of a software system can, in fact, guide the decision-making of software development managers seeking to produce high-quality software. In particular, it can help software managers, developers and testers to [7,8]: plan and monitor testing activities, determine the critical parts of the code on which they have to focus to ensure software quality, and in some cases use these data to review the code. One effective way to deal with this important issue is to develop prediction models, based on metrics, that can be used to identify critical parts of software requiring a (relatively) high testing effort. There is a real need in this area.

A large number of OO metrics have been proposed in the literature [13]. Some of these metrics, related to different OO software attributes (such as size, complexity, coupling, cohesion and inheritance), have already been used in recent years to assess (predict) the testability of OO software systems [7,8,14-20]. Software testability has been addressed from different points of view. According to Gupta et al. [8], no OO metric is alone sufficient to give an overall reflection of software testability. Software testability is, indeed, affected by many different factors, as pointed out by several researchers [3-6,21,22]. Moreover, even if there is a common belief (and empirical evidence) that several of these metrics (attributes) have an impact on the testability of classes, few empirical studies have been conducted to examine their combined effect (impact), particularly when taking into account different levels of testing effort. As far as we know, this issue has not been empirically investigated.

The aim of this paper is to investigate empirically the relationship between OO design metrics, specifically the Chidamber and Kemerer (CK) metrics suite [23,24], and the testability of classes, taking into account different levels of testing effort. We also include the well-known (size related) LOC (Lines of Code) metric as a “baseline”. The question we attempt to answer is how accurately the OO metrics (separately and when used together) predict (high) testing effort. We addressed the testability of classes from the perspective of unit testing effort. We performed an empirical analysis using data collected from three open source Java software systems for which JUnit test cases exist. To capture the testing effort of classes, we used the suite of test case metrics introduced by Bruntink et al. [7,17] to quantify the corresponding JUnit test cases. These metrics were used, in fact, to classify the classes (in terms of required testing effort) into two categories: high and low.

In order to evaluate the relationship between OO design metrics and the unit testing effort of classes, we used logistic regression methods. We used the univariate logistic regression method to evaluate the individual effect of each metric on the unit testing effort of classes. The multivariate logistic regression method was used to investigate the combined effect of the metrics. The performance of the prediction models was evaluated using Receiver Operating Characteristic (ROC) analysis. In summary, the results indicate that complexity, size, cohesion and (to some extent) coupling were found to be significant predictors of the unit testing effort of classes. Moreover, the results show that multivariate regression models based on OO metrics are able to accurately predict the unit testing effort of classes. In addition, we explored the applicability of the prediction models by examining to what extent a prediction model built using data from one system can be used to predict the testing effort of classes of another system.
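To make this analysis pipeline concrete, the following sketch, in plain Java, fits a univariate logistic regression of a single OO metric (WMC is used as the example) against a binary high/low testing effort label by gradient ascent, and evaluates it with the area under the ROC curve. The toy data, names and optimization details are our own illustrative assumptions; they do not reproduce the paper's actual analysis or tooling.

```java
import java.util.*;

public class UnivariateLogistic {

    static double sigmoid(double z) { return 1.0 / (1.0 + Math.exp(-z)); }

    // Fits p(high | x) = sigmoid(b0 + b1 * x) by gradient ascent on the
    // (averaged) log-likelihood. Returns {b0, b1}.
    static double[] fit(double[] x, int[] y, int epochs, double lr) {
        double b0 = 0.0, b1 = 0.0;
        for (int e = 0; e < epochs; e++) {
            double g0 = 0.0, g1 = 0.0;
            for (int i = 0; i < x.length; i++) {
                double err = y[i] - sigmoid(b0 + b1 * x[i]);
                g0 += err;
                g1 += err * x[i];
            }
            b0 += lr * g0 / x.length;
            b1 += lr * g1 / x.length;
        }
        return new double[] { b0, b1 };
    }

    // Area under the ROC curve, computed as the Mann-Whitney statistic: the
    // probability that a randomly chosen "high" class is ranked above a
    // randomly chosen "low" class by the score.
    static double auc(double[] score, int[] y) {
        double pairs = 0.0, wins = 0.0;
        for (int i = 0; i < score.length; i++)
            for (int j = 0; j < score.length; j++)
                if (y[i] == 1 && y[j] == 0) {
                    pairs++;
                    if (score[i] > score[j]) wins += 1.0;
                    else if (score[i] == score[j]) wins += 0.5;
                }
        return wins / pairs;
    }

    public static void main(String[] args) {
        // Toy data: one OO metric value (e.g., WMC) per class, and a binary
        // label: 1 = high unit testing effort, 0 = low.
        double[] wmc = { 3, 5, 8, 12, 20, 25, 30, 40 };
        int[] high  = { 0, 0, 0, 1,  0,  1,  1,  1 };

        // Standardize the metric (common practice; eases the optimization).
        double mean = Arrays.stream(wmc).average().orElse(0.0);
        double sd = Math.sqrt(Arrays.stream(wmc).map(v -> (v - mean) * (v - mean))
                                    .average().orElse(1.0));
        double[] z = Arrays.stream(wmc).map(v -> (v - mean) / sd).toArray();

        double[] b = fit(z, high, 5000, 0.5);
        double[] p = new double[z.length];
        for (int i = 0; i < z.length; i++) p[i] = sigmoid(b[0] + b[1] * z[i]);

        System.out.printf("b0 = %.3f, b1 = %.3f, AUC = %.3f%n", b[0], b[1], auc(p, high));
    }
}
```

A multivariate model follows the same pattern with one coefficient per metric; the AUC is then computed on the combined predicted probabilities rather than on a single metric.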

The rest of this paper is organized as follows: A brief summary of related work on software testability is given in Section 2. Section 3 introduces the OO design metrics investigated in the study. Section 4 presents the selected systems, describes the data collection, introduces the test case metrics we used to quantify the JUnit test cases and presents the empirical study we performed to investigate the relationship between OO design metrics and unit testing effort of classes. Finally, Section 5 concludes the paper and outlines directions for future work.

2. Software Testability

Software testability has been addressed in the literature from different points of view. Fenton et al. [10] define software testability as an external attribute. Freedman introduces testability measures for software components based on two factors: observability and controllability [25]. Voas defines testability as the probability that a test case will fail if a program has a fault [26]. Voas and Miller [27] propose a testability metric based on the input and output domains of a software component, and the PIE (Propagation, Infection and Execution) technique to analyze software testability [28]. Binder [29] defines testability as the relative ease and expense of revealing software faults. He argues that software testability is based on six factors: representation, implementation, built-in test, test suite, test support environment and software process capability. Khoshgoftaar et al. investigate the relationship between static software product measures and testability [30,31]. Software testability is considered as a probability predicting whether tests will detect a fault. McGregor et al. [32] investigate the testability of OO software systems and introduce the visibility component measure (VC). Bertolino et al. [33] investigate testability and its use in dependability assessment. They adopt a definition of testability as a conditional probability, different from the one proposed by Voas et al. [26], and derive the probability of program correctness using a Bayesian inference procedure. Le Traon et al. [34-36] propose testability measures for data flow designs. Petrenko et al. [37] and Karoui et al. [38] address testability in the context of communication software. Sheppard et al. [22] focus on the formal foundations of testability metrics. Jungmayr [39] investigates testability measurement based on static dependencies within OO systems, taking an integration testing point of view.

Gao et al. [40,41] consider testability from the perspective of component-based software construction. Their definition of component testability is based on five factors: understandability, observability, controllability, traceability and testing support capability. According to Gao et al. [41], software testability is related to testing effort reduction and software quality. Nguyen et al. [42] focus on testability analysis based on data flow designs in the context of embedded software. Baudry et al. [3,4,21] address the testability measurement (and improvement) of OO designs. They focus on design patterns as coherent subsets of the architecture. Chowdhary [43] focuses on why it is so difficult to practice testability in the real world. He discusses the impact of testability on design and lays down guidelines to ensure that testability is considered during software development. Khan et al. [44] focus on the testability of classes at the design level. They developed a model to predict the testability of classes from UML class diagrams. Kout et al. [45] adapted this model to the code level (Java programs) and evaluated it on two case studies. Briand et al. [46] propose an approach in which instrumented contracts are used to increase testability. A case study showed that contract assertions detect a large percentage of failures, depending on the level of precision of the contract definitions.

Bruntink et al. [7,17] investigate factors of the testability of OO software systems. Testability is investigated from the perspective of unit testing. Gupta et al. [8] use fuzzy techniques to combine some OO metrics values into a single overall value called the testability index. The proposed approach has been evaluated on simple examples of Java classes. Singh et al. [18] used OO metrics and neural networks to predict testing effort, measured in terms of lines of code added or changed during the life cycle of a defect. In [19], Singh et al. attempt to predict the testability of Eclipse at the package level. Badri et al. [14] performed a study similar to that conducted by Bruntink et al. [7] using two open source Java software systems, in order to explore the relationship between lack of cohesion metrics and the testability of classes. In [15], Badri et al. investigated the capability of lack of cohesion metrics to predict testability using logistic regression methods. More recently, Badri et al. [16,47] investigated the effect of control flow on the unit testing effort of classes.

3. Object-Oriented Design Metrics

We present, in this section, a summary of the OO design metrics we selected for the empirical study. These metrics have been selected because they have received considerable attention from researchers and are also being increasingly adopted by practitioners. Furthermore, these metrics have been incorporated into several development tools. We selected seven metrics in total. Six of these metrics (CBO, LCOM, DIT, NOC, WMC and RFC) were proposed by Chidamber and Kemerer in [23,24]. We also include in our study the well-known LOC metric. We give in what follows a brief definition of each metric.

Coupling between Objects: The CBO metric counts for a class the number of other classes to which it is coupled (and vice versa).

Lack of Cohesion in Methods: The LCOM metric measures the dissimilarity of methods in a class. It is defined as LCOM = |P| − |Q| if |P| > |Q|, and 0 otherwise, where P is the set of pairs of methods that do not share a common attribute and Q is the set of pairs of methods sharing a common attribute.
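As an illustration of this counting scheme, the following sketch computes LCOM from the sets of attributes each method references. The class and helper names are ours, introduced only for this example.

```java
import java.util.*;

public class LcomCalculator {

    // LCOM = |P| - |Q| if |P| > |Q|, and 0 otherwise, where P counts method
    // pairs sharing no attribute and Q counts pairs sharing at least one.
    public static int lcom(List<Set<String>> attributesUsedPerMethod) {
        int p = 0, q = 0;
        for (int i = 0; i < attributesUsedPerMethod.size(); i++) {
            for (int j = i + 1; j < attributesUsedPerMethod.size(); j++) {
                if (Collections.disjoint(attributesUsedPerMethod.get(i),
                                         attributesUsedPerMethod.get(j))) {
                    p++;  // pair shares no attribute
                } else {
                    q++;  // pair shares at least one attribute
                }
            }
        }
        return Math.max(p - q, 0);
    }

    public static void main(String[] args) {
        // Hypothetical class: m1 and m2 share attribute "a", m3 uses only "b".
        List<Set<String>> usage = List.of(
                Set.of("a"),       // m1
                Set.of("a", "c"),  // m2
                Set.of("b"));      // m3
        System.out.println(lcom(usage));  // P = 2, Q = 1, so LCOM = 1
    }
}
```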

Depth of Inheritance Tree: The DIT metric of a class is given by the length of the (longest) inheritance path from the root of the inheritance hierarchy to the class on which it is measured (number of ancestor classes).

Number of Children: The NOC metric measures the number of immediate subclasses of the class in a hierarchy.

Weighted Methods per Class: The WMC metric gives the sum of complexities of the methods of a given class, where each method is weighted by its cyclomatic complexity. Only methods specified in the class are considered.

Response for Class: The RFC metric for a class is the size of its response set, i.e., the set of methods that can potentially be executed in response to a message received by an object of the class.

Lines of Code per class: The LOC metric counts for a class its number of lines of code.

4. Empirical Analysis

This study aims at investigating empirically the relationship between OO design metrics and the testability of classes in terms of required unit testing effort. We considered, in each of the used systems, only the classes for which JUnit test cases exist. We noticed that developers usually name the JUnit test case classes by adding the prefix (or suffix) “Test” or “TestCase” to the name of the classes (and in a few cases interfaces) for which JUnit test cases were developed. Only classes whose names can be matched to a test case class name in this way are included in the analysis. This approach has already been adopted in other studies [48].
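As a simple illustration of this name matching, the sketch below checks whether a candidate JUnit class name follows the prefix/suffix convention for a given production class. The class and method names are ours, not part of the study's tooling.

```java
public class TestCaseMatcher {

    // True if testClassName follows the "Test"/"TestCase" prefix or suffix
    // naming convention for the production class className.
    public static boolean isTestCaseFor(String className, String testClassName) {
        return testClassName.equals("Test" + className)
                || testClassName.equals(className + "Test")
                || testClassName.equals(className + "TestCase");
    }

    public static void main(String[] args) {
        System.out.println(isTestCaseFor("Project", "ProjectTest"));   // true
        System.out.println(isTestCaseFor("Project", "TestProject"));   // true
        System.out.println(isTestCaseFor("Project", "ProjectHelper")); // false
    }
}
```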

JUnit is a simple framework for writing and running automated unit tests for Java classes. A typical usage of JUnit is to test each class Cs of the program by means of a dedicated test case class Ct. However, by analyzing the JUnit test case classes of the subject systems, we noticed that in some cases there is no one-to-one relationship between JUnit classes and tested classes. This has also been noted in other previous studies [49,50]. In these cases, several JUnit test cases correspond, in fact, to the same tested class.
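For readers unfamiliar with this convention, a dedicated test case class Ct might look as follows. This minimal example uses JUnit 4 annotations and an arbitrary tested class (java.util.Stack stands in for Cs); it is illustrative only and is not taken from the subject systems.

```java
import static org.junit.Assert.*;

import java.util.Stack;
import org.junit.Test;

// Dedicated test case class Ct for a hypothetical tested class Cs (here, Stack).
public class StackTest {

    @Test
    public void pushThenPopReturnsLastElement() {
        Stack<Integer> stack = new Stack<>();
        stack.push(42);
        assertEquals(42, (int) stack.pop());
    }

    @Test
    public void newStackIsEmpty() {
        assertTrue(new Stack<Integer>().isEmpty());
    }
}
```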

For each selected software class Cs, we calculated the values of the OO metrics. We also used the suite of test case metrics (Section 4.3) to quantify the corresponding JUnit test case(s) Ct. The OO metrics and the test case metrics were computed using the Borland Together tool. The selected classes of the subject systems were then categorized according to the required testing effort. We used the test case metrics to quantify the JUnit test cases and identify the classes which required a (relatively) high testing effort. In order to simplify the categorization of testing effort, we used only two categories: classes which required a high testing effort and classes which required a (relatively) low testing effort.
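The criterion separating the two categories is based on the test case metrics described in Section 4.3. As a hedged illustration only, the sketch below labels a class as requiring high testing effort when the size of its JUnit test code exceeds the median over the selected classes; the median split and the names used here are our assumptions, not necessarily the rule applied in the study.

```java
import java.util.*;

public class EffortCategorizer {

    // Labels each class as high (true) or low (false) testing effort by comparing
    // the size of its JUnit test code to the median over all selected classes.
    // NOTE: the median split is an illustrative assumption, not the paper's rule.
    public static Map<String, Boolean> categorize(Map<String, Integer> testLocByClass) {
        List<Integer> sizes = new ArrayList<>(testLocByClass.values());
        Collections.sort(sizes);
        int n = sizes.size();
        double median = (n % 2 == 1)
                ? sizes.get(n / 2)
                : (sizes.get(n / 2 - 1) + sizes.get(n / 2)) / 2.0;

        Map<String, Boolean> labels = new HashMap<>();
        for (Map.Entry<String, Integer> e : testLocByClass.entrySet()) {
            labels.put(e.getKey(), e.getValue() > median);  // true = high effort
        }
        return labels;
    }
}
```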

4.1. Selected Systems

Three open source Java software systems from different domains were selected for the study: ANT, JFREECHART (JFC) and POI. Table 1 summarizes some of their characteristics. It gives, for each system, the total number of software classes, the total number of attributes, the total number of methods, the total number of lines of code, the number of selected software classes (those for which JUnit test cases were developed), and the total number of lines of code of the selected software classes. ANT (www.apache.org) is a Java library and command-line tool whose mission is to drive processes described in build files as targets and extension points dependent upon each other. This system consists of 713 classes that comprise 2491 attributes and 5365 methods, with a total of roughly 64,000 lines of code. JFC (http://www.jfree.org/jfreechart) is a free chart library for the Java platform. This system consists of 496 classes that comprise 1550 attributes and 5763 methods, with a total of roughly 68,000 lines of code. The Apache POI (http://poi.apache.org/) project’s mission is to create and maintain Java APIs for manipulating various file formats based upon the Office Open XML standards (OOXML) and Microsoft’s OLE 2 Compound Document format (OLE2). This system consists of 1540 classes that comprise 4463 attributes and 14,084 methods, with a total of roughly 136,000 lines of code. Moreover, we can also observe from Table 1 that, for each system, JUnit test cases were not developed for all classes. The number of selected software classes for which JUnit test cases were developed varies from one system to another. In total, our experiments are performed on 688 classes and the corresponding JUnit test cases.

4.2. Descriptive Statistics

Table 2 shows the descriptive statistics for all the OO metrics considered in the study. For illustration, we give only the descriptive statistics corresponding to the ANT system, in two tables (labeled I and II). The table labeled (I) gives the descriptive statistics for the OO metrics over all classes of the system. The table labeled (II) gives the descriptive statistics for the OO metrics over only the selected classes, for which JUnit test cases were developed. Moreover, the LCOM metric is not computed for classes having no attributes. This is why the number of observations (Table 2: ANT(I) and ANT(II)) corresponding to the LCOM metric is lower than that of the other metrics.

Conflicts of Interest

The authors declare no conflicts of interest.

References

[1] IEEE, “IEEE Standard Glossary of Software Engineering Terminology,” IEEE CSP, New York, 1990.
[2] ISO/IEC 9126, “Software Engineering Product Quality,” 1991.
[3] B. Baudry, Y. Le Traon and G. Sunyé, “Testability Analysis of a UML Class Diagram,” 9th International Software Metrics Symposium, Sydney, 3-5 September 2003.
[4] B. Baudry, Y. Le Traon, G. Sunyé and J. M. Jézéquel, “Measuring and Improving Design Patterns Testability,” 9th International Software Metrics Symposium, Sydney, 3-5 September 2003.
[5] P. L. Yeh and J. C. Lin, “Software Testability Measurement Derived From Data Flow Analysis,” 2nd Euromicro Conference on Software Maintenance and Reengineering, Florence, 8-11 March 1998.
[6] L. Zhao, “A New Approach for Software Testability Analysis,” 28th International Conference on Software Engineering, Shanghai, 20-28 May 2006.
[7] M. Bruntink and A. V. Deursen, “Predicting Class Testability Using Object-Oriented Metrics,” 4th International Workshop on Source Code Analysis and Manipulation, Chicago, 15-16 September 2004. doi:10.1109/SCAM.2004.16
[8] V. Gupta, K. K. Aggarwal and Y. Singh, “A Fuzzy Approach for Integrated Measure of Object-Oriented Software Testability,” Journal of Computer Science, Vol. 1, No. 2, 2005, pp. 276-282. doi:10.3844/jcssp.2005.276.282
[9] V. R. Basili, L. C. Briand and W. Melo, “A Validation of Object-Oriented Design Metrics as Quality Indicators,” IEEE Transactions on Software Engineering, Vol. 22, No. 10, 1996, pp. 751-761. doi:10.1109/32.544352
[10] N. Fenton and S. L. Pfleeger, “Software Metrics: A Rigorous and Practical Approach,” PWS Publishing Company, Boston, 1997.
[11] R. S. Pressman, “Software Engineering, A Practitioner’s Approach,” 6th Edition, McGraw Hill, New York, 2005.
[12] I. Sommerville, “Software Engineering,” 9th Edition, Addison Wesley, New York, 2011.
[13] B. Henderson-Sellers, “Object-Oriented Metrics Measures of Complexity,” Prentice-Hall, Upper Saddle River, 1996.
[14] L. Badri, M. Badri and F. Toure, “Exploring Empirically the Relationship between Lack of Cohesion and Testability in Object-Oriented Systems,” In: T.-H. Kim, et al., Eds., Advances in Software Engineering, Communications in Computer and Information Science, Vol. 117, Springer, Berlin, 2010.
[15] L. Badri, M. Badri and F. Toure, “An Empirical Analysis of Lack of Cohesion Metrics for Predicting Testability of Classes,” International Journal of Software Engineering and Its Applications, Vol. 5, No. 2, 2011, pp. 69-85.
[16] M. Badri and F. Toure, “Empirical Analysis for Investigating the Effect of Control Flow Dependencies on Testability of Classes,” 23rd International Conference on Software Engineering and Knowledge Engineering, Miami Beach, 7-9 July 2011.
[17] M. Bruntink and A. V. Deursen, “An Empirical Study into Class Testability,” Journal of Systems and Software, Vol. 79, No. 9, 2006, pp. 1219-1232. doi:10.1016/j.jss.2006.02.036
[18] Y. Singh, A. Kaur and R. Malhotra, “Predicting Testability Effort Using Artificial Neural Network,” Proceedings of the World Congress on Engineering and Computer Science, San Francisco, 22-24 October 2008.
[19] Y. Singh and A. Saha, “Predicting Testability of Eclipse: A Case Study,” Journal of Software Engineering, Vol. 4, No. 2, 2010, pp. 122-136. doi:10.3923/jse.2010.122.136
[20] Y. Singh, A. Kaur and R. Malhotra, “Empirical Validation of Object-Oriented Metrics for Predicting Fault Proneness Models,” Software Quality Journal, Vol. 18, No. 1, 2010, pp. 3-35. doi:10.1007/s11219-009-9079-6
[21] B. Baudry, Y. Le Traon and G. Sunye, “Improving the Testability of UML Class Diagrams,” Proceedings of International Workshop on Testability Analysis, Rennes, 2 November 2004.
[22] J. W. Sheppard and M. Kaufman, “Formal Specification of Testability Metrics,” IEEE AUTOTESTCON, Philadelphia, 20-23 August 2001.
[23] S. R. Chidamber and C. F. Kemerer, “A Metrics Suite for Object Oriented Design,” IEEE Transactions on Software Engineering, Vol. 20, No. 6, 1994, pp. 476-493. doi:10.1109/32.295895
[24] S. R. Chidamber, D. P. Darcy and C. F. Kemerer, “Managerial Use of Metrics for Object-Oriented Software: An Exploratory Analysis,” IEEE Transactions on Software Engineering, Vol. 24, No. 8, 1998, pp. 629-637. doi:10.1109/32.707698
[25] R. S. Freedman, “Testability of Software Components,” IEEE Transactions on Software Engineering, Vol. 17, No. 6, 1991, pp. 553-564. doi:10.1109/32.87281
[26] J. M. Voas, “PIE: A Dynamic Failure-Based Technique,” IEEE Transactions on Software Engineering, Vol. 18, No. 8, 1992, pp. 717-727. doi:10.1109/32.153381
[27] J. Voas and K. W. Miller, “Semantic Metrics for Software Testability,” Journal of Systems and Software, Vol. 20, No. 3, 1993, pp. 207-216. doi:10.1016/0164-1212(93)90064-5
[28] J. M. Voas and K. W. Miller, “Software Testability: The New Verification,” IEEE Software, Vol. 12, No. 3, 1995, pp. 17-28. doi:10.1109/52.382180
[29] R. V. Binder, “Design for Testability in Object-Oriented Systems,” Communications of the ACM, Vol. 37, No. 9, 1994, pp. 87-101. doi:10.1145/182987.184077
[30] T. M. Khoshgoftaar and R. M. Szabo, “Detecting Program Modules with Low Testability,” 11th International Conference on Software Maintenance, Nice, 16 October 1995.
[31] T. M. Khoshgoftaar, E. B. Allen and Z. Xu, “Predicting Testability of Program Modules Using a Neural Network,” 3rd IEEE Symposium on Application-Specific Systems and SE Technology, Richardson, 24-25 March 2000, pp. 57-62.
[32] J. McGregor and S. Srinivas, “A Measure of Testing Effort,” Proceedings of the Conference on OO Technologies, Toronto, 17-21 June 1996.
[33] A. Bertolino and L. Strigini, “On the Use of Testability Measures for Dependability Assessment,” IEEE Transactions on Software Engineering, Vol. 22, No. 2, 1996, pp. 97-108. doi:10.1109/32.485220
[34] Y. Le Traon and C. Robach, “Testability Analysis of Co-Designed Systems,” Proceedings of the 4th Asian Test Symposium, ATS. IEEE CS, Washington, 1995.
[35] Y. Le Traon and C. Robach, “Testability Measurements for Data Flow Design,” Proceedings of the 4th International Software Metrics Symposium, New Mexico, 5-7 November 1997. doi:10.1109/METRIC.1997.637169
[36] Y. Le Traon, F. Ouabdessalam and C. Robach, “Analyzing Testability on Data Flow Designs,” Proceedings of the International Symposium on Software Reliability Engineering (ISSRE), 2000.
[37] A. Petrenko, R. Dssouli and H. Koenig, “On Evaluation of Testability of Protocol Structures,” IFIP, Rueil-Malmaison, 1993.
[38] K. Karoui and R. Dssouli, “Specification Transformations and Design for Testability,” Proceedings of the IEEE Global Telecommunications Conference, London, 18-22 November 1996, pp. 680-685.
[39] S. Jungmayr, “Testability Measurement and Software Dependencies,” Proceedings of the 12th International Workshop on Software Measurement, Magdeburg, 7-9 October 2002.
[40] J. Gao, J. Tsao and Y. Wu, “Testing and Quality Assurance for Component-Based Software,” Artech House Publisher, London, 2003.
[41] J. Gao and M. C. Shih, “A Component Testability Model for Verification and Measurement,” Proceedings of the IEEE International Computer Software and Applications Conference (COMPSAC), 2005.
[42] T. B. Nguyen, M. Delaunay and C. Robach, “Testability Analysis Applied to Embedded Data-Flow Software,” Proceedings of the 3rd International Conference on Quality Software, Dallas, 6-7 November 2003, pp. 4-11. doi:10.1109/QSIC.2003.1319121
[43] V. Chowdhary, “Practicing Testability in the Real World,” International Conference on Software Testing, Verification and Validation, IEEE CSP, Washington, 21-25 March 2009, pp. 260-268.
[44] R. A. Khan and K. Mustafa, “Metric Based Testability Model for Object-Oriented Design (MTMOOD),” ACM SIGSOFT Software Engineering Notes, Vol. 34, No. 2, 2009, pp. 1-6. doi:10.1145/1507195.1507204
[45] A. Kout, F. Toure and M. Badri, “An Empirical Analysis of a Testability Model for Object-Oriented Programs,” ACM SIGSOFT Software Engineering Notes, Vol. 36, No. 4, 2011, pp. 1-5. doi:10.1145/1988997.1989020
[46] L. C. Briand, Y. Labiche and H. Sun, “Investigating the Use of Analysis Contracts to Improve the Testability of Object-Oriented Code,” Software—Practice and Experience, Vol. 33, No. 7, 2003, pp. 637-672. doi:10.1002/spe.520
[47] M. Badri and F. Toure, “Evaluating the Effect of Control Flow on the Unit Testing Effort of Classes: An Empirical Analysis,” Advances in Software Engineering, Vol. 2012, 2012, 13 p. doi:10.1155/2012/964064
[48] A. Mockus, N. Nagappan and T. T. Dinh-Trong, “Test Coverage and Post-Verification Defects: A Multiple Case Study,” 3rd International Symposium on Empirical Software Engineering and Measurement, Lake Buena Vista, 15-16 October 2009. doi:10.1109/ESEM.2009.5315981
[49] B. Van Rompaey and S. Demeyer, “Establishing Traceability Links between Unit Test Cases and Units under Test,” Proceedings of the European Conference on Software Maintenance and Reengineering, Kaiserslautern, 24-27 March 2009.
[50] A. Qusef, G. Bavota, R. Oliveto, A. De Lucia and D. Binkley, “SCOTCH: Test-to-Code Traceability Using Slicing and Conceptual Coupling,” International Conference on Software Maintenance, Williamsburg, 29 March 2011.
[51] K. K. Aggarwal, Y. Singh, A. Kaur and R. Malhotra, “Empirical Analysis for Investigating the Effect of Object-Oriented Metrics on Fault Proneness: A Replicated Case Study,” Software Process: Improvement and Practice, Vol. 16, No. 1, 2009, pp. 39-62. doi:10.1002/spip.389
[52] Y. Zhou and H. Leung, “Empirical Analysis of Object-Oriented Design Metrics for Predicting High and Low Severity Faults,” IEEE Transactions on Software Engineering, Vol. 32, No. 10, 2006, pp. 771-789.
[53] L. C. Briand, J. Daly and J. Wuest, “A Unified Framework for Cohesion Measurement in Object-Oriented Systems,” Empirical Software Engineering—An International Journal, Vol. 3, No. 1, 1998, pp. 65-117.
[54] L. C. Briand, J. Wust, J. Daly and V. Porter, “Exploring the Relationship between Design Measures and Software Quality in Object-Oriented Systems,” Journal of Systems and Software, Vol. 51, No. 3, 2000, pp. 245-273. doi:10.1016/S0164-1212(99)00102-8
[55] T. Gyimothy, R. Ferenc and I. Siket, “Empirical Validation of Object-Oriented Metrics on Open Source Software for Fault Prediction,” IEEE Transactions on Software Engineering, Vol. 31, No. 10, 2005, pp. 897-910.
[56] A. Marcus, D. Poshyvanyk and R. Ferenc, “Using the Conceptual Cohesion of Classes for Fault Prediction in Object-Oriented Systems,” IEEE Transactions on Software Engineering, Vol. 34, No. 2, 2008, pp. 287-300. doi:10.1109/TSE.2007.70768
[57] K. El Emam and W. Melo, “The Prediction of Faulty Classes Using Object-Oriented Design Metrics,” National Research Council of Canada NRC/ERB 1064, 1999.
[58] D. W. Hosmer and S. Lemeshow, “Applied Logistic Regression,” Wiley, New York, 2000. doi:10.1002/0471722146
[59] K. El Emam, “A Methodology for Validating Software Product Metrics,” National Research Council of Canada NRC/ERB 1076, 2000.
