Empirical Analysis of Object-Oriented Design Metrics for Predicting Unit Testing Effort of Classes

Abstract

In this paper, we empirically investigate the relationship between object-oriented design metrics and the testability of classes. We address testability from the perspective of unit testing effort. We collected data from three open-source Java software systems for which JUnit test cases exist. To capture the testing effort of classes, we used metrics that quantify the corresponding JUnit test cases. Classes were classified into two categories, high and low, according to the required unit testing effort. To evaluate the relationship between object-oriented design metrics and the unit testing effort of classes, we used logistic regression methods: univariate logistic regression analysis to evaluate the individual effect of each metric on the unit testing effort of classes, and multivariate logistic regression analysis to explore the combined effect of the metrics. The performance of the prediction models was evaluated using Receiver Operating Characteristic (ROC) analysis. The results indicate that: 1) complexity, size, cohesion and (to some extent) coupling are significant predictors of the unit testing effort of classes, and 2) multivariate regression models based on object-oriented design metrics can accurately predict the unit testing effort of classes.
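The modeling pipeline described in the abstract can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' actual pipeline: the metric names (WMC, LOC, LCOM, CBO), the data-generating process, and the coefficients are assumptions for demonstration only.

```python
# Sketch of the study's analysis: univariate and multivariate logistic
# regression predicting whether a class needs high unit testing effort,
# evaluated with ROC AUC. All data below are synthetic assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 200

# Hypothetical OO design metrics for n classes.
X = np.column_stack([
    rng.poisson(10, n),   # WMC: weighted methods per class (complexity)
    rng.poisson(150, n),  # LOC: lines of code (size)
    rng.poisson(5, n),    # LCOM: lack of cohesion in methods
    rng.poisson(4, n),    # CBO: coupling between objects
])

# Synthetic binary label: 1 = high unit testing effort (driven here,
# by construction, mostly by complexity and size).
logit = 0.15 * X[:, 0] + 0.01 * X[:, 1] - 3.5
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

# Univariate analysis: individual effect of each metric.
for name, col in zip(["WMC", "LOC", "LCOM", "CBO"], X.T):
    uni = LogisticRegression().fit(col.reshape(-1, 1), y)
    auc = roc_auc_score(y, uni.predict_proba(col.reshape(-1, 1))[:, 1])
    print(f"{name}: AUC = {auc:.2f}")

# Multivariate analysis: combined effect of all metrics.
multi = LogisticRegression(max_iter=1000).fit(X, y)
auc_all = roc_auc_score(y, multi.predict_proba(X)[:, 1])
print(f"combined model: AUC = {auc_all:.2f}")
```

An AUC of 0.5 corresponds to random guessing and 1.0 to perfect discrimination, which is why ROC analysis is a natural fit for comparing binary effort-prediction models.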

Share and Cite:

M. Badri and F. Toure, "Empirical Analysis of Object-Oriented Design Metrics for Predicting Unit Testing Effort of Classes," Journal of Software Engineering and Applications, Vol. 5, No. 7, 2012, pp. 513-526. doi: 10.4236/jsea.2012.57060.

Conflicts of Interest

The authors declare no conflicts of interest.


Copyright © 2024 by authors and Scientific Research Publishing Inc.

Creative Commons License

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.