An Approach to Developing a Performance Test Based on the Tradeoffs from SW Architectures

Abstract

In performance testing, the standards for assessing test results are not sufficiently established because performance testing lacks the well-structured test development methods found in functionality testing. By extending the established workflow structure, the proposed approach concentrates on the tradeoffs captured in a tradeoff-based workflow (T-workflow) and develops tests based on that T-workflow. Monitoring points and tuning points are also investigated to better understand the validity and performance of the software. Finally, a case study shows that a better assessment of software performance can be obtained with the suggested tests developed from the T-workflow and by locating its monitoring point and tuning point.

1. Introduction

The quality of software (SW) is directly related to performance testing, in which the system's efficiency and reliability are assessed. A performance test measures speed under certain loading conditions and discovers bottlenecks within the functions of a system. It is conducted primarily to verify that a system satisfies its performance objectives [1]. The performance of a system is affected by many complex factors, and one performance attribute can affect another.

SW performance is validated through performance evaluation before SW development and through performance testing after SW development. Performance evaluations are built with performance models, and the most frequently used models are based on the software architecture (SA) [2,3]. Most performance evaluations are therefore built with performance models that assess only the performance of the SA, not of the SW itself. Consequently, there is an inevitable gap between the performance results analyzed with performance models and the performance of the realized SW; in other words, a performance evaluation designed only with performance models has inherent limitations.

A performance test, on the other hand, is built from performance requirements and workload models. Many studies have stressed the importance of clarifying performance requirements when developing dependable performance tests, because most tests are conducted by framing test scenarios based on those requirements. Other performance tests are built with more realistic workload models derived by analyzing user behavior patterns. Whether test cases are developed from performance requirements or from workload models, only the achievement of the performance requirements can be verified for the test items; the complex relationships between performance attributes are not reflected, despite their importance. In this paper, a performance test coverage is defined for analyzing the side-effects of performance attributes, and test cases satisfying the suggested coverage are developed.

It is generally believed that SW performance is largely determined at the SA development stage, and before SW development, SAs are most often used for performance assessment [4]. The Architecture Tradeoff Analysis Method (ATAM) [5] is one of many SA-based assessment methods; it evaluates the suitability of an architecture by analyzing whether the initially intended quality objectives are achieved and by detecting risky components through analysis of the tradeoffs among architectural decisions. In this paper, a performance test coverage is defined using these tradeoffs of architectural decisions, and a new approach is proposed for developing more systematic performance test cases based on analyzing the causality of performance attributes' side-effects.

Following this introduction, existing architecture-based performance evaluations, performance analyses, and performance tests are examined in Section 2. Section 3 explains the suggested methods and the process of developing such test cases. Section 4 presents a case study in which the suggested methods are applied to a NAND flash memory file system. Finally, Section 5 concludes the study and outlines future work.

2. Related Works

2.1. Performance Test

Performance is a property of the whole system, reflecting its overall functionality. A performance test is usually conducted at the system test level after system development is complete. Commonly employed performance test methods and tools use a scenario-based black-box technique, which develops test scenarios based on performance requirements [4,6] or on workloads measured by analyzing existing usage data [7]. However, specification-based performance testing at the system level focuses on measuring performance only under certain loading conditions, which makes it difficult to detect the causes of performance problems. Moreover, because system development is already complete, there are limitations in resolving the identified problems.
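
As a rough illustration of this requirement-driven, scenario-based style, the sketch below turns a hypothetical requirement ("95% of requests complete within 2 s under the specified load") into a pass/fail check. The threshold, percentile, and function name are illustrative assumptions, not taken from the cited works.

```python
# Hypothetical sketch: a pass/fail check derived from a performance
# requirement such as "95% of requests complete within 2 s under load".
# The threshold and percentile are illustrative values only.
def meets_requirement(latencies_s, threshold_s=2.0, percentile=0.95):
    """Return True if at least `percentile` of the measured latencies
    fall at or below the requirement's threshold."""
    within = sum(1 for t in latencies_s if t <= threshold_s)
    return within / len(latencies_s) >= percentile

# Example: latencies collected under the specified loading condition.
print(meets_requirement([0.8, 1.2, 1.9, 2.4, 0.5]))
```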

Existing studies on model-based performance testing are mostly aimed at constructing more realistic workloads [8], and their system under test (SUT) is typically a web application [9,10]. There are two commonly used methods for constructing workloads: a more recent method that analyzes the existing log files of web applications, and a method that derives workloads from user behavior patterns observed in existing, similar applications. Both methods, however, require time and resources for collecting and analyzing the log files and user behavior patterns.
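
As a minimal sketch of the log-analysis approach, the following code derives an operation mix and a mean request rate from access-log lines. The log format, field names, and derived statistics are assumptions made for illustration; they do not reflect any specific tool cited above.

```python
# Minimal sketch of deriving a workload profile from access-log lines of
# the form "2013-04-01T12:00:00 READ". The log format and the derived
# statistics are illustrative assumptions, not a specific tool's output.
from collections import Counter
from datetime import datetime

def derive_workload(log_lines):
    """Estimate the operation mix and the mean request rate per second."""
    ops = Counter()
    timestamps = []
    for line in log_lines:
        ts_text, op = line.split()
        ops[op] += 1
        timestamps.append(datetime.fromisoformat(ts_text))
    duration_s = (max(timestamps) - min(timestamps)).total_seconds() or 1.0
    total = sum(ops.values())
    return {
        "operation_mix": {op: n / total for op, n in ops.items()},
        "mean_rate_per_sec": total / duration_s,
    }

print(derive_workload(["2013-04-01T12:00:00 READ",
                       "2013-04-01T12:00:01 WRITE",
                       "2013-04-01T12:00:02 READ"]))
```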

2.2. Software-Architecture-Based Performance Test and Analysis

Software architecture is a set of important decisions about the structure of SW; it describes the structure of SW at a high level of abstraction [2]. Because SW is realized based on its architecture, SW performance is greatly affected by the SA.

Various studies analyze SW performance using SAs at the early SW development stage, through performance prediction and evaluation. By analyzing SW performance during development, weaknesses can be discovered early enough to be remedied or adjusted, leading to improved SW quality. Software Performance Engineering (SPE) [11] proposes the use of mathematical performance models to assess performance at every stage from the beginning of SW development.

Recently, performance models built from the SA and its requirement models have been suggested for analyzing SW performance; such models can help select the SA with the best performance [2,3]. However, because of the inevitable gap between an SA and the realized SW, these methods are essentially limited to selecting the optimal architecture at the development stage. Therefore, additional performance tests are required after SW development.

In this paper, the analyzed tradeoffs of architectural decisions are used to develop performance test cases. Architectural decisions are major SA solutions that directly influence the establishment of performance attributes and their tradeoff relationships, and each decision can affect more than one quality attribute. Through analysis of the tradeoffs within architectural decisions, four methods are suggested for performance testing: 1) setting performance evaluation indices; 2) developing test cases that apply the tradeoff-based workflow design as a test coverage; 3) identifying a monitoring point and using the monitored performance-affecting data to interpret the performance test results; and 4) identifying a tuning point. Terms such as tradeoff-based workflow, monitoring point, and tuning point are defined and explained in detail in Section 3.2. Beyond these four methods, the study also aims to analyze the side-effects of performance attributes through a performance test in which performance indices are set and test cases are built based on the tradeoffs.

3. A New Method for Developing Performance Tests

To build performance test cases more effectively, this study addresses two major test issues. The first is "what should be tested in the performance test", for which SA tradeoffs are used in building the performance tests. In existing test methods, only one performance index is evaluated at a time. In the proposed approach, however, when a performance attribute that is in a tradeoff relationship with another attribute is selected as a performance index, the attributes it trades off against, even if they belong to other quality attributes, are also selected as performance indices. By evaluating two or more performance indices simultaneously, the test results can focus on analyzing the side-effects of performance attributes.
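
A minimal sketch of measuring two tradeoff-related indices in a single run is given below. The choice of indices (write throughput and peak resident memory) and the workload are assumptions made for illustration only; they are not the indices defined later in the paper.

```python
# Sketch: evaluate two performance indices in one run so that a change
# in one can be checked against its side-effect on the other.
# The indices and workload here are illustrative assumptions.
import os
import resource   # Unix-only; ru_maxrss is reported in kB on Linux
import time

def run_write_workload(path, block_size=4096, blocks=1024):
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(blocks):
            f.write(os.urandom(block_size))
        f.flush()
        os.fsync(f.fileno())
    elapsed = time.perf_counter() - start
    return {
        "write_throughput_Bps": block_size * blocks / elapsed,                # index 1
        "peak_rss_kB": resource.getrusage(resource.RUSAGE_SELF).ru_maxrss,    # index 2
    }

print(run_write_workload("/tmp/perf_test.bin"))
```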

The second test issue is "what should be selected as the input variables for each test case". Performance problems are usually caused by complex functions with many interacting factors, which makes their causes difficult to discover. However, if a test case is built with selected key variables of SW performance, the causes of performance problems can be understood more easily and the actual values of the input variables become easier to analyze. In this paper, a workflow that illustrates the tradeoff relationships between performance attributes, called the T-workflow, is drawn, and test cases are built to cover this T-workflow so that appropriate input variables are selected. Furthermore, new methods for locating a monitoring point and a tuning point are introduced. A monitoring point is a performance-affecting point at which the causes of performance degradation can be better understood; a tuning point is a point at which performance can be adjusted to find the optimal performance state.
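
The following sketch shows one way a monitoring point and a tuning point could be wired into a test harness. The monitored counter (a garbage-collection count) and the tunable parameter (a cache size) are hypothetical placeholders; the concrete points depend on the architectural decisions of the system under test.

```python
# Hypothetical sketch: a test case that treats a cache size as the tuning
# point and a garbage-collection counter as the monitoring point. The
# names and callbacks are placeholders, not part of the paper's method.
def run_test_case(workload, cache_size_kb, read_gc_count):
    """workload(cache_size_kb) returns the main performance index
    (e.g., average latency); read_gc_count() reads the monitoring point."""
    gc_before = read_gc_count()
    main_index = workload(cache_size_kb)
    gc_after = read_gc_count()
    return {
        "tuning_point_value": cache_size_kb,            # adjusted between runs
        "main_index": main_index,
        "monitoring_point_delta": gc_after - gc_before, # data for interpreting results
    }

# Sweep the tuning point to look for the optimal performance state, e.g.:
# results = [run_test_case(my_workload, kb, my_gc_counter) for kb in (64, 128, 256)]
```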

Figure 1 illustrates the flow of inputs and outputs in developing test cases. The T-workflow is drawn from the performance requirements, the SA decisions that satisfy those requirements, and the SA itself; test cases are then developed from the T-workflow. In this section, the suggested method is explained through a running example that builds a performance test case for YAFFS2, a NAND flash memory file system [12].
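
To make the flow in Figure 1 concrete, the sketch below represents a T-workflow as steps annotated with the performance attributes they affect and enumerates one test case per step that sits on a tradeoff. The step names and attribute pairs are hypothetical; they do not reproduce the YAFFS2 T-workflow developed in the case study.

```python
# Hypothetical sketch: derive test cases from a T-workflow whose steps
# are annotated with the performance attributes they affect.
T_WORKFLOW = [
    ("write_page",      {"throughput", "memory_use"}),   # illustrative steps,
    ("garbage_collect", {"memory_use", "latency"}),      # not YAFFS2's actual
    ("mount_scan",      {"mount_time", "memory_use"}),   # workflow
]

def derive_test_cases(t_workflow):
    """Generate one test-case descriptor for every step that affects
    two or more attributes, i.e. every step lying on a tradeoff."""
    return [
        {"step": step, "indices": sorted(attrs)}
        for step, attrs in t_workflow
        if len(attrs) >= 2
    ]

print(derive_test_cases(T_WORKFLOW))
```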

Conflicts of Interest

The authors declare no conflicts of interest.

References

[1] F. Mattiello-Francisco, E. Martins, A. R. Cavalli and E. T. Yano, “InRob: An Approach for Testing Interoperability and Robustness of Real-Time Embedded Software,” Journal of Systems and Software, Vol. 85, No. 1, 2011, pp. 3-15.
[2] S. Balsamo, P. Inverardi and C. Mangano, “An Approach to Performance Evaluation of Software Architectures,” Proceedings of the 1st International Workshop on Software and Performance, Santa Fe, 12-16 October 1998, pp. 178-190. doi:10.1145/287318.287354
[3] F. Aquilani, S. Balsamo and P. Inverardi, “Performance Analysis at the Software Architectural Design Level,” Performance Evaluation, Vol. 45, No. 4, 2001, pp. 147-178.
[4] E. J. Weyuker and F. I. Vokolos, “Experience with Performance Testing of Software Systems: Issues, an Approach, and Case Study,” IEEE Transactions on Software Engineering, Vol. 26, No. 12, 2000, pp. 1147-1156.
[5] R. Kazman, M. Klein, M. Barbacci, T. Longstaff, H. Lipson and J. Carriere, “The Architecture Tradeoff Analysis Method,” Proceedings of the 4th International Conference on Engineering of Complex Computer Systems (ICECCS ’98), Monterey, 10-14 August 1998, pp. 68-78.
[6] C. W. Ho and L. Williams, “Deriving Performance Requirements and Test Cases with the Performance Refinement and Evolution Model (PREM),” North Carolina State University, Raleigh, 2006, Technical Report No. TR-2006-30.
[7] F. I. Vokolos and E. J. Weyuker, “Performance Testing of Software Systems,” Proceedings of the 1st International Workshop on Software and Performance, Santa Fe, 12-16 October 1998, pp. 80-87. doi:10.1145/287318.287337
[8] D. Draheim, J. Grundy, J. Hosking, C. Lutteroth and G. Weber, “Realistic Load Testing of Web Applications,” Proceedings of the 10th European Conference on Software Maintenance and Reengineering (CSMR ’06), Bari, 22-24 March 2006, pp. 57-70.
[9] Y. Y. Gu and Y. J. Ge, “Search-Based Performance Testing of Applications with Composite Services,” International Conference on Web Information Systems and Mining, Shanghai, 7-8 November 2009, pp. 320-324. doi:10.1109/WISM.2009.73
[10] C. D. Grosso, G. Antoniol, M. Di Penta, P. Galinier and E. Merlo, “Improving Network Applications Security: A New Heuristic to Generate Stress Testing Data,” Proceedings of the 2005 Conference on Genetic and Evolutionary Computation (GECCO ’05), Hans-Georg Beyer, Ed., ACM, New York, 2005, pp. 1037-1043.
[11] C. U. Smith, “Performance Engineering of Software Systems,” Addison-Wesley, Boston, 1990.
[12] http://www.yaffs.net/
[13] IEEE Standard Glossary of Software Engineering Terminology, IEEE Std 610.12-1990.
