
Software Architecture Based Regression Testing 2006 Journal of Systems and Software

Software architecture-based regression testing

Henry Muccini a,*, Marcio Dias b, Debra J. Richardson c

a Dipartimento di Informatica, University of L'Aquila, Via Vetoio 1, I-67100 L'Aquila, Italy
b Department of Computer Science and e-Science Research Institute, University of Durham, Durham, UK
c Department of Informatics, Donald Bren School of Information and Computer Sciences, University of California Irvine, USA

Received 15 October 2005; received in revised form 20 December 2005; accepted 10 February 2006; available online 5 June 2006

Abstract

Software architectures are becoming central to the development of quality software systems, being the first concrete model of the software system and the basis guiding the implementation of software systems. When architecting dependable systems, in addition to improving system dependability by means of construction (fault-tolerant and redundant mechanisms, for instance), it is also important to evaluate, and thereby confirm, system dependability. There are many different approaches for evaluating system dependability, and testing has always been an important one, fault removal being one of the means to achieve dependable systems. Previous work on software architecture-based testing has shown it is possible to apply conformance testing techniques to gain some confidence that the implemented system conforms to expected, architecture-level, behaviors.

This work explores how regression testing can be systematically applied at the software architecture level in order to reduce the cost of retesting modified systems, and also to assess the regression testability of the evolved system. We consider assessing both "low-level" and "high-level" evolution, i.e., whether a slightly modified implementation conforms to the initial architecture, and whether the implementation continues to conform to an evolved architecture.
A better understanding of how regression testing can be applied at the software architecture level will help us to assess and identify architectures with higher dependability.

© 2006 Published by Elsevier Inc. The Journal of Systems and Software 79 (2006) 1379–1396. doi:10.1016/j.jss.2006.02.059

Keywords: Software architecture; Dependable systems; Regression testing; Architecture-based analysis and testing

Abbreviations: SA, software architecture; RT, regression testing; TS, transition system; ATS, abstract TS; ATC, architecture-level test case; GP, program graph.

* Corresponding author. Tel.: +39 0862 433721; fax: +39 0862 433131. E-mail addresses: (H. Muccini), (M. Dias), (D.J. Richardson).

1. Introduction

A software architecture (SA) (Garlan, 2000) specification captures system structure (i.e., the architectural topology), by identifying architectural components and connectors, and required system behavior, designed to meet system requirements, by specifying how components and connectors are intended to interact. Software architectures can serve as useful high-level "blueprints" to guide the production of lower-level system designs and implementations, and later on for guidance in maintenance and reuse activities. Moreover, SA-based analysis methods provide several value-added benefits, such as system deadlock detection, performance analysis, component validation and much more (Bernardo and Inverardi, 2003). Additionally, SA-based testing methods are available to check conformance of the implementation's behavior with SA-level specifications of expected behavior (Dias et al., 2000) and to guide integration and conformance testing (Bertolino et al., 2003; Muccini et al., 2003).

Reaping these architectural benefits, however, does not come for free. On the contrary, experience indicates that dealing with software architectures is often expensive, perhaps even too expensive, in some cases, to justify the benefits obtained. For example, consider the phenomenon of "architectural drift" (Perry and Wolf, 1992). It is not uncommon during evolution that only the low-level design and implementation are changed to meet tight deadlines, and the architecture is not updated to track the changes being made to the implementation. Once the architecture "drifts" out of conformance with the implementation, many of the aforementioned benefits are lost: previous analysis results cannot be extended or reused, and the effort spent on the previous architecture is wasted. Moreover, even when implementation and architecture are kept aligned, SA-based analysis methods often need to be rerun completely from the beginning, at considerable cost, whenever the system architecture or its implementation change.

SARTE (Software Architecture-based Regression TEsting) is a collaborative project among the three authors' universities, focused on providing a framework and approach for SA-based testing in the context of evolution, when both architecture and implementation are subject to change. The topic of architecture-based testing has been extensively analyzed by one of the authors in Muccini et al. (2003), where a general framework for software architecture-based conformance testing was proposed. A software architecture-based testing and analysis toolset (Argus-I) was developed by two of the authors, as described in Dias et al. (2000).

SARTE builds upon the research and development of both previous projects. In this context, this paper shows how SA-based regression testing provides a key solution to the problem of retesting an SA after its evolution.
In particular, after identifying SA-level behavioral test cases and testing conformance of the code with respect to the expected architectural behaviors (Muccini et al., 2003), we show what should be tested when the code and/or architecture is modified, and how testing information previously collected may be reused to test the conformance of the revised implementation with respect to either the initial or revised architecture. We describe, in general terms, (i) how implementation-level test cases may be reused to test the conformance of modified code with respect to the architectural specification, and (ii) how to reuse architecture-level test cases when the architecture evolves. Our approach relies on reusing and modifying existing code-level regression testing (RT) techniques.

1.1. Motivations and goals

This section describes why SA-based RT can contribute to improving overall system dependability (Section 1.1.1) and the goals of our ongoing project (Section 1.1.2).

1.1.1. SARTE motivations

Regression testing is used to test modified software, to provide confidence that no new errors are introduced into previously tested code. It may be used during development, to test families of similar products, or during maintenance, to test new or modified configurations (Harrold, 2000). Although SA-based RT may be used for both purposes, we here focus on the maintenance aspect, being confident the approach may be used during development as well.

In this section we analyze (i) why a software architecture may change due to maintenance or evolution, and (ii) why regression testing at the architecture level is a relevant topic.

Why may software architectures change? Software architectures may change over time, due to the need to provide a more dependable system, the need to remove identified deficiencies, or the need to handle dynamically evolving collections of components at runtime (Garlan, 2000). Much research has investigated SA evolution, especially at runtime. In Oreizy et al.
(1998), for example, the authors analyze how an architecture may change at runtime (in terms of component addition, component removal, component replacement, and runtime reconfiguration) and how tool suites may be used to cope with such evolution. In Allen et al. (1998) the authors describe an approach to specifying architectures that permits the representation and analysis of dynamic architectures. In Kramer and Magee (1998) the authors analyze the issues of dynamic changes to a software configuration, in terms of component creation and deletion, and connection and disconnection. In Medvidovic and Taylor (2000) the authors analyze those Architecture Description Languages which provide specific features for modeling dynamic changes.

Why SA-based regression testing? Many functional and non-functional analysis techniques have been proposed to operate at the SA level (Bernardo and Inverardi, 2003). However, the drawback is that, given that an architecture may evolve, current techniques require that SA-based analysis be completely rerun from scratch for a modified SA version, thereby increasing analysis costs and reducing benefits. To mitigate this drawback, we propose here to apply regression testing at the SA level in order to lower the cost and greatly improve the cost–benefit properties of SA-based testing.

The benefits we expect are manifold: (i) the selection of SA-level test cases and their execution at the code level is a long and expensive process (as described in Muccini et al. (2003) and summarized in Section 3.1). Reusing previous results as much as possible may strongly reduce testing effort when testing a modified architecture. Quoting Harrold (1998), in fact, "regression testing can account for as much as one-third of the total cost of a software system...
the use of software architecture for regression testing activities has the potential for a bigger impact on the cost of software"; (ii) SA-based testing can identify (functional) errors that are likely to be missed when applying traditional code-level testing, thus complementing traditional testing techniques. In line with research on specification-based RT (see Section 6), SA-based RT may valuably complement code-based RT, as discussed later in Section 5.

1.1.2. SARTE goals

SARTE's intermediate project goals are depicted in Fig. 1, where the left side embodies our first goal and the right side embodies the second:

Goal 1: Test Conformance of a Modified Implementation P' to the Initial SA.

• Context: Given a software system, an architecture S, and an implementation P, we first gain confidence that P correctly implements S. During maintenance, a modified version of the code (P') is implemented, where some components from P remain, some components are modified, and/or some new components are introduced.
• Goal: Test the conformance of P' with respect to S, while reusing previous test information for selective regression testing, thereby reducing the test cases that must be retested.

Goal 2: Test Conformance of an Evolved Software Architecture.

• Context: Given a software system, an architecture S, and an implementation P, we have already gained confidence that P correctly implements S. Suppose evolution requires a modified version of the architecture (S''), where some architecture-level components are kept, others are modified, and/or new ones are introduced; consequently, a modified implementation P'' may also have been developed.
• Goal: Test the conformance of P'' with respect to S'', while reusing previous test information for selective RT, thereby reducing the test cases that must be retested.

In the rest of this paper we address both goals, by proposing an approach to integrate existing code-level RT techniques with SA-based RT and by exploiting similarities and differences between SA versions.

A different goal is to reconstruct the actual architecture when the first goal determines that the code no longer conforms to the initial architecture. This is a sort of reverse-engineering activity that could mitigate the architectural drift problem in a more general context. Ongoing research on this topic is presented in Section 7.

1.2. Paper structure and organization

The paper is organized as follows. Basic background on regression testing is provided in Section 2. A theoretical description of the approach is presented in Section 3. In Section 4 we illustrate the application of SARTE to two different applications: the Elevator example and the Cargo Router system. Section 5 describes what we learned from our experience with SA-based regression testing, together with some considerations. Related work is briefly discussed in Section 6. The paper concludes with a discussion of ongoing and future work (Section 7) and some concluding remarks (Section 8).

2. Regression testing

In this section, we focus on the regression testing strategy to provide the background necessary to understand our approach for SA-based regression testing. We briefly introduce how regression testing works, describing, in broad terms, how to identify appropriate tests in a regression test selection context.

Regression testing, as quoted from Harrold (2000), "attempts to validate modified software and ensure that no new errors are introduced into previously tested code".
The traditional approach is decomposed into two key phases: (i) testing the program P with respect to a specified test suite T, and (ii) when a new version P' is released, regression testing of the modified version P' to provide confidence that P' is correct with respect to a test set T'.

Fig. 1. Project goals: (a) the implementation evolves; (b) the software architecture evolves.

To explain how a regression testing technique works in general, let us assume that a program P has been tested with respect to a test set T. When a new version P' is released, regression testing techniques provide a certain confidence that P' is correct with respect to a test set T'. In the simplest regression testing technique, called retest all, T' contains all the test cases in T, and P' is run on T'.
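Beyond the retest-all baseline, a selection step keeps only the tests that could behave differently on the new version. The sketch below illustrates this with a simple coverage-based model; the test names, covered entities, and change set are hypothetical, not drawn from the paper's case studies.

```python
# Sketch of coverage-based selective regression test selection.
# Test history: the entities (e.g. components or methods) each test
# case covered when P was tested with T. All names are invented.
test_history = {
    "t1": {"CompA", "CompB"},
    "t2": {"CompB", "CompC"},
    "t3": {"CompA"},
}

changed = {"CompB"}  # entities modified between P and P'


def select_relevant(history, changed_entities):
    """Keep a test t in T if it covered at least one changed entity,
    i.e. it could produce different results on P' than on P
    (a safe selection under this coverage model)."""
    return sorted(t for t, covered in history.items()
                  if covered & changed_entities)


print(select_relevant(test_history, changed))  # ['t1', 't2']
```

Retest-all corresponds to returning every test regardless of the change set; the intersection test above is the kind of decision that regression test selection automates.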
In selective regression testing, T' is selected as a "relevant" subset of T, where t ∈ T is relevant for P' if there is the potential that it could produce different results on P' than it did on P (following a safe definition).

In general terms, assuming that P is a program under test, T a test suite for P, P' a modified version of P, and T' the new test suite for P', regression testing techniques work according to the following steps: (1) select T', a subset of T relevant for P'; (2) test P' with respect to T'; (3) if necessary, create T'', to test new functionality/structure in P'; (4) test P' with respect to T''; (5) create T''', a new test suite and test history.

All of these steps are important for the success of a selective regression testing technique, and each of them involves important problems (Graves et al., 1998). However, step 1 (also called regression test selection) characterizes a selective regression testing technique. For this reason, we focus on this step to propose an SA-based regression testing approach in the rest of this paper.

3. SARTE: SA-based regression testing

Our SA-based regression testing inherits the two-phase decomposition from traditional RT approaches, and comprises the following two phases:

SA-based conformance testing. We apply an SA-based conformance testing approach.

SA-based regression test selection. This phase is decomposed to meet Goal 1 and Goal 2 in Section 1.1.2.

Fig. 2 summarizes the activities required by SA-based conformance and regression testing. While SA-based conformance testing has already been analyzed in Muccini et al. (2003, 2004), the goal of this paper is to focus on SA-based regression testing.

3.1. SA-based conformance testing

As mentioned before, this work builds upon the general framework for SA-based conformance testing set forth in Muccini et al. (2003), whose goal is to test the implementation's conformance to a given software architecture.
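The core check behind such conformance testing can be sketched as follows: the architectural behavioral model, a transition system, acts as the oracle, and an observed execution trace, already mapped to architecture-level events, passes if it spells out a path in that model. The tiny model and event names below are invented for illustration, not taken from the paper.

```python
# Minimal sketch of a transition-system oracle: an execution trace
# (a sequence of architecture-level events) conforms if it is a path
# of the behavioral model. States and events are hypothetical.

# Behavioral model: state -> {event: next_state}
ts = {
    "idle":    {"request": "serving"},
    "serving": {"ack": "idle", "fail": "error"},
    "error":   {},
}


def conforms(trace, ts, start="idle"):
    """Replay the trace on the model; any event the current state
    does not allow makes the test case fail."""
    state = start
    for event in trace:
        if event not in ts[state]:
            return False
        state = ts[state][event]
    return True


print(conforms(["request", "ack", "request"], ts))  # True
print(conforms(["request", "request"], ts))         # False
```

The same replay idea works whether the model is the full TS or an abstracted view of it; what changes is which events survive the mapping from code-level traces.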
The software architecture specification is used as a reference model to generate test cases, while its behavioral model, which describes the expected behavior, serves as a test oracle.

The framework encompasses five different steps, as shown in the middle section of Fig. 2.

Step 0: SA specification. It begins with a topological and behavioral specification of the SA. The topology describes the SA structure in terms of components, connectors, and configuration. The behavioral model specifies how the SA is supposed to behave. Architecture description languages are employed for the topological description, while transition systems (TS) are hereafter employed for SA behavior.

Step 1: Testing criterion. An observation function is introduced in order to implement a testing criterion that looks at the SA from a perspective deemed relevant for testing purposes, while hiding actions that are not relevant from this perspective. The state machine-based model is abstracted, producing an Abstract TS (ATS), in order to show the high-level behaviors/components we want to test.

Step 2: Architecture-level test case. An architecture-level test case (ATC) is defined as an ordered sequence of architectural events we expect to observe when a certain initiating event is performed. This definition encompasses two keywords: the sequence of actions, which represents expected behaviors, and the initiating event, that is, the architectural input which should allow the sequence to happen. Deriving an adequate set of ATCs entails deriving a set of complete paths that appropriately cover the ATS.

Step 3: Test cases. Naturally, such ATCs strongly differ from executable code-level test cases, due to the abstraction gap between software architecture and code (the traceability problem (Dick and Faivre, 1993)). We deal with this problem through a "mapping" function which maps SA-level functional tests into code-level test cases.

Step 4: Test execution.
Finally, the code is run over the identified test cases. The execution traces are analyzed to determine whether the system implementation works correctly for the selected architectural tests, using the architectural behavioral model as a test oracle to identify when a test case fails or succeeds.

Experience in applying SA-based conformance testing has demonstrated its feasibility and suitability. However, repeating the entire testing process every time the system evolves is undoubtedly too expensive, making SA-based testing less appealing and applicable. Here we propose an approach to deal with system evolution, which reuses previous test results to retest the modified architecture/implementation with reduced effort.

3.2. Goal 1: Test conformance of a modified implementation P' to the initial SA

Let us assume SA-based conformance testing has provided confidence that the implementation P conforms to a