Date and Time: 17 April 2015
Location: Congress Graz
The academic keynote will be given by Prof. Per Runeson from Lund University and the industrial keynote by Ranorex.
08:45 - 09:00
Opening & Welcome
09:00 - 09:45
Keynote 1: Christoph Preschern (Ranorex, Austria)
People behind the Testing Tools and Frameworks
Developing a tool that helps testers with automation requires dealing with two major challenges. First, you need an excellent understanding of the application technologies used in web, desktop, or mobile environments. Second, and no less important, you need to understand the testers working with the tool. Who are the people behind the test automation projects? What expertise, knowledge, and skills do they possess? This keynote addresses these questions and looks at the test automation challenges testers face today.
09:45 - 10:30
Short Paper Position Statements: Applications and Challenges
C.S. Gebizli, D. Metin, H. Sözer
Combining Model-Based and Risk-Based Testing for Effective Test Case Generation
Model-based testing employs models of the system under test to automatically generate test cases. In this paper, we propose an iterative approach, in which these models are refined based on the principles of risk-based testing. We use Markov Chains as system models, in which transitions among system states are annotated with probabilities. Initially, these probability values are equal and as such, states have equal chances for being visited by the generated test cases. Memory leaks are monitored during the execution of these test cases. Then, transition probabilities are updated based on the risk that a failure can occur due to the observed memory leaks. We applied our approach in the context of an industrial case study for model-based testing of a Smart TV system. We observed promising results, in which several crash failures were detected after an iteration of model refinement. We aim at automating the whole process based on an adaptation model using the history of recorded memory leaks during previous test executions.
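The iterative refinement described in the abstract can be sketched in a few lines. This is a minimal illustration of the general idea, not the authors' actual model: the state names, the risk-boosting factor, and the update rule are invented assumptions.

```python
import random

# Hypothetical Smart-TV-style Markov chain: transitions carry probabilities
# that start out uniform and are later boosted where memory leaks were seen.
transitions = {
    "Home":     {"Apps": 0.5, "Settings": 0.5},
    "Apps":     {"Home": 0.5, "Player": 0.5},
    "Player":   {"Home": 1.0},
    "Settings": {"Home": 1.0},
}

def generate_test_case(start="Home", length=5):
    """Random walk through the model; each visited state is a test step."""
    state, steps = start, []
    for _ in range(length):
        nexts = transitions[state]
        state = random.choices(list(nexts), weights=list(nexts.values()))[0]
        steps.append(state)
    return steps

def raise_risk(src, dst, factor=3.0):
    """After observing a leak on src -> dst, boost that transition's
    probability and renormalise, so later walks favour the risky state."""
    probs = transitions[src]
    probs[dst] *= factor
    total = sum(probs.values())
    for k in probs:
        probs[k] /= total

raise_risk("Apps", "Player")
# "Player" now has probability 0.75 from "Apps", so generated test cases
# exercise it more often in the next iteration.
```

In the paper this loop is driven by monitored memory leaks; here `raise_risk` stands in for that feedback step.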
X. Zhang, H. Tanno
Requirements Document Based Test Scenario Generation for Web Application Scenario Testing
This paper introduces a method to generate test scenarios from semi-formal requirements documents such as use-case descriptions and screen transition diagrams. In this method, use-case testing techniques and state transition testing techniques are combined to comprehensively cover the viewpoints needed for web application scenario testing. Evaluations of two industry-level software projects show that the proposed generation method can replicate most of the existing manually derived test scenarios.
T. Arts, J. Hughes, U. Norell, H. Svensson
Testing AUTOSAR software with QuickCheck
The automotive industry defines AUTOSAR (AUTomotive Open System ARchitecture) as a standard for software development relevant to vehicle manufacturers, suppliers and other companies from the electronics, semiconductor and software industries. Recently, Volvo Car Corporation released its first car based upon this software standard. About a hundred ECUs (computers) are connected via CAN, LIN, FlexRay, or a combination of those. The software in these ECUs is developed by different companies. The software is highly configurable, with thousands of different parameters. The standard is precise, but leaves some room for optimisation and interpretation. The challenge we have addressed is testing that the software of the different vendors is compatible. We did so by modelling over 3000 pages of textual specification as QuickCheck models and tested different implementations against large volumes of randomly generated tests from these models. We filed over 200 issues for discussion with Volvo and the software vendors. The complete testing approach was more efficient, more effective and more accurate than a manual testing approach.
M. Carlsson, O. Grinchtein, J. Pearson
Testing of a telecommunication protocol using constraint programming
The system under test is a radio base station that communicates with a mobile phone simulator and a network entity. Testing requires the capture and analysis of log files in the simulated environment. The protocol includes a number of messages with complex timing requirements between them. A protocol log is a sequence of messages with timestamps.
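To make the log-analysis setting concrete, the following sketch checks one timing requirement over a protocol log. The message names, the 500 ms deadline, and the log itself are invented for illustration; the paper expresses such requirements for a constraint solver rather than checking them directly.

```python
# A protocol log as (timestamp_ms, message) pairs. Illustrative timing
# requirement: every "measurement_report" must follow its
# "measurement_request" within deadline_ms.
def check_timing(log, request="measurement_request",
                 report="measurement_report", deadline_ms=500):
    violations = []
    pending = None  # timestamp of the last unanswered request
    for t, msg in log:
        if msg == request:
            pending = t
        elif msg == report and pending is not None:
            if t - pending > deadline_ms:
                violations.append((pending, t))
            pending = None
    return violations

log = [(0,   "measurement_request"), (120, "measurement_report"),
       (200, "measurement_request"), (900, "measurement_report")]
# the second report arrives 700 ms after its request -> one violation
```

A constraint-programming formulation would state the same relation declaratively and let the solver search for satisfying (or violating) message sequences.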
T. Wetzlmaier, M. Winterer
Test automation for Multi-Touch User Interfaces of Industrial Applications
In this paper we discuss challenges in testing multi-touch user interfaces. We report on our experiences with testing multi-touch UIs in the context of industrial software applications for machinery control. For this purpose we developed a reusable capture/replay approach for recording multi-touch gestures. The tool support is used to establish a gesture library that captures the wide range of individual variants in multi-touch interactions for regression testing of industrial applications.
C. Klammer, A. Kern
Writing Unit Tests: It’s Now or Never!
Write unit tests now or never! In this paper, we claim that it is often not worth the effort to write unit tests for old, untested code: such code is likely to be more difficult to test than code that is already covered by tests. We share our experiences in trying to create tests for existing code and list the most common testability issues and the findings derived from them.
E. Engström, K. Petersen
Mapping software testing practice with software testing research – SERP-test taxonomy
Background: There is a gap between software testing research and practice. One reason is the discrepancy between how testing research is reported and how testing challenges are perceived in industry. Aim: We propose the SERP-test taxonomy to structure information on testing interventions and practical testing challenges from a common perspective and thus bridge the communication gap. Method: To develop the taxonomy we follow a systematic incremental approach which includes reviews of existing taxonomies, standards and classifications, expert interviews and an open online survey. Conclusion: The SERP-test taxonomy may be used by both researchers and practitioners to classify and search for testing challenges or interventions. The SERP-test taxonomy also supports comparison of testing interventions by providing an instrument for assessing the distance between them and thus identifying relevant points of comparison.
10:30 - 11:00
11:00 - 12:30
Full Paper Presentations: Model-based Test Automation
V. Entin, M. Winder, B. Zhang, A. Claus
A Process to Increase the Model Quality in the Context of Model-Based Testing
In recent years model-based testing (MBT) has become a widely used approach to test automation in industrial contexts. Until now, the application of MBT has been limited to software quality engineers with very good modeling skills. In order to guarantee the completeness of a model and to increase its precision, there is a need to open the approach to other project stakeholders, such as requirements engineers and software quality engineers with limited modeling experience. In this contribution we share the challenges discovered during several years of applying a particular MBT technique in a Scrum project, with particular regard to the definition of precise and complete models. A process that involves the entire software project team in model definition, starting at the very early stages of product development, is presented along with its concrete implementation. First experiences with the application of the process in a particular project are presented.
M. Micallef, C. Colombo
Lessons learnt from using DSLs for Automated Software Testing
Domain Specific Languages (DSLs) provide a means of unambiguously expressing concepts in a particular domain. Although they may not refer to it as such, companies build and maintain DSLs for software testing on a day-to-day basis, especially when they define test suites using the Gherkin language. However, although the practice of specifying and automating test cases using the Gherkin language and related technologies such as Cucumber has become mainstream, the curation of such languages presents a number of challenges. In this paper we discuss lessons learnt from five case studies on industrial systems, two involving the use of Gherkin-type syntax and three using more rigidly defined language grammars. Initial observations indicate that the likelihood of success increases if one separates the concerns of domain experts who curate the language, users who write scripts in the language, and engineers who wire the language into test automation technologies, thus producing executable test code. We also provide some insights into desirable qualities of testing DSLs in different contexts.
T. Arts, K. Bogdanov, A. Gerdes, J. Hughes
Graphical editing support for QuickCheck models
In order to have QuickCheck generate test cases, a QuickCheck user has to provide a state machine specification. Each API call that is part of a test case is specified in that state machine, together with random generators for the arguments of the call. QuickCheck generates a sequence of API calls by randomly picking one of the specified API calls. Obviously, not each sequence is a meaningful test case; QuickCheck uses user-specified preconditions to filter API calls in positions where they are meaningless. For example, in many cases, the implementation under test starts un-initialized and needs one initialization call as the first API call in the sequence. The QuickCheck libraries offer two solutions for this: explicit preconditions depending on the state data, where the user explicitly checks and updates this data, and implicit preconditions obtained by defining a finite state machine abstraction to restrict the generated sequences. In this paper, we demonstrate by example how implicit preconditions make the specifications more concise and readable. Moreover, we present an extension to QuickCheck to graphically edit a finite state machine while creating a formal specification. The seamless interaction between graphics and formal model simplifies the user's task of creating formal specifications. In particular, for large, industrially relevant specifications, this is an advantage. By doing so, we enable users to write specifications more quickly, make the formal part easier to understand, and quickly visualise what abstract tests are generated. Finally, we enable users to influence test distributions in a graphical way.
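The "implicit precondition" idea can be illustrated in a few lines: a finite state machine abstraction restricts which API calls may appear at each position of a generated sequence, so no explicit checks on state data are needed. The states and call names below are invented for illustration; real QuickCheck models are written in Erlang.

```python
import random

# Hypothetical FSM abstraction: state -> {allowed call -> next state}.
# Because "uninitialised" only allows "init", every generated sequence
# implicitly satisfies the precondition "initialise first".
fsm = {
    "uninitialised": {"init": "idle"},
    "idle":          {"open": "open", "stop": "uninitialised"},
    "open":          {"read": "open", "close": "idle"},
}

def generate_sequence(length=6, start="uninitialised"):
    """Pick only calls the FSM allows in the current state."""
    state, calls = start, []
    for _ in range(length):
        allowed = fsm[state]
        call = random.choice(list(allowed))
        calls.append(call)
        state = allowed[call]
    return calls

seq = generate_sequence()
# every sequence starts with "init", with no explicit precondition check
```

The contrast with explicit preconditions is that here the filtering falls out of the transition table itself, which is also what makes the abstraction amenable to graphical editing.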
T. Gustafsson, M. Skoglund, A. Kobetski, D. Sundmark
Automotive System Testing by Independent Guarded Assertions
Testing is a key activity in industry to verify and validate products before they reach end customers. In hardware-in-the-loop system-level verification of automotive systems, testing is often performed using sequential execution of test scripts, each containing a mix of stimuli and assertions. In this paper, we propose and study an alternative approach for automated system-level testing of automotive systems. In our approach, assertion-only test scripts and one (or several) stimuli-only script(s) execute concurrently on the test driver. By separating the stimuli from the assertions, with each assertion independently determining when the system under test shall be verified, we seek to achieve three things: 1) tests that better represent real-world handling of the product, 2) reduced test execution time, and 3) increased defect detection. In addition to describing our proposed approach in detail, we provide experimental results from an industrial case study evaluating the approach in an automotive system test environment.
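The separation of stimuli from independently guarded assertions can be sketched with two concurrent threads. The "system" here is just a shared dictionary, and the signal names, timings, and guard condition are invented assumptions, not the paper's test environment.

```python
import threading
import time

# Shared stand-in for the system under test.
system = {"speed": 0, "brake_light": False}
failures = []

def stimuli():
    """Stimuli-only script: accelerate, then brake to a stop."""
    for speed in (30, 60, 0):
        system["brake_light"] = (speed == 0)  # light set before speed
        system["speed"] = speed
        time.sleep(0.01)

def guarded_assertion(stop):
    """Assertion-only script: it alone decides when to verify.
    Guard: once the car has moved, a standstill requires the brake light."""
    while not stop.is_set():
        if system["speed"] > 0:
            system["moved"] = True
        if system["speed"] == 0 and system.get("moved"):
            if not system["brake_light"]:
                failures.append("brake light off while stopped")
        time.sleep(0.001)

stop = threading.Event()
checker = threading.Thread(target=guarded_assertion, args=(stop,))
checker.start()
stimuli()          # stimuli and assertion run concurrently
stop.set()
checker.join()
# failures stays empty: the stimuli always light the brake lamp when stopping
```

The point of the design is visible even in this toy: the assertion script carries its own trigger condition, so the same stimuli script can be reused under many independent checks running in parallel.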
12:30 - 14:00
13:45 - 14:15
14:15 - 15:00
Keynote 2: Per Runeson (Lund University, Sweden)
The 4+1 View Model of Industry–Academia Collaboration Experiences
Software engineering research cannot be conducted in academic isolation. In order for researchers to address real world challenges, and for practitioners to adopt novel research results, industry-academia collaboration must take place. However, there are several factors that impact on the success or failure of such collaboration.
We have collected our experience from several such projects over two decades into an “architectural” model for industry–academia collaboration, inspired by Kruchten’s software architecture model. The model has four views of i) time, ii) space, iii) activity and iv) domain, corresponding to the questions: when, where, how and what. The +1 view is the collaboration scenario, binding the other four together.
The experiences, captured in the model, include i) the need for long term relations, ii) the observation that physical distance plays a role also in the digital world, iii) that the collaboration may include several kinds of activity for mutual benefit, and iv) that industries in different domains may learn from each other, catalyzed by academic research.
This keynote speech presents the 4+1 model and related experience, which may help the researcher and practitioner better utilize their collaboration, leading to improved industry practices and software engineering research.
15:00 - 15:30
15:30 - 17:00
Full Paper Presentations: Analysis and Improvement
Software Quality Research: from Processes to Model-based Techniques
In this article we argue that cyber-physical systems and the Internet of Things pose challenges for software quality research. In the emerging real-time digital economy, companies need to gain deep customer insight. The state of the art in model-based systems and model-based testing enables software engineers to exercise product-based, quantitative control of quality and to increase productivity. These gains can be invested in better understanding the domain and business conditions. We argue that cyber-physical systems provide an excellent basis for conducting collaborative research in model-based systems and model-based testing in the near future, and report on lessons learnt in three areas of software (quality) research: (1) process-oriented quality, (2) model-based systems and (3) model-based testing.
D. Tengeri, A. Beszédes, T. Gergely, L. Vidács, D. Havas, T. Gyimóthy
Beyond Code Coverage – an Approach for Test Suite Assessment and Improvement
Code coverage is successfully used to guide white box test design and evaluate the respective test completeness. However, simple overall coverage ratios are often not precise enough to effectively help when a (regression) test suite needs to be reassessed and evolved after software change. We present an approach for test suite assessment and improvement that utilizes code coverage information, but on a more detailed level and adds further evaluation aspects derived from the coverage. The main use of the method is to aid various test suite evolution situations such as removal, refactoring and extension of test cases as a result of code change or test suite efficiency enhancement. We define various metrics to express different properties of test suites beyond simple code coverage ratios, and present the assessment and improvement process as an iterative application of different improvement goals and more specific sub-activities. The method is demonstrated by applying it to improve the tests of one of our experimental systems.
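One family of "beyond the overall ratio" metrics can be illustrated with per-test coverage sets. The metric below (unique contribution per test) is our own illustrative example, not necessarily one of the paper's metrics; the test names and covered elements are invented.

```python
# Per-test-case coverage: test case -> set of covered code elements.
coverage = {
    "t1": {"a", "b", "c"},
    "t2": {"b", "c"},
    "t3": {"c", "d"},
}
all_elements = {"a", "b", "c", "d", "e"}

covered = set().union(*coverage.values())
overall_ratio = len(covered) / len(all_elements)  # the usual single number

def unique_coverage(test):
    """Elements only this test covers; an empty set flags a candidate
    for removal or refactoring during test suite evolution."""
    others = set().union(*(s for t, s in coverage.items() if t != test))
    return coverage[test] - others

# t2 covers nothing that t1 and t3 do not, so the overall ratio alone
# would hide that it is redundant.
```

The same per-test sets support the other evolution activities the abstract mentions, e.g. ranking tests for extension toward the uncovered elements.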
C. Biray, F. Buzluca
A Learning-Based Method for Detecting Defective Classes in Object-Oriented Systems
Code or design problems in software classes reduce the understandability, flexibility and reusability of the system. Performing maintenance activities on defective components, such as adding new features, adapting to changes, finding bugs, and correcting errors, is hard and time-consuming. Unless the design defects are corrected by a refactoring process, these error-prone classes will most likely generate new errors after later modifications. Therefore, these classes will have a high error frequency (EF), defined as the ratio between the number of errors and the number of modifications. Early estimation of error-prone classes helps developers focus on defective modules, thus reducing testing time and maintenance costs. In this paper, we propose a learning-based decision tree model for detecting error-prone classes with structural design defects. The main novelty of our approach is that we consider the EFs and change counts (ChC) of classes to construct a proper data set for training the model. We built our training set, which includes design metrics of classes, by analyzing numerous releases of real-world software products and considering the EFs of classes to mark them as error-prone or non-error-prone. We evaluated our method on two long-standing software solutions of Ericsson Turkey and shared and discussed our findings with the development teams. The results show that our approach succeeds in finding error-prone classes and can be used to decrease testing and maintenance costs.
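The error-frequency labelling used for the training data can be sketched directly from its definition, EF = errors / modifications. The class names, change counts, and the 0.3 threshold below are illustrative assumptions, not values from the paper.

```python
# Hypothetical change history: class -> (error-fixing changes, all changes).
history = {
    "Parser":    (8, 20),
    "Logger":    (1, 25),
    "Scheduler": (6, 10),
}

def error_frequency(errors, changes):
    """EF as defined above: errors per modification."""
    return errors / changes if changes else 0.0

EF_THRESHOLD = 0.3  # assumed cut-off separating error-prone classes

labels = {cls: error_frequency(e, c) >= EF_THRESHOLD
          for cls, (e, c) in history.items()}
# Parser (EF 0.4) and Scheduler (EF 0.6) are labelled error-prone;
# Logger (EF 0.04) is not.
```

In the paper these labels, together with design metrics per class, form the training set for the decision tree.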
L. Vidács, F. Horváth, J. Mihalicza, B. Vancsics, A. Beszédes
Supporting Software Product Line Testing by Optimizing Code Configuration Coverage
Software product lines achieve much shorter time to market by system level reuse and code variability. A possible way to achieve this flexibility is to use generic components, including the core system, in different products in alternative configurations. The focus of testing efforts for such complex and highly variable systems often shifts from testing specific products to assessing the overall quality of the core system or potential new configurations. As a complementary approach to feature models and related combinatorial testing methods optimizing for feature coverage, we apply a source code oriented analysis of variability. We present two algorithms that optimize for high coverage of the common code base in terms of C++ preprocessor-based configurations with a limited set of actual configurations selected for testing. The methods have been evaluated on iGO Navigation, a large industrial system with typical configuration support for product lines, hence we believe the approach can be generalized to other systems as well.
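Selecting a small set of configurations that together cover as many conditional code blocks as possible is, at its core, a set-cover problem. The greedy sketch below illustrates that framing under invented configurations and feature-guarded blocks; it is not one of the paper's two algorithms.

```python
# Hypothetical mapping: configuration -> #ifdef-guarded code blocks it enables.
configs = {
    "product_A": {"gps", "audio", "maps3d"},
    "product_B": {"gps", "bluetooth"},
    "product_C": {"audio", "maps3d"},
    "product_D": {"bluetooth", "voice"},
}

def greedy_select(configs, budget):
    """Classic greedy set cover under a limited testing budget:
    repeatedly pick the configuration adding the most uncovered blocks."""
    chosen, covered = [], set()
    for _ in range(budget):
        best = max(configs, key=lambda c: len(configs[c] - covered))
        if not configs[best] - covered:
            break  # no configuration adds anything new
        chosen.append(best)
        covered |= configs[best]
    return chosen, covered

chosen, covered = greedy_select(configs, budget=2)
# two configurations (A and D) already cover all five guarded blocks
```

With real C++ preprocessor data the block sets would come from a variability analysis of the common code base, which is the source-code-oriented part of the paper's approach.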