In software testing, mutation analysis is a technique that systematically inserts simple bugs into a program under test. Once a set of buggy programs, known as mutants, has been created, each mutant is run against a set of test cases. If every mutant is detected, that is, if at least one test case fails on each mutant, the set of test cases (or the technique that selected them) is deemed good. Fundamental to mutation analysis is the so-called coupling effect hypothesis, which states that a set of test cases detecting most simple bugs in a program will also detect most complex faults in the same program. If the coupling effect does not hold for real software systems, then mutation analysis is not a reliable way of deciding whether a program is thoroughly tested. This would also cast doubt on the use of mutation analysis in research as a way of assessing the effectiveness of testing techniques.
The validity of any evidence on the coupling effect is limited by the extent to which the programs and faults studied are representative of real-world programs and naturally occurring faults. However, nearly all existing studies of the coupling effect use small programs and artificial faults generated for the sake of the experiment. In this project, we will empirically study the coupling effect in large software systems with naturally occurring faults. We expect to provide significant evidence supporting or refuting the hypothesis, allowing researchers and testers to determine whether mutation analysis is a meaningful way to assess their work.
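To make the idea concrete, the following minimal Python sketch illustrates the workflow described above; the function, the mutant, and the test cases are invented for illustration and are not taken from the project. A single-operator change stands in for a simple bug, and a small test suite that kills this mutant also happens to detect a more complex, hand-written fault, which is the behaviour the coupling effect hypothesis predicts.

```python
# Original: returns the largest element of a non-empty list.
def max_value(xs):
    best = xs[0]
    for x in xs[1:]:
        if x > best:
            best = x
    return best

# Mutant: a single-operator change ('>' becomes '<'), the kind of
# simple bug that mutation analysis inserts systematically.
def max_value_mutant(xs):
    best = xs[0]
    for x in xs[1:]:
        if x < best:          # mutated comparison
            best = x
    return best

# A more complex, hand-written fault: the loop skips the last element.
def max_value_complex_fault(xs):
    best = xs[0]
    for x in xs[1:-1]:        # off-by-one slice drops the final element
        if x > best:
            best = x
    return best

def passes_test_suite(impl):
    """Return True iff the implementation passes every test case."""
    cases = [([3, 1, 2], 3), ([1, 2, 5], 5), ([-4, -2, -7], -2)]
    return all(impl(xs) == expected for xs, expected in cases)

if __name__ == "__main__":
    assert passes_test_suite(max_value)                    # original passes
    assert not passes_test_suite(max_value_mutant)         # mutant is killed
    assert not passes_test_suite(max_value_complex_fault)  # complex fault also detected
```

In a full study, the mutants would be generated automatically by a mutation tool, and the complex faults would be naturally occurring defects from large systems rather than hand-written ones.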
First Name | Last Name | Title |
---|---|---|
Daniel | Sundmark | Professor |
Elaine | Weyuker | Visiting Professor |
Sara | Abbaspour | Associated Senior Lecturer |
Thomas | Ostrand | |
Intermittently Failing Tests in the Embedded Systems Domain (Jul 2020). Per Erik Strandberg, Thomas Ostrand, Elaine Weyuker, Wasif Afzal, Daniel Sundmark. International Symposium on Software Testing and Analysis (ISSTA'20)
A Runtime Verification Tool for Detecting Concurrency Bugs in FreeRTOS Embedded Software (Aug 2018). Sara Abbaspour, Daniel Sundmark, Sigrid Eldh, Hans Hansson. 17th IEEE International Symposium on Parallel and Distributed Computing (ISPDC-2018)
Concurrency bugs in open source software: a case study (Apr 2017). Sara Abbaspour, Daniel Sundmark, Sigrid Eldh, Hans Hansson. Journal of Internet Services and Applications (JISA)
Runtime Verification for Detecting Suspension Bugs in Multicore and Parallel Software (Mar 2017). Sara Abbaspour, Daniel Sundmark, Hans Hansson. ICST Workshop on Testing Extra-Functional Properties and Quality Characteristics of Software Systems (ITEQS'17)
Transitioning Fault Prediction Models to a New Environment (Sep 2016). Jesper Derehag, Elaine Weyuker, Thomas Ostrand, Daniel Sundmark. European Dependable Computing Conference (EDCC'16)
A Runtime Verification Based Concurrency Bug Detector for FreeRTOS Embedded Software. Sara Abbaspour, Eduard Paul Enoiu, Adnan Causevic, Daniel Sundmark, Hans Hansson. IEEE Access