TESTOMAT Project - The Next Level of Test Automation

Status: Finished

Start date: 2017-10-01

End date: 2020-11-30

Quality software has long been synonymous with software “without bugs”. Today, however, quality software has come to mean “easy to adapt” because of the constant pressure to change. Consequently, modern software teams seek a delicate balance between two opposing forces: striving for reliability and striving for agility.

The TESTOMAT project will support software teams in striking the right balance by increasing development speed without sacrificing quality. To achieve this goal, the project will advance the state of the art in test automation for software teams moving towards a more agile development process.

The project will ultimately result in a Test Automation Improvement Model, which will define key improvement areas in test automation, with a focus on measurable improvement steps. Moreover, the project will advance the state of the art in test automation tools, investigating topics such as test effectiveness, test prioritisation, and testing for quality standards.

The results of the TESTOMAT project will allow the software testing teams in the consortium to make their testing more effective, freeing up resources for adding value to their products. The tool vendors and consultants, on the other hand, will improve their offerings and thereby gain market share in a growing but highly competitive market.

Publications:

An Autonomous Performance Testing Framework using Self-Adaptive Fuzzy Reinforcement Learning (Mar 2021)
Mahshid Helali Moghadam, Mehrdad Saadatmand, Markus Borg, Markus Bohlin, Björn Lisper
Software Quality Journal (Springer) (SQJ)

Poster: Performance Testing Driven by Reinforcement Learning (Oct 2020)
Mahshid Helali Moghadam, Mehrdad Saadatmand, Markus Borg, Markus Bohlin, Björn Lisper
IEEE 13th International Conference on Software Testing, Validation and Verification (ICST 2020)

Performance Comparison of Two Deep Learning Algorithms in Detecting Similarities Between Manual Integration Test Cases (Oct 2020)
Cristina Landin, Leo Hatvani, Sahar Tahvili, Hugo Haggren, Martin Längkvist, Amy Loutfi, Anne Håkansson
The Fifteenth International Conference on Software Engineering Advances (ICSEA 2020)

Automated Analysis of Flakiness-mitigating Delays (Oct 2020)
Jean Malm, Adnan Causevic, Björn Lisper, Sigrid Eldh
IEEE/ACM 1st International Conference on Automation of Software Test (AST'20)

From Requirements to Verifiable Executable Models using Rebeca (Sep 2020)
Marjan Sirjani, Luciana Provenzano, Sara Abbaspour, Mahshid Helali Moghadam
Software Engineering and Formal Methods Collocated Workshops 2020 (SEFMW 2020)

A Novel Methodology to Classify Test Cases Using Natural Language Processing and Imbalanced Learning (Aug 2020)
Sahar Tahvili, Leo Hatvani, Enislay Ramentol, Rita Pimentel, Wasif Afzal, Francisco Herrera
Engineering Applications of Artificial Intelligence (EAAI)


Björn Lisper, Professor

Room: U1-091
Phone: +46-21-151709