You are required to read and agree to the below before accessing a full-text version of an article in the IDE article repository.

The full-text document you are about to access is subject to national and international copyright laws. In most cases (but not necessarily all), personal use is permitted provided that the copyright owner is duly acknowledged and respected. All other use typically requires explicit permission (often in writing) from the copyright owner.

For the reports in this repository we specifically note that

  • the use of articles under IEEE copyright is governed by the IEEE copyright policy
  • the use of articles under ACM copyright is governed by the ACM copyright policy
  • technical reports and other articles issued by Mälardalen University are free for personal use. For other use, the explicit consent of the authors is required
  • in other cases, please contact the copyright owner for detailed information

By accepting I agree to acknowledge and respect the rights of the copyright owner of the document I am about to access.

If you are in doubt, feel free to contact

Poster: Performance Testing Driven by Reinforcement Learning

Publication Type:

Conference/Workshop Paper


IEEE 13th International Conference on Software Testing, Validation and Verification


Performance testing, involving performance test case generation and execution, remains a challenge, particularly for complex systems. Different application-, platform-, and workload-based factors can influence the performance of the software under test. Common approaches for generating platform- and workload-based test conditions rely on system models, source code analysis, real-usage modelling, or use-case-based design techniques. However, such artifacts might not always be available during testing. Moreover, creating a detailed performance model is often difficult. On the other hand, test automation solutions such as automated test case generation can reduce effort and cost while potentially improving coverage of the intended test criteria. Furthermore, if the optimal way (policy) to generate test cases can be learnt by the testing system, then the learnt policy can be reused in further testing situations, such as testing variants or evolved versions of the software, and under changing factors of the testing process. This capability can lead to additional savings of cost and computation time in the testing process. In this research, we have developed an autonomous performance testing framework using model-free reinforcement learning augmented by fuzzy logic and self-adaptive strategies. It is able to learn the optimal policy to generate different platform-based and workload-based test conditions without access to the system model or source code. The use of fuzzy logic and self-adaptive strategies helps tackle uncertainty and improves the accuracy and adaptivity of the proposed learning. Our evaluation experiments showed that the proposed autonomous performance testing framework is able to generate test conditions efficiently and in a way that adapts to varying testing situations.
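To make the core idea concrete, the sketch below shows how a model-free reinforcement learning agent (plain tabular Q-learning, without the paper's fuzzy-logic and self-adaptive extensions) could learn to steer resource pressure toward a performance breach. The environment, reward shape, state discretisation, and all names (`observed_response_time`, `TARGET`, `N_LEVELS`) are illustrative assumptions, not the paper's actual framework or results.

```python
import random

# Hypothetical simulated system under test: response time grows monotonically
# with the resource pressure the tester applies. This stands in for executing
# a real test condition and measuring the system's response.
def observed_response_time(pressure_level):
    return 0.1 * (1 + pressure_level)

ACTIONS = [-1, 0, +1]   # decrease / keep / increase the pressure level
N_LEVELS = 10           # discretised pressure levels 0..9 (the state space)
TARGET = 0.8            # response-time threshold the agent tries to reach

def reward(rt):
    # Higher reward the closer the observed response time is to the target
    # threshold: the agent learns test conditions that expose the breach.
    return -abs(rt - TARGET)

def train(episodes=300, steps=20, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    q = [[0.0] * len(ACTIONS) for _ in range(N_LEVELS)]
    for _ in range(episodes):
        state = rng.randrange(N_LEVELS)
        for _ in range(steps):
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)), key=lambda i: q[state][i])
            nxt = min(max(state + ACTIONS[a], 0), N_LEVELS - 1)
            r = reward(observed_response_time(nxt))
            # standard Q-learning update
            q[state][a] += alpha * (r + gamma * max(q[nxt]) - q[state][a])
            state = nxt
    return q

def generate_test_condition(q, state=0, steps=30):
    # Reuse the learnt policy greedily to derive a test condition,
    # mirroring the paper's point that a learnt policy is reusable.
    for _ in range(steps):
        a = max(range(len(ACTIONS)), key=lambda i: q[state][i])
        state = min(max(state + ACTIONS[a], 0), N_LEVELS - 1)
    return state

if __name__ == "__main__":
    q_table = train()
    level = generate_test_condition(q_table)
    print("learnt pressure level:", level)
```

Because the policy is stored as a Q-table independent of any system model, it can be retrained or reused when the software under test changes, which is the reuse property the abstract highlights.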


@inproceedings{Helali Moghadam5838,
author = {Mahshid Helali Moghadam and Mehrdad Saadatmand and Markus Borg and Markus Bohlin and Bj{\"o}rn Lisper},
title = {Poster: Performance Testing Driven by Reinforcement Learning},
month = {October},
year = {2020},
booktitle = {IEEE 13th International Conference on Software Testing, Validation and Verification},
url = {}
}