ABOUT ME: I am a researcher and lecturer at Mälardalen University in Västerås, Sweden, primarily affiliated with the Software Testing Laboratory and the Formal Modelling and Analysis groups at the Department of Networked and Embedded Systems. A native of Bucharest, I earned an Engineer's degree from the Polytechnic University of Bucharest in 2009 and a PhD from Mälardalen University in 2016.
My research interests span requirements engineering, applied formal verification, software engineering, and empirical research, especially how to test, maintain, evolve and assure high-quality industrial software systems. I teach automated testing and model-based testing at the master's and PhD levels, as well as to industrial practitioners. Currently, I am doing research on a diverse array of topics in software development, including requirements modelling and analysis, product line engineering, the ethical and human aspects of software testing, the role of automatic test generation (where tests are created intelligently and algorithmically) in industrial practice, the use of model checking for engineering better systems, and what makes tests efficient and effective.
SUPERVISION: If you are interested in doing a bachelor's, master's or PhD thesis at Mälardalen University, and if you are an ambitious student interested in software engineering, embedded systems development and software testing, have a look at the general topics listed below (none of these topics is currently taken by a student). If you are interested in any of them, please email me.
I advise bachelor's and master's theses in all the areas in which I actively conduct research:
Software Testing, with a particular focus on test design and benchmarking of tests.
Requirements Engineering, with a focus on requirements modelling, analysis and verification.
Embedded Systems, particularly the development of industrial control and safety-critical software.
Model Checking and Model-Based Testing, particularly the use of models (e.g., timed automata) for building better systems.
Human aspects of Software Engineering, particularly cognitive aspects of software development.
Engineering Digital Systems and Circuits, especially using Verilog and other hardware description languages (HDLs), and particularly how they relate to testing.
PODCAST: Listen to my podcast, Testing Habits, featuring conversations about software testing and software engineering.
Requirements Engineering and Model-Based Systems Engineering
As systems continue to increase in complexity, some companies have turned to Model-Based Systems Engineering (MBSE) to address challenges such as requirements complexity, consistency, traceability, and quality assurance during system development. Our research focuses on the adoption of MBSE and on the empirical study of requirements engineering practices, as well as on requirements management and analysis.
Metrics for Quality Assurance
Software metrics have been used in the software engineering community for predicting quality attributes such as maintainability, bug proneness and robustness. In our studies, we focus on gathering experimental evidence on whether such metrics can be used to estimate these aspects during system development.
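To illustrate the kind of measurement involved, here is a minimal Python sketch, not taken from our studies, that collects a rough cyclomatic-complexity proxy per function; in an empirical study, such numbers would then be correlated with quality data (e.g., defect reports). All names in the example are hypothetical.

```python
# Minimal sketch (illustrative only): count branch points per function as a
# rough cyclomatic-complexity proxy, a metric that could later be correlated
# with defect or maintainability data in an empirical study.
import ast

BRANCH_NODES = (ast.If, ast.For, ast.While, ast.Try, ast.With, ast.BoolOp)

def branch_count(func: ast.FunctionDef) -> int:
    """Count branch-inducing nodes inside a function body."""
    return sum(isinstance(node, BRANCH_NODES) for node in ast.walk(func))

def metrics_per_function(source: str) -> dict[str, int]:
    """Map each function name to its branch count plus one (straight-line path)."""
    tree = ast.parse(source)
    return {
        node.name: 1 + branch_count(node)
        for node in ast.walk(tree)
        if isinstance(node, ast.FunctionDef)
    }

if __name__ == "__main__":
    example = """
def classify(x):
    if x < 0:
        return "negative"
    for _ in range(x):
        pass
    return "non-negative"
"""
    print(metrics_per_function(example))  # {'classify': 3}
```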
Human Aspects of Test Design
Software testing is a complex, intellectual activity based (at least) on analysis, reasoning, decision making, abstraction and collaboration, performed in a highly demanding environment. Naturally, it draws on multiple cognitive resources of software testers. However, while a cognitive psychology perspective is increasingly used in the general software engineering literature, it has yet to find its place in software testing. To the best of our knowledge, no theory of software testers’ cognitive processes exists. We took the first steps towards such a theory by presenting a cognitive model of software testing based on how problem solving is conceptualized in cognitive psychology. The results support a problem-solving-based model of test design that captures testers’ cognitive processes and could help improve test design practices and the tools supporting these activities.
Automatic Test Generation
Since the early days of software testing, automatic test generation has been suggested as a way of allowing tests to be created at a lower cost. However, industrially useful and applicable tools for automatic test generation are still scarce. As a consequence, the evidence regarding the applicability or feasibility of automatic test generation in industrial practice is limited. This is especially problematic if we consider the use of automatic test generation for industrial safety-critical control systems, such as those found in power plants, airplanes, or trains.
Our results show that there are still challenges associated with the use of automatic test generation. In particular, we found that while automatically generated tests, based on code coverage or mutation, can exercise the logic of the software as well as tests written manually, and can do so in a fraction of the time, they do not show better fault detection compared to manually created tests. Our results highlight the need for improving the goals used by automatic test generation tools.
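As an illustration of the underlying idea, the following Python sketch shows coverage-guided random test generation for a toy function; the function under test, its branch labels, and the input ranges are all invented for this example, and real generators use far more sophisticated search and guidance strategies.

```python
# Hypothetical sketch of coverage-guided random test generation: random inputs
# are kept only if they exercise a decision outcome that earlier inputs did not.
import random

def function_under_test(pressure: int, temperature: int) -> str:
    """Toy decision logic standing in for industrial control code."""
    if pressure > 100:
        return "vent"
    if temperature > 80 and pressure > 50:
        return "cool"
    return "ok"

def branches_hit(pressure: int, temperature: int) -> set[str]:
    """Record which decision outcomes a given input exercises."""
    hits = {"p>100" if pressure > 100 else "p<=100"}
    if pressure <= 100:
        hits.add("cool" if (temperature > 80 and pressure > 50) else "no-cool")
    return hits

def generate_tests(budget: int = 1000, seed: int = 0) -> list[tuple[int, int]]:
    random.seed(seed)
    covered: set[str] = set()
    suite: list[tuple[int, int]] = []
    for _ in range(budget):
        candidate = (random.randint(0, 200), random.randint(0, 200))
        new = branches_hit(*candidate) - covered
        if new:                      # keep only coverage-increasing inputs
            covered |= new
            suite.append(candidate)
    return suite

if __name__ == "__main__":
    for test in generate_tests():
        print(test, "->", function_under_test(*test))
```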
Combinatorial Testing
Combinatorial test generation techniques create tests by combining the input values of the software according to a given combinatorial strategy. Our results show that these techniques can be improved and successfully used in industrial practice; in particular, we proposed a timed base-choice criterion for testing industrial control software.
Using combinatorial testing in software testing practice represents significant progress in the development of automatic test generation approaches, and it can aid engineers in testing industrial software.
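To make the strategy concrete, here is a minimal Python sketch of base-choice test generation over hypothetical parameters: the base test keeps every parameter at its base value, and each additional test varies exactly one parameter to one of its non-base values. The timed base-choice criterion we proposed additionally accounts for when the varied values are applied; the sketch shows only the untimed combination step.

```python
# Minimal sketch of base-choice test generation over hypothetical parameters
# (not from a specific industrial case).
from typing import Any

def base_choice_suite(parameters: dict[str, list[Any]]) -> list[dict[str, Any]]:
    """Generate a test suite satisfying the base-choice criterion."""
    base = {name: values[0] for name, values in parameters.items()}
    suite = [dict(base)]
    for name, values in parameters.items():
        for value in values[1:]:            # vary one parameter at a time
            test = dict(base)
            test[name] = value
            suite.append(test)
    return suite

if __name__ == "__main__":
    # Hypothetical inputs of an industrial control function.
    params = {
        "mode":     ["auto", "manual", "maintenance"],
        "pressure": ["nominal", "high"],
        "sensor":   ["ok", "faulty"],
    }
    for test in base_choice_suite(params):
        print(test)
    # 1 base test + (2 + 1 + 1) single-parameter variations = 5 tests in total.
```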
Model-Based Analysis and Verification
Design models introduced early in the development process provide a holistic system description that captures the structure and functionality of a software system, as well as related extra-functional information, e.g., timing properties and resource annotations. I have co-authored several studies proposing efficient verification techniques, such as model checking, that can be applied to high-level design artefacts to provide early information on the design and implementation of embedded software systems.
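For intuition, the sketch below illustrates the core idea behind model checking, namely exhaustive exploration of a model's state space against a property, on a small hand-written and untimed transition system; the states, labels, and safety property are hypothetical, and verification of timed design models in practice relies on dedicated timed-automata model checkers.

```python
# Minimal sketch: explicit-state reachability checking of a safety property on
# a hypothetical, untimed controller model.
from collections import deque

# Hypothetical controller model: states and labelled transitions.
TRANSITIONS = {
    "idle":    {"start": "heating"},
    "heating": {"too_hot": "error", "done": "idle"},
    "error":   {"reset": "idle"},
}

def reachable_states(initial: str) -> set[str]:
    """Breadth-first exploration of all states reachable from `initial`."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        state = frontier.popleft()
        for target in TRANSITIONS.get(state, {}).values():
            if target not in seen:
                seen.add(target)
                frontier.append(target)
    return seen

def check_safety(initial: str, bad_states: set[str]) -> bool:
    """Safety property: no bad state is reachable from the initial state."""
    return reachable_states(initial).isdisjoint(bad_states)

if __name__ == "__main__":
    print(check_safety("idle", {"error"}))  # False: "error" is reachable
```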
Unveiling Cognitive Biases in Software Testing: Insights from a Survey and Controlled Experiment (Dec 2024). Eduard Paul Enoiu, Alexandru Cusmaru, Jean Malm. 31st Asia-Pacific Software Engineering Conference (APSEC 2024).
Optimizing Model-based Generated Tests: Leveraging Machine Learning for Test Reduction (Jul 2024). Muhammad Nouman Zafar, Wasif Afzal, Eduard Paul Enoiu, Zulqarnain Haider, Inderjeet Singh. The 20th Workshop on Advances in Model Based Testing (A-MOST 2024).
Requirements Similarity and Retrieval (Jul 2024). Muhammad Abbas, Sarmad Bashir, Mehrdad Saadatmand, Eduard Paul Enoiu, Daniel Sundmark.
Synthesis and Verification of Mission Plans for Multiple Autonomous Agents under Complex Road Conditions (Jun 2024). Rong Gu, Eduard Baranov, Afshin Ameri E., Eduard Paul Enoiu, Baran Çürüklü, Cristina Seceleanu, Axel Legay, Kristina Lundqvist. ACM Transactions on Software Engineering and Methodology (TOSEM).
Automating Test Generation of Industrial Control Software through a PLC-to-Python Translation Framework and Pynguin (Feb 2024). Mikael Ebrahimi Salari, Eduard Paul Enoiu, Cristina Seceleanu, Wasif Afzal. 30th Asia-Pacific Software Engineering Conference (APSEC 2023).
SmartDelta project: Automated quality assurance and optimization across product versions and variants (Nov 2023). Mehrdad Saadatmand, Muhammad Abbas, Eduard Paul Enoiu, Bernd-Holger Schlingloff, Wasif Afzal, Benedikt Dornauer, Michael Felderer. Microprocessors and Microsystems (MICPRO).
Jean Malm
Mikael Ebrahimi Salari
Damir Bilic
Daniel Flemström (former)
Henrik Gustavsson
Muhammad Abbas
Muhammad Nouman Zafar
Rong Gu (former)
Sarmad Bashir