You are required to read and agree to the below before accessing a full-text version of an article in the IDE article repository.

The full-text document you are about to access is subject to national and international copyright laws. In most cases (but not necessarily all), personal use is allowed provided that the copyright owner is duly acknowledged and respected. All other use typically requires the explicit permission (often in writing) of the copyright owner.

For the reports in this repository we specifically note that:

  • the use of articles under IEEE copyright is governed by the IEEE copyright policy (available at http://www.ieee.org/web/publications/rights/copyrightpolicy.html)
  • the use of articles under ACM copyright is governed by the ACM copyright policy (available at http://www.acm.org/pubs/copyright_policy/)
  • technical reports and other articles issued by Mälardalen University are free for personal use. For other use, the explicit consent of the authors is required
  • in other cases, please contact the copyright owner for detailed information

By accepting, I agree to acknowledge and respect the rights of the copyright owner of the document I am about to access.

If you are in doubt, feel free to contact webmaster@ide.mdh.se.

Requirements Ambiguity Detection and Explanation with LLMs: An Industrial Study

Publication Type:

Conference/Workshop Paper

Venue:

International Conference on Software Maintenance and Evolution


Abstract

Developing large-scale industrial systems requires high-quality requirements to avoid costly rework and project delays. However, linguistic ambiguities in natural language (NL) requirements have been a long-standing challenge, often introducing misinterpretations and inconsistencies that propagate throughout the development lifecycle. Such ambiguous NL requirements necessitate early detection and well-reasoned explanations to clarify and prevent further misunderstandings among stakeholders. While solutions have been developed to detect ambiguities in NL requirements, the advent of generative large language models (LLMs) offers new avenues for explanation-augmented requirements ambiguity detection. This paper empirically investigates LLMs for ambiguity detection and explanation in real-world industrial requirements by adopting an in-context learning paradigm. Our results from three industrial datasets show that LLMs achieve a 20.2% average performance increase in classifying ambiguous requirements when prompted with ten relevant in-context demonstrations (10-shot), compared to no demonstrations (0-shot). Additionally, we conducted human evaluations of the LLM-generated outputs with eight industry experts along four dimensions---naturalness, adequacy, usefulness and relevance---to gain practical insights. The results show an average rating of 3.84 out of 5 across evaluation criteria, indicating that the approach is effective in providing supporting explanations for requirement ambiguities.

Bibtex

@inproceedings{Bashir7221,
author = {Sarmad Bashir and Alessio Ferrari and Muhammad Abbas Khan and Per Erik Strandberg and Zulqarnain Haider and Mehrdad Saadatmand and Markus Bohlin},
title = {Requirements Ambiguity Detection and Explanation with LLMs: An Industrial Study},
booktitle = {International Conference on Software Maintenance and Evolution},
url = {http://www.es.mdu.se/publications/7221-}
}