Call for Papers: Evaluation of Qualitative Aspects of Intelligent Software Assistants

With the growing complexity of modern software systems, software engineers must cope with so-called information overload along the whole development lifecycle, from requirement elicitation to the development of the actual system. In addition, fast-evolving technologies and frameworks emerge daily. Non-expert users may therefore struggle to express requirements properly or to select the third-party software libraries needed to implement a specific functionality. This mainly impacts the software design and construction phases, which account for more than 50% of the effort made by software engineers over the life of a project; current software projects thus need to be easily scalable in order to reduce such maintenance costs.

To address these challenges, intelligent software assistants have been proposed to ease the burden of choice by providing a set of automated capabilities that help developers in several tasks, e.g., debugging, testing, navigating Q&A forums, and extracting information from open-source repositories. After an inference phase, the system provides a set of valuable items, namely recommendations, tailored to the current task. While traditional systems rely on curated knowledge bases as the primary foundation of their recommendation processes, the advent of cutting-edge AI models, i.e., Large Language Models (LLMs) such as those of the GPT family, is dramatically changing how these systems are designed, developed, and evaluated. IDEs like Visual Studio and Eclipse are already being extended with LLM-based assistants, e.g., Copilot or Caret.

In this respect, a key point is to ensure a set of qualitative aspects that go beyond the accuracy of those assistants. Concretely, the provided items must be free from any kind of bias, preserve the user’s privacy, adhere to software licenses, and, overall, contribute to building reliable and trustworthy software projects. This objective has recently been recognized by the European Commission, which proposed the AI Act, a dedicated regulation for AI-intensive systems that provides a wide range of requirements, methodologies, and metrics focused on ensuring such qualitative aspects. Thus, there is a need to assess the outcomes of intelligent software assistants by adhering to the rigorous protocols and methodologies provided by the empirical software engineering field. To this end, we propose a special issue of Automated Software Engineering that focuses on evaluating qualitative aspects of intelligent software assistants, with the aim of attracting researchers to this field and creating a community in which to share and discuss new ideas and start collaborations.

Topics of interest include, but are not limited to, the following:

  • Reuse of AI-based tools, techniques, and methodologies in developing intelligent software assistants.
  • Foundational theories for automated software assistants to understand the underlying principles that can drive the development of more robust and generalizable recommendation systems in software engineering, with a focus on their evaluation.
  • Evaluating quality aspects of software assistants, e.g., explainability, transparency, and fairness, ensuring that they produce reliable results.
  • New methods, tools, and frameworks to support development tasks, e.g., code-related tasks, automated classification of software artifacts, or code generation leveraging generative AI models.
  • Designing specific prompt engineering techniques for intelligent software assistants based on large language models to ensure quality aspects.
  • Data-driven approaches for software assistants: Leveraging large-scale data from open-source software (OSS) repositories, Q&A forums, and issue trackers to enhance the effectiveness of software assistants.
  • Integration with human-in-the-loop systems: Balancing automated recommendations with human expertise to improve decision-making in complex SE scenarios.
  • Adoption of advanced generative AI models, including LLMs and pre-trained models (PTMs), for software assistance, with particular emphasis on quality effects.
  • Empirical studies and controlled experiments to assess qualitative aspects of intelligent systems.
  • Evolution of software systems and long-term recommendations, e.g., how software assistants can cope with the evolving nature of software systems and provide recommendations that consider long-term system maintainability and evolution.
  • Cross-disciplinary applications of software assistants: Studying how techniques from other domains, e.g., human-computer interaction, natural language processing, and social network analysis, can enhance their effectiveness and usability.
  • Surveys and experience reports on software assistants to support software engineering tasks, both in academic and industry use cases.

Workshop Information

The 1st edition of the workshop on Evaluation of Qualitative Aspects of Intelligent Software Assistants (EQUISA - https://conf.researchr.org/home/ease-2025/equisa-2025) will be held in Istanbul, Turkey, on June 17th, 2025, co-located with the 29th International Conference on Evaluation and Assessment in Software Engineering (EASE - https://conf.researchr.org/home/ease-2025). The primary goal of EQUISA 2025 is to provide a dedicated forum for exploring and discussing the qualitative dimensions of intelligent software assistants, encompassing their design, development, and deployment in real-world applications.

EQUISA solicits two categories of contributions: full research papers (up to 10 pages) and ongoing research papers (up to 5 pages). Full research papers can describe empirical research (i.e., quantitative, qualitative, and mixed research) on intelligent software systems. We also welcome replication studies and negative-results papers if they can support advice or lessons learned. Ongoing research papers should aim at communicating new ideas in the context of developing intelligent software assistants for which the authors want to obtain early feedback from the workshop community, especially on the evaluation and assessment strategies. Such papers must describe the idea and the proposed evaluation and assessment strategy, possibly (but not necessarily) with some preliminary results. We expect to receive at least 10 submissions in total, each of which will be reviewed by at least three workshop program committee members. This year, the workshop received four submissions, three of which were accepted as full papers after a single-blind review process; their authors will therefore be invited to submit an extended version of their manuscripts.

Deadlines

Submission opens: July 1, 2025

Submission deadline: December 31, 2025

First review round completed: March 15, 2026

Revised manuscripts due: July 31, 2026

Completion of the review and revision process (final notification): November 30, 2026

How to Submit

All submitted papers will undergo a rigorous peer-review process and should adhere to the general principles of the Automated Software Engineering journal, prepared according to the Guide for Authors (https://ause-journal.github.io/cfp.html). The authors of the papers accepted at the 1st International Workshop on Evaluation of Qualitative Aspects of Intelligent Software Assistants (EQUISA) will be invited to substantially extend their work and submit it to the special issue. The workshop is in its first edition and is co-located with the 29th International Conference on Evaluation and Assessment in Software Engineering (EASE 2025). Submitted papers must be original, must not have been previously published, and must not be under consideration for publication elsewhere. If a paper has already been presented at a conference, it should be extended with at least 30% new material before being submitted to this special issue. Authors must provide any previously published material relevant to their submission and describe the additions.

Guest Editors

Claudio Di Sipio is a postdoctoral researcher in the Department of Engineering, Mathematics, and Computer Science at the University of L’Aquila, within the SWEN research group. He received his Ph.D. in 2023 from the University of L’Aquila and was an invited researcher at the GEODES lab, University of Montreal, for six months. His research interests include recommendation systems for software engineering, mining OSS repositories, model-driven engineering, and the application of ML/AI techniques to software engineering. He has served as a program committee member for several international conferences, including ASE, MSR, FORGE, and MODELS, and as a reviewer for journals (TOSEM, TSE, IST, SoSyM). He was recently appointed guest editor of the IST special issue entitled “Next-Generation Model-Based Software Engineering with Foundation Models”. Contact him at claudio.disipio@univaq.it. Further information about him is available at https://claudiodsi.github.io/.

Valeria Pontillo is a postdoctoral researcher at the Gran Sasso Science Institute (GSSI), Italy. She received her bachelor’s, master’s, and Ph.D. degrees in Computer Science from the University of Salerno, Italy. Her research interests include software code and test quality, predictive analytics, mining software repositories, software maintenance and evolution, empirical software engineering, and security aspects of software code. She serves and has served as a reviewer for international conferences (e.g., ICSE 2026, CHASE 2025, SANER, SCAM NIER, ICSME NIER 2024, and ASE NIER 2024) and for journals in the software engineering field (e.g., EMSE, TSE, TOSEM, JSS, IST). She has been program co-chair of SECUTE 2024 and track co-chair of the Software Management track of the 51st Euromicro Conference Series on Software Engineering and Advanced Applications (SEAA). In addition, she served as a guest editor for the special issue “Security Testing for Complex Software Systems (SECUTE)” in Springer’s Empirical Software Engineering journal. Contact her at valeria.pontillo@gssi.it. More info at https://valeriapontillo.github.io/.

Riccardo Rubei is a postdoctoral researcher at the University of L’Aquila, Italy. He earned his Ph.D. in 2022 from the University of L’Aquila. His research interests include software engineering, recommender systems, and several aspects of model-driven engineering (MDE). Furthermore, he is active in the field of sustainability and green software engineering. He organized the workshop “Large Language Models for Model-Driven Engineering” at STAF 2024 and “Foundations and Practice of Visual Modeling” at MODELS 2024. He has served on the program committees of several software engineering conferences, including MSR, ICSE Artifact Evaluation, and MODELS, and as a reviewer for several journals, such as Software and Systems Modeling (SoSyM), JSS, and Information Processing and Management (IPM), to name a few. Contact him at riccardo.rubei@univaq.it.

Pablo Gomez-Abajo is an Assistant Professor in the Department of Computer Science of the Universidad Autónoma de Madrid. In 2015, he joined the Modelling and Software Engineering research group (https://miso.es) led by Juan de Lara and Esther Guerra. In 2020, he defended his Ph.D. thesis, graded excellent cum laude. His research interests include model-driven engineering, domain-specific languages, model-based mutation and metamorphic testing, and the automated generation of exercises. He has served as a program committee member for several international conferences, including SLE and MODELS, and has been a reviewer for Elsevier’s Information and Software Technology and SoftwareX journals. Contact him at pablo.gomeza@uam.es. More info at https://gomezabajo.github.io.