Call for Papers: Replications and Negative Results in SE

A significant portion of software engineering research is empirical and builds on prior work. However, the current publishing landscape incentivizes positive results, such as methods that outperform the prior state of the art. This Special Issue is dedicated to replication studies of prior work and to negative results. The scope of negative results also includes failed attempts that went unreported in previously published work. The Special Issue is meant to disseminate ideas that did not prove fruitful but would interest the broader SE community. It also encourages researchers to extend previously published work to additional datasets, as well as to report on the reproducibility of prior studies.

Reproducibility studies should clearly report which parts were and were not reproducible, and are encouraged to follow the ACM guidelines on reproducibility (different team, different experimental setup): “The measurement can be obtained with stated precision by a different team, a different measuring system, in a different location on multiple trials. For computational experiments, this means that an independent group can obtain the same result using artifacts which they develop completely independently.”

Specifically, this issue invites submissions on the following non-exhaustive list of topics:

  • Broadly applicable empirical results; for example, an approach that did not work but that practitioners would consider reasonable to try, especially when the demonstration of its failure is accompanied by a hypothesis or explanation.
  • Ablation studies of previous work, such as those showing that reported improvements were attributed to the wrong component of a method, or that some component of the method does not significantly affect performance.
  • Datasets or experiments showing that previous approaches do not generalize to other tasks, datasets, or domains.
  • Trivial baselines that work suspiciously well on some task or dataset and have not been previously reported.
  • Experiments demonstrating the instability of prior work due to random seeds, hardware-specific constraints, processing methods, etc.
  • Demonstrations of issues with widely used methodology in the SE literature, including but not limited to data collection and preprocessing methods, and evaluation methodologies and metrics. Such papers are encouraged to provide recommendations to the community.
  • Theoretical results showing that a given approach should not be expected to work, possibly augmented with experiments demonstrating the negative effect.

How to Submit

Submissions must be original and not published or under review elsewhere. However, some overlap with prior work is expected due to the nature of the call.

Authors of all accepted papers from the RENE workshop at ASE will be invited to submit extended versions of their papers to the Special Issue. Alternatively, authors may submit directly to the Special Issue via Springer's website: https://link.springer.com/collections/hagebcgebi.

Deadline

January 31, 2025

Guest editors

  • Rahul Yedida, LexisNexis, rahul@ryedida.me
  • Tim Menzies, North Carolina State University