The NLBSE'24 Tool Competition

Jan 1, 2024 · Rafael Kallis, Giuseppe Colavito, Ali Al-Kaswan, Luca Pascarella, Oscar Chaparro, Pooja Rani
Abstract
We report on the organization and results of the tool competition of the third International Workshop on Natural Language-based Software Engineering (NLBSE'24). As in prior editions, we organized the competition on automated issue report classification, with a focus on small repositories, and on automated code comment classification, with a larger dataset. In this edition, six teams submitted multiple classification models to automatically classify issue reports and code comments. The submitted models were fine-tuned and evaluated on benchmark datasets of 3 thousand issue reports and 82 thousand code comments, respectively. This paper reports the details of the competition, including the rules, the participating teams and their models, and the ranking of models based on their average classification performance across issue report and code comment types.
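As a rough illustration of the ranking criterion described above, the sketch below computes a per-class F1 score and averages it across label types. This is a minimal, hypothetical example, assuming the competition scores submissions by F1 averaged over categories (consistent with prior NLBSE editions); the exact metric definition, label sets, and tie-breaking rules are specified in the paper, and all names and data here are illustrative.

```python
def per_class_f1(y_true, y_pred, label):
    """F1 score for a single label in a multi-class setting."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == label)
    fp = sum(1 for t, p in zip(y_true, y_pred) if p == label and t != label)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == label and p != label)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)

def average_f1(y_true, y_pred, labels):
    """Macro average: mean of per-class F1 scores across all label types."""
    return sum(per_class_f1(y_true, y_pred, l) for l in labels) / len(labels)

# Hypothetical issue report labels and predictions, for illustration only.
labels = ["bug", "feature", "question"]
y_true = ["bug", "bug", "feature", "question", "feature"]
y_pred = ["bug", "feature", "feature", "question", "feature"]
print(f"average F1: {average_f1(y_true, y_pred, labels):.3f}")
```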
Type
Publication
Published in
Proceedings of the Third ACM/IEEE International Workshop on NL-Based Software Engineering