Infrastructure-as-code (IaC) is the DevOps practice of managing and provisioning infrastructure through machine-readable definition files, hereinafter referred to as IaC scripts. Like other source-code artefacts, these files may contain defects that prevent them from functioning correctly. In this paper, we aim to assess the role of product and process metrics in predicting defective IaC scripts.
In the paper ‘Within-Project Defect Prediction of Infrastructure-as-Code Using Product and Process Metrics’, published in IEEE Transactions on Software Engineering, S. Dalla Palma, D. Di Nucci, F. Palomba, and D. A. Tamburri propose a fully integrated machine-learning framework for IaC defect prediction that supports repository crawling, metrics collection, model building, and evaluation. To evaluate the framework, the authors analyzed 104 projects and compared the performance of five machine-learning classifiers in flagging potentially defective IaC scripts.
The key result of the study is that Random Forest is the best-performing model, with a median AUC-PR of 0.93 and a median MCC of 0.80. Furthermore, at least for the collected projects, product metrics identify defective IaC scripts more accurately than process metrics. As the authors point out, their findings set a baseline for further investigation of IaC defect prediction and of the relationship between product and process metrics and the quality of IaC scripts.
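To make the model-building and evaluation step concrete, the sketch below shows how a Random Forest classifier can be trained on per-script metrics and scored with AUC-PR and MCC, the two measures reported above. This is a minimal illustration, not the authors' actual pipeline: the synthetic feature matrix stands in for a table of product metrics (one row per IaC script), and all data here is randomly generated.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import average_precision_score, matthews_corrcoef
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic stand-in for a per-script metrics table: each row is one IaC
# script, each column a (hypothetical) product metric such as lines of
# code; y marks scripts labeled as defective.
X = rng.normal(size=(400, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=400) > 0.8).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# AUC-PR is computed from the predicted probability of the defective
# class; MCC is computed from the hard 0/1 predictions.
auc_pr = average_precision_score(y_test, clf.predict_proba(X_test)[:, 1])
mcc = matthews_corrcoef(y_test, clf.predict(X_test))
print(f"AUC-PR: {auc_pr:.2f}  MCC: {mcc:.2f}")
```

Both measures are well suited to defect prediction, where defective scripts are typically the minority class: AUC-PR focuses on the positive (defective) class, and MCC accounts for all four cells of the confusion matrix.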
You can access the paper here.