Generative AI for Infrastructure-as-Code configurations in response to drift detection
Infrastructure as Code (IaC) is an essential tool in cloud computing: it enables infrastructure to be managed and provisioned through code rather than manual configuration. In practice, however, manual and automated changes do not always coexist well. When the deployed infrastructure is modified by hand, the code no longer reflects reality, and folding even a small manual change back into the code can be labor-intensive and error-prone.
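This mismatch between declared code and deployed state is what drift detection surfaces. A minimal sketch of the idea, using hypothetical dictionaries of resource attributes (stateful tools such as Terraform do this by diffing their recorded state against the live infrastructure):

```python
def detect_drift(desired, actual):
    """Compare attributes declared in code (desired) with attributes
    observed in the cloud (actual); return everything that has drifted."""
    drift = {}
    for key, desired_value in desired.items():
        actual_value = actual.get(key)
        if actual_value != desired_value:
            drift[key] = {"declared": desired_value, "observed": actual_value}
    # Attributes added manually that the code never declared at all.
    for key in actual.keys() - desired.keys():
        drift[key] = {"declared": None, "observed": actual[key]}
    return drift

# Illustrative resource attributes (not from the paper):
desired = {"instance_type": "t2.micro", "ami": "ami-123"}
actual = {"instance_type": "t2.large", "ami": "ami-123", "tags": "added-by-hand"}
print(detect_drift(desired, actual))
```

Here the manually resized instance and the hand-added tag show up as drift, while the unchanged AMI does not; it is exactly this drift report that the generated IaC code must reconcile.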
Large Language Models and IaC background
The paper provides an overview of Large Language Models (LLMs), emphasizing the importance of prompt engineering and benchmarking techniques for optimizing their performance. It also gives background on IaC, highlighting the differences between stateful and stateless tools, as well as the challenges posed by drift. The research evaluates the effectiveness of generative AI in two specific scenarios: a single-file IaC configuration and a multi-file configuration. In both cases, manual changes are made to the deployed infrastructure. Four prompting techniques are used to generate IaC code solutions, which are benchmarked with the BLEU score to measure how closely the LLM's output matches human-written code.
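To make the benchmarking step concrete, the sketch below shows a simplified BLEU computation: clipped n-gram precision up to 4-grams, combined by geometric mean with a brevity penalty. This is an illustrative single-reference implementation, not the paper's evaluation code; real evaluations typically use libraries such as NLTK or sacrebleu.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Count all n-grams of the given order in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    """Simplified single-reference BLEU with uniform n-gram weights."""
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        cand_grams, ref_grams = ngrams(cand, n), ngrams(ref, n)
        # Clipped precision: each candidate n-gram counts at most as often
        # as it appears in the reference.
        overlap = sum(min(c, ref_grams[g]) for g, c in cand_grams.items())
        precisions.append(overlap / max(sum(cand_grams.values()), 1))
    if min(precisions) == 0:
        return 0.0
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    # Brevity penalty discourages candidates shorter than the reference.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * geo_mean
```

A generated configuration identical to the human-written one scores 1.0; one sharing no 4-gram overlap scores 0.0, so the metric rewards token-level similarity rather than functional equivalence, a known limitation when scoring code.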
Results and proof of concept
The results show the extent to which generative AI can support developers in reconciling manual changes within IaC configurations. A proof-of-concept implementation demonstrates how these capabilities can be integrated into a DevOps workflow.
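One way such an integration could look is sketched below: detect drift from the output of `terraform plan -detailed-exitcode` (exit code 2 signals that live infrastructure differs from the configuration), then assemble a prompt asking an LLM to fold the drift back into the code. The prompt wording and overall flow are illustrative assumptions, not the paper's actual implementation.

```python
import subprocess

def get_drift_report():
    """Run `terraform plan`; exit code 2 means drift was detected."""
    result = subprocess.run(
        ["terraform", "plan", "-detailed-exitcode", "-no-color"],
        capture_output=True, text=True,
    )
    return result.stdout if result.returncode == 2 else None

def build_prompt(iac_source, drift_report):
    """Assemble an LLM prompt for reconciling drift (wording illustrative)."""
    return (
        "The following Terraform configuration no longer matches the "
        "deployed infrastructure.\n\n"
        f"Configuration:\n{iac_source}\n\n"
        f"Detected drift (terraform plan output):\n{drift_report}\n\n"
        "Rewrite the configuration so that it matches the deployed state."
    )
```

In a DevOps pipeline, the drift report would trigger this prompt automatically, and the generated configuration could be proposed to the developer as a review-ready change rather than applied blindly.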
Conclusions and future research
The paper concludes with a discussion of the findings, recommendations for improvements, and an outline of future research directions to further strengthen the synergy between generative AI and IaC practices.