LEVERAGING ADVANCED MACHINE LEARNING STRATEGIES FOR OPTIMIZED TIMING OF DEVOPS & MICROSERVICES DEPLOYMENT: A PRAGMATIC APPROACH TO PREDICTIVE MODELING



Authors

  • Amarjeet Singh1, Alok Aggarwal2

DOI:

https://doi.org/10.15282/jmes.17.1.2023.10.0759


Keywords:

Random Forest Regressor, Gradient Boosting Regressor, Long Short-Term Memory, RMSE, DevOps, Git, Subversion, microservice.


Abstract

It has been observed that during the software development lifecycle, software developers could never predict the optimal time to deploy microservices. This frequently leads to failed production deployments, improper utilization of resources, and missed cost savings, resulting in financial loss and damage to the credibility of the business product. This work explores the use of advanced machine learning strategies to effectively predict the optimal timing for executing pull requests, push requests, and deployments within software development projects for microservices in a DevOps culture. By analyzing and assimilating historical data encompassing aspects such as code modifications, team activity levels, and project milestones, the aim is to equip developers with actionable insights that can significantly enhance project planning and coordination. Three models were implemented and evaluated: Random Forest regression, Long Short-Term Memory (LSTM), and Gradient Boosting regression. The results show that all three models performed well, with low mean absolute error (MAE) and root mean squared error (RMSE) values and high R-squared scores. The LSTM model achieved the highest R-squared score of 0.89, indicating its ability to capture trends and predict the optimal timing accurately. Actual timing versus forecasted timing was visualized for pull requests, push requests, and deployments. The models' predictions aligned well with the actual timing, although some deviations were observed, especially for pull requests. In addition, evaluation on real-world data shows that the process can be streamlined and the overall productivity of pull requests, push requests, and deployments can be improved by 25%–32%, based on five selected samples.
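The evaluation described above can be illustrated with a minimal sketch: fitting tree-based regressors on historical features and scoring them with MAE, RMSE, and R-squared. This is not the authors' code; the feature set and synthetic target below are hypothetical stand-ins (the LSTM model is omitted, as it would require a deep learning framework).

```python
# Illustrative sketch (not the paper's implementation): compare two of the
# three model families on synthetic "deployment timing" data using the
# metrics reported in the abstract (MAE, RMSE, R-squared).
import numpy as np
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# Hypothetical features: lines changed, commits per day, days since milestone
X = rng.normal(size=(n, 3))
# Synthetic target: an "optimal timing" value as a noisy function of features
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.3, size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, model in [("Random Forest", RandomForestRegressor(random_state=0)),
                    ("Gradient Boosting", GradientBoostingRegressor(random_state=0))]:
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    rmse = mean_squared_error(y_te, pred) ** 0.5
    print(f"{name}: MAE={mean_absolute_error(y_te, pred):.3f}, "
          f"RMSE={rmse:.3f}, R2={r2_score(y_te, pred):.3f}")
```

On real project data, the features would instead be derived from repository history (e.g. Git commit activity and pull-request timestamps), and models would be compared on the same held-out split exactly as above.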



Published

2024-04-16

How to Cite