AI-Enabled Automated Common Vulnerability Scoring from Common Vulnerabilities and Exposures Descriptions
Abstract: Given the sheer number of disclosed vulnerabilities, manually evaluating their impact is challenging. This paper proposes employing artificial intelligence models as substitutes for, or aides to, human experts in estimating vulnerability severity. We compare the precision, recall, and F1 score of the Universal Sentence Encoder, the Generative Pre-trained Transformer, and a Support Vector Machine, trained on 118,000 vulnerabilities and tested on 51,000 vulnerabilities, against human experts' mean estimation error and variance for each vulnerability type reported in the state-of-the-art work on estimating vulnerability severity scores. The Universal Sentence Encoder performs best, achieving 72/77% accuracy on severity-level prediction and significantly outperforming human experts across various vulnerability types, while maintaining low memory consumption and running time. Additionally, we examine the efficacy of our models in predicting the individual components of the vulnerability evaluation metric as well as the overall severity level. The findings highlight the potential of artificial intelligence agents to assist cybersecurity experts in a task that, in the current state of the art, is entirely manual.