Abstract: The automated assessment of symbolic knowledge, derived, for instance, from extraction procedures, facilitates the autotuning of machine learning algorithms while obviating the biases inherent in subjective human evaluations. Despite recent advancements, comprehensive metrics for evaluating knowledge quality are still missing from the literature. To address this gap, our study introduces ICE, a novel evaluation metric designed to measure the quality of symbolic knowledge. This metric computes a score by combining three quality sub-indices, namely predictive performance, human readability, and completeness, and it can be tailored to the specific requirements of the case at hand by adjusting the weights assigned to each sub-index. We present here the mathematical formulation of the ICE score and demonstrate its effectiveness through comparative analyses with existing quality scores applied to real-world tasks.
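As an illustrative sketch only (the exact formulation is given in the body of the paper), a weighted aggregation of the three sub-indices could take a form such as the following, where $P$, $R$, and $C$ stand for the predictive-performance, readability, and completeness sub-indices and $w_P$, $w_R$, $w_C$ are the user-chosen weights; these symbols are assumptions made for illustration, not the authors' notation:

\[
\mathrm{ICE} = w_P \, P + w_R \, R + w_C \, C, \qquad w_P + w_R + w_C = 1, \quad w_P, w_R, w_C \ge 0.
\]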