Abstract: Machine learning models have been extensively used for analyzing Earth Observation (EO) images and have played a crucial role in advancing the field. While most studies focus on improving model performance, some aim to understand the model's output. These explainable approaches provide reasoning behind a model's output, establishing trust and confidence in the results. However, the evaluation of these models is still based mainly on accuracy. To enhance the fairness and transparency of machine learning models, their evaluation on EO images should also account for explainability. This work reflects on existing research on explainable AI in Remote Sensing and further outlines the desirable properties of a gold-standard metric for evaluating explainable machine learning models on EO images.