Real-world-robustness of tree-based classifiers

TMLR Paper 379 Authors

22 Aug 2022 (modified: 28 Feb 2023) · Rejected by TMLR
Abstract: The concept of trustworthy AI has gained widespread attention lately. One aspect of trustworthy AI is the robustness of ML models. In this study, we show how to exactly compute the recently introduced measure of real-world-robustness, a measure of robustness against naturally occurring distortions of input data, for tree-based classifiers, under the assumption that the natural distortions are given as probability distributions. The idea is to extract the decision rules of a trained tree-based classifier, partition the feature space into non-overlapping regions, and determine the probability that a distorted data sample retains its predicted label. The original method works for all black-box classifiers, but it is only an approximation and is feasible only when the input dimension is not too high, whereas our proposed method returns exact results.
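A minimal sketch of the idea for a single scikit-learn decision tree under independent Gaussian input noise (the function names, the restriction to one tree, and the diagonal-Gaussian noise model are illustrative assumptions, not the authors' implementation): each leaf of an axis-aligned tree is a hyperrectangle, so the probability that a perturbed sample keeps its predicted label is a sum of Gaussian box probabilities over the leaves that predict that label.

```python
import numpy as np
from scipy.stats import norm
from sklearn.tree import DecisionTreeClassifier

def leaf_boxes(tree, n_features):
    """Enumerate (lower, upper, class_index) for every leaf region of the tree."""
    t = tree.tree_
    boxes = []
    def recurse(node, lo, hi):
        if t.children_left[node] == -1:  # leaf node
            boxes.append((lo.copy(), hi.copy(), int(np.argmax(t.value[node]))))
            return
        f, thr = t.feature[node], t.threshold[node]
        hi_left = hi.copy(); hi_left[f] = min(hi[f], thr)      # x[f] <= thr
        lo_right = lo.copy(); lo_right[f] = max(lo[f], thr)    # x[f] >  thr
        recurse(t.children_left[node], lo, hi_left)
        recurse(t.children_right[node], lo_right, hi)
    recurse(0, np.full(n_features, -np.inf), np.full(n_features, np.inf))
    return boxes

def robustness(tree, x, sigma):
    """P(predicted label unchanged) for x under N(0, diag(sigma^2)) perturbations."""
    label = tree.predict(x.reshape(1, -1))[0]
    prob = 0.0
    for lo, hi, cls in leaf_boxes(tree, x.shape[0]):
        if tree.classes_[cls] != label:
            continue
        # Probability mass of an axis-aligned box factorises over independent features.
        prob += np.prod(norm.cdf(hi, loc=x, scale=sigma)
                        - norm.cdf(lo, loc=x, scale=sigma))
    return prob
```

For tree ensembles, leaf regions of different trees overlap, which is where the paper's partition of the feature space into non-overlapping regions comes in; the single-tree case above illustrates only the exact integration step over one set of decision rules.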
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission:
1. Abstract: We expanded the abstract to make our contribution clearer.
2. Section 1: We extended the Introduction with a more detailed review of the literature, including a discussion of natural adversarial examples and common corruptions.
3. Section 1: We clarified the notation in and around Equations (1) and (2).
4. Section 1: We added a short discussion of why real-world-robustness is a useful measure in certain applications.
5. Section 2.2: We revisited the robustness calculation for data samples with correlated features and with the uncertainty in different dimensions modelled by different probability distributions, and we present an approach to calculate the robustness in such scenarios. This shows that our approach is not limited to Gaussian perturbations.
6. Section 4: We expanded the Results Section with more details on the experiments and results, adding two tables (Table 1 and Table 3) and two figures (Figure 4 and Figure 5).
7. Section 4.4: We added results for an XGBoosted Decision Tree model with correlated features and the uncertainty in different dimensions modelled by different probability distributions. This relates to the comment about Section 2.2.
8. Section 5: We slightly expanded the Conclusion Section to emphasize the difference between our measure of robustness and other existing measures.
9. General remarks: We modified some sentences for an easier reading flow and improved the notation.
Assigned Action Editor: ~Krishnamurthy_Dvijotham2
Submission Number: 379