Scaling-invariant maximum margin preference learning

Published: 01 Jan 2021, Last Modified: 30 May 2024 · Int. J. Approx. Reason. 2021 · CC BY-SA 4.0
Abstract: One natural way to express preferences over items is to represent them as pairwise comparisons, from which a model is learned in order to predict further preferences. In this setting, if an item a is preferred to an item b, it is natural to consider that the preference still holds after multiplying both vectors by a positive scalar (e.g., 2a ≻ 2b). Maximum margin learning approaches satisfy this invariance to scaling for pairs of test vectors, but not for the preference input pairs: scaling the inputs in different ways can result in different preference relations being learned. Besides the scaling of preference inputs, maximum margin methods are also sensitive to the way the features are normalized (scaled), an essential pre-processing step for these methods. In this paper, we define and analyse more cautious preference relations that are invariant to the scaling of features, of preference inputs, or of both simultaneously; this leads to computational methods for testing dominance with respect to the induced relations, and for generating optimal solutions (i.e., best items) among a set of alternatives. In our experiments, we compare the relations and their associated optimality sets in terms of decisiveness, computation time, and the cardinality of the optimal set.
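To make the scaling sensitivity concrete, here is a minimal sketch (a hypothetical illustration, not the paper's method): it uses the standard reduction of pairwise preferences a ≻ b to max-margin classification of difference vectors a − b, and shows that multiplying one training pair by a positive scalar, which leaves the encoded preference unchanged, can flip the relation learned for a test pair, whereas scaling a test pair never changes the prediction.

```python
# Illustrative sketch only (assumed reduction of preference pairs to
# classification of difference vectors; not the paper's exact formulation).
import numpy as np
from sklearn.svm import SVC

def fit_preferences(pairs):
    """Near-hard-margin linear SVM on difference vectors a - b,
    where each (a, b) encodes the preference a > b."""
    X = np.array([a - b for a, b in pairs])
    X = np.vstack([X, -X])  # symmetrize: b < a gives the opposite class
    y = np.hstack([np.ones(len(pairs)), -np.ones(len(pairs))])
    return SVC(kernel="linear", C=1e6).fit(X, y)  # large C ~ hard margin

pairs = [(np.array([2.0, 1.0]), np.array([1.0, 2.0])),   # a1 > b1
         (np.array([1.0, 3.0]), np.array([2.0, 1.0]))]   # a2 > b2

# Same two preferences, but the first input pair is multiplied by 2.
scaled = [(2 * pairs[0][0], 2 * pairs[0][1])] + pairs[1:]

m1, m2 = fit_preferences(pairs), fit_preferences(scaled)

# Scaling a TEST pair is harmless: w.(2a - 2b) = 2 w.(a - b).
d = np.array([1.0, -1.4])                        # a - b for some test pair
print(np.sign(m1.decision_function([d, 2 * d])))  # same sign twice

# But the two training sets, despite encoding the same preferences,
# yield opposite predictions on this test pair.
print(np.sign(m1.decision_function([d])),  # +1: a preferred to b
      np.sign(m2.decision_function([d])))  # -1: b preferred to a
```

In this toy instance the unscaled training set yields (up to normalization) the weight vector w = (3, 2), while doubling the first input pair yields w′ = (2, 1.5); the test difference vector d = (1, −1.4) satisfies w·d > 0 but w′·d < 0, so the learned preference flips.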