Competence-Conscious Associative Rank Aggregation
Keywords: Competence, Ranking, Meta-Learning
Abstract
The ultimate goal of ranking methods is to achieve the best possible ranking performance for the problem at
hand. Recently, a body of empirical evidence has emerged suggesting that methods that learn to rank offer substantial
improvements in enough situations to be regarded as a relevant advance for applications that depend on ranking.
Previous studies have shown that different (learning to rank) methods may produce conflicting ranked lists. Rank
aggregation is based on the idea that combining such lists may provide complementary information that can be used
to improve ranking performance. In this paper we investigate learning to rank methods that uncover, from the training
data, associations between document features and relevance levels in order to estimate the relevance of documents with
regard to a given query. There is a variety of statistical measures, or metrics, each providing a different interpretation of an
association. Interestingly, we observed that each association metric has a specific domain for which it is most competent
(that is, there is a specific set of documents for which a specific metric consistently produces better ranked lists). We
employ a second-stage meta-learning approach, which describes the domain of competence of each metric, enabling a
more sensible aggregation of the ranked lists produced by different metrics. We call this new aggregation paradigm
competence-conscious associative rank aggregation. We conducted a systematic evaluation of competence-conscious
aggregation methods using the LETOR 3.0 benchmark collections. We demonstrate that the proposed aggregation
methods outperform the constituent learning to rank methods not only when they are considered in isolation, but also
when they are combined using existing aggregation approaches.
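The two-stage scheme summarized above, base association metrics plus a meta-learner that maps each document to the metric most competent for it, can be sketched roughly as follows. This is an illustrative toy, not the paper's actual algorithm: the metric functions, the "region" descriptor of a document's domain, and the majority-vote competence rule are all assumptions made for the sake of a runnable example.

```python
# Hypothetical sketch of competence-conscious rank aggregation.
# Stage 1: several association "metrics" each score documents.
# Stage 2: a meta-learner, trained on which metric performed best in
#          each document region, picks the most competent metric per
#          test document before ranking.

from collections import defaultdict

# Toy stand-ins for association metrics (e.g., confidence, lift);
# each maps a document feature vector to a relevance score.
def metric_a(x):  # emphasizes the first features
    return 2 * x[0] + x[1]

def metric_b(x):  # emphasizes the last features
    return x[1] + 2 * x[2]

METRICS = {"A": metric_a, "B": metric_b}

def region_of(x):
    # Crude domain descriptor: index of the dominant feature.
    return max(range(len(x)), key=lambda i: x[i])

def train_competence(train_docs):
    """train_docs: list of (features, true_relevance) pairs.
    For each region, record which metric's score is closest to the
    true relevance, and keep the majority winner per region."""
    votes = defaultdict(lambda: defaultdict(int))
    for x, rel in train_docs:
        best = min(METRICS, key=lambda m: abs(METRICS[m](x) - rel))
        votes[region_of(x)][best] += 1
    return {r: max(v, key=v.get) for r, v in votes.items()}

def rank(docs, competence, default="A"):
    """Score each document with the metric judged most competent for
    its region, then sort in descending score order."""
    scored = [(METRICS[competence.get(region_of(x), default)](x), x)
              for x in docs]
    return [x for _, x in sorted(scored, key=lambda t: -t[0])]
```

For instance, training on documents whose relevance matches `metric_a` in one feature region and `metric_b` in another yields a competence map that routes each test document to the metric more reliable in its region, rather than averaging conflicting ranked lists uniformly.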