Abstract: While previous work on comparing statistical significance tests for IR system evaluation has focused on paired-data tests (e.g., for evaluating two systems using a common test collection), two-sample tests are required when examining the reproducibility of IR experiments across different test collections. Using real runs and a test collection from the NTCIR-15 WWW-3 Task, the present study compares the properties of three two-sample significance tests for comparing two systems: Student's t-test (i.e., the classical parametric test), the Wilcoxon rank sum test (i.e., the classical nonparametric test), and the randomisation test (i.e., a population-free method that utilises modern computational power). In terms of the false positive rate (i.e., the chance of detecting a statistically significant difference even though the two samples of evaluation measure scores come from the same system), the three tests behave similarly, although the Wilcoxon rank sum test appears to be slightly more robust than the other two for very small topic set sizes (e.g., 10 topics each) combined with a large significance level (e.g., α=0.10). On the other hand, the t-test and the Wilcoxon rank sum test are very similar to each other from the following two viewpoints: "How often do they both detect a nonexistent difference?" and "How often do they both overlook a true difference?" Compared to the two classical significance tests, the randomisation test behaves markedly differently with respect to these two viewpoints. Hence, we suggest that researchers should at least be aware of the above properties of the three two-sample tests when choosing among them.
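To make the comparison concrete, the following is a minimal sketch (not from the paper) of how the three two-sample tests could be applied to per-topic evaluation scores from two different topic sets. The synthetic nDCG-like scores, sample sizes, and the choice of the mean difference as the randomisation-test statistic are all illustrative assumptions; the sketch uses standard SciPy routines, with the Wilcoxon rank sum test available as `scipy.stats.ranksums`.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical per-topic scores (e.g., nDCG) for two systems evaluated
# on two *different* topic sets -- hence two-sample, not paired, tests.
scores_a = rng.normal(loc=0.45, scale=0.12, size=50)
scores_b = rng.normal(loc=0.50, scale=0.12, size=50)

# 1. Student's two-sample t-test (classical parametric test).
t_stat, t_p = stats.ttest_ind(scores_a, scores_b)

# 2. Wilcoxon rank sum test (classical nonparametric test).
w_stat, w_p = stats.ranksums(scores_a, scores_b)

# 3. Randomisation (permutation) test: repeatedly shuffle the pooled
#    scores between the two groups and count how often the shuffled
#    |mean difference| is at least as large as the observed one.
observed = abs(scores_a.mean() - scores_b.mean())
pooled = np.concatenate([scores_a, scores_b])
n_a, trials, hits = len(scores_a), 10_000, 0
for _ in range(trials):
    rng.shuffle(pooled)
    if abs(pooled[:n_a].mean() - pooled[n_a:].mean()) >= observed:
        hits += 1
r_p = hits / trials

print(f"t-test p={t_p:.4f}, "
      f"Wilcoxon rank sum p={w_p:.4f}, "
      f"randomisation p={r_p:.4f}")
```

Under the paper's false-positive-rate setup, one would instead draw both samples from the same system's scores and measure how often each test reports p < α.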