Designing Replicable Networking Experiments with TriScale

Anonymous

Aug 02, 2021 (edited Nov 04, 2021) · JSYS 2021 Aug Papers Blind Submission · Readers: Everyone
  • Keywords: replicability, experimental evaluations, networking
  • TL;DR: How many times should one repeat an experiment to draw sound conclusions? This paper presents a rational methodology that answers this question, among others.
  • Abstract: When designing their performance evaluations, networking researchers often encounter questions such as: How long should a run be? How many runs should be performed? How should one account for the variability across multiple runs? Which statistical methods should be used to analyze the data? Despite their best intentions, researchers often answer these questions differently, thus impairing the replicability of their evaluations and the confidence in their results. In this paper, we propose a concrete methodology for the design and analysis of performance evaluations. Our approach hierarchically partitions the performance evaluation into three timescales, following the principle of separation of concerns. The idea is to understand, for each timescale, the temporal characteristics of variability sources, and then to apply rigorous statistical methods to derive performance results with quantifiable confidence in spite of the inherent variability. We implement this methodology in a software framework called TriScale. For each performance metric, TriScale computes a variability score that estimates, with a given confidence, how similar the results would be if the evaluation were replicated; in other words, TriScale quantifies the replicability of evaluations. We showcase the practicality and usefulness of TriScale on four different case studies, demonstrating that TriScale helps to generalize and strengthen published results. Improving the standards of replicability in networking is a complex challenge. This paper is an important contribution to this endeavor: it provides networking researchers with a rational and concrete experimental methodology rooted in sound statistical foundations, the first of its kind.
  • Area: Networking
  • Type: Tool/benchmark
  • Conflicts: no conflicts
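The TL;DR asks how many runs are needed to draw sound conclusions with a given confidence. As an illustrative sketch (not TriScale's actual implementation; the function names here are invented for illustration), the classic order-statistic argument shows that the interval [min, max] of N i.i.d. runs covers the true median with probability 1 - 2 * (1/2)^N, which directly yields a minimum number of runs for a target confidence level:

```python
def coverage_of_extremes(n_runs: int) -> float:
    """Probability that [min, max] of n_runs i.i.d. samples
    contains the true median of the underlying distribution."""
    return 1.0 - 2.0 * 0.5 ** n_runs


def min_runs_for_median_ci(confidence: float = 0.95) -> int:
    """Smallest number of runs whose [min, max] forms a
    confidence interval for the median at the given level."""
    n = 2  # need at least two runs to form an interval
    while coverage_of_extremes(n) < confidence:
        n += 1
    return n
```

For example, `min_runs_for_median_ci(0.95)` returns 6: five runs give only 93.75% coverage, while six give roughly 96.9%. The same distribution-free reasoning extends to other percentiles and confidence levels, which is the kind of statistical foundation the abstract refers to.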
