Reducing Model Sensitivity via Single-Elimination Tournament Inference

Philip Ogren, Ari Kobren, Naveen Jafer Nizar, Vikramraj Sitpal, Runge Huang, Esha Ghorpade, Weiqi Wang, Dhruv Agarwal

12 November 2024

We present SEDe, a simple method that solves item-selection problems (e.g., multiple-choice question answering) via single-elimination tournaments. Specifically, SEDe decomposes an initial selection problem into a collection of smaller problems, each with fewer items than the original. Because these subproblems are short and simple, state-of-the-art language models avoid the performance degradation that arises from position bias and long inputs, and solve each one accurately. In experiments on three tasks---multiple-choice question answering, multi-document question answering, and software issue localization---we show that SEDe achieves higher accuracy than both an in-context learning baseline and a position-debiased baseline on selection problems with many items. Unlike the baselines, SEDe is robust to increases in the number of items. Our analysis reveals that SEDe's improvements over the other two methods grow more dramatic as model size decreases and problem size increases.
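To make the decomposition concrete, here is a minimal sketch of single-elimination tournament selection. The function names (`tournament_select`, `judge`) and the `group_size` parameter are illustrative assumptions, not the authors' implementation; the `judge` callable stands in for a language-model call that picks the best item from a small group.

```python
import random

def tournament_select(items, judge, group_size=2):
    """Single-elimination tournament over `items`.

    Repeatedly partitions the candidates into small groups of at most
    `group_size` items, keeps each group's winner (as chosen by `judge`),
    and repeats until a single item remains.
    """
    candidates = list(items)
    while len(candidates) > 1:
        random.shuffle(candidates)  # vary item order across rounds
        winners = []
        for i in range(0, len(candidates), group_size):
            group = candidates[i:i + group_size]
            # A singleton group advances automatically; otherwise ask the judge.
            winners.append(group[0] if len(group) == 1 else judge(group))
        candidates = winners
    return candidates[0]

# Toy judge: picks the longest string (a stand-in for an LLM selector).
best = tournament_select(["a", "abc", "ab", "abcd"],
                         judge=lambda g: max(g, key=len))
```

Each `judge` call sees only a handful of items, which is how the method sidesteps long inputs and position bias; the number of model calls grows roughly linearly in the number of items.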


Venue : Empirical Methods in Natural Language Processing (EMNLP) 2024



