Interactively Learning Preference Constraints in Linear Bandits

Abstract

We study sequential decision-making with known rewards and unknown constraints, motivated by situations where the constraints represent expensive-to-evaluate human preferences, such as safe and comfortable driving behavior. We formalize the challenge of interactively learning about these constraints as a novel linear bandit problem which we call constrained linear best-arm identification. To solve this problem, we propose the Adaptive Constraint Learning (ACOL) algorithm. We provide an instance-dependent lower bound for constrained linear best-arm identification and show that ACOL's sample complexity matches the lower bound in the worst case. In the average case, ACOL's sample complexity bound is still significantly tighter than the bounds of simpler approaches. In synthetic experiments, ACOL performs on par with an oracle solution and outperforms a range of baselines. As an application, we consider learning constraints to represent human preferences in a driving simulation. ACOL is significantly more sample-efficient than alternatives for this application. Further, we find that learning preferences as constraints is more robust to changes in the driving scenario than encoding the preferences directly in the reward function.
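
To make the problem setting concrete, here is a minimal, runnable Python sketch of constrained linear best-arm identification as described in the abstract: a known linear reward, an unknown linear constraint, and a learner that queries arms to identify the best feasible arm. This is not the paper's ACOL algorithm; the arm set, threshold tau, noise level, and the naive uncertainty-sampling query strategy below are all illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

d, n_arms = 3, 20
arms = rng.normal(size=(n_arms, d))          # feature vectors of the arms
reward = rng.normal(size=d)                  # KNOWN reward parameter
phi_star = rng.normal(size=d)                # UNKNOWN constraint parameter
tau = 0.0                                    # constraint threshold (assumed)
sigma = 0.1                                  # observation noise std (assumed)

# Ground-truth target: the highest-reward arm among feasible arms,
# i.e. argmax reward . x subject to phi_star . x <= tau.
feasible = arms @ phi_star <= tau
best_arm = np.argmax(np.where(feasible, arms @ reward, -np.inf))

# Naive learner (not ACOL): maintain a ridge-regression estimate of
# phi_star and always query the arm whose constraint value is most
# uncertain under the current estimate.
lam = 1.0
A = lam * np.eye(d)                          # regularized design matrix
b = np.zeros(d)
for t in range(200):
    A_inv = np.linalg.inv(A)
    # width of the confidence interval for each arm's constraint value
    widths = np.sqrt(np.einsum("id,de,ie->i", arms, A_inv, arms))
    x = arms[np.argmax(widths)]              # query the most uncertain arm
    y = x @ phi_star + sigma * rng.normal()  # noisy constraint observation
    A += np.outer(x, x)
    b += y * x

phi_hat = np.linalg.solve(A, b)              # ridge estimate of phi_star
est_feasible = arms @ phi_hat <= tau
guess = np.argmax(np.where(est_feasible, arms @ reward, -np.inf))
print(f"true best arm: {best_arm}, estimated best arm: {guess}")

The paper's contribution is, roughly, to replace this uniform uncertainty-reduction strategy with an adaptive one whose sample complexity matches an instance-dependent lower bound in the worst case.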

Authors
  • Lindner, David
  • Tschiatschek, Sebastian
  • Hofmann, Katja
  • Krause, Andreas
Shortfacts
Category
Paper in Conference Proceedings or in Workshop Proceedings (Paper)
Event Title
The Thirty-ninth International Conference on Machine Learning
Divisions
Data Mining and Machine Learning
Subjects
Artificial Intelligence
Event Location
Baltimore, USA
Event Type
Conference
Event Dates
17-23 July 2022
Series Name
Proceedings of Machine Learning Research
Page Range
pp. 13505-13527
Date
17 July 2022