Learning Safety Constraints from Demonstrations with Unknown Rewards

Abstract

We propose Convex Constraint Learning for Reinforcement Learning (CoCoRL), a novel approach for inferring shared constraints in a Constrained Markov Decision Process (CMDP) from a set of safe demonstrations with possibly different reward functions. While previous work is limited to demonstrations with known rewards or fully known environment dynamics, CoCoRL can learn constraints from demonstrations with different unknown rewards without knowledge of the environment dynamics. CoCoRL constructs a convex safe set based on demonstrations, which provably guarantees safety even for potentially sub-optimal (but safe) demonstrations. For near-optimal demonstrations, CoCoRL converges to the true safe set with no policy regret. We evaluate CoCoRL in tabular environments and a continuous driving simulation with multiple constraints. CoCoRL learns constraints that lead to safe driving behavior and that can be transferred to different tasks and environments. In contrast, alternative methods based on Inverse Reinforcement Learning (IRL) often exhibit poor performance and learn unsafe policies.
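The abstract's central object is the convex safe set built from demonstrations: in a CMDP, constraint costs are linear in a policy's discounted feature expectations, so any convex combination of safe feature expectations also satisfies the (unknown) constraints. The sketch below is illustrative and not taken from the paper's code; the function name and the assumption that per-demonstration feature expectations are available are ours. It checks whether a candidate policy's feature expectation lies in the convex hull of the demonstrations' feature expectations via a small feasibility LP.

```python
import numpy as np
from scipy.optimize import linprog

def in_demo_convex_hull(phi_new, demo_phis):
    """Feasibility LP: is phi_new a convex combination of the demonstrations'
    feature expectations? If so, every linear CMDP constraint satisfied by the
    demonstrations is also satisfied by phi_new."""
    demo_phis = np.asarray(demo_phis, dtype=float)    # shape (k, d)
    phi_new = np.asarray(phi_new, dtype=float)        # shape (d,)
    k = demo_phis.shape[0]
    # Variables: convex weights lambda_1..lambda_k >= 0 summing to 1.
    A_eq = np.vstack([demo_phis.T, np.ones((1, k))])  # (d + 1, k)
    b_eq = np.concatenate([phi_new, [1.0]])           # (d + 1,)
    res = linprog(c=np.zeros(k), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0.0, None)] * k, method="highs")
    return bool(res.success)

# Any mix of safe demonstrations stays inside the inferred safe set.
demos = np.array([[0.2, 0.1], [0.5, 0.3], [0.1, 0.4]])
candidate = 0.5 * demos[0] + 0.5 * demos[1]
print(in_demo_convex_hull(candidate, demos))  # True
```

A feasibility check of this kind underlies the safety guarantee described above: restricting policy search to the hull is safe regardless of whether the demonstrations were optimal, while near-optimal demonstrations make the hull large enough to incur no policy regret.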

Authors
  • Lindner, David
  • Chen, Xin
  • Tschiatschek, Sebastian
  • Hofmann, Katja
  • Krause, Andreas
Shortfacts
Category
Technical Report
Divisions
Data Mining and Machine Learning
Publisher
arXiv (CoRR)
Date
25 May 2023