WeaNF: Weak Supervision with Normalizing Flows

Abstract

A popular approach to decreasing the need for costly manual annotation of large data sets is weak supervision, which introduces problems of noisy labels, coverage, and bias. Methods for overcoming these problems have relied either on discriminative models, trained with cost functions specific to weak supervision, or, more recently, on generative models that try to model the output of the automatic annotation process. In this work, we explore a novel direction of generative modeling for weak supervision: instead of modeling the output of the annotation process (the labeling function matches), we generatively model the input-side data distributions (the feature space) covered by labeling functions. Specifically, we estimate a density for each weak labeling source, or labeling function, using normalizing flows. An integral part of our method is the flow-based modeling of multiple simultaneously matching labeling functions, which captures phenomena such as labeling function overlap and correlations. We analyze the effectiveness and modeling capabilities on various commonly used weak supervision data sets, and show that weakly supervised normalizing flows compare favorably to standard weak supervision baselines.
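To make the density-per-labeling-function idea concrete, here is a minimal sketch, not the paper's implementation: the simplest possible normalizing flow is a single affine transform of a standard normal, i.e. a Gaussian. We fit one such "flow" to the features matched by each labeling function and label a new point by whichever density scores it highest. All data and function names below are hypothetical illustrations; the paper uses learned multi-layer flows.

```python
import numpy as np

def fit_affine_flow(x):
    """Fit the simplest normalizing flow -- one affine transform of a
    standard normal, i.e. a Gaussian -- to features matched by one
    labeling function. Returns (mean, covariance)."""
    mean = x.mean(axis=0)
    cov = np.cov(x, rowvar=False) + 1e-6 * np.eye(x.shape[1])  # regularize
    return mean, cov

def log_density(x, mean, cov):
    """Gaussian log-density = log-likelihood under the affine flow."""
    d = x - mean
    _, logdet = np.linalg.slogdet(cov)
    quad = d @ np.linalg.solve(cov, d)
    return -0.5 * (len(mean) * np.log(2 * np.pi) + logdet + quad)

def classify(x, flows):
    """Label a point by the labeling-function density that scores it highest."""
    return int(np.argmax([log_density(x, m, c) for m, c in flows]))

rng = np.random.default_rng(0)
# Hypothetical toy data: two labeling functions covering two feature clusters.
x_lf0 = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(200, 2))
x_lf1 = rng.normal(loc=[4.0, 4.0], scale=0.5, size=(200, 2))
flows = [fit_affine_flow(x_lf0), fit_affine_flow(x_lf1)]
print(classify(np.array([0.1, -0.2]), flows))  # near cluster 0 -> 0
print(classify(np.array([3.8, 4.1]), flows))   # near cluster 1 -> 1
```

With expressive flows in place of the Gaussians, a joint model over several simultaneously matching labeling functions can additionally capture their overlap and correlations, as described in the abstract.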

Authors
  • Stephan, Andreas
  • Roth, Benjamin
Shortfacts
Category
Paper in Conference Proceedings or in Workshop Proceedings (Paper)
Event Title
60th Annual Meeting of the Association for Computational Linguistics
Divisions
Data Mining and Machine Learning
Subjects
Artificial Intelligence
Computer Vision
Event Location
Dublin, Ireland
Event Type
Workshop
Event Dates
May 22-27
Series Name
Proceedings of the 7th Workshop on Representation Learning for NLP
ISSN/ISBN
978-1-955917-48-3
Page Range
pp. 269-279
Date
22 May 2022