Helpful assistant or fruitful facilitator? Investigating how personas affect language model behavior

Abstract

One way to steer generations from large language models (LLMs) is to assign a persona: a role that describes how the user expects the LLM to behave (e.g., a helpful assistant, a teacher, a woman). This paper investigates how personas affect diverse aspects of model behavior. We assign 162 personas from 12 categories, spanning variables such as gender, sexual orientation, and occupation, to seven LLMs. We prompt them to answer questions from five datasets covering objective tasks (e.g., questions about math and history) and subjective tasks (e.g., questions about beliefs and values). We also compare the personas’ generations to two baseline settings: a control persona setting with 30 paraphrases of “a helpful assistant” to account for models’ prompt sensitivity, and an empty persona setting where no persona is assigned. We find that, for all models and datasets, personas show greater variability than the control setting, and that some measures of persona behavior generalize across models.

Authors
  • Luz de Araujo, Pedro Henrique
  • Roth, Benjamin
Shortfacts
Category
Journal Paper
Divisions
Data Mining and Machine Learning
Subjects
Artificial Intelligence
Journal or Publication Title
PLoS ONE
ISSN
1932-6203
Publisher
Public Library of Science
Page Range
pp. 1-31
Number
6
Volume
20
Date
30 June 2025
Official URL
https://doi.org/10.1371/journal.pone.0325664