Modulating Language Models with Emotions
ACL 2021 - Findings
Generating context-aware language that embodies diverse emotions is an important step towards building empathetic NLP systems. In this paper, we propose a formulation of conditional layer normalization, a technique inspired by computer vision, that allows us to use large-scale language models for emotional response generation. In automatic and human evaluation, our models outperform prior baseline methods while maintaining diversity, fluency, and coherence, obtaining competitive performance even when using only 10% of the available training data.
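The core idea of conditional layer normalization, as used here, is to make the layer norm's gain and bias functions of a conditioning vector (here, an emotion embedding) rather than fixed learned parameters. The sketch below is a minimal, hypothetical illustration of that mechanism in NumPy; the class name, the linear projections `W_gamma` and `W_beta`, and all dimensions are assumptions for illustration, not the paper's exact parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer_norm(x, eps=1e-5):
    # Normalize each hidden vector to zero mean and unit variance.
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

class ConditionalLayerNorm:
    """Hypothetical sketch: the gain (gamma) and bias (beta) are linear
    functions of an emotion embedding, instead of fixed learned vectors."""

    def __init__(self, hidden_dim, emotion_dim):
        # Assumed learned projections from emotion embedding to gain/bias.
        self.W_gamma = rng.normal(scale=0.02, size=(emotion_dim, hidden_dim))
        self.W_beta = rng.normal(scale=0.02, size=(emotion_dim, hidden_dim))

    def __call__(self, h, e):
        # h: (seq_len, hidden_dim) hidden states; e: (emotion_dim,) embedding.
        gamma = 1.0 + e @ self.W_gamma  # modulated gain, near 1 at init
        beta = e @ self.W_beta          # modulated bias, near 0 at init
        return gamma * layer_norm(h) + beta
```

With a zero emotion embedding this reduces to plain layer normalization, while different emotion embeddings shift and rescale the same hidden states differently; this is what lets a single pretrained language model be modulated per emotion without retraining all of its weights.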