Neuroscience and Social Decision Making Series
Human languages are powerful solutions to the complex coordination problems that arise between social agents. They provide stable, shared expectations about how the words we say correspond to the beliefs and intentions in our heads. However, to handle an ever-changing environment with new things to talk about and new partners to talk with, linguistic knowledge must be flexible: we give old words new meanings on the fly. In this talk, I will present work investigating the cognitive mechanisms that support this balance between stability and flexibility in social interaction. First, I will introduce a theoretical framework recasting communication as a meta-learning problem and propose a computational model that formalizes the problem of coordinating on meaning as hierarchical probabilistic inference: community-level expectations provide a stable prior, and dynamics within an interaction are driven by partner-specific adaptation. Next, I will show how recent connections between this hierarchical Bayesian framework and continual learning in deep neural networks can be exploited to implement and evaluate a neural image-captioning agent that successfully adapts to human partners in real time. Finally, I will provide an empirical basis for further model development by quantitatively characterizing convention formation behavior in a new corpus of natural-language communication in the classic Tangrams task. Using techniques from natural language processing to examine the (syntactic) structure and (semantic) content of referring expressions, we find that pairs coordinate on equally efficient but increasingly idiosyncratic solutions to the problem of reference. Taken together, this line of work builds a computational foundation for a dynamic view of meaning in communication.