Yael Niv

Associate Professor, Princeton Neuroscience Institute

Research in the Niv lab focuses on the neural and computational processes underlying reinforcement learning and decision-making — the ongoing, day-to-day processes by which we learn from trial and error, without explicit instructions, to predict future events and to act on the environment so as to maximize reward and minimize punishment. We use computational modeling and analytical tools in combination with human functional imaging, with an emphasis on model-based experimentation. In particular, we are interested in normative explanations of behavior: models that offer a principled understanding of why our brain mechanisms use the computational algorithms that they do, and in what sense, if at all, these are optimal. In our hands, the main goal of computational models is not to simulate the system, but rather to understand what high-level computations that system is realizing and what functionality those computations fulfill.

Some questions that we are particularly interested in: How does the brain identify the critical aspects of a task that should be represented and learned about? How do we determine when one piece of experience is similar to another (generalization), so that information from both should be combined (learning), versus when two experiences reflect different situations that should be encoded separately (discrimination; memory)? What is the nature of the interaction between attention systems in the prefrontal cortex and reinforcement learning systems in the basal ganglia?
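To give a concrete flavor of the kind of trial-and-error learning described above, here is a minimal sketch of a standard temporal-difference (prediction-error) update on a two-armed bandit. This is a generic textbook illustration, not one of the lab's models; the reward probabilities, learning rate, and exploration scheme are all assumptions made for the example.

```python
# Minimal sketch of reinforcement learning from trial and error:
# a prediction-error (temporal-difference) update on a two-armed bandit.
# Illustrative only; all parameters below are assumptions for this example.
import random

REWARD_PROBS = [0.3, 0.7]   # assumed reward probability for each action
ALPHA = 0.1                 # assumed learning rate
EPSILON = 0.1               # assumed exploration rate
N_TRIALS = 1000

values = [0.0, 0.0]         # learned value (predicted reward) of each action

for t in range(N_TRIALS):
    # Choose an action: mostly exploit the higher-valued option, sometimes explore.
    if random.random() < EPSILON:
        action = random.randrange(2)
    else:
        action = max(range(2), key=lambda a: values[a])

    # Sample a binary reward from the environment.
    reward = 1.0 if random.random() < REWARD_PROBS[action] else 0.0

    # Prediction error: obtained reward minus predicted reward.
    prediction_error = reward - values[action]

    # Nudge the prediction for the chosen action toward the outcome.
    values[action] += ALPHA * prediction_error

print("Learned action values:", [round(v, 2) for v in values])
```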