Nathaniel Daw, New York University
Multiple decision-making systems in the brain: function and dysfunction
Plenary Keynote Lecture
The idea that decisions can arise from multiple, distinct and even competing systems underlies prominent conceptions of a number of disorders, particularly those involving self-control or compulsion such as drug abuse or obsessive-compulsive disorder. However, the ubiquity of these sorts of dual-systems views of behavior in psychology, psychiatry, and neuroscience masks their essentially puzzling nature. First, differentiating the influences of such systems on behavior poses both experimental and conceptual challenges, particularly given the traditional understanding of their characteristics in terms of relatively loose descriptors such as "reflective" vs "reflexive." Also, having multiple decision makers does not really solve the decision-making problem but rather compounds it: if their preferences differ, a choice must somehow be made between them. I discuss recent work that reframes these issues computationally, in terms of different strategies for reinforcement learning, known as model-free and model-based algorithms. Model-free algorithms are closely tied to predominant theories of the dopamine system and of drugs of abuse, and to classic notions of reinforcement in psychology going back to Thorndike's law of effect. Model-based algorithms provide a framework for formalizing more deliberative or rule-based decision-making, such as planning using cognitive maps. Such a quantitative description helps to explain why the brain implements multiple strategies and how it can adaptively deploy them in appropriate circumstances. It has also helped to quantify and differentiate the contributions of both mechanisms to behavior and neural signaling. Finally, I discuss how these mechanisms might be compromised in disorders.
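The distinction can be made concrete with a minimal sketch of the two strategies in a hypothetical two-stage task; all names, parameters, and the task itself are illustrative, not taken from the lecture:

```python
import random

random.seed(0)

# Hypothetical two-stage task: first-stage action 0 usually leads to
# state 0 (reward 1), action 1 to state 1 (reward 0), so a learner of
# either kind should come to prefer action 0.
T_TRUE = [[0.7, 0.3], [0.3, 0.7]]   # P(second-stage state | first-stage action)
R_STATE = [1.0, 0.0]                # terminal reward of each second-stage state
ALPHA = 0.05                        # learning rate (illustrative)

q_mf = [0.0, 0.0]                   # model-free: cached action values
t_hat = [[0.5, 0.5], [0.5, 0.5]]    # model-based: learned transition model

for _ in range(3000):
    a = random.randrange(2)                        # sample both actions
    s = 0 if random.random() < T_TRUE[a][0] else 1
    r = R_STATE[s]
    # Model-free (law-of-effect style): nudge the cached value by the
    # reward prediction error, without representing task structure.
    q_mf[a] += ALPHA * (r - q_mf[a])
    # Model-based: learn the structure (transition probabilities) instead.
    for s2 in range(2):
        t_hat[a][s2] += ALPHA * ((1.0 if s2 == s else 0.0) - t_hat[a][s2])

# Model-based values are computed at decision time by evaluating the
# learned model -- i.e., planning over a "cognitive map" of the task.
q_mb = [sum(t_hat[a][s] * R_STATE[s] for s in range(2)) for a in range(2)]
```

In this stationary task the two strategies converge to the same preference; they dissociate when the environment changes, since the model-based planner re-evaluates immediately while the cached model-free values must be relearned.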
Raymond J. Dolan, Wellcome Trust Centre for Neuroimaging
Why computational psychiatry?
I will briefly summarise the standard approach to the study of psychiatric disease, highlighting an implicit dependence on descriptive psychopathology as embodied in classification schemes such as ICD and DSM. Computational psychiatry is fundamentally an attempt to circumvent the assumptions that underpin these standard approaches to the problems of mental illness. This approach rests on a view that parameterising fundamental psychological and neurobiological processes within a computational framework can provide a basis to understand how these processes go awry in disease, and in so doing it aspires to change the landscape of research into serious mental illness.
Emrah Düzel, German Center for Neurodegenerative Diseases (DZNE)
A neo-Hebbian framework for episodic memory
Plenary Teaching Lecture
While studies in rodents suggest that the neurotransmitter dopamine is necessary for enabling the molecular consolidation of hippocampal plasticity, very little is known about the role of dopamine in hippocampus-dependent memory in humans. On the basis of functional imaging studies and pharmacological experiments using the dopamine precursor levodopa, I will discuss what role dopamine may play in human episodic memory (a form of memory that is critically dependent on the hippocampus) in young and older adults. In particular, I will emphasize the motivational regulation of episodic memory from the vantage point of novelty and reward processing in dopaminergic circuitry. Our data point towards a framework in which dopamine release in response to novelty promotes the hippocampal consolidation of episodic memories, while novelty processing in dopaminergic circuitry motivates exploratory behavior.
Michael J. Frank, Brown University
Interactive dynamics of corticostriatal circuits in learning and decision-making
Plenary Keynote Lecture
The striatum and dopaminergic systems have long been implicated in reward-based behavior, with debates focusing on the relative roles of this system in learning, motor performance, and reward-based decision-making. I will present computational models (neural network and a novel algorithmic analysis) implicating the corticostriatal system at the intersection of all of these functions -- not independently, but interactively. Dopamine modulates the extent to which choice incentives are influenced by positive vs negative value, both during the decision process itself and during learning, with each of these effects reciprocally influencing the other. Imbalances of this system can lead to aberrant decision and learning processes in psychiatric conditions. These same principles can be extended to account for complex interactive dynamics among multiple corticostriatal circuits during higher-level cognitive selection and learning.
Karl Friston, Wellcome Trust Centre for Neuroimaging
The Bayesian brain, free energy and psychopathology
Plenary Keynote Lecture
How much about our interactions with – and experience of – our world can be deduced from basic principles? This talk reviews recent attempts to understand the self-organised behaviour of embodied agents, like ourselves, as satisfying basic imperatives for sustained exchanges with the environment. In brief, one simple driving force appears to explain many aspects of perception and action – the minimisation of surprise or prediction error. In the context of perception, this corresponds to Bayes-optimal predictive coding (that suppresses exteroceptive prediction errors) and – in the context of action – reduces to classical motor reflexes (that suppress proprioceptive prediction errors). We will look at some of the phenomena that emerge from this single principle, such as perceptual synthesis and action selection. I will focus on the key role of precision in making predictions under uncertainty. Neurobiologically, precision may be encoded by the postsynaptic gain of neuronal populations reporting prediction error and is a clear target of the neuromodulatory pathologies implicated in many psychiatric disorders. I hope to illustrate this using simulations of hallucinations and failures of affordance, of the sort seen in schizophrenia and Parkinson's disease.
Quentin Huys, Gatsby Computational Neuroscience Unit
Behavioural data modelling in psychiatry
Plenary Teaching Lecture
The first part of the talk will take the form of a brief tutorial overview of techniques to model behavioural data. Using a simple reinforcement learning model, I will describe standard techniques for fitting models, highlighting some common pitfalls and remedies. The second part will focus on hypothesis testing by means of model comparison and validation. I will then focus on some specific issues that arise in psychiatric settings, which have to do with group comparisons and psychometric or other correlates. In the third part, I will attempt to delineate an application to the problem of co-morbidity between depression and anxiety disorders.
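The model-fitting step described above can be sketched as follows: simulate choices from a simple Rescorla-Wagner + softmax agent on a two-armed bandit, then recover the learning rate by minimising the negative log-likelihood. The task parameters are illustrative, and a crude grid search stands in for the gradient-based optimisers used in practice:

```python
import math
import random

random.seed(0)

def simulate(alpha, beta, n=500):
    """Generate choices and rewards from a Rescorla-Wagner + softmax agent
    on a hypothetical two-armed bandit (reward probabilities illustrative)."""
    p_reward = [0.8, 0.2]
    q = [0.0, 0.0]
    data = []
    for _ in range(n):
        p1 = 1.0 / (1.0 + math.exp(-beta * (q[1] - q[0])))  # softmax, 2 arms
        c = 1 if random.random() < p1 else 0
        r = 1.0 if random.random() < p_reward[c] else 0.0
        data.append((c, r))
        q[c] += alpha * (r - q[c])                          # RW update
    return data

def negloglik(alpha, beta, data):
    """Negative log-likelihood of the same model given observed data."""
    q = [0.0, 0.0]
    nll = 0.0
    for c, r in data:
        p1 = 1.0 / (1.0 + math.exp(-beta * (q[1] - q[0])))
        nll -= math.log(p1 if c == 1 else 1.0 - p1)
        q[c] += alpha * (r - q[c])
    return nll

# Simulate with a known learning rate, then fit it back by grid search.
data = simulate(alpha=0.3, beta=3.0)
grid = [i / 20 for i in range(1, 20)]
alpha_hat = min(grid, key=lambda a: negloglik(a, 3.0, data))
```

Such parameter-recovery checks on synthetic data are one of the standard remedies for the fitting pitfalls the tutorial addresses.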
Thomas R. Insel, National Institute of Mental Health
Psychiatry in the 21st century will be driven by a greater understanding of mental disorders as brain disorders. While some have considered this a “reductionistic” approach to the mind, there is thus far little evidence that the brain is any simpler than our constructs about mental function. Whether considered at the molecular, cellular, or systems level, brain function remains far more complex and mysterious than we can understand with current methods. The era of studying single transcripts, single cells, or single circuits has given way to network science, based on the increasing recognition that mental function is an emergent property of distributed neural activity. Just as the study of genomic function has required the development of computational methods for analyzing thousands of individual variants, the study of neural networks has been leveraged by computational methods that can begin to manage the extraordinarily large datasets of modern neuroscience. There remain, however, important questions about how these methods are applied and whether they reflect the critical links between neural and mental activity. This introductory lecture will lay out what is at stake – why it matters that we have an accurate method to describe neural network activity in the human brain and what we can expect as psychiatry evolves into clinical neuroscience.
Zebulun Kurth-Nelson, Wellcome Trust Centre for Neuroimaging
Measuring delay discounting: task design and data analysis
In this workshop I will describe methods for assessing delay discounting, an important measure of impulsivity. Impulsivity is a strong candidate for a trans-disease endophenotype, a basis for dimensional psychiatry. Measuring impulsivity can help to dissect complex disorders like addiction, both behaviorally and neurally. The workshop will have three parts. First, I will go through the quantitative methods that are most commonly used to describe discounting and show how these are linked to theoretical constructs in decision-making. Second, I will talk about methods for designing experimental protocols and analyzing data, to maximize sensitivity and avoid confounds. Third, I will highlight recent results in the discounting literature that illustrate the importance of understanding discounting and the complex factors that modulate it.
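The quantitative description most commonly used in this literature is the hyperbolic model, in which subjective value falls as V = A / (1 + kD) with delay D, and the single parameter k indexes impulsivity. A minimal sketch, with illustrative (not normative) k values:

```python
def hyperbolic_value(amount, delay, k):
    """Subjective value under hyperbolic discounting: V = A / (1 + k * D).
    Larger k means steeper discounting, i.e. greater impulsivity."""
    return amount / (1.0 + k * delay)

# Illustrative k values: the steeper discounter devalues 100 units at a
# 30-day delay far more than the shallow discounter does.
shallow = hyperbolic_value(100.0, 30.0, k=0.01)  # 100 / 1.3, about 76.9
steep = hyperbolic_value(100.0, 30.0, k=0.10)    # 100 / 4.0 = 25.0
```

In practice k is estimated from the indifference points at which a participant switches between smaller-sooner and larger-later options.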
Máté Lengyel, University of Cambridge
Computational modelling of synaptic function
Plenary Teaching Lecture
Synapses are the main communication channels between neurons, and as such their functioning is essential for the operation of neural circuits. Importantly, the efficacy of synaptic transmission between neurons shows a great deal of variability and plasticity over time, both on short (millisecond) and long (second) time scales. While long-term synaptic plasticity has been generally seen as being central to information storage in the nervous system, little is known about how such information can be read out from synapses, and how short-term synaptic plasticity may contribute to efficient information processing. I will present computational analyses of information storage, recall, and transmission, and show how several biophysical properties of neurons and synapses can be understood as contributing to and resolving fundamental informational bottlenecks that constrain neural computations.
Ulman Lindenberger, Max Planck Institute for Human Development
Why computational methods in cognitive ageing research?
Normal ageing alters the chemistry, structure, and functional organization of the brain, and results in a wide range of behavioural changes. Computational models help to identify the mechanisms that govern the onset and amount of these changes, and to discover antecedents of preserved cognitive abilities in old age. This Advanced Course and Symposium holds the promise to foster the development and application of computational methods that reorganize and improve our understanding of behavioural ageing.
Christoph Mathys, University of Zurich
Bayesian hierarchical modelling of learning
Probability theory formally prescribes how agents should optimally learn about their environment from sensory information. This learning corresponds to the updating of beliefs according to Bayes' theorem. While an exact Bayesian approach is optimal from the perspective of probability theory, it is questionable whether it is, or can be, implemented by the brain. Exact Bayesian belief updating entails complicated integrals which are not tractable analytically and difficult to evaluate in real time. Recently, however, theoretical advances have enabled computationally efficient approximations to exact Bayesian inference during learning (Friston, 2009; Daunizeau et al., 2010; Mathys et al., 2011) and have furnished a basis for biologically plausible mechanisms that might underlie belief updating in the brain. These approaches rest on variational Bayesian techniques which optimize a free-energy bound on the surprise about sensory inputs, given a model of the environment. In this workshop, participants are introduced to such models and their application to experimental data. Specifically, we focus on a recent derivation of reinforcement learning from Bayesian principles (Mathys et al., 2011) which rests on a generic hierarchical model of the environment and its (in)stability. This model provides simple update equations that are computationally efficient. Furthermore, they take the form of precision-weighted prediction errors, which correspond to the modulatory effects of dopamine as proposed by Friston (2009). In this workshop, we apply the generic learning model to behavioral data from human subjects in various learning and decision-making tasks.
We show how applying the generic learning model to different decision models and fitting these various combinations to subjects' behavioral responses provides estimates of their time-varying beliefs about the state of their environment; we demonstrate how such inferred belief trajectories can be used to construct regressors in the analysis of fMRI data using SPM; and we indicate how the performance of competing models can be evaluated using Bayesian model comparison.
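The flavour of such update equations can be conveyed by the simplest one-level Gaussian case; this is a sketch of the general form, not the hierarchical model of Mathys et al. (2011):

```python
def gaussian_update(mu, pi, x, pi_obs):
    """Update a Gaussian belief (mean mu, precision pi) with an observation x
    of precision pi_obs. The posterior mean moves by a prediction error
    weighted by the relative precision of the new input."""
    pi_post = pi + pi_obs                         # precisions add
    mu_post = mu + (pi_obs / pi_post) * (x - mu)  # precision-weighted PE
    return mu_post, pi_post

# An uncertain belief is pulled strongly toward a precise, surprising input:
# weight = 3 / (1 + 3) = 0.75, so the mean moves from 0.0 to 0.75.
mu, pi = gaussian_update(mu=0.0, pi=1.0, x=1.0, pi_obs=3.0)
```

In the full hierarchical model, the precision weights themselves are updated from trial to trial, which is what allows learning rates to adapt to the estimated (in)stability of the environment.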
Anthony R. McIntosh, Rotman Research Institute
Development of a noisy brain
Plenary Keynote Lecture
My talk will present an overview of a recent line of work that examines how brain noise changes across the lifespan. I will show how this noise follows the ubiquitous inverted-U function and how such changes can be understood in the framework of information processing capacity in nonlinear systems. The relation of brain noise to clinical conditions and potential impact in recovery will conclude the presentation.
Rosalyn Moran, Wellcome Trust Centre for Neuroimaging
Dynamic Causal Modelling of fMRI data
This lecture will outline the biophysical and mathematical foundations of Dynamic Causal Modelling (DCM). DCM is a methodology for the analysis of brain connectivity. It can be applied to a multitude of imaging modalities – in this lecture DCM for fMRI will be described. DCM, in all instantiations, provides a generative model for empirical responses, which is inverted using Bayesian statistical techniques in order to make inferences on the neural substrates of connected brain networks. The idea behind DCM is that neuroimaging data are the response of a dynamic input–output system to endogenous fluctuations or exogenous (experimental) inputs. One models the measured response of a network of sources, where each source corresponds to an ensemble of neural activity, described coarsely in the case of DCM for fMRI. This lecture will describe the dynamical systems that comprise the generative model and the corresponding inversion procedures and statistics. It will demonstrate how the techniques can be applied in practice through the use of published case studies.
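For orientation, the dynamical system at the heart of DCM for fMRI, in its standard bilinear form, is the neural state equation

```latex
\dot{x} = \Bigl( A + \sum_{j} u_j \, B^{(j)} \Bigr) x + C u
```

where $x$ are the hidden neural states of the network's sources, $u$ the exogenous (experimental) inputs, $A$ the fixed endogenous coupling among sources, $B^{(j)}$ the modulation of that coupling by input $j$, and $C$ the direct driving influence of inputs on sources; a hemodynamic forward model then maps $x$ to the predicted BOLD response.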
Timo von Oertzen, University of Virginia
Modelling longitudinal data
Observations of change processes are so different from each other that there exists no “one-fits-all” solution. Even less does there exist a dictionary in which we can look up the appropriate model or method for our specific data. Instead, obnoxious as it sounds, we have to restart our brain on every new study, preferably even inventing the best model for deriving the predictions that allow us to achieve what we want to achieve with our data. In this workshop, we will discuss the foundations of longitudinal modelling and some common pitfalls. We will introduce Onyx and OpenMx as tools to set up our models, and look at some common examples of longitudinal models, specifically Latent Growth Curve Models, Autoregression Models, Dual Change Score Models, and Latent Differential Equations.
Klaas Enno Stephan, Swiss Federal Institute of Technology
Dynamic causal modelling (DCM): Theory and translational applications
Plenary Teaching Lecture
Dynamic Causal Modelling (DCM) is a generic Bayesian framework for inversion and comparison of hierarchical system models that link (hidden) neuronal population dynamics to observed measurements by a modality-specific biophysical forward model. Current implementations of DCM exist for fMRI, EEG/MEG and LFP data. Importantly, given a set of alternative hypotheses about the mechanisms (in terms of synaptic functions, connectional architecture, and modulatory influences) that gave rise to the observed data, the relative plausibility of competing models can be determined using Bayesian model selection. This presentation outlines the theoretical foundations of DCM and shows examples of its potential utility for translational applications.
Xiao-Jing Wang, Yale University
Circuit mechanisms and theory of working memory and decision-making
Plenary Keynote Lecture
Recent work in animal physiology, human brain imaging and computational modelling has begun to identify specific neural circuit mechanisms of cognitive functions. In particular, our work has focused on the circuit mechanism in the prefrontal cortex (PFC) and posterior parietal cortex (PPC) that underlies working memory, our ability to actively hold and manipulate information online in the absence of direct sensory stimulation. The circuit mechanism we discovered through biologically realistic modelling has two core components: slow reverberating dynamics capable of sustaining the persistent activity patterns underlying working memory storage, and winner-take-all competition for stimulus selectivity and resistance against distractors during working memory. These turned out to be exactly what is needed for the neural computations underlying decision-making, because a deliberate decision typically requires a temporal accumulation of evidence for or against different choice options (via slow reverberation) leading to a categorical choice (through winner-take-all competition). Therefore, a fundamental “cognitive-type” circuit mechanism is capable of both working memory and decision-making. This finding has led my group and others to investigate all sorts of decision processes, including selective attention, reward-based economic choice behavior, rule-guided flexible sensorimotor mapping, pattern-match decisions, inhibitory control of action, and even probabilistic inference. In this lecture, I will provide a survey of these findings and explain basic conceptual and mechanistic advances. Elucidation of "cognitive-type" circuit mechanisms that are capable of multiple functions underlying flexible behavior represents a significant step in our efforts to understand cognitive deficits associated with psychiatric illness and could potentially lead to novel interventions.