John Ashburner, Wellcome Trust Centre for Neuroimaging
Computational brain anatomy
Plenary Teaching Lecture
I will present an overview of computational procedures for examining neuroanatomical variability, with a focus on approaches that can be applied using the SPM software package. I'll begin by discussing simple volumetric approaches, with an emphasis on voxel-based morphometry and the pre-processing steps it involves in SPM. Most volumetric studies involve univariate approaches, with a correction for some global measure, such as total brain volume. I'll go on to discuss multivariate approaches, which may allow the overall form of the brain to be more accurately modelled. Such models of anatomical variability may prove accurate enough to make useful clinical predictions.

Michael Breakspear, QIMR Berghofer Medical Research Institute
Meet the Fokkers: Modelling large-scale brain dynamics
Plenary Keynote Lecture
The “laws of motion” that describe the dynamics of the brain’s physical states should, in principle, be governed by a closed set of equations that could be discovered. What might those equations look like, how will they be obtained, and will they be relevant to neuroscientists undertaking empirical research? Here I review the stochastic differential equations that have thus far been inferred from large-scale neuroimaging data. These equations take two forms: mean field models that relate detailed biophysical mechanisms to empirical observables, and stochastic normal form models that strip those details away to reveal the fundamental relationship between deterministic and stochastic processes, and the dynamical skeleton on which this relationship unfolds. Recent analyses of neurophysiological data have uncovered both of these ingredients, namely the multi-attractor dynamical skeleton and the complex state- and history-dependent stochastic influences. Whilst these equations predict particular instances of brain dynamics, they can also be recast into a master equation that describes the ensemble of all possible states, namely the Fokker-Planck equation. Although the definitive form of the Fokker-Planck equation for the brain is not yet clear, it holds the potential to relate physical states of the brain to the probabilistic principles of action and perception, and as such, is a useful guide.
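For reference, a standard textbook form of the Fokker-Planck equation (not necessarily the exact variant discussed in the lecture) evolves the probability density $p(x,t)$ of a state $x$ under a deterministic drift $f$ and a stochastic diffusion $D$:

```latex
\frac{\partial p(x,t)}{\partial t}
  = -\frac{\partial}{\partial x}\bigl[f(x)\,p(x,t)\bigr]
  + \frac{1}{2}\,\frac{\partial^2}{\partial x^2}\bigl[D(x)\,p(x,t)\bigr]
```

The drift term corresponds to the deterministic dynamical skeleton, the diffusion term to the stochastic influences; together they describe the ensemble of all possible states rather than any single trajectory.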

Roshan Cools, Radboud University Medical Center and Radboud University Nijmegen
Dopamine and the motivational control of cognition
Plenary Keynote Lecture
Willpower: scholars from a variety of disciplines have been fascinated by this concept for centuries. In psychology, it is a concept intimately linked with phenomena such as cognitive control and motivation. What motivates us to exert cognitive control? Why are some people driven by the least bit of stimulation, while others have difficulty getting off the couch for almost anything? Accumulating evidence indicates that both cognitive control and motivation implicate brain circuitry connecting the prefrontal cortex and the striatum. These brain regions are highly sensitive to modulation by the ascending neuromodulator dopamine. However, surprisingly little is known, at either the psychological or neural level, about the interaction between motivation and cognitive control: Does being motivated imply that we have more cognitive control over our behaviour? Are drive and motivation always a good thing, or can motivation also have detrimental consequences for cognitive control? I will present recent work in my group indicating that motivation does not enhance all cognitive processes in a nonspecific manner, but can in fact impair some cognitive processes. Our findings are in line with findings that (i) changes in appetitive motivation are accompanied by changes in dopamine, and (ii) dopamine has contrasting effects on cognitive control depending on current task demands and associated neural systems. One important implication of the observations is that being motivated does not necessarily contribute to greater cognitive control.

Philip R. Corlett, Yale School of Medicine
Psychopharmacology in psychiatry
Plenary Teaching Lecture
We do not yet understand how the brain produces the symptoms of psychosis, the disconnection from reality characterized by hallucinations (odd perceptions without stimulus) and delusions (inappropriate beliefs) that attends serious mental illnesses like schizophrenia. One experimental approach involves safely and temporarily inducing psychosis-like symptoms in healthy volunteers with drug administration. Different drugs produce both common and distinct symptoms. A challenge is to understand how apparently different manipulations can produce overlapping symptoms. Bayesian formulations of information processing in the brain provide a framework that maps onto neural circuitry and gives us a context within which we can relate the symptoms of psychosis to their underlying causes. This helps us to understand the similarities and differences across the common models of psychosis. The model focuses on information processing in terms of both prior expectancies and current inputs. A mismatch between these leads us to update inferences about the world and to generate new predictions for the future. According to this model, what we experience shapes what we learn, and what we learn modifies how we experience things. This simple idea gives us a powerful and flexible way of understanding the symptoms of psychosis where perception, learning and inference are deranged. We will review the model in light of what we understand about the neuropharmacology of psychotomimetic drugs and thereby attempt to account for the common and the distinctive effects of NMDA receptor antagonists, serotonergic hallucinogens, cannabinoids and dopamine agonists.

Peter Dayan, Gatsby Computational Neuroscience Unit
Computational neuromodulation
Plenary Keynote Lecture
That neuromodulators such as dopamine, serotonin, acetylcholine and norepinephrine are the instruments of effective decision-making and control is shown clearly by the extent to which they are the villains of the piece in psychiatric and neurological disorders. I will review computational approaches to neuromodulation in this domain, looking at phasic quantities such as prediction errors, tonic quantities such as long-run reward rates, and quantities varying over intermediate timescales such as various forms of uncertainty. New experimental methods are being developed for recording and manipulating neuromodulatory systems; we shall consider some pointers for the future.

Gustavo Deco, Universitat Pompeu Fabra
Linking the functional and structural human connectome
Opening Lecture
The ongoing activity of the brain at rest, i.e. under no stimulation and in absence of any task, is astonishingly highly structured into spatio-temporal patterns. These spatio-temporal patterns, called resting state networks, display low-frequency characteristics (<0.1 Hz) observed typically in the blood-oxygenation level-dependent (BOLD) fMRI signal of human subjects. We aim here to understand the origins of resting state activity through modelling. When biologically realistic, DTI/DSI-based neuroanatomical connectivity is integrated into a brain model, the resting state functional connectivity that emerges from the network best fits the experimentally observed functional connectivity in humans when the brain network operates at the edge of instability. Under these conditions, the slowly fluctuating (<0.1 Hz) resting state networks emerge as structured noise fluctuations around a stable low-firing-activity equilibrium state in the presence of latent “ghost” multistable attractors. The multistable attractor landscape defines a functionally meaningful dynamic repertoire of the brain network that is inherently present in the neuroanatomical connectivity.

Pascal Fries, Ernst Strüngmann Institute for Neuroscience
Communication through coherence
Plenary Keynote Lecture
I will show that natural viewing induces very pronounced gamma-band synchronization in visual cortex. This early visual gamma synchronizes to higher areas only if it conveys attended stimuli. Attentional top-down control is mediated via beta-band synchronization. Top-down beta enhances bottom-up gamma. Across 28 pairs of simultaneously recorded visual areas, gamma mediates bottom-up and beta top-down influences. Finally, I will show how pyramidal cells and interneurons are differentially synchronized and affected by attention and by stimulus repetition.

Douglas D. Garrett, Max Planck Institute for Human Development
Brain signal variability and dynamics
Methods Workshop
Neuroscientists have long observed that brain activity is naturally variable from moment-to-moment, yet neuroimaging research has rarely considered signal variability as a within-person measure of interest. Our work on younger and older adults suggests that within-person brain signal variability offers highly predictive, complementary, and even orthogonal views of brain function compared to traditional mean-based measures. In particular, we continue to find that older, poorer performing adult brains often exhibit less signal variability, within and across brain regions and tasks. Accordingly, I will discuss the idea that contrary to traditional theoretical expectations of adult-developmental increases in "neural noise," brain ageing could instead be re-conceived as a generalized process of increasing system rigidity and loss of dynamic range. I will also cover various practical aspects of computing and analyzing brain signal variability so that workshop attendees can easily incorporate various signal variability measures into their own research programs.

Quentin Huys, Swiss Federal Institute of Technology (ETH)
Christoph Mathys, Wellcome Trust Centre for Neuroimaging
Hierarchical modelling of learning
Methods Workshop
In the first part, we will look at modelling behavioural choice data. We will introduce generative learning models and work through some simple examples of how to fit these to data. We will describe maximum likelihood, empirical Bayesian, fixed effects and random effects approaches at both the single-subject and group levels, in terms of parameters and models, and also discuss weighted regression of parameters onto other data such as questionnaire measures. At the end of the first part, we will introduce the notion of model complexity and discuss approaches to model comparison. In the second part, we will look at examples of hierarchical learning models, in particular the hierarchical Gaussian filter (HGF). The HGF is a recent derivation of one-step update equations from Bayesian principles that rests on a hierarchical generative model of the environment and its (in)stability. It offers a principled, flexible, and efficient framework for the modelling of individual differences in learning in behaving agents. This is illustrated by three examples from different domains: a location cueing paradigm with varying cue validity, an audio-visual association learning task, and a prediction task exploring optimism bias that was implemented as a smartphone app.
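As a flavour of the first part, the following is a minimal, illustrative sketch (not the workshop's own code) of maximum-likelihood fitting of a generative learning model to simulated choice data: a Rescorla-Wagner value update with a softmax choice rule on a two-armed bandit, with the learning rate and inverse temperature recovered by minimising the negative log-likelihood.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, choices, rewards):
    """Negative log-likelihood of observed choices under a
    Rescorla-Wagner model with a softmax choice rule.

    params  : (learning rate alpha, inverse temperature beta)
    choices : array of chosen options (0 or 1) per trial
    rewards : array of obtained rewards per trial
    """
    alpha, beta = params
    q = np.zeros(2)              # action values, initialised to zero
    nll = 0.0
    for c, r in zip(choices, rewards):
        p = np.exp(beta * q) / np.sum(np.exp(beta * q))  # softmax
        nll -= np.log(p[c] + 1e-12)
        q[c] += alpha * (r - q[c])   # prediction-error update
    return nll

# Simulate a subject with known parameters, then recover them by ML.
rng = np.random.default_rng(0)
true_alpha, true_beta = 0.3, 3.0
reward_probs = [0.2, 0.8]            # reward probability per arm
q, choices, rewards = np.zeros(2), [], []
for _ in range(500):
    p = np.exp(true_beta * q) / np.sum(np.exp(true_beta * q))
    c = rng.choice(2, p=p)
    r = float(rng.random() < reward_probs[c])
    q[c] += true_alpha * (r - q[c])
    choices.append(c)
    rewards.append(r)

fit = minimize(neg_log_likelihood, x0=[0.5, 1.0],
               args=(np.array(choices), np.array(rewards)),
               bounds=[(1e-3, 1.0), (1e-2, 20.0)])
alpha_hat, beta_hat = fit.x
```

Group-level (random effects) extensions of this idea place a population distribution over `alpha` and `beta` across subjects; model comparison then trades off fit against the model complexity introduced in the workshop.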

Zebulun Kurth-Nelson, Wellcome Trust Centre for Neuroimaging
Dynamics of association retrieval
MPS-UCL Fellows Talk
In the venerable paradigm called sensory preconditioning, two neutral stimuli are first linked by being paired. This link can then be shown to mediate generalization such that rewards subsequently associated with one stimulus (which we call 'direct') are also expected when the other ('indirect') is presented alone. Sensory preconditioning taps into fundamental processes by which internal representations of stimuli are constructed and bound in a manner that can be exploited to facilitate adaptive decision-making performance. Here, we used the spatiotemporal precision of MEG to examine the short-time evolution of these representational structures. We chose indirect stimuli from three distinct categories (faces, body parts or scenes) and used multivariate methods to decode the category from the MEG signal elicited by its direct pair, during the latter's conditioning to reward. We found a rich temporal structure, with two distinct aspects of the representation of the indirect stimulus being evident during conditioning: one at the time the direct stimulus was presented and the other at the time of the reward. The strength of the representation at the earlier time was correlated with a measure of the strength of generalization of reward from indirect to direct stimulus. This suggests that decomposing the spatiotemporal trajectory of representations is essential to understand how they are used in adaptive decision making.

Máté Lengyel, University of Cambridge
Plenary Teaching Lecture
Priors pervade our cognition, from low-level perception to high-level reasoning, and their deficiencies lead to pathologic behaviour. The lecture will cover the following topics:
1. Theoretical foundations (Bayesian inference).
2. The use of priors in sensorimotor control and high-level cognition.
3. Measuring priors empirically using behavioural and neural data.
4. Complexity, subject-specificity, and task-independence of priors.
5. Adaptation of priors to the natural environment: behavioural and neural evidence.
6. Experience-dependent adaptation of priors (aka learning): behavioural and neural evidence.
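The Bayesian foundations in topic 1 rest on Bayes' rule, which combines a prior $p(\theta)$ over a latent quantity $\theta$ with a likelihood $p(x \mid \theta)$ of the observed data $x$ to yield a posterior:

```latex
p(\theta \mid x) = \frac{p(x \mid \theta)\,p(\theta)}{p(x)}
```

Topics 3-6 then concern where the prior $p(\theta)$ comes from: how it can be measured empirically, and how it adapts to the statistics of the natural environment and to individual experience.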

Yael Niv, Princeton University
Task representations, why they matter, and how we learn them
Plenary Keynote Lecture
In recent years ideas from the computational field of reinforcement learning (RL) have revolutionized the study of learning in the brain, famously providing new, precise theories about the effects of dopamine on learning in the basal ganglia. However, the first ingredient in any RL algorithm is a representation of the task as a sequence of states. Where do these state representations come from? In this talk I will first argue, and demonstrate using behavioural experiments, that animals and humans learn the structure of a task, thus forming a state space through experience. I will then present some results regarding the algorithms that they may use for such learning, and their neural implementation.
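The dopaminergic theory alluded to identifies phasic dopamine with the temporal-difference reward prediction error, written here in its standard textbook form over the state sequence that a task representation provides:

```latex
\delta_t = r_t + \gamma\,V(s_{t+1}) - V(s_t)
```

Every term depends on the states $s_t$, which is why the question of where state representations come from is logically prior to any such learning rule.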

Robb Rutledge, Wellcome Trust Centre for Neuroimaging
A computational and neural model of momentary subjective well-being
MPS-UCL Fellows Talk
The subjective well-being or happiness of individuals is an important metric for societies, but we know little about how the cumulative influence of daily life events is aggregated into subjective feelings. Using computational modelling, we show that emotional reactivity in the form of momentary happiness in response to outcomes of a probabilistic reward task is explained not by current wealth, but by the combined influence of recent reward expectations and prediction errors resulting from those expectations, quantities related to the phasic activity of dopamine neurons. Using functional MRI, we show that the very same influences account for task-dependent activity in the ventral striatum in a manner akin to the influences underpinning changes in happiness. We then manipulated dopamine levels with levodopa, the precursor to dopamine, and found that levodopa selectively enhanced subjective well-being following wins from gambles with potential gains but not losses, demonstrating a causal role for dopamine in mediating the impact of rewards on subjective well-being.
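A model of the kind described (sketched here from the abstract; the exact regressors and fitted weights are specified in the authors' work) writes momentary happiness as a baseline plus exponentially discounted sums of recent certain rewards (CR), gamble expected values (EV), and reward prediction errors (RPE):

```latex
\text{Happiness}(t) = w_0
  + w_1 \sum_{j=1}^{t} \gamma^{\,t-j}\,\mathrm{CR}_j
  + w_2 \sum_{j=1}^{t} \gamma^{\,t-j}\,\mathrm{EV}_j
  + w_3 \sum_{j=1}^{t} \gamma^{\,t-j}\,\mathrm{RPE}_j
```

The forgetting factor $0 \le \gamma \le 1$ makes recent trials weigh more than distant ones; note that cumulative wealth itself does not appear.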

Klaas Enno Stephan, Swiss Federal Institute of Technology
Advanced dynamic causal modelling (DCM)
Plenary Teaching Lecture
Dynamic causal modelling (DCM) is a Bayesian framework for identification and comparison of dynamic system models for neuroimaging data. In this presentation, I will outline the ideas behind DCM, with a particular focus on model selection, and show examples of how it can be used to address questions about psychiatric disease.

Manuel Völkle, Max Planck Institute for Human Development
Within vs. between person differences in behaviour
Methods Workshop
The vast majority of empirical research in the behavioural sciences is based on the analysis of between-person (BP) variation. In contrast, much of applied psychology is concerned with the analysis of variation within individuals (WP). Furthermore, the mechanisms specified by psychological theories generally operate within, rather than across, individuals. This disconnect between research practice, applied demands, and psychological theories constitutes a major threat to the conceptual integrity of the field. In this workshop I want to approach this topic from three different perspectives: models of causal inference, panel data analysis, and person-specific research. I will begin with a short introduction to the framework of potential outcomes and the conditions under which causal claims can be made at the individual and group level. Next, I will discuss different approaches to longitudinal data analysis and how panel data may allow researchers to decompose different sources of variance in order to make causal claims in situations where randomization is not possible. Finally, I will focus on WP and BP sources of variation in psychological constructs and will discuss how simultaneously considering both may help to identify possible reasons for nonequivalence of BP and WP structures, as well as to establish areas of convergence. Besides introducing basic concepts of causal inference, panel data analysis, and person-specific research, the workshop will also provide a forum to discuss recent controversies and unresolved conceptual and statistical-methodological problems.
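In the potential-outcomes framework mentioned above (standard notation, not the workshop's own), each individual $i$ has two potential outcomes, $Y_i(1)$ under treatment and $Y_i(0)$ under control, of which only one is ever observed; group-level causal claims are therefore framed as expectations, for example the average treatment effect:

```latex
\mathrm{ATE} = \mathbb{E}\bigl[Y_i(1) - Y_i(0)\bigr]
```

The individual-level effect $Y_i(1) - Y_i(0)$ is never directly observable, which is exactly why the conditions under which causal claims can be made at the individual versus the group level differ.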

Gabriel Ziegler, Wellcome Trust Centre for Neuroimaging
Computational brain anatomy in ageing using longitudinal MRI
Methods Workshop
In this workshop I will discuss approaches to modelling individual change parameters using repeated-measures MRI. We begin with anatomical preprocessing of longitudinal images and continue with Bayesian random effects (RFX) models of individual and ensemble structural trajectories. It is still an open question how cross-sectional and longitudinal estimators relate to each other, and how one might account for the high-dimensional character of spatial data. These RFX models are then applied to simulated and real MRI data. In the last part of the workshop I will present work in progress on structural change prediction models based on Gaussian processes, using a dynamical systems approach to longitudinal data.
