Bernstein-Tutorials (on Sunday, 25th August 2013):
Co-Organiser: Felix Wichmann, BCCN Tübingen, University of Tübingen, Germany.
The Bernstein tutorials will take place on Sunday, 25th of August from 9:00 to 16:00. They are divided into a morning session (from 9:00 to 12:00) and an afternoon session (13:00 to 16:00). Each session will have a coffee break, and light snacks and sandwiches will be provided over the lunch break.
Tutorials A1-A2 will be full-day tutorials, covering both morning and afternoon sessions. Tutorials B1-B10 will be half-day tutorials. Assignment of these half-day tutorials to morning and afternoon sessions was decided in May by a poll among the participants registered for this event.
Reading and teaching material for each tutorial will be provided by the speakers either before or shortly after the meeting. For titles and confirmed speakers, see the following list.
Supported by: National Bernstein Network for Computational Neuroscience, Germany.
A1: Programming Bricolage for Psychophysicists: Essential Tools and Best Practices for Efficient Stimulus Presentation and Data Analysis
(Tiziano Zito, Berlin and Michael Hanke, Magdeburg, Germany)
This tutorial provides an opportunity to fill a few gaps in the typical training of a psychophysicist. It will help participants learn to program scripts for stimulus presentation and data analysis efficiently -- exploiting a computer instead of fighting it. We will demonstrate best practices and simple development tools that make coding faster and more robust, and the resulting code more functional and reusable for the next experiment and the next student. We expect our audience to be familiar with at least one programming language or environment (Python, Matlab, LabVIEW, IDL, Mathematica, C, Java, to name just a few) and to be willing to change their attitude towards software development.
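As a flavour of the kind of practice the tutorial advocates, here is a small Python sketch (illustrative only, not the tutors' material, with a made-up helper function): factoring analysis code into small, pure functions with built-in tests makes it faster to debug and reusable for the next experiment.

```python
# Illustrative sketch: a tiny, pure analysis function plus a test that
# can be re-run whenever the code changes. The function name and data
# are hypothetical, not from the tutorial itself.

def proportion_correct(responses, targets):
    """Fraction of trials on which the response matched the target."""
    if len(responses) != len(targets):
        raise ValueError("responses and targets must have equal length")
    hits = sum(r == t for r, t in zip(responses, targets))
    return hits / len(responses)

# A simple sanity check, kept next to the code it tests:
assert proportion_correct(["L", "R", "L"], ["L", "R", "R"]) == 2 / 3
```

Keeping such functions free of side effects (no globals, no file I/O) is what makes them trivially testable and portable between the stimulus script and the analysis script.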
A2: Modelling Vision
(Heiko Neumann (1), Ulm, and Lars Schwabe (2), Rostock, Germany)
This tutorial is structured into two parts, covered by a morning and an afternoon session. In the morning session we first motivate the role of models in vision science. We show that models can provide links between experimental data from different modalities, e.g. psychophysics, neurophysiology, and brain imaging. Models can be used to formulate hypotheses and knowledge about the visual system that can subsequently be tested in experiments and, in turn, also lead to model improvements. To some extent, however, modeling vision is an art, as the visual system can be described at various levels of abstraction (e.g. purely descriptive vs. functional models) and at different spatial and temporal granularity (e.g. visually responsive neurons vs. brain-wide dynamics, or perceptual tasks vs. learning to see during development). Therefore, throughout the tutorial we address questions such as “How to choose a model for a given question?” and “How to compare different models?”.
Based on this general introduction we will review phenomenological models of early and mid-level vision, addressing vision topics such as perceptual grouping, surface perceptions, motion integration, and optical flow. We discuss a few specific models and show how they can be linked to data from visual psychophysics, and how they may generalize to other visual features. In line with this year’s ECVP focus on “Computational Neuroscience”, we also discuss how such models can be used to constrain hypotheses about the neural code in the visual system, or to make implicit assumptions about these codes explicit.
In the afternoon session we first consider neurodynamical models of visual processing and show how cortical network models can affect the interpretation of psychophysical and brain imaging data. We then show how physiological and anatomical findings, as summarized by neurodynamical models, can be used to design experiments and stimuli for visual psychophysics. We then consider modeling vision by modeling learning in the visual system. The rationale behind such approaches is that a proper learning algorithm based on first principles will produce models of visual systems when stimulated with natural stimuli. The advantages and pitfalls of such normative modeling will be discussed. Finally, we consider models of higher-level form and motion processing, e.g. biological motion or articulated motion, and compare the performance of such models with human performance and recent advances in visual computing such as markerless motion capture.
1) Institute for Neural Information Processing, Faculty of Engineering and Computer Sciences, University of Ulm, James-Franck-Ring, D-89069 Ulm, Germany
2) Adaptive and Regenerative Software Systems, Dept of Computer Science and Electrical Engineering, University of Rostock, Albert-Einstein-Str. 22 18059 Rostock, Germany
B1: Introduction to Matlab and PsychophysicsToolbox
(Mario Kleiner, Tübingen, Germany)
Psychtoolbox-3 is a cross-platform, free and open-source software toolkit for the Linux, Mac OS X and Windows operating systems. It extends the GNU/Octave and Matlab programming environments with functionality that makes it possible to conduct neuroscience experiments relatively easily, with a high level of flexibility, control and precision. It has a number of coping mechanisms to diagnose and compensate for common flaws found in computer operating systems and hardware. It also takes unique advantage of the programmability of modern graphics cards and of low-level features of other computer hardware, operating systems and open-source technology to simplify many standard tasks, especially the real-time generation and post-processing of dynamic stimuli.
This tutorial aims to provide an introduction to the effective use of Psychtoolbox. Participants are encouraged to state their interest in specific topics well ahead of time, so I can tailor large parts of the tutorial to the actual interests of the audience if there happen to be clusters of common wishes. Ideally this will be interactive rather than a lecture. Wishes can be posted to the issue tracker at GitHub ( https://github.com/kleinerm/Psychtoolbox-3/issues/new ) with the label [ecvp2013], or sent via e-mail to email@example.com with the subject line [ecvp2013ptb].
B2: Introduction to Python and PsychoPy
(Jonathan Peirce, Nottingham, United Kingdom)
This tutorial will introduce the basics of how to use PsychoPy and Python for visual neuroscience. PsychoPy is open-source, platform-independent, easy to install and learn, and provides an extremely flexible platform for running experiments. It has the unique advantage of offering both a scripting interface (similar to Psychtoolbox, but using the Python language) and a graphical interface requiring little or no programming (ideal for teaching environments and simpler experiments). This tutorial will get you started with both interfaces and show how the two can be used together by building a basic experiment visually and then customizing it with code.
If possible, bring along a laptop with PsychoPy installed and we can make it more of an interactive workshop, with live exercises.
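As a taste of the scripting side, here is a toolbox-agnostic Python sketch (illustrative, not tutorial material) of the 1-up/2-down staircase logic one might drive from a PsychoPy trial loop; PsychoPy itself ships a ready-made StairHandler class in psychopy.data, so this stand-alone version only illustrates the rule.

```python
# Minimal 1-up/2-down adaptive staircase: the stimulus gets harder
# (level decreases) after two consecutive correct responses and
# easier (level increases) after any error. This rule converges on
# the ~70.7%-correct point of the psychometric function.

class Staircase:
    def __init__(self, level, step):
        self.level = level       # current stimulus intensity
        self.step = step         # fixed step size
        self.correct_run = 0     # consecutive correct responses so far

    def update(self, correct):
        """Feed in one response; returns the level for the next trial."""
        if correct:
            self.correct_run += 1
            if self.correct_run == 2:
                self.level -= self.step
                self.correct_run = 0
        else:
            self.level += self.step
            self.correct_run = 0
        return self.level

# Example: two correct responses lower the level, one error raises it.
s = Staircase(level=1.0, step=0.1)
s.update(True)
s.update(True)     # level is now 0.9
s.update(False)    # level is back to 1.0
```

In a real experiment the `update` call would sit inside the trial loop, right after the keyboard response is collected.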
B3: Introduction to fMRI data analysis and classification
(Jakob Heinzle, Zurich, Switzerland)
This tutorial is addressed to people interested in, but not yet familiar with, analysing functional magnetic resonance imaging (fMRI) data. It will introduce the key basics of fMRI data analysis and classification. The tutorial will start with a brief introduction to the physics and physiology underlying fMRI measurements. The main part will then be devoted to the analysis of fMRI data, focussing particularly on visual experiments. Finally, we will give a short overview of novel approaches using fMRI data for classification. The goal is to provide an overview of the general principles of fMRI data analysis and classification, and the material presented is not tied to any specific analysis software. We will highlight relevant references and emphasize potential pitfalls. We hope to provide participants with all the necessary ingredients to embark on their own analysis of fMRI data.
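The core of the univariate analysis covered here can be sketched in a few lines of Python (a toy example on synthetic data, with a crude gamma-like HRF rather than any package's canonical one): build a predicted BOLD regressor by convolving the stimulus time course with a haemodynamic response function, then estimate the response amplitude by ordinary least squares.

```python
import math
import random

# Toy general-linear-model analysis of one synthetic voxel.
# Assumptions: 1-s sampling, 20-s on/off blocks, a simplistic
# gamma-shaped HRF, and invented noise levels.

def hrf(t):
    return (t ** 5) * math.exp(-t) / 120.0   # crude bump peaking near t = 5 s

n, tr = 120, 1.0
box = [1.0 if (i // 20) % 2 else 0.0 for i in range(n)]      # block design
reg = [sum(box[i - j] * hrf(j * tr) for j in range(min(i + 1, 30)))
       for i in range(n)]                                    # convolution

random.seed(0)
true_beta = 2.0
y = [true_beta * x + random.gauss(0, 0.5) for x in reg]      # noisy voxel

# Ordinary least squares for a single regressor plus intercept:
mx, my = sum(reg) / n, sum(y) / n
beta = (sum((x - mx) * (v - my) for x, v in zip(reg, y))
        / sum((x - mx) ** 2 for x in reg))
print(round(beta, 2))    # recovers a value close to true_beta
```

Real packages fit many regressors at once and model noise autocorrelation, but the estimated beta per condition is conceptually the same quantity that classification approaches later use as a feature.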
B4: Single-Trial EEG Analysis and Brain-Computer Interfacing
The aim of this lecture is to provide an illustrative tutorial on methods for single-trial EEG analysis. Concepts of feature extraction and classification will be explained in a way that is accessible to participants with a less technical background. Nevertheless, all techniques required for state-of-the-art Brain-Computer Interfacing will be covered. The presented methods will be illustrated with concrete examples from the Berlin Brain-Computer Interface (BBCI) research project.
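The train-then-classify logic at the heart of such pipelines can be sketched in Python (a deliberately minimal toy on synthetic one-dimensional features; real BCI systems use spatial filters such as CSP and regularised LDA, but the structure is the same):

```python
import random

# Toy single-trial classification: a nearest-class-mean rule on a
# synthetic 1-D feature (think: log band power in one channel).
# Class means and variances are invented for illustration.

random.seed(1)
train_a = [random.gauss(0.0, 1.0) for _ in range(200)]   # class A trials
train_b = [random.gauss(2.0, 1.0) for _ in range(200)]   # class B trials

mean_a = sum(train_a) / len(train_a)
mean_b = sum(train_b) / len(train_b)
threshold = (mean_a + mean_b) / 2.0     # optimal for equal variances

def classify(x):
    return "A" if x < threshold else "B"

# Held-out test trials drawn from the same distributions:
test = ([("A", random.gauss(0.0, 1.0)) for _ in range(100)]
        + [("B", random.gauss(2.0, 1.0)) for _ in range(100)])
acc = sum(classify(x) == lab for lab, x in test) / len(test)
print(round(acc, 2))    # well above the 0.5 chance level
```

The crucial discipline, emphasised in the tutorial, is that the threshold is estimated on training trials only and evaluated on unseen trials.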
B5: Introduction to Kernel Methods
(Frank Jäkel, Osnabrück, Germany)
The abilities to learn and to categorize are fundamental for cognitive systems, be they animals or machines, and have therefore attracted attention from engineers and psychologists alike. Early machine learning algorithms were inspired by psychological and neural models of learning. However, machine learning is now an independent and mature field that has moved beyond psychologically or neurally inspired algorithms towards providing foundations for a theory of learning that is rooted in statistics. Here, we provide an introduction to a popular class of machine learning tools, called kernel methods. These methods are widely used in computer vision and modern data analysis. They are therefore potentially interesting for vision research, too. However, reading about kernel methods can sometimes be intimidating because many papers in machine learning assume that the reader is familiar with functional analysis. In this tutorial, I give basic explanations of the key theoretical concepts needed to get started with kernel methods: the so-called kernel trick, positive definite kernels, reproducing kernel Hilbert spaces, the representer theorem, and regularization.
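The kernel trick mentioned above can be seen in miniature in Python (an illustrative toy, not tutorial material): a kernel perceptron with an RBF kernel solves the XOR problem, which no linear classifier on the raw inputs can, while touching the data only through kernel evaluations.

```python
import math

# Kernel perceptron on XOR. The decision function lives in the RBF
# kernel's reproducing kernel Hilbert space, but by the representer
# theorem it is written entirely as a sum of kernel evaluations
# weighted by dual coefficients alpha.

def rbf(x, z, gamma=1.0):
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, z)))

X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [-1, 1, 1, -1]                       # XOR labels: not linearly separable
alpha = [0.0] * len(X)                   # dual coefficients

def f(x):
    return sum(a * yi * rbf(xi, x) for a, yi, xi in zip(alpha, y, X))

for _ in range(20):                      # perceptron updates in dual form
    for i, (xi, yi) in enumerate(zip(X, y)):
        if yi * f(xi) <= 0:
            alpha[i] += 1.0

print(all(yi * f(xi) > 0 for xi, yi in zip(X, y)))   # True: XOR is solved
```

Swapping `rbf` for a plain dot product recovers the ordinary perceptron, which never converges on this data; that contrast is the kernel trick in one line.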
B6: Statistics of Signal Detection Models
(Kenneth Knoblauch, Bron, France)
This tutorial will focus on the statistical tools to analyze and to model psychophysical experiments within the framework of Signal Detection Theory. This includes choice experiments (detection, discrimination, identification, etc.) and rating scale experiments with ROC analyses. In many cases, the decision rule underlying these paradigms is linear, thereby permitting the analyses to be simplified to a Generalized Linear Model (GLM). Rating scales, similarly, are analyzed using ordinal regression models with cumulative link functions. With these approaches, we can define straightforward procedures to fit the data, to test hypotheses about them, to obtain confidence intervals, etc. Diagnostic plots and tests will be used to evaluate goodness of fit and to explain some potential pitfalls that can occur in the data. Most off-the-shelf software packages now include tools for performing GLMs, thus making it easy to implement these tests and procedures. Examples will be shown using the R programming environment and language (http://www.r-project.org/). Extensions of these models to include random effects allow estimation and control for observer and stimulus variability. Finally, an example will be shown of this approach with a paradigm for measuring appearance. Background reading includes the recent books "Modeling Psychophysical Data in R", K. Knoblauch & L. T. Maloney, 2012, Springer (for R users) and "Psychophysics: A Practical Introduction", F. A. A. Kingdom & N. Prins, 2010, Academic Press (for Matlab users).
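The link between SDT and the GLM can be made concrete with a short computation (shown in Python here for brevity; the tutorial's own examples use R). Under the equal-variance Gaussian model, d' is the difference of z-transformed hit and false-alarm rates, which is exactly the stimulus coefficient a probit-link GLM estimates in a yes/no experiment. The trial counts below are invented.

```python
from statistics import NormalDist

# d-prime and criterion from a (made-up) yes/no detection experiment.
z = NormalDist().inv_cdf        # probit: inverse of the standard normal CDF

hits, misses = 45, 5            # responses on signal trials
fas, crs = 10, 40               # responses on noise trials

hit_rate = hits / (hits + misses)    # 0.9
fa_rate = fas / (fas + crs)          # 0.2

d_prime = z(hit_rate) - z(fa_rate)             # sensitivity
criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # response bias

print(round(d_prime, 2), round(criterion, 2))  # 2.12 -0.22
```

Fitting a probit GLM with a signal-present indicator as the predictor returns the same d' as the slope, which is why the GLM machinery (standard errors, hypothesis tests, random effects) carries over to SDT analyses.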
B7: Classification images
(Simon Barthelmé, Geneva, Switzerland)
A large part of vision science is about figuring out the rules that govern perceptual categorisation. What makes us see a person as male or female? A pattern as symmetric or asymmetric? A smile or a frown on a face?
Classification images (Ahumada and Lovell, 1971) use noise to uncover the rules defining a perceptual category. Adding a moderate amount of noise to the picture of a smiling face will produce a random stimulus, essentially a "perturbed" version of the original: still identifiable as a face, but with altered features (Kontsevich and Tyler, 2004). Depending on the exact pattern of the noise, the perturbed face will sometimes look just as smiling as the original, sometimes distinctly less so. Viewed geometrically, this means that the added noise sometimes takes the original stimulus across the smiling/unsmiling boundary. The intuition behind the original technique is that by looking at those noise patterns that lead to a response change and comparing them to those that do not, we should be able to characterise the features that the visual system uses to decide whether a face is smiling or not.
In this tutorial I will introduce this classical technique and a number of applications, but I will focus especially on setting a broader context. Although classification images are native to psychology, they have close cousins in many areas of science (Murray, 2011). We will see that classification images have interesting ties to a range of concepts and techniques across the disciplines, from Generalised Linear Models in statistics, to compressed sensing in computer science. Putting classification images in context helps us understand why they work, when they work, and how they can be extended.
Ahumada, A. J. and Lovell, J. (1971). Stimulus features in signal detection. The Journal of the Acoustical Society of America, 49(6B):1751-1756.
Kontsevich, L. L. and Tyler, C. W. (2004). What makes Mona Lisa smile? Vision Research, 44(13):1493-1498.
Murray, R. F. (2011). Classification images: A review. Journal of Vision, 11(5).
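The logic of the technique can be verified in simulation with a few lines of Python (an illustrative toy with an invented four-pixel "stimulus"): a simulated observer applies a fixed linear template to noise fields, and averaging the noise by response recovers that template.

```python
import random

# Toy classification-image experiment. The observer's template and the
# 4-"pixel" stimulus are invented for illustration; real experiments
# use images with thousands of pixels and the same averaging logic.

random.seed(0)
template = [1, 1, -1, -1]        # the feature the simulated observer uses
signal = [0.0, 0.0, 0.0, 0.0]    # blank trials: noise only

sum_yes, sum_no = [0.0] * 4, [0.0] * 4
n_yes = n_no = 0
for _ in range(5000):
    noise = [random.gauss(0, 1) for _ in range(4)]
    decision = sum(t * (s + e) for t, s, e in zip(template, signal, noise))
    if decision > 0:                                  # "yes" response
        sum_yes = [a + e for a, e in zip(sum_yes, noise)]; n_yes += 1
    else:                                             # "no" response
        sum_no = [a + e for a, e in zip(sum_no, noise)]; n_no += 1

# Classification image: mean "yes" noise minus mean "no" noise.
ci = [a / n_yes - b / n_no for a, b in zip(sum_yes, sum_no)]
print([round(v, 1) for v in ci])   # proportional to the template
```

The recovered image is proportional to the template because only the noise component along the template influences the decision, which is also where the connection to linear-model estimation discussed in the tutorial comes from.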
B8: Statistical Modelling of Psychophysical Data
(Jakob Macke, Tübingen, Germany)
In this tutorial, we will discuss some statistical techniques that one can use in order to obtain a more accurate statistical model of the relationship between experimental variables and psychophysical performance. We will use models which include the effect of additional, non-stimulus determinants of behaviour, and which therefore give us additional flexibility in analysing psychophysical data. For example, these models will allow us to estimate the effect of experimental history on the responses of an observer, and to automatically correct for errors which can be attributed to such history effects. By reanalysing a large dataset of low-level psychophysical data, we will show that the resulting models have vastly superior statistical goodness of fit, give more accurate estimates of psychophysical functions and allow us to detect and capture interesting temporal structure in psychophysical data.
In summary, the approach presented in this tutorial not only yields more accurate models of the data, but also has the potential to reveal unexpected structure in the kind of data that every visual scientist has in abundance: classical psychophysical data with binary responses.
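A history effect of the kind described above is easy to simulate in Python (a toy with invented parameters, not the tutorial's dataset): responses depend on the stimulus and on the previous response, and conditioning on the previous response exposes structure that a stimulus-only model would fold into apparent noise.

```python
import random

# Simulated yes/no responses at one fixed stimulus level, with a
# tendency to repeat the previous response. All numbers are invented.

random.seed(2)
p_base = 0.5            # P(yes) at this stimulus level, absent history
history_bias = 0.15     # shift towards repeating the last response

prev = 0
counts = {0: [0, 0], 1: [0, 0]}          # prev response -> [trials, yeses]
for _ in range(20000):
    p = p_base + history_bias * (1 if prev == 1 else -1)
    resp = 1 if random.random() < p else 0
    counts[prev][0] += 1
    counts[prev][1] += resp
    prev = resp

rate_after_no = counts[0][1] / counts[0][0]
rate_after_yes = counts[1][1] / counts[1][0]
print(round(rate_after_no, 2), round(rate_after_yes, 2))
```

An analysis that pools all trials would report P(yes) near 0.5 and inflated response variability; adding the previous response as a regressor, as the tutorial advocates, recovers the dependency instead.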
B9: Attractor Networks and the Dynamics of Visual Perception
(Jochen Braun, Magdeburg, Germany and Gustavo Deco, Barcelona, Spain)
First principles of statistical inference (e.g., Friston, Breakspear & Deco, 2012) suggest that visual perception relies on two interacting loops: a fast ‘recognition loop’ that matches retinal input to memorized world models and a slow ‘learning loop’ that improves these world models. Focusing on the fast loop, we try to make these abstract notions fruitful in terms of novel experimental paradigms and observations. The first half of the tutorial reviews the activity dynamics of attractor networks at different space-time scales – especially mesoscopic models of cortical columns and groups of columns, and macroscopic models of whole-brain dynamics – and the second half compares the dynamics of perceptual decisions in the context of choice tasks, multi-stable percepts, and cooperative percepts. We argue that only a combination of principled models of collective neural dynamics and careful empirical studies of perceptual dynamics can guide us towards a fuller understanding of the principles and mechanisms of visual inference.
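The bistability at the heart of attractor accounts of multi-stable perception can be sketched with a two-population toy model in Python (parameters invented for illustration; the tutorial's models are far richer): two mutually inhibiting populations, where whichever starts slightly ahead wins and suppresses the other.

```python
import math

# Two mutually inhibiting rate units driven by a common input.
# The same network settles into either of two stable states
# ("percepts") depending only on the initial conditions.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-4.0 * (x - 0.5)))

def settle(r1, r2, steps=500, dt=0.05, w_inh=2.0):
    """Euler-integrate the rates until the network settles."""
    for _ in range(steps):
        d1 = -r1 + sigmoid(1.0 - w_inh * r2)   # input 1.0, inhibited by r2
        d2 = -r2 + sigmoid(1.0 - w_inh * r1)
        r1, r2 = r1 + dt * d1, r2 + dt * d2
    return r1, r2

a = settle(0.6, 0.4)   # population 1 starts ahead -> it wins
b = settle(0.4, 0.6)   # population 2 starts ahead -> it wins
print(a[0] > a[1], b[1] > b[0])   # True True: two attractors
```

Adding noise to the same equations produces spontaneous switches between the two attractors, which is the standard attractor-network account of perceptual alternations in multi-stable displays.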
B10: Bayesian Methods and Generative Models
(József Fiser, Budapest, Hungary)
In the last two decades, a quiet revolution has taken place in vision research, in which Bayesian methods replaced the once-dominant signal detection framework as the most suitable approach to modeling visual perception and learning. This tutorial will review the most important aspects of this framework from the point of view of vision scientists. We will start by motivating the Bayesian approach, then continue with a quick overview of the basic concepts (uncertainty and probabilistic representations, basic equations), moving on to the main logic and ingredients of generative models, including Bayesian estimation, typical generative models, belief propagation, and sampling methods. Next we will go over some celebrated examples of Bayesian modeling in detail, to see the argument and implementation of the probabilistic framework in action. Finally, we will have an outlook on the potential of the generative framework to capture vision, and on the new challenges to be resolved by the next generation of modelers.
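One of the celebrated examples alluded to above, Gaussian cue combination, can be computed in a few lines of Python (an illustrative sketch with invented cue parameters): with Gaussian likelihoods and a flat prior, the posterior mean is the precision-weighted average of the cues, computed here numerically on a grid to mirror how more complex generative models are evaluated.

```python
from statistics import NormalDist

# Bayesian combination of two conflicting cues about a stimulus value.
# Cue means and reliabilities are invented for illustration.
cue1 = NormalDist(mu=2.0, sigma=1.0)   # e.g. a reliable visual cue
cue2 = NormalDist(mu=4.0, sigma=2.0)   # e.g. a noisier haptic cue

# Grid approximation of the posterior under a flat prior:
grid = [i * 0.01 for i in range(-500, 1001)]        # s from -5 to 10
post = [cue1.pdf(s) * cue2.pdf(s) for s in grid]    # likelihood product
norm = sum(post)
post = [p / norm for p in post]

post_mean = sum(s * p for s, p in zip(grid, post))

# Analytic result for the Gaussian case: precision-weighted mean.
w1, w2 = 1 / 1.0 ** 2, 1 / 2.0 ** 2
analytic = (w1 * 2.0 + w2 * 4.0) / (w1 + w2)        # 2.4
print(round(post_mean, 2), round(analytic, 2))      # 2.4 2.4
```

The grid computation and the closed form agree, and the posterior mean sits closer to the more reliable cue, which is the signature behavioural prediction of Bayesian cue combination.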