Symposia


Monday 26th: The Scope and Limits of Visual Processing under Continuous Flash Suppression; Perceptual Memory and Adaptation: Models, Mechanisms, and Behavior; Synergistic Human Computer Interaction (HCI)
Tuesday 27th: Computational Neuroscience meets Visual Perception
Wednesday 28th: Visual Perception in Schizophrenia: Vision Research, Computational Neuroscience, and Psychiatry; Visual Noise: New Insights; Non-retinotopic Bases of Visual Perception
Thursday 29th: Are Eye Movements Optimal?

Programme overview (You can find the detailed programme here)

Symposium 1

Perceptual memory and adaptation: Models, mechanisms, and behavior
(organized by Caspar M. Schwiedrzik, USA)

Our perception is constantly shaped by previous experience. This is evident in perceptual aftereffects such as the tilt aftereffect, where prolonged viewing of one orientation causes subsequently viewed orientations to appear tilted away from the adaptor. Similar effects occur at the neural level, where neurons adapt their firing rates when repeatedly presented with the same stimulus. Intriguingly, previous experience does not have a unitary effect on perception. In fact, it can bias perception in two completely opposing ways: it can exert an attractive effect, sensitizing the brain to perceive the same stimulus again (perceptual memory or stabilization), or a repulsive effect, biasing the brain to perceive a ‘different’ stimulus (perceptual adaptation). The neural mechanisms and computations underlying both perceptual phenomena are only now beginning to be unraveled. Particular attention has been paid to the role of neuronal adaptation, which many consider to dynamically optimize neural coding at the single-neuron and network level. However, a mechanistic understanding, especially of the coexistence of perceptual memory and adaptation, is still lacking. Furthermore, it remains unclear whether and how the behavioral effects and their underlying mechanisms can be unified within one theoretical framework.

This symposium will investigate the relationship between the perceptual and neural aftereffects of previous experience and discuss their functional role in perception and neural computation. To this end, we will cover recent advances in understanding the effects of previous experience in terms of models, mechanisms, and behavior, with a translational approach ranging from single-cell recordings in macaque monkeys to fMRI and psychophysics in humans. In particular, the speakers will present different views on the functional role of neuronal adaptation, discuss the temporal scales and neural underpinnings of previous experience in general and of perceptual memory and adaptation in particular, and consider what happens when their implementation fails in psychiatric disorders. We will conclude by discussing whether and how the diverse findings and proposals on the neural and perceptual effects of previous experience can be unified under one theoretical framework, namely as a case of perceptual inference within the predictive coding framework.

Speakers: Valentin Dragoi (USA), Pascal Mamassian (France), Caspar M. Schwiedrzik (USA), Philipp Sterzer (Germany), and Rufin Vogels (Belgium)

Symposium 2

Are eye movements optimal?
(organized by Preeti Verghese, USA)

Much recent work has addressed the question of whether saccade planning is optimal. Results have been mixed. This symposium reflects on that work and asks the important questions: What tasks have human eye movements evolved for? In which tasks is it important for saccades to be optimal?

Is saccade optimality a question we should be concerned with? How have these movements evolved to serve the sensing needs of the organism? This symposium reviews the efficiency of saccadic eye movements in the context of different laboratory tasks and opens a discussion of how we should think about and study the saccadic system in order to understand its role in vision, perception, and action in the real world.
Current research investigating the efficiency of eye movements provides mixed results, depending on the task. Some studies indicate that saccadic targeting is efficient. For instance, eye movements executed during search for a single target appear efficient and seem to incorporate knowledge about where the target is most likely to occur. On the other hand, saccades are not always directed to maximize expected gain in a reward/penalty paradigm, nor do they seem to fully incorporate the decrease in visibility with target eccentricity. Furthermore, they are grossly suboptimal in tasks requiring a sequence of saccades to gather information about multiple targets. Thus, while saccade planning appears flexible under some task conditions, it may not reflect the most efficient information-gathering strategy. The aim of the symposium is to discuss the factors that govern saccade targeting by considering the relative contributions of prior visual experience, potential costs and benefits, time pressure, and task demands.

Speakers: Eileen Kowler (USA), Matt Peterson (USA), Bill Geisler (USA), Larry Maloney (USA), Alexander Schütz (Germany), and Preeti Verghese (USA)

Symposium 3

Visual noise: New insights
(organized by Daniel Baker, UK)

Over the past three decades, numerous studies have explored the processing of visual stimuli in the presence of added external noise. An important aim of many of these experiments was to estimate the level of internal noise within the observer, and the technique has proved popular in both basic and clinical work across all sensory domains.

Recently, however, several studies have begun to question both the fundamental assumptions that underpin this paradigm, and the techniques that experimenters and modellers have commonly used. Specific concerns include the following:

  1. Estimates of the distribution of internal noise suggest that it is not normally distributed (Neri, 2013, J Neural Eng)
  2. External noise can affect the observer’s processing strategy (Allard & Cavanagh, 2011, Vis Res; 2012, J Vis)
  3. Pixel noise masks can suppress the target as well as increase variance (Baker & Meese, 2012, J Vis; Hansen & Hess, 2012, Vis Res)
  4. Computational models can make incorrect predictions if they are implemented inappropriately (Klein & Levi, 2009, J Opt Soc Am A)

These concerns cast doubt on the interpretation of previous results and mean that best practice for future research using the external noise paradigm is unclear. The aim of this symposium is to summarise recent work on visual noise and propose practical and theoretical solutions to the problems outlined above (and others). We will bring together experts in psychophysics and computational modelling of neural systems. The session will consist of six 15-minute talks, followed by a (probably very lively) discussion.

Speakers: Rémy Allard (Canada), Daniel Baker (UK), Keith May (UK), Stan Klein (USA), Josh Solomon (UK), and Peter Neri (UK)

Symposium 4

Non-retinotopic bases of visual perception
(organized by Haluk Ogmen, USA)

Neighboring points in the environment are mapped through the optics of the eyes onto neighboring points on the retina. This retinotopic organization is preserved in early visual areas. Retinotopic representations and retinotopically localized receptive fields have been two fundamental pillars upon which most theoretical accounts of visual perception are built. Yet, due to the movements of the observer and of objects in the environment, retinotopic representations are highly unstable and blurred and thus fail to explain the stability and clarity of our perception. A fundamental question in vision science is how retinotopic representations are transformed into non-retinotopic representations to synthesize the clarity and stability that underlie our perceptual experience.

The symposium is motivated by recent developments, both in experimental paradigms and in theoretical concepts, that provide novel perspectives on this fundamental question. The aims of the symposium are (i) to present the field in a relatively comprehensive and coherent manner to the broader vision community and (ii) to promote a fruitful exchange of ideas, from empirical findings to theoretical concepts. We expect the symposium to lead to a synthesis of the current state of the art and to chart directions for future research.

Speakers: David Burr (Italy), Elisabeth Hein (Germany), Takahiro Kawabe (Japan), Haluk Ogmen (USA), and Martin Rolfs (Germany)

Symposium 5

The scope and limits of visual processing under continuous flash suppression
(organized by Martin Hebart, Germany, and Guido Hesselmann, Germany)

Continuous flash suppression (CFS) is a relatively novel behavioral method for investigating unconscious visual processing. By presenting continuously flashing masks to one eye, it is possible to suppress stimuli shown to the other eye for extended periods of time, even when they are presented at fixation. Since its introduction (Tsuchiya & Koch, 2005), numerous experiments using CFS have produced a rich body of behavioral and neuroimaging data on the scope and limits of visual processing in the absence of conscious awareness.

Eight years later, the partly conflicting results now offer a multifaceted picture of what information can escape interocular suppression by CFS and influence overt behavior. In this symposium, we will discuss recent developments and major questions concerning unconscious processing during CFS. For example, it has been suggested that CFS selectively disrupts ventral visual processing while leaving dorsal stream processing intact. However, recent evidence challenges this dichotomy by showing preserved ventral stream processing.

After a brief overview of the possible applications of CFS, the symposium speakers will present recent examples of its use, illustrating the breadth of experimental approaches based on this method. The symposium ends with a discussion between the speakers and the audience on the scope and limits of unconscious processing during CFS. Apart from experienced researchers using CFS, we also want to provide a platform for young scientists who have only recently started working with this technique. We therefore expect a lively and controversial discussion.

Speakers: Karin Ludwig (Germany), Sid Kouider (France), Marcus Rothkirch (Germany), Roy Salomon (Switzerland), Timo Stein (Italy), and Bahador Bahrami (UK)

Symposium 6

Visual perception in schizophrenia: Vision research, computational neuroscience, and psychiatry
(organized by Christoph Teufel, UK)

The sensory and perceptual experiences of schizophrenic patients are often unusual and highly unpleasant. The best-known and most striking of these symptoms are hallucinations. The fact that our visual system has the potential to create a full-blown percept in the absence of an appropriate stimulus is remarkable, and a better understanding of this phenomenon is bound to shed light on some of the most fundamental aspects of biological vision. A better understanding of visual perception in schizophrenia might therefore not only be useful for developing more targeted interventions for patients but will also provide important insights into healthy vision.

An integration of efforts from vision research, psychiatry, and computational neuroscience is crucial to achieve real progress in this domain. This symposium will therefore take a holistic and interdisciplinary perspective to address two related questions. First, the symposium will attempt to shed light on how low-level perceptual abnormalities might contribute to the high-level symptoms reported by schizophrenic patients. Second, the symposium will explore what the abnormalities in sensory and perceptual processing in schizophrenia can tell us about the healthy visual system. A particular focus will be on the potential of computational neuroscience to provide tools for addressing these questions in a principled manner.

Speakers: Paul Fletcher (UK), Bruce Christensen (Canada), Steven Dakin (UK), Sophie Denève (France), Dae-Shik Kim (Korea), Philipp Sterzer (Germany), and Christoph Teufel (UK)

Symposium 7

Synergistic human computer interaction (HCI)
(organized by Tandra Ghose, Germany)

In keeping with the focus of ECVP 2013 on fostering further exchange of ideas between disciplines, this symposium addresses the interdisciplinary spirit of HCI by highlighting the synergies between its technical and cognitive aspects. Over the years, members of the vision-science and technical HCI communities have borrowed state-of-the-art understanding from each other’s fields to bring about major breakthroughs in their own. The symposium will open with an introduction to the synergistic approach, followed by some examples (Ghose). An overview of HCI related to vision and sensation & perception will then be presented, with one specific example, “Attentive Computing”, which uses eye gaze to enhance various daily activities such as reading (Kieninger). The next talk will provide examples of this synergy by discussing the theory of perceptual learning and its application to developing interfaces for education, medical imaging, and air-traffic control (Kellman). Building on the same principles, we will show how the two disciplines interact in forensic-science applications, more specifically fingerprint matching (Ghose). Further examples include the implications of multimodal integration for the development of better training systems, illustrated by linguistically mediated visual search and patterns of eye movements during problem solving (Spivey), followed by a virtual head used to measure perceived gaze direction (Hecht). A summary of the potential of this synergistic approach, with implications for both the behavioral-science and engineering communities, will close the session (Kieninger).

Speakers: Tandra Ghose (Germany), Thomas Kieninger (Germany), Phil Kellman (USA), Michael Spivey (USA), and Heiko Hecht (Germany)

For any queries about symposia, please email symp2013@ecvp.uni-bremen.de.