2016 Workshop on Cognitive Science

13 April 2016, 8:00 am to 6:30 pm

at the lecture hall of the ULB (Vortragssaal), S01|20, basement level (Untergeschoss), Magdalenenstraße 8

14 April 2016, 8:00 am to 6:30 pm

at the Old Main Building (Altes Hauptgebäude), S01|03, Room 23, Hochschulstr. 1

A flyer in German can be found here.

Schedule

13 April: Perception Day

Session | Speakers | Session Chair
Welcome (8:00) | Constantin Rothkopf | Jan Peters
Perception & Vision | Radoslaw Martin Cichy, Carsten Rother, Cristian Sminchisescu, Marianne Maertens | Stefan Roth
Coffee break (10:45)
Spatial Perception | Betty Mohler, Holger Schultheis, Antje Nuthmann | Constantin Rothkopf
Lunch break (13:00)
Perception & Machine Learning | Frank Jäkel, Frank Hutter, Kristian Kersting | Jan Peters
Coffee break (16:00)
Perception & Neural Representation | Jakob Macke, Tatjana Tchumatchenko | Heinz Koeppl

14 April: Action Day

Session | Speakers | Session Chair
Actions & Decision Making (8:00) | Christoph W. Korn, Thorsten Pachur, Aldo Faisal, Paul Schrater | Andre Seyfarth
Coffee break (10:30)
Actions & Motor Learning (11:30) | Ian S. Howard, Enrico Chiovetto, Daniel Braun | Ralf Galuske
Lunch break (13:00)
Actions, Signals & Machine Learning (14:15) | Moritz Grosse-Wentrup, Stefan Haufe | Jan Peters
Coffee break (15:30)
Actions & Motor Control (16:10) | Dominik Endres, Thorsten Stein, Matthias Weigelt | Andre Seyfarth

Invited Speakers at the Workshop

Aldo Faisal, Imperial College London

Title:

Reverse engineering the perception-action loop

Abstract:

Our research questions are centred on a basic characteristic of neural systems: structured variability in their behaviour and its underlying meaning for the mechanisms underpinning motor control and learning. Variability can be observed across many levels of biological behaviour: from the movements of our limbs and the responses of neurons in our brain to the interaction of biomolecules. Such variability is emerging as a key ingredient in understanding biological principles (Faisal, Selen & Wolpert, 2008, Nature Rev Neurosci) and yet still lacks adequate quantitative and computational methods for description and analysis. Crucially, we find that biological and behavioural variability contains important information that our brain and our technology can make use of (instead of just averaging it away): the brain knows about variability and uncertainty, and these are linked to its own computations. We therefore use and develop statistical machine learning techniques to predict behaviour and to analyse data from closed-loop experiments. We present an overview of recent work on human learning and control as well as derived cognitive engineering solutions.

Short Bio:

Dr Faisal is an Associate Professor at the Dept. of Bioengineering and the Dept. of Computing at Imperial College London and has been director of the Brain & Behaviour Lab since 2009. Aldo read Computer Science and Physics in Germany (Diplomarbeit with Helge Ritter), studied Biology at Cambridge University (Master's thesis with Malcolm Burrows FRS) and obtained a PhD in Computational Neuroscience (with Simon Laughlin FRS) in 2006. Elected to a prestigious Junior Research Fellowship, he joined the Computational & Biological Learning Group of Daniel Wolpert FRS at Cambridge's Engineering Department to work on computational motor control.

Antje Nuthmann, University of Edinburgh

Title:

Perception, Attention and Eye Guidance in Real-world Scenes: Experiments and Modelling

Abstract:

How do we gather real-world visual information for perception and action? Our approach is to record observers’ eye movements as an indicator of where attention is being allocated in static and dynamic images of real-world scenes. In the first part of this talk, I will summarize recent research on the spatial (Where?) and temporal (When?) decisions involved in eye-movement control during scene perception. Our research on the decision of where to fixate next challenges the conventional view that visually salient regions of scenes attract attention and gaze. Our research on the "when" decisions has led to the CRISP computational model of fixation durations. In the second part of this talk, I will present research on visuomotor and higher-level aspects of scene perception. One line of research concerns the conditions under which incongruent objects in a scene (e.g., a Nutella jar in a bathroom) attract eye fixations. Another series of studies has revealed that neither foveal nor central vision is necessary for locating a target object in a scene. These results demonstrate that findings from studies using highly artificial displays do not necessarily generalize to more realistic situations, and they challenge researchers to study real-world scene perception directly.

Short Bio:

Antje Nuthmann studied Psychology at the Humboldt University Berlin and the University of Toronto, graduating in 2002. In 2006 she was awarded a Dr. phil. in Psychology from the University of Potsdam, for research on eye-movement control in reading. In 2007, she was awarded a Research Fellowship at the University of Edinburgh, UK, where in 2010 she was promoted early to a tenured lectureship in the field of computational visual cognition.

Betty Mohler, Max Planck Institute for Biological Cybernetics

Title:

Using Virtual Reality to Investigate Body Perception

Abstract:

In the Space and Body Perception research group we have created novel methodologies for studying perception and action, which allow us to investigate the underlying mechanisms of perceiving and recognizing body size in both healthy and clinical populations. The methods and stimuli we use for studying self-body perception have many advantages over the stimuli and methods traditionally used. Our body stimuli are individualized to the specific person being tested and are also more ecologically valid, since the distortions we make in each body (i.e., changes in BMI) are based on a large sample of statistically possible human body shapes and on perceptual research. Using these biometric self-avatars, we can therefore systematically and easily vary the visual cues about an individual and allow people to see the self-avatar from a first- or third-person perspective. We have conducted several experiments with acute and chronic stroke patients, anorexic patients, obese individuals, and people who differ from the average in both BMI and body satisfaction. With these individuals we use several methodologies, extended versions of those in five of my key publications [Piryankova et al. 2014a, 2015b; Linkenauger et al. 2014, 2015a, 2015b], that aim to investigate body shape perception and the specificity of any distortions in body shape perception to self or other.

Short Bio:

Betty Mohler has conducted multi-disciplinary research at the Max Planck Institute for Biological Cybernetics since 2007. She leads an independent research group (W2), Space and Body Perception, where she creates novel methods using virtual reality to investigate perception and action. Before joining the Max Planck Institute for Biological Cybernetics, she received a Ph.D. for research on the effect of feedback within a virtual environment on human distance perception and adaptation.

Carsten Rother, TU Dresden

Title:

Towards Building the Rich Scene Model

Abstract:

Humans have the amazing capability of seeing a few images of a scene and then being able to describe the scene in great detail, such as naming the objects in the scene as well as giving their depth. In this talk I will present a project we call “Building the Rich Scene Model”, where the aim is to replicate these human capabilities. The term “Rich Scene Model” stands for rich, detailed information about the scene, as well as for learning rich relations between different scene aspects. While building the rich scene model we face various challenges and open questions, such as “Is there a synergy effect between scene aspects?”, “How can we integrate prior knowledge into the scene model?”, “How can we learn the scene model with little and incomplete training data?”, and “How can humans use the model?”. I will give preliminary answers to these questions. On the practical side, when casting the rich scene model as a deep convolutional neural network, we are able to achieve state-of-the-art results for some applications, such as object pose estimation, depth estimation and semantic segmentation for indoor scenes.

Short Bio:

Carsten Rother received the diploma degree with distinction in 1999 from the University of Karlsruhe, and the PhD degree in 2003 from the Royal Institute of Technology in Stockholm. Until 2013 he was a researcher at Microsoft Research Cambridge (UK); since then he has been a full Professor at TU Dresden. His research interests are in Computer Vision and Machine Learning, with a focus on Graphical Models and, recently, Deep Learning. He has published over 120 articles (h-index of 54), won several awards and has co-developed two Microsoft products.

Christoph W. Korn, University of Zurich

Title:

Humans employ optimal and heuristic decision policies to avoid virtual starvation and predation

Abstract:

Ideally, biological agents should make decisions that minimize imminent and protracted threats to homeostasis. For example, foraging for food averts starvation in the short and/or long term but poses the risk of immediate energy expenditure and of a sudden attack by a predator. Optimal decisions therefore require an extensive model-based tree search over a vast combination of future states that depend probabilistically on the agents’ actions, their energy resources, and the foraging environments. To facilitate such computations, humans may resort to myopic model-free heuristics that exclusively rely on the situation at hand and disregard upcoming states. However, it is unclear if and how humans compute such optimal and heuristic policies.

One behavioral and two fMRI studies used different versions of newly developed Markov decision-making tasks that distill some fundamental aspects of homeostasis in the laboratory. A priori optimal policies for minimizing virtual threats to homeostasis were derived via numerical simulations and dynamic programming. In two studies, participants faced the threat of virtual starvation at different time horizons. In the third study, they additionally risked virtual predation.

State-of-the-art Bayesian model comparisons demonstrated that participants minimized virtual threats to homeostasis. They performed both a complex model-based tree search over probabilistic prospective outcomes and a simple model-free use of the best available heuristic decision variable(s). The policies were related to macroscopically different brain regions (such as medial prefrontal cortex for the optimal policy, and intraparietal and dorsolateral prefrontal cortices for heuristic policies). Crucially, conflict between the decision policies led to slower reaction times and anterior cingulate cortex engagement.

These findings lend support for parallel computation of optimal policies, which explicitly minimize threats over longer time horizons, and heuristic policies, which approximate threat minimization with the help of momentary environmental variables. I suggest an arbitration mechanism for the multiplicity of decision controllers involved in addressing the biological challenge of maintaining homeostasis.
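For illustration, here is a minimal sketch of how a priori optimal policies for such tasks can be derived by dynamic programming. The state space, action names and transition probabilities below are invented for the example and are not taken from the studies described above.

```python
# Hedged sketch (not the authors' code): value iteration over a toy "virtual
# starvation" MDP. States are energy levels, the objective is the probability
# of surviving the horizon; all transition probabilities are illustrative.
import numpy as np

n_energy = 6          # energy levels 0..5; 0 = starvation (absorbing)
horizon = 10          # number of remaining foraging steps
actions = ["wait", "forage"]

def step_distribution(energy, action):
    """Return {next_energy: probability} for a hypothetical foraging task."""
    if action == "wait":
        return {max(energy - 1, 0): 1.0}               # energy always decays
    # foraging costs energy but may yield food (assumed success prob. 0.6)
    gain, loss = min(energy + 1, n_energy - 1), max(energy - 2, 0)
    return {gain: 0.6, loss: 0.4}

# V[t, e] = probability of avoiding starvation from energy e with t steps left
V = np.zeros((horizon + 1, n_energy))
V[0, 1:] = 1.0                                         # survived the horizon
policy = np.empty((horizon, n_energy), dtype=object)

for t in range(1, horizon + 1):
    for e in range(n_energy):
        if e == 0:
            continue                                   # starved: value stays 0
        values = {a: sum(p * V[t - 1, e2]
                         for e2, p in step_distribution(e, a).items())
                  for a in actions}
        policy[t - 1, e] = max(values, key=values.get)
        V[t, e] = values[policy[t - 1, e]]

print(policy[horizon - 1])   # optimal first action for each starting energy level
```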

Short Bio:

I am interested in elucidating to what extent human decisions rely on rational and optimal standards versus biases and heuristics. My research combines computational modeling with fMRI to investigate healthy and psychiatric populations. Currently, I work with Dominik Bach at the University of Zurich. I obtained my PhD with Hauke Heekeren at the Berlin School of Mind and Brain and the Freie Universität Berlin. Before, I studied Neuroscience and Biomedicine in London, Paris, and Würzburg.

Cristian Sminchisescu, Lund University

Title:

Large Scale Hierarchical Machine Learning for Computational Perception

Abstract:

Artificial intelligence and computational perception are rapidly becoming transformative technologies through the widespread availability of high-throughput sensors, computation power and big data. In this talk I will review research in machine learning, in particular the learning of large-scale, hierarchical structured models based on a recently developed matrix backpropagation methodology, and the formulation of visual inference problems within the weakly supervised setup of deep reinforcement learning. I will also review my inter-disciplinary (cognitive science) work in the computational modeling of human eye movements and large dataset creation, as well as computational and human studies devoted to the perception of the three-dimensional articulated pose from monocular images.

Short Bio:

Cristian Sminchisescu is a Full Professor in the Department of Mathematics, Faculty of Engineering, at Lund University, working in machine learning and computational perception. He obtained a doctorate in computer science and applied mathematics from INRIA and did postdoctoral research in the Artificial Intelligence Laboratory at the University of Toronto. He has been an area chair for major conferences such as CVPR, ICCV and ECCV, and will be a program chair for ECCV 2018 in Munich. His work has been funded by the US National Science Foundation, the German Science Foundation, the Swedish Science Foundation, the European Commission under a Marie Curie Excellence Grant, and, recently, the European Research Council under an ERC Consolidator Grant.

Daniel Braun, Max Planck Institute for Biological Cybernetics

Title:

Sensorimotor learning and decision-making in cognitive systems

Abstract:

Intelligent systems are often thought of as systems that can learn from their experience and alter their behavior accordingly. Mathematically, this is usually formalized within the framework of rational agency, where Bayes optimal models serve both as a normative and as a descriptive standard of intelligent behavior in the cognitive sciences. In the first part of the talk we discuss limitations of such models that arise from limited information-processing resources and model uncertainty and propose an information-theoretic framework for learning and acting with limited resources that is inspired by statistical physics and thermodynamics. In the second part of the talk we will give an overview of virtual reality experiments that have tested these information-processing principles in human sensorimotor behavior in the context of model uncertainty, human-machine interaction and structural learning in highly variable environments.
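A common way of writing down such an information-theoretic trade-off is sketched below in standard bounded-rationality notation (an assumption on my part; the talk's formulation may differ): an agent with prior behaviour p0 chooses a policy p that trades expected utility against the informational cost of deviating from p0, with an inverse temperature β acting as the resource parameter.

```latex
% Sketch (standard free-energy form from the bounded-rationality literature):
p^{*} \;=\; \arg\max_{p}\; \mathbb{E}_{p}\!\left[U(x)\right] \;-\; \frac{1}{\beta}\, D_{\mathrm{KL}}\!\left(p \,\|\, p_0\right)
\qquad\Longrightarrow\qquad
p^{*}(x) \;\propto\; p_0(x)\, e^{\beta U(x)}
```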

Short Bio:

Daniel studied physics, biology and philosophy at the University of Freiburg. He did his PhD with Daniel Wolpert in Cambridge as a visiting PhD student and at the same time obtained a PhD in philosophy of mind. Daniel also worked for one year with Stefan Schaal in Los Angeles. Since 2011 he has led an Emmy Noether Group at the MPI in Tübingen, and he has recently won an ERC Starting Grant.

Dominik Endres, Philipps-Univ. Marburg

Title:

Understanding the semantic structure of high-level vision and the basic representational units of motor control

Abstract:

An important recent research focus of Cognitive Science is understanding how semantic information is represented in the brain. Following the network theory of semantics (Churchland, 1988), which posits that an entity’s meaning is the network of relations in which it is embedded, my work aims at finding the basic representational units used by the brain, elucidating the concepts built out of these units, understanding how they are related, and correlating these concepts and their relations with behavioral measures. In the area of object vision, we showed that Formal Concept Analysis (FCA) (Ganter and Wille, 1997) yields interpretable semantic information from single-cell recordings (Endres and Földiák, 2009) and from fMRI BOLD responses recorded from human subjects (Endres et al., 2012). FCA is a mathematical formulation of the explicit coding hypothesis (Földiák, 2009) and does not impose inappropriate structure on the data, as e.g. hierarchical clustering does. Moreover, we employed FCA to elucidate organizational differences between high-level and low-level visual cortical areas. To validate these observations quantitatively, we showed that the attribute structure computed from the IT fMRI signal is highly predictive of subjective similarity ratings, whereas we found no such relationship for responses from early visual cortex. To study the basic representational units of motor control, we developed a framework for learning to generate (Velychko et al., 2014; Taubert et al., 2012) and to compare different types of movement primitives from human movement data (Endres et al., 2013). Movement primitives can serve as the low-level building blocks that connect continuous motor output to discrete, possibly language-like representational structures (Pastra and Aloimonos, 2012). While our research has not yet arrived at the level of discovering general action-semantic relations, pilot data indicate that our model selection framework agrees with human perceptual judgments (Chiovetto et al., 2014). Furthermore, we demonstrated that such primitives are not only compressed descriptions of movement kinematics, but can also be used to generate near-optimal, dynamically feasible gait for a humanoid robot (Koch et al., 2015).
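For readers unfamiliar with FCA, the toy sketch below enumerates the formal concepts (extent, intent) of a small, made-up object-attribute context by brute force; real analyses use dedicated FCA algorithms and data-derived rather than hand-crafted attributes.

```python
# Hedged toy illustration of Formal Concept Analysis (FCA): a formal concept
# is a pair (objects, attributes) such that the objects are exactly those
# sharing all the attributes, and vice versa. The context below is invented.
from itertools import chain, combinations

context = {
    "face":  {"animate", "has_eyes"},
    "dog":   {"animate", "has_eyes", "has_fur"},
    "cat":   {"animate", "has_eyes", "has_fur"},
    "chair": {"inanimate"},
}
attributes = set().union(*context.values())

def extent(attrs):   # objects possessing every attribute in attrs
    return {o for o, a in context.items() if attrs <= a}

def intent(objs):    # attributes shared by every object in objs
    return set.intersection(*(context[o] for o in objs)) if objs else attributes

concepts = set()
for attrs in chain.from_iterable(combinations(attributes, r)
                                 for r in range(len(attributes) + 1)):
    objs = extent(set(attrs))
    concepts.add((frozenset(objs), frozenset(intent(objs))))

for objs, attrs in sorted(concepts, key=lambda c: len(c[0])):
    print(sorted(objs), "<->", sorted(attrs))
```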

Short Bio:

Dominik Endres studied Physics at the University of Würzburg, graduating in 1998 with a thesis on Computational Neurophysics. Afterwards he commenced part-time studies for a PhD in Computational Neuroscience at the School of Psychology, University of St. Andrews, UK, advised by Dr. Peter Földiák, graduating in 2006. During that time, he also co-founded and ran a small IT business. After postdoc positions in St. Andrews and Tübingen (with Prof. M.A. Giese), where he worked on computational vision and motor control, he accepted a Junior Professorship in Theoretical Neuroscience at the University of Marburg in 2014.

Enrico Chiovetto, University Clinic of Tuebingen

Title:

Computational and experimental approaches to study sensorimotor coordination and learning of redundant movements

Abstract:

How the central nervous system controls the large number of redundant degrees of freedom to accomplish complex whole-body movements remains an open question. Many recent studies have shown that a possible solution to the problem might be provided by a modular architecture composed of invariant control modules (usually referred to as motor primitives or synergies), which are linearly combined to generate the desired motor output. The identification of such components has usually been carried out by applying different unsupervised learning algorithms. However, the availability of different methods may complicate the comparison and interpretation of results obtained in different studies. In the first part of my talk I propose a unifying framework for the description of motor primitives and a new algorithm (FADA) for their identification, developed according to this framework. I show how all the different definitions of motor synergies given in the literature can be derived from one single generative model, and that the algorithm can identify, from both artificial and experimental data sets, any kind of motor primitive with identification accuracy typically equal to or better than that of other commonly used techniques. In the second part of the talk I present some recent results on how complex motor coordination patterns vary over time during a highly redundant whole-body task (walking on a narrow beam) in both constrained and unconstrained conditions. Although many studies have focused on identifying the modular, low-dimensional organization underlying complex movements, less attention has been given to the adaptive mechanisms that allow the human body to deal flexibly with varying constraints, or with situations in which specific degrees of freedom become unavailable. A deeper understanding of the neurophysiological and biomechanical principles underlying complex redundant movements has implications for robotics, prosthetics and neuroscience.
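For orientation, here are two of the synergy definitions commonly found in the literature, written in assumed notation (the talk's unifying generative model subsumes definitions of this kind; the exact formulation may differ).

```latex
% Hedged sketch: m(t) is the vector of muscle activations at time t,
% w are invariant modules and c are their combination coefficients.
\text{spatial (time-invariant) synergies:}\quad
  \mathbf{m}(t) \;\approx\; \sum_{i=1}^{N} c_i(t)\,\mathbf{w}_i
\qquad
\text{temporal synergies:}\quad
  m_j(t) \;\approx\; \sum_{i=1}^{N} c_{ij}\, w_i(t)
```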

Short Bio:

Dr. Enrico Chiovetto received a M.S. in Electrical Engineering from the University of Padua (2004) and a Ph.D. in Humanoid Technologies from the University of Genoa (2010). Since 2010 he has been working at the University Clinic of Tuebingen in the context of multiple EU-funded robotic projects. His research interests include the study of the neural, biomechanical and perceptual mechanisms of sensorimotor coordination and learning during complex human movements and the development of computational models and algorithms for action representation and learning based on biological principles.

Frank Hutter, University of Freiburg

Title:

End-to-end learning & optimization

Abstract:

Deep learning has recently helped AI systems to achieve (close-to) human-level performance in several tasks, including speech recognition, object classification, and higher-level reasoning & planning in games. While the major benefit of deep learning is that it enables the learning of useful data representations in an end-to-end fashion, the overall network architecture and the learning algorithms' sensitive hyperparameters still need to be set manually by human experts. In this talk, I will show how we can automate this task based on an additional "meta"-layer that models and optimizes the performance of (deep) machine learning pipelines, thereby paving the way to effective, fully automated end-to-end learning. Example applications on data from neuroscience, robotics, and computer vision demonstrate the effectiveness of this approach. I will also briefly show related applications to the end-to-end optimization of algorithms for solving hard combinatorial problems.
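As a rough illustration of such a meta-layer, the sketch below tunes a standard classifier's hyperparameters by cross-validated performance. Plain random search stands in here for the model-based (Bayesian) optimization used in practice, and the search ranges and budget are assumptions for the example.

```python
# Hedged sketch: a minimal "meta"-layer that evaluates hyperparameter
# configurations of an ML pipeline and keeps the best one.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
rng = np.random.default_rng(0)

best_score, best_config = -np.inf, None
for _ in range(20):                               # meta-level evaluation budget
    config = {"C": 10 ** rng.uniform(-2, 2),      # log-uniform ranges (assumed)
              "gamma": 10 ** rng.uniform(-4, 0)}
    score = cross_val_score(SVC(**config), X, y, cv=3).mean()
    if score > best_score:
        best_score, best_config = score, config

print(best_config, round(best_score, 3))
```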

Short Bio:

Frank Hutter is currently leading an Emmy Noether Research Group in Computer Science at the University of Freiburg. He received his PhD from the University of British Columbia (2009) and his MSc from TU Darmstadt (2004). Frank received a doctoral dissertation award and several best paper awards for his work on AI, machine learning, and automated algorithm design. He enjoys solving real problems and won several international competitions on machine learning, SAT solving, and AI planning.

Frank Jäkel, University of Osnabrück

Title:

Concepts and Categories: Combining Insights from Machine Learning and Experimental Psychology

Abstract:

Categorization is a fundamental cognitive ability. Many, if not all, higher cognitive functions, like language or problem-solving, crucially depend on categorization. Therefore, categorization has been studied by cognitive scientists and researchers in artificial intelligence alike. Early machine learning algorithms for categorization were inspired by psychology and neuroscience, but today machine learning is a mature field and more recent methods have been developed far beyond their original cognitive motivations. These methods, in turn, can be used to inform experimental studies of human categorization behavior. I will show several examples of how insights from machine learning can feed back into experimental psychology. This is, however, not a one-way route: cognitive models can still shed light on human conceptual behaviors that currently no computer can emulate. I will argue that a full understanding of concepts and categories will depend on a combination of insights from machine learning and experimental psychology.

Short Bio:

Frank Jäkel is a Juniorprofessor for cognitive modeling at the University of Osnabrück. He was a postdoctoral fellow at MIT and at TU Berlin after finishing his PhD at the MPI for Biological Cybernetics. He studied cognitive science, artificial intelligence, and neuroscience in Osnabrück, Edinburgh and Tübingen.

Holger Schultheis, University of Bremen

Title:

Towards a unified view on human spatial cognition

Abstract:

In my talk I will propose a novel, unified view on human spatial cognition in terms of combining structures for mental spatial representation (MR) and spatial reference frames (RF).

According to this view, RF and MR are tightly but flexibly linked: tightly, because a combination of both is crucial for spatial cognition; flexibly, because there are no a priori restrictions on which MR may be combined with which RF. For instance, an MR employing an array structure in which two entities X and Y occupy adjacent array cells can support both representing (a) that X is north of Y (when employing a cardinal-direction RF) and (b) that X is to the left of Y (when a tabletop RF is employed). Thus, by flexibly combining MR and RF, the same MR may yield different representations. As a result, comparatively small sets of RF and MR may be sufficient to explain a wide range of spatial abilities. Moreover, the proposed view implies that seemingly different abilities may be similar or identical in important respects, because different abilities may share similar or identical MR or RF.

I will illustrate this proposal and its advantages by highlighting computational cognitive models of three spatial abilities: deductive reasoning, spatial term use, and perspective taking. Apart from the insights that each of the three models provides on its own, they jointly highlight possible commonalities between the three abilities that previous research has failed to recognize.

This indicates that the proposed approach of a systematic consideration of flexible combinations of RF and MR has the potential to provide a unifying view on human spatial cognition that helps to explain a multiplicity of phenomena by employing only a limited number of key components.

Short Bio:

Holger Schultheis graduated from Saarland University (2004) with a diploma in psychology and a diploma in computer science. He received his PhD from the University of Bremen (2009), where he also recently submitted his habilitation. His research focuses on experimentally investigating and computationally modeling higher cognition, as well as on developing modeling methodology. He is currently a senior researcher at the Bremen Spatial Cognition Center, after having been an adjunct professor in the Department of Psychology at the University of Notre Dame (Indiana, USA) and a PI in the Collaborative Research Center SFB/TR 8 Spatial Cognition at the University of Bremen.

Ian S. Howard, Plymouth University

Title:

Past and future actions influence motor learning

Abstract:

Understanding the neural basis of human movement requires experimentation at many different levels. Here we first introduce the methodology we use to investigate the motor system, which is based on the vBOT manipulandum. The vBOT is a modular, general-purpose, two-dimensional planar mechanical robotic interface, specifically designed for the investigation of motor learning and adaptation during arm movements. In ball sports, the roles of the backswing and the follow-through are both considered important for generating a good shot, even though they play no direct role in hitting the ball. Here we demonstrate the scientific basis of both of these phenomena. We present results from studies that investigated the effect of the immediately preceding movement on learning novel dynamics. These experiments involved participants making two-part arm movements. The first part constituted a contextual movement that was predictive of opposing dynamics experienced in the second part. We found that such immediate past movement provided a strong source of contextual information that affected learning and recall in the subsequent movement. We describe several experimental conditions that investigated how different aspects of the past movement affected this phenomenon and present illustrative results. We then investigated the corresponding effects of immediate future movement on motor learning. This study used a similar two-part movement paradigm, with the roles of the corresponding movements reversed. In this case the second movement constituted a contextual movement that was predictive of opposing dynamics experienced during the first movement. We again found that future movement affected motor learning and recall. On the basis of this result we hypothesized that if highly variable future movements directly followed a single dynamic learning task, each separate future direction would partition the learning in the preceding movement, resulting in slower learning. In contrast, if there were only a single, consistent future movement, no such partitioning would occur, resulting in faster learning. We found this was indeed the case and present the results from this study. Overall, our findings suggest that there is a critical period both before and after a current movement that determines motor memory activation and controls learning.

Short Bio:

Dr. Howard is a senior lecturer in the CRNS at Plymouth University. He studied Electronic Engineering at UCL, where he subsequently completed a PhD. He then undertook a postdoc in sensorimotor control with Prof. Daniel Wolpert in Cambridge. He adopts a multi-disciplinary approach to research, using a combination of robotic development, behavioral experimental studies and computational modeling.

Jakob Macke, Research Center Caesar

Title:

Understanding the statistics of neural population dynamics

Abstract:

Understanding how neurons collectively represent sensory input, perform computations and guide behaviour is one of the central goals of neuroscience. Large-scale recording methods make it possible to measure neural activity in large populations and to gain insights into their collective dynamics. However, even advanced recording methods can only sparsely sample activity in local circuits. What can the statistical structure of sparsely sampled neural population data tell us about underlying mechanisms and computations?

First, I will demonstrate the importance of taking the sampling process into account when testing theories of neural computation: Multiple previous studies have observed so-called ‘signatures of criticality’ in neural population data, and given rise to the notion that neural populations are optimized to be operating at a ‘critical point’. I will show that these effects arise generically in the presence of subsampling, and challenge the idea that ‘criticality’ is an organizing principle of neural computation.

Second, I will describe how the dynamics of high-dimensional neural population activity can be characterized using low-dimensional state-space models. Using cortical population data, I will show that this approach can be used to ‘stitch’ together sequentially imaged sets of neurons into one underlying dynamics model, and demonstrate how it allows us to gain insights into how populations of neurons collectively represent sensory stimuli.
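For concreteness, one common form of such a low-dimensional state-space model is sketched below; the specific model used in the talk may differ, and the notation is assumed.

```latex
% Hedged sketch: shared low-dimensional latent dynamics x_t drive the observed
% population activity y_t; "stitching" corresponds to fitting one shared x_t
% while each imaging session observes only a subset of the rows of C.
\mathbf{x}_{t+1} = A\,\mathbf{x}_t + \boldsymbol{\epsilon}_t,
  \qquad \boldsymbol{\epsilon}_t \sim \mathcal{N}(\mathbf{0}, Q)
\\[4pt]
\mathbf{y}_t \mid \mathbf{x}_t \;\sim\; \mathrm{Poisson}\!\big(\exp(C\,\mathbf{x}_t + \mathbf{d})\big)
```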

Short Bio:

Jakob Macke is a Max Planck Research Group Leader at Research Center caesar, Bonn. He studied Mathematics at Oxford University and performed graduate research at the MPI Tübingen. After two years as a postdoc at the Gatsby Computational Neuroscience Unit, he returned to the MPI Tübingen as a junior group leader. He has received the Gibbs Prize (Proxime Accessit) from Oxford University and the Otto Hahn Medal of the Max Planck Society, and is an elected member of the ‘Junge Akademie’ at the Berlin-Brandenburg Academy of Sciences and National Academy of Sciences Leopoldina.

Kristian Kersting, TU Dortmund

Title:

Lifted Machine Learning

Abstract:

Our minds make inferences that appear to go far beyond machine learning. Whereas people can learn richer representations and use them for a wider range of functions, machine learning has mainly been employed in a stand-alone context, constructing a single function from a table of training examples. In this talk, I shall touch upon computational models that can capture these human learning aspects by combining relational logic and statistical learning. However, as we tackle larger and larger relational learning problems, the cost of inference comes to dominate learning time and makes performance very slow. Hence, we need to find ways to reduce the cost of inference both at learning and at run time. One promising direction to speed up inference is to exploit symmetries in the computational models. I shall illustrate this for probabilistic inference, linear programs, and convex quadratic programs.
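As a toy illustration of how symmetries can be exploited, the sketch below computes the partition function of a fully exchangeable pairwise model by counting states with the same number of active variables instead of enumerating all 2^N configurations; the model and parameters are invented for the example and are not taken from the talk.

```python
# Hedged sketch of symmetry-exploiting ("lifted") inference: in a model where
# every variable has the same field h and every pair the same coupling J, the
# unnormalized probability of a state depends only on k, the number of active
# variables, so O(N) counting terms suffice.
from itertools import product
from math import comb, exp

N, h, J = 12, 0.3, 0.1

def lifted_partition(N, h, J):
    return sum(comb(N, k) * exp(h * k + J * (comb(k, 2) + comb(N - k, 2)))
               for k in range(N + 1))

def brute_force_partition(N, h, J):
    total = 0.0
    for state in product([0, 1], repeat=N):
        k = sum(state)
        agreeing_pairs = comb(k, 2) + comb(N - k, 2)   # pairs with equal values
        total += exp(h * k + J * agreeing_pairs)
    return total

print(lifted_partition(N, h, J))       # O(N) terms
print(brute_force_partition(N, h, J))  # 2^N terms, same value
```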

Short Bio:

Kristian Kersting is an Associate Professor for Data Mining at TU Dortmund University, Germany. He received his PhD in Computer Science from the University of Freiburg, Germany, in 2006. After a postdoc at MIT, he was with the Fraunhofer IAIS and the University of Bonn. He has (co-)authored more than 130 technical publications. He co-chaired ECML PKDD 2013 as well as the KDD 2015 Best Paper Award Committee, and is an action editor of MLJ, DAMI, JAIR, and AIJ.

Marianne Maertens, TU Berlin

Title:

Understanding human vision – a synthetic approach

Abstract:

Humans can easily distinguish a variety of objects based on their unique visual appearances. Perceiving the color, the material and the location of an object is vital for human survival as it allows us to acquire food, navigate, and use tools. Vision is our dominant sense and understanding vision is hence fundamental to understanding human experiences in general. The extraction of invariant object properties from the sensory input signal is difficult, because the retinal input is corrupted by variations in the environment that are irrelevant to the intrinsic properties of objects. It is not yet understood how the visual system resolves the ambiguities present in the sensory input signal. Recently, we showed that a simple image measure was sufficient to predict the appearance of surfaces in the presence of changes in illumination. Together with other pieces of evidence this foreshadows a paradigm shift in the study of visual perception. We call the observed mechanism a smart perceptual mechanism, because – contrary to the prevailing paradigm – the visual system does not estimate a veridical model of the physical world, but selectively uses information to compute properties of interest. I consider this to be an example case where the study of human vision can inspire computational vision by revealing simple biological mechanisms that have evolved over thousands of years to enable complex functioning.

Short Bio:

Marianne Maertens is the head of an Emmy Noether junior research group at the Faculty of Electrical Engineering and Computer Science at the Technical University of Berlin. She received her PhD at the Max Planck Institute of Cognitive Neuroscience in Leipzig, working on visual scene segmentation in human perception. Her current research interests comprise perceptual organization, models of visual processing, and human grasping.

Matthias Weigelt, Uni Paderborn

Title:

Cognitive science in action: Experimental investigations on anticipatory control of human motor behavior

Abstract:

In the talk, I will present an overview of our work on the planning constraints and organization principles of human motor behavior. A special focus will be on the impact of goal representations and action effects for the acquisition of object manipulation skills. The experiments portrayed include simple object manipulations, bimanual coordination, and the execution of movement sequences in the context of dual-task performance. In addition, experiments on the development of motor planning skills will be presented. As the empirical findings will show, anticipating future goal states and action effects is a strong determinant of human motor behavior. It is argued that anticipatory behavioral control constitutes a basic principle in the construction of everyday actions.

Short Bio:

Matthias Weigelt received his PhD in psychology with Wolfgang Prinz at the MPI for Human Cognitive and Brain Sciences in Munich in 2004. He worked as a postdoc with Thomas Schack at Bielefeld University and as a responsible investigator in the DFG Cluster of Excellence in Cognitive Interaction Technology (CITEC), before becoming a professor for sport psychology, first at Saarland University (in 2010) and later at Paderborn University (since 2011).

Moritz Grosse-Wentrup, Max Planck Institute for Intelligent Systems

Title:

Machine Learning and Biosignal Processing for Cognitive Science

Abstract:

Understanding cognition means solving the problem of how neural systems translate sensory information into behaviour. To date, this problem has been approached by constructing mechanistic models for well-circumscribed experimental setups. I present a new approach to cognitive science, in which we develop machine learning algorithms to directly infer the neural basis of cognition from large-scale data. In particular, I will present novel methods for learning cause-effect relationships in cognitive systems with latent confounders, discuss how to address the problem of non-stationarity when studying dynamic cognitive systems, and demonstrate the utility of our methods on new classes of brain-computer interfaces (BCIs) for communication and rehabilitation.

Short Bio:

Moritz Grosse-Wentrup is group leader (W2) at the Max Planck Institute for Intelligent Systems in Tübingen. He studied at the Technische Universität München (TUM) and at the University of Maryland, College Park, and obtained the title of Dr.-Ing. from TUM. His awards include the BCI Research Award, the Teaching Award of the Tübingen Graduate School of Neural Information Processing, and the Chorafas Award for the best doctoral thesis at TUM. He is chair of the PRNI steering committee, founding member of the EURASIP Special Area Team on Biomedical Image and Signal Analysis, and serves as NIPS area chair.

Paul Schrater, University of Minnesota

Title:

Learning and representing value in an uncertain world: Probabilistic models of value

Abstract:

While it is fair to say we choose what we value, the relative ease with which we make choices and actions masks deep uncertainties and paradoxes in our representation of value. For example, ambiguous and uncertain options are typically devalued when pitted against sure things; however, curiosity makes uncertainty valuable. In general, ecological decisions can involve goal uncertainty, uncertainty about the value of goals, and time- or state-dependent values. When a soccer player moves the ball down the field, looking for an open teammate or a chance to score a goal, the value of action plans like passing, continuing or shooting depends on conditions like teammate quality, remaining metabolic energy, defender status and proximity to the goal, all of which need to be integrated in real time. In this talk, we explicate two challenging aspects of human valuation using hierarchical probabilistic value representations. Hierarchical probabilistic value representations provide a principled framework for complex, contextual value learning and for the conversion of different kinds of value by representing more abstract goals across a hierarchy. We show that preference reversals can be generated from rational value learning with hierarchical context, including anchoring and similarity effects, and we use our theory to experimentally induce preference reversals by manipulating subjects' histories of experience. We also show how probabilistic representations of value can solve the problem of converting and integrating heterogeneous values, like metabolic costs vs. scoring a soccer goal. By modeling values in terms of probabilities of achieving better outcomes, we can integrate probabilistic value representations seamlessly into control-theoretic models by decomposing complex multi-goal problems into a weighted mixture of control policies, each of which produces a sequence of actions associated with a more specific goal. Critically, the weights are inferences that integrate all the time-varying probabilistic information about the relative quality of each policy. We use this approach to give a rational account of a set of reaching and oculomotor experiments with multiple goals.
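In assumed notation (not taken from the talk), the mixture-of-policies decomposition described above can be sketched as follows: the overall action distribution is a weighted mixture of goal-specific policies, with weights given by the inferred, time-varying probability that each goal g is currently the best one to pursue.

```latex
% Hedged sketch of a weighted mixture of control policies:
\pi(a \mid s_t) \;=\; \sum_{g} \underbrace{P\!\left(g \mid s_{1:t}\right)}_{\text{time-varying inference}}\; \pi_g(a \mid s_t)
```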

Short Bio:

Prof. Schrater is an Associate Professor of Computer Science and Psychology at the University of Minnesota. He received his PhD in Neuroscience from the University of Pennsylvania in 1999, supervised by Eero Simoncelli and David Knill, and completed a postdoc with Dan Kersten. His research investigates principles of learning and motivation, primarily from a control perspective. As a professor of both Psychology and Computer Science, he has a dual interest in human and machine learning, and his research involves a synthesis of the two areas. Dr. Schrater's work has involved developing and testing probabilistic models of brain computations in perception, motor control, learning, motivation and memory. Much of his research has involved uncovering how the central nervous system compensates for uncertainty by learning, acquiring new information, and adjusting its motor strategies. More recently, his research has focused on computational models of intrinsic motivation and the intersection between decision making, motivation, and learning, using video games as both a tool and a topic of interest.

Radoslaw Martin Cichy, FU Berlin

Title:

Towards a spatio-temporally resolved and algorithmically explicit account of visual cognition

Abstract:

Understanding visual cognition in the brain requires answering three questions: what is happening where and when in the human brain when we see? In this talk I will present recent work that addresses these questions in an integrated analysis framework combining human magnetoencephalography (MEG), functional magnetic resonance imaging (fMRI) and deep neural networks (DNNs). The talk has three parts. In the first part, I will show how fMRI and MEG can be combined using multivariate analysis techniques (pattern recognition plus representational similarity analysis) to yield a spatio-temporally integrated view of human brain activity during object vision. In the second part I will show how DNNs can be used to understand the human visual system: they predict the spatio-temporal hierarchy of the human visual system, and representations of abstract visual properties, such as scene size, find an analogue in DNNs. In the third, shorter part I will briefly highlight ongoing and future work: how do neural representations relate to behavior, how does visual cognition develop, and how flexible is it?
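Below is a minimal sketch of the MEG-fMRI fusion idea via representational similarity analysis, using random placeholder data with assumed shapes; it is meant to convey the analysis logic, not the exact pipeline used in the work presented.

```python
# Hedged sketch: correlate a time-resolved MEG representational dissimilarity
# matrix (RDM) with an fMRI ROI RDM to localize when the MEG signal carries
# the same representational geometry as that region. Data are random placeholders.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_conditions, n_sensors, n_times, n_voxels = 20, 50, 100, 300

meg = rng.standard_normal((n_conditions, n_sensors, n_times))   # condition-wise MEG patterns
fmri = rng.standard_normal((n_conditions, n_voxels))             # condition-wise ROI patterns

fmri_rdm = pdist(fmri, metric="correlation")                     # condition-by-condition dissimilarities

# For each time point, build the MEG RDM and correlate it with the fMRI RDM.
fusion = np.array([
    spearmanr(pdist(meg[:, :, t], metric="correlation"), fmri_rdm).correlation
    for t in range(n_times)
])
print(fusion.argmax())   # time point whose MEG geometry best matches this ROI
```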

Short Bio:

Radoslaw Martin Cichy heads the Visual Cognition group at FU Berlin. His research focuses on mapping and understanding the neural dynamics of visual object recognition, using M/EEG, fMRI, and deep neural networks. Before that, he worked at CSAIL, MIT. He holds a B.S. degree in Cognitive Science (University of Osnabrück) and a Ph.D. in Psychology (Humboldt University, BCCN Berlin).

Stefan Haufe, TU Berlin

Title:

How much can non-invasive neurophysiology tell us about brain connectivity?

Abstract:

Analysis of brain connectivity using electro- and magnetoencephalography (EEG/MEG) is an emerging field promising to greatly extend our understanding of human brain functioning in health and disease. The main problem with EEG and MEG data is the irreversible linear mixing of brain sources into sensors as a result of volume conduction in the head. We have demonstrated that some of the most popular connectivity measures are 'non-robust', that is, they lead to false detections of interactions for such data. In this talk, we will present methods to 'robustify' arbitrary connectivity measures, with which the problem of spurious connectivity can be avoided. We will moreover outline our efforts to quantitatively evaluate connectivity estimates in terms of bias and variance, which include the development of a public benchmark and an empirical study of the reproducibility of results across standard estimation pipelines. Finally, we will discuss the potential use of robust connectivity measures as diagnostic markers of pathological brain activity in psychiatric and neurological disorders such as drug addiction and Parkinson's disease.
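As one concrete example of a measure that is insensitive to volume conduction (not necessarily the methods presented in the talk), the sketch below shows that purely instantaneous mixing of two independent sources inflates the magnitude of coherency while leaving its imaginary part near zero, because zero-lag mixing contributes only to the real part of the cross-spectrum. The simulated signals and mixing coefficients are illustrative.

```python
# Hedged illustration: spurious connectivity from instantaneous mixing.
import numpy as np
from scipy.signal import csd, welch

fs, n = 250, 250 * 120                        # 2 minutes of simulated data at 250 Hz
rng = np.random.default_rng(0)
s1, s2 = rng.standard_normal(n), rng.standard_normal(n)   # independent sources

x = s1 + 0.6 * s2                             # "sensor" signals: instantaneous mixtures
y = s2 + 0.6 * s1

f, pxy = csd(x, y, fs=fs, nperseg=1024)
_, pxx = welch(x, fs=fs, nperseg=1024)
_, pyy = welch(y, fs=fs, nperseg=1024)
coherency = pxy / np.sqrt(pxx * pyy)

print("mean |coherency|:", np.abs(coherency).mean())           # large despite no interaction
print("mean |Im(coherency)|:", np.abs(coherency.imag).mean())  # close to zero
```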

Short Bio:

Stefan Haufe is a Marie-Curie postdoctoral fellow at Technische Universität Berlin. He was previously a Marie-Curie postdoctoral fellow at Columbia University, New York, and a postdoctoral researcher at the City College of New York and TU Berlin. He received a Ph.D. degree in natural sciences from TU Berlin in 2011 and a Diploma in computer science from Martin-Luther-Universität Halle-Wittenberg in 2005. His research interests include brain connectivity analysis, EEG/MEG inverse modeling, statistical source separation, decoding, interpretation and validation in neuroscience, brain-computer interfacing, mental state monitoring, and the study of attention and emotion using hyperscanning paradigms.

Tatjana Tchumatchenko, Max Planck Institute for Brain Research

Title:

Processing of sensory information in large neural networks

Abstract:

We now know that higher cognitive functions such as sensory perception, motor control and decision making emerge from the activity of large spiking networks. To understand how these higher cognitive functions are mechanistically implemented in the cortex, and to mimic at least some of these functions in man-made machine learning networks, it is necessary to address two fundamental yet open challenges. The first challenge is to uncover how large spiking neural networks encode the sensory information they receive in the patterns of spikes they emit. The second challenge is to understand how neural networks can switch on demand between different computations through small adjustments of their intrinsic parameters, such that they achieve the desired response for arriving sensory inputs. In this talk, I will present my recent work addressing these two challenges.

Short Bio:

Dr. Tatjana Tchumatchenko is head of the independent research group "Theory of Neuronal Dynamics" at the Max Planck Institute for Brain Research in Frankfurt. She received her Dr. rer. nat. in Computational Neuroscience from Göttingen University and performed post-doctoral research at the Center for Theoretical Neuroscience at Columbia University. The goal of her research is to understand the computational rules of large neural networks and how they relate to the emergence of cognitive functions. The long-term goal of her research is to use this mechanistic understanding of large cortical networks not only for a theoretical understanding of cognitive functions but also to translate it into efficient artificial neural networks for biological signal processing. Throughout her academic career, Dr. Tchumatchenko has received numerous fellowships and awards, including Collaborative Research Center funding (SFB 1080), the Behrens-Weise Foundation Award, and fellowships of the Volkswagen Foundation and the German National Merit Foundation. Most recently, she received the Heinz Maier-Leibnitz Prize 2016.

Thorsten Pachur, Max Planck Institute for Human Development

Title:

How attention shapes decision making under risk

Abstract:

How do people make decisions under risk, such as when choosing between different investment options or between medical treatments? The arguably most prominent descriptive model of risky decision making is cumulative prospect theory (CPT; Tversky & Kahneman, 1992), which accounts for decisions in terms of economic concepts such as utility and subjective probability curves. Like other economic models, CPT is usually viewed as being mute with regard to the underlying cognitive processing. Based on behavioral and process-tracing studies as well as computer simulations, however, I demonstrate that individual differences in CPT’s parameters track differences in attentional processes during predecisional information search. For instance, CPT meaningfully measures the amount of attention the decision maker allocates to risk or reward information, or how much attention is allocated to gains relative to losses. By this virtue, CPT is also able to capture the operation of heuristic processes (e.g., limited and ordered search). These findings highlight CPT as a tool to measure cognitive processing; in addition, they open up new perspectives on the cognitive processes that give rise to the characteristic shapes of CPT’s value and weighting functions. After all, attentional processes seem to be an important driver of fundamental properties of decisions under risk, such as the overweighting of rare events and loss aversion.
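For reference, a standard parameterization of CPT's value and probability-weighting functions from Tversky and Kahneman (1992) is sketched below; exact functional forms and parameter names vary across studies.

```latex
% Value function (alpha, beta: curvature; lambda: loss aversion) and one
% common probability-weighting function (gamma: sensitivity to probabilities,
% governing e.g. the overweighting of rare events):
v(x) =
\begin{cases}
  x^{\alpha} & \text{if } x \ge 0 \\
  -\lambda\,(-x)^{\beta} & \text{if } x < 0
\end{cases}
\qquad
w(p) = \frac{p^{\gamma}}{\left(p^{\gamma} + (1-p)^{\gamma}\right)^{1/\gamma}}
```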

Short Bio:

Thorsten Pachur is Senior Researcher at the Center for Adaptive Rationality at the Max Planck Institute for Human Development in Berlin. He studied psychology at the Free University Berlin and the University of Sussex and received his PhD from the Free University of Berlin in 2006; from 2006 to 2012 he worked as a research scientist at the University of Basel, where he completed his habilitation in 2012. Thorsten is interested in memory and other search processes in decision making.

Thorsten Stein, Karlsruhe Institute of Technology

Title:

Interlimb transfer in motor adaptation and skill learning

Abstract:

Movements are an important aspect of human life because they are our only possibility to interact with the world. As humans are not proficient in performing all possible tasks and its variations by birth, the motor system must be capable of learning new skills as well as adapting existing skills to changes in conditions. However, a fundamental feature of human motor learning is the ability to transfer motor expertise from one task or context to another, i.e., expertise gained through practice in one situation changes performance in a different situation. Thereby, interlimb transfer refers to a generalization of motor learning from one limb to another. This type of transfer is a well-documented phenomenon and is of high interest for theoretical and practical reasons. In my talk I will present two studies investigating interlimb transfer from a basic and an applied research perspective: [1] Motor adaptation occurs in response to both external perturbations and changes in the [2] From an applied research perspective we were interested if the phenomenon of interlimb transfer

Short Bio:

Thorsten Stein is an assistant professor for Movement Science and Biomechanics at the Karlsruhe Institute of Technology (KIT). He received a Diploma degree in Sports Science (major: Computer Science) and a Ph.D. in Computational Motor Control from TU Darmstadt in 2004 and 2010, respectively. In 2009 he was awarded the “Reinhard Daugs Förderpreis” of the Motor Behavior section of the German Society of Sport Science, and in 2011 a Young Investigator Group (“Computational Motor Control and Learning”) within the framework of the German Excellence Initiative at KIT. He is Head of the BioMotion Center at the Institute of Sports and Sports Science at KIT. His research focuses on postural control; motor coordination; adaptation, generalization and consolidation of human motor memory; biomechanics of human movements; and sports performance.