Our annual summer school gives students an in-depth look at dynamic field theory over the course of a single week. The summer school covers everything from the basics of dynamic field theory up to our latest research projects.
This virtual edition of our summer school consists of two parts: a live lecture series and a hands-on workshop.
The lecture series will be held as a video conference and provides a step-by-step introduction to dynamic field theory. It is open to everyone; you only have to register with your email address. Lectures will take place from August 15 to August 20, between 3:00 and 6:00 pm (UTC+2).
The two-and-a-half-day project workshop gives students the opportunity to put their newly acquired skills to use in a concrete, hands-on modeling project. Students solve the task in our open-source simulation environment under the guidance of a personal tutor. Because of this one-on-one tutoring, the number of workshop participants is limited. To apply, please fill out the online application form, which asks for a cover letter, a CV, and some other information that we use to prepare projects for you. We encourage workshop applications by small groups of participants, for example two or three colleagues who will work together locally on the same project and may share a tutor. Each member of such a group should apply separately and list their potential group partners in the application form. The workshop will take place from August 18 to 20. Personal tutoring via video conference will be available at flexible hours on each workshop day.
Participation in any part of the school is free of charge.
Who is this summer school for?
The summer school is aimed at students at the advanced undergraduate or graduate level, postdocs, and faculty members in the areas of embodied cognition, cognitive science, developmental science, cognitive neuroscience, developmental robotics, autonomous robotics, and cognitive robotics, as well as anyone else who wants to learn about dynamic field theory.
Structure of the school
The school is led by Prof. Gregor Schöner, who lectures on the basic concepts of DFT in the live lecture series. Workshop sessions provide hands-on training in working with dynamic field models in COSIVINA and CEDAR, our interactive simulation environments. Students learn to create a dynamic field model from the ground up and to apply the concepts of DFT to their home domain of interest.
For the hands-on work, that is, the exercises and projects, we use numerical simulators. COSIVINA is a MATLAB-based simulator for dynamic field theory that supports the quick assembly of DFT models, for instance to account for human behavioral data.
Models based on larger field architectures can be implemented in the C++ based simulator CEDAR, which features a graphical user interface in which core elements of DFT can be assembled without in-depth programming knowledge. For more advanced projects in CEDAR, basic knowledge of C++ is required.
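At their core, both simulators numerically integrate dynamic field equations. To give a flavor of what that involves, here is a minimal, self-contained Python sketch (illustrative only, not COSIVINA or CEDAR code; all parameter values are invented for this example) that Euler-integrates a one-dimensional Amari-style neural field with a localized Gaussian input and a local-excitation/broader-inhibition interaction kernel:

```python
import numpy as np

def sigmoid(u, beta=4.0):
    """Sigmoidal output nonlinearity of the field."""
    return 1.0 / (1.0 + np.exp(-beta * u))

def simulate_field(steps=400, n=101, dt=1.0, tau=20.0, h=-5.0):
    """Euler-integrate a 1-D neural field:
    tau * du/dt = -u + h + s(x) + integral of w(x - x') * sigmoid(u(x'))."""
    x = np.linspace(-10.0, 10.0, n)
    dx = x[1] - x[0]
    # localized Gaussian input centered at x = 0
    s = 6.0 * np.exp(-x**2 / (2 * 1.5**2))
    # interaction kernel: narrow local excitation, broader inhibition
    k = (np.arange(n) - n // 2) * dx
    w = 8.0 * np.exp(-k**2 / (2 * 1.0**2)) - 3.0 * np.exp(-k**2 / (2 * 4.0**2))
    u = np.full(n, h)  # field starts at its negative resting level
    for _ in range(steps):
        interaction = np.convolve(sigmoid(u), w, mode="same") * dx
        u += dt / tau * (-u + h + s + interaction)
    return x, u
```

After relaxation, the field forms a self-stabilized activation peak over the input location while remaining below threshold elsewhere, which is the kind of attractor state the lectures build on.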
To be announced
Some of the topics that the summer school covers include:
- Neural dynamics and basic attractor states
- Dynamic fields
- Introduction to COSIVINA and CEDAR
- Links to neurophysiology and embodiment
- Multi-dimensional fields and multi-layer dynamics
- Higher cognition
- Sequence generation
DFT 1: Neural dynamics, neural fields, their dynamic instabilities and dynamic regimes
This first of three core lectures of the DFT summer school introduces the foundational ideas of neural dynamics and dynamic neural fields. At its core are attractor states and their characteristic instabilities for detection, selection, and memory. One case study illustrates how DFT models are used to account for behavioral and neural data; a second illustrates how DFT models ultimately link to sensory and motor systems.
|Video||DFT 1: Neural dynamics, neural fields, their dynamic instabilities and dynamic regimes|
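The detection instability discussed in this lecture can be illustrated with a single neural node. In this hypothetical Python sketch (my own illustration with invented parameters, not course material), the node's activation relaxes to an attractor; a weak input leaves it in the subthreshold "off" state, while a strong input destabilizes that state and switches the node "on":

```python
import numpy as np

def sigmoid(u, beta=4.0):
    """Sigmoidal output nonlinearity of the node."""
    return 1.0 / (1.0 + np.exp(-beta * u))

def relax(s, u0=-5.0, h=-5.0, c=6.0, tau=20.0, dt=1.0, steps=1000):
    """Euler-integrate a single neural node with self-excitation:
    tau * du/dt = -u + h + s + c * sigmoid(u), starting from u0."""
    u = u0
    for _ in range(steps):
        u += dt / tau * (-u + h + s + c * sigmoid(u))
    return u

# weak input: the node settles in the subthreshold "off" attractor
u_off = relax(s=2.0)
# strong input: the "off" state disappears (detection instability)
u_on = relax(s=6.0)
```

With these parameters, `u_off` stays negative while `u_on` ends up positive: the hallmark of the detection instability.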
DFT 2: Toward higher cognition: binding, coordinate transforms, grounding, mental mapping
This second core lecture of the DFT school looks into dynamic neural fields that combine different types of feature dimensions, providing the basis for flexibly extracting and binding features and enabling visual search. Binding through space enables higher-dimensional representations within the localist neural representations of DFT. Coordinate transforms exploit the binding of source and target coordinate frames to a space that steers the transform. Two examples illustrate how higher cognition emerges: perceptual grounding and mental mapping of relational phrases.
|Video||DFT 2: Toward higher cognition: binding, coordinate transforms, grounding, mental mapping|
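A convolution gives the flavor of such a coordinate transform. In this hypothetical numpy sketch (illustrative only, not course code; positions and widths are invented), a source peak at retinal position 30 is combined with a "steering" peak representing a gaze shift of +25, yielding a transformed peak near position 55:

```python
import numpy as np

n = 101
x = np.arange(n)

# activation peak at retinal position 30 (source frame)
source = np.exp(-(x - 30) ** 2 / (2 * 3.0 ** 2))

# "steering" peak encoding a shift of +25 (e.g., gaze direction)
offsets = np.arange(-(n // 2), n // 2 + 1)
steer = np.exp(-(offsets - 25) ** 2 / (2 * 3.0 ** 2))

# convolving the source with the steering field shifts the peak
target = np.convolve(source, steer, mode="same")
# the transformed peak sits near 30 + 25 = 55 (target frame)
```

In a full DFT architecture this shift is carried out by a higher-dimensional transformation field that binds the source and steering frames, but the resulting mapping is the one sketched here.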
DFT 3: Sequences and general discussion
This third core lecture discusses the mechanism in neural dynamics for the autonomous generation of sequences of behavioral or mental acts. This is followed by a general discussion: why DFT architectures work as promised; the role of embodiment in DFT; and the relationship between DFT and connectionism/deep neural networks, between DFT and symbolic computation, and between DFT and Vector Symbolic Architectures.
|Video||DFT 3: Sequences and general discussion|
|Exercises||Exercise 1: Neural node dynamics|
|Exercises||Exercise 2: Neural field dynamics|
Survey of Cedar and DFF
Jan Tekülve and Daniel Sabinasz present the software frameworks Cedar and DFF for DFT simulations.
Exercise 3: Field architectures
Tutorial and Exercise with the Cedar Simulator
Raul Grieben: Bridging DFT and Deep Neural Networks
In this special lecture, Raul Grieben presents a part of his work on a neural theory of visual search. His presentation emphasizes how a deep neural network that supports classification is mapped onto the localist representations of DFT and how guidance features for visual search can be extracted from the deep network. He demonstrates how the model accounts for scene grammar by explaining how visual anchors work.
|Video||Raul Grieben: Bridging DFT and Deep Neural Networks|
Daniel Sabinasz: Models of Grounded Cognition
In this special lecture, Daniel Sabinasz reviews the notion of grounded cognition. He points to the challenges that a neural theory of grounding must face and outlines the DFT approach toward solving these problems. The high point of his talk is an outline of his own recent theoretical work on neural representations for conceptual structure and an account for the grounding of nested relations.
|Video||Daniel Sabinasz: Models of Grounded Cognition|
Mathis Richter: Neuromorphic computing and DFT
In this guest lecture, Mathis Richter shows how neuromorphic computing can be used to implement DFT architectures in the form of genuinely neural computation. His emphasis is on architectures that link to sensory inputs, but he also reviews neuromorphic computation more broadly.
|Video||Mathis Richter: Neuromorphic computing and DFT|
|Lecture slides||John P Spencer: Word-Object Learning via Visual Exploration in Space (WOLVES): A Neural Process Model of Cross-Situational Word Learning|
Cora Hummert: Using mouse tracking to study visual search
In this shorter presentation, Cora Hummert reviews an experimental study that was inspired by the DFT accounts of visual search and of reaching movements outlined earlier in the course and, for some of the participants, studied in the project workshop. Within a visual search task, participants moved a computer mouse to point at an object, presented at varied locations, that matched the search cues (color and orientation). A distractor object that matched only the color attracted the path of the mouse.
|Video||Special Lecture Cora Hummert: Using mouse tracking to study visual search|
Jan Tekülve: Neural Process models of intentionality
In this final special lecture of the DFT summer school 2022, Jan Tekülve introduces the notion of "intentionality," as defined by philosophers of mind, as a way to characterize what a neurally grounded account of the "mind" needs to achieve. He then provides a neural process model within DFT that realizes intentionality in six psychological modes: perception, memory, and belief, as well as action, planning, and goal, which sample the two directions of fit of intentionality, mind-to-world and world-to-mind. He makes this concrete with a simple toy scenario in which a robotic vehicle explores an environment, builds a model of it, and transforms objects by "painting" them (in a highly simplified way). The robot learns contingencies of the world (which color combinations arise from painting over colored surfaces) and uses them to achieve its goal.
|Video||Special Lecture Jan Tekülve: Neural Process model of Intentionality|
Introduction of the workshop projects
Jan Tekülve and Daniel Sabinasz present the projects that participants can do in Cedar, Cosivina, or DFF during the workshop portion of the summer school.
Visual Search Project (Cedar)
All Cedar projects should start with this one.
|Exercises||Spatial Language Project (Cedar)|
|Exercises||Sequence Replay Project (Cedar)|
Reaching Project (Cedar)
This project might only run on Windows; try the template before starting.
|Exercises||Simon Effect Project (Cosivina)|
Cedar Template Files
Template architecture files for all Cedar projects. The reaching project needs the _simulator file, while all other projects can be done with the _picture file.