
DFT: From Neural Principles to Autonomous Neural Dynamic Agents
How can we build, in 90 minutes, an autonomous agent that is adaptive, interpretable, and grounded in neural principles?
This tutorial introduces Dynamic Field Theory (DFT), a mathematical framework for modeling cognition, perception, and action as the continuous evolution of neural activation patterns across time and space. Unlike static or purely data-driven approaches, DFT supports the development of autonomous agents that generate goal-directed behavior in real time, integrating perception, memory, and prior knowledge. The intrinsic stability properties of all neural representations in such DFT agents lead to scalability and enable online learning and adaptation. DFT enables neural process models that are transparent, interpretable, and that provide mechanistic explanations grounded in neural principles.
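At the core of DFT is the Amari neural field equation, in which activation u(x, t) over a feature dimension x evolves under external input, a resting level, and lateral interaction. A minimal simulation sketch of such a field (all parameter values here are illustrative assumptions, not taken from the tutorial materials):

```python
import numpy as np

def simulate_field(steps=200, n=101, tau=10.0, h=-5.0, dt=1.0):
    """Euler-integrate a 1-D Amari dynamic neural field:
    tau * du/dt = -u + h + s(x) + integral w(x - x') f(u(x')) dx'
    """
    x = np.linspace(-10.0, 10.0, n)
    dx = x[1] - x[0]
    u = np.full(n, h)                        # field starts at resting level h

    # interaction kernel: local excitation, weak global inhibition
    w = 6.0 * np.exp(-x**2 / 2.0) - 0.5

    # localized external input, e.g. a perceptual stimulus at x = 2
    s = 5.5 * np.exp(-(x - 2.0)**2 / 2.0)

    for _ in range(steps):
        f = 1.0 / (1.0 + np.exp(-u))         # sigmoidal output nonlinearity
        lateral = np.convolve(f, w, mode="same") * dx
        u += (dt / tau) * (-u + h + s + lateral)
    return x, u
```

With these settings, the localized input drives the field through a detection instability: a self-stabilized activation peak forms over the stimulus location and persists as a stable state, the kind of intrinsic stability the text above refers to.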
Participants will gain hands-on experience with DFT by collaboratively building a simple autonomous neural dynamic agent. The tutorial also introduces the latest applications of DFT in cognitive science, robotics, and hybrid approaches that integrate DFT with deep neural networks, combining the strengths of both methods.
Schedule
Building a Neural Dynamic Agent (90 min)
Participants will be guided through the step-by-step construction of a simple autonomous neural dynamic agent in DFT. This agent will be developed live and continuously updated to solve a series of progressively more complex toy tasks in simplified environments. Each modeling step is carefully tied back to DFT's core theoretical concepts, allowing participants to gain both hands-on modeling insight and deeper conceptual understanding.
No prior experience with DFT is required.
Break and Networking (30 min)
An informal break with space for questions, networking, and informal discussions.
Hybrid Models with DFT (15 min)
This session explores how DFT can be combined with deep neural networks to create hybrid models.
Latest DFT Applications (25 min)
DFT researchers will present short, focused flash talks (approximately 5 minutes each) on the latest DFT applications, illustrating the versatility of the framework across domains.
Open Discussion and Q&A (20 min)
We invite all participants to join an open discussion. Topics may include practical questions about applying DFT, conceptual debates around neural process modeling, or reflections on the role of neural dynamics in modern AI.
Latest DFT Applications

The ability of situated, embodied agents to pursue goals using internal knowledge is central to autonomous behavior. This raises the challenge of how neural systems represent and coordinate intentional states like perception, memory, belief, and action. Drawing on Searle’s intentional modes, we model these states as stabilized neural activation patterns, with transitions driven by dynamic instabilities. This enables flexible sequencing of internal states based on context. Beliefs emerge through learning as associative memory networks that activate to support goal achievement.

Handling relations requires solving the problems of structured representation. First, categorizing relations requires extracting representations that are invariant over the specific features of the objects in a relationship. Second, a wide variety of objects must be flexibly bound to different relational roles while still retaining their individual identities. Third, each object has to be treated as an instance in order to (a) distinguish objects that share the same set of features and (b) preserve the identity of an object that plays different roles in multiple relations. Lastly, all of these problems have to be solved in a scalable way, without explicitly representing all possible combinations of object fillers and roles. These problems are addressed in an example scenario in which the agent has to perceptually ground an object specified by multiple nested relations.

Generalizing from previous experience to a novel situation requires the ability to recognize the similarity between them. Identifying similarity based on relational roles, i.e., analogy, makes it possible to abstract over the concrete features of the objects present in a specific situation. Generating a structured representation of one situation and searching for an object that matches it in another situation is the key process for establishing a mapping between objects across the two situations. Additionally, the agent needs to resist mappings based on mere featural similarity by sequentially testing mapping hypotheses. We demonstrate how a visual analogical mapping problem can be solved for two given scenes through spatial selection of matching objects.

We present ROBOVERINE, a neural dynamic robotic active vision process model of selective visual attention and scene grammar in naturalistic environments. The model addresses significant challenges for cognitive robotic models of visual attention: combined bottom-up salience and top-down feature guidance, combined overt and covert attention, coordinate transformations, two forms of inhibition of return, finding objects outside of the camera frame, integrated space- and object-based analysis, minimally supervised few-shot continuous online learning for recognition and guidance templates, and autonomous switching between exploration and visual search. Furthermore, it incorporates a neural process account of scene grammar — prior knowledge about the relations between objects in a scene — to reduce the search space and increase search efficiency. The model also showcases the strength of bridging two frameworks: Deep Neural Networks for feature extraction and Dynamic Field Theory for cognitive operations.
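The interface pattern between the two frameworks can be sketched in a few lines: a salience vector (below a synthetic stand-in for the output of a deep network's feature extraction) drives a DFT selection field in which local excitation and global inhibition let a single peak win. All names and parameter values are illustrative assumptions:

```python
import numpy as np

def select_peak(salience, x, steps=400, tau=10.0, h=-3.0, dt=1.0):
    """1-D DFT selection field: local excitation plus global inhibition
    drives the field toward a single peak over the strongest input."""
    dx = x[1] - x[0]
    u = np.full(x.size, h)
    w_exc = 2.0 * np.exp(-x**2 / 2.0)    # local excitatory kernel
    c_inh = 2.0                          # global inhibition strength
    for _ in range(steps):
        f = 1.0 / (1.0 + np.exp(-u))     # sigmoidal output
        lateral = np.convolve(f, w_exc, mode="same") * dx - c_inh * f.sum() * dx
        u += (dt / tau) * (-u + h + salience + lateral)
    return x[np.argmax(u)]               # location the field settles on

# stand-in for a DNN feature map: two candidate locations, one more salient
x = np.linspace(-10.0, 10.0, 101)
salience = 6.0 * np.exp(-(x + 4.0)**2 / 2.0) + 3.0 * np.exp(-(x - 4.0)**2 / 2.0)
```

In a hybrid model like ROBOVERINE, the salience vector would come from the deep network's feature maps rather than being synthesized; the field dynamics then perform the cognitive operation of attentional selection.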

The ability to generate structured action sequences in response to goals is fundamental for autonomous agents. In embodied robotic systems, this requires internal states—such as intentions and object bindings—that persist, adapt to sensory input, and transition in context-sensitive ways. Intentions must remain stable yet flexible; object identities must be preserved across roles; and sequences must unfold through competitive neural dynamics. These functions must arise from scalable neural processes.
Exercises: Juniper GitHub
Lecture slides: DFT Tutorial Part 1
Lecture slides: DFT Tutorial Part 2
Lecture slides: DFT Tutorial Part 3
Lecture slides: Flash Talk - Intentional Agent
Lecture slides: Flash Talk - Nested Phrases
Lecture slides: Flash Talk - Analogy
Lecture slides: Flash Talk - Robotic Active Vision
Lecture slides: Flash Talk - Action Grammar