
New paper generalizes DFT model WOLVES to novelty preference and mutual exclusivity tasks.

A new paper by Bhat, Samuelson & Spencer, published in Child Development, addresses two important questions: 1) can a prior theory, developed to explain how children map new words to new objects in ambiguous naming situations, generalize to new data; and 2) how does the processing of words and objects interact during word learning? Bhat et al. (2023) addressed these questions by simulating data from two published studies with our prior Dynamic Field Theory model WOLVES (Word-Object Learning via Visual Exploration in Space; Bhat, Spencer & Samuelson, 2022).

In the first of these studies, Mather et al. (2011) found that when a word is presented in the context of two objects, one of which is the same on every trial while the other is new on every trial, infants take longer to switch from looking at the ‘same’ object to looking at the ‘new’ one than when no word is presented. Mather et al. argued that this is because visual and auditory processes draw on the same set of attentional resources. The WOLVES simulations show, however, that this is only part of the story: Bhat et al. (2023) found that, in addition, growing memories for the objects and words compete for attention with the new objects, causing a reduction in looking to the familiar object.

In the second study, Mather and Plunkett (2012) examined the role of object novelty in ‘mutual exclusivity’ (children’s tendency to map new words to objects for which they do not already know a name) by presenting children with both a name-unknown object and a completely novel object in a mapping task. They found that children mapped new words to the completely novel object and concluded that mutual exclusivity was based on novelty. Bhat et al.'s simulations of this task supported the role of novelty in mutual exclusivity, but also showed a role for the strength of the infant’s knowledge of the known words and objects. Thus, novelty is not an isolated, sufficient process for word-object mapping, but part of a more complex system.

This work shows the usefulness of formal theories in which multiple complex processes, and their interactions, can be specified, revealing how those processes combine to create behaviour. It also highlights the importance of capturing change over time in models, a central focus of DFT, for understanding how growing representations shape behaviour. Development and learning are all about time. Because WOLVES processes stimuli and builds knowledge moment-by-moment, just as infants and children do, it can capture details of these processes that other models cannot.
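For readers curious what ‘moment-by-moment’ means in practice, the sketch below shows the general form of a dynamic neural field of the kind DFT models are built from: activation at each field site is continuously updated from input, a resting level, and lateral excitation and inhibition until a stable peak forms. It is a generic illustration only, not the WOLVES implementation; the field size, kernel, and parameter values are assumptions chosen purely for demonstration.

```python
import numpy as np

# Minimal, illustrative dynamic neural field of the kind DFT models use.
# NOT the WOLVES model: all parameters here are assumed values chosen only
# to show a self-stabilizing activation peak forming over time.

n = 181                                   # field positions (e.g. a feature dimension)
x = np.arange(n)
tau, h, dt = 20.0, -5.0, 1.0              # time constant, resting level, time step

# Lateral interaction kernel: narrow local excitation plus broad inhibition
dist = np.abs(x[:, None] - x[None, :])
kernel = 6.0 * np.exp(-0.5 * (dist / 5.0) ** 2) - 1.0

u = np.full(n, h)                         # field starts at its resting level
stimulus = 8.0 * np.exp(-0.5 * ((x - 90) / 5.0) ** 2)   # localized input, e.g. an attended object

for step in range(500):                   # integrate the field moment-by-moment
    f_u = 1.0 / (1.0 + np.exp(-4.0 * u))  # sigmoidal output of the field
    interaction = kernel @ f_u / n        # normalized lateral excitation/inhibition
    u += dt / tau * (-u + h + stimulus + interaction)

print(f"peak activation after 500 steps: {u.max():.2f}")  # a stable peak marks an attentional/decision state
```

In a full model such as WOLVES, many coupled fields of this general kind interact and leave memory traces that build across trials, which is how the growing memories described above can come to compete for attention.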