Individual Submission Summary

Poster #8 - How relational language promotes relational representation: The role of visual attention

Sat, March 23, 12:45 to 2:00pm, Baltimore Convention Center, Floor: Level 1, Exhibit Hall B

Integrative Statement

Language has been shown to support relational encoding in young children (e.g., Loewenstein & Gentner, 2005; Dessalegn & Landau, 2008, 2013). For example, 4-year-olds shown a square that is half red and half blue quickly forget which side each color was on. However, if children are provided with language that labels the relation between the colors (e.g., “The red is on top of the blue”), they form more stable representations. The current study tested the role of visual attention as a potential mechanism for this effect.
Forty-two children (4.0–5.0 years) were tested. Each trial included a six-second encoding phase, followed by a one-second delay, then an unlimited test phase (see Figure 1). During encoding, participants saw a square that was split vertically or horizontally (e.g., a square that is red on the top and blue on the bottom). In the Relational Language block, children heard the image described in terms of the relation between the two colors (e.g., “The red is on top of the blue”), whereas in the Control Language block children heard language that simply directed them to pay attention (“Look carefully at this one”). The order of the language blocks was counterbalanced: half of the children heard relational language followed by control language, and half heard the opposite. In the test phase, children were shown two images in silence, one identical to the target from encoding and one its mirror image. Children were asked to select the image that matched the target. Accuracy and eye gaze were recorded.
Behavioral results from block 1 replicate prior work: children were more likely to select the correct item at test when they heard relational language during encoding (M = .77, SD = .15) than when they heard control language (M = .60, SD = .14), t(40) = 3.72, p < .001 (see Figure 1a). To examine the role of looking patterns, we analyzed all trials that included at least 1,000 ms of looking to the target image. Although children showed a general bias to look at the top and left of the target images, children who heard relational language adapted their looking patterns to the image shown: they were less likely to fixate on the top or left when they heard that the red was on the bottom or the right, t(17) = 3.74, p = .001. Children who heard control language did not adapt their looking patterns, t(18) = 1.54, p = .14 (see Figure 2b). Finally, we found that hearing relational language in block 1 led to sustained performance in block 2. Whereas children who heard control language in block 1 improved significantly when they received relational language in block 2, t(19) = 3.29, p < .01, children who heard relational language in block 1 maintained their performance in the control block, t(18) = 1.14, p = .17.
Together, these results indicate that hearing relational language during a relational encoding task helps children bind color and location information in ways that may have immediate and sustained effects. Furthermore, relational language helps children focus their attention in systematic ways that support successful relational encoding.
