Infant-directed speech has been widely characterized by its special acoustic properties, which may garner infants’ attention to the speech stream and facilitate language learning by highlighting structure. Given the multimodal nature of infant-directed communication, a focus on speech alone sidesteps the richness that the combination of speech with other modalities may provide to infants. Exploring the features of this rich multimodal signal may help explain the ease with which infants learn their language.
One skill that infants need to acquire is the ability to segment the speech stream into linguistic units that can ultimately be mapped to meanings. This task is challenging since the speech signal does not provide a single reliable cue to word edges. However, multimodal infant-directed communication may aid the infant by providing redundant and/or exaggerated cues marking linguistic units in the input. For example, in previous work we showed that when touch and speech are consistently aligned, infants are able to exploit this cross-modal frequency information to aid their segmentation of the speech stream. In this talk we explore whether such useful information is provided in multimodal infant-directed communication.
In a sample of 24 mother-infant dyads recorded when infants were 5 months old, we explored how tactile cues were used with infant-directed speech. Dyads were audio and video recorded while they engaged in book-reading for 6 minutes, with no specific instruction regarding the nature of the interaction. Recordings were then micro-coded in ELAN and Praat for touch and speech behavior, and events were extracted and analyzed. Analyses revealed that caregivers produced target nouns with a higher average pitch when those nouns were accompanied by a concomitant touch than when they were produced without touch (words with touch: M = 7.518, SD = 0.552; words without touch: M = 6.989, SD = 0.499; Wilcoxon signed-rank test, V = 18, p = .004).
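The paired comparison reported above can be illustrated with a minimal sketch of the Wilcoxon signed-rank procedure applied to per-dyad mean pitch values. The pitch numbers below are made-up placeholders, not the study's data, and the function is a simplified implementation (it drops zero differences and returns both rank sums; by convention the test statistic V is typically the smaller of the two).

```python
def wilcoxon_signed_rank(with_touch, without_touch):
    """Paired Wilcoxon signed-rank sums.

    Returns (v_pos, v_neg): the sums of ranks of positive and negative
    paired differences. The conventional test statistic V is min(v_pos, v_neg).
    Zero differences are discarded; tied absolute differences get average ranks.
    """
    diffs = [a - b for a, b in zip(with_touch, without_touch) if a != b]
    # Rank the absolute differences, assigning average ranks to ties.
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg_rank = (i + j) / 2 + 1  # average of 1-based ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    v_pos = sum(r for d, r in zip(diffs, ranks) if d > 0)
    v_neg = sum(r for d, r in zip(diffs, ranks) if d < 0)
    return v_pos, v_neg

# Hypothetical per-dyad mean pitch (e.g., in some log/ERB-like scale):
pitch_with_touch = [8.0, 7.5, 7.75, 7.0]
pitch_without_touch = [7.0, 7.25, 7.0, 7.25]
v_pos, v_neg = wilcoxon_signed_rank(pitch_with_touch, pitch_without_touch)
```

In practice an off-the-shelf routine such as `scipy.stats.wilcoxon` would be used; this sketch only makes the ranking logic behind the reported V statistic explicit.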
In a new sample of free-play interactions gathered from 17 mother-infant dyads when infants were 16 to 18 months old, we are examining whether multimodal infant-directed communication, and specifically the combination of touch cues with infant-directed speech, results in similarly exaggerated events. These dyads were audio and video recorded while they engaged in free play with toys for 6 minutes, with no specific instruction regarding the nature of infant-directed communication. Recordings were also micro-coded in ELAN and Praat for touch and speech behavior. Ongoing analyses explore whether the multimodal production of nouns within a play context includes exaggerated acoustic features, as previously found in the book-reading task with younger infants.
These analyses can shed light on features of infant-directed communication that go beyond the speech signal and can potentially provide a more accurate picture of the nature of acoustic modification in infant-directed speech.