WORDS CAN SHIFT

Humans convey their intentions through both verbal and nonverbal behaviors during face-to-face communication. A speaker's intention often varies dynamically with the nonverbal context, such as vocal patterns and facial expressions. For example, with the same sentence, “The movie is sick!”, the speaker can convey different sentiments by showing different facial expressions or vocal intonations. Although the speaker uses the same adjective “sick” to describe the movie, they could be very excited about it or find it disappointing, signaled by opposing nonverbal behaviors. As a result, when modeling human language, it is essential to consider not only the literal meaning of the words but also the nonverbal contexts in which these words appear.

To better model the meaning of words and sentences across nonverbal contexts, we capture these dynamics by treating the nonverbal signals as a shift applied to the verbal representation. Because visual and acoustic behaviors typically occur at a much higher temporal frequency than words, each uttered word is accompanied by a sequence of visual and acoustic “subword” units, and we also model the structure of these nonverbal behaviors within each word span. To this end, we propose the Recurrent Attended Variation Embedding Network (RAVEN), which models the fine-grained structure of nonverbal “subword” sequences and dynamically shifts word representations based on nonverbal cues. The proposed model achieves competitive performance on two benchmark datasets for multimodal sentiment analysis and emotion recognition. We also visualize the shifted word representations in different nonverbal contexts and summarize common patterns in how word representations vary across modalities.
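As a rough illustration of this idea, the PyTorch-style sketch below encodes the visual and acoustic “subword” sequences that span a single word, gates each modality against the word embedding, and adds the resulting shift vector to that embedding. The module name, feature dimensions, gating scheme, and scaling factor here are illustrative assumptions for the sketch, not the exact RAVEN architecture.

```python
import torch
import torch.nn as nn


class NonverbalShift(nn.Module):
    """Minimal sketch: shift a word embedding using the visual and acoustic
    "subword" sequences accompanying that word. Dimensions and gating are
    assumptions for illustration, not the published RAVEN implementation."""

    def __init__(self, word_dim=300, visual_dim=47, acoustic_dim=74, hidden_dim=64):
        super().__init__()
        # Encode the subword-level nonverbal sequences within one word span.
        self.visual_lstm = nn.LSTM(visual_dim, hidden_dim, batch_first=True)
        self.acoustic_lstm = nn.LSTM(acoustic_dim, hidden_dim, batch_first=True)
        # Gates conditioned on the word embedding and each nonverbal summary.
        self.visual_gate = nn.Linear(word_dim + hidden_dim, 1)
        self.acoustic_gate = nn.Linear(word_dim + hidden_dim, 1)
        # Project the gated nonverbal summaries into the word-embedding space.
        self.shift_proj = nn.Linear(2 * hidden_dim, word_dim)
        self.beta = nn.Parameter(torch.tensor(0.5))  # learnable scale of the shift

    def forward(self, word_emb, visual_seq, acoustic_seq):
        # word_emb: (batch, word_dim)
        # visual_seq: (batch, T_v, visual_dim); acoustic_seq: (batch, T_a, acoustic_dim)
        _, (h_v, _) = self.visual_lstm(visual_seq)
        _, (h_a, _) = self.acoustic_lstm(acoustic_seq)
        h_v, h_a = h_v[-1], h_a[-1]  # final hidden states: (batch, hidden_dim)

        # Attention-style gates decide how strongly each modality shifts this word.
        w_v = torch.sigmoid(self.visual_gate(torch.cat([word_emb, h_v], dim=-1)))
        w_a = torch.sigmoid(self.acoustic_gate(torch.cat([word_emb, h_a], dim=-1)))

        # Nonverbal shift vector added on top of the original word embedding.
        shift = self.shift_proj(torch.cat([w_v * h_v, w_a * h_a], dim=-1))
        return word_emb + self.beta * shift


# Example: two words, each accompanied by 20 visual and 40 acoustic frames.
model = NonverbalShift()
shifted = model(torch.randn(2, 300), torch.randn(2, 20, 47), torch.randn(2, 40, 74))
print(shifted.shape)  # torch.Size([2, 300])
```

The shifted embeddings can then be fed to any downstream sequence model (e.g., an LSTM over the utterance) for sentiment or emotion prediction.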

Ying Shen
Computer Science Ph.D. Student