How does language reliably evoke emotion, as it does when people read a favorite novel or listen to a skilled orator? Recent evidence suggests that comprehension involves a mental simulation of sentence content that calls on the same neural systems used in literal action, perception, and emotion. In this study, we demonstrated that involuntary facial expression plays a causal role in the processing of emotional language. Subcutaneous injections of botulinum toxin-A (BTX) were used to temporarily paralyze the facial muscle used in frowning. We found that BTX selectively slowed the reading of sentences that described situations that normally require the paralyzed muscle for expressing the emotions evoked by the sentences. This finding demonstrates that peripheral feedback plays a role in language processing, supports facial-feedback theories of emotional cognition, and raises questions about the effects of BTX on cognition and emotional reactivity. We account for the role of facial feedback in language processing by considering neurophysiological mechanisms and reinforcement-learning theory.
This study was designed to test the hypothesis that Japanese subjects exhibit different patterns of resting EEG asymmetry compared with Westerners. EEG was recorded from the left and right temporal and parietal scalp regions in bilingual Japanese and Western subjects during eyes-open and eyes-closed rest periods before and after the performance of a series of cognitive tasks. Alpha activity was integrated and digitized. Japanese subjects were found to exhibit greater relative right-sided parietal activation during the eyes-closed condition. This difference was found to be a function of greater left hemisphere activation among the Westerners. Various possible contributors to these cross-cultural differences are discussed.
For decades the importance of background situations has been documented across all areas of cognition. Nevertheless, theories of concepts generally ignore background situations, focusing largely on bottom-up, stimulus-based processing. Furthermore, empirical research on concepts typically ignores background situations, not incorporating them into experimental designs. A selective review of relevant literatures demonstrates that concepts are not abstracted out of situations but instead are situated. Background situations constrain conceptual processing in many tasks (e.g., recall, recognition, categorization, lexical decision, color naming, property verification, property generation) across many areas of cognition (e.g., episodic memory, conceptual processing, visual object recognition, language comprehension). A taxonomy of situations is proposed in which grain size, meaningfulness, and tangibility distinguish the cumulative situations that structure cognition hierarchically.
Previous research has shown that naïve participants display a high level of agreement when asked to choose or draw schematic representations, or image schemas, of concrete and abstract verbs [Proceedings of the 23rd Annual Meeting of the Cognitive Science Society, 2001, Erlbaum, Mahwah, NJ, p. 873]. For example, participants tended to ascribe a horizontal image schema to push, and a vertical image schema to respect. This consistency in offline data is preliminary evidence that language invokes spatial forms of representation. It also provided norms that were used in the present research to investigate the activation of spatial image schemas during online language comprehension. We predicted that if comprehending a verb activates a spatial representation that is extended along a particular horizontal or vertical axis, it will affect other forms of spatial processing along that axis. Participants listened to short sentences while engaged in a visual discrimination task (Experiment 1) and a picture memory task (Experiment 2). In both cases, reaction times showed an interaction between the horizontal/vertical nature of the verb's image schema and the horizontal/vertical position of the visual stimuli. We argue that such spatial effects of verb comprehension provide evidence for the perceptual–motor character of linguistic representations.