Recent neuroimaging and neuropsychological work has begun to shed light on how the brain responds to the viewing of facial expressions of emotion. However, one important category of facial expression that has not been studied at this level is the facial expression of pain. We investigated the neural response to pain expressions by performing functional magnetic resonance imaging (fMRI) as subjects viewed short video sequences showing faces expressing either moderate pain or, for comparison, no pain. In alternate blocks, the same subjects received both painful and non-painful thermal stimulation. Facial expressions of pain were found to engage cortical areas also engaged by the first-hand experience of pain, including anterior cingulate cortex and insula. These findings corroborate other work that has examined the neural response to witnessed pain from different perspectives. In addition, they lend support to the idea that common neural substrates are involved in representing one's own and others' affective states.
Four experiments testing right-handed adult males examined interhemispheric transfer time (IHTT) estimation with visual evoked potentials (EPs) elicited in response to hemiretinal presentations of checkerboard-flash stimuli. Experiment 1 was a study of the relation between reaction time (RT) and EP measures of IHTT. EP measures provided more valid estimates than RT measures because more subjects showed IHTT in the direction of anatomical prediction. Experiment 2 showed that EPs derived from lateral occipital sites provided more valid and longer estimates of IHTT compared with EPs from medial occipital sites. Experiment 3 showed no difference between random and blocked presentation of hemiretinal stimuli. Experiment 4 showed that IHTT derived with a linked-ears reference provided more valid estimates than IHTT derived with a mid-frontal reference and that small changes in stimulus eccentricity did not influence IHTT. The findings of these experiments indicate that noninvasive estimates of visual IHTT can be obtained in humans.
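The EP-based estimation logic can be sketched as follows. This is a minimal illustration, not the authors' analysis pipeline: the latency values, electrode labels, and function names are hypothetical. The core idea is that a stimulus confined to one visual hemifield drives the contralateral occipital cortex directly, while the ipsilateral hemisphere responds only after callosal transfer, so the ipsilateral-minus-contralateral EP latency difference estimates IHTT.

```python
# Minimal sketch of EP-based IHTT estimation (all latencies in ms are
# hypothetical illustrative values, not data from the experiments).

def estimate_ihtt(contralateral_ms, ipsilateral_ms):
    """IHTT estimate for one hemifield: ipsilateral EP peak latency
    (indirect, callosal route) minus contralateral latency (direct route)."""
    return ipsilateral_ms - contralateral_ms

# Hypothetical EP peak latencies at lateral occipital electrodes:
left_field = {"contra": 102.0, "ipsi": 114.0}   # left hemifield -> right hemisphere leads
right_field = {"contra": 104.0, "ipsi": 112.0}  # right hemifield -> left hemisphere leads

ihtt_left = estimate_ihtt(left_field["contra"], left_field["ipsi"])
ihtt_right = estimate_ihtt(right_field["contra"], right_field["ipsi"])
mean_ihtt = (ihtt_left + ihtt_right) / 2  # average over both stimulation sides

print(ihtt_left, ihtt_right, mean_ihtt)
```

Averaging across both hemifields, as above, is a common way to cancel hemisphere-specific latency offsets; a positive mean in the anatomically predicted direction is what counts as a "valid" estimate in the experiments described.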
Studies of emotion signaling inform claims about the taxonomic structure, evolutionary origins, and physiological correlates of emotions. Emotion vocalization research has tended to focus on a limited set of emotions: anger, disgust, fear, sadness, surprise, happiness, and for the voice, also tenderness. Here, we examine how well brief vocal bursts can communicate 22 different emotions: 9 negative (Study 1) and 13 positive (Study 2), and whether prototypical vocal bursts convey emotions more reliably than heterogeneous vocal bursts (Study 3). Results show that vocal bursts communicate emotions like anger, fear, and sadness, as well as seldom-studied states like awe, compassion, interest, and embarrassment. Ancillary analyses reveal family-wise patterns of vocal burst expression. Errors in classification were more common within emotion families (e.g., 'self-conscious,' 'pro-social') than between emotion families. The three studies reported highlight the voice as a rich modality for emotion display that can inform fundamental constructs about emotion.
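The family-wise error analysis described above can be illustrated with a small sketch. The emotion labels, family assignments, and confusion counts below are hypothetical stand-ins, not the studies' data; the point is simply how misclassifications are tallied as within-family versus between-family.

```python
# Sketch: tally within- vs between-family classification errors from a
# hypothetical confusion table (rows = intended emotion, cols = judged emotion).

family = {"shame": "self-conscious", "embarrassment": "self-conscious",
          "compassion": "pro-social", "gratitude": "pro-social"}

# confusion[intended][judged] = number of listener judgments (made-up counts)
confusion = {
    "shame":         {"shame": 20, "embarrassment": 6, "compassion": 1, "gratitude": 1},
    "embarrassment": {"shame": 5,  "embarrassment": 22, "compassion": 0, "gratitude": 1},
    "compassion":    {"shame": 1,  "embarrassment": 0, "compassion": 19, "gratitude": 7},
    "gratitude":     {"shame": 0,  "embarrassment": 1, "compassion": 6,  "gratitude": 21},
}

within = between = 0
for intended, row in confusion.items():
    for judged, n in row.items():
        if judged == intended:
            continue  # correct responses are not errors
        if family[judged] == family[intended]:
            within += n   # error stayed inside the emotion family
        else:
            between += n  # error crossed family boundaries

print(within, between)
```

With these illustrative counts, within-family errors outnumber between-family errors, which is the pattern the abstract reports for families such as 'self-conscious' and 'pro-social'.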
Research on temporal-order judgments, reference frames, discrimination tasks, and links to oculomotor control suggests important differences between inhibition of return (IOR) and attentional costs and benefits. Yet, it is generally assumed that IOR is an attentional effect even though there is little supporting evidence. The authors evaluated this assumption by examining how several factors that are known to influence attentional costs and benefits affect the magnitude of IOR: target modality, target intensity, and response mode. Results similar to those previously reported for attention were observed: IOR was greater for visual than for auditory targets, showed an inverse relationship with target intensity, and was equivalent for manual and saccadic responses. Important parallels between IOR and attentional costs and benefits are indicated, suggesting that, like attention, IOR may in part affect sensory-perceptual processes.
Who benefits most from making sacrifices for others? The current study provides one answer to this question by demonstrating the intrinsic benefits of sacrifice for people who are highly motivated to respond to a specific romantic partner's needs noncontingently, a motivation termed communal strength. In a 14-day daily-experience study of 69 romantic couples, communal strength was positively associated with positive emotions during the sacrifice itself, with feeling appreciated by the partner for the sacrifice, and with feelings of relationship satisfaction on the day of the sacrifice. Furthermore, feelings of authenticity for the sacrifice mediated these associations. Several alternative hypotheses were ruled out: The effects were not due to individuals higher in communal strength making qualitatively different kinds of sacrifices, being more positive in general, or being involved in happier relationships. Implications for research and theory on communal relationships and positive emotions are discussed.
Working memory (WM) representations serve as templates that guide behavior, but the neural basis of these templates remains elusive. We tested the hypothesis that WM templates are maintained by biasing activity in sensory-perceptual neurons that code for features of items being held in memory. Neural activity was recorded using event-related potentials (ERPs) as participants viewed a series of faces and responded when a face matched a target face held in WM. Our prediction was that if activity in neurons coding for the features of the target is preferentially weighted during maintenance of the target, then ERP activity evoked by a nontarget probe face should be commensurate with the visual similarity between target and probe. Visual similarity was operationalized as the degree of overlap in visual features between target and probe. A face-sensitive ERP response was modulated by target-probe similarity. Amplitude was largest for probes that were similar to the target, and decreased monotonically as a function of decreasing target-probe similarity. These results indicate that neural activity is weighted in favor of visual features that comprise an actively held memory representation. As such, our findings support the notion that WM templates rely on neural populations involved in forming percepts of memory items.
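The predicted monotonic relationship can be made concrete with a brief sketch. The similarity bins and amplitude values below are hypothetical placeholders, not the study's measurements; the sketch only shows how one would verify that mean ERP amplitude falls as target-probe similarity decreases.

```python
# Sketch of the predicted pattern: mean amplitude of the face-sensitive ERP
# should decrease monotonically as target-probe similarity drops.
# All values are hypothetical (arbitrary units), for illustration only.

similarity_bins = ["high", "medium", "low"]  # probe similarity to the WM target
mean_amplitude = {"high": 4.2, "medium": 3.1, "low": 2.0}  # made-up group means

def is_monotonic_decrease(bins, amps):
    """True if amplitude strictly decreases across bins ordered by similarity."""
    vals = [amps[b] for b in bins]
    return all(earlier > later for earlier, later in zip(vals, vals[1:]))

print(is_monotonic_decrease(similarity_bins, mean_amplitude))
```

A strict monotonic decrease across similarity bins, as checked here, is the signature that would support feature-weighted maintenance of the WM template.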