People believe they see emotion written on the faces of other people. In an instant, simple facial actions are transformed into information about another's emotional state. The present research examined whether a perceiver unknowingly contributes to emotion perception with emotion word knowledge. We present 2 studies that together support a role for emotion concepts in the formation of visual percepts of emotion. As predicted, we found that perceptual priming of emotional faces (e.g., a scowling face) was disrupted when the accessibility of a relevant emotion word (e.g., anger) was temporarily reduced, demonstrating that the exact same face was encoded differently when a word was accessible versus when it was not. The implications of these findings for a linguistically relative view of emotion perception are discussed.
Four theories of the human conceptual system—semantic memory, exemplar models, feed-forward connectionist nets, and situated simulation theory—are characterised and contrasted on five dimensions: (1) architecture (modular vs. non-modular), (2) representation (amodal vs. modal), (3) abstraction (decontextualised vs. situated), (4) stability (stable vs. dynamical), and (5) organisation (taxonomic vs. action–environment interface). Empirical evidence is then reviewed for the situated simulation theory, and the following conclusions are reached. Because the conceptual system shares mechanisms with perception and action, it is non-modular. As a result, conceptual representations are multi-modal simulations distributed across modality-specific systems. A given simulation for a concept is situated, preparing an agent for situated action with a particular instance, in a particular setting. Because a concept delivers diverse simulations that prepare agents for action in many different situations, it is dynamical. Because the conceptual system's primary purpose is to support situated action, it becomes organised around the action–environment interface.