In three experiments, participants received nouns or noun phrases for objects and verbally generated their properties ("feature listing"). Several sources of evidence indicated that participants constructed perceptual simulations to generate properties for the noun phrases during conceptual combination. First, the production of object properties for noun phrases depended on occlusion, with unoccluded properties being generated more often than occluded properties. Because a perceptual variable affected conceptual combination, perceptual simulations appeared central to combining the concepts for modifiers and head nouns. Second, neutral participants produced the same distributions of properties as participants instructed to describe images, suggesting that the conceptual representations used by neutral participants were similar to the mental images used by imagery participants. Furthermore, the property distributions for neutral and imagery participants differed from those for participants instructed to produce word associations. Third, participants produced large amounts of information about background situations associated with the object cues, suggesting that the simulations used to generate properties were situated. The experiments ruled out alternative explanations that simulation effects occur only for familiar noun phrases associated with perceptual memories and that rules associated with modifiers produce occlusion effects. A process model of the property generation task grounded in simulation mechanisms is presented. The possibility of integrating the simulation account of conceptual combination with traditional accounts and well-established findings is explored.
Theories of knowledge such as feature lists, semantic networks, and localist neural nets typically use a single global symbol to represent a property that occurs in multiple concepts. Thus, a global symbol represents mane across HORSE, PONY, and LION. Alternatively, perceptual theories of knowledge, as well as distributed representational systems, assume that properties take different local forms in different concepts. Thus, different local forms of mane exist for HORSE, PONY, and LION, each capturing the specific form that mane takes in its respective concept. Three experiments used the property verification task to assess whether properties are represented globally or locally (e.g., Does a PONY have a mane?). If a single global form represents a property, then verifying it in any concept should increase its accessibility and speed its verification later in any other concept. Verifying mane for PONY should benefit as much from having verified mane for LION earlier as from verifying mane for HORSE. If properties are represented locally, however, verifying a property should only benefit from verifying a similar form earlier. Verifying mane for PONY should only benefit from verifying mane for HORSE, not from verifying mane for LION. Findings from three experiments strongly supported local property representation and ruled out the interpretation that object similarity was responsible (e.g., the greater overall similarity between HORSE and PONY than between LION and PONY). The findings further suggest that property representation and verification are complicated phenomena, grounded in sensory-motor simulations.
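The contrasting priming predictions above can be made concrete with a toy sketch. This is not a model from the paper: the reaction-time baseline, priming benefit, and the idea of a "similar-form" pairing (HORSE's mane resembling PONY's) are illustrative assumptions used only to show how the global and local accounts diverge on the same prime-target pairs.

```python
# Hypothetical parameters (not from the paper): a baseline verification
# time and a fixed speed-up conferred by an earlier, effective prime.
BASELINE_RT = 1000     # ms, assumed baseline verification time
PRIMING_BENEFIT = 150  # ms, assumed speed-up from prior verification

def verify_rt_global(primed, concept, prop):
    """Global account: one symbol per property, so verifying the
    property earlier in ANY concept primes it for any other concept."""
    was_primed = any(p == prop for _, p in primed)
    return BASELINE_RT - (PRIMING_BENEFIT if was_primed else 0)

def verify_rt_local(primed, concept, prop, similar_forms):
    """Local account: concept-specific forms, so only a prior
    verification of a SIMILAR local form produces a benefit."""
    was_primed = any(p == prop and (c, concept) in similar_forms
                     for c, p in primed)
    return BASELINE_RT - (PRIMING_BENEFIT if was_primed else 0)

# Assumed similarity structure: HORSE's mane is a similar local form
# to PONY's mane; LION's mane is not.
similar = {("HORSE", "PONY")}

# Global account: LION-mane primes PONY-mane as well as HORSE-mane does.
print(verify_rt_global({("LION", "mane")}, "PONY", "mane"))   # 850
print(verify_rt_global({("HORSE", "mane")}, "PONY", "mane"))  # 850

# Local account: only HORSE-mane primes PONY-mane.
print(verify_rt_local({("LION", "mane")}, "PONY", "mane", similar))   # 1000
print(verify_rt_local({("HORSE", "mane")}, "PONY", "mane", similar))  # 850
```

The reported data pattern, faster verification only after a similar form, matches the local account's output here: a benefit from HORSE-mane but none from LION-mane.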