Grounded cognition rejects traditional views that cognition is computation on amodal symbols in a modular system, independent of the brain's modal systems for perception, action, and introspection. Instead, grounded cognition proposes that modal simulations, bodily states, and situated action underlie cognition. Accumulating behavioral and neural evidence supporting this view is reviewed from research on perception, memory, knowledge, language, thought, social cognition, and development. Theories of grounded cognition are also reviewed, as are origins of the area and common misperceptions of it. Theoretical, empirical, and methodological issues are raised whose future treatment is likely to affect the growth and impact of grounded cognition.
We present a new sparse shape modeling framework based on the Laplace-Beltrami (LB) eigenfunctions. Traditionally, the LB eigenfunctions are used as a basis for intrinsically representing surface shapes by forming a Fourier series expansion. To reduce high-frequency noise, only the first few terms are used in the expansion and the higher-frequency terms are simply discarded. However, some lower-frequency terms may not contribute significantly to reconstructing the surfaces. Motivated by this observation, we propose to retain only the significant eigenfunctions by imposing an l1-penalty. The new sparse framework can further avoid the additional surface-based smoothing often used in the field. The proposed approach is applied to investigating the influence of age (38-79 years) and gender on amygdala and hippocampus shapes in the normal population. In addition, we show how emotional response is related to the anatomy of these subcortical structures.
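The l1-penalized coefficient selection the abstract describes can be sketched as a LASSO fit of basis coefficients, solved here with iterative soft-thresholding (ISTA). This is a hypothetical illustration, not the paper's implementation: the basis `Psi` below is a stand-in cosine basis on a 1-D grid, whereas the paper uses LB eigenfunctions computed on the actual surface mesh; `sparse_fit`, `lam`, and the toy data are all assumptions for the sketch.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the l1 norm: shrink each entry toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sparse_fit(Psi, y, lam=2.0, n_iter=500):
    """Minimize 0.5*||y - Psi w||^2 + lam*||w||_1 via ISTA.

    Small coefficients are driven exactly to zero, so insignificant
    basis terms (at any frequency) drop out of the expansion.
    """
    step = 1.0 / np.linalg.norm(Psi, 2) ** 2  # 1 / Lipschitz constant of the gradient
    w = np.zeros(Psi.shape[1])
    for _ in range(n_iter):
        grad = Psi.T @ (Psi @ w - y)          # gradient of the squared-error term
        w = soft_threshold(w - step * grad, step * lam)
    return w

# Toy example: a "shape signal" built from two basis terms plus noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
Psi = np.cos(np.outer(t, np.arange(20)) * np.pi)  # stand-in for LB eigenfunctions
w_true = np.zeros(20)
w_true[[1, 7]] = [2.0, -1.5]                      # only two significant terms
y = Psi @ w_true + 0.05 * rng.standard_normal(200)

w_hat = sparse_fit(Psi, y)
print(np.flatnonzero(np.abs(w_hat) > 0.3))        # indices of retained terms
```

The key behavior is that the l1-penalty performs selection rather than truncation: a low-index term with a negligible coefficient is removed just as readily as a high-frequency one, which is the motivation stated in the abstract.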
Previous research has shown that naïve participants display a high level of agreement when asked to choose or draw schematic representations, or image schemas, of concrete and abstract verbs [Proceedings of the 23rd Annual Meeting of the Cognitive Science Society, 2001, Erlbaum, Mahwah, NJ, p. 873]. For example, participants tended to ascribe a horizontal image schema to push, and a vertical image schema to respect. This consistency in offline data is preliminary evidence that language invokes spatial forms of representation. It also provided norms that were used in the present research to investigate the activation of spatial image schemas during online language comprehension. We predicted that if comprehending a verb activates a spatial representation that is extended along a particular horizontal or vertical axis, it will affect other forms of spatial processing along that axis. Participants listened to short sentences while engaged in a visual discrimination task (Experiment 1) and a picture memory task (Experiment 2). In both cases, reaction times showed an interaction between the horizontal/vertical nature of the verb's image schema and the horizontal/vertical position of the visual stimuli. We argue that such spatial effects of verb comprehension provide evidence for the perceptual–motor character of linguistic representations.