The present paper briefly describes and contrasts two different motivations crucially involved in decision making and cooperation, namely fairness-based and compassion-based motivation. Whereas both can lead to cooperation in comparable social situations, we suggest that they are driven by fundamentally different mechanisms and, overall, predict different behavioral outcomes. First, we provide a brief definition of each and discuss the relevant behavioral and neuroscientific literature with regard to cooperation in the context of economic games. We suggest that, whereas both fairness- and compassion-based motivation can support cooperation, fairness-based motivation leads to punishment in cases of norm violation, while compassion-based motivation can, in cases of defection, counteract a desire for revenge and buffer the decline into iterative noncooperation. However, those with compassion-based motivation alone risk being exploited. Finally, we argue that the affective states underlying fairness-based and compassion-based motivation are fundamentally different, the former driven by anger or fear of being punished and the latter by a wish for the other person's well-being.
Functional neuroimaging investigations in the fields of social neuroscience and neuroeconomics indicate that the anterior insular cortex (AI) is consistently involved in empathy, compassion, and interpersonal phenomena such as fairness and cooperation. These findings suggest that AI plays an important role in social emotions, here defined as affective states that arise when we interact with other people and that depend on the social context. After linking the role of AI in social emotions to interoceptive awareness and the representation of current global emotional states, we present a model suggesting that AI is involved not only in representing current states, but also in predicting emotional states relevant to the self and others. This model further proposes that AI enables us to learn about emotional states as well as about the uncertainty attached to events, and implies that AI plays a dominant role in decision making in complex and uncertain environments. Our review also highlights that the dorsal and ventro-central, as well as the anterior and posterior, subdivisions of AI potentially subserve different functions and guide different aspects of behavioral regulation. We conclude with a section summarizing different routes to understanding other people's actions, feelings, and thoughts, emphasizing the notion that the predominant role of AI lies in understanding others' feelings and bodily states rather than their action intentions or abstract beliefs.