
Wednesday, March 9, 2011

Social justice and democratic stability


One thing I find interesting about the sustained demonstrations and protests in Madison, Wisconsin is that the people on the streets do not seem to be chiefly motivated by personal material interests. Rather, the passion and staying power of the protests against Governor Walker's plans seem to derive from a sense of outrage, felt by many people in Wisconsin and throughout the country, that the Governor's effort is really an attempt to reduce people's rights -- in this case, the right of workers to join together and bargain collectively. This is a well-established right in the private sector, protected by the National Labor Relations Act, and the rationale is substantially the same in the public sector.

So when the Governor attempts to eliminate the right to collective bargaining for public employees, he offends the sense of justice of many citizens in Wisconsin and elsewhere -- whether or not they are directly affected, whether or not they themselves are members of unions. Restricting established social rights is a very serious thing -- well beyond the specific calculation of interests that people might make. It's morally offensive in the way that state efforts to roll back voting rights would be offensive.  And this moral offensiveness can be a powerful motivator of collective resistance.

So this seems to provide an intriguing clue about political mobilization more generally. To what extent is moral outrage, a perception of injustice, an important motivator of individual political engagement and activism?

When we look at the MENA rebellions, even from a great distance, it seems that concerns about social and political justice have as much prominence as more material motivations -- demands for food subsidies, demands for state-sponsored jobs programs, and so forth. Egyptians, Tunisians, and Libyans interviewed on the BBC talk more frequently about the outrage of dictatorship and arbitrary state power than they do about material demands. And really, it's hard to see how an economic interest in a particular kind of food subsidy program could be strong enough to motivate a person to stand up against fighter jets and attack helicopters.

This is an insight that James Scott expressed a generation ago in The Moral Economy of the Peasant: Rebellion and Subsistence in Southeast Asia, and E. P. Thompson before him in "The Moral Economy of the English Crowd in the Eighteenth Century" (reprinted in Customs in Common): the theory of the moral economy. In its essence, the theory holds that sustained violation of a person's moral expectations of the surrounding society is a decisive factor in collective mobilization in many historical circumstances.  Later theorists of political activism have downplayed the idea of moral outrage, preferring more material motivations based on self-interest.  But the current round of activism and protest around the globe seems to point back in the direction of these more normative motivations -- combined, of course, with material interests.  So it is worth reexamining the idea that a society that badly offends the sense of justice of segments of its population is likely to stimulate resistance.

This basic and compelling idea has an important implication for sustainable social and political stability in a political regime. If citizens react collectively to sustained injustice, then a regime that wants to rule sustainably needs to respect the demands of justice.  It is important for a social order to arrive at a rough-and-ready shared understanding about the rights and obligations that citizens have, and it is important for the state to conduct itself in ways that honor those expectations.

This sounds like a social contract theory, and it certainly has something in common with that normative theory. But its relevance here is more sociological. It is the basis of a prediction about what social and political circumstances will elicit sustained protest and resistance, and which kinds of arrangements are likely to be accepted indefinitely by the population. And it leads to a policy recommendation for any regime that wants to create a stable, ongoing polity: work hard to make sure that social arrangements and institutions treat citizens fairly, and don't gratuitously violate their deeply held convictions about their rights and about the general features of justice.

So what moral expectations do American citizens have about how society ought to work? Several things seem fairly clear. First, Americans care about equality of opportunity; we are deeply rankled by the idea that the good opportunities in society are somehow captured by an elite of any sort. Second is the idea of equal treatment of all citizens by the institutions of the state. Teenagers and persons of color bristle at being singled out for special attention by the police. Women rightly seethe at the persistence of workplace institutions that continue to treat them differently. Arab Americans rightly resent special scrutiny at airports.  We don't accept status inequality easily -- especially in our own cases.  And third, we are very sensitive about the inviolability of our rights -- our right to vote, to go where we want, to speak our minds, and to associate with whomever we want.

What Americans don't yet seem to have is a specific moral sensitivity to extreme inequalities of income and wealth. The fact of the accelerating concentration of income at the top doesn't seem to produce the moral outrage in the US that perhaps it would in France or Germany. And maybe this comes from another element of our moral economy -- the idea that inequalities are all right as long as they are fairly earned. But more information about bonuses on Wall Street and the banking industry may begin to erode that tolerance.  

It is intriguing to have widely separated examples of social mobilization going on right now.  Surely sociologists and political scientists are already interviewing leaders and followers in Egypt, Madison, and Benghazi, trying to sort out the motivations and social networks through which these movements arose and solidified.  As a theoretically informed prediction, it seems likely that moral motivations like resentment of arbitrary power, violations of strongly held rights, and persistent status inequalities will be found to have played a role.

Thursday, March 11, 2010

The moral sentiments




One of Adam Smith's contributions to the study of philosophical ethics is his book, The Theory of Moral Sentiments. It is an interesting work, one part descriptive moral psychology, one part theory of the emotions.  Here is the opening paragraph (link):
How selfish soever man may be supposed, there are evidently some principles in his nature, which interest him in the fortune of others, and render their happiness necessary to him, though he derives nothing from it except the pleasure of seeing it. Of this kind is pity or compassion, the emotion which we feel for the misery of others, when we either see it, or are made to conceive it in a very lively manner. That we often derive sorrow from the sorrow of others, is a matter of fact too obvious to require any instances to prove it; for this sentiment, like all the other original passions of human nature, is by no means confined to the virtuous and humane, though they perhaps may feel it with the most exquisite sensibility. The greatest ruffian, the most hardened violator of the laws of society, is not altogether without it.
So Smith asserts as a matter of empirical fact that there are common moral emotions and feelings -- sympathy, pity, compassion -- that underlie human social and moral behavior.  And the most basic kinds of morally motivated behavior -- altruism in particular -- are explained by the workings of these natural emotions of empathy with other human beings.  So Smith posed a fundamental question: is there an innate human moral psychology, beyond the reach of training and teaching, that accounts for our willingness to give to others and sometimes sacrifice important interests for the good of others?  Why do firemen rush into the highly dangerous environment of a large fire in order to rescue the people inside?

Now fast-forward to the post-Darwinian world; look at the human organism from the point of view of the study of primate behavior; and ask this key question: Is there an evolutionary basis for social behaviors? Are there emotions supporting cooperation that were selected for through our evolutionary history? Is a moral capacity hardwired?

Philosophers have treated this question in the past.  Allan Gibbard's Wise Choices, Apt Feelings: A Theory of Normative Judgment is a particularly good example. Here is how Gibbard describes the situation.
Consider now human beings evolving in hunting-gathering societies.  We could expect them to face an abundance of human bargaining situations, involving mutual aid, personal property, mates, territory, use of housing, and the like.  Human bargaining situations tend to be evolutionary bargaining situations.  Human goals tend toward biological fitness, toward reproduction.  The point is not, of course, that a person's sole goal is to maximize his reproduction; few if any people have that as a goal at all.  Rather, the point concerns propensities to develop goals.  Those propensities that conferred greatest fitness were selected; hence in a hunting-gathering society, people tended to want the various things it was fitness-enhancing for them to want.  Conditions of primitive human life must have required intricate coordination--both of the simple cooperative kinds involved, say, in meeting a person, and of the kind required for bargaining problems to yield mutually beneficial outcomes. Propensities well coordinated with the propensities of others would have been fitness-enhancing, and so we may view a vast array of human propensities as coordinating devices.  Our emotional propensities, I suggest, are largely the results of these selection pressures, and so are our normative capacities. (67)
One of Gibbard's key points is an analytical one. He argues against the idea of there being specific moral content, ethical principles, or moral emotions that are embodied in the central nervous system (CNS) as a result of variation and selection. Instead, he argues for there being a hardwired set of more abstract capacities that have CNS reality and selection advantage: the ability to learn a norm and to act in accordance with it.  (Richard Joyce makes a similar point: "Evolutionary psychology does not claim that observable human behavior is adaptive, but rather that it is produced by psychological mechanisms that are adaptations.  The output of an adaptation need not be adaptive" (5).)

This is the part that seems counter-intuitive from a simple Darwinian point of view. Wouldn't an organism possessing a genetically determined disposition to act contrary to its vital interests almost necessarily have less reproductive success? So shouldn't such a gene quickly lose out to a more opportunistic alternative? Gibbard considers the evolutionary arguments surrounding the topic of altruism (including Richard Dawkins' The Selfish Gene), and concludes: not necessarily.  It is possible to mount an evolutionary argument that establishes the fitness-enhancing characteristics of some specific kinds of altruistic behavior.
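Gibbard's "not necessarily" can be made concrete with a standard toy model from evolutionary game theory (my illustration, not one drawn from the text): in a repeated prisoner's dilemma, a reciprocating strategy like Tit-for-Tat earns higher expected payoffs than unconditional defection once reciprocators are sufficiently common in the population, so a disposition toward conditional helping can indeed be fitness-enhancing.

```python
# Toy model: reciprocal altruism in a repeated prisoner's dilemma.
# One-shot payoffs to the row player for (my_move, their_move); C = cooperate, D = defect.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(opp_last):
    """Cooperate on the first round, then copy the opponent's previous move."""
    return "C" if opp_last is None else opp_last

def always_defect(opp_last):
    """Defect unconditionally, regardless of history."""
    return "D"

def repeated_score(a, b, rounds=10):
    """Total payoff to strategy a over `rounds` repeated plays against b."""
    total, last_a, last_b = 0, None, None
    for _ in range(rounds):
        move_a, move_b = a(last_b), b(last_a)
        total += PAYOFF[(move_a, move_b)]
        last_a, last_b = move_a, move_b
    return total

def fitness(p, rounds=10):
    """Expected payoffs of (Tit-for-Tat, Always-Defect) when a fraction p
    of randomly paired partners plays Tit-for-Tat."""
    f_tft = (p * repeated_score(tit_for_tat, tit_for_tat, rounds)
             + (1 - p) * repeated_score(tit_for_tat, always_defect, rounds))
    f_ad = (p * repeated_score(always_defect, tit_for_tat, rounds)
            + (1 - p) * repeated_score(always_defect, always_defect, rounds))
    return f_tft, f_ad

f_tft, f_ad = fitness(0.9)
print(f_tft > f_ad)  # reciprocators out-earn defectors when they are common
```

With these payoffs and ten rounds, reciprocators expect 27.9 against defectors' 13.6 when 90 percent of partners reciprocate, while pure defection does better when reciprocators are rare -- the familiar bistability result, and one concrete sense in which an "altruistic" disposition need not lose out to the opportunistic alternative.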

So what does the current research on this topic add to what we already knew?  And, can we draw any interesting connections back to the venerable Smith?

In fact, there seems to be a new surge of interest in the topic.  A number of philosophers and psychologists are now interested in treating moral psychology as an empirical question, and in working back to the evolutionary environment in which these human capacities emerged.  (For example, Richard Joyce, The Evolution of Morality and Walter Sinnott-Armstrong, ed., Moral Psychology, Volume 1: The Evolution of Morality: Adaptations and Innateness.)  Particularly interesting is research by Michael Tomasello and his collaborators.  Tomasello is the co-director of the Max Planck Institute for Evolutionary Anthropology.  In a very interesting recent book, Why We Cooperate, he argues that human beings are hardwired for cooperation, empathy, and shared intentionality.  A great deal of his research involves experiments and observations of human children (9-24 months) and of young non-human primates.  He finds, essentially, that infants and children display a range of behaviors that seem to reveal a natural readiness for altruism, sharing, coordination, and eventually norm-following.  "I only propose that the kinds of collaborative activities in which young children today engage are the natural cradle of social norms of the cooperative variety.  This is because they contain the seeds of the two key ingredients" (89-90).  He presents a range of experimental data supporting these ideas:
  • Human infants (12-14 months) have a precultural disposition to be helpful and empathetic. 
  • Human toddlers adjust their cooperative and normative behavior in light of the behavior of others: generous to the generous, not to the ungenerous. 
  • Human infants and toddlers have a precultural disposition to absorb and enforce norms. 
  • The emotions of guilt and shame appear to be hardwired supports for conformity to norms. 
  • Infants appear to take a "we" intentional stance without learning; they are quickly able to figure out what another agent is trying to do.
  • Chimps differ from human infants in virtually all of these areas. 
Here is a particularly interesting piece of evidence that Tomasello offers in support of the idea that human evolution was shaped by selection pressures favoring social coordination: the whites of the human eye.  Almost all non-human primate species have eyes that are primarily dark, whereas human eyes feature a large and conspicuous ring of white (the sclera).  The whites of the eyes permit an observer to determine what another individual is looking at -- allowing humans to achieve a substantially greater degree of shared attention and coordination.  "My team has argued that advertising my eye direction for all to see could only have evolved in a cooperative social environment in which others were not likely to exploit it to my detriment" (76).

So does this recent work on the evolutionary basis of moral emotions have anything to do with Smith and the moral sentiments?  What the two bodies of thought have in common is the idea that there is a psychological foundation to moral behavior, cooperation, altruism, and helping.  Pure maximizing rationality doesn't get you to "helping"; there needs to be some psychological impulse to improve things for the other person.  Where evolutionary psychology differs from Smith is precisely in the nature of the explanation offered for this moral psychology: we now have a pretty good idea of how natural selection works on biological traits, and so we are in a better position than Smith was to explain why human beings possess moral sentiments.  What we cannot yet answer is how these moral sentiments are embodied in the human organism -- the nature of the mechanism at the level of the central nervous system or the cognitive system.

(It is interesting to contrast this line of argument with that of Tom Nagel in The Possibility of Altruism.  Nagel argues against the moral psychology of Hume -- very similar to that of Smith -- and argues that altruism is actually a feature of rationality.  We behave altruistically, fundamentally, because we have a rational representation of the reality of the external world and of other persons; and to recognize the reality of another person is immediately to have a reason to help the other person.  So no "motor" of moral emotion is needed in order to explain altruistic behavior.  On this approach, we don't need to postulate moral sentiments to explain moral behavior; all we need is a rich conception of practical rationality.)

  • Human infants have a pre-cultural disposition to be helpful and empathetic (12-14 months) 
  • Human toddlers adjust their cooperative and normative behavior to be more attentive to the behavior of others: generous to the generous and not to the ungenerous. 
  • Human infants and toddlers have a precultural disposition to absorb and enforce norms. 
  • The emotions of guilt and shame to be hardwired to conformance to norms. 
  • Infants appear to take a "we" intentional stance without learning.  They are able to quickly figure out what another agent is trying to do.
  • Chimps differ from human infants in virtually each of these areas. 
Here is a particularly interesting piece of evidence that Tomasello offers in support of the idea that human evolution was shaped by selection pressures that favored social coordination: the whites of the eyes in the human being.  Almost all non-human species have eyes that are primarily dark; whereas human eyes feature a large and conspicuous circle of white (the sclera).  The whites of the eyes permit an observer to determine what another individual is looking at -- allowing human individuals to achieve a substantially greater degree of shared attention and coordination.  "My team has argued that advertising my eye direction for all to see could only have evolved in a cooperative social environment in which others were not likely to exploit it to my detriment" (76).

So does this recent work on the evolutionary basis of moral emotions have anything to do with Smith and the moral sentiments?  What the two bodies of thought have in common is the idea that there is a psychological foundation to moral behavior, cooperation, altruism, and helping.  Pure maximizing rationality doesn't get you to "helping"; rather, there needs to be some psychological impulse to improve things for the other person.  Where evolutionary psychology differs from Smith is precisely in the nature of the explanation that is offered for this moral psychology; we have the advantage of having a pretty good idea of how natural selection works on biological traits, and we are therefore in a better position than Smith was to explain why human beings possess moral sentiments.  What we cannot yet answer is the question of the nature of the mechanism at the level of the central nervous system or the cognitive system, of how these moral sentiments are embodied in the human organism.

(It is interesting to contrast this line of argument with that of Tom Nagel in The Possibility of Altruism.  Nagel argues against the moral psychology of Hume -- very similar to that of Smith -- and argues that altruism is actually a feature of rationality.  We behave altruistically, fundamentally, because we have a rational representation of the reality of the external world and of other persons; and to recognize the reality of another person is immediately to have a reason to help the other person.  So no "motor" of moral emotion is needed in order to explain altruistic behavior.  On this approach, we don't need to postulate moral sentiments to explain moral behavior; all we need is a rich conception of practical rationality.)

The moral sentiments




One of Adam Smith's contributions to the study of philosophical ethics is his book, The Theory of Moral Sentiments. It is an interesting work, one part descriptive moral psychology, one part theory of the emotions.  Here is the opening paragraph (link):
How selfish soever man may be supposed, there are evidently some principles in his nature, which interest him in the fortune of others, and render their happiness necessary to him, though he derives nothing from it except the pleasure of seeing it. Of this kind is pity or compassion, the emotion which we feel for the misery of others, when we either see it, or are made to conceive it in a very lively manner. That we often derive sorrow from the sorrow of others, is a matter of fact too obvious to require any instances to prove it; for this sentiment, like all the other original passions of human nature, is by no means confined to the virtuous and humane, though they perhaps may feel it with the most exquisite sensibility. The greatest ruffian, the most hardened violator of the laws of society, is not altogether without it.
Smith asserts as a matter of empirical fact that there are common moral emotions and feelings -- sympathy, pity, compassion -- that underlie human social and moral behavior.  The most basic kinds of morally motivated behavior -- altruism in particular -- are explained by the workings of these natural emotions of empathy with other human beings.  Smith thus posed a fundamental question: is there an innate human moral psychology, beyond the reach of training and teaching, that accounts for our willingness to give to others and sometimes to sacrifice important interests for their good?  Why do firemen rush into the highly dangerous environment of a large fire in order to rescue the people inside?

Now fast-forward to the post-Darwinian world; look at the human organism from the point of view of the study of primate behavior; and ask this key question: Is there an evolutionary basis for social behaviors? Are there emotions supporting cooperation that were selected for through our evolutionary history? Is a moral capacity hardwired?

Philosophers have treated this question in the past.  Allan Gibbard's Wise Choices, Apt Feelings: A Theory of Normative Judgment is a particularly good example. Here is how Gibbard describes the situation.
Consider now human beings evolving in hunting-gathering societies.  We could expect them to face an abundance of human bargaining situations, involving mutual aid, personal property, mates, territory, use of housing, and the like.  Human bargaining situations tend to be evolutionary bargaining situations.  Human goals tend toward biological fitness, toward reproduction.  The point is not, of course, that a person's sole goal is to maximize his reproduction; few if any people have that as a goal at all.  Rather, the point concerns propensities to develop goals.  Those propensities that conferred greatest fitness were selected; hence in a hunting-gathering society, people tended to want the various things it was fitness-enhancing for them to want.  Conditions of primitive human life must have required intricate coordination--both of the simple cooperative kinds involved, say, in meeting a person, and of the kind required for bargaining problems to yield mutually beneficial outcomes. Propensities well coordinated with the propensities of others would have been fitness-enhancing, and so we may view a vast array of human propensities as coordinating devices.  Our emotional propensities, I suggest, are largely the results of these selection pressures, and so are our normative capacities. (67)
One of Gibbard's key points is an analytical one. He argues against the idea of there being specific moral content, ethical principles, or moral emotions that are embodied in the central nervous system (CNS) as a result of variation and selection. Instead, he argues for there being a hardwired set of more abstract capacities that have CNS reality and selection advantage: the ability to learn a norm and to act in accordance with it.  (Richard Joyce makes a similar point: "Evolutionary psychology does not claim that observable human behavior is adaptive, but rather that it is produced by psychological mechanisms that are adaptations.  The output of an adaptation need not be adaptive" (5).)

This is the part that seems counter-intuitive from a simple Darwinian point of view. Wouldn't an organism possessing a genetically determined disposition to act contrary to its vital interests almost necessarily have less reproductive success? So shouldn't such a gene quickly lose out to a more opportunistic alternative? Gibbard considers the evolutionary arguments surrounding the topic of altruism (including Richard Dawkins' The Selfish Gene), and concludes -- not necessarily.  It is possible to mount an evolutionary argument that establishes the fitness-enhancing characteristics of some specific kinds of altruistic behavior.
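One standard argument of this sort is kin selection (this is a textbook illustration, not Gibbard's own formulation): Hamilton's rule says a gene for altruistic behavior can spread when r·B > C, where r is genetic relatedness to the beneficiary, B the fitness benefit conferred, and C the fitness cost borne.  A minimal sketch, with purely illustrative numbers:

```python
def altruism_favored(relatedness: float, benefit: float, cost: float) -> bool:
    """Hamilton's rule: an altruistic trait can be selected for when
    r * B > C -- the benefit to relatives, discounted by genetic
    relatedness, exceeds the fitness cost to the actor."""
    return relatedness * benefit > cost

# Sacrificing 1 unit of fitness to give a full sibling (r = 0.5) 3 units
# of benefit is favored; the same sacrifice for a first cousin (r = 0.125)
# is not, since 0.125 * 3 = 0.375 < 1.
print(altruism_favored(0.5, 3.0, 1.0))    # True
print(altruism_favored(0.125, 3.0, 1.0))  # False
```

The point matches Gibbard's: what gets selected is not "altruism toward everyone" but a propensity whose typical exercise, in the ancestral environment, was fitness-enhancing.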

So what does the current research on this topic add to what we already knew?  And, can we draw any interesting connections back to the venerable Smith?

In fact, there seems to be a new surge of interest in the topic.  A number of philosophers and psychologists are now interested in treating moral psychology as an empirical question, and in working back to the evolutionary environment in which these human capacities emerged.  (For example, Richard Joyce, The Evolution of Morality and Walter Sinnott-Armstrong, ed., Moral Psychology, Volume 1: The Evolution of Morality: Adaptations and Innateness.)  Particularly interesting is research by Michael Tomasello and his collaborators.  Tomasello is co-director of the Max Planck Institute for Evolutionary Anthropology.  In a very interesting recent book, Why We Cooperate, he argues that human beings are hardwired for cooperation, empathy, and shared intentionality.  A great deal of his research involves experiments and observations of human children (9-24 months) and of young non-human primates.  He finds, essentially, that infants and children display a range of behaviors that seem to reveal a natural readiness for altruism, sharing, coordination, and eventually norm-following.  "I only propose that the kinds of collaborative activities in which young children today engage are the natural cradle of social norms of the cooperative variety.  This is because they contain the seeds of the two key ingredients" (89-90).  He presents a range of experimental data supporting these ideas:
  • Human infants have a pre-cultural disposition to be helpful and empathetic (12-14 months). 
  • Human toddlers adjust their cooperative and normative behavior in light of the behavior of others: they are generous to the generous and not to the ungenerous. 
  • Human infants and toddlers have a pre-cultural disposition to absorb and enforce norms. 
  • The emotions of guilt and shame appear to be hardwired supports for conformity to norms. 
  • Infants appear to take a "we" intentional stance without learning.  They are able to quickly figure out what another agent is trying to do.
  • Chimpanzees differ from human infants in virtually all of these areas. 
Here is a particularly interesting piece of evidence that Tomasello offers in support of the idea that human evolution was shaped by selection pressures that favored social coordination: the whites of the eyes in the human being.  Almost all non-human primates have eyes that are primarily dark, whereas human eyes feature a large and conspicuous circle of white (the sclera).  The whites of the eyes permit an observer to determine what another individual is looking at -- allowing human individuals to achieve a substantially greater degree of shared attention and coordination.  "My team has argued that advertising my eye direction for all to see could only have evolved in a cooperative social environment in which others were not likely to exploit it to my detriment" (76).

So does this recent work on the evolutionary basis of moral emotions have anything to do with Smith and the moral sentiments?  What the two bodies of thought have in common is the idea that there is a psychological foundation to moral behavior, cooperation, altruism, and helping.  Pure maximizing rationality doesn't get you to "helping"; rather, there needs to be some psychological impulse to improve things for the other person.  Where evolutionary psychology differs from Smith is precisely in the nature of the explanation offered for this moral psychology; we have a pretty good idea of how natural selection works on biological traits, and so we are in a better position than Smith was to explain why human beings possess moral sentiments.  What we cannot yet answer is how these moral sentiments are embodied in the human organism -- the nature of the mechanism at the level of the central nervous system or the cognitive system.

(It is interesting to contrast this line of argument with that of Thomas Nagel in The Possibility of Altruism.  Nagel argues against the moral psychology of Hume -- very similar to that of Smith -- and holds that altruism is actually a feature of rationality.  We behave altruistically, fundamentally, because we have a rational representation of the reality of the external world and of other persons; and to recognize the reality of another person is immediately to have a reason to help that person.  So no "motor" of moral emotion is needed in order to explain altruistic behavior.  On this approach, we don't need to postulate moral sentiments to explain moral behavior; all we need is a rich conception of practical rationality.)

Sunday, November 1, 2009

Assurance game




How does a group of people succeed in coming together to contribute to a collective project over an extended period of time?  For example, what leads a group of unemployed workers to travel to the capital to lobby for an extension of unemployment benefits, or a group of expatriate Burmese people in London to attend demonstrations against the junta?  What motivations are relevant at the individual level? And what circumstances are most conducive to creating and sustaining collective action?

Purely self-interested egoists won't make it -- that is the message of Mancur Olson's The Logic of Collective Action: Public Goods and the Theory of Groups. The maximizing egoist will reason that the activity will either succeed or fail independent of his/her own participation.  If it succeeds then he will enjoy the benefits of cooperation; and if it fails he will have avoided the wasted costs of participation.  Either way the egoist does better by refraining from participation.  So collective action in pursuit of a public good is all but impossible within a society of rationally self-interested egoists.  As Amartya Sen observes in "Rational Fools" (link),  "The purely economic man is indeed close to being a social moron." 
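Olson's logic can be made concrete with a toy payoff comparison (the function and numbers are illustrative, not from Olson): on his premise that one person's participation does not change whether the action succeeds, abstaining strictly dominates whenever participation has any private cost.

```python
def payoff(participates: bool, others_succeed: bool,
           benefit: float = 10.0, cost: float = 2.0) -> float:
    """Payoff to one egoist, on Olson's premise that his own
    participation does not change whether the action succeeds:
    the benefit is a public good, the cost is borne privately."""
    return (benefit if others_succeed else 0.0) - (cost if participates else 0.0)

# Whatever the rest of the group does, abstaining pays strictly more:
for others_succeed in (True, False):
    print(payoff(False, others_succeed) > payoff(True, others_succeed))  # True, True
```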

But we know that this conclusion does a bad job of describing real social life.  People in villages, communities, political parties, religious organizations, public television audiences, and ethnic groups do in fact often succeed in getting themselves organized and mobilized in pursuit of a public good for the group.  Often the level of mobilization is below the level that would be optimal for production of the good for the population; often it is fairly straightforward to identify the symptoms of incipient free-riding; but ordinary social experience and history alike are replete with examples of voluntary collective action.

Many theories can be articulated in order to account for the spontaneous occurrence of collective action.  People may be irrational; they may be motivated entirely by non-utility considerations; they may be governed by norms of solidarity beyond their rational control; they may be disciplined by grassroots organizations that punish defectors; there may be an evolutionary basis hard-wired into the human cognitive-deliberative system that favors cooperation; or, for that matter, there may be a hard-wired impulse towards punishing defectors from common projects that tips the balance of utility calculation for would-be free-riders.

But here is a factor that seems to be a credible observation about social motivation and that still makes sense of the behavior in deliberative terms.  Many real social actors seem to be what might be called "conditional altruists": they are willing to contribute some effort or personal resource to a collective project if they have grounds for confidence that a reasonable number of other members of the group will contribute as well. (Jon Elster explores the idea in The Cement of Society: A Study of Social Order.)  And it isn't that these actors make a calculation error along the lines of the fallacy of unanimity -- "I want the benefits of the collective action, and it won't occur without me."  Instead, they seem to reason in ways that would please a communitarian: "I'm a member of this group, I believe that other members will do what's good for the group, and I'm willing to do my part as well."  This is a fairly explicit willingness to sacrifice the benefits of free riding.  But the conditional part is important as well: the conditional altruist is calculating about the likelihood of success in the collective undertaking, and is willing to participate only if he/she judges that enough other people will contribute as well to make the undertaking feasible.
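The conditional altruist's reasoning has the structure of an assurance (stag-hunt) game rather than a prisoner's dilemma: cooperating is the best reply to cooperation and defecting the best reply to defection, so there are two equilibria, and expectations about others decide which one obtains.  A two-player sketch with hypothetical payoffs:

```python
# Hypothetical payoffs: first move is mine, second is the other player's.
# Unlike a prisoner's dilemma, cooperating pays best when the other cooperates.
PAYOFF = {
    ("cooperate", "cooperate"): 4,  # the collective project succeeds
    ("cooperate", "defect"):    0,  # I sacrifice for nothing
    ("defect",    "cooperate"): 3,  # free ride on partial effort
    ("defect",    "defect"):    2,  # safe but poor status quo
}

def best_reply(others_move: str) -> str:
    """Return the move that maximizes my payoff against the other's move."""
    return max(("cooperate", "defect"), key=lambda m: PAYOFF[(m, others_move)])

print(best_reply("cooperate"))  # cooperate -> (C, C) is an equilibrium
print(best_reply("defect"))     # defect    -> (D, D) is an equilibrium too
```

Because both outcomes are self-reinforcing, what each actor believes about the others does real work -- which is exactly why the assurance-generating circumstances discussed below matter.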

Conditional altruism thus attributes a common moral psychology to social actors, which we might refer to as the "fairness factor."  Individuals are willing to factor collective goods into their calculation of the costs and benefits of action, and they have some degree of motivation to act in accordance with a proposed collective action that would benefit them even if they could evade participation.  They are disposed to act fairly: "If I benefit from the action, I should take my fair share of creating the benefit."  (Allan Gibbard's Wise Choices, Apt Feelings: A Theory of Normative Judgment offers an effort to bring together the evolutionary history of the species with a philosopher's analysis of moral reasoning.)

If fairness or conditional altruism is a real component of human agency (for all or many human beings), then we can identify a few factors that are likely to increase the likelihood of cooperation and collective action.  Measures that increase the actor's assurance about the behavior of others will tend to elicit higher levels of collective action.  And it is possible to think of quite a few social circumstances that have this effect.  A shared history of success in collective action is clearly relevant to current actors' level of assurance about future cooperation.  Shared history can be made more powerful in the present through the currency of songs, stories, and performances that highlight earlier successes (Michael Taylor, Community, Anarchy and Liberty).  Researchers who study peasant village communities emphasize the importance of face-to-face relations among villagers; individuals know a good deal about the past behavior of their neighbors, which can provide a better basis for predicting their future cooperative behavior (Robert Netting, Smallholders, Householders: Farm Families and the Ecology of Intensive, Sustainable Agriculture). And members of small, stable communities also know that they will need to interact with each other long into the future -- increasing the cost of non-cooperation today (Robert Axelrod, The Evolution of Cooperation: Revised Edition).
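The effect of such assurance-raising circumstances can be illustrated with a toy threshold model in the spirit of Granovetter's threshold models (the threshold values here are hypothetical): each conditional altruist contributes once the fraction of contributors she expects meets her personal threshold, and a modest boost to initial expectations can tip the group from a low-participation fixed point into a full cascade.

```python
def final_participation(thresholds, initial_expectation):
    """Iterate a Granovetter-style threshold model to a fixed point:
    each conditional altruist contributes once the fraction of
    contributors meets her personal threshold."""
    n = len(thresholds)
    fraction = initial_expectation
    while True:
        new_fraction = sum(1 for t in thresholds if t <= fraction) / n
        if new_fraction == fraction:
            return fraction
        fraction = new_fraction

# Hypothetical distribution: two unconditional contributors, the rest
# requiring progressively more assurance before they will join.
thresholds = [0.0, 0.0, 0.3, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
print(final_participation(thresholds, 0.0))  # 0.2 -- pessimism stalls at the core
print(final_participation(thresholds, 0.3))  # 1.0 -- modest assurance tips a full cascade
```

Shared histories, songs, and face-to-face knowledge do their work, on this picture, by raising the initial expectation -- not by changing anyone's underlying disposition.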

What is particularly interesting about this topic is the fact that actual social outcomes show a wide range of variations in the degree of self-interest and fairness that seems to be present.  Some groups seem to act more like Mancur Olson egoists; others (like Welsh coal miners) seem to act as though they have a very high "solidarity and fairness" quotient.  So no single answer to the question of collective action seems to work: "people are rational egoists," "people are altruists," or "people are conditional altruists."  Rather, a given opportunity for collective action seems to display a mix of all these styles of reasoning.  These variations could be the result of several independent factors: differences in the formation of individuals' moral psychology (emphasizing individualism or community from infancy); differences in current institutional settings (arrangements that make future interactions seem more likely to each participant); even potentially differences in personality or the genetic basis of decision-making across individuals.

I'm sure that there is work in experimental economics that probes the boundaries of this feature of practical reasoning.  Ordinary social experience informs us that people have different levels of willingness to undertake sacrifice for a group's projects.  And having a more nuanced empirical understanding of how people behave in the settings of potential cooperation and collective action would help refine our understanding of the thought-processes and styles of reasoning through which individuals decide what to do. Here is an interesting paper by Ernst Fehr and Klaus Schmidt titled "The Economics of Fairness, Reciprocity and Altruism – Experimental Evidence and New Theories."

Assurance game




How does a group of people succeed in coming together to contribute to a collective project over an extended period of time?  For example, what leads a group of unemployed workers to travel to the capital to lobby for an extension of unemployment benefits, or a group of expatriate Burmese people in London to attend demonstrations against the junta?  What motivations are relevant at the individual level? And what circumstances are most conducive to creating and sustaining collective action?

Purely self-interested egoists won't make it -- that is the message of Mancur Olson's Logic of Collective Action: Public Goods. The maximizing egoist will reason that the activity will either succeed or fail independent of his/her own participation.  If it succeeds then he will enjoy the benefits of cooperation; and if it fails he will have avoided the wasted costs of participation.  Either way the egoist does better by refraining from participation.  So collective action in pursuit of a public good is all but impossible within a society of rationally disinterested egoists.  As Amartya Sen observes in "Rational Fools" (link),  "The purely economic man is indeed close to being a social moron." 

But we know that this conclusion does a bad job of describing real social life.  People in villages, communities, political parties, religious organizations, public television audiences, and ethnic groups do in fact often succeed in getting themselves organized and mobilized in pursuit of a public good for the group.  Often the level of mobilization is below the level that would be optimal for production of the good for the population; often it is fairly straightforward to identify the symptoms of incipient free-riding; but ordinary social experience and history alike are replete with examples of voluntary collective action.

Many theories can be articulated in order to account for the spontaneous occurrence of collective action.  People may be irrational; they may be motivated entirely by non-utility considerations; they may be governed by norms of solidarity beyond their rational control; they may be disciplined by grassroots organizations that punish defectors; there may be an evolutionary basis hard-wired into the human cognitive-deliberative system that favors cooperation; or, for that matter, there may be a hard-wired impulse towards punishing defectors from common projects that tips the balance of utility calculation for would-be free-riders.

But here is a factor that seems to be a credible observation about social motivation and that still makes sense of the behavior in deliberative terms.  Many real social actors seem to be what might be called "conditional altruists": they are willing to contribute some effort or personal resource to a collective project if they have grounds for confidence that a reasonable number of other members of the group will contribute as well. (Jon Elster explores the idea in The Cement of Society: A Survey of Social Order.)  And it isn't that these actors make a calculation error along the lines of the fallacy of unanimity -- "I want the benefits of the collective action, and it won't occur without me."  Instead, they seem to reason in ways that would please a communitarian: "I'm a member of this group, I believe that other members will do what's good for the group, and I'm willing to do my part as well."  This is a fairly explicit willingness to sacrifice the benefits of free riding.  But the conditional part is important as well: the conditional altruist is calculating about the likelihood of success in the collective undertaking, and is willing to participate only if he/she judges that enough other people will contribute as well to make the undertaking feasible.

Conditional altruism thus attributes a common moral psychology to social actors, which we might refer to as the "fairness factor."  Individuals are willing to factor collective goods into their calculation of the costs and benefits of action, and they have some degree of motivation to act in accordance with a proposed collective action that would benefit them even if they could evade participation.  They are disposed to act fairly: "If I benefit from the action, I should take my fair share of creating the benefit."  (Allan Gibbard's Wise Choices, Apt Feelings: A Theory of Normative Judgment offers an effort to bring together the evolutionary history of the species with a philosopher's analysis of moral reasoning.)

If fairness or conditional altruism are real components of human agency (for all or many human beings), then we can identify a few factors that are likely to increase the likelihood of cooperation and collective action.  Measures that increase the actor's assurance of the behavior of others will have the effect of eliciting higher levels of collective action.  And it is possible to think of quite a few social circumstances that have this effect.  A shared history of success in collective action is clearly relevant to current actors' level of assurance about future cooperation.  Shared history can be made more powerful in the present through the currency of songs, stories, and performances that highlight earlier successes (Michael Taylor, Community, Anarchy and Liberty).  Researchers who study peasant village communities emphasize the importance of face-to-face relations among villagers; individuals know a good deal about the past behavior of their neighbors, which can provide a better basis for predicting their future cooperative behavior (Robert Netting, Smallholders, Householders: Farm Families and the Ecology of Intensive, Sustainable Agriculture). And members of small, stable communities also know that they will need to interact with each other long into the future -- increasing the cost of non-cooperation today (Robert Axelrod, The Evolution of Cooperation: Revised Edition).

What is particularly interesting about this topic is the fact that actual social outcomes show a wide range of variations in the degree of self-interest and fairness that seems to be present.  Some groups seem to act more like Mancur Olson egoists; others (like Welsh coal miners) seem to act as though they have a very high "solidarity and fairness" quotient.  So no single answer to the question of collective action seems to work: "people are rational egoists," "people are altruists," or "people are conditional altruists."  Rather, a given opportunity for collective action seems to display a mix of all these styles of reasoning.  These variations could be the result of several independent factors: differences in the formation of individuals' moral psychology (emphasizing individualism or community from infancy); differences in current institutional settings (arrangements that make future interactions seem more likely to each participant); even potentially differences in personality or the genetic basis of decision-making across individuals.

I'm sure that there is work in experimental economics that probes the boundaries of this feature of practical reasoning.  Ordinary social experience informs us that people have different levels of willingness to undertake sacrifice for a group's projects.  And having a more nuanced empirical understanding of how people behave in the settings of potential cooperation and collective action would help refine our understanding of the thought-processes and styles of reasoning through which individuals decide what to do. Here is an interesting paper by Ernst Fehr and Klaus Schmidt titled "The Economics of Fairness, Reciprocity and Altruism – Experimental Evidence and New Theories."

Assurance game




How does a group of people succeed in coming together to contribute to a collective project over an extended period of time?  For example, what leads a group of unemployed workers to travel to the capital to lobby for an extension of unemployment benefits, or a group of expatriate Burmese people in London to attend demonstrations against the junta?  What motivations are relevant at the individual level? And what circumstances are most conducive to creating and sustaining collective action?

Purely self-interested egoists won't make it -- that is the message of Mancur Olson's Logic of Collective Action: Public Goods. The maximizing egoist will reason that the activity will either succeed or fail independent of his/her own participation.  If it succeeds then he will enjoy the benefits of cooperation; and if it fails he will have avoided the wasted costs of participation.  Either way the egoist does better by refraining from participation.  So collective action in pursuit of a public good is all but impossible within a society of rationally disinterested egoists.  As Amartya Sen observes in "Rational Fools" (link),  "The purely economic man is indeed close to being a social moron." 

But we know that this conclusion does a bad job of describing real social life.  People in villages, communities, political parties, religious organizations, public television audiences, and ethnic groups do in fact often succeed in getting themselves organized and mobilized in pursuit of a public good for the group.  Often the level of mobilization is below the level that would be optimal for production of the good for the population; often it is fairly straightforward to identify the symptoms of incipient free-riding; but ordinary social experience and history alike are replete with examples of voluntary collective action.

Many theories can be articulated in order to account for the spontaneous occurrence of collective action.  People may be irrational; they may be motivated entirely by non-utility considerations; they may be governed by norms of solidarity beyond their rational control; they may be disciplined by grassroots organizations that punish defectors; there may be an evolutionary basis hard-wired into the human cognitive-deliberative system that favors cooperation; or, for that matter, there may be a hard-wired impulse towards punishing defectors from common projects that tips the balance of utility calculation for would-be free-riders.

But here is a factor that seems to be a credible observation about social motivation and that still makes sense of the behavior in deliberative terms.  Many real social actors seem to be what might be called "conditional altruists": they are willing to contribute some effort or personal resource to a collective project if they have grounds for confidence that a reasonable number of other members of the group will contribute as well. (Jon Elster explores the idea in The Cement of Society: A Study of Social Order.)  And it isn't that these actors make a calculation error along the lines of the fallacy of unanimity -- "I want the benefits of the collective action, and it won't occur without me."  Instead, they seem to reason in ways that would please a communitarian: "I'm a member of this group, I believe that other members will do what's good for the group, and I'm willing to do my part as well."  This is a fairly explicit willingness to sacrifice the benefits of free riding.  But the conditional part is important as well: the conditional altruist is calculating about the likelihood of success in the collective undertaking, and is willing to participate only if he/she judges that enough other people will contribute as well to make the undertaking feasible.
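The conditional altruist's situation is the assurance game (stag hunt) of the post's title: unlike the prisoner's dilemma, there is no dominant strategy, and the best reply tracks the expectation about what others will do. The payoff numbers below are an illustrative sketch, not taken from any source:

```python
# An assurance-game payoff matrix for one player:
# (my_choice, other_choice) -> my payoff.  Illustrative numbers.
payoffs = {
    ("cooperate", "cooperate"): 4,  # joint success
    ("cooperate", "defect"):    0,  # wasted, solitary effort
    ("defect",    "cooperate"): 3,  # free ride on partial effort
    ("defect",    "defect"):    2,  # safe but inferior outcome
}

def best_response(other: str) -> str:
    """The payoff-maximizing reply to the other player's expected choice."""
    return max(("cooperate", "defect"), key=lambda me: payoffs[(me, other)])

# No dominant strategy: the best reply depends on expectations.
print(best_response("cooperate"))  # -> "cooperate"
print(best_response("defect"))     # -> "defect"
```

This is why assurance matters: the conditional altruist cooperates exactly when he/she expects enough cooperation from others, so anything that firms up that expectation changes the rational choice.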

Conditional altruism thus attributes a common moral psychology to social actors, which we might refer to as the "fairness factor."  Individuals are willing to factor collective goods into their calculation of the costs and benefits of action, and they have some degree of motivation to act in accordance with a proposed collective action that would benefit them even if they could evade participation.  They are disposed to act fairly: "If I benefit from the action, I should take my fair share of creating the benefit."  (Allan Gibbard's Wise Choices, Apt Feelings: A Theory of Normative Judgment is an effort to bring together the evolutionary history of the species with a philosopher's analysis of moral reasoning.)

If fairness or conditional altruism is a real component of human agency (for all or many human beings), then we can identify a few factors that are likely to increase the likelihood of cooperation and collective action.  Measures that increase the actor's assurance about the behavior of others will have the effect of eliciting higher levels of collective action.  And it is possible to think of quite a few social circumstances that have this effect.  A shared history of success in collective action is clearly relevant to current actors' level of assurance about future cooperation.  Shared history can be made more powerful in the present through the currency of songs, stories, and performances that highlight earlier successes (Michael Taylor, Community, Anarchy and Liberty).  Researchers who study peasant village communities emphasize the importance of face-to-face relations among villagers; individuals know a good deal about the past behavior of their neighbors, which can provide a better basis for predicting their future cooperative behavior (Robert Netting, Smallholders, Householders: Farm Families and the Ecology of Intensive, Sustainable Agriculture). And members of small, stable communities also know that they will need to interact with each other long into the future -- increasing the cost of non-cooperation today (Robert Axelrod, The Evolution of Cooperation: Revised Edition).
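The aggregate effect of assurance can be sketched with a simple threshold model in the spirit of Granovetter's threshold models of collective behavior (a hypothetical illustration, not a model proposed in this post): each conditional altruist joins once the number of visible participants reaches his/her personal threshold, and assurance-building factors -- shared history, face-to-face knowledge, the expectation of future interaction -- can be read as lowering those thresholds:

```python
# Threshold sketch of conditional altruism: person i participates once
# at least thresholds[i] others are already visibly participating.
def mobilization(thresholds: list) -> int:
    """Iterate to a fixed point: how many people end up participating?"""
    joined = 0
    while True:
        now = sum(1 for t in thresholds if t <= joined)
        if now == joined:
            return joined
        joined = now

high_assurance = [0, 1, 2, 3, 4]   # each person needs only modest company
low_assurance  = [0, 2, 3, 4, 5]   # one missing "rung" stalls the cascade
print(mobilization(high_assurance))  # -> 5: a full cascade of participation
print(mobilization(low_assurance))   # -> 1: mobilization fizzles
```

The two profiles differ only slightly, yet one produces full mobilization and the other almost none -- a toy version of the observation that small changes in mutual assurance can make the difference between successful collective action and its absence.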

What is particularly interesting about this topic is the fact that actual social outcomes show a wide range of variations in the degree of self-interest and fairness that seems to be present.  Some groups seem to act more like Mancur Olson egoists; others (like Welsh coal miners) seem to act as though they have a very high "solidarity and fairness" quotient.  So no single answer to the question of collective action seems to work: "people are rational egoists," "people are altruists," or "people are conditional altruists."  Rather, a given opportunity for collective action seems to display a mix of all these styles of reasoning.  These variations could be the result of several independent factors: differences in the formation of individuals' moral psychology (emphasizing individualism or community from infancy); differences in current institutional settings (arrangements that make future interactions seem more likely to each participant); even potentially differences in personality or the genetic basis of decision-making across individuals.

I'm sure that there is work in experimental economics that probes the boundaries of this feature of practical reasoning.  Ordinary social experience informs us that people have different levels of willingness to undertake sacrifice for a group's projects.  And having a more nuanced empirical understanding of how people behave in the settings of potential cooperation and collective action would help refine our understanding of the thought-processes and styles of reasoning through which individuals decide what to do. Here is an interesting paper by Ernst Fehr and Klaus Schmidt titled "The Economics of Fairness, Reciprocity and Altruism – Experimental Evidence and New Theories."