
Sunday, April 17, 2011

Scenario-based projections of social processes


As we have noted in previous posts, social outcomes are highly path-dependent and contingent (link, link, link, link). This implies that it is difficult to predict the consequences of even a single causal intervention within a complex social environment including numerous actors -- say, a new land use policy, a new state tax on services, or a sweeping cap-and-trade policy on CO2 emissions. And yet policy changes are specifically designed and chosen in order to bring about certain kinds of outcomes. We care about the future; we adopt policies to improve this or that feature of the future; and yet we have a hard time providing a justified forecast of the consequences of the policy.

This difficulty doesn't only affect policy choices; it also pertains to large interventions like the democracy uprisings in the Middle East and North Africa. There are too many imponderable factors -- the behavior of the military, the reactions of other governments, the consequent strategies of internal political actors and parties (the Muslim Brotherhood in Egypt) -- so activists and academic experts alike are forced to concede that they don't really know what the consequences will be.

One part of this imponderability derives from the fact that social changes are conveyed through sets of individual and collective actors. The actors have a variety of motives and modes of reasoning, and the collective actors are forced to somehow aggregate the actions and wants of subordinate actors. And it isn't possible to anticipate with confidence the choices that the actors will make in response to changing circumstances. At a very high level of abstraction, it is the task of game theory to model strategic decision-making over a sequence of choices (problems of strategic rationality); but the tools of game theory are too abstract to allow modeling of specific complex social interactions.

A second feature of unpredictability in extended social processes derives from the fact that the agents themselves are not fixed and constant throughout the process. The experience of democracy activism potentially changes the agent profoundly -- so the expectations we would have had of his/her choices at the beginning may be very poorly grounded by the middle and end. Some possible changes may make a very large difference in outcomes -- actors may become more committed, more open to violence, more ready to compromise, more understanding of the grievances of other groups, ... This is sometimes described as endogeneity -- the causal components themselves change their characteristics as a consequence of the process.

So the actors change through the social process; but the same is often true of the social organizations and institutions that are involved in the process. Take contentious politics -- it may be that a round of protests begins around a couple of loose pre-existing organizations. As actors seek to achieve their political goals through collective action, they make use of the organizations for their communications and mobilization resources. But some actors may then also attempt to transform the organization itself -- to make it more effective or to make it more accommodating to the political objectives of this particular group of activists. (Think of Lenin as a revolutionary organization innovator.) And through their struggles, they may elicit changes in the organizations of the "forces of order" -- the police may create new tactics (kettling) and new sub-organizations (specialized intelligence units). So the process of change is likely enough to transform all the causal components as well -- the agents and their motivations as well as the surrounding institutions of mobilization and control. Rather than a set of billiard balls and iron rods with fixed properties and predictable aggregate consequences, we find a fluid situation in which the causal properties of each of the components of the process are themselves changing.

One way of trying to handle the indeterminacy and causal complexity of these sorts of causal processes is to give up on the goal of arriving at specific "point" predictions about outcomes and instead concentrate on tracing out a large number of possible scenarios, beginning with the circumstances, actors, and structures on the ground. In some circumstances we may find that there is a very wide range of possible outcomes; but we may find that a large percentage of the feasible scenarios or pathways fall within a much narrower range. This kind of reasoning is familiar to economists and financial analysts in the form of Monte Carlo simulations. And it is possible that the approach can be used for modeling likely outcomes in more complex social processes as well -- war and peace, ethnic conflict, climate change, or democracy movements.
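The ensemble idea can be sketched in a few lines of code. The model below is an invented toy, not drawn from any of the sources discussed here: a path-dependent process in which each step's drift depends on the accumulated state, simulated thousands of times so that we recover a distribution of outcomes rather than a single point prediction.

```python
import random
import statistics

def simulate_path(steps=50, intervention=0.0, seed=None):
    """One scenario: a path-dependent process where each step's drift
    depends on the current state (momentum) plus an intervention effect."""
    rng = random.Random(seed)
    state = 0.0
    for _ in range(steps):
        drift = 0.1 * state + intervention   # path dependence
        state += drift + rng.gauss(0, 1.0)   # contingent shock
    return state

# Trace out a large ensemble of scenarios rather than one point prediction.
outcomes = sorted(simulate_path(intervention=0.5, seed=i) for i in range(5000))
lo = outcomes[int(0.05 * len(outcomes))]
hi = outcomes[int(0.95 * len(outcomes))]
print(f"median: {statistics.median(outcomes):.1f}, 90% of paths in [{lo:.1f}, {hi:.1f}]")
```

Even when no single path can be predicted, the ensemble shows whether the feasible outcomes cluster in a narrow band or spread widely.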

Agent-based modeling is one component of approaches like these (link).  This means taking into account a wide range of social factors -- agents, groups, organizations, institutions, states, popular movements -- and then modeling the consequences of these initial assumptions. Robert Axelrod and colleagues have applied a variety of modeling techniques to these efforts (link).
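A minimal illustration of agent-based thinking, in the spirit of Granovetter's classic threshold model of collective behavior (the threshold numbers are invented for illustration): each agent joins a protest once enough others have already joined, and a tiny change in one agent's threshold can flip the aggregate outcome -- exactly the contingency discussed above.

```python
def cascade(thresholds):
    """Each agent joins once the NUMBER already active meets its
    personal threshold.  Iterate until no one else joins."""
    active = 0
    while True:
        new_active = sum(1 for t in thresholds if t <= active)
        if new_active == active:
            return active
        active = new_active

uniform = list(range(100))              # thresholds 0, 1, 2, ..., 99
tweaked = [0, 2] + list(range(2, 100))  # one agent's threshold raised from 1 to 2

print(cascade(uniform))   # 100: a full cascade
print(cascade(tweaked))   # 1: the cascade never starts
```

Two populations that are nearly identical at the individual level produce radically different collective outcomes -- a compact demonstration of why point predictions fail and ensembles of scenarios are needed.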

Another interesting project along these lines is underway at the RAND Pardee Center, summarized in a white paper called Shaping the Next One Hundred Years: New Methods for Quantitative, Long-Term Policy Analysis. Here is how the lead investigators describe the overall strategy of the effort:
This report describes and demonstrates a new, quantitative approach to long-term policy analysis (LTPA).  These robust decisionmaking methods aim to greatly enhance and support humans’ innate decisionmaking capabilities with powerful quantitative analytic tools similar to those that have demonstrated unparalleled effectiveness when applied to more circumscribed decision problems.  By reframing the question “What will the long-term future bring?” as “How can we choose actions today that will be consistent with our long-term interests?” robust decisionmaking can harness the heretofore unavailable capabilities of modern computers to grapple directly with the inherent difficulty of accurate long-term prediction that has bedeviled previous approaches to LTPA. (iii)
LTPA is an important example of a class of problems requiring decisionmaking under conditions of  deep uncertainty—that is, where analysts do not know, or the parties to a decision cannot agree on, (1) the appropriate conceptual models that describe the relationships among the key driving forces that will shape the long-term future, (2) the probability distributions used to represent uncertainty about key variables and parameters in the mathematical representations of these conceptual models, and/or (3) how to value the desirability of alternative outcomes. (iii)
And here, in a nutshell, is how the approach is supposed to work:
This study proposes four key elements of successful LTPA: 
• Consider large ensembles (hundreds to millions) of scenarios.
• Seek robust, not optimal, strategies.
• Achieve robustness with adaptivity.
• Design analysis for interactive exploration of the multiplicity of plausible futures.
 
These elements are implemented through an iterative process in which the computer helps humans create a large ensemble of plausible scenarios, where each scenario represents one guess about how the world works (a future state of the world) and one choice of many alternative strategies that might be adopted to influence outcomes. Ideally, such ensembles will contain a sufficiently wide range of plausible futures that one will match whatever future, surprising or not, does occur—at least close enough for the purposes of crafting policies robust against it.  (xiii)
Thus, computer-guided exploration of scenario and decision spaces can provide a prosthesis for the imagination, helping humans, working individually or in groups, to discover adaptive near-term strategies that are robust over large ensembles of plausible futures. (xiv)
The hard work of this approach is to identify the characteristics of the exogenous uncertainties (X), policy levers (L), relationships (R), and measures (M) -- the "XLRM" framework.  Then the analysis turns to identifying a very large number of possible scenarios, depending on the initial conditions and the properties of the actors and organizations. (This aspect of the analysis is analogous to multiple plays of a simulation game like SimCity.) Finally, the approach requires aggregating the large number of scenarios to allow the analysis to reach some conclusions about the distribution of futures entailed by the starting position and the characteristics of the actors and institutions.  And the method attempts to assign a measure of "regret" to outcomes, in order to assess the policy steps that might be taken today that lead to the least regrettable outcomes in the distant future.
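The regret calculation can be sketched concretely. The payoff table below is entirely invented -- three hypothetical policies evaluated against three plausible future scenarios -- but it shows the logic of choosing a robust rather than an optimal strategy:

```python
# Toy payoff table: rows = candidate policies, columns = plausible scenarios.
payoffs = {
    "aggressive": [9, 1, 2],
    "moderate":   [6, 5, 4],
    "hedged":     [4, 4, 5],
}

n_scenarios = 3
best_per_scenario = [max(p[s] for p in payoffs.values()) for s in range(n_scenarios)]

# Regret of a policy in a scenario = shortfall from the best achievable there.
regret = {name: [best_per_scenario[s] - p[s] for s in range(n_scenarios)]
          for name, p in payoffs.items()}

# A robust policy minimizes the worst-case regret across all scenarios.
robust = min(regret, key=lambda name: max(regret[name]))
print(robust, max(regret[robust]))   # moderate 3
```

Note that "aggressive" is the best policy if scenario 1 happens to occur, but "moderate" has the smallest worst-case regret across the whole ensemble -- robust, not optimal.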

It appears, then, that there are computational tools and methods that may prove useful for social explanation and social prediction -- not of single outcomes, but of the range of outcomes that may be associated with a set of interventions, actors, and institutions.

Monday, December 13, 2010

Diagrams and economic thought

source: The Paretian System (link)

The most vivid part of any undergraduate student's study of economics is probably the diagrams.  Economists since Walras, Pareto, and Marshall have found it useful to express their theories and hypotheses making use of two-axis diagrams, allowing for very economical formulation of fundamental relationships. Supply-demand curves, production functions, and a graph of diminishing marginal product all provide a way of making geometrical sense of a given economic principle or hypothesis.  They allow us to visualize the relationships that are postulated among a set of factors.

Mark Blaug has made a long and fruitful career out of his remarkable ability to place economic thought into its context (Economic Theory in Retrospect (1962), The Methodology of Economics: Or, How Economists Explain (1992)).  Now he has collaborated with Peter Lloyd to produce Famous Figures and Diagrams in Economics (2010), and the book is a marvelous contribution.

The book is organized into several large sections: Demand and supply curve analysis; Welfare economics; Special markets; General equilibrium analysis; Open economies; Macroeconomic analysis; and Growth and income distribution.  Experts have been recruited to write short, technical but accessible essays on some 58 topics, including discussion of about 150 diagrams.  

The figures that the book considers pretty much reproduce the history of modern economic thought.  And, indeed, some figures have been repeatedly rediscovered; Blaug attributes the "Marshallian cross" to Cournot (1838), Rau (1841), Dupuit (1844), Mangoldt (1863), and Jenkin (1870).  Almost all the examples are drawn from the history of orthodox neo-classical economics; rare exceptions are Joan Robinson's "graph of discrimination" and August Lösch's "market areas".  The main insights of classical economics are equally amenable to presentation through diagrams, so it is interesting that the classical economists (including Marx) were not particularly inclined to use them.  Here is a diagram not included in the book, representing Michio Morishima's effort to express some of Ricardo's central economic intuitions:


It is worth thinking a bit about what a diagram is, from a cognitive point of view.  To start, it is not a data graph; a diagram does not generally provide a summary of actual economic variables over time, such as unemployment.  Nor is an economic diagram simply a graph of a given mathematical function, plotting the value of the function over part of the domain of the independent variable.  We need more than a graphing calculator to create a useful economic diagram.

Rather, an economic diagram is a stylized representation of the behavior and interaction of (often) several variables in a range of interest.  Take the most fundamental diagram of neoclassical economics, the supply-demand diagram.  We are asked to consider "supply" and "demand" over a range of "price".  One curve represents the quantity of the good that will be produced at each price from low to high; the other curve represents the quantity of the good that will be purchased over the same range of prices.  The intersection of the curves is the point of interest; it is the equilibrium at which quantity demanded equals quantity supplied.  The shape of the curves is significant: straight lines represent the view that supply and demand are linear with respect to price, whereas curved lines represent a non-linear relation between quantity and price (each increment in price stimulates a smaller change in quantity).
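The equilibrium that the diagram depicts can also be computed numerically. The linear curves below are arbitrary examples of my own, not drawn from the book; bisection on excess demand finds the price at which the two curves cross:

```python
def supply(p):
    """Quantity producers offer at price p (linear, upward-sloping)."""
    return 2.0 * p

def demand(p):
    """Quantity buyers want at price p (linear, downward-sloping)."""
    return 30.0 - 1.0 * p

# Bisection: excess demand is positive below equilibrium, negative above.
lo, hi = 0.0, 100.0
for _ in range(60):
    mid = (lo + hi) / 2
    if demand(mid) - supply(mid) > 0:
        lo = mid
    else:
        hi = mid
p_star = (lo + hi) / 2
print(f"equilibrium price {p_star:.2f}, quantity {supply(p_star):.2f}")
```

With these particular curves the crossing point is at price 10 and quantity 20 -- the numerical counterpart of reading off the intersection on the diagram.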

Here are some of the uses of diagrams in economics that Blaug and Lloyd mention in their introduction:
Figures and diagrams have been used in economic theory in several ways.  They have been used as a device to discover economic results; theorems or properties of models; or comparative static propositions and dynamic propositions. They have been used to prove some results. And they have been used as an expository device. (5)
They go on to quote Marshall:
It happens with a few unimportant exceptions that all the results which have been obtained by the application of mathematical methods to pure economic theory can be obtained independently by the method of diagrams.  Diagrams represent simultaneously to the eye the chief forces which are at work, laid out, as it were, in a map; and thereby suggest results to which attention has not been directed by the use of methods of mathematical analysis. (5)
We might imagine that economic diagrams are purely mathematical constructs, and we might suppose that we have little choice in the way that a diagram is constructed.  But Edward Tufte has quite a bit to say on this subject in a series of books beginning with The Visual Display of Quantitative Information.  Essentially Tufte's message is that quantitative ideas can be conveyed in better and worse ways, and that much of the communication we do about quantities is misleading.  Conveying a quantitative relationship through a diagram can be done more or less insightfully; it is up to the economist to find a concise way of representing the relationships he/she is interested in exploring.

source: EJ Marey's train schedule, Paris to Lyons, in Edward Tufte,  The Visual Display of Quantitative Information

Blaug and Lloyd take some note of the "presentation aesthetics" of economic diagrams when they discuss modern methods of presentation:
In many areas of economic theory, the way in which economists understand economic concepts and propositions is through figures and diagrams.  What teacher of economic theory has not seen the dawn of understanding come over students when, failing to understand an exposition of some complex model in algebra or calculus, they are presented with a simple illustration? ... 
One can comprehend relationships among a number of variables (as in the box diagrams) or the effects of shifting curves or multiple equilibria more readily than in the corresponding algebra. This advantage has been increased by modern technologies.  Textbooks today use multi-coloured diagrams to great effect and the delivery of diagrams in classroom from computer-based programs allow overlays and other graphical techniques that aid the exposition of complex ideas. (8-9)
One of my favorite economic diagrams is the one introduced by Mark Elvin to represent his theory of a high-level equilibrium trap in agricultural development in The Pattern of the Chinese Past.


This diagram represents several different kinds of historical change in one compact figure: gradual technical progress along a production curve, shift of production curves through technical innovation, and the maximum production possibility curve that lies above each of these.  The axes represent "total output" and "rural population." The concave shape of each curve has a very specific economic and demographic meaning: as population grows within a given mix of techniques, output grows more slowly; so average output per capita approaches the subsistence line OS.  The HLET is graphically and laconically indicated on the upper right quadrant of the graph; there is no further room for technical improvement, and population has increased to the point where there is no surplus to fund radical technological innovation.  (Elvin's theory of the high-level equilibrium trap is discussed in my Microfoundations, Methods, and Causation; link.)

Diagrams and economic thought

source: The Paretian System (link)

The most vivid part of any undergraduate student's study of economics is probably the diagrams.  Economists since Walras, Pareto, and Marshall have found it useful to express their theories and hypotheses using two-axis diagrams, which allow very economical formulation of fundamental relationships. Supply-demand curves, production functions, and graphs of diminishing marginal product all provide a way of making geometrical sense of a given economic principle or hypothesis.  They allow us to visualize the relationships that are postulated among a set of factors.

Mark Blaug has made a long and fruitful career out of his remarkable ability to place economic thought in its context (Economic Theory in Retrospect (1962), The Methodology of Economics: Or, How Economists Explain (1992)).  Now he has collaborated with Peter Lloyd to produce Famous Figures and Diagrams in Economics (2010), and the book is a marvelous contribution.

The book is organized into several large sections: Demand and supply curve analysis; Welfare economics; Special markets; General equilibrium analysis; Open economies; Macroeconomic analysis; and Growth and income distribution.  Experts have been recruited to write short, technical but accessible essays on some 58 topics, including discussion of about 150 diagrams.  

The figures that the book considers pretty much reproduce the history of modern economic thought.  And, indeed, some figures have been repeatedly rediscovered; Blaug attributes the "Marshallian cross" to Cournot (1838), Rau (1841), Dupuit (1844), Mangoldt (1863), and Jenkin (1870).  Almost all the examples are drawn from the history of orthodox neoclassical economics; rare exceptions are Joan Robinson's "graph of discrimination" and August Lösch's "market areas".  The main insights of classical economics are equally amenable to presentation through diagrams, so it is interesting that the classical economists (including Marx) were not particularly inclined to use them.  Here is a diagram not included in the book, representing Michio Morishima's effort to express some of Ricardo's central economic intuitions:


It is worth thinking a bit about what a diagram is, from a cognitive point of view.  To start, it is not a data graph; a diagram does not generally provide a summary of actual economic variables over time, such as unemployment.  But generally an economic diagram is not simply a graph of a given mathematical function either, plotting the value of a function over part of the domain of the independent variable.  We need more than a graphing calculator to create a useful economic diagram.

Rather, an economic diagram is a stylized representation of the behavior and interaction of (often) several variables over a range of interest.  Take the most fundamental diagram of neoclassical economics, the supply-demand diagram.  We are asked to consider "supply" and "demand" over a range of "price".  One curve represents the quantity of the good that producers will supply at each price across the range, from low to high; the other curve represents the quantity that consumers will purchase at each of those prices.  The intersection of the curves is the point of interest: the equilibrium at which quantity demanded equals quantity supplied.  The shape of each curve is significant as well; a straight line represents the view that supply and demand are linear with respect to price, whereas a curved line represents a non-linear relation between quantity and price (each increment in price stimulates a smaller change in quantity).
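The equilibrium logic that the diagram encodes can be sketched numerically. Here is a minimal illustration with made-up linear supply and demand schedules; the coefficients are arbitrary, chosen only so that the curves cross:

```python
# Hypothetical linear schedules: quantity demanded falls with price,
# quantity supplied rises with price.  All coefficients are illustrative.
def demand(p):
    return 100.0 - 2.0 * p   # Qd = 100 - 2p

def supply(p):
    return 10.0 + 4.0 * p    # Qs = 10 + 4p

# Equilibrium: the price at which Qd = Qs.
# 100 - 2p = 10 + 4p  ->  6p = 90  ->  p* = 15, q* = 70
p_star = 90.0 / 6.0
q_star = demand(p_star)

print(p_star, q_star)  # 15.0 70.0
```

Replacing the linear schedules with concave ones changes the location of the intersection but not the logic: the diagram's point of interest is always the crossing at which the two quantities agree.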

Here are some of the uses of diagrams in economics that Blaug and Lloyd mention in their introduction:
Figures and diagrams have been used in economic theory in several ways.  They have been used as a device to discover economic results; theorems or properties of models; or comparative static propositions and dynamic propositions. They have been used to prove some results. And they have been used as an expository device. (5)
They go on to quote Marshall:
It happens that with a few unimportant exceptions all the results which have been obtained by the application of mathematical methods to pure economic theory can be obtained independently by the method of diagrams.  Diagrams represent simultaneously to the eye the chief forces which are at work, laid out, as it were, in a map; and thereby suggest results to which attention has not been directed by the use of methods of mathematical analysis. (5)
We might imagine that economic diagrams are purely mathematical constructs, and we might suppose that we have little choice in the way that a diagram is constructed.  But Edward Tufte has quite a bit to say on this subject in a series of books beginning with The Visual Display of Quantitative Information.  Essentially Tufte's message is that quantitative ideas can be conveyed in better and worse ways, and that much of the communication we do about quantities is misleading.  Conveying a quantitative relationship through a diagram can be done more or less insightfully; it is up to the economist to find a concise way of representing the relationships he/she is interested in exploring.

source: EJ Marey's train schedule, Paris to Lyons, in Edward Tufte,  The Visual Display of Quantitative Information

Blaug and Lloyd take some note of the "presentation aesthetics" of economic diagrams when they discuss modern methods of presentation:
In many areas of economic theory, the way in which economists understand economic concepts and propositions is through figures and diagrams.  What teacher of economic theory has not seen the dawn of understanding come over students when, failing to understand an exposition of some complex model in algebra or calculus, they are presented with a simple illustration? ... 
One can comprehend relationships among a number of variables (as in the box diagrams) or the effects of shifting curves or multiple equilibria more readily than in the corresponding algebra. This advantage has been increased by modern technologies.  Textbooks today use multi-coloured diagrams to great effect and the delivery of diagrams in classroom from computer-based programs allow overlays and other graphical techniques that aid the exposition of complex ideas. (8-9)
One of my favorite economic diagrams is the one introduced by Mark Elvin to represent his theory of the high-level equilibrium trap (HLET) in agricultural development in The Pattern of the Chinese Past.


This diagram represents several different kinds of historical change in one compact figure: gradual technical progress along a production curve, shift of production curves through technical innovation, and the maximum production possibility curve that lies above each of these.  The axes represent "total output" and "rural population." The concave shape of each curve has a very specific economic and demographic meaning: as population grows within a given mix of techniques, output grows more slowly; so average output per capita approaches the subsistence line OS.  The HLET is graphically and laconically indicated on the upper right quadrant of the graph; there is no further room for technical improvement, and population has increased to the point where there is no surplus to fund radical technological innovation.  (Elvin's theory of the high-level equilibrium trap is discussed in my Microfoundations, Methods, and Causation; link.)
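The economic logic behind the concave curves can be sketched with a toy model. The functional form and all parameters below are my own illustrative assumptions, not Elvin's: a concave production function gives diminishing returns to labor under a fixed technique, so output per capita falls toward subsistence as population grows, which is the mechanism behind the trap.

```python
import math

def output(population, tech=1.0):
    # Toy concave production function: diminishing returns to labor on a
    # fixed land base; 'tech' shifts the whole curve upward, representing
    # a move to a better production curve through innovation.
    return tech * 100.0 * math.sqrt(population)

SUBSISTENCE = 1.0  # output per capita needed for survival (arbitrary units)

# As population grows under a fixed technique, output per capita falls
# toward (and eventually below) subsistence.
for pop in [1000, 4000, 9000, 16000]:
    per_capita = output(pop) / pop
    print(pop, round(per_capita, 2))
```

In the diagram's terms, raising `tech` corresponds to jumping to a higher production curve; the trap is the situation in which no surplus above subsistence remains to fund that jump.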


Saturday, March 27, 2010

Skinner's spatial imagination


images: presentations of Skinner's data by Center for Geographic Analysis, Harvard University, AAS 2010


G. William Skinner was a remarkably generous scholar who inspired and assisted several generations of China specialists.  (Here is a link to a remembrance of Bill.)  He was prolific and fertile, and there is much to learn from rereading his work, including quite a corpus of unpublished research reports and conference papers. This work holds up very well as a source of ideas about social science analysis of concrete historical and social data, and there are many avenues of research in it that remain to be further explored.

Skinner is best known for his efforts to provide regional systems analysis of spatial patterns in China.   He thought of a social-economic region as a system of flows of people, goods, and ideas.  He argued for the crucial role that water transport played in knitting together the economic activities of a region in the circumstances of pre-modern transport.   

Skinner's work demonstrated the great value of spatial analysis.  Patterns emerge visually once we've selected the appropriate level of scope, and mapping social and economic data proves tremendously insightful.  He was also highly sensitive to the social and cultural consequences of these flows of activity.  For example, gender ratios show a pronounced regional pattern; Skinner demonstrates the relevance of core-periphery structure to social-cultural variables such as this one. 

Skinner plainly anticipated the historical GIS revolution conceptually.  And this was a feat of imagination, not technology.

A classic series of articles on the spatial structure of the Chinese countryside in the 1960s provided an important basis for rethinking "village" society. They also provided a rigorous application of central place theory to the concrete specificity of China.  Here are several maps drawn from these essays ("Marketing and Social Structure in Rural China," Journal of Asian Studies 24 (1-3), 1964-65). Here Skinner is testing the predictions of central place theory, including the prediction that economic space will be structured as a system of nested hexagons with places linked by roads.
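The nested-hexagon prediction follows from packing circular market areas as tightly as possible: the market towns then sit on a triangular lattice, and each town's market area is the hexagonal cell around it. A rough sketch of generating such centers (the spacing and grid extent are arbitrary illustrative choices):

```python
import math

def hex_centers(rows, cols, spacing=1.0):
    """Centers of a triangular (hexagonal-packing) lattice: each market
    town is equidistant from its six nearest neighbors, so the market
    areas tile the plane as hexagons."""
    centers = []
    for r in range(rows):
        for c in range(cols):
            # Odd rows are offset by half a spacing; row height is
            # spacing * sqrt(3)/2, giving equal nearest-neighbor distances.
            x = c * spacing + (spacing / 2.0 if r % 2 else 0.0)
            y = r * spacing * math.sqrt(3) / 2.0
            centers.append((x, y))
    return centers

pts = hex_centers(3, 3)
# Every nearest-neighbor distance in this lattice equals the spacing.
```

Skinner's point, of course, was that the empirical Chinese landscape approximates this geometry only imperfectly, with rivers, terrain, and administrative boundaries distorting the ideal lattice.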





Another key contribution of Skinner's work is his analysis of China in terms of a set of eight or nine “macroregions”.  He argues that China was not a single national economic system, and it was not a set of separate provincial economies.  Instead, it consisted of a small number of “macroregions” of trade, commerce, and population activity, linked by water transport.  And macroregions were internally differentiated into core and periphery.  

Skinner used meticulous county-level databases to map the economic and demographic boundaries of each region.  He identified core and periphery in terms of population density, agricultural use, and other key variables.  He then measured a host of other variables – female literacy, for example – and showed that these vary systematically from core to periphery.  There is also an important ecological dimension to the argument; Skinner demonstrated that there is a flow of fertility from periphery to core as a result of the transfer of food and fuel from forests to urban cores.  (This analysis is developed in "Regional Urbanization in Nineteenth-Century China" in The City in Late Imperial China, edited by G. W. Skinner, Stanford University Press, 1977.)  Here are three maps developed by Skinner and his collaborators on the basis of the macroregions analysis.



This is a particularly expressive map of the Lower Yangzi macroregion, differentiated into four levels of core and periphery.  This is pretty much the full development of the macroregional analysis.
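Reduced to its simplest form, Skinner's procedure amounts to classifying spatial units on density-type variables and then checking whether other social variables track the classification. A schematic miniature with invented county records; the field values and the cutoff are hypothetical, chosen only to illustrate the logic:

```python
from collections import defaultdict

# Invented county-level records: (name, population density, female literacy).
counties = [
    ("A", 520, 0.31),
    ("B", 480, 0.28),
    ("C", 140, 0.12),
    ("D", 90,  0.09),
]

DENSITY_CUTOFF = 300  # arbitrary threshold separating core from periphery

def zone(density):
    return "core" if density >= DENSITY_CUTOFF else "periphery"

# Mean literacy by zone: the kind of systematic core/periphery gradient
# Skinner documented for many such variables.
by_zone = defaultdict(list)
for name, density, literacy in counties:
    by_zone[zone(density)].append(literacy)

means = {z: sum(vals) / len(vals) for z, vals in by_zone.items()}
print(means)
```

Skinner's actual classification used multiple indicators and four graded levels rather than a single binary cutoff; the sketch only shows the shape of the inference, from a physical-economic classification to a social-cultural gradient.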


Another key idea in Skinner's work is his analysis of city systems into a spatial and functional hierarchy. He argued that it is possible to distinguish clearly between higher-level and lower-level urban places, and that there is an orderly arrangement of economic functions and marketing scope associated with the various urban places in a macroregion.


So regional analysis of China is a key contribution in Skinner's work. But Skinner did not restrict his research to China alone. In the 1980s he also did significant work on Japanese demography, family structure, and female infanticide (for example, "Reproductive Strategies and the Domestic Cycle among Tokugawa Villagers," an AAS presentation in 1988).

And he brought his regional systems analysis to bear on France in an extended piece of research in the late 1980s. The maps that follow are drawn from an unpublished conference paper titled "Regional Systems and the Modernization of Agrarian Societies: France, Japan, China," dated 1991. This paper builds upon a 1988 paper titled "The Population Geography of Agrarian Societies: Regional Systems in Eurasia."

This analysis builds a view of France as a set of interrelated regions with core-periphery structure.  Through the series of working maps Skinner painstakingly constructs an empirically based analysis of the economic regions of France in the mid-nineteenth century.  And Skinner then asks one of his typically foundational questions: how do these geographical features play a causal role in cultural and demographic characteristics?





This map of never-married/married female ratios is one illustration of Skinner's effort to relate social, cultural, and demographic variables to the core-periphery structure of a region.  The pattern of high ratios corresponds fairly well, across the map of France, to the regions identified by demographic and agricultural factors.  And this serves to confirm the underlying idea -- that economic regionalization has major consequences for cultural and demographic behavior.


The same is true of the patterns of female life expectancy and net migration; here again we find the kind of regionalization of important social variables that Skinner documents in great detail in late imperial China.


Finally, Skinner also played an important role as a "macro-historian" of China.  His 1985 Presidential Address to the Association for Asian Studies was a tour-de-force, bringing his macroregional analysis into a temporal framework (Skinner, G. William. 1985. Presidential Address: The Structure of Chinese History. Journal of Asian Studies XLIV (2):271-92).  In this piece he demonstrates a "long-wave" set of patterns of economic growth and contraction in two widely separated macroregions, and he argues that we understand China's economic history better when we see these sub-national patterns.  He analyzes the economic and population history of North China and the Southeast Coast over several centuries, and demonstrates that the two regions display dramatically different economic trajectories over the longue durée.  Skinner brings Braudel to China.

Here is the pattern he finds for two macroregions over a centuries-long expanse of time.  And significantly, if these patterns were superimposed into a "national" pattern, it would show a pretty much flat performance, since the two macroregions are significantly out of phase in their boom and bust cycles.
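The aggregation point can be illustrated numerically: two regional cycles of equal amplitude but opposite phase sum to a nearly flat national series. A toy example with sinusoidal "macroregional" cycles (the period, amplitude, and region names are purely illustrative, not estimates from Skinner's data):

```python
import math

years = range(0, 300, 10)
PERIOD = 150.0  # length of a long-wave cycle, in years (illustrative)

# Two regional series, each oscillating around a baseline of 1.0,
# exactly half a cycle out of phase with one another.
north = [1.0 + 0.5 * math.sin(2 * math.pi * t / PERIOD) for t in years]
southeast = [1.0 + 0.5 * math.sin(2 * math.pi * t / PERIOD + math.pi)
             for t in years]

# Each region booms and busts, but the averaged "national" series is flat,
# because the regional cycles cancel.
national = [(a + b) / 2.0 for a, b in zip(north, southeast)]
print(max(national) - min(national))  # ~0: the cycles cancel
```

This is the sense in which a purely national time series can conceal the most interesting dynamics: the variation lives at the regional level, and averaging destroys it.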



Finally, an enduring contribution that Skinner made is his cheerful disregard of disciplinary boundaries. Economic anthropology, regional studies, demography, urban studies, history ... Skinner moved freely among all these and more. It was topics and questions, not disciplinary strictures, that guided Skinner's fertile and rigorous imagination.  And area specialists and social scientists alike can fruitfully draw on continued study of his research.  Fortunately, work is underway to make Skinner's unpublished research and data available to other scholars.  Here are some major projects:
  • Data and maps are being curated and presented at Harvard. Here is a beta site and here is the platform the China GIS team is using at AfricaMap.
  • The Skinner Archive at Harvard (link)
  • Skinner's unpublished papers and research materials are being digitized and presented at the University of Washington.  Here is a link.
  • The China Historical GIS project at Fudan University is presenting an ambitious digital mapping collection as well (link). 
(Presented at the Association for Asian Studies, Philadelphia, March 2010; panel on Skinner's legacy.)