[reposted from deadvoles.wordpress.com/]
Social systems are dynamic, internally heterogeneous, and loosely coupled. Some may object to my use of the term ‘system’, and certainly the word carries a lot of baggage. By calling something a system, I am merely drawing attention to the fact that it admits descriptions in terms of parts, their properties, and the relationships between these. As a statement about something, this adds very little, since all but possibly the simplest things can be described in such terms; still, we must call ‘it’ something, and by calling ‘it’ a ‘system’ I invite an analysis in terms of its constituent parts. That last sentence is slightly misleading, however, because it suggests both that we can presuppose the thing (Brian Cantwell Smith’s Criterion of Ultimate Concreteness) and, probably more importantly, that any such thing is decomposable into parts, properties, and relations in only one right way. This is not the case at all: all but possibly the simplest things admit many analyses, distinguishing different parts, relations, and properties at different levels of granularity, precision, and accuracy. This needn’t be taken as an assertion of metaphysics so much as an assertion of pragmatism.
Obviously, which sorts of descriptions are best is relative to purpose, context, and circumstance (if those are indeed three distinct things), but that only means that some difference between those descriptions must account for their different utilities. A variety of information-theoretic approaches can be applied to such a problem. We can look at the complexity of the description itself, using something like algorithmic information theory; we can try to measure the amount of uncertainty a description reduces, using Shannon’s mathematical theory of communication; or we can look at the various approaches to measuring semantic information content that have been introduced into the philosophy literature. In a very general sense, however, a good decomposition is one which is coherent and approaches some optimal ratio between the amount of information that can be extracted and the cost of extracting it.
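To make the information/cost trade-off a little more concrete, here is a toy illustration of my own (not part of the original argument): Shannon entropy gives the average number of bits an optimal code needs per observation, so a description of a homogeneous system is cheap to transmit while a heterogeneous one costs more.

```python
import math
from collections import Counter

def shannon_entropy(symbols):
    """Average bits per symbol under an optimal code for the
    empirical distribution of `symbols` (Shannon entropy)."""
    counts = Counter(symbols)
    n = len(symbols)
    return sum((c / n) * math.log2(n / c) for c in counts.values())

# A perfectly homogeneous population needs 0 bits per observation;
# a population split evenly across four types needs 2 bits each.
homogeneous = ["A"] * 16
heterogeneous = ["A", "B", "C", "D"] * 4
print(shannon_entropy(homogeneous))    # 0.0
print(shannon_entropy(heterogeneous))  # 2.0
```

The same intuition extends to algorithmic information theory: the shorter the program that reproduces the description, the cheaper the decomposition.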
So, social phenomena admit many possible decompositions, and some may be better than others for some purposes and in some contexts. But here I want to ask: given the current state of social science, and our increasingly dire need for sound policy advice, what sorts of descriptions are we in want of? To put this slightly differently: what sorts of descriptions are needed to improve our explanations of social phenomena, both to advance our theoretical understanding and to advance our practical ability to provide valuable policy advice? That’s a big question (of course), and I don’t think it has one answer (wink), so I want to focus specifically on the sorts of descriptions that, to put it crudely, would enable us to do good macro.
Let me stop myself here and express a view I adopted fairly early in my intellectual life: it’s long past time we stopped trying to explain macro-level social phenomena by projecting individual psychological or behavioral traits onto society. Society is not the individual writ large. Nor is society something as orderly and well-engineered as, say, a mechanical clock. I can understand most of what I need to know about how a mechanical clock works by understanding how all the gears, springs, and other parts fit together: their respective properties and the dynamic relations between them. I don’t have to go much deeper than that. Their precise substance could be plastic, or brass, or wood: as long as they are rigid and sturdy enough to do their jobs, I don’t need to know. But in the open-ended, constantly evolving, and boundary-transgressing world of human social systems, that sort of crude decomposition can only get us so far. Put another way, descriptions which rely on the stability (in identity, function, etc.) of things like organizations and institutions (the low-hanging fruit on the tree of knowledge) only help us when everything stays the same. But things don’t stay the same, even if for a long time it looks like nothing is changing. We see this in biological systems, for example, in which genetic diversity accumulates hidden by phenotypic homogeneity under some general regime; when that regime changes, or some internal tipping point is reached, that hidden diversity rapidly becomes manifest in the distribution of phenotypes in the population.
As I said, society is heterogeneous, dynamic, and loosely coupled. By loosely coupled I mean that, at reasonable levels of precision, most of its parts exhibit varying degrees of autonomy. Of course, autonomy is a contentious notion, but at the very least it means that the behavior of the parts (in some natural decomposition) is determined to a great degree by internal state rather than external inputs. That is, the parts exhibit relatively high degrees of independence from one another. Not too much, of course; but not too little either, or they’d just be like the gears in some clock.
Back when, well, back when I started reading the things that led me to start thinking these sorts of things, the calls to arms were ‘population thinking’ and ‘emergence’. The idea was to move toward ways of conceptualizing problems that avoid the traps of Platonistic essentialism. In particular, that meant thinking about heterogeneous sets of individuals, and how the properties of their aggregates arise through their interactions. Methodologically, though to varying degrees of fidelity, this has been expressed in the rise of a number of interrelated approaches to modeling social systems. Fueled by advances in graph theory (especially from work in computer science) and the new ‘social’ web, we have the blooming of social network analysis, which largely seeks to explain aggregate phenomena via the structural properties of social networks (however they end up being defined). In addition to (and in some ways complementary to) social network analysis, there have been a variety of computational approaches to modeling, especially agent-based approaches, which study how aggregate behavior arises from the interactions of modeled individual agents in some domain or problem environment. These come in an abundance of variants too numerous to describe here.
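A minimal sketch of the agent-based style, assuming nothing beyond the classic one-dimensional variant of Schelling’s segregation model: individually mild preferences, acted on locally, still generate strongly sorted aggregates. (The parameters and dynamics here are my own illustrative choices, not any particular model from the literature.)

```python
import random

def similar_fraction(agents, i, radius=2):
    """Fraction of agent i's nearby neighbors sharing its type."""
    lo, hi = max(0, i - radius), min(len(agents), i + radius + 1)
    neighbors = [agents[j] for j in range(lo, hi) if j != i]
    return sum(a == agents[i] for a in neighbors) / len(neighbors)

def schelling(n=100, steps=5000, tolerance=0.5, seed=1):
    """1-D Schelling-style dynamics: two unhappy agents of different
    types (each with too few same-type neighbors) trade places."""
    rng = random.Random(seed)
    agents = [rng.choice("AB") for _ in range(n)]
    for _ in range(steps):
        i, j = rng.randrange(n), rng.randrange(n)
        if (agents[i] != agents[j]
                and similar_fraction(agents, i) < tolerance
                and similar_fraction(agents, j) < tolerance):
            agents[i], agents[j] = agents[j], agents[i]
    return agents

final = schelling()
avg = sum(similar_fraction(final, i) for i in range(len(final))) / len(final)
print(round(avg, 2))  # average local same-type fraction after sorting
```

The point of the exercise is the mismatch between micro and macro: no agent demands more than half its neighbors be like itself, yet the aggregate pattern that emerges is typically far more clustered than that.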
In a recent post, Daniel Little discusses how what he calls methodological localism emphasizes two ways in which people are socially embedded: agents are socially situated and socially constituted. By socially situated he means that agents are locally embedded within systems of constraints: systems of social relations and institutions that determine the opportunities, costs, and choices available, i.e., the ‘game’ that agents have to play. Or, to quote Marx:
Men make their own history, but they do not make it as they please; they do not make it under self-selected circumstances, but under circumstances existing already, given and transmitted from the past.
The social constitution of agents is a more subtle thing, but one that anthropologists are generally acutely aware of. People are encultured beings. Their behavioral and cognitive repertoire comes into being as part of ongoing social interaction. It is learned, but the learning is not simply a matter of knowledge acquisition; it is a matter of agent becoming: we fully exploit the affordances of our developmental plasticity. To say that I am an American is not simply to say that I have adopted a convenient label, but to assert an embodied fact.
Little goes on to discuss how these two perspectives on social embeddedness give rise to differing approaches to modeling social phenomena:
These two aspects of embeddedness provide the foundation for rather different kinds of social explanation and inquiry. The first aspect of social embeddedness is entirely compatible with a neutral and universal theory of the agent — including rational choice theory in all its variants. The actor is assumed to be configured in the same way in all social contexts; what differs is the environment of constraint and opportunity that he or she confronts. This is in fact the approach taken by most scholars in the paradigm of the new institutionalism, it is the framework offered by James Coleman in Foundations of Social Theory, and it is also compatible with what analytical sociologists refer to as “structural individualism”. It also supports the model of “aggregative” explanation — explain an outcome as the result of the purposive actions of individuals responding to opportunities and constraints.
The second aspect, by contrast, assumes that human actors are to some important degree “plastic”, and they take shape in different ways in different social settings. The developmental context — the series of historically specific experiences the individual has as he/she develops personality and identity — leads to important variations in personality and agency in different settings. So just knowing that the local social structure has a certain set of characteristics — the specifics of a share-cropping regime, let us say — doesn’t allow us to infer how things will work out. We also need to know the features of identity, perception, motivation, and reasoning that characterize the local people before we can work out how they will process the features of the structure in which they find themselves. This insight suggests a research approach that drills down into the specific features of agency that are at work in a situation, and then try to determine how actors with these features will interact socially and collectively.
Clearly, traditional economics is particularly wedded to the first approach. At the individual level, economic agents are typically modeled as completely informed, perfectly rational, self-interested agents. In equilibrium models, say of market behavior, that idealized agent *is* writ large: all agents are the same, face the same situation, and have the same information. It would be fair to say that this simplifying assumption has yielded very interesting formal results, but its adequacy as a foundation for an empirical science can be robustly criticized, though there are indeed circumstances in which, say, markets perform in close accordance with such models.
The behavioral revolution in economics of the last twenty years or so introduced various sorts of ‘boundedly rational’ agents. For example, Tversky and Kahneman demonstrated a number of ways in which real human agents violate these assumptions. In particular, their prospect theory holds that people have distinct utility functions for gain and loss domains (and that these domains are subject to framing effects). Generally speaking, Tversky and Kahneman found that people are risk-avoiding when facing gains and risk-seeking when facing losses. In models that use such agents, however, the agents are usually assumed to have similar enough risk preferences to justify ignoring individual differences. So while prospect theory’s agents are more psychologically ‘real’ than Homo economicus, they clearly fall within Little’s first domain. Other models do include limited varieties of agents, usually agents with fixed strategies or preferences of one kind or another. What is most frequently omitted, perhaps because it is hard to model and hard to analyze, is the adaptive agent: agents who change and grow, agents who are socially constituted.
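For concreteness, the gain/loss asymmetry can be sketched with the standard parametric value function from Tversky and Kahneman’s cumulative prospect theory; the parameter values below are their published median estimates, and this is an illustration of the shape of the function, not of any particular model discussed here.

```python
def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Tversky-Kahneman value function: concave over gains
    (risk aversion), convex over losses (risk seeking), and
    steeper for losses than for gains (loss aversion)."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** beta)

# Losses loom larger: a $100 loss hurts more than a $100 gain pleases.
print(round(prospect_value(100), 1))   # 57.5
print(round(prospect_value(-100), 1))  # -129.5
```

Note that the function is defined over changes relative to a reference point, not over total wealth, which is exactly where framing effects enter: the same outcome can be coded as a gain or a loss depending on how the situation is described.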
Recently, VOX, a policy analysis forum for economists, hosted a debate, ‘What’s the Use of Economics?’, on the future of economics after the global economic crisis. In his contribution to this debate, Andrew Haldane, Executive Director of Financial Stability at the Bank of England, blames academic and professional economics for a number of sins contributing to the crisis. Among them is a failure to adequately take into account the heterogeneity of economic agents in economic models:
These cliff-edge dynamics in socioeconomic systems are becoming increasingly familiar. Social dynamics around the Arab Spring in many ways closely resembled financial system dynamics following the failure of Lehman Brothers four years ago. Both are complex, adaptive networks. When gripped by fear, such systems are known to behave in a highly non-linear fashion due to cascading actions and reactions among agents. These systems exhibit a robust yet fragile property: swan-like serenity one minute, riot-like calamity the next.
These dynamics do not emerge from most mainstream models of the financial system or real economy. The reason is simple. The majority of these models use the framework of a single representative agent (or a small number of them). That effectively neuters the possibility of complex actions and interactions between agents shaping system dynamics…
Conventional models, based on the representative agent and with expectations mimicking fundamentals, had no hope of capturing these system dynamics. They are fundamentally ill-suited to capturing today’s networked world, in which social media shape expectations, shape behaviour and thus shape outcomes.
This calls for an intellectual reinvestment in models of heterogeneous, interacting agents, an investment likely to be every bit as great as the one that economists have made in DSGE models over the past 20 years. Agent-based modelling is one, but only one, such avenue. The construction and simulation of highly non-linear dynamics in systems of multiple equilibria represents unfamiliar territory for most economists. But this is not a journey into the unknown. Sociologists, physicists, ecologists, epidemiologists and anthropologists have for many years sought to understand just such systems. Following their footsteps will require a sense of academic adventure sadly absent in the pre-crisis period.
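The ‘cliff-edge’ dynamics Haldane describes have a famous minimal model in sociology: Granovetter’s threshold model of collective behavior. The sketch below is my own illustration of it, and shows why heterogeneity matters in a way no representative agent can capture: changing a single agent’s threshold, which barely moves the population average, flips the aggregate outcome from full cascade to no cascade.

```python
def cascade_size(thresholds):
    """Each agent joins once the number already acting meets its
    threshold; iterate activation to a fixed point (Granovetter)."""
    active = 0
    while True:
        now_active = sum(1 for t in thresholds if t <= active)
        if now_active == active:
            return active
        active = now_active

# 100 agents with thresholds 0, 1, 2, ..., 99: each recruit
# tips the next agent over, and the cascade runs to completion.
print(cascade_size(list(range(100))))                 # 100
# Change ONE agent's threshold from 1 to 2: the chain breaks
# immediately and the 'riot' never gets past its instigator.
print(cascade_size([0, 2, 2] + list(range(3, 100))))  # 1
```

A model tracking only the mean threshold would see these two populations as nearly identical; their collective behavior could hardly be more different, which is the representative-agent blind spot in miniature.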