Evolutionary psychologists point to a widening gap between rapid, non-linear change in the human social environment and the relative fixity of individuals' behavioral dispositions as evidence of "mismatch" between the conditions our minds evolved under and contemporary lifestyles (think fast food and pornography). This mismatch has produced a series of pervasive cognitive biases, hyperbolic discounting among them, which impair long-term rationality and thus seriously compromise the potential growth and development of modern, global economies, at least relative to some hypothetical ideal optimum.
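The hyperbolic-discounting bias mentioned above can be made concrete with a toy calculation. The sketch below (Python; the dollar amounts, delays, and discount parameters are illustrative assumptions, not figures from any study) shows the classic preference reversal that an exponential discounter can never exhibit:

```python
def exponential_value(amount, delay, r=0.05):
    """Exponential discounting: preferences are time-consistent."""
    return amount * (1 - r) ** delay

def hyperbolic_value(amount, delay, k=0.2):
    """Hyperbolic discounting, V = A / (1 + k*D): near-term rewards
    are weighted disproportionately heavily."""
    return amount / (1 + k * delay)

# $100 today vs. $110 tomorrow: the hyperbolic agent grabs the $100.
print(hyperbolic_value(100, 0) > hyperbolic_value(110, 1))    # True
# The same pair pushed 30 days out: now the agent waits for the $110.
print(hyperbolic_value(100, 30) < hyperbolic_value(110, 31))  # True
# An exponential discounter ranks the two options identically in both
# cases, so its choices never reverse as the delay shifts.
```

The reversal, not impatience per se, is what makes the bias "irrational" in the economist's sense: the agent's ranking of the very same pair of payoffs flips depending on when it is asked.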
A long-established convention of mainstream analytical economics is the mathematical presumption that agents, be they individuals, households, or firms, are perfectly rational. As a consequence, most traditional models assume that economic agents are exceptionally proficient at forming expectations about the future value of a good or service. Within this convenient framework, agents are modeled so as to discriminate perfectly between competing economic payoffs, without so much as delay or error.
Unfortunately, over the past century this oft-overextended free enterprise model of perfect market competition, duly familiar to undergraduate students everywhere today, has served as chief architect of a series of increasingly rigid mathematical propositions about the supposed extreme generality of human social behavior. A "representative agent" in this world consistently maximizes its expected utility, subject to a well-defined budget constraint, by applying perfect and complete information about its environment.
Leonard Nimoy aside, few among us could successfully pose as other-worldly beings. So it is time we ask: how might our intuitions about prediction better serve the reality of our economic needs? If there truly are no Vulcans on this Earth, then how did so Spartan a model come to reign supreme among our best thinking about the future course of global society? Clean mathematics can be a boon, but when misapplied it becomes highly quixotic.
Despite the analytical tractability of the Neoclassical approach, it was within this thoroughly unrealistic tradition (Grossman & Stiglitz, 1980, p. 393) that the world bore witness to widespread systemic failure in its financial markets beginning in 2008 (and not for the first time). The intellectual hegemony of the Efficient Markets Hypothesis (EMH), whose practical implications remain far-reaching and which at least incidentally culminated in the collapse of international securities and brokerage firms such as Bear Stearns, was surely a testament to the impunity with which our preeminent policymakers were free to implement their own idiosyncratic (and unremittingly free-market) ideals.
The Glass-Steagall Act, enacted after unregulated financial markets were admonished for their conspicuous role in propagating the Great Depression of the 1930s, restricted financial consolidation until its repeal in 1999. But if, as per the EMH, markets attain general equilibrium, whereby supply equals demand across all markets, and asset prices rationally and instantaneously reflect all available information (Malkiel, 2003, pp. 59-61), then nothing need be feared of financial innovation, no matter how precarious Wall Street's hottest new commercial products. Risks would be compensated in proportion to rewards. Markets would clear, assets would trade liquidly, and everything would be alright.
"Human decision-makers are characteristically predisposed to act on a relatively fixed complex of evolutionarily-entrenched preferences."
An old adage, presently heralded by mainstream financial economists such as Burton Malkiel (2003), maintains that "[efficient financial markets] don't allow investors to earn above-average risk adjusted rewards" (p. 60). They were not wrong, but they were ignorant of some aspects of their own theory, and nearly a third of a century behind. Early in the 1980s, experimental cognitive psychologists Daniel Kahneman and Amos Tversky began to suspect that, contrary to traditional economic theory, human decision-makers are characteristically predisposed to act on a relatively fixed complex of evolutionarily-entrenched preferences. Although they were not the first to synthesize concepts from evolutionary psychology (EP), they did develop a framework (Prospect Theory) within which EP would later explain the nature of systematic cognitive biases in decision-making under uncertainty.
According to evolutionary psychology, a large majority of such cognitive biases have, since the time of our forebears, become ineffective, or at least sub-optimal with respect to maximizing financial payoffs in modern market transactions (Lo, 2004). That is, in a contemporary context, instinctively reasonable decision rules no longer succeed as domain-general strategies: cognitive shortcuts that were once highly efficient at resolving economic uncertainty, at least in the evolutionary environment of our ancestors, now struggle to accommodate the contingencies of a rapidly changing world (Guo, 2005). The heuristic rules of the past, it seems, are continually outmoded, in the sense that financial stakeholders in speculative markets are compelled, on pain of elimination, to persistently co-adapt to their conspecifics within a dense ecology of competing investment strategies.
If Homo economicus of the mainstream Neoclassical tradition is said to be utility-maximizing, then the most appropriate representative agent for a psychologically informed model of consumer choice is said to be utility-'satisficing'. The term, coined by Herbert Simon (1955), describes an individual's limited capacity to identify optimal solutions under local informational constraints. According to Simon, the best an individual can do in terms of maximizing his or her utility is to aim not for a true optimum (where true marginal benefit equals true marginal cost), but for a satisfactory level of achievement consistent with expectations of probable survival outcomes.
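Simon's distinction can be sketched in a few lines. In the toy search problem below (Python; the payoffs, aspiration level, and per-option search cost are all invented for illustration), a satisficer stops at the first option that clears its aspiration level, while the Neoclassical maximizer pays to inspect every alternative:

```python
def satisfice(payoffs, aspiration, search_cost=0.5):
    """Inspect options in order; accept the first one that meets the
    aspiration level (Simon, 1955), netting out inspection costs."""
    cost = 0.0
    for p in payoffs:
        cost += search_cost
        if p >= aspiration:
            return p - cost
    return payoffs[-1] - cost  # nothing qualified: settle for the last option

def maximize(payoffs, search_cost=0.5):
    """The Neoclassical benchmark: inspect everything, pick the best."""
    return max(payoffs) - search_cost * len(payoffs)

offers = [3, 8, 5, 9, 6, 7, 4, 8]
# The satisficer inspects two options and takes the 8; the maximizer
# inspects all eight to capture one extra unit of raw payoff.
print(satisfice(offers, aspiration=7))  # 7.0  (8 minus 2 * 0.5)
print(maximize(offers))                 # 5.0  (9 minus 8 * 0.5)
```

When search is costly, the "sub-optimal" stopping rule nets more than exhaustive optimization, which is precisely Simon's point.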
"Individuals are subject to limitations in their immediate computational resources, and so are only boundedly rational."
Lo (2004) reaffirms the position that individuals are subject to limitations in their immediate computational resources, and are therefore boundedly rational (p. 17). Although philosophically satisfying, without a method of calculating the associated payoffs, satisficing is relegated to an irrelevant tautology in economic analysis. What use is recognizing that the actions of economic agents are sub-optimal if their associated levels of utility cannot be extrapolated for purposes of prediction?
Despite early theoretical hurdles, this characterization of human behavior is much more in tune with economic reality, and closely parallels research on how people actually respond to the kinds of complex reasoning tasks which influence their social and economic decisions. The Wason selection task, for example, measures the extent to which individuals can detect violations of conditional rules of logic in different contexts. In the experiment, subjects were far more likely (65-80%) to reason logically when the rules were framed as a social exchange (a task involving cassava root and people with tattoos) than when the task involved simply identifying shapes and colors (Cosmides & Tooby, 1997). It is important to note that familiarity with the particular experimental objects (South American plants, the color wheel, etc.) was not what mattered; rather, it was familiarity with certain universal, and hence evolutionarily-ingrained, social circumstances (instances of identity-recognition and deception) that elicited more accurate responses from subjects. The results of experiments such as this have been used repeatedly by psychologists to support the hypothesis that human beings favor conclusions based on generalizations (inductive reasoning from the frequency of past events) over those derived through a series of discrete logical steps, or by way of lightning-fast deductive (Vulcan-like) calculations:
"In making predictions and judgments under uncertainty, people do not appear to follow the calculus of chance or the statistical theory of prediction [alone]. Instead, they rely on a limited number of heuristics which sometimes yield reasonable judgments and sometimes lead to severe and systematic errors" (Kahneman & Tversky, 1973, p. 237).
This is the Frequentist Hypothesis as reported in Guo (2005, pp. 11-12). The generality of inductive problem-solving in economic agents thus carries a number of implications for resolving financial paradoxes once thought to be examples of persistent irrationality, and sheds light, as well, on economic phenomena which do indeed run afoul of our boundedly rational expectations. The equity premium puzzle, for example, follows from the observation that the annual real returns on relatively risky assets such as stocks (7%) have, for roughly a century, far exceeded those of relatively riskless assets such as treasury bills and bonds (1%), yet despite this margin, investors remain far more willing to hold bonds (Benartzi & Thaler, 1995, pp. 73-74).
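To put the puzzle's magnitudes in perspective, compounding those cited average real returns over a century (a back-of-envelope calculation that ignores taxes, fees, and year-to-year variation around the averages) shows how extreme the gap becomes:

```python
# $1 of real purchasing power compounded for 100 years at the average
# real returns cited above: ~7% for stocks, ~1% for treasury bills.
stocks = 1.07 ** 100
bills = 1.01 ** 100
print(f"stocks: ${stocks:.0f}")  # stocks: $868
print(f"bills:  ${bills:.2f}")   # bills:  $2.70
```

A three-hundredfold difference in terminal wealth is what makes the public's preference for bonds so puzzling under the standard rational model.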
"[We] rely on a limited number of heuristics which sometimes yield reasonable judgments and sometimes lead to severe and systematic errors."
According to Mehra and Prescott (1985), risk aversion by itself is far too limited a causal factor to explain the prevalence of bond- over stock-holders. Benartzi and Thaler (1995) argue that because people are more sensitive to losses than to proportionally similar gains, as per the theory of loss aversion (Guo, 2005, pp. 24-25), they find attractive those assets they are not tempted to evaluate habitually. That is, a strong initial intention to hold an asset for a long period (typically bonds and treasury bills) implies that the owner is less likely to maintain a mental account of its price trajectory than if he intended simply to sell the asset shortly after purchase. Because most exchange-traded stocks are held under a short time horizon, and exhibit higher volatility in their returns than bonds, the unwillingness of speculative investors to hold more stocks, even though the payoffs are likely to be more favorable, appears to stem from a combination of loss aversion and frequent evaluation of returns.
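The evaluation-frequency argument can be illustrated with a small simulation. The sketch below (Python) uses a piecewise-linear stand-in for the prospect-theory value function, a loss-aversion coefficient of 2.5, and normally distributed annual stock returns with a 7% mean and 20% standard deviation; these functional forms and parameters are illustrative assumptions, not Benartzi and Thaler's exact specification:

```python
import random

def pt_value(x, lam=2.5):
    """Piecewise-linear loss aversion: losses loom lam times larger
    than equal-sized gains (an assumed coefficient)."""
    return x if x >= 0 else lam * x

def mean_pt_utility(horizon_years, mu=0.07, sigma=0.20, trials=20000, seed=1):
    """Average prospect value of cumulative stock gains over one
    evaluation period of the given length, by Monte Carlo."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        wealth = 1.0
        for _ in range(horizon_years):
            wealth *= 1.0 + rng.gauss(mu, sigma)
        total += pt_value(wealth - 1.0)
    return total / trials

# A loss-averse investor who evaluates yearly finds stocks worse than a
# sure 1% bond return; one who evaluates once a decade prefers stocks.
print(mean_pt_utility(1) < pt_value(1.01 ** 1 - 1))    # True
print(mean_pt_utility(10) > pt_value(1.01 ** 10 - 1))  # True
```

Nothing about the stock's return distribution changes between the two lines; only the evaluation horizon does, which is the sense in which the loss aversion is "myopic".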
Benartzi and Thaler (1995) refer to this as "myopic loss aversion", and propose it as a solution to the equity premium puzzle. In Guo's (2005) dynamic programming model of evolutionarily risk-sensitive optimal foraging, organisms optimize their long-run survival by becoming less risk-averse as their energy reserves decline. That is, as the window between the present and potential death narrows (analogous to a more frequent evaluation period in stock-holding), the simulated organisms select increasingly riskier patches on which to graze for resources. Under conditions of high survival probability, organisms fare much better by selecting a patch associated with a lower risk of both starvation and predation (just as bonds and treasury bills are prioritized when the investor is under the impression, however often inaccurate that perception may be, that he is less likely to lose out).
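The qualitative logic of such a model can be reproduced in a few lines of dynamic programming. In the toy version below (Python; the horizon, reserve threshold, and patch payoffs are invented and far simpler than Guo's actual model), an organism must end the season with reserves at or above a threshold to survive, choosing each period between a safe patch (no net energy change) and a risky patch (gain or lose one unit with equal probability):

```python
from functools import lru_cache

T, R = 20, 10  # foraging periods; reserves required to survive the season

@lru_cache(maxsize=None)
def survival(e, t):
    """Maximum probability of surviving (reserves >= R at time T),
    starting with reserves e at time t. Reserves of zero mean starvation."""
    if e <= 0:
        return 0.0
    if t == T:
        return 1.0 if e >= R else 0.0
    safe = survival(e, t + 1)                                    # net change 0
    risky = 0.5 * survival(e - 1, t + 1) + 0.5 * survival(e + 1, t + 1)
    return max(safe, risky)

# With ample reserves, coasting on the safe patch guarantees survival;
# with low reserves, playing safe keeps reserves flat below the threshold
# and yields certain failure, so only gambling offers any chance at all.
print(survival(R, 0))          # 1.0
print(0 < survival(5, 0) < 1)  # True
```

The risk-sensitive switch falls out of the recursion itself: no preference parameter is tuned, yet the optimal policy is risk-averse when rich and risk-prone when poor, mirroring the bond/stock asymmetry above.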
The Efficient Markets Hypothesis (EMH) contends that because the prices of publicly traded assets, such as stocks, reflect all available market information, the future price trajectory of any such asset is no more predictable than the return on a random portfolio of stocks selected by a blindfolded monkey pointing haphazardly at the financial pages of your local paper. But then what of our persistent obsession with stock valuation?
"The logical possibility of perfect arbitrage immediately precludes the existence of non-random variations in stock prices."
If for any reason a statistically significant pattern were discovered in the financial data, it would be arbitraged so effectively that the transaction costs of pursuing it would quickly exceed the potential gains from capitalizing on it, and profits would tend to zero (Malkiel, 2003, pp. 60-61). In any case, the logical possibility of perfect arbitrage immediately precludes the existence of non-random variations in stock prices. But who is to say that the relevant economic actors could not merely believe, albeit falsely, and within a limited time horizon, that there were in fact predictable dips and swings in the market?
Imagine if above-average risk-adjusted returns were merely presumed to exist. Of course, this is precisely what we see in the present ecology of the stock market: at any given time, some proportion of investors commit to short (sale) and long (purchase) positions based on the calculated fundamental value of their selected stock, and some do so based solely on the assumed positions of other investors. The latter are technical traders: speculators who believe in the predictability of stock market patterns, patterns which arise from the variation among investment 'species' within a large pool of financial strategies.
"The emergence of self-fulfilling prophecies drove fundamental asset prices momentarily into disequilibrium and back."
In the 1990s, complex systems scientists, Brian Arthur prominent among them, produced a computational model (J. Doyne Farmer presents a similar model) in which the hypotheses of both value investors and technical traders were pitted against one another in an agent-based market simulation. The result was that booms and busts were generated endogenously, out of the dynamics of the model itself (LeBaron et al., 1999). Ultimately, the inductive behavior of the stock market participants (their choice, for example, to buy when the value of a stock reached 34 and sell when it dropped to 25, conditions which were not pre-specified) led to self-fulfilling prophecies, which drove fundamental asset prices momentarily into disequilibrium and back, in what seemed to be an emergent, stylized form of business cycle volatility (Waldrop, 1992, pp. 273-274).
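The flavor of such simulations can be captured in miniature. The toy model below (Python) is not the Santa Fe artificial stock market itself but a drastically simplified stand-in: fundamentalists push the price toward a fixed fundamental value, trend-followers extrapolate the latest price move, and every parameter value is an arbitrary choice for illustration. Even so, the interaction of the two strategies is enough to turn small random shocks into endogenous boom-bust swings around the fundamental:

```python
import random
import statistics

def simulate(steps=500, fundamental=50.0, w_fund=0.2, w_trend=0.8, seed=3):
    """Price adjusts in proportion to the excess demand of two trader
    types: fundamentalists (buy below value, sell above it) and
    trend-followers (buy rising prices, sell falling ones)."""
    rng = random.Random(seed)
    prices = [fundamental, fundamental]
    for _ in range(steps):
        p, prev = prices[-1], prices[-2]
        fund_demand = w_fund * (fundamental - p)   # mean-reverting pressure
        trend_demand = w_trend * (p - prev)        # self-reinforcing pressure
        prices.append(p + fund_demand + trend_demand + rng.gauss(0.0, 0.5))
    return prices

prices = simulate()
# Trend-chasing amplifies the noise into damped boom-bust cycles: the
# price repeatedly overshoots the fundamental and swings back.
crossings = sum(1 for a, b in zip(prices, prices[1:])
                if (a - 50.0) * (b - 50.0) < 0)
print(crossings, statistics.pstdev(prices))
```

With the trend-followers' weight dialed down, the same model produces dull mean-reversion; the cycles are a product of the mix of strategies, not of any exogenous shock sequence, which is the essential point of the LeBaron et al. result.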
An increasingly popular approach to economic modeling employs heterogeneous (varying) agent preferences under simulated market conditions, with the implication that non-linearities, such as asymmetric and incomplete information, emerge naturally as a by-product of agents' interactions. This program contrasts sharply with orthodox economics in that its basic unit of analysis, "economic man", is far more robust and true to reality than his equivalent in the traditional Neoclassical paradigm, a program which took great pains to champion its grand mathematical edifice of (i) perfect rationality, (ii) utility maximization, and (iii) market equilibrium over the exceedingly more plausible, and technically far simpler, computational alternatives inspired by established theories of evolution and behavioral ecology.
Benartzi, S., & Thaler, R. H. (1995). Myopic Loss Aversion and the Equity Premium Puzzle. Quarterly Journal of Economics, 110(1), pp. 73-92.
Cosmides, L., & Tooby, J. (1997). Evolutionary Psychology: A Primer. University of California, Santa Barbara, Center for Evolutionary Psychology Website. Retrieved from http://www.psych.ucsb.edu/research/cep/primer.html
Grossman, S. J., & Stiglitz, J. E. (1980). On the Impossibility of Informationally Efficient Markets. The American Economic Review, 70(3), pp. 393-408. Retrieved from http://www.math.ku.dk/kurser/2003-1/invfin/GrossmanStiglitz.pdf
Guo, Kenrick. (2005). Examining Financial Puzzles from an Evolutionary Perspective. Unpublished master’s thesis, MIT Sloan School of Management, Cambridge, Massachusetts. Retrieved from http://dspace.mit.edu/bitstream/handle/1721.1/34147/68907019.pdf?sequence=1
Kahneman, D., & Tversky, A. (1973). On the Psychology of Prediction. Psychological Review, 80, pp. 237-251. doi:10.1037/h0034747
LeBaron, B., Arthur, W. B., & Palmer, R. (1999). Time Series Properties of an Artificial Stock Market. Journal of Economic Dynamics and Control, 23(9-10), pp. 1487-1516. Retrieved from http://people.bu.edu/fgourio/artificial_stock_market.pdf
Lo, Andrew W. (2004). The Adaptive Markets Hypothesis: Market Efficiency from an Evolutionary Perspective. Journal of Portfolio Management. Retrieved from http://web.mit.edu/alo/www/Papers/JPM2004.pdf
Malkiel, Burton G. (2003). The Efficient Market Hypothesis and Its Critics. Journal of Economic Perspectives, 17(1), pp. 59-82. Retrieved from http://www-stat.wharton.upenn.edu/~steele/Courses/434/434Context/EfficientMarket/malkiel.pdf
Mehra, R., & Prescott, E. C. (1985). The Equity Premium: A Puzzle. Journal of Monetary Economics, 15, pp. 145-161. doi:10.1016/0304-3932(85)90061-3
Simon, Herbert. (1955). A Behavioral Model of Rational Choice. Quarterly Journal of Economics, 69, pp. 69-99. Retrieved from http://www.math.mcgill.ca/vetta/CS764.dir/bounded.pdf
Waldrop, M. Mitchell (1992). Complexity: The Emerging Science at the Edge of Order and Chaos. New York, NY: Simon & Schuster.