Economic Modeling: A Case of Unrewarded Complexity


This is the fifth entry in a series of loosely connected posts on economics. Previous entries are here, here, here, and here.

I wrote “About Economic Forecasting” twelve years ago. Here are some highlights:

In the previous post I disparaged the ability of economists to estimate the employment effects of the minimum wage. I’m skeptical because economists are notoriously bad at constructing models that adequately predict near-term changes in GDP. That task should be easier than sorting out the microeconomic complexities of the labor market.

Take Professor Ray Fair, for example. Prof. Fair teaches macroeconomic theory, econometrics, and macroeconometric models at Yale University. He has been plying his trade since 1968, first at Princeton, then at M.I.T., and (since 1974) at Yale. Those are big-name schools, so I assume that Prof. Fair is a big name in his field.

Well, since 1983, Prof. Fair has been forecasting changes in real GDP over the next four quarters. He has made 80 such forecasts based on a model that he has undoubtedly tweaked over the years. The current model is here. His forecasting track record is here. How has he done? Here’s how:

1. The median absolute error of his forecasts is 30 percent.

2. The mean absolute error of his forecasts is 70 percent.

3. His forecasts are rather systematically biased: too high when real four-quarter GDP growth is less than 4 percent; too low when it is greater than 4 percent.

4. His forecasts have grown generally worse — not better — with time.

FIGURE 1
[Chart: Fair-model forecasting errors vs. time]
This and later graphs pertaining to Prof. Fair’s forecasts were derived from The Forecasting Record of the U.S. Model, Table 4: Predicted and Actual Values for Four-Quarter Real Growth, at Prof. Fair’s website. The vertical axis of this graph is truncated for ease of viewing; 8 percent of the errors exceed 200 percent.

You might think that Fair’s record reflects the persistent use of a model that’s too simple to capture the dynamics of a multi-trillion-dollar economy. But you’d be wrong. The model changes quarterly. This page lists changes only since late 2009; there are links to archives of earlier versions, but those are password-protected.

As for simplicity, the model is anything but simple. For example, go to Appendix A: The U.S. Model: July 29, 2016, and you’ll find a six-sector model comprising 188 equations and hundreds of variables.

And what does that get you? A weak predictive model:

FIGURE 2
[Chart: Fair-model estimated vs. actual growth rate]

It fails the most important test; that is, it doesn’t reflect the downward trend in economic growth:

FIGURE 3
[Chart: Fair model, year-over-year growth, estimated and actual]

Could I do better? Well, I’ve done better — without knowing it until now — with the simple model that I devised to estimate the Rahn Curve. It’s described in “The Rahn Curve Revisited.” The following quotations and discussion draw on the October 20, 2016, version of that post:

The theory behind the Rahn Curve is simple — but not simplistic. A relatively small government with powers limited mainly to the protection of citizens and their property is worth more than its cost to taxpayers because it fosters productive economic activity (not to mention liberty). But additional government spending hinders productive activity in many ways, which are discussed in Daniel Mitchell’s paper, “The Impact of Government Spending on Economic Growth.” (I would add to Mitchell’s list the burden of regulatory activity, which grows even when government does not.)

[Chart: The Rahn curve]

. . . .

In an earlier post, I ventured an estimate of the Rahn curve that spanned most of the history of the United States. I came up with this relationship (terms modified for simplicity):

G = 0.054 - 0.066F

To be precise, it’s the annualized rate of growth over the most recent 10-year span (G), as a function of F (fraction of GDP spent by governments at all levels) in the preceding 10 years. The relationship is lagged because it takes time for government spending (and related regulatory activities) to wreak their counterproductive effects on economic activity. Also, I include transfer payments (e.g., Social Security) in my measure of F because there’s no essential difference between transfer payments and many other kinds of government spending. They all take money from those who produce and give it to those who don’t (e.g., government employees engaged in paper-shuffling, unproductive social-engineering schemes, and counterproductive regulatory activities).
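To make the lag concrete, here is a minimal sketch of the variable pairing in Python. The function names and the data shapes (gdp and gov_share as dictionaries keyed by year) are my own assumptions for illustration, not code from the original analysis:

    def annualized_growth(gdp, end_year, span=10):
        # Annualized real growth over the span-year period ending in end_year.
        return (gdp[end_year] / gdp[end_year - span]) ** (1.0 / span) - 1.0

    def avg_gov_share(gov_share, end_year, span=10):
        # Average government spending/GDP over the span years that precede
        # the growth period (the lag described in the text).
        start = end_year - 2 * span + 1
        return sum(gov_share[y] for y in range(start, start + span)) / span

    def rahn_estimate(F):
        # The estimated relationship quoted above: G = 0.054 - 0.066F.
        return 0.054 - 0.066 * F

    # Example pairing: growth over 1956-1965 is explained by the average
    # spending share over 1946-1955, i.e.,
    # rahn_estimate(avg_gov_share(gov_share, end_year=1965))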

When F is greater than the amount needed for national defense and domestic justice — no more than 0.1 (10 percent of GDP) — it discourages productive, growth-producing, job-creating activity. And because government spending weighs most heavily on taxpayers with above-average incomes, higher rates of F also discourage saving, which finances growth-producing investments in new businesses, business expansion, and capital (i.e., new and more productive business assets, both physical and intellectual).

I’ve taken a closer look at the post-World War II numbers because of the marked decline in the rate of growth since the end of the war:

[Chart: Real GDP, 1947Q1-2016Q2]

Here’s the revised result (with cosmetic changes in terminology):

G = 0.0275 - 0.347F + 0.0769A - 0.000327R - 0.135P

Where,

G = real rate of GDP growth in a 10-year span (annualized)

F = fraction of GDP spent by governments at all levels during the preceding 10 years

A = the constant-dollar value of private nonresidential assets (business assets) as a fraction of GDP, averaged over the preceding 10 years

R = average number of Federal Register pages, in thousands, for the preceding 10-year period

P = growth in the CPI-U during the preceding 10 years (annualized).

The r-squared of the equation is 0.73 and the F-value is 2.00E-12. The p-values of the intercept and coefficients are 0.099, 1.75E-07, 1.96E-08, 8.24E-05, and 0.0096. The standard error of the estimate is 0.0051, that is, about half a percentage point. (Except for the p-value on the coefficient, the other statistics are improved from the previous version, which omitted CPI).
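For readers who want to see the mechanics behind numbers like these, here is a sketch in Python (using numpy and statsmodels) of how an equation of this form and its summary statistics are estimated. The data are synthetic stand-ins generated around the reported coefficients, not the actual series:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 60  # roughly one overlapping 10-year observation per year

    # Hypothetical stand-ins for the four explanatory series defined above.
    F = rng.uniform(0.20, 0.40, n)   # government spending / GDP
    A = rng.uniform(2.5, 3.5, n)     # business assets / GDP
    R = rng.uniform(20.0, 80.0, n)   # Federal Register pages (thousands)
    P = rng.uniform(0.01, 0.08, n)   # annualized CPI-U growth
    G = (0.0275 - 0.347*F + 0.0769*A - 0.000327*R - 0.135*P
         + rng.normal(0.0, 0.005, n))  # noise near the reported standard error

    X = sm.add_constant(np.column_stack([F, A, R, P]))
    fit = sm.OLS(G, X).fit()
    print(fit.params)          # intercept and coefficients on F, A, R, P
    print(fit.rsquared)        # cf. the reported r-squared of 0.73
    print(fit.f_pvalue)        # cf. what the text calls the "F-value"
    print(fit.pvalues)         # cf. the reported p-values
    print(fit.mse_resid**0.5)  # cf. the standard error of the estimate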

Here’s how the equations with and without P stack up against actual changes in 10-year rates of real GDP growth:

[Chart: Rahn-curve model, actual vs. estimates with and without P]

The equation with P captures the “bump” in 2000, and is generally (though not always) closer to the mark than the equation without P.

What does the new equation portend for the next 10 years? Based on the values of F, A, R, and P for the most recent 10-year period (2006-2015), the real rate of growth for the next 10 years will be about 1.9 percent. (It was 1.4 percent for the version of the equation without P.) The earlier equation (discussed above) yields an estimate of 2.9 percent. The new equation wins the reality test, as you can tell by the blue line in the second graph above.

In fact, the year-over-year rates of real growth for the past four quarters (2015Q3 through 2016Q2) are 2.2 percent, 1.9 percent, 1.6 percent, and 1.3 percent. So an estimate of 1.9 percent for the next 10 years may be optimistic.

I took the data set that I used to estimate the new equation and made a series of out-of-sample estimates of growth over the next 10 years. I began with the data for 1946-1964 to estimate the growth for 1965-1974. I continued by taking the data for 1946-1965 to estimate the growth for 1966-1975, and so on, until I had estimated the growth for every 10-year period from 1965-1974 through 2006-2015. In other words, like Prof. Fair I updated my model to reflect new data, and I estimated the rate of economic growth in the future. How did I do? Here’s a first look:

FIGURE 4
[Chart: Rahn-curve model estimation errors vs. actual values]

The errors get larger with time, but they are far smaller than the errors in Fair’s model (see figure 1).
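The expanding-window procedure just described is simple to express in code. Here is a minimal Python sketch, with synthetic data standing in for the actual 1946-2015 series (the array shapes and coefficients are illustrative only):

    import numpy as np

    def out_of_sample_errors(X, y, first_train):
        # X: (n, k) explanatory variables in time order; y: (n,) realized
        # 10-year growth rates. At each step, re-fit the equation on all
        # data observed so far, then forecast the next period.
        errors = []
        for t in range(first_train, len(y)):
            Xt = np.column_stack([np.ones(t), X[:t]])
            beta, *_ = np.linalg.lstsq(Xt, y[:t], rcond=None)
            pred = np.concatenate(([1.0], X[t])) @ beta
            errors.append(pred - y[t])
        return np.array(errors)

    # Synthetic demonstration; first_train=19 mirrors starting with 1946-1964.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(70, 4))
    y = X @ np.array([-0.35, 0.08, -0.0003, -0.14]) + rng.normal(0, 0.005, 70)
    print(out_of_sample_errors(X, y, first_train=19))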

Not only that, but there’s a much better fit. Compare the following graph with figure 2:

FIGURE 5
[Chart: Rahn-curve model, 10-year real rates of growth, actual and estimated]

Why do the errors in Fair’s model and mine increase with time? Probably because of the erratic downward trend in economic growth, which Fair doesn’t capture in his estimates (see figure 3), but which is matched more closely by my estimates:

FIGURE 6
[Chart: Rahn-curve model, estimated vs. actual]

The moral of the story: It’s futile to build complex models of the economy. They can’t begin to capture the economy’s real complexity, and they’re likely to obscure the important variables — the ones that will determine the future course of economic growth.

A final note: In earlier posts I’ve disparaged economic aggregates, of which GDP is the apotheosis. And yet I’ve built this post around estimates of GDP. Am I contradicting myself?

Not really. There’s a rough consistency in measures of GDP across time, and I’m not pretending that GDP represents anything but an estimate of the monetary value of those products and services to which monetary values can be ascribed.

As a practical matter, then, if you’re a person who wants to know the likely future direction and value of GDP, stick with simple estimation techniques like the one I’ve demonstrated here. Don’t get bogged down in the inconclusive minutiae of a model like Prof. Fair’s.

 

Mathematical Economics


This is the fourth entry in a series of loosely connected posts on economics. Previous entries are here, here, and here.

Economics is a study of human behavior, not an exercise in mathematical modeling or statistical analysis, though both endeavors may augment an understanding of human behavior. Economics is about four things:

  • wants, as they are perceived by the persons who have those wants
  • how people try to satisfy their wants through mutually beneficial, cooperative action, which includes but is far from limited to market-based exchanges
  • how exogenous forces, including government interventions, enable or thwart the satisfaction of wants
  • the relationships between private action, government interventions, and changes in the composition, rate, and direction of economic activity

In sum, economics is about the behavior of human beings, which is why it’s called a social science. Well, economics used to be called a social science, but it’s been a long time (perhaps fifty years) since I’ve heard or read an economist refer to it as a social science. The term is too reminiscent of “soft and fuzzy” disciplines such as history, social psychology, sociology, political science, and civics or social studies (names for the amalgam of sociology and government that was taught in high schools way back when). No “soft and fuzzy” stuff for physics-envying economists.

However, the behavior of human beings — their thoughts and emotions, how those things affect their actions, and how they interact — is fuzzy, to say the least. Which explains why mathematical economics is largely an exercise in mental masturbation.

In my disdain for mathematical economics, I am in league with Arnold Kling, who is the most insightful economist I have yet encountered in more than fifty years of studying and reading about economics. I especially recommend Kling’s Specialization and Trade: A Reintroduction to Economics. It’s a short book, but chock-full of wisdom and straight thinking about what makes the economy tick. Here’s the blurb from Amazon.com:

Since the end of the second World War, economics professors and classroom textbooks have been telling us that the economy is one big machine that can be effectively regulated by economic experts and tuned by government agencies like the Federal Reserve Board. It turns out they were wrong. Their equations do not hold up. Their policies have not produced the promised results. Their interpretations of economic events — as reported by the media — are often off-the-mark, and unconvincing.

A key alternative to the one big machine mindset is to recognize how the economy is instead an evolutionary system, with constantly-changing patterns of specialization and trade. This book introduces you to this powerful approach for understanding economic performance. By putting specialization at the center of economic analysis, Arnold Kling provides you with new ways to think about issues like sustainability, financial instability, job creation, and inflation. In short, he removes stiff, narrow perspectives and instead provides a full, multi-dimensional perspective on a continually evolving system.

And he does, without using a single graph. He uses only a few simple equations to illustrate the bankruptcy of macroeconomic theory.

Those economists who rely heavily on mathematics like to say (and perhaps even believe) that mathematical expression is more precise than mere words. But, as Kling points out in “An Important Emerging Economic Paradigm,” mathematical economics is a language of “faux precision,” which is useful only when applied to well-defined, narrow problems. It can’t address the big issues — such as economic growth — which depend on variables, such as the rule of law and social norms, that defy mathematical expression and quantification.

I would go a step further and argue that mathematical economics borders on obscurantism. It’s a cult whose followers speak an arcane language not only to communicate among themselves but to obscure the essentially bankrupt nature of their craft from others. Mathematical expression actually hides the assumptions that underlie it. It’s far easier to identify and challenge the assumptions of “literary” economics than it is to identify and challenge the assumptions of mathematical economics.

I daresay that this is true even for persons who are conversant in mathematics. They may be able to manipulate easily the equations of mathematical economics, but they are able to do so without grasping the deeper meanings — the assumptions and complexities — hidden by those equations. In fact, the ease of manipulating the equations gives them a false sense of mastery of the underlying, real concepts.

Much of the economics profession is nevertheless dedicated to the protection and preservation of the essential incompetence of mathematical economists. This is from “An Important Emerging Economic Paradigm”:

One of the best incumbent-protection rackets going today is for mathematical theorists in economics departments. The top departments will not certify someone as being qualified to have an advanced degree without first subjecting the student to the most rigorous mathematical economic theory. The rationale for this is reminiscent of fraternity hazing. “We went through it, so should they.”

Mathematical hazing persists even though there are signs that the prestige of math is on the decline within the profession. The important Clark Medal, awarded to the most accomplished American economist under the age of 40, has not gone to a mathematical theorist since 1989.

These hazing rituals can have real consequences. In medicine, the controversial tradition of long work hours for medical residents has come under scrutiny over the last few years. In economics, mathematical hazing is not causing immediate harm to medical patients. But it probably is working to the long-term detriment of the profession.

The hazing ritual in economics has at least two real and damaging consequences. First, it discourages entry into the economics profession by persons who aren’t high-IQ freaks, and who, like Kling, can discuss economic behavior without resorting to the sterile language of mathematics. Second, it leads to economics that’s irrelevant to the real world — and dead wrong.

Reaching back into my archives, I found a good example of irrelevance and wrongness in Thomas Schelling’s game-theoretic analysis of segregation. Eleven years ago, Tyler Cowen (Marginal Revolution), who was mentored by Schelling at Harvard, marked Schelling’s Nobel prize by noting, among other things, Schelling’s analysis of the economics of segregation:

Tom showed how communities can end up segregated even when no single individual cares to live in a segregated neighborhood. Under the right conditions, it only need be the case that the person does not want to live as a minority in the neighborhood, and will move to a neighborhood where the family can be in the majority. Try playing this game with white and black chess pieces, I bet you will get to segregation pretty quickly.
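Schelling’s checkerboard experiment is easy to reproduce. Below is a toy Python version under assumptions of my own choosing (a 20x20 grid with wraparound edges, about 10 percent vacancies, and agents who are unhappy only when fewer than half of their neighbors are like them); it is a sketch of the idea, not Schelling’s own specification:

    import random

    SIZE, EMPTY, TOL = 20, 0.1, 0.5
    grid = [[random.choice([0, 1]) if random.random() > EMPTY else None
             for _ in range(SIZE)] for _ in range(SIZE)]

    def unhappy(r, c):
        # An agent is unhappy if fewer than TOL of its occupied neighbors
        # are of its own kind (edges wrap around).
        me = grid[r][c]
        nbrs = [grid[(r + dr) % SIZE][(c + dc) % SIZE]
                for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]
        occupied = [x for x in nbrs if x is not None]
        return bool(occupied) and sum(x == me for x in occupied) / len(occupied) < TOL

    for _ in range(50_000):  # repeatedly move one unhappy agent to an empty cell
        r, c = random.randrange(SIZE), random.randrange(SIZE)
        if grid[r][c] is not None and unhappy(r, c):
            empties = [(i, j) for i in range(SIZE) for j in range(SIZE)
                       if grid[i][j] is None]
            i, j = random.choice(empties)
            grid[i][j], grid[r][c] = grid[r][c], None

    for row in grid:  # large single-color clusters typically emerge
        print(''.join('.' if v is None else str(v) for v in row))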

Like many game-theoretic tricks, Schelling’s segregation gambit omits much important detail. It’s artificial to treat segregation as a game in which all whites are willing to live with black neighbors as long as they (the whites) aren’t in the minority. Most whites (including most liberals) do not want to live anywhere near any “black rednecks” if they can help it. Living in relatively safe, quiet, and attractive surroundings comes far ahead of whatever value there might be in “diversity.”

“Diversity” for its own sake is nevertheless a “good thing” in the liberal lexicon. The Houston Chronicle noted Schelling’s Nobel by saying that Schelling’s work

helps explain why housing segregation continues to be a problem, even in areas where residents say they have no extreme prejudice to another group.

Segregation isn’t a “problem,” it’s the solution to a potential problem. Segregation today is mainly a social phenomenon, not a legal one. It reflects a rational aversion on the part of whites to having neighbors whose culture breeds crime and other types of undesirable behavior.

As for what people say about their racial attitudes: Believe what they do, not what they say. Most well-to-do liberals — including black ones like the Obamas — choose to segregate themselves and their children from black rednecks. That kind of voluntary segregation, aside from demonstrating liberal hypocrisy about black redneck culture, also demonstrates the rationality of choosing to live in safer and more decorous surroundings.

Dave Patterson of the defunct Order from Chaos put it this way:

[G]ame theory has one major flaw inherent in it: The arbitrary assignment of expected outcomes and the assumption that the values of both parties are equally reflected in these external outcomes. By this I mean a matrix is filled out by [a conductor, and] it is up to that conductor’s discretion to assign outcome values to that grid. This means that there is an inherent bias towards the expected outcomes of [the] conductor.

Or: Garbage in, garbage out.

Game theory points to the essential flaw in mathematical economics, which is reductionism: “An attempt or tendency to explain a complex set of facts, entities, phenomena, or structures by another, simpler set.”

Reductionism is invaluable in many settings. To take an example from everyday life, children are warned — in appropriate stern language — not to touch a hot stove or poke a metal object into an electrical outlet. The reasons given are simple ones: “You’ll burn yourself” and “You’ll get a shock and it will hurt you.” It would be futile (in almost all cases) to try to explain to a small child the physical and physiological bases for the warnings. The child wouldn’t understand the explanations, and the barrage of words might cause him to forget the warnings.

The details matter in economics. It’s easy enough to say, for example, that a market equilibrium exists where the relevant supply and demand curves cross (in a graphical representation) or where the supply and demand functions yield equal values of price and quantity (in a mathematical representation). But those are gross abstractions from reality, as any economist knows — or should know. Expressing economic relationships in mathematical terms lends them an unwarranted air of precision.
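Indeed, the whole graphical apparatus of equilibrium reduces to a few lines of arithmetic. In the sketch below the linear forms and coefficients are invented for illustration; everything that matters in a real market (tastes, expectations, information, rivalry) is compressed into four numbers:

    # Linear demand Qd = a - b*p and supply Qs = c + d*p cross where Qd = Qs.
    a, b, c, d = 100.0, 2.0, 10.0, 1.0
    p_star = (a - c) / (b + d)   # equilibrium price: 30.0
    q_star = a - b * p_star      # equilibrium quantity: 40.0
    print(p_star, q_star)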

Further, all mathematical expressions, no matter how complex, can be expressed in plain language, though it may be hard to do so when the words become too many and their relationships too convoluted. But until one tries to do so, one is at the mercy of the mathematical economist whose equation has no counterpart in the real world of economic activity. In other words, an equation represents nothing more than the manipulation of mathematical relationships until it’s brought to earth by plain language and empirical testing. Short of that, it’s as meaningful as Urdu is to a Cockney.

Finally, mathematical economics lends aid and comfort to proponents of economic control. Whether or not they understand the mathematics or the economics, the expression of congenial ideas in mathematical form lends unearned — and dangerous — credibility to the controller’s agenda. The relatively simple multiplier is a case in point. As I explain in “The Keynesian Multiplier: Phony Math,”

the Keynesian investment/government-spending multiplier simply tells us that if ∆Y = $5 trillion, and if b = 0.8, then it is a matter of mathematical necessity that ∆C = $4 trillion and ∆I + ∆G = $1 trillion. In other words, a rise in I + G of $1 trillion doesn’t cause a rise in Y of $5 trillion; rather, Y must rise by $5 trillion for C to rise by $4 trillion and I + G to rise by $1 trillion. If there’s a causal relationship between ∆G and ∆Y, the multiplier doesn’t portray it.
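The arithmetic in that passage amounts to nothing more than the following, using the quoted numbers:

    # With C = b*Y and the accounting identity Y = C + I + G, the
    # "multiplier" 1/(1 - b) is definitional, not causal.
    b = 0.8                        # marginal propensity to consume
    delta_Y = 5.0                  # trillions of dollars
    delta_C = b * delta_Y          # 4.0 trillion, by definition of b
    delta_IG = delta_Y - delta_C   # 1.0 trillion, by the identity
    print(delta_C, delta_IG, 1.0 / (1.0 - b))  # 4.0 1.0 5.0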

I followed that post with “The True Multiplier“:

Math trickery aside, there is evidence that the Keynesian multiplier is less than 1. Robert J. Barro of Harvard University opens an article in The Wall Street Journal with the statement that “economists have not come up with explanations … for multipliers above one.”

Barro continues:

A much more plausible starting point is a multiplier of zero. In this case, the GDP is given, and a rise in government purchases requires an equal fall in the total of other parts of GDP — consumption, investment and net export. . . .

What do the data show about multipliers? Because it is not easy to separate movements in government purchases from overall business fluctuations, the best evidence comes from large changes in military purchases that are driven by shifts in war and peace. A particularly good experiment is the massive expansion of U.S. defense expenditures during World War II. The usual Keynesian view is that the World War II fiscal expansion provided the stimulus that finally got us out of the Great Depression. Thus, I think that most macroeconomists would regard this case as a fair one for seeing whether a large multiplier ever exists.

I have estimated that World War II raised U.S. defense expenditures by $540 billion (1996 dollars) per year at the peak in 1943-44, amounting to 44% of real GDP. I also estimated that the war raised real GDP by $430 billion per year in 1943-44. Thus, the multiplier was 0.8 (430/540). The other way to put this is that the war lowered components of GDP aside from military purchases. The main declines were in private investment, nonmilitary parts of government purchases, and net exports — personal consumer expenditure changed little. Wartime production siphoned off resources from other economic uses — there was a dampener, rather than a multiplier. . . .

There are reasons to believe that the war-based multiplier of 0.8 substantially overstates the multiplier that applies to peacetime government purchases. For one thing, people would expect the added wartime outlays to be partly temporary (so that consumer demand would not fall a lot). Second, the use of the military draft in wartime has a direct, coercive effect on total employment. Finally, the U.S. economy was already growing rapidly after 1933 (aside from the 1938 recession), and it is probably unfair to ascribe all of the rapid GDP growth from 1941 to 1945 to the added military outlays. [“Government Spending Is No Free Lunch,” The Wall Street Journal (online.WSJ.com), January 22, 2009]

This is from Valerie A. Ramey of the University of California-San Diego and the National Bureau of Economic Research:

. . . [I]t appears that a rise in government spending does not stimulate private spending; most estimates suggest that it significantly lowers private spending. These results imply that the government spending multiplier is below unity. Adjusting the implied multiplier for increases in tax rates has only a small effect. The results imply a multiplier on total GDP of around 0.5. [“Government Spending and Private Activity,” January 2012]

In fact,

for the period 1947-2012 I estimated the year-over-year percentage change in GDP (denoted as Y%) as a function of G/GDP (denoted as G/Y):

Y% = 0.09 - 0.17(G/Y)

Solving for Y% = 0 yields G/Y = 0.53; that is, Y% will drop to zero if G/Y rises to 0.53 (or thereabouts). At the present level of G/Y (about 0.4), Y% will hover just above 2 percent, as it has done in recent years.
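Both numbers fall straight out of the equation:

    # Y% = 0.09 - 0.17*(G/Y), from the estimate quoted above.
    zero_growth_share = 0.09 / 0.17        # ~0.53: G/Y at which growth stops
    growth_at_current = 0.09 - 0.17 * 0.4  # 0.022: just above 2 percent
    print(round(zero_growth_share, 2), growth_at_current)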

If G/Y had remained at 0.234, its value in 1947:

  • Real growth would have been about 5 percent a year, instead of 3.2 percent (the actual value for 1947-2012).
  • The total value of Y for 1947-2012 would have been higher by $500 trillion (98 percent).
  • The total value of G would have been lower by $61 trillion (34 percent).

The last two points, taken together, imply a cumulative government-spending multiplier (K) for 1947-2012 of about -8. That is, aggregate output in 1947-2012 declined by 8 dollars for every dollar of government spending above the amount represented by G/Y = 0.234.

But -8 is only an average value for 1947-2012. It gets worse. The reduction in Y is cumulative; that is, every extra dollar of G reduces the amount of Y that is available for growth-producing investment, which leads to a further reduction in Y, which leads to a further reduction in growth-producing investment, and on and on. (Think of the phenomenon as negative compounding; take a dollar from your savings account today, and the value of the savings account years from now will be lower than it would have been by a multiple of that dollar: [1 + interest rate] raised to nth power, where n = number of years.) Because of this cumulative effect, the effective value of K in 2012 was about -14.
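Both the average multiplier and the compounding analogy can be checked in a few lines; the interest rate and horizon in the compounding illustration are my own arbitrary choices:

    # The average multiplier implied by the two totals above:
    print(round(-500 / 61, 1))     # ~ -8.2 dollars of lost output per dollar of G

    # The negative-compounding analogy: a dollar diverted from saving today
    # costs (1 + r)^n dollars of future value (r and n are illustrative).
    r, n = 0.05, 20
    print(round((1 + r) ** n, 2))  # ~ 2.65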

The multiplier is a seductive and easy-to-grasp mathematical construct. But in the hands of politicians and their economist-enablers, it has been an instrument of economic destruction.

Perhaps “higher” mathematical economics is potentially less destructive because it’s an inside game played by economists for the benefit of economists. I devoutly hope that’s true.

Economists as Scientists


This is the third entry in a series of loosely connected posts on economics. The first entry is here and the second entry is here. (Related posts by me are noted parenthetically throughout this one.)

Science is something that some people “do” some of the time. There are full-time human beings and part-time scientists. And the part-timers are truly scientists only when they think and act in accordance with the scientific method.*

Acting in accordance with the scientific method is a matter of attitude and application. The proper attitude is one of indifference about the correctness of a hypothesis or theory. The proper application rejects a hypothesis if it can’t be tested, and rejects a theory if it’s refuted (falsified) by relevant and reliable observations.

Regarding attitude, I turn to the most famous person who was sometimes a scientist: Albert Einstein. This is from the Wikipedia article about the Bohr-Einstein debate:

The quantum revolution of the mid-1920s occurred under the direction of both Einstein and [Niels] Bohr, and their post-revolutionary debates were about making sense of the change. The shocks for Einstein began in 1925 when Werner Heisenberg introduced matrix equations that removed the Newtonian elements of space and time from any underlying reality. The next shock came in 1926 when Max Born proposed that mechanics were to be understood as a probability without any causal explanation.

Einstein rejected this interpretation. In a 1926 letter to Max Born, Einstein wrote: “I, at any rate, am convinced that He [God] does not throw dice.” [Apparently, Einstein also used the line in Bohr’s presence, and Bohr replied, “Einstein, stop telling God what to do.” — TEA]

At the Fifth Solvay Conference held in October 1927 Heisenberg and Born concluded that the revolution was over and nothing further was needed. It was at that last stage that Einstein’s skepticism turned to dismay. He believed that much had been accomplished, but the reasons for the mechanics still needed to be understood.

Einstein’s refusal to accept the revolution as complete reflected his desire to see developed a model for the underlying causes from which these apparent random statistical methods resulted. He did not reject the idea that positions in space-time could never be completely known but did not want to allow the uncertainty principle to necessitate a seemingly random, non-deterministic mechanism by which the laws of physics operated.

It’s true that quantum mechanics was inchoate in the mid-1920s, and that it took a couple of decades to mature into quantum field theory. But there’s more than a trace of “attitude” in Einstein’s refusal to accept quantum mechanics or to stay abreast of developments in the theory, and in his quixotic search for his own theory of everything, which he hoped would obviate the need for a non-deterministic explanation of quantum phenomena.

Improper application of the scientific method is rife. See, for example, the Wikipedia article about the replication crisis and John Ioannidis’s article, “Why Most Published Research Findings Are False.” (See also “Ty Cobb and the State of Science” and “Is Science Self-Correcting?“) For a thorough analysis of the roots of the crisis, read Michael Hart’s book, Hubris: The Troubling Science, Economics, and Politics of Climate Change.

A bad attitude and improper application are both found among the so-called scientists who declare that the “science” of global warming is “settled,” and that human-generated CO2 emissions are the primary cause of the apparent rise in global temperatures during the last quarter of the 20th century. The bad attitude is the declaration of “settled science.” In “The Science Is Never Settled” I give many prominent examples of the folly of declaring it to be “settled.”

The improper application of the scientific method with respect to global warming began with the hypothesis that the “culprit” is CO2 emissions generated by the activities of human beings — thus anthropogenic global warming (AGW). There’s no end of evidence to the contrary, some of which is summarized in these posts and many of the links found therein. There’s enough evidence, in my view, to have rejected the CO2 hypothesis many times over. But there’s a great deal of money and peer-approval at stake, so the rush to judgment became a stampede. And attitude rears its ugly head when pro-AGW “scientists” shun the real scientists who are properly skeptical about the CO2 hypothesis, or at least about the degree to which CO2 supposedly influences temperatures. (For a depressingly thorough account of the AGW scam, read Michael Hart’s Hubris: The Troubling Science, Economics, and Politics of Climate Change.)

I turn now to economists, as I have come to know them in more than fifty years of being taught by them, working with them, and reading their works. Scratch an economist and you’re likely to find a moralist or reformer just beneath a thin veneer of rationality. Economists like to believe that they’re objective. But they aren’t; no one is. Everyone brings to the table a large serving of biases that are incubated in temperament, upbringing, education, and culture.

Economists bring to the table a heaping helping of tunnel vision. “Hard scientists” do, too, but their tunnel vision is generally a good thing, because it’s actually aimed at a deeper understanding of the inanimate and subhuman world rather than the advancement of a social or economic agenda. (I make a large exception for “hard scientists” who contribute to global-warming hysteria, as discussed above.)

Some economists, especially behavioralists, view the world through the lens of wealth-and-utility-maximization. Their great crusade is to force everyone to make rational decisions (by their lights), through “nudging.” It almost goes without saying that government should be the nudger-in-chief. (See “The Perpetual Nudger” and the many posts linked to therein.)

Other economists — though far fewer than in the past — have a thing about monopoly and oligopoly (the domination of a market by one or a few sellers). They’re heirs to the trust-busting of the late 1800s and early 1900s, a movement led by non-economists who sought to blame the woes of working-class Americans on the “plutocrats” (Rockefeller, Carnegie, Ford, etc.) who had merely made life better and more affordable for Americans, while also creating jobs for millions of them and reaping rewards for the great financial risks that they took. (See “Monopoly and the General Welfare” and “Monopoly: Private Is Better than Public.”) As it turns out, the biggest and most destructive monopoly of all is the federal government, so beloved and trusted by trust-busters — and too many others. (See “The Rahn Curve Revisited.”)

Nowadays, a lot of economists are preoccupied by income inequality, as if it were something evil and not mainly an artifact of differences in intelligence, ambition, education, and the like. And inequality — the prospect of earning rather grand sums of money — is what drives a lot of economic endeavor, to the good of workers and consumers. (See “Mass (Economic) Hysteria: Income Inequality and Related Themes” and the many posts linked to therein.) Remove inequality and what do you get? The Soviet Union and Communist China, in which everyone is equal except party operatives and their families, friends, and favorites.

When the inequality-preoccupied economists are confronted by the facts of life, they usually turn their attention from inequality as a general problem to the (inescapable) fact that an income distribution has a top one-percent and top one-tenth of one-percent — as if there were something especially loathsome about people in those categories. (Paul Krugman shifted his focus to the top one-tenth of one percent when he realized that he’s in the top one percent, so perhaps he knows that he’s loathsome and wishes to deny it, to himself.)

Crony capitalism is trotted out as a major cause of very high incomes. But that’s hardly a universal cause, given that a lot of very high incomes are earned by athletes and film stars beside whom most investment bankers and CEOs are making peanuts. Moreover, as I’ve said on several occasions, crony capitalists are bright and driven enough to be in the stratosphere of any income distribution. Further, the fertile soil of crony capitalism is the regulatory power of government that makes it possible.

Many economists became such, it would seem, in order to promote big government and its supposed good works — income redistribution being one of them. Joseph Stiglitz and Paul Krugman are two leading exemplars of what I call the New Deal school of economic thought, which amounts to throwing government and taxpayers’ money at every perceived problem, that is, every economic outcome that is deemed unacceptable by accountants of the soul. (See “Accountants of the Soul.”)

Stiglitz and Krugman — both Nobel laureates in economics — are typical “public intellectuals” whose intelligence breeds in them a kind of arrogance. (See “Intellectuals and Society: A Review.”) It’s the kind of arrogance that I mentioned in the preceding post in this series: a penchant for deciding what’s best for others.

New Deal economists like Stiglitz and Krugman carry it a few steps further. They ascribe to government an impeccable character, an intelligence to match their own, and a monolithic will. They then assume that this infallible and wise automaton can and will do precisely what they would do: Create the best of all possible worlds. (See the many posts in which I discuss the nirvana fallacy.)

New Deal economists, in other words, live their intellectual lives in a dream-world populated by the likes of Jiminy Cricket (“When You Wish Upon a Star”), Dorothy (“Somewhere Over the Rainbow”), and Mary Jane of a long-forgotten comic book (“First I shut my eyes real tight, then I wish with all my might! Magic words of poof, poof, piffles, make me just as small as [my mouse] Sniffles!”).

I could go on, but you should by now have grasped the point: What too many economists want to do is change human nature, channel it in directions deemed “good” (by the economist), or simply impose their view of “good” on everyone. To do such things, they must rely on government.

It’s true that government can order people about, but it can’t change human nature, which has an uncanny knack for thwarting Utopian schemes. (Obamacare, whose chief architect was economist Jonathan Gruber, is exhibit A this year.) And government (inconveniently for Utopians) really consists of fallible, often unwise, contentious human beings. So government is likely to march off in a direction unsought by Utopian economists.

Nevertheless, it’s hard to thwart the tax collector. The regulator can and does make things so hard for business that a firm which does get off the ground can’t create as much prosperity and as many jobs as it would in the absence of regulation. And the redistributor only makes things worse by penalizing success. Tax, regulate, and redistribute should have been the mantra of the New Deal and most presidential “deals” since.

I hold economists of the New Deal stripe partly responsible for the swamp of stagnation into which the nation’s economy has descended. (See “Economic Growth Since World War II.”) Largely responsible, of course, are opportunistic if not economically illiterate politicians who pander to rent-seeking, economically illiterate constituencies. (Yes, I’m thinking of old folks and the various “disadvantaged” groups with which they have struck up an alliance of convenience.)

The distinction between normative economics and positive economics is of no particular use in sorting economists between advocates and scientists. A lot of normative economics masquerades as positive economics. The work of Thomas Piketty and his comrades-in-arms comes to mind, for example. (See “McCloskey on Piketty.”) Almost everything done to quantify and defend the Keynesian multiplier counts as normative economics, inasmuch as the work is intended (wittingly or not) to defend an intellectual scam of 80 years’ standing. (See “The Keynesian Multiplier: Phony Math,” “The True Multiplier,” and “Further Thoughts about the Keynesian Multiplier.”)

Enough said. If you want to see scientific economics in action, read Regulation. Not every article in it exemplifies scientific inquiry, but a good many of them do. It’s replete with articles about microeconomics, in which the authors use real-world statistics to validate and quantify the many axioms of economics.

A final thought is sparked by Arnold Kling’s post, “Ed Glaeser on Science and Economics.” Kling writes:

I think that the public has a sort of binary classification. If it’s “science,” then an expert knows more than the average Joe. If it’s not a science, then anyone’s opinion is as good as anyone else’s. I strongly favor an in-between category, called a discipline. Think of economics as a discipline, where it is possible for avid students to know more than ordinary individuals, but without the full use of the scientific method.

On this rare occasion I disagree with Kling. The accumulation of knowledge about economic variables, or pseudo-knowledge such as estimates of GDP (see “Macroeconomics and Microeconomics“), either leads to well-tested, verified, and reproducible theories of economic behavior or it leads to conjectures, of which there are so many opposing ones that it’s “take your pick.” If that’s what makes a discipline, give me the binary choice between science and story-telling. Most of economics seems to be story-telling. “Discipline” is just a fancy word for it.

Collecting baseball cards and memorizing the statistics printed on them is a discipline. Most of economics is less useful than collecting baseball cards — and a lot more destructive.

Here’s my hypothesis about economists: There are proportionally as many of them who act like scientists as there are baseball players who have career batting averages of at least .300.
__________
* Richard Feynman, a physicist and real scientist, had a different view of the scientific method than Karl Popper’s standard taxonomy. I see Feynman’s view as complementary to Popper’s, not at odds with it. What is “constructive skepticism” (Feynman’s term) but a gentler way of saying that a hypothesis or theory might be falsified and that the act of falsification may point to a better hypothesis or theory?

Economics and Science


This is the second entry in what I expect to be a series of loosely connected posts on economics. The first entry is here.

Science is unnecessarily daunting to the uninitiated, which is to say, the vast majority of the populace. Because scientific illiteracy is rampant, advocates of policy positions — scientists and non-scientists alike — are able to invoke “science” wantonly, thus lending unwarranted authority to their positions.

Here I will dissect science, then turn to economics and begin a discussion of its scientific and non-scientific aspects. It has both, though at least one non-scientific aspect (the Keynesian multiplier) draws an inordinate amount of attention, and has many true believers within the profession.

Science is knowledge, but not all knowledge is science. A scientific body of knowledge is systematic; that is, the granular facts or phenomena which comprise the body of knowledge must be connected in patterned ways. The purported facts or phenomena of a science must represent reality, things that can be observed and measured in some way. Scientists may hypothesize the existence of an unobserved thing (e.g., the ether, dark matter), in an effort to explain observed phenomena. But the unobserved thing stands outside scientific knowledge until its existence is confirmed by observation, or because it remains standing as the only plausible explanation of observable phenomena. Hypothesized things may remain outside the realm of scientific knowledge for a very long time, if not forever. The Higgs boson, for example, was hypothesized in 1964 and has been tentatively (but not conclusively) confirmed since its “discovery” in 2012.

Science has other key characteristics. Facts and patterns must be capable of validation and replication by persons other than those who claim to have found them initially. Patterns should have predictive power; thus, for example, if the sun fails to rise in the east, the model of Earth’s movements which says that it will rise in the east is presumably invalid and must be rejected or modified so that it correctly predicts future sunrises or the lack thereof. Creating a model or tweaking an existing model just to account for a past event (e.g., the failure of the Sun to rise, the apparent increase in global temperatures from the 1970s to the 1990s) proves nothing other than an ability to “predict” the past with accuracy.

Models are usually clothed in the language of mathematics and statistics. But those aren’t scientific disciplines in themselves; they are tools of science. Expressing a theory in mathematical terms may lend the theory a scientific aura, but a theory couched in mathematical terms is not a scientific one unless (a) it can be tested against facts yet to be ascertained and events yet to occur, and (b) it is found to accord with those facts and events consistently, by rigorous statistical tests.

A science may be descriptive rather than mathematical. In a descriptive science (e.g., plant taxonomy), particular phenomena sometimes are described numerically (e.g., the number of leaves on the stem of a species), but the relations among various phenomena are not reducible to mathematics. Nevertheless, a predominantly descriptive discipline will be scientific if the phenomena within its compass are connected in patterned ways, can be validated, and are applicable to newly discovered entities.

Non-scientific disciplines can be useful, whereas some purportedly scientific disciplines verge on charlatanism. Thus, for example:

  • History, by my reckoning, is not a science because its account of events and their relationships is inescapably subjective and incomplete. But a knowledge of history is valuable, nevertheless, for the insights it offers into the influence of human nature on the outcomes of economic and political processes.
  • Physics is a science in most of its sub-disciplines, but there are some (e.g., cosmology) where it descends into the realm of speculation. It is informed, fascinating speculation to be sure, but speculation all the same. The idea of multiverses, for example, can’t be tested, inasmuch as human beings and their tools are bound to the known universe.
  • Economics is a science only to the extent that it yields empirically valid insights about specific economic phenomena (e.g., the effects of laws and regulations on the prices and outputs of specific goods and services). Then there are concepts like the Keynesian multiplier, about which I’ll say more in this series. It’s a hypothesis that rests on a simplistic, hydraulic view of the economic system. (Other examples of pseudo-scientific economic theories are the labor theory of value and historical determinism.)

In sum, there is no such thing as “science,” writ large; that is, no one may appeal, legitimately, to “science” in the abstract. A particular discipline may be a science, but it is a science only to the extent that it comprises a factual and replicable body of patterned knowledge. Patterned knowledge includes theories with predictive power.

A scientific theory is a hypothesis that has thus far been confirmed by observation. Every scientific theory rests eventually on axioms: self-evident principles that are accepted as true without proof. The principle of uniformity (which can be traced to Galileo) is an example of such an axiom:

Uniformitarianism is the assumption that the same natural laws and processes that operate in the universe now have always operated in the universe in the past and apply everywhere in the universe. It refers to invariance in the metaphysical principles underpinning science, such as the constancy of causal structure throughout space-time, but has also been used to describe spatiotemporal invariance of physical laws. Though an unprovable postulate that cannot be verified using the scientific method, uniformitarianism has been a key first principle of virtually all fields of science.

Thus, for example, if observer B is moving away from observer A at a certain speed, observer A will perceive that he is moving away from observer B at that speed. It follows that an observer cannot determine either his absolute velocity or direction of travel in space. The principle of uniformity is a fundamental axiom of modern physics, most notably of Einstein’s special and general theories of relativity.

There’s a fine line between an axiom and a theory. Was the idea of a geocentric universe an axiom or a theory? If it was taken as axiomatic — as it surely was by many scientists for about 2,000 years — then it’s fair to say that an axiom can give way under the pressure of observational evidence. (Such an event is what Thomas Kuhn calls a paradigm shift.) But no matter how far scientists push the boundaries of knowledge, they must at some point rely on untestable axioms, such as the principle of uniformity. There are simply deep and (probably) unsolvable mysteries that science is unlikely to fathom.

This brings me to economics, which — in my view — rests on these self-evident axioms:

1. Each person strives to maximize his or her sense of satisfaction, which may also be called well-being, happiness, or utility (an ugly word favored by economists). Striving isn’t the same as achieving, of course, because of lack of information, emotional decision-making, buyer’s remorse, etc.

2. Happiness can and often does include an empathic concern for the well-being of others; that is, one’s happiness may be served by what is usually labelled altruism or self-sacrifice.

3. Happiness can be and often is served by the attainment of non-material ends. Not all persons (perhaps not even most of them) are interested in the maximization of wealth, that is, claims on the output of goods and services. In sum, not everyone is a wealth maximizer. (But see axiom number 12.)

4. The feeling of satisfaction that an individual derives from a particular product or service is situational — unique to the individual and to the time and place in which the individual undertakes to acquire or enjoy the product or service. Generally, however, there is a (situationally unique) point at which the acquisition or enjoyment of additional units of a particular product or service during a given period of time tends to offer less satisfaction than would the acquisition or enjoyment of units of other products or services that could be obtained at the same cost.

5. The value that a person places on a product or service is subjective. Products and services don’t have intrinsic values that apply to all persons at a given time or period of time.

6. The ability of a person to acquire products and services, and to accumulate wealth, depends (in the absence of third-party interventions) on the valuation of the products and services that are produced in part or whole by the person’s labor (mental or physical), or by the assets that he owns (e.g., a factory building, a software patent). That valuation is partly subjective (e.g., consumers’ valuation of the products and services, an employer’s qualitative evaluation of the person’s contributions to output) and partly objective (e.g., an employer’s knowledge of the price commanded by a product or service, an employer’s measurement of an employee’s contribution to the quantity of output).

7. The persons and firms from which products and services flow are motivated by the acquisition of income, with which they can acquire other products and services, and accumulate wealth for personal purposes (e.g., to pass to heirs) or business purposes (e.g., to expand the business and earn more income). So-called profit maximization (seeking to maximize the difference between the cost of production and revenue from sales) is a key determinant of business decisions but far from the only one. Others include, but aren’t limited to, being a “good neighbor,” providing employment opportunities for local residents, and underwriting philanthropic efforts.

8. The cost of production necessarily influences the price at which a good or service will be offered for sale, but doesn’t solely determine the price at which it will be sold. Selling price depends on the subjective valuation of the product or service, prospective buyers’ incomes, and the prices of other products and services, including those that are direct or close substitutes and those to which users may switch, depending on relative prices.

9. The feeling of satisfaction that a person derives from the acquisition and enjoyment of the “basket” of products and services that he is able to buy, given his income, etc., doesn’t necessarily diminish, as long as the person has access to a great variety of products and services. (This axiom and axiom 12 put paid to the myth of diminishing marginal utility of income.)

10. Work may be a source of satisfaction in itself or it may simply be a means of acquiring and enjoying products and services, or acquiring claims to them by accumulating wealth. Even when work is satisfying in itself, it is subject to the “law” of diminishing marginal satisfaction.

11. Work, for many (but not all) persons, is no longer worth the effort if they become able to subsist comfortably enough by virtue of the wealth that they have accumulated, the availability of redistributive schemes (e.g., Social Security and Medicare), or both. In such cases the accumulation of wealth often ceases and reverses course, as it is “cashed in” to defray the cost of subsistence (which may be far more than minimal).

12. However, there are not a few persons whose “work” is such a great source of satisfaction that they continue doing it until they are no longer capable of doing so. And there are some persons whose “work” is the accumulation of wealth, without limit. Such persons may want to accumulate wealth in order to “do good” or to leave their heirs well off or simply for the satisfaction of running up the score. The justification matters not. There is no theoretical limit to the satisfaction that a particular person may derive from the accumulation of wealth. Moreover, many of the persons (discussed in axiom 11) who aren’t able to accumulate wealth endlessly would do so if they had the ability and the means to take the required risks.

13. Individual degrees of satisfaction (happiness, etc.) are ephemeral, nonquantifiable, and incommensurable. There is no such thing as a social welfare function that a third party (e.g., government) can maximize by taking from A to give to B. If there were such a thing, its value would increase if, for example, A were to punch B in the nose and derive a degree of pleasure that somehow more than offsets the degree of pain incurred by B. (The absurdity of a social-welfare function that allows As to punch Bs in their noses ought to be enough to shame inveterate social engineers into quietude — but it won’t. They derive great satisfaction from meddling.) Moreover, one of the primary excuses for meddling is that income (and thus wealth) has a diminishing marginal utility, so it makes sense to redistribute from those with higher incomes (or more wealth) to those who have less of either. Marginal utility is, however, unknowable (see axioms 4 and 5), and may not always be diminishing (see axioms 9 and 12).

14. Whenever a third party (government, do-gooders, etc.) intervenes in the affairs of others, that third party is merely imposing its preferences on those others. The third party sometimes claims to know what’s best for “society as a whole,” etc., but no third party can know such a thing. (See axiom 13.)

15. It follows from axiom 13 that the welfare of “society as a whole” can’t be aggregated or measured. An estimate of the monetary value of the economic output of a nation’s economy (Gross Domestic Product) is by no means an estimate of the welfare of “society as a whole.” (Again, see axiom 13.)

That may seem like a lot of axioms, which might give you pause about my claim that some aspects of economics are scientific. But economics is inescapably grounded in axioms such as the ones that I propound. This aligns me (mainly) with the Austrian economists, whose leading light was Ludwig von Mises. Gene Callahan writes about him at the website of the Ludwig von Mises Institute:

As I understand [Mises], by categorizing the fundamental principles of economics as a priori truths and not contingent facts open to empirical discovery or refutation, Mises was not claiming that economic law is revealed to us by divine action, like the ten commandments were to Moses. Nor was he proposing that economic principles are hard-wired into our brains by evolution, nor even that we could articulate or comprehend them prior to gaining familiarity with economic behavior through participating in and observing it in our own lives. In fact, it is quite possible for someone to have had a good deal of real experience with economic activity and yet never to have wondered about what basic principles, if any, it exhibits.

Nevertheless, Mises was justified in describing those principles as a priori, because they are logically prior to any empirical study of economic phenomena. Without them it is impossible even to recognize that there is a distinct class of events amenable to economic explanation. It is only by pre-supposing that concepts like intention, purpose, means, ends, satisfaction, and dissatisfaction are characteristic of a certain kind of happening in the world that we can conceive of a subject matter for economics to investigate. Those concepts are the logical prerequisites for distinguishing a domain of economic events from all of the non-economic aspects of our experience, such as the weather, the course of a planet across the night sky, the growth of plants, the breaking of waves on the shore, animal digestion, volcanoes, earthquakes, and so on.

Unless we first postulate that people deliberately undertake previously planned activities with the goal of making their situations, as they subjectively see them, better than they otherwise would be, there would be no grounds for differentiating the exchange that takes place in human society from the exchange of molecules that occurs between two liquids separated by a permeable membrane. And the features which characterize the members of the class of phenomena singled out as the subject matter of a special science must have an axiomatic status for practitioners of that science, for if they reject them then they also reject the rationale for that science’s existence.

Economics is not unique in requiring the adoption of certain assumptions as a pre-condition for using the mode of understanding it offers. Every science is founded on propositions that form the basis rather than the outcome of its investigations. For example, physics takes for granted the reality of the physical world it examines. Any piece of physical evidence it might offer has weight only if it is already assumed that the physical world is real. Nor can physicists demonstrate their assumption that the members of a sequence of similar physical measurements will bear some meaningful and consistent relationship to each other. Any test of a particular type of measurement must pre-suppose the validity of some other way of measuring against which the form under examination is to be judged.

Why do we accept that when we place a yardstick alongside one object, finding that the object stretches across half the length of the yardstick, and then place it alongside another object, which only stretches to a quarter its length, that this means the first object is longer than the second? Certainly not by empirical testing, for any such tests would be meaningless unless we already grant the principle in question. In mathematics we don’t come to know that 2 + 2 always equals 4 by repeatedly grouping two items with two others and counting the resulting collection. That would only show that our answer was correct in the instances we examined — given the assumption that counting works! — but we believe it is universally true. [And it is universally true by the conventions of mathematics. If the number we call “4” were instead called “5,” 2 + 2 would always equal 5. — TEA] Biology pre-supposes that there is a significant difference between living things and inert matter, and if it denied that difference it would also be denying its own validity as a special science. . . .

The great fecundity of such analysis in economics is due to the fact that, as acting humans ourselves, we have a direct understanding of human action, something we lack in pondering the behavior of electrons or stars. The contemplative mode of theorizing is made even more important in economics because the creative nature of human choice inherently fails to exhibit the quantitative, empirical regularities, the discovery of which characterizes the modern, physical sciences. (Biology presents us with an interesting intermediate case, as many of its findings are qualitative.) . . .

[A] person can be presented with scores of experiments demonstrating that a particular scientific theory is sound, but no possible experiment ever can demonstrate to him that experimentation is a reasonable means by which to evaluate a scientific theory. Only his intuitive grasp of its plausibility can bring him to accept that proposition. (Unless, of course, he simply adopts it on the authority of others.) He can be led through hundreds of rigorous proofs for various mathematical theorems and be taught the criteria by which they are judged to be sound, but there can be no such proof for the validity of the method itself. (Kurt Gödel famously demonstrated that a formal system of mathematical deduction complex enough to model even so basic a topic as arithmetic may avoid either incompleteness or inconsistency, but must always suffer at least one of those flaws.) . . .

This ultimate, inescapable reliance on judgment is illustrated by Lewis Carroll in Through the Looking-Glass. He has Alice tell Humpty Dumpty that 365 minus one is 364. Humpty is skeptical, and asks to see the problem done on paper. Alice dutifully writes down:

365 – 1 = 364

Humpty Dumpty studies her work for a moment before declaring that it seems to be right. The serious moral of Carroll’s comic vignette is that formal tools of thinking are useless in convincing someone of their conclusions if he hasn’t already intuitively grasped the basic principles on which they are built.

All of our knowledge ultimately is grounded on our intuitive recognition of the truth when we see it. There is nothing magical or mysterious about the a priori foundations of economics, or at least nothing any more magical or mysterious than there is about our ability to comprehend any other aspect of reality.

(Callahan has more to say here. For a technical discussion of the science of human action, or praxeology, read this. Some glosses on Gödel’s incompleteness theorem are here.)

I omitted an important passage from the preceding quotation, in order to single it out. Callahan says also that

Mises’s protégé F.A. Hayek, while agreeing with his mentor on the a priori nature of the “logic of action” and its foundational status in economics, still came to regard investigating the empirical issues that the logic of action leaves open as a more important undertaking than further examination of that logic itself.

I agree with Hayek. It’s one thing to know axiomatically that the speed of light is constant; it is quite another (and useful) thing to know experimentally that the speed of light (in empty space) is about 671 million miles an hour. Similarly, it is one thing to deduce from the axioms of economics that demand curves generally slope downward; it is quite another (and useful) thing to estimate specific demand functions.
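
To make “estimate specific demand functions” concrete, here is a minimal sketch (mine, in Python, with invented observations; it is not drawn from any actual study). It fits a constant-elasticity demand curve, ln Q = a - e ln P, to price-quantity data by ordinary least squares:

    import numpy as np

    # Hypothetical (price, quantity) observations, invented for illustration.
    prices     = np.array([2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0])
    quantities = np.array([95.0, 80.0, 70.0, 62.0, 55.0, 50.0, 46.0])

    # OLS fit in log-log space; the slope is minus the price elasticity.
    slope, intercept = np.polyfit(np.log(prices), np.log(quantities), 1)

    print(f"estimated price elasticity of demand: {-slope:.2f}")
    print(f"fitted demand curve: Q = {np.exp(intercept):.1f} * P^({slope:.2f})")

The fitted slope is the estimated price elasticity; that single, contingent number is the kind of useful knowledge that deduction from axioms alone cannot supply.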

But one must always be mindful of the limitations of quantitative methods in economics. As James Sheehan writes at the website of the Mises Institute,

economists are prone to error when they ascribe excessive precision to advanced statistical techniques. They assume, falsely, that a voluminous amount of historical observations (sample data) can help them to make inferences about the future. They presume that probability distributions follow a bell-shaped pattern. They make no provision for the possibility that past correlations between economic variables and data were coincidences.

Nor do they account for the possibility, as economist Robert Lucas demonstrated, that people will incorporate predictable patterns into their expectations, thus canceling out the predictive value of such patterns. . . .

As [Nassim Nicholas] Taleb points out [in Fooled by Randomness], the popular Monte Carlo simulation “is more a way of thinking than a computational method.” Employing this way of thinking can enhance one’s understanding only if its weaknesses are properly understood and accounted for. . . .

Taleb’s critique of econometrics is quite compatible with Austrian economics, which holds that dynamic human actions are too subjective and variegated to be accurately modeled and predicted.

In some parts of Fooled by Randomness, Taleb almost sounds Austrian in his criticisms of economists who worship “the efficient market religion.” Such economists are misguided, he argues, because they begin with the flawed hypothesis that human beings act rationally and do what is mathematically “optimal.” . . .

As opposed to a Utopian Vision, in which human beings are rational and perfectible (by state action), Taleb adopts what he calls a Tragic Vision: “We are faulty and there is no need to bother trying to correct our flaws.” It is refreshing to see a highly successful practitioner of statistics and finance adopt a contrarian viewpoint towards economics.
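
Sheehan’s point about bell-shaped assumptions is easy to demonstrate with a small Monte Carlo experiment. The sketch below (my own, in Python; the distributions and the cutoff are invented for illustration) compares the frequency of “4-sigma” events under a normal distribution and under a fat-tailed alternative rescaled to the same standard deviation:

    import numpy as np

    rng = np.random.default_rng(seed=0)
    n = 1_000_000

    normal_draws = rng.normal(0.0, 1.0, n)        # the bell-curve assumption
    fat_tailed   = rng.standard_t(df=3, size=n)   # a fat-tailed alternative
    fat_tailed  /= fat_tailed.std()               # rescale to unit standard deviation

    for label, draws in [("normal", normal_draws), ("fat-tailed", fat_tailed)]:
        share = np.mean(np.abs(draws) > 4.0)      # frequency of "4-sigma" events
        print(f"{label:>10}: share of draws beyond 4 sd = {share:.6f}")

The fat-tailed series produces extreme events roughly a hundred times as often as the normal series, which is why a model that presumes normality will systematically understate the risk of rare, large moves.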

Yet, as Arnold Kling explains, many (perhaps most) economists have lost sight of the axioms of economics in their misplaced zeal to emulate the methods of the physical sciences:

The most distinctive trend in economic research over the past hundred years has been the increased use of mathematics. In the wake of Paul Samuelson’s (Nobel 1970) Ph.D. dissertation, published in 1947, calculus became a requirement for anyone wishing to obtain an economics degree. By 1980, every serious graduate student was expected to be able to understand the work of Kenneth Arrow (Nobel 1972) and Gerard Debreu (Nobel 1983), which required mathematics several semesters beyond first-year calculus.

Today, the “theory sequence” at most top-tier graduate schools in economics is controlled by math bigots. As a result, it is impossible to survive as an economics graduate student with a math background that is less than that of an undergraduate math major. In fact, I have heard that at this year’s American Economic Association meetings, at a seminar on graduate education one professor quite proudly said that he ignored prospective students’ grades in economics courses, because their math proficiency was the key predictor of their ability to pass the coursework required to obtain an advanced degree.

The raising of the mathematical bar in graduate schools over the past several decades has driven many intelligent men and women (perhaps women especially) to pursue other fields. The graduate training process filters out students who might contribute from a perspective of anthropology, biology, psychology, history, or even intense curiosity about economic issues. Instead, the top graduate schools behave as if their goal were to produce a sort of idiot-savant, capable of appreciating and adding to the mathematical contributions of other idiot-savants, but not necessarily possessed of any interest in or ability to comprehend the world to which an economist ought to pay attention.

. . . The basic question of What Causes Prosperity? is not a question of how trading opportunities play out among a given array of goods. Instead, it is a question of how innovation takes place or does not take place in the context of institutional factors that are still poorly understood.

Mathematics, as I have said, is a tool of science; it is not science in itself. Dressing hypothetical relationships in the garb of mathematics doesn’t validate them.

Where, then, is the science in economics? And where is the nonsense? Stay tuned.

The Essence of Economics

Standard

This is the first entry in what I expect to be a series of loosely connected posts on economics.

Market-based voluntary exchange is an important if not dominant method of satisfying wants. To grasp that point, think of your day: You sleep and awaken in a house or apartment that you didn’t build yourself, but which is “yours” by dint of payments that you make from income you earn by doing things of value to other persons.* During your days at home, in a workplace, or in a vacation spot you spend many hours using products and services that you buy from others — everything from toilet paper, soap, and shampoo to clothing, food, transportation, entertainment, internet access, etc.

It is not that the things that you do for yourself and in direct cooperation with others are unimportant or valueless. Economists acknowledge the psychic value of self-sufficiency and the economic value of non-market cooperation, but they can’t measure the value of those things. Economists typically focus on market-based exchange because it involves transactions with measurable monetary values.

Another thing that economists can’t deal with, because it’s beyond the ken of economics, is the essence of life itself: one’s total sense of well-being, especially as it is influenced by the things done for oneself, solitary joys (reading, listening to music), and the happiness (or sadness) shared with friends and loved ones.

In sum, despite the pervasiveness of voluntary exchange, economics really treats only the marginalia of life — the rate at which a person would exchange a unit of X for a unit of Y, not how X or Y stacks up in the grand scheme of living.
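
In textbook notation (my gloss, not a claim about how the author would put it), that exchange rate is the marginal rate of substitution:

    % The marginal rate of substitution: the rate at which a person would
    % give up Y for another unit of X while remaining equally satisfied.
    \[
      MRS_{XY} \;=\; -\left.\frac{dY}{dX}\right|_{U=\text{const}}
               \;=\; \frac{\partial U/\partial X}{\partial U/\partial Y}
    \]

The ratio is well defined at the margin even though U itself, the “grand scheme of living,” is not measurable.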

That is the essence of economics, as a discipline. There is much more to it than that, of course; for example, how supply meets demand, how exogenous factors affect economic behavior, how activity at the level of the person or firm sends ripples across the economy, and why those ripples can’t be aggregated meaningfully.

More to come.
__________
* Obviously, a lot of people derive their income from transfer payments (Social Security, food stamps, etc.), which I’ll address in future posts.

The Wages of Simplistic Economics

Standard

If this Wikipedia article accurately reflects what passes for microeconomics these days, the field hasn’t advanced since I took my first micro course almost 60 years ago. And my first micro course was based on Alfred Marshall’s Principles of Economics, first published in 1890.

What’s wrong with micro as it’s taught today, and as it has been taught for the better part of 126 years? It’s not the principles themselves, which are eminently sensible and empirically valid: Supply curves slope upward, demand curves slope downward, competition yields lower prices, etc. What’s wrong is the heavy reliance on two-dimensional graphical representations of the key variables and their interactions; for example, how utility functions (which are gross abstractions) generate demand curves, and how cost functions generate supply curves.
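
Indeed, the entire content of those static graphs can be reproduced in a few lines of code. The sketch below (mine, in Python, with invented parameters) computes the competitive and monopolistic outcomes for a linear demand curve and a constant marginal cost:

    def equilibria(a: float, b: float, c: float):
        """Linear demand P = a - b*Q with constant marginal cost c."""
        q_comp = (a - c) / b           # competition drives price down to cost
        p_comp = c
        q_mono = (a - c) / (2 * b)     # monopoly: set marginal revenue a - 2bQ = c
        p_mono = (a + c) / 2
        return (p_comp, q_comp), (p_mono, q_mono)

    comp, mono = equilibria(a=10.0, b=0.5, c=2.0)   # hypothetical parameters
    print(f"competition: P = {comp[0]:.2f}, Q = {comp[1]:.2f}")
    print(f"monopoly:    P = {mono[0]:.2f}, Q = {mono[1]:.2f}")

The monopoly price is higher and output lower, which is the result students remember. But the parameters a, b, and c are frozen; entry, substitutes, and innovation, the forces that move them, are precisely what the graphs leave out.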

The cautionary words of Marshall and his many successors about the transitory nature of such variables are no match for the vivid, and static, images imprinted in the memories of the millions of students who took introductory microeconomics as undergraduates. Most of them took no additional courses in micro, and probably just an introductory course in macroeconomics — equally misleading.

Micro, as it is taught now, seems to purvey the same fallacy as it did when Marshall’s text was au courant. The fallacy, which is embedded in the easy-to-understand-and-remember graphs of supply and demand under various competitive conditions, is the apparent rigidity of those conditions. Professional economists (or some of them, at least) understand that economic conditions are fluid, especially in the absence of government regulation. But the typical student will remember the graph that depicts the dire results of a monopolistic market and take it as a writ for government intervention; for example:

Power that controls the economy should be in the hands of elected representatives of the people, not in the hands of an industrial oligarchy.

William O. Douglas
(dissent in U.S. v. Columbia Steel Co.)

Quite the opposite is true, as I argue at length in this post. Douglas, unfortunately, served on the Supreme Court from 1939 to 1975. He majored in English and economics, and presumably had more than one course in economics. But he was an undergraduate in the waning days of the anti-business, pro-regulation Progressive Era. So he probably never got past the simplistic idea of “monopoly bad, trust-busting good.”

If only the Supreme Court (and government generally) had been blessed with men like Maxwell Anderson, who wrote this:

When a government takes over a people’s economic life, it becomes absolute, and when it has become absolute, it destroys the arts, the minds, the liberties, and the meaning of the people it governs. It is not an accident that Germany, the first paternalistic state of modern Europe, was seized by an uncontrollable dictator who brought on the second world war; not an accident that Russia, adopting a centrally administered economy for humanitarian reasons, has arrived at a tyranny bloodier and more absolute than that of the Czars. And if England does not turn back soon, she will go this same way. Men who are fed by their government will soon be driven down to the status of slaves or cattle.

“The Guaranteed Life” (preface to
Knickerbocker Holiday, 1938, revised 1950)

And it’s happening here, too.

The Rahn Curve Revisited

Standard

The theory behind the Rahn Curve is simple — but not simplistic. A relatively small government with powers limited mainly to the protection of citizens and their property is worth more than its cost to taxpayers because it fosters productive economic activity (not to mention liberty). But additional government spending hinders productive activity in many ways, which are discussed in Daniel Mitchell’s paper, “The Impact of Government Spending on Economic Growth.” (I would add to Mitchell’s list the burden of regulatory activity, which grows even when government does not.)

What does the Rahn Curve look like? Mitchell estimates this relationship between government spending and economic growth:

Rahn curve (2)

The curve is dashed rather than solid at low values of government spending because it has been decades since the governments of developed nations have spent as little as 20 percent of GDP. But as Mitchell and others note, the combined spending of governments in the U.S. was 10 percent (and less) until the eve of the Great Depression. And it was in the low-spending, laissez-faire era from the end of the Civil War to the early 1900s that the U.S. enjoyed its highest sustained rate of economic growth.

In an earlier post, I ventured an estimate of the Rahn curve that spanned most of the history of the United States. I came up with this relationship:

Real rate of growth = -0.066(G/GDP) + 0.054

To be precise, it’s the annualized rate of growth over the most recent 10-year span, as a function of G/GDP (fraction of GDP spent by governments at all levels) in the preceding 10 years. The relationship is lagged because it takes time for government spending (and related regulatory activities) to wreak their counterproductive effects on economic activity. Also, I include transfer payments (e.g., Social Security) in my measure of G because there’s no essential difference between transfer payments and many other kinds of government spending. They all take money from those who produce and give it to those who don’t (e.g., government employees engaged in paper-shuffling, unproductive social-engineering schemes, and counterproductive regulatory activities).

When G/GDP is greater than the amount needed for national defense and domestic justice — no more than 0.1 (10 percent of GDP) — it discourages productive, growth-producing, job-creating activity. And because G weighs most heavily on taxpayers with above-average incomes, higher rates of G/GDP also discourage saving, which finances growth-producing investments in new businesses, business expansion, and capital (i.e., new and more productive business assets, both physical and intellectual).
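
The equation is trivial to evaluate. A minimal sketch (mine, in Python): at a G/GDP of 0.381, the 2006-2015 average reported below, it reproduces the 2.9 percent estimate cited later in this post.

    def rahn_growth_simple(g_share: float) -> float:
        """Annualized 10-year real growth as a function of lagged G/GDP."""
        return -0.066 * g_share + 0.054

    print(f"{rahn_growth_simple(0.381):.3f}")   # 0.029, i.e., about 2.9 percent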

I’ve taken a closer look at the post-World War II numbers, because of the marked decline in the rate of growth since the end of the war:

Real GDP 1947q1-2016q2

Here’s the result:

Real rate of growth = -0.364(G/GDP) + 0.0626(BA/GDP) – 0.000287(FR) + 0.0537

Again, it’s the annualized rate of growth over a 10-year span, as a function of G/GDP (fraction of GDP spent by governments at all levels) in the preceding 10 years, and two new terms. The first new term, BA/GDP, represents the constant-dollar value of private nonresidential assets (i.e., business assets) as a fraction of GDP, averaged over the preceding 10 years. The second new term, FR, represents the average number of Federal Register pages, in thousands, for the preceding 10-year period.

The equation has a good r-squared (0.729) and is highly significant (the p-value of the F-statistic is 4.16E-13). The p-values of the coefficients and intercept are also highly significant (7.43E-08, 1.67E-08, 0.00011, and 0.0014). The standard error of the estimate is 0.0059, that is, about 6/10 of a percentage point. I found no other intuitively appealing variables that add to the explanatory power of the equation.

What does the equation portend for the next 10 years? Based on G/GDP, BA/GDP, and FR for the most recent 10-year period (2006-2015), the real rate of growth for the next 10 years will be about 1.7 percent. The earlier equation yields an estimate of 2.9 percent. The new equation wins the reality test, as you can tell by the blue line in the graph above.

In fact, the year-over-year rates of real growth for the past four quarters (2015Q3 through 2016Q2) are 2.2 percent, 1.9 percent, 1.6 percent, and 1.2 percent. So an estimate of 1.7 percent for the next 10 years looks rather optimistic.

And it probably is. If G/GDP were to rise from 0.381 (the average for 2006-2015) to 0.43, the rate of real growth would fall to zero, even if BA/GDP and FR were to remain at their 2006-2015 levels. (And FR is much more likely to rise than to fall.) It’s easy to imagine G/GDP hitting 0.43 with a Democrat president and Democrat-controlled Congress mandating “free” college educations, universal “free” health care, and who knows what else.
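
The arithmetic behind that last claim can be made explicit with only the numbers given above: the -0.364 coefficient on G/GDP and the projected 1.7 percent growth rate at the 2006-2015 average G/GDP of 0.381. A minimal sketch (mine, in Python):

    G_COEFFICIENT = -0.364     # from the three-variable equation above
    BASELINE_GROWTH = 0.017    # projected growth at G/GDP = 0.381

    growth = BASELINE_GROWTH + G_COEFFICIENT * (0.43 - 0.381)
    print(f"{growth:.4f}")     # about -0.001: growth falls to roughly zero

Raising G/GDP by about five percentage points is enough, on this model, to wipe out the remaining growth.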