Economists as Scientists


This is the third entry in a series of loosely connected posts on economics. The first entry is here and the second entry is here. (Related posts by me are noted parenthetically throughout this one.)

Science is something that some people “do” some of the time. There are full-time human beings and part-time scientists. And the part-timers are truly scientists only when they think and act in accordance with the scientific method.*

Acting in accordance with the scientific method is a matter of attitude and application. The proper attitude is one of indifference about the correctness of a hypothesis or theory. The proper application rejects a hypothesis if it can’t be tested, and rejects a theory if it’s refuted (falsified) by relevant and reliable observations.

Regarding attitude, I turn to the most famous person who was sometimes a scientist: Albert Einstein. This is from the Wikipedia article about the Bohr-Einstein debate:

The quantum revolution of the mid-1920s occurred under the direction of both Einstein and [Niels] Bohr, and their post-revolutionary debates were about making sense of the change. The shocks for Einstein began in 1925 when Werner Heisenberg introduced matrix equations that removed the Newtonian elements of space and time from any underlying reality. The next shock came in 1926 when Max Born proposed that mechanics were to be understood as a probability without any causal explanation.

Einstein rejected this interpretation. In a 1926 letter to Max Born, Einstein wrote: “I, at any rate, am convinced that He [God] does not throw dice.” [Apparently, Einstein also used the line in Bohr’s presence, and Bohr replied, “Einstein, stop telling God what to do.” — TEA]

At the Fifth Solvay Conference held in October 1927 Heisenberg and Born concluded that the revolution was over and nothing further was needed. It was at that last stage that Einstein’s skepticism turned to dismay. He believed that much had been accomplished, but the reasons for the mechanics still needed to be understood.

Einstein’s refusal to accept the revolution as complete reflected his desire to see developed a model for the underlying causes from which these apparent random statistical methods resulted. He did not reject the idea that positions in space-time could never be completely known but did not want to allow the uncertainty principle to necessitate a seemingly random, non-deterministic mechanism by which the laws of physics operated.

It’s true that quantum mechanics was inchoate in the mid-1920s, and that it took a couple of decades to mature into quantum field theory. But there’s more than a trace of “attitude” in Einstein’s refusal to accept quantum mechanics, in his failure to stay abreast of developments in the theory, and in his quixotic search for a theory of everything that he hoped would obviate the need for a non-deterministic explanation of quantum phenomena.

Improper application of the scientific method is rife. See, for example, the Wikipedia article about the replication crisis and John Ioannidis’s article, “Why Most Published Research Findings Are False.” (See also “Ty Cobb and the State of Science” and “Is Science Self-Correcting?”) For a thorough analysis of the roots of the crisis, read Michael Hart’s book, Hubris: The Troubling Science, Economics, and Politics of Climate Change.

A bad attitude and improper application are both found among the so-called scientists who declare that the “science” of global warming is “settled,” and that human-generated CO2 emissions are the primary cause of the apparent rise in global temperatures during the last quarter of the 20th century. The bad attitude is the declaration of “settled science.” In “The Science Is Never Settled” I give many prominent examples of the folly of declaring it to be “settled.”

The improper application of the scientific method with respect to global warming began with the hypothesis that the “culprit” is CO2 emissions generated by the activities of human beings — thus anthropogenic global warming (AGW). There’s no end of evidence to the contrary, some of which is summarized in these posts and many of the links found therein. There’s enough evidence, in my view, to have rejected the CO2 hypothesis many times over. But there’s a great deal of money and peer-approval at stake, so the rush to judgment became a stampede. And attitude rears its ugly head when pro-AGW “scientists” shun the real scientists who are properly skeptical about the CO2 hypothesis, or at least about the degree to which CO2 supposedly influences temperatures. (For a depressingly thorough account of the AGW scam, read Michael Hart’s Hubris: The Troubling Science, Economics, and Politics of Climate Change.)

I turn now to economists, as I have come to know them in more than fifty years of being taught by them, working with them, and reading their works. Scratch an economist and you’re likely to find a moralist or reformer just beneath a thin veneer of rationality. Economists like to believe that they’re objective. But they aren’t; no one is. Everyone brings to the table a large serving of biases that are incubated in temperament, upbringing, education, and culture.

Economists bring to the table a heaping helping of tunnel vision. “Hard scientists” do, too, but their tunnel vision is generally a good thing, because it’s actually aimed at a deeper understanding of the inanimate and subhuman world rather than the advancement of a social or economic agenda. (I make a large exception for “hard scientists” who contribute to global-warming hysteria, as discussed above.)

Some economists, especially behavioralists, view the world through the lens of wealth-and-utility-maximization. Their great crusade is to force everyone to make rational decisions (by their lights), through “nudging.” It almost goes without saying that government should be the nudger-in-chief. (See “The Perpetual Nudger” and the many posts linked to therein.)

Other economists — though far fewer than in the past — have a thing about monopoly and oligopoly (the domination of a market by one or a few sellers). They’re heirs to the trust-busting of the late 1800s and early 1900s, a movement led by non-economists who sought to blame the woes of working-class Americans on the “plutocrats” (Rockefeller, Carnegie, Ford, etc.) who had merely made life better and more affordable for Americans, while also creating jobs for millions of them and reaping rewards for the great financial risks that they took. (See “Monopoly and the General Welfare” and “Monopoly: Private Is Better than Public.”) As it turns out, the biggest and most destructive monopoly of all is the federal government, so beloved and trusted by trust-busters — and too many others. (See “The Rahn Curve Revisited.”)

Nowadays, a lot of economists are preoccupied by income inequality, as if it were something evil and not mainly an artifact of differences in intelligence, ambition, education, and the like. And inequality — the prospect of earning rather grand sums of money — is what drives a lot of economic endeavor, to the good of workers and consumers. (See “Mass (Economic) Hysteria: Income Inequality and Related Themes” and the many posts linked to therein.) Remove inequality and what do you get? The Soviet Union and Communist China, in which everyone is equal except party operatives and their families, friends, and favorites.

When the inequality-preoccupied economists are confronted by the facts of life, they usually turn their attention from inequality as a general problem to the (inescapable) fact that an income distribution has a top one percent and a top one-tenth of one percent — as if there were something especially loathsome about people in those categories. (Paul Krugman shifted his focus to the top one-tenth of one percent when he realized that he’s in the top one percent, so perhaps he knows that he’s loathsome and wishes to deny it to himself.)

Crony capitalism is trotted out as a major cause of very high incomes. But that’s hardly a universal cause, given that a lot of very high incomes are earned by athletes and film stars beside whom most investment bankers and CEOs are making peanuts. Moreover, as I’ve said on several occasions, crony capitalists are bright and driven enough to be in the stratosphere of any income distribution. Further, the fertile soil of crony capitalism is the regulatory power of government.

Many economists became such, it would seem, in order to promote big government and its supposed good works — income redistribution being one of them. Joseph Stiglitz and Paul Krugman are two leading exemplars of what I call the New Deal school of economic thought, which amounts to throwing government and taxpayers’ money at every perceived problem, that is, every economic outcome that is deemed unacceptable by accountants of the soul. (See “Accountants of the Soul.”)

Stiglitz and Krugman — both Nobel laureates in economics — are typical “public intellectuals” whose intelligence breeds in them a kind of arrogance. (See “Intellectuals and Society: A Review.”) It’s the kind of arrogance that I mentioned in the preceding post in this series: a penchant for deciding what’s best for others.

New Deal economists like Stiglitz and Krugman carry it a few steps further. They ascribe to government an impeccable character, an intelligence to match their own, and a monolithic will. They then assume that this infallible and wise automaton can and will do precisely what they would do: Create the best of all possible worlds. (See the many posts in which I discuss the nirvana fallacy.)

New Deal economists, in other words, live their intellectual lives  in a dream-world populated by the likes of Jiminy Cricket (“When You Wish Upon a Star”), Dorothy (“Somewhere Over the Rainbow”), and Mary Jane of a long-forgotten comic book (“First I shut my eyes real tight, then I wish with all my might! Magic words of poof, poof, piffles, make me just as small as [my mouse] Sniffles!”).

I could go on, but you should by now have grasped the point: What too many economists want to do is change human nature, channel it in directions deemed “good” (by the economist), or simply impose their view of “good” on everyone. To do such things, they must rely on government.

It’s true that government can order people about, but it can’t change human nature, which has an uncanny knack for thwarting Utopian schemes. (Obamacare, whose chief architect was economist Jonathan Gruber, is exhibit A this year.) And government (inconveniently for Utopians) really consists of fallible, often unwise, contentious human beings. So government is likely to march off in a direction unsought by Utopian economists.

Nevertheless, it’s hard to thwart the tax collector. The regulator can and does make things so hard for businesses that those which do get off the ground can’t create as much prosperity or as many jobs as they would in the absence of regulation. And the redistributor only makes things worse by penalizing success. Tax, regulate, and redistribute should have been the mantra of the New Deal and most presidential “deals” since.

I hold economists of the New Deal stripe partly responsible for the swamp of stagnation into which the nation’s economy has descended. (See “Economic Growth Since World War II.”) Largely responsible, of course, are opportunistic if not economically illiterate politicians who pander to rent-seeking, economically illiterate constituencies. (Yes, I’m thinking of old folks and the various “disadvantaged” groups with which they have struck up an alliance of convenience.)

The distinction between normative economics and positive economics is of no particular use in sorting economists between advocates and scientists. A lot of normative economics masquerades as positive economics. The work of Thomas Piketty and his comrades-in-arms comes to mind, for example. (See “McCloskey on Piketty.”) Almost everything done to quantify and defend the Keynesian multiplier counts as normative economics, inasmuch as the work is intended (wittingly or not) to defend an intellectual scam of 80 years’ standing. (See “The Keynesian Multiplier: Phony Math,” “The True Multiplier,” and “Further Thoughts about the Keynesian Multiplier.”)

Enough said. If you want to see scientific economics in action, read Regulation. Not every article in it exemplifies scientific inquiry, but a good many of them do. It’s replete with articles about microeconomics, in which the authors use real-world statistics to validate and quantify the many axioms of economics.

A final thought is sparked by Arnold Kling’s post, “Ed Glaeser on Science and Economics.” Kling writes:

I think that the public has a sort of binary classification. If it’s “science,” then an expert knows more than the average Joe. If it’s not a science, then anyone’s opinion is as good as anyone else’s. I strongly favor an in-between category, called a discipline. Think of economics as a discipline, where it is possible for avid students to know more than ordinary individuals, but without the full use of the scientific method.

On this rare occasion I disagree with Kling. The accumulation of knowledge about economic variables, or pseudo-knowledge such as estimates of GDP (see “Macroeconomics and Microeconomics“), either leads to well-tested, verified, and reproducible theories of economic behavior or it leads to conjectures, of which there are so many opposing ones that it’s “take your pick.” If that’s what makes a discipline, give me the binary choice between science and story-telling. Most of economics seems to be story-telling. “Discipline” is just a fancy word for it.

Collecting baseball cards and memorizing the statistics printed on them is a discipline. Most of economics is less useful than collecting baseball cards — and a lot more destructive.

Here’s my hypothesis about economists: There are proportionally as many of them who act like scientists as there are baseball players who have career batting averages of at least .300.
__________
* Richard Feynman, a physicist and real scientist, had a view of the scientific method different from Karl Popper’s standard formulation. I see Feynman’s view as complementary to Popper’s, not at odds with it. What is “constructive skepticism” (Feynman’s term) but a gentler way of saying that a hypothesis or theory might be falsified and that the act of falsification may point to a better hypothesis or theory?


Economics and Science


This is the second entry in what I expect to be a series of loosely connected posts on economics. The first entry is here.

Science is unnecessarily daunting to the uninitiated, which is to say, the vast majority of the populace. Because scientific illiteracy is rampant, advocates of policy positions — scientists and non-scientists alike — are able to invoke “science” wantonly, thus lending unwarranted authority to their positions.

Here I will dissect science, then turn to economics and begin a discussion of its scientific and non-scientific aspects. It has both, though at least one non-scientific aspect (the Keynesian multiplier) draws an inordinate amount of attention, and has many true believers within the profession.

Science is knowledge, but not all knowledge is science. A scientific body of knowledge is systematic; that is, the granular facts or phenomena which comprise the body of knowledge must be connected in patterned ways. The purported facts or phenomena of a science must represent reality, things that can be observed and measured in some way. Scientists may hypothesize the existence of an unobserved thing (e.g., the ether, dark matter) in an effort to explain observed phenomena. But the unobserved thing stands outside scientific knowledge until its existence is confirmed by observation, or until it remains standing as the only plausible explanation of observable phenomena. Hypothesized things may remain outside the realm of scientific knowledge for a very long time, if not forever. The Higgs boson, for example, was hypothesized in 1964 and has been tentatively (but not conclusively) confirmed since its “discovery” in 2012.

Science has other key characteristics. Facts and patterns must be capable of validation and replication by persons other than those who claim to have found them initially. Patterns should have predictive power; thus, for example, if the sun fails to rise in the east, the model of Earth’s movements which says that it will rise in the east is presumably invalid and must be rejected or modified so that it correctly predicts future sunrises or the lack thereof. Creating a model or tweaking an existing model just to account for a past event (e.g., the failure of the Sun to rise, the apparent increase in global temperatures from the 1970s to the 1990s) proves nothing other than an ability to “predict” the past with accuracy.

Models are usually clothed in the language of mathematics and statistics. But those aren’t scientific disciplines in themselves; they are tools of science. Expressing a theory in mathematical terms may lend the theory a scientific aura, but a theory couched in mathematical terms is not a scientific one unless (a) it can be tested against facts yet to be ascertained and events yet to occur, and (b) it is found to accord with those facts and events consistently, by rigorous statistical tests.
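To make that testing standard concrete, here is a minimal sketch with made-up data of my own (nothing in it comes from any study or post cited in this series): a hypothesized relationship is fitted to past observations and then judged by how well it predicts observations that arrive only later.

```python
# A minimal sketch of out-of-sample testing; all data are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# "Past" observations, used to formulate the model, and "future" observations,
# which become available only after the model is stated.
x_past, x_future = rng.uniform(0, 10, 100), rng.uniform(0, 10, 50)
y_past = 3.0 + 2.0 * x_past + rng.normal(0, 1, 100)
y_future = 3.0 + 2.0 * x_future + rng.normal(0, 1, 50)

# Fit the hypothesized (linear) relationship on past data only.
slope, intercept, *_ = stats.linregress(x_past, y_past)

# Confront the fitted model with data it has never seen.
errors = y_future - (intercept + slope * x_future)

# If the model is adequate, out-of-sample errors should center on zero;
# a statistically significant mean error counts against the model.
t_stat, p_value = stats.ttest_1samp(errors, 0.0)
print(f"mean out-of-sample error: {errors.mean():+.3f} (p = {p_value:.2f})")
```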

A science may be descriptive rather than mathematical. In a descriptive science (e.g., plant taxonomy), particular phenomena sometimes are described numerically (e.g., the number of leaves on the stem of a species), but the relations among various phenomena are not reducible to mathematics. Nevertheless, a predominantly descriptive discipline will be scientific if the phenomena within its compass are connected in patterned ways, can be validated, and are applicable to newly discovered entities.

Non-scientific disciplines can be useful, whereas some purportedly scientific disciplines verge on charlatanism. Thus, for example:

  • History, by my reckoning, is not a science because its account of events and their relationships is inescapably subjective and incomplete. But a knowledge of history is valuable, nevertheless, for the insights it offers into the influence of human nature on the outcomes of economic and political processes.
  • Physics is a science in most of its sub-disciplines, but there are some (e.g., cosmology) where it descends into the realm of speculation. It is informed, fascinating speculation to be sure, but speculation all the same. The idea of multiverses, for example, can’t be tested, inasmuch as human beings and their tools are bound to the known universe.
  • Economics is a science only to the extent that it yields empirically valid insights about  specific economic phenomena (e.g., the effects of laws and regulations on the prices and outputs of specific goods and services). Then there are concepts like the Keynesian multiplier, about which I’ll say more in this series. It’s a hypothesis that rests on a simplistic, hydraulic view of the economic system. (Other examples of pseudo-scientific economic theories are the labor theory of value and historical determinism.)

In sum, there is no such thing as “science,” writ large; that is, no one may appeal, legitimately, to “science” in the abstract. A particular discipline may be a science, but it is a science only to the extent that it comprises a factual and replicable body of patterned knowledge. Patterned knowledge includes theories with predictive power.

A scientific theory is a hypothesis that has thus far been confirmed by observation. Every scientific theory rests eventually on axioms: self-evident principles that are accepted as true without proof. The principle of uniformity (which can be traced to Galileo) is an example of such an axiom:

Uniformitarianism is the assumption that the same natural laws and processes that operate in the universe now have always operated in the universe in the past and apply everywhere in the universe. It refers to invariance in the metaphysical principles underpinning science, such as the constancy of causal structure throughout space-time, but has also been used to describe spatiotemporal invariance of physical laws. Though an unprovable postulate that cannot be verified using the scientific method, uniformitarianism has been a key first principle of virtually all fields of science

Thus, for example, if observer B is moving away from observer A at a certain speed, observer A will perceive that he is moving away from observer B at that speed. It follows that an observer cannot determine either his absolute velocity or direction of travel in space. The principle of uniformity is a fundamental axiom of modern physics, most notably of Einstein’s special and general theories of relativity.

There’s a fine line between an axiom and a theory. Was the idea of a geocentric universe an axiom or a theory? If it was taken as axiomatic — as it surely was by many scientists for about 2,000 years — then it’s fair to say that an axiom can give way under the pressure of observational evidence. (Such an event is what Thomas Kuhn calls a paradigm shift.) But no matter how far scientists push the boundaries of knowledge, they must at some point rely on untestable axioms, such as the principle of uniformity. There are simply deep and (probably) unsolvable mysteries that science is unlikely to fathom.

This brings me to economics, which — in my view — rests on these self-evident axioms:

1. Each person strives to maximize his or her sense of satisfaction, which may also be called well-being, happiness, or utility (an ugly word favored by economists). Striving isn’t the same as achieving, of course, because of lack of information, emotional decision-making, buyer’s remorse, etc.

2. Happiness can and often does include an empathic concern for the well-being of others; that is, one’s happiness may be served by what is usually labelled altruism or self-sacrifice.

3. Happiness can be and often is served by the attainment of non-material ends. Not all persons (perhaps not even most of them) are interested in the maximization of wealth, that is, claims on the output of goods and services. In sum, not everyone is a wealth maximizer. (But see axiom number 12.)

4. The feeling of satisfaction that an individual derives from a particular product or service is situational — unique to the individual and to the time and place in which the individual undertakes to acquire or enjoy the product or service. Generally, however, there is a (situationally unique) point at which the acquisition or enjoyment of additional units of a particular product or service during a given period of time tends to offer less satisfaction than would the acquisition or enjoyment of units of other products or services that could be obtained at the same cost.

5. The value that a person places on a product or service is subjective. Products and services don’t have intrinsic values that apply to all persons at a given time or period of time.

6. The ability of a person to acquire products and services, and to accumulate wealth, depends (in the absence of third-party interventions) on the valuation of the products and services that are produced in part or whole by the person’s labor (mental or physical), or by the assets that he owns (e.g., a factory building, a software patent). That valuation is partly subjective (e.g., consumers’ valuation of the products and services, an employer’s qualitative evaluation of the person’s contributions to output) and partly objective (e.g., an employer’s knowledge of the price commanded by a product or service, an employer’s measurement of an employee’s contribution to the quantity of output).

7. The persons and firms from which products and services flow are motivated by the acquisition of income, with which they can acquire other products and services, and accumulate wealth for personal purposes (e.g., to pass to heirs) or business purposes (e.g., to expand the business and earn more income). So-called profit maximization (seeking to maximize the difference between the cost of production and revenue from sales) is a key determinant of business decisions but far from the only one. Others include, but aren’t limited to, being a “good neighbor,” providing employment opportunities for local residents, and underwriting philanthropic efforts.

8. The cost of production necessarily influences the price at which a good or service will be offered for sale, but doesn’t solely determine the price at which it will be sold. Selling price depends on the subjective valuation of the product or service, prospective buyers’ incomes, and the prices of other products and services, including those that are direct or close substitutes and those to which users may switch, depending on relative prices.

9. The feeling of satisfaction that a person derives from the acquisition and enjoyment of the “basket” of products and services that he is able to buy, given his income, etc., doesn’t necessarily diminish, as long as the person has access to a great variety of products and services. (This axiom and axiom 12 put paid to the myth of diminishing marginal utility of income.)

10. Work may be a source of satisfaction in itself or it may simply be a means of acquiring and enjoying products and services, or acquiring claims to them by accumulating wealth. Even when work is satisfying in itself, it is subject to the “law” of diminishing marginal satisfaction.

11. Work, for many (but not all) persons, is no longer worth the effort if they become able to subsist comfortably enough by virtue of the wealth that they have accumulated, the availability of redistributive schemes (e.g., Social Security and Medicare), or both. In such cases the accumulation of wealth often ceases and reverses course, as it is “cashed in” to defray the cost of subsistence (which may be far more than minimal).

12. However, there are not a few persons whose “work” is such a great source of satisfaction that they continue doing it until they are no longer capable of doing so. And there are some persons whose “work” is the accumulation of wealth, without limit. Such persons may want to accumulate wealth in order to “do good” or to leave their heirs well off or simply for the satisfaction of running up the score. The justification matters not. There is no theoretical limit to the satisfaction that a particular person may derive from the accumulation of wealth. Moreover, many of the persons (discussed in axiom 11) who aren’t able to accumulate wealth endlessly would do so if they had the ability and the means to take the required risks.

13. Individual degrees of satisfaction (happiness, etc.) are ephemeral, nonquantifiable, and incommensurable. There is no such thing as a social welfare function that a third party (e.g., government) can maximize by taking from A to give to B. If there were such a thing, its value would increase if, for example, A were to punch B in the nose and derive a degree of pleasure that somehow more than offsets the degree of pain incurred by B. (The absurdity of a social-welfare function that allows As to punch Bs in their noses ought to be enough to shame inveterate social engineers into quietude — but it won’t. They derive great satisfaction from meddling.) Moreover, one of the primary excuses for meddling is that income (and thus wealth) has a diminishing marginal utility, so it makes sense to redistribute from those with higher incomes (or more wealth) to those who have less of either. Marginal utility is, however, unknowable (see axioms 4 and 5), and may not be diminishing in any case (see axioms 9 and 12).

14. Whenever a third party (government, do-gooders, etc.) intervenes in the affairs of others, that third party is merely imposing its preferences on those others. The third party sometimes claims to know what’s best for “society as a whole,” etc., but no third party can know such a thing. (See axiom 13.)

15. It follows from axiom 13 that the welfare of “society as a whole” can’t be aggregated or measured. An estimate of the monetary value of the economic output of a nation’s economy (Gross Domestic Product) is by no means an estimate of the welfare of “society as a whole.” (Again, see axiom 13.)

That may seem like a lot of axioms, which might give you pause about my claim that some aspects of economics are scientific. But economics is inescapably grounded in axioms such as the ones that I propound. This aligns me (mainly) with the Austrian economists, whose leading light was Ludwig von Mises. Gene Callahan writes about him at the website of the Ludwig von Mises Institute:

As I understand [Mises], by categorizing the fundamental principles of economics as a priori truths and not contingent facts open to empirical discovery or refutation, Mises was not claiming that economic law is revealed to us by divine action, like the ten commandments were to Moses. Nor was he proposing that economic principles are hard-wired into our brains by evolution, nor even that we could articulate or comprehend them prior to gaining familiarity with economic behavior through participating in and observing it in our own lives. In fact, it is quite possible for someone to have had a good deal of real experience with economic activity and yet never to have wondered about what basic principles, if any, it exhibits.

Nevertheless, Mises was justified in describing those principles as a priori, because they are logically prior to any empirical study of economic phenomena. Without them it is impossible even to recognize that there is a distinct class of events amenable to economic explanation. It is only by pre-supposing that concepts like intention, purpose, means, ends, satisfaction, and dissatisfaction are characteristic of a certain kind of happening in the world that we can conceive of a subject matter for economics to investigate. Those concepts are the logical prerequisites for distinguishing a domain of economic events from all of the non-economic aspects of our experience, such as the weather, the course of a planet across the night sky, the growth of plants, the breaking of waves on the shore, animal digestion, volcanoes, earthquakes, and so on.

Unless we first postulate that people deliberately undertake previously planned activities with the goal of making their situations, as they subjectively see them, better than they otherwise would be, there would be no grounds for differentiating the exchange that takes place in human society from the exchange of molecules that occurs between two liquids separated by a permeable membrane. And the features which characterize the members of the class of phenomena singled out as the subject matter of a special science must have an axiomatic status for practitioners of that science, for if they reject them then they also reject the rationale for that science’s existence.

Economics is not unique in requiring the adoption of certain assumptions as a pre-condition for using the mode of understanding it offers. Every science is founded on propositions that form the basis rather than the outcome of its investigations. For example, physics takes for granted the reality of the physical world it examines. Any piece of physical evidence it might offer has weight only if it is already assumed that the physical world is real. Nor can physicists demonstrate their assumption that the members of a sequence of similar physical measurements will bear some meaningful and consistent relationship to each other. Any test of a particular type of measurement must pre-suppose the validity of some other way of measuring against which the form under examination is to be judged.

Why do we accept that when we place a yardstick alongside one object, finding that the object stretches across half the length of the yardstick, and then place it alongside another object, which only stretches to a quarter its length, that this means the first object is longer than the second? Certainly not by empirical testing, for any such tests would be meaningless unless we already grant the principle in question. In mathematics we don’t come to know that 2 + 2 always equals 4 by repeatedly grouping two items with two others and counting the resulting collection. That would only show that our answer was correct in the instances we examined — given the assumption that counting works! — but we believe it is universally true. [And it is universally true by the conventions of mathematics. If what we call “5” were instead called “4,” 2 + 2 would always equal 5. — TEA] Biology pre-supposes that there is a significant difference between living things and inert matter, and if it denied that difference it would also be denying its own validity as a special science. . . .

The great fecundity from such analysis in economics is due to the fact that, as acting humans ourselves, we have a direct understanding of human action, something we lack in pondering the behavior of electrons or stars. The contemplative mode of theorizing is made even more important in economics because the creative nature of human choice inherently fails to exhibit the quantitative, empirical regularities, the discovery of which characterizes the modern, physical sciences. (Biology presents us with an interesting intermediate case, as many of its findings are qualitative.) . . .

[A] person can be presented with scores of experiments indicating that a particular scientific theory is sound, but no possible experiment ever can demonstrate to him that experimentation is a reasonable means by which to evaluate a scientific theory. Only his intuitive grasp of its plausibility can bring him to accept that proposition. (Unless, of course, he simply adopts it on the authority of others.) He can be led through hundreds of rigorous proofs for various mathematical theorems and be taught the criteria by which they are judged to be sound, but there can be no such proof for the validity of the method itself. (Kurt Gödel famously demonstrated that a formal system of mathematical deduction that is complex enough to model even so basic a topic as arithmetic might avoid either incompleteness or inconsistency, but always must suffer at least one of those flaws.) . . .

This ultimate, inescapable reliance on judgment is illustrated by Lewis Carroll in Alice Through the Looking Glass. He has Alice tell Humpty Dumpty that 365 minus one is 364. Humpty is skeptical, and asks to see the problem done on paper. Alice dutifully writes down:

365 – 1 = 364

Humpty Dumpty studies her work for a moment before declaring that it seems to be right. The serious moral of Carroll’s comic vignette is that formal tools of thinking are useless in convincing someone of their conclusions if he hasn’t already intuitively grasped the basic principles on which they are built.

All of our knowledge ultimately is grounded on our intuitive recognition of the truth when we see it. There is nothing magical or mysterious about the a priori foundations of economics, or at least nothing any more magical or mysterious than there is about our ability to comprehend any other aspect of reality.

(Callahan has more to say here. For a technical discussion of the science of human action, or praxeology, read this. Some glosses on Gödel’s incompleteness theorem are here.)

I omitted an important passage from the preceding quotation, in order to single it out. Callahan says also that

Mises’s protégé F.A. Hayek, while agreeing with his mentor on the a priori nature of the “logic of action” and its foundational status in economics, still came to regard investigating the empirical issues that the logic of action leaves open as a more important undertaking than further examination of that logic itself.

I agree with Hayek. It’s one thing to know axiomatically that the speed of light is constant; it is quite another (and useful) thing to know experimentally that the speed of light (in empty space) is about 671 million miles an hour. Similarly, it is one thing to deduce from the axioms of economics that demand curves generally slope downward; it is quite another (and useful) thing to estimate specific demand functions.
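For instance, here is a minimal sketch of what estimating a specific demand function might look like, using fabricated data of my own (the constant-elasticity form and the numbers are assumptions for illustration, not results from any source cited here): regress the log of quantity on the log of price and read off the price elasticity.

```python
# A sketch of estimating a constant-elasticity demand function from fabricated
# price/quantity data. Real work would also have to deal with simultaneity
# (prices and quantities are jointly determined), which is ignored here.
import numpy as np

rng = np.random.default_rng(42)

true_elasticity = -1.3                  # assumed, to generate the fake data
prices = rng.uniform(2.0, 10.0, 200)
quantities = 500 * prices ** true_elasticity * rng.lognormal(0.0, 0.1, 200)

# Constant-elasticity demand: ln q = a + b ln p, where b is the price elasticity.
log_p, log_q = np.log(prices), np.log(quantities)
b, a = np.polyfit(log_p, log_q, 1)      # slope first, then intercept

print(f"estimated price elasticity: {b:.2f}")
# Interpretation: a 1% price increase is estimated to reduce quantity demanded
# by roughly |b| percent -- the downward-sloping demand that the axioms imply.
```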

But one must always be mindful of the limitations of quantitative methods in economics. As James Sheehan writes at the website of the Mises Institute,

economists are prone to error when they ascribe excessive precision to advanced statistical techniques. They assume, falsely, that a voluminous amount of historical observations (sample data) can help them to make inferences about the future. They presume that probability distributions follow a bell-shaped pattern. They make no provision for the possibility that past correlations between economic variables and data were coincidences.

Nor do they account for the possibility, as economist Robert Lucas demonstrated, that people will incorporate predictable patterns into their expectations, thus canceling out the predictive value of such patterns. . . .

As [Nassim Nicholas] Taleb points out [in Fooled by Randomness], the popular Monte Carlo simulation “is more a way of thinking than a computational method.” Employing this way of thinking can enhance one’s understanding only if its weaknesses are properly understood and accounted for. . . .

Taleb’s critique of econometrics is quite compatible with Austrian economics, which holds that dynamic human actions are too subjective and variegated to be accurately modeled and predicted.

In some parts of Fooled by Randomness, Taleb almost sounds Austrian in his criticisms of economists who worship “the efficient market religion.” Such economists are misguided, he argues, because they begin with the flawed hypothesis that human beings act rationally and do what is mathematically “optimal.” . . .

As opposed to a Utopian Vision, in which human beings are rational and perfectible (by state action), Taleb adopts what he calls a Tragic Vision: “We are faulty and there is no need to bother trying to correct our flaws.” It is refreshing to see a highly successful practitioner of statistics and finance adopt a contrarian viewpoint towards economics.
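Sheehan’s point about bell-shaped assumptions is easy to illustrate with a toy simulation of my own (the numbers are invented; nothing here is drawn from Sheehan or Taleb): a Monte Carlo exercise built on a normal distribution will badly understate the likelihood of extreme outcomes when the underlying process is fat-tailed.

```python
# A toy comparison of tail risk under a fat-tailed process versus a normal
# distribution calibrated to the same mean and standard deviation.
import numpy as np

rng = np.random.default_rng(7)
n = 1_000_000

# "True" process: fat-tailed daily returns (Student's t, 3 degrees of freedom,
# scaled to roughly 1% daily volatility).
true_returns = 0.01 / np.sqrt(3.0) * rng.standard_t(df=3, size=n)

# Analyst's model: a bell curve fitted to the observed mean and spread.
modeled_returns = rng.normal(true_returns.mean(), true_returns.std(), n)

threshold = -0.05  # a one-day loss worse than 5%
print("probability of a one-day loss worse than 5%:")
print(f"  fat-tailed process : {np.mean(true_returns < threshold):.5f}")
print(f"  normal assumption  : {np.mean(modeled_returns < threshold):.5f}")
```

The bell-curve version reports such losses as vanishingly rare, while the fat-tailed process produces them far more often; that gap is the kind of error Taleb’s critique targets.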

Yet, as Arnold Kling explains, many (perhaps most) economists have lost sight of the axioms of economics in their misplaced zeal to emulate the methods of the physical sciences:

The most distinctive trend in economic research over the past hundred years has been the increased use of mathematics. In the wake of Paul Samuelson’s (Nobel 1970) Ph.D dissertation, published in 1948, calculus became a requirement for anyone wishing to obtain an economics degree. By 1980, every serious graduate student was expected to be able to understand the work of Kenneth Arrow (Nobel 1972) and Gerard Debreu (Nobel 1983), which required mathematics several semesters beyond first-year calculus.

Today, the “theory sequence” at most top-tier graduate schools in economics is controlled by math bigots. As a result, it is impossible to survive as an economics graduate student with a math background that is less than that of an undergraduate math major. In fact, I have heard that at this year’s American Economic Association meetings, at a seminar on graduate education one professor quite proudly said that he ignored prospective students’ grades in economics courses, because their math proficiency was the key predictor of their ability to pass the coursework required to obtain an advanced degree.

The raising of the mathematical bar in graduate schools over the past several decades has driven many intelligent men and women (perhaps women especially) to pursue other fields. The graduate training process filters out students who might contribute from a perspective of anthropology, biology, psychology, history, or even intense curiosity about economic issues. Instead, the top graduate schools behave as if their goal were to produce a sort of idiot-savant, capable of appreciating and adding to the mathematical contributions of other idiot-savants, but not necessarily possessed of any interest in or ability to comprehend the world to which an economist ought to pay attention.

. . . The basic question of What Causes Prosperity? is not a question of how trading opportunities play out among a given array of goods. Instead, it is a question of how innovation takes place or does not take place in the context of institutional factors that are still poorly understood.

Mathematics, as I have said, is a tool of science; it’s not science in itself. Dressing hypothetical relationships in the garb of mathematics doesn’t validate them.

Where, then, is the science in economics? And where is the nonsense? Stay tuned.

Not-So-Random Thoughts (XVIII)


Links to the other posts in this occasional series may be found at “Favorite Posts,” just below the list of topics.

Charles Murray opines about “America Against Itself“:

With the publication in 2012 of Coming Apart: The State of White America, 1960-2010, political scientist Charles Murray – celebrated and denigrated in equal measure for his earlier works, Losing Ground (1984) and The Bell Curve (1994) – produced a searing, searching analysis of a nation cleaving along the lines of class, a nation, as he put it, ‘coming apart at the seams’. On the one side of this conflicted society, as Murray sees it, there is the intellectual or ‘cognitive’ elite, graduates of America’s leading universities, bound together through marriage and work, and clustered together in the same exclusive zipcodes, places such as Beverly Hills, Santa Monica and Boston. In these communities of the likeminded, which Murray gives the fictional title of ‘Belmont’, the inhabitants share the same values, the same moral outlook, the same distinct sense of themselves as superior. And on the other side, there is the ‘new lower class’, the white Americans who left education with no more than a high-school diploma, who increasingly divorce among themselves, endure unemployment together, and are gathered in neighbourhoods that Murray gives the title of ‘Fishtown’ – inspired by an actual white, blue-collar neighbourhood of the same name in Philadelphia.

It is in Fishtown that the trends Murray identifies as the most damaging over the past 50 years – family breakdown, loss of employment, crime and a loss of social capital – are felt and experienced. Its inhabitants have a set of values (albeit threadbare ones), an outlook and a way of life that are entirely at odds with those from Belmont. And it is between these two almost entirely distinct moral communities, that the new Culture Wars now appear to be being fought….

Collins: I was thinking about how, in Coming Apart, you explore how the elites seek to distance themselves from the working class. They eat so-called healthier foods, they have different child-rearing practices, and so on. Then, from afar, they preach their preferred ways to the working class, as if they know better. The elites may no longer preach traditional civic virtues, as you note in Coming Apart, but they are still preaching, in a way. Only now they’re preaching about health, parenting and other things.

Murray: They are preaching. They are legislating. They are creating policies. The elites (on both the right and the left) do not get excited about low-skill immigration. Let’s face it, if you are members of the elite, immigration provides you with cheap nannies, cheap lawn care, and so on. There are a variety of ways in which it is a case of ‘hey, it’s no skin off my back’ to have all of these new workers. The elites are promulgating policies for which they do not pay the price. That’s true of immigration, that’s true of education. When they support the teachers’ unions in all sorts of practices that are terrible for kids, they don’t pay that price. Either they send their kids to private schools, or they send their kids to schools in affluent suburbs in which they, the parents, really do have a lot of de facto influence over how the school is run.

So they don’t pay the price for policy after policy. Perhaps the most irritating to me – and here we are talking about preaching – is how they are constantly criticising the working class for being racist, for seeking to live in neighbourhoods in which whites are the majority. The elites live in zipcodes that are overwhelmingly white, with very few blacks and Latinos. The only significant minorities in elite zipcodes are East and South Asians. And, as the American sociologist Andrew Hacker has said, Asians are ‘honorary whites’. The integration that you have in elite neighbourhoods is only for the model minority, not for other minorities. That’s a kind of hypocrisy, to call working-class whites ‘racist’ for doing exactly the same thing that the elites do. It’s terrible.

The elites live in a bubble, which Murray explains in Coming Apart, and which I discuss in “Are You in the Bubble?” — I’m not — and “Bubbling Along.”

*     *     *

Meanwhile, in the climate war, there’s an interesting piece about scientists who got it right, but whose article was pulled because they used pseudonyms. In “Scientists Published Climate Research Under Fake Names. Then They Were Caught” we learn that

they had constructed a model, a mathematical argument, for calculating the average surface temperature of a rocky planet. Using just two factors — electromagnetic radiation beamed by the sun into the atmosphere and the atmospheric pressure at a planet’s surface — the scientists could predict a planet’s temperature. The physical principle, they said, was similar to the way that high-pressure air ignites fuel in a diesel engine.

If proved to be the case on Earth, the model would have dramatic implications: Our planet is warming, but the solar radiation and our atmosphere would be to blame, not us.

It seems to me that their real sin was contradicting the “settled science” of climatology.

Well, Francis Menton — author of “The ‘Science’ Underlying Climate Alarmism Turns Up Missing” — has something to say about that “settled science”:

In the list of President Obama’s favorite things to do, using government power to save the world from human-caused “climate change” has to rank at the top.  From the time of his nomination acceptance speech in June 2008 (“this was the moment when the rise of the oceans began to slow and our planet began to heal . . .”), through all of his State of the Union addresses, and right up to the present, he has never missed an opportunity to lecture us on how atmospheric warming from our sinful “greenhouse gas” emissions is the greatest crisis facing humanity….

But is there actually any scientific basis for this?  Supposedly, it’s to be found in a document uttered by EPA back in December 2009, known as the “Endangerment Finding.”  In said document, the geniuses at EPA purport to find that the emissions of “greenhouse gases” into the atmosphere are causing a danger to human health and welfare through the greenhouse warming mechanism.  But, you ask, is there any actual proof of that?  EPA’s answer (found in the Endangerment Finding) is the “Three Lines of Evidence”….

The news is that a major new work of research, from a large group of top scientists and mathematicians, asserts that EPA’s “lines of evidence,” and thus its Endangerment Finding, have been scientifically invalidated….

So the authors of this Report, operating without government or industry funding, compiled the best available atmospheric temperature time series from 13 independent sources (satellites, balloons, buoys, and surface records), and then backed out only ENSO (i.e., El Nino/La Nina) effects.  And with that data and that sole adjustment they found: no evidence of the so-called Tropical Hot Spot that is the key to EPA’s claimed “basic physical understanding” of the claimed atmospheric greenhouse warming model, plus no statistically significant atmospheric warming at all to be explained.

What an amazing non-coincidence. That’s exactly what I found when I looked at the temperature record for Austin, Texas, since the late 1960s, when AGW was supposedly making life miserable for the planet. See “AGW in Austin? (II)” and the list of related readings and posts at the bottom. See also “Is Science Self-Correcting?” (answer: no).

*     *     *

Ten years ago, I posted “An Immigration Roundup,” a collection of 13 posts dated March 29 through September 22, 2006. The bottom line: to encourage and allow rampant illegal immigration borders on social and economic suicide. I later softened my views (see this and this). But I am swinging back toward the hard line because of Steven Camarota’s “So What Is the Fiscal and Economic Impact of Immigration?“:

The National Academies of Sciences, Engineering, and Medicine have just released what can fairly be described as the most comprehensive look at the economic and fiscal impact of immigration on the United States. It represents an update of sorts of a similar NAS study released in 1997, in the middle of an earlier immigration debate. Overall the report is quite balanced, with a lot of interesting findings….
 
The most straightforward part of the study is its assemblage of estimates of the current fiscal impact of immigrants. The study shows that immigrants (legal and illegal) do not come close to paying enough in taxes to cover their consumption of public services at the present time. The NAS present eight different scenarios based on different assumptions about the current fiscal impact of immigrants and their dependent children — and every scenario is negative. No matter what assumption the NAS makes, immigrants use more in public services than they pay in taxes. The largest net drain they report is $299 billion a year. It should be pointed out that native-born American are also shown to be a net fiscal drain, mainly because of the federal budget deficit — Washington gives out a lot more than it takes in. But the fiscal drain created by immigrants is disproportionately large relative to the size of their population. Equally important, a fiscal drain caused by natives may be unavoidable. Adding more immigrants who create a fiscal drain, on the other hand, can be avoided with a different immigration policy….
 
With regard to economics — jobs and wages — the results in the NAS study, based on the standard economic model, show that immigration does make the U.S economy larger by adding workers and population. But a larger economy is not necessarily a benefit to natives. The report estimates that the actual benefit to the native-born could be $54.2 billion a year — referred to as the “immigrant surplus.” This is the benefit that accrues to American businesses because immigration increases the supply of workers and reduces American wages. Several points need to be made about this estimate. First, to generate this surplus, immigration has to create a very large redistribution of income from workers to owners of capital. The model works this way: Immigration reduces the wages of natives in competition with immigrant workers by $493.9 billion annually, but it increases the income of businesses by $548.1 billion, for a net gain of $54.2 billion. Unfortunately, the NAS does not report this large income redistribution, though it provides all the information necessary to calculate it. A second key point about this economic gain is that, relative to the income of natives, the benefit is very small, representing a “0.31 percent overall increase in income” for native-born Americans.
Third, the report also summarizes empirical studies that have tried to measure directly the impact of immigration on the wages of natives (the analysis above being based on economic theory rather than direct measurement). The size of the wage impact in those empirical studies is similar to that shown above. The NAS report cites over a dozen studies indicating that immigration does reduce wages primarily for the least-educated and poorest Americans. It must be pointed out, however, that there remains some debate among economists about immigration’s wage impact. The fourth and perhaps most important point about the “immigrant surplus” is that it is eaten up by the drain on the public fisc. For example, the average of all eight fiscal scenarios is a net drain (taxes minus services) of $83 billion a year at the present time, a good deal larger than the $54.2 billion immigrant surplus.

 

There’s much more, but that’s enough for me. Build that wall!

*     *     *

It’s also time to revisit the question of crime. Heather Mac Donald says “Yes, the Ferguson Effect Is Real,” and Paul Mirengoff shows that “Violent Crime Jumped in 2015.” I got to the root of the problem in “Crime Revisited,” to which I’ve added “Amen to That” and “Double Amen.”

What’s the root of the problem? A certain, violence-prone racial minority, of course, and also under-incarceration. Follow all of the links in the preceding paragraph, and read and weep.

“Feelings, nothing more than feelings”


Physicalism is the thesis that everything is physical, or as contemporary philosophers sometimes put it, that everything supervenes on the physical. The thesis is usually intended as a metaphysical thesis, parallel to the thesis attributed to the ancient Greek philosopher Thales, that everything is water, or the idealism of the 18th Century philosopher Berkeley, that everything is mental. The general idea is that the nature of the actual world (i.e. the universe and everything in it) conforms to a certain condition, the condition of being physical. Of course, physicalists don’t deny that the world might contain many items that at first glance don’t seem physical — items of a biological, or psychological, or moral, or social nature. But they insist nevertheless that at the end of the day such items are either physical or supervene on the physical.

Daniel Stoljar, “Physicalism”
(Stanford Encyclopedia of Philosophy,
first published February 13, 2001,
substantively revised March 9, 2015)

Robin Hanson, an economics professor and former physicist, takes the physicalist position in “All Is Simple Parts Interacting Simply“:

There is nothing that we know of that isn’t described well by physics, and everything that physicists know of is well described as many simple parts interacting simply. Parts are localized in space, have interactions localized in time, and interactions effects don’t move in space faster than the speed of light. Simple parts have internal states that can be specified with just a few bits (or qubits), and each part only interacts directly with a few other parts close in space and time. Since each interaction is only between a few bits on a few sides, it must also be simple. Furthermore, all known interactions are mutual in the sense that the state on all sides is influenced by states of the other sides….

Not only do we know that in general everything is made of simple parts interacting simply, for pretty much everything that happens here on Earth we know those parts and interactions in great precise detail. Yes there are still some areas of physics we don’t fully understand, but we also know that those uncertainties have almost nothing to say about ordinary events here on Earth….

Now it is true that when many simple parts are combined into complex arrangements, it can be very hard to calculate the detailed outcomes they produce. This isn’t because such outcomes aren’t implied by the math, but because it can be hard to calculate what math implies.

However,

what I’ve said so far is usually accepted as uncontroversial, at least when applied to the usual parts of our world, such as rivers, cars, mountains, laptops, or ants. But as soon as one claims that all this applies to human minds, suddenly it gets more controversial. People often state things like this:

I am sure that I’m not just a collection of physical parts interacting, because I’m aware that I feel. I know that physical parts interacting just aren’t the kinds of things that can feel by themselves. So even though I have a physical body made of parts, and there are close correlations between my feelings and the states of my body parts, there must be something more than that to me (and others like me). So there’s a deep mystery: what is this extra stuff, where does it arise, how does it change, and so on. We humans care mainly about feelings, not physical parts interacting; we want to know what out there feels so we can know what to care about.

But consider a key question: Does this other feeling stuff interact with the familiar parts of our world strongly and reliably enough to usually be the actual cause of humans making statements of feeling like this?

If yes, this is a remarkably strong interaction, making it quite surprising that physicists have missed it so far. So surprising in fact as to be frankly unbelievable.

But if no, if this interaction isn’t strong enough to explain human claims of feeling, then we have a remarkable coincidence to explain. Somehow this extra feeling stuff exists, and humans also have a tendency to say that it exists, but these happen for entirely independent reasons. The fact that feeling stuff exists isn’t causing people to claim it exists, nor vice versa. Instead humans have some sort of weird psychological quirk that causes them to make such statements, and they would make such claims even if feeling stuff didn’t exist. But if we have a good alternate explanation for why people tend to make such statements, what need do we have of the hypothesis that feeling stuff actually exists? Such a coincidence seems too remarkable to be believed.

Thus it seems hard to square a belief in this extra feeling stuff with standard physics in either cases, where feeling stuff does or does not have strong interactions with ordinary stuff. The obvious conclusion: extra feeling stuff just doesn’t exist.

Of course the “feeling stuff” interacts strongly and reliably with the familiar parts of the world — unless you’re a Robin Hanson, who seems to have no “feeling stuff.” Has he never been insulted, been cut off by a rude lane-changer, fallen in love, or held a baby in his arms, and so on unto infinity?

Hanson continues:

If this type of [strong] interaction were remotely as simple as all the interactions we know, then it should be quite measurable with existing equipment. Any interaction not so measurable would have be vastly more complex and context dependent than any we’ve ever seen or considered. Thus I’d bet heavily and confidently that no one will measure such an interaction.

Which is just a stupid thing to say. Physicists haven’t measured the interactions — and probably never will — because they’re not the kinds of phenomena that physicists study. Psychologists, yes; physicists, no.

Not being satisfied with obtuseness and stupidity, Hanson concedes the existence of “feelings,” but jumps to a conclusion in order to dismiss them:

But if no, if this interaction isn’t strong enough to explain human claims of feeling, then we have a remarkable coincidence to explain. Somehow this extra feeling stuff exists, and humans also have a tendency to say that it exists, but these happen for entirely independent reasons. The fact that feeling stuff exists isn’t causing people to claim it exists, nor vice versa. Instead humans have some sort of weird psychological quirk that causes them to make such statements, and they would make such claims even if feeling stuff didn’t exist….

Thus it seems hard to square a belief in this extra feeling stuff with standard physics in either cases, where feeling stuff does or does not have strong interactions with ordinary stuff. The obvious conclusion: extra feeling stuff just doesn’t exist.

How does Hanson — the erstwhile physicist — know any of this? I submit that he doesn’t know. He’s just arguing circularly, as an already-committed physicalist.

First, Hanson assumes that feelings aren’t “real” because physicists haven’t measured their effects. But that failure has been for lack of trying.

Then Hanson assumes that the absence of evidence is evidence of absence. Specifically, because there’s no evidence (as he defines it) for the existence of “feelings,” their existence (if real) is merely coincidental with claims of their existence.

And then Hanson the Obtuse ignores strong interactions of “feeling stuff” with “ordinary stuff.” Which suggests that he has never experienced love, desire, or hate (for starters).

It would be reasonable for Hanson to suggest that feelings are real, in a physical sense, in that they represent chemical states of the central nervous system. He could then claim that feelings don’t exist apart from such states; that is, “feeling stuff” is nothing more than a physical phenomenon. Hanson makes that claim, but in a roundabout way:

If everything around us is explained by ordinary physics, then a detailed examination of the ordinary physics of familiar systems will eventually tells us everything there is to know about the causes and consequences of our feelings. It will say how many different feelings we are capable of, what outside factors influence them, and how our words and actions depend on them.

However, he gets there by assuming an answer to the question whether “feelings” are something real and apart from physical existence. He hasn’t proven anything, one way or the other.

Hanson’s blog is called Overcoming Bias. It’s an apt title: Hanson has a lot of bias to overcome.

Related posts:
Why I Am Not an Extreme Libertarian
Blackmail, Anyone?
NEVER FORGIVE, NEVER FORGET, NEVER RELENT!
Utilitarianism vs. Liberty (II)

Is Science Self-Correcting?

Standard

A long-time colleague, in response to a provocative article about the sins of scientists, characterized it as “garbage” and asserted that science is self-correcting.

I should note here that my colleague abhors “extreme” views, and would cross the street to avoid a controversy. As a quondam scientist, he thinks of a challenge to the integrity of science as “extreme.” Which strikes me as an unscientific attitude.

Science is only self-correcting on a time scale of decades, or even centuries. Wrong-headed theories can persist for a very long time. And the problem has become worse in the past six decades.

What has changed in the past six decades? Sputnik spurred a (relatively) massive increase in government-funded research. This created a new and compelling incentive: produce research that comports with the party line. The party line isn’t necessarily the line of the party then in power, but the line favored by the bureaucrats in charge of doling out money.

On top of that, politically incorrect research is generally frowned upon. And when it surfaces it is attacked en masse by academicians who are eager to prove their political correctness.

Thus it is that the mere coincidence of a rise in CO2 emissions and a rise in temperatures in the latter part of the 20th century became the basis for kludgey models which “prove” AGW — preferably of the “catastrophic” kind — while essentially ignoring eons of evidence to the contrary. Skeptics (i.e., scientists doing what scientists should do) are attacked viciously when they aren’t simply ignored. The attackers are, all too often, people who call themselves scientists.

And thus it is that research into the connection between race and intelligence has been discouraged and even suppressed at universities. This despite truckloads of evidence that there is such a connection.

Those two examples don’t represent all of science, to be sure, but they’re a sad commentary on the state of science — in some fields, at least.

There are many more examples in Politicizing Science: The Alchemy of Policy-Making, edited by Michael Gough. I haven’t read the book, but I’m familiar with most of the cases documented by the contributors. The cases are about scientists behaving badly, and about non-scientists misusing science and advocating policies that lack firm scientific backing.

Scientists have behaved badly since the dawn of science, though — as discussed above — there are now more (or different) incentives to behave badly than there were in the past. But non-scientists (especially politicians) will behave badly regardless of and contrary to scientific knowledge. So I won’t blame science or scientists for that behavior, except to the extent that scientists are actively abetting the bad behavior of non-scientists.

Which brings me to the matter of science being self-correcting. I am an avid (perhaps rabid) anti-reificationist. So I must say here that there is no such thing as “science.” There’s only what scientists “do” and claim to know.

It’s possible, though not certain, that future scientists will correct the errors of their predecessors — whether those errors arose from honest mistakes or bias. But, in the meantime, the errors persist and are used to abet policies that have costly, harmful, and even fatal consequences for multitudes of people. And most of that damage can’t be undone.

So, in this age of weaponized science, I take no solace in the idea that the errors of its practitioners and abusers might, someday, be recognized. The errors of knowledge might be corrected, but the errors of application are (mostly) beyond remedy.

Here’s an analogy: The errors of the builders, owners, captain, and crew of RMS Titanic seem to have been corrected, in that there hasn’t been a repetition of the conditions and events that led to the ship’s sinking. But that doesn’t make up for the loss of 1,514 lives, the physical and emotional suffering of the 710 survivors, the loss of a majestic ship, the loss of much valuable property, or the grief of the families and friends of those who were lost.

In sum, the claim that science is self-correcting amounts to a fatuous excuse for the irreparable damage that is often done in the name of science.

*      *      *

Related posts:

Demystifying Science
Scientism, Evolution, and the Meaning of Life
The Fallacy of Human Progress
Pinker Commits Scientism
AGW: The Death Knell (with many links to related readings and earlier posts)
The Limits of Science (II)
The Pretence of Knowledge
“The Science Is Settled”
The Limits of Science, Illustrated by Scientists
Not-So-Random Thoughts (XIV) (second item)
Rationalism, Empiricism, and Scientific Knowledge
AGW in Austin?
Understanding Probability: Pascal’s Wager and Catastrophic Global Warming
The Technocratic Illusion
The Precautionary Principle and Pascal’s Wager
Further Pretensions of Knowledge
“And the Truth Shall Set You Free”
AGW in Austin? (II)

AGW in Austin? (II)

Standard

I said this in “AGW in Austin?“:

There’s a rise in temperatures [in Austin] between the 1850s and the early 1890s, consistent with the gradual warming that followed the Little Ice Age. The gap between the early 1890s and mid-19naughts seems to have been marked by lower temperatures. It’s possible to find several mini-trends between the mid-19naughts and 1977, but the most obvious “trend” is a flat line for the entire period….

Following the sudden jump between 1977 and 1980, the “trend” remains almost flat through 1997, albeit at a slightly higher level….

The sharpest upward trend really began after the very strong (and naturally warming) El Niño of 1997-1998….

Oh, wait! It turns out that Austin’s sort-of hot-spell from 1998 to the present coincides with the “pause” in global warming….

The rapid increase in Austin’s population since 2000 probably has caused an acceleration of the urban heat-island (UHI) effect. This is known to inflate city temperatures above those in the surrounding countryside by several degrees.

What about drought? In Austin, the drought of recent years is far less severe than the drought of the 1950s, but temperatures have risen more in recent years than they did in the 1950s….

Why? Because Austin’s population is now six times greater than it was in the 1950s. The UHI effect has magnified the drought effect.

Conclusion: Austin’s recent hot weather has nothing to do with AGW.

Now, I’ll quantify the relationship between temperature, precipitation, and population. Here are a few notes about the analysis:

  • I have annual population estimates for Austin from 1960 to the present. However, to tilt the scale in favor of AGW, I used values for 1968-2015, because the average temperature in 1968 was the lowest recorded since 1924.
  • I reduced the official population figures for 1998-2015 to reflect a major annexation in 1998 that significantly increased Austin’s population. The statistical effect of that adjustment is to reduce the apparent effect of population on temperature — thus further tilting the scale in favor of AGW.
  • The official National Weather Service station moved from Mueller Airport (near I-35) to Camp Mabry (near Texas Loop 1) in 1999. I ran the regression for 1968-2015 with a dummy variable for location, but that variable is statistically insignificant.

Here’s the regression equation for 1968-2015:

T = -0.049R + 5.57E-06P + 67.8

Where,

T = average annual temperature (degrees Fahrenheit)

R = annual precipitation (inches)

P = mid-year population (adjusted, as discussed above)

The r-squared of the equation is 0.538, which is considerably better than the r-squared for a simple time trend (see the first graph below). The standard error of the estimate is 1.01 degrees; the p-value of the F-statistic is 2.96E-08; and the p-values for the two variables and the intercept are 0.00313, 2.19E-08, and 7.34E-55, respectively, all highly significant.
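For readers who want to replicate this kind of fit, here’s a minimal sketch in Python, using the statsmodels library. The file name and column names are hypothetical, and the sketch isn’t a record of my exact steps; it simply shows the shape of the calculation, including the dummy variable for the 1999 station move mentioned in the notes above.

```python
# Minimal sketch of the regression described above.
# The file name and column names are hypothetical.
import pandas as pd
import statsmodels.api as sm

# Expected columns: year, temp_f (average annual temperature, deg F),
# precip_in (annual precipitation, inches), population (adjusted mid-year population)
data = pd.read_csv("austin_annual.csv")
data = data[(data["year"] >= 1968) & (data["year"] <= 2015)]

# Dummy = 1 for years at Camp Mabry (1999 onward), 0 for Mueller Airport
data["station_move"] = (data["year"] >= 1999).astype(int)

X = sm.add_constant(data[["precip_in", "population", "station_move"]])
model = sm.OLS(data["temp_f"], X).fit()

print(model.summary())   # coefficients, r-squared, F-statistic, and p-values in one table
```

If the coefficient on the station_move dummy turns out to be statistically insignificant, as reported above, it can be dropped and the model refit on precipitation and population alone.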

Here’s a graph of actual vs. predicted temperatures:

Actual vs. predicted average annual temperatures in Austin

The residuals are randomly distributed with respect to time and the estimated values of T, so there’s no question (in my mind) about having omitted a significant variable:

Residuals of average annual temperatures vs. year

Residuals of average annual temperatures vs. estimated values of T

Austin’s average annual temperature rose by 3.6 degrees F between 1968 and 2015, that is, from 66.2 degrees to 69.8 degrees. According to the regression equation, the rise in Austin’s population from 234,000 in 1968 to 853,000 (adjusted) in 2015 accounts for essentially all of the increase — 3.5 degrees of it, to be precise. That’s well within the range of urban heat-island effects for big cities, and it’s obvious that Austin became a big city between 1968 and 2015. It also agrees with the estimated effect of Austin’s population increase, as derived from the equation for North American cities in T.R. Oke’s “City Size and the Urban Heat Island.” The equation (simplified for ease of reproduction) is

T’ = 2.96 log P – 6.41

Where,

T’ = change in temperature, degrees C

P = population, holding area constant

The author reports r-squared = 0.92 and SE = 0.7 degrees C (1.26 degrees F).

The estimated UHI effect of Austin’s population growth from 1968 to 2015, according to Oke’s equation, is 2.99 degrees F. Given the standard error of that equation, the 2.99-degree estimate isn’t significantly different from my estimate of 3.5 degrees or from the actual increase of 3.6 degrees.
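To check the arithmetic, here’s a minimal sketch of both estimates. It assumes that “log” in Oke’s equation means log base 10 (the reading that reproduces the 2.99-degree figure); the constant -6.41 cancels when taking the difference between two population levels. The regression-based figure comes out at about 3.45 degrees because the sketch uses the rounded coefficient shown above.

```python
# Worked check of the two UHI estimates discussed above.
import math

pop_1968 = 234_000
pop_2015 = 853_000   # adjusted for the 1998 annexation, as described above

# Regression-based estimate: population coefficient (deg F per person) times growth
delta_f_regression = 5.57e-06 * (pop_2015 - pop_1968)
print(f"Regression estimate: {delta_f_regression:.2f} deg F")   # about 3.45 deg F

# Oke's equation, T' = 2.96*log10(P) - 6.41, in degrees C; the constant cancels in the difference
delta_c_oke = 2.96 * (math.log10(pop_2015) - math.log10(pop_1968))
delta_f_oke = delta_c_oke * 9 / 5
print(f"Oke estimate: {delta_f_oke:.2f} deg F")   # about 2.99 deg F
```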

I therefore dismiss the possibility that population is a proxy for the effects of CO2 emissions, which — if they significantly affect temperature (a big “if”) — do so because of their prevalence in the atmosphere, not because of their concentration in particular areas. And Austin’s hottest years occurred during the “pause” in global warming after 1998. There was no “pause” in Austin because its population continued to grow rapidly; thus:

12-month average temperatures in Austin, 1903-2016

Bottom line: Austin’s temperature can be accounted for by precipitation and population. AGW will have to find another place in which to work its evil magic.

*     *     *

Related reading:
U.S. climate page at WUWT
Articles about UHI at WUWT
David Evans, “There Is No Evidence,” Science Speak, June 16, 2009
Roy W. Spencer, “Global Urban Heat Island Effect Study – An Update,” WUWT, March 10, 2010
David M.W. Evans, “The Skeptic’s Case,” Science Speak, August 16, 2012
Anthony Watts, “UHI – Worse Than We Thought?,” WUWT, August 20, 2014
Christopher Monckton of Brenchley, “The Great Pause Lengthens Again,” WUWT, January 3, 2015
Anthony Watts, “Two New Papers Suggest Solar Activity Is a ‘Climate Pacemaker‘,” WUWT, January 9, 2015
John Hinderaker, “Was 2014 Really the Warmest Year Ever?,” PowerLine, January 16, 2015
Roy W. Spencer, John R. Christy, and William D. Braswell, “Version 6.0 of the UAH Temperature Dataset Released: New LT Trend = +0.11 C/decade,” DrRoySpencer.com, April 28, 2015
Bob Tisdale, “New UAH Lower Troposphere Temperature Data Show No Global Warming for More Than 18 Years,” WUWT, April 29, 2015
Patrick J. Michaels and Charles C. Knappenberger, “You Ought to Have a Look: Science Round Up—Less Warming, Little Ice Melt, Lack of Imagination,” Cato at Liberty, May 1, 2015
Mike Brakey, “151 Degrees Of Fudging…Energy Physicist Unveils NOAA’s “Massive Rewrite” Of Maine Climate History,” NoTricksZone, May 2, 2015 (see also David Archibald, “A Prediction Coming True?,” WUWT, May 4, 2015)
Christopher Monckton of Brenchley, “El Niño Has Not Yet Paused the Pause,” WUWT, May 4, 2015
Anthony J. Sadar and JoAnn Truchan, “Saul Alinsky, Climate Scientist,” American Thinker, May 4, 2015
Clyde Spencer, “Anthropogenic Global Warming and Its Causes,” WUWT, May 5, 2015
Roy W. Spencer, “Nearly 3,500 Days since Major Hurricane Strike … Despite Record CO2,” DrRoySpencer.com, May 8, 2015

Related posts:
AGW: The Death Knell (with many links to related readings and earlier posts)
Not-So-Random Thoughts (XIV) (second item)
AGW in Austin?
Understanding Probability: Pascal’s Wager and Catastrophic Global Warming
The Precautionary Principle and Pascal’s Wager

Not-So-Random Thoughts (XVII)

Standard

Links to the other posts in this occasional series may be found at “Favorite Posts,” just below the list of topics.

*     *     *

Victor Davis Hanson offers “The More Things Change, the More They Actually Don’t.” It echoes what I say in “The Fallacy of Human Progress.” Hanson opens with this:

In today’s technically sophisticated and globally connected world, we assume life has been completely reinvented. In truth, it has not changed all that much.

And he proceeds to illustrate his point (and mine).

*     *     *

Dr. James Thompson, an English psychologist, often blogs about intelligence. Here are some links from last year that I’ve been hoarding:

“Intelligence: All That Matters” (a review of a book by Stuart Ritchie)

“GCSE Genes” (commentary about research showing the strong relationship between genes and academic achievement)

“GWAS Hits and Country IQ” (commentary about preliminary research into the alleles related to intelligence)

Also, from the International Journal of Epidemiology, comes “The Association between Intelligence and Lifespan Is Mostly Genetic.”

All of this is by way of reminding you of my many posts about intelligence, which are sprinkled throughout this list and this one.

*     *     *

How bad is it? This bad:

Thomas Lifson, “Mark Levin’s Plunder and Deceit

Arthur Milikh, “Alexis de Tocqueville Predicted the Tyranny of the Majority in Our Modern World

Steve McCann, “Obama and Neo-fascist America

Related reading: “Fascism, Pots, and Kettles,” by me, of course.

There’s also Adam Freedman’s book, A Less than Perfect Union: The Case for States’ Rights. States’ rights can be perfected by secession, and I make the legal case for it in “A Resolution of Secession.”

*     *     *

In a different vein, there’s Francis Menton’s series about anthropogenic global warming. The latest installment is “The Greatest Scientific Fraud of All Time — Part VIII.” For my take on the subject, start with “AGW in Austin?” and check out the readings and posts listed at the bottom.