Bigger, Stronger, and Faster — but Not Quicker?



There’s some controversial IQ research which suggests that reaction times have slowed and people are getting dumber, not smarter. Here’s Dr. James Thompson’s summary of the hypothesis:

We keep hearing that people are getting brighter, at least as measured by IQ tests. This improvement, called the Flynn Effect, suggests that each generation is brighter than the previous one. This might be due to improved living standards as reflected in better food, better health services, better schools and perhaps, according to some, because of the influence of the internet and computer games. In fact, these improvements in intelligence seem to have been going on for almost a century, and even extend to babies not in school. If this apparent improvement in intelligence is real we should all be much, much brighter than the Victorians.

Although IQ tests are good at picking out the brightest, they are not so good at providing a benchmark of performance. They can show you how you perform relative to people of your age, but because of cultural changes relating to the sorts of problems we have to solve, they are not designed to compare you across different decades with, say, your grandparents.

Is there no way to measure changes in intelligence over time on some absolute scale using an instrument that does not change its properties? In the Special Issue on the Flynn Effect of the journal Intelligence Drs Michael Woodley (UK), Jan te Nijenhuis (the Netherlands) and Raegan Murphy (Ireland) have taken a novel approach in answering this question. It has long been known that simple reaction time is faster in brighter people. Reaction times are a reasonable predictor of general intelligence. These researchers have looked back at average reaction times since 1889 and their findings, based on a meta-analysis of 14 studies, are very sobering.

It seems that, far from speeding up, we are slowing down. We now take longer to solve this very simple reaction time “problem”.  This straightforward benchmark suggests that we are getting duller, not brighter. The loss is equivalent to about 14 IQ points since Victorian times.

So, we are duller than the Victorians on this unchanging measure of intelligence. Although our living standards have improved, our minds apparently have not. What has gone wrong? [“The Victorians Were Cleverer Than Us!” Psychological Comments, April 29, 2013]

Thompson discusses this and other relevant research in many posts, which you can find by searching his blog for Victorians and Woodley. I’m not going to venture my unqualified opinion of Woodley’s hypothesis, but I am going to offer some (perhaps) relevant analysis based on — you guessed it — baseball statistics.


It seems to me that if Woodley’s hypothesis has merit, it ought to be confirmed by the course of major-league batting averages over the decades. Other things being equal, quicker reaction times ought to produce higher batting averages. Of course, there’s a lot to hold equal, given the many changes in equipment, playing conditions, player conditioning, “style” of the game (e.g., greater emphasis on home runs), and other key variables over the course of more than a century.

Undaunted, I used the Play Index search tool to obtain single-season batting statistics for “regular” American League (AL) players from 1901 through 2016. My definition of a regular player is one who had at least 3 plate appearances (PAs) per scheduled game in a season. That’s a minimum of 420 PAs in a season from 1901 through 1903, when the AL played a 140-game schedule; 462 PAs in the 154-game seasons from 1904 through 1960; and 486 PAs in the 162-game seasons from 1961 through 2016. I found 6,603 qualifying player-seasons, and a long string of batting statistics for each of them: the batter’s age, his batting average, his number of at-bats, his number of PAs, etc.
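The qualification rule can be sketched in code. The record layout here (dicts with "year" and "PA") is a placeholder, not the actual Play Index output:

```python
# Sketch of the "regular player" screen described above; the record layout
# is a placeholder, not the actual data format.

def pa_threshold(year):
    """Minimum plate appearances for a regular: 3 PA per scheduled game."""
    if 1901 <= year <= 1903:
        return 3 * 140   # 420 PA (140-game schedule)
    if 1904 <= year <= 1960:
        return 3 * 154   # 462 PA (154-game schedule)
    if 1961 <= year <= 2016:
        return 3 * 162   # 486 PA (162-game schedule)
    raise ValueError("year outside 1901-2016")

def regulars(player_seasons):
    """Keep only the seasons that meet the year's PA threshold."""
    return [ps for ps in player_seasons if ps["PA"] >= pa_threshold(ps["year"])]

print(pa_threshold(1955))   # -> 462
```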

The raw record of batting averages looks like this, fitted with a 6th-order polynomial to trace the shifts over time:


That’s nice, you might say, but what accounts for the shifts? I considered 21 variables in an effort to account for the shifts, and ended up using 20 of the variables in a three-stage analysis.

In stage 1, I computed the residuals resulting from the application of the 6th-order polynomial. That is, I subtracted from the actual batting averages the estimates produced by the equation displayed in figure 1. For ease of reference, I call this first set of residuals the r1 residuals.
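Stage 1 can be sketched as follows; the series below is a synthetic placeholder standing in for the actual batting-average data:

```python
# Stage 1: fit a 6th-order polynomial to BA-by-year and keep the residuals.
# The series below is a placeholder, not the actual 6,603 player-seasons.
import numpy as np

years = np.arange(1901, 2017)
ba = 0.260 + 0.010 * np.sin((years - 1901) / 15.0)   # stand-in for annual BA

x = years - years.mean()          # center years for numerical stability
coeffs = np.polyfit(x, ba, 6)     # 6th-order polynomial coefficients
trend = np.polyval(coeffs, x)     # the polynomial's estimate of BA
r1 = ba - trend                   # r1 residuals: actual minus estimated
```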

I began stage 2 by finding the correlations between each of the 21 candidate variables and the r1 residuals. I then estimated a regression equation with the r1 residuals as the dependent variable and the most highly correlated variable as the explanatory variable. I next found the correlations between the remaining 20 variables and the residuals of that regression equation. I introduced the most highly correlated variable into a new regression equation, as a second explanatory variable. I continued this process in the expectation that I would come across an explanatory variable that was statistically insignificant, at which point I would stop. But I ran through 16 explanatory variables without hitting a stopping point, and that exhausted the number of explanatory variables allowed by the regression function in Excel 2016.
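A minimal sketch of that forward-selection loop, assuming numpy and placeholder variable names (the original analysis was done with Excel’s regression tool):

```python
# Forward stepwise selection: at each step, regress on the variables chosen so
# far and add the candidate most correlated with the current residuals.
import numpy as np

def ols_residuals(y, X):
    """Residuals of y regressed on X, with an intercept."""
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return y - A @ beta

def forward_select(y, candidates, max_vars=16):
    """candidates: dict mapping variable name to a numpy array."""
    chosen, resid = [], y.copy()
    remaining = dict(candidates)
    for _ in range(min(max_vars, len(remaining))):
        # the candidate with the highest absolute correlation with the residuals
        name = max(remaining,
                   key=lambda k: abs(np.corrcoef(remaining[k], resid)[0, 1]))
        chosen.append(name)
        resid = ols_residuals(y, np.column_stack([candidates[n] for n in chosen]))
        del remaining[name]
    return chosen, resid
```

With the author’s data, this loop would run through 16 variables on the r1 residuals (stage 2) and then be repeated on the r2 residuals (stage 3), stopping at the first statistically insignificant addition.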

The 16th regression on the r1 residuals left me with a set of residuals that I call the r2 residuals. In stage 3, I estimated a new equation with the r2 residuals as the dependent variable, following the same procedure that I used to obtain the 16-variable regression on the r1 residuals. In this case, I used 4 of the remaining explanatory variables; the 5th proved statistically insignificant.

I then combined the estimates obtained in the three stages to obtain the equation that’s discussed later, and at length. For now, I’ll focus on the apparent precision of the equation and its implications for the hypothesis that the general level of intelligence has declined with time.


Here’s how well the equation fits the data:


The 6th-order polynomial regression lines (black for actual, purple for estimated) are almost identical.

Here’s how the final estimates (vertical axis) correlate with the actual batting averages (horizontal axis):


I’ve never seen such a tight fit based on more than a few observations, and this one is based on 6,603 observations. I’m showing 6 decimal places in the trendline label so that you can see the 3 significant figures in the constant, which is practically zero.

Year (YR) enters as a significant variable in stage 3, with a coefficient of -.0000284. (The 95-percent confidence interval is -.0000355 to -.0000214; the p-value is 3.40E-15.) So, everything else being the same (a matter to which I’ll come), batting averages dropped by .00327 between 1901 and 2016 (-.00327 = -.0000284 x 115). (Note: It’s conventional to drop the 0 to the left of the decimal point in baseball statistics. And if you’re unfamiliar with baseball statistics, I can tell you that a difference of .00327 is taken seriously in baseball; many a batting championship race has been decided by a smaller margin.)
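The arithmetic behind those figures, using the coefficient and confidence interval quoted above:

```python
# The YR effect: coefficient and 95-percent CI are taken from the text.
coef = -0.0000284
ci_lo, ci_hi = -0.0000355, -0.0000214   # 95-percent CI on the YR coefficient
span = 2016 - 1901                       # 115 seasons

drop = coef * span
print(round(drop, 5))                    # -> -0.00327

# Implied range of the total decline over the 115 years:
print(round(ci_lo * span, 5), round(ci_hi * span, 5))

# For scale: a 95-percent interval implied by the .0027 standard error
# quoted later in the post.
print(round(1.96 * 0.0027, 4))           # -> 0.0053
```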

If the compound equation resulting from stages 1, 2, and 3 accounts satisfactorily for all changes affecting BA, the estimate of -.00327 might be attributed to the slowing of batters’ reaction times. However, despite the statistical robustness of the coefficient on YR, it’s necessary to ask whether there are factors not properly accounted for that might point to the conclusion that reaction times have remained about the same or improved. To get at that question, I’ll present and discuss in the next section a table that summarizes the complete equation and all 20 of its explanatory variables. As you read and interpret the table, keep these points in mind:

The 6th-order polynomial (stage 1) is a filter. It captures the fluctuations over time that must be accounted for by the 20 “real” variables that are listed in the table (including YR) and discussed below the table. The “year” terms in the 6th-order polynomial are therefore irrelevant to the question of whether reaction times have slowed.

Every p-value in the stage-2 and stage-3 regression equations is smaller than 0.0001, and most of them are far, far below that threshold.

The significance of the explanatory variables notwithstanding, the standard errors of the stage-2 and stage-3 equations are both about .0027. Therefore, the 95-percent confidence interval surrounding estimates of BA derived from those equations is plus or minus .0053. As discussed above, that’s not a small error in the context of baseball statistics. In fact, it’s enough to swamp the effect of YR.

As discussed below, many of the explanatory variables have intuitively incorrect signs and are highly correlated with each other. This casts doubt on the validity of the derived coefficients, including the coefficient on YR.

I don’t mean to say that reaction times have stayed the same or become faster. I simply mean that this analysis is inconclusive about the trend (if any) of reaction times — possibly because there is no trend, in one direction or the other.

The equation, taken as a whole, does an admirable job of accounting for changes in BA over the span of 115 years. But I can’t take any of its parts seriously.

It’s been great fun but it was just one of those things.


Table 1 gives the coefficients and maxima, minima, means, standard errors, and 95-percent confidence intervals around the coefficients of the explanatory variables. Statistical parameters and estimated values are expressed to three significant figures. For ease of comparison, I use decimal notation rather than scientific notation for the explanatory variables.


Next is table 2, which gives the cross-correlations among the explanatory variables (including the 21st variable that’s not in the equation). Positive correlations above 0.5 are highlighted in green; negative correlations below -0.5 are highlighted in yellow; statistically insignificant correlations are denoted by gray shading.

TABLE 2
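A correlation screen of this kind can be sketched as follows; the variables here are synthetic placeholders, with one pair made deliberately collinear:

```python
# Flag cross-correlations whose magnitude exceeds 0.5, as in table 2.
# The data are synthetic placeholders, not the actual explanatory variables.
import numpy as np

rng = np.random.default_rng(1)
n = 100
blk = rng.normal(size=n)
data = {
    "YR":   rng.normal(size=n),
    "BLK":  blk,
    "LITE": blk + 0.1 * rng.normal(size=n),   # deliberately collinear with BLK
}
names = list(data)

for i, a in enumerate(names):
    for b in names[i + 1:]:
        r = np.corrcoef(data[a], data[b])[0, 1]
        flag = "  <-- strong" if abs(r) > 0.5 else ""
        print(f"{a:>4} vs {b:<4}  r = {r:+.2f}{flag}")
```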

Here’s my explanation and interpretation of the explanatory variables:

Intercept (c) (shown in table 1)

This is the sum of the intercepts derived from the 6th-order polynomial fit and the stage-2 and stage-3 regression analyses.

On-base-plus-slugging percentage minus batting average (OPS – BA)

BA is embedded in both components of on-base-plus-slugging percentage (OPS). By subtracting BA from OPS, I partly decouple that relationship and obtain a rough measure of a batter’s propensity to get on base (mainly) by walking, plus his propensity for hitting doubles, triples, and home runs. But see OBP – BA and SLG – BA, below.

Strikeouts per plate appearance (SO/PA)

The positive coefficient on SO/PA is counterintuitive. In any particular at-bat, striving to hit a home run is thought to reduce a batter’s ability to make contact with the ball. The positive coefficient therefore reflects the positive relationship between HR/PA and BA (see below), and the tendency of home-run hitters to strike out more often than other hitters.

On-base percentage minus batting average (OBP – BA)

The negative coefficient on this variable probably means that it’s compensating for the residual component of BA that lingers in OPS – BA. This variable and OPS – BA should be thought of as complementary variables, each meaningless without the other.

Home runs per plate appearance (HR/PA)

The positive coefficient on this variable seems to capture the positive relationship between HR and BA. For example, most of the great home-run hitters also compiled high batting averages. (Peruse this list.)

Integration (BLK)

I use this variable to approximate the effect of the influx of black players (including non-white Hispanics) since 1947. BLK measures only the fraction of AL teams that had at least one black player for each full season. It begins at 0.25 in 1948 (the Indians and Browns signed Larry Doby and Hank Thompson during the 1947 season) and rises to 1 in 1960, following the signing of Pumpsie Green by the Red Sox during the 1959 season. The positive coefficient on this variable is consistent with the hypothesis that segregation had prevented the use of players superior to many of the whites who occupied roster slots because of their color.

Deadball era (DBALL)

The so-called deadball era lasted from the beginning of major-league baseball in 1871 through 1919 (roughly). It was called the deadball era because the ball stayed in play for a long time (often for an entire game), so that it lost much of its resilience and became hard to see because it accumulated dirt and scuffs. Those difficulties (for batters) were compounded by the spitball, the use of which was officially curtailed beginning with the 1920 season. (See this and this.) Batting averages and the frequency of long hits (especially home runs) rose markedly after 1919. Given the secular trend shown in figure 1, it’s surprising to find a positive coefficient on DBALL, which is a dummy variable (value = 1) assigned to all seasons from 1901 through 1919. So DBALL is probably picking up the net effect of other factors. It should be considered a complementary variable.

Performance-enhancing drugs (DRUG)

Their rampant use seems to have begun in the early 1990s and trailed off in the late 2000s. I assigned a dummy variable of 1 to all seasons from 1994 through 2007 in an effort to capture the effect of PEDs on BA. The resulting coefficient suggests that the effect was (on balance) negative, though slight. Players who used PEDs generally strove for long hits, which may have had the immediate effect of reducing their batting averages.

Slugging percentage minus batting average (SLG – BA)

I consider this variable to be a complement to OPS – BA and OBP – BA.

Number of major-league teams (MLTM)

The standard view is that expansion hurt the quality of play by diluting talent. However, expansion didn’t keep pace with population growth over the long run (see POP/TM, below). In any event, MLTM should be considered another complementary variable.

Night baseball, that is, baseball played under lights (LITE)

It has long been thought that batting is more difficult under artificial lighting than in sunlight. This variable measures the fraction of AL teams equipped with lights, but it doesn’t measure the rise in night games as a fraction of all games. I know from observation that that fraction continued to rise even after all AL stadiums were equipped with lights. The positive coefficient on LITE suggests that it’s yet another complementary variable. It’s very highly correlated with BLK, for example.

Average age of AL pitchers (PAGE)

The r1 residuals rise with PAGE until PAGE = 27.4, then they begin to drop. This variable therefore represents the difference between 27.4 and the average age of AL pitchers during a particular season: the coefficient is multiplied by a positive number for ages below 27.4, by zero at age 27.4, and by a negative number for ages above 27.4. Taken literally, the positive coefficient suggests that, other things being equal, pitchers younger than 27.4 give up hits at a higher rate than pitchers older than 27.4. I’m agnostic on the issue.
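As I read the description, the PAGE variable is constructed like this; the sketch reflects my understanding of the text, not the author’s spreadsheet:

```python
# Hypothetical sketch of the PAGE variable; the knot value 27.4 is from the text.

def page_var(avg_pitcher_age, knot=27.4):
    """Explanatory variable: knot minus average AL pitcher age for a season.
    Positive for staffs younger than the knot, negative for older ones."""
    return knot - avg_pitcher_age

print(round(page_var(25.0), 1))   # -> 2.4  (younger staff, positive term)
print(round(page_var(30.0), 1))   # -> -2.6 (older staff, negative term)
```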

Complete games per AL team (CG/TM)

A higher rate of complete games should mean that starting pitchers stay in games longer, on average, and therefore give up more hits, on average. The positive coefficient seems to contradict that hypothesis. But there are other, related variables (P/TM and IP/P/G), so this one should be thought of as a complementary variable.

Number of pitchers per AL team (P/TM)

It, too, has a surprisingly positive coefficient. One would expect the use of more pitchers to cause BA to drop (see IP/P/G).

World War II (WW2)

A lot of the game’s best batters were in uniform in 1942-1945. That left regular positions open to older, weaker batters, some of whom wouldn’t otherwise have been regulars or even in the major leagues. The negative coefficient on this variable captures the war’s effect on hitting, which suffered despite the fact that a lot of the game’s best pitchers also served.

Bases on balls per plate appearance (BB/PA)

The negative coefficient on this variable suggests that walks are collected predominantly by above-average hitters, who are deprived of chances to hit safely. See, for example, the list of batters who collected the most career bases on balls. Anecdotally, during the many years when I regularly listened to and watched baseball games, announcers often spoke of the “intentional” unintentional walk and “pitching around” a batter. In both cases, a pitcher would aim for the outside edges of the plate, to avoid giving a batter a good pitch to hit. If that meant a walked batter and a chance to pitch to a weaker batter, so be it.

Innings pitched per AL pitcher per game (IP/P/G)

This variable reflects the long-term trend toward the use of more pitchers in a game, which means that batters more often face rested pitchers who come at them with a different delivery and repertoire of pitches than their predecessors. IP/P/G has dropped steadily over the decades, exerting a negative effect on BA. This is reflected in the positive coefficient on the variable, which means that BA rises with IP/P/G. But the effect is slight, and it’s prudent to treat this variable as a complement to CG/TM and P/TM.

AL fielding average (FA)

Fielding averages have risen generally since 1901, which was an especially bad year at .938. The climb from .949 in 1902 to .985 in 2016 was smooth and almost uninterrupted. How would that affect BA? Here’s an example: A line drive that in 1916 bounced off the edge of a fielder’s glove might have been counted as a hit or an error, and if it just missed the glove it would usually be counted as a hit. A century later the same line drive would almost always be caught in the much larger glove worn by a fielder in the same position. It therefore seems to me that the coefficient on this variable should be negative, that is, a higher FA should mean a lower BA. The positive coefficient points to a confounding factor (e.g., BLK).

Year (YR)

This is the crucial variable, and the value of its coefficient — given the inclusion of all the other variables — may say something about the IQ hypothesis. After taking into account the 19 other variables in this equation, the coefficient on YR is slightly negative, which suggests that batters have generally been getting a bit slower. But as discussed throughout this post, there’s much uncertainty about the validity of the equation and, therefore, about the validity of the coefficient on YR.

Maximum distance traveled by AL teams (TRV)

Does travel affect play? Probably, but the mode and speed of travel (airplane vs. train) probably also affects it. The slightly positive coefficient on this variable — which is highly correlated with YR, BLK, MLTM, and several others — is meaningless, except insofar as it combines with all the other variables to account for BA.

U.S. population in millions per major-league team (POP/TM)

POP/TM has been rising almost without pause, despite expansion, and is now at its peak value. The negative coefficient is therefore surprising, and probably reflects the strong correlation of POP/TM with BLK, and perhaps other variables.

Batter’s age (BAGE)

This is the 21st variable, which isn’t in the final equation. The r1 residuals don’t vary with BAGE until BAGE = 37, whereupon the residuals begin to drop. Accordingly, this variable represents the difference between 37 and a player’s age during a particular season.

In sum, there’s no way of knowing whether the negative coefficient on YR is related to reaction time, the (probably) greater speed of today’s pitchers, the greater variety of pitches thrown by today’s pitchers, or anything else that’s not adequately reflected by the 20 variables in the final equation. I rest my case and throw myself on the mercy of the court.

Pennant Droughts, Post-Season Play, and Seven-Game World Series



Everyone in the universe knows that the Chicago Cubs beat the Cleveland Indians to win the 2016 World Series. The Cubs got into the Series by ending what had been the longest pennant drought of the 16 old-line franchises in the National and American Leagues. The mini-bears had gone 71 years since winning the NL championship in 1945. And before last night, the Cubs last won a Series in 1908, a “mere” 108 years ago.

Here are the most recent league championships and World Series wins by the other old-line National League teams: Atlanta (formerly Boston and Milwaukee) Braves — 1999, 1995; Cincinnati Reds — 1990, 1990; Los Angeles (formerly Brooklyn) Dodgers — 1988, 1988; Philadelphia Phillies — 2009, 2008; Pittsburgh Pirates — 1979, 1979; San Francisco (formerly New York) Giants — 2014, 2014; and St. Louis Cardinals — 2013, 2011.

The American League lineup looks like this: Baltimore Orioles (formerly Milwaukee Brewers and St. Louis Browns) — 1983, 1983; Boston Red Sox — 2013, 2013; Chicago White Sox — 2005, 2005; Cleveland Indians — 2016 (previously 1997), 1948; Detroit Tigers — 2012, 1984; Minnesota Twins (formerly Washington Senators) — 1991, 1991; New York Yankees — 2009, 2009; and Oakland (formerly Philadelphia and Kansas City) Athletics — 1990, 1989.

What about the expansion franchises, of which there are 14? I’ll lump them together because two of them (Milwaukee and Houston) have switched leagues since their inception. Here they are, in this format: Team (year of creation) — year of last league championship, year of last WS victory:

Arizona Diamondbacks (1998) — 2001, 2001

Colorado Rockies (1993) — 2007, never

Houston Astros (1962) — 2005, never

Kansas City Royals (1969) — 2015, 2015

Los Angeles Angels (1961) — 2002, 2002

Miami Marlins (1993) — 2003, 2003

Milwaukee Brewers (1969, as Seattle Pilots) — 1982, never

New York Mets (1962) — 2015, 1986

San Diego Padres (1969) — 1998, never

Seattle Mariners (1977) — never, never

Tampa Bay Rays (1998) — 2008, never

Texas Rangers (1961, as expansion Washington Senators) — 2011, never

Toronto Blue Jays (1977) — 1993, 1993

Washington Nationals (1969, as Montreal Expos) — never, never


The first 65 World Series (1903 and 1905-1968) were contests between the best teams in the National and American Leagues. The winner of a season-ending Series was therefore widely regarded as the best team in baseball for that season (except by the fans of the losing team and other soreheads). The advent of divisional play in 1969 meant that the Series could include a team that wasn’t the best in its league. From 1969 through 1993, when participation in the Series was decided by a single postseason playoff between division winners (1981 excepted), the leagues’ best teams met in only 10 of 24 series. The advent of three-tiered postseason play in 1995 and four-tiered postseason play in 2012 has only made matters worse.

By the numbers:

  • Postseason play originally consisted of a World Series (period) involving 1/8 of major-league teams — the best in each league. Postseason play now involves 1/3 of major-league teams and 7 postseason series (3 in each league plus the inter-league World Series).
  • Only 3 of the 22 Series from 1995 through 2016 have featured the best teams of both leagues, as measured by W-L record.
  • Of the 22 Series from 1995 through 2016, only 7 were won by the best team in a league.
  • Of the same 22 Series, 11 (50 percent) were won by the better of the two teams, as measured by W-L record. Of the 65 Series played before 1969, 35 were won by the team with the better W-L record and 2 involved teams with the same W-L record. So before 1969 the team with the better W-L record won 35/63 of the time, for an overall average of 56 percent. That’s not significantly different from the result for the 22 Series played in 1995-2016, but the teams in the earlier era were each league’s best, which is no longer true.
  • From 1995 through 2016, a league’s best team (based on W-L record) appeared in a Series only 15 of 44 possible times — 6 times for the NL (pure luck), 9 times for the AL (little better than pure luck). (A random draw among teams qualifying for post-season play would have resulted in the selection of each league’s best team about 6 times out of 22.)
  • Division winners have opposed each other in only 11 of the 22 Series from 1995 through 2016.
  • Wild-card teams have appeared in 10 of those Series, with all-wild-card Series in 2002 and 2014.
  • Wild-card teams have occupied more than one-fourth of the slots in the 1995-2016 Series — 12 slots out of 44.
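The “better team won” tallies above come down to comparing each Series winner’s W-L percentage with the loser’s. A sketch, using a hypothetical fragment rather than the full 1995-2016 table:

```python
# Comparing each World Series winner's W-L percentage with the loser's.
# The tuples below are a hypothetical fragment, not the full 1995-2016 data.

series = [
    # (winner's W-L pct, loser's W-L pct)
    (0.568, 0.694),   # an upset: the winner had the worse regular-season record
    (0.704, 0.605),   # the better team won
    (0.636, 0.605),   # the better team won
]

better_won = sum(1 for w, l in series if w > l)
print(f"{better_won} of {len(series)} won by the team with the better record")
```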

The winner of the World Series used to be its league’s best team over the course of the entire season, and the winner had to beat the best team in the other league. Now, the winner of the World Series usually can claim nothing more than having won the most postseason games — 11 or 12 out of as many as 19 or 20. Why not eliminate the 162-game regular season, select the postseason contestants at random, and go straight to postseason play?

Here are the World Series pairings for 1994-2016 (National League teams listed first; + indicates winner of World Series):

1995 –
Atlanta Braves (division winner; .625 W-L, best record in NL)+
Cleveland Indians (division winner; .694 W-L, best record in AL)

1996 –
Atlanta Braves (division winner; .593, best in NL)
New York Yankees (division winner; .568, second-best in AL)+

1997 –
Florida Marlins (wild-card team; .568, second-best in NL)+
Cleveland Indians (division winner; .534, fourth-best in AL)

1998 –
San Diego Padres (division winner; .605 third-best in NL)
New York Yankees (division winner, .704, best in AL)+

1999 –
Atlanta Braves (division winner; .636, best in NL)
New York Yankees (division winner; .605, best in AL)+

2000 –
New York Mets (wild-card team; .580, fourth-best in NL)
New York Yankees (division winner; .540, fifth-best in AL)+

2001 –
Arizona Diamondbacks (division winner; .568, fourth-best in NL)+
New York Yankees (division winner; .594, third-best in AL)

2002 –
San Francisco Giants (wild-card team; .590, fourth-best in NL)
Anaheim Angels (wild-card team; .611, third-best in AL)+

2003 –
Florida Marlins (wild-card team; .562, third-best in NL)+
New York Yankees (division winner; .623, best in AL)

2004 –
St. Louis Cardinals (division winner; .648, best in NL)
Boston Red Sox (wild-card team; .605, second-best in AL)+

2005 –
Houston Astros (wild-card team; .549, third-best in NL)
Chicago White Sox (division winner; .611, best in AL)+

2006 –
St. Louis Cardinals (division winner; .516, fifth-best in NL)+
Detroit Tigers (wild-card team; .586, third-best in AL)

2007 –
Colorado Rockies (wild-card team; .552, second-best in NL)
Boston Red Sox (division winner; .593, tied for best in AL)+

2008 –
Philadelphia Phillies (division winner; .568, second-best in NL)+
Tampa Bay Rays (division winner; .599, second-best in AL)

2009 –
Philadelphia Phillies (division winner; .574, second-best in NL)
New York Yankees (division winner; .636, best in AL)+

2010 —
San Francisco Giants (division winner; .568, second-best in NL)+
Texas Rangers (division winner; .556, fourth-best in AL)

2011 —
St. Louis Cardinals (wild-card team; .556, fourth-best in NL)+
Texas Rangers (division winner; .593, second-best in AL)

2012 —
San Francisco Giants (division winner; .580, third-best in NL)+
Detroit Tigers (division winner; .543, seventh-best in AL)

2013 —
St. Louis Cardinals (division winner; .599, best in NL)
Boston Red Sox (division winner; .599, best in AL)+

2014 —
San Francisco Giants (wild-card team; .543, 4th-best in NL)+
Kansas City Royals (wild-card team; .549, 4th-best in AL)

2015 —
New York Mets (division winner; .556, 5th best in NL)
Kansas City Royals (division winner; .586, best in AL)+

2016 —
Chicago Cubs (division winner; .640, best in NL)+
Cleveland Indians (division winner; .584, 2nd best in AL)


The seven-game World Series holds the promise of high drama. That promise is fulfilled if the Series stretches to a seventh game and that game goes down to the wire. Here’s what’s happened in the deciding games of the seven-game Series that have been played to date:

1909 – Pittsburgh (NL) 8 – Detroit (AL) 0

1912 – Boston (AL) 3 – New York (NL) 2 (10 innings)

1925 – Pittsburgh (NL) 9 – Washington (AL) 7

1926 – St. Louis (NL) 3 – New York (AL) 2

1931 – St. Louis (NL) 4 – Philadelphia (AL) 2

1934 – St. Louis (NL) 11 – Detroit (AL) 0

1940 – Cincinnati (NL) 2 – Detroit (AL) 1

1945 – Detroit (AL) 9 – Chicago (NL) 3

1947 – New York (AL) 5 – Brooklyn (NL) 2

1955 – Brooklyn (NL) 2 – New York (AL) 0

1956 – New York (AL) 9 – Brooklyn (NL) 0

1957 – Milwaukee (NL) 5 – New York (AL) 0

1958 – New York (AL) 6 – Milwaukee (NL) 2

1960 – Pittsburgh (NL) 10 – New York (AL) 9 (decided by Bill Mazeroski’s home run in the bottom of the 9th)

1965 – Los Angeles (NL) 2 – Minnesota (AL) 0

1967 – St. Louis (NL) 7 – Boston (AL) 2

1968 – Detroit (AL) 4 – St. Louis (NL) 1

1971 – Pittsburgh (NL) 2 – Baltimore (AL) 1

1972 – Oakland (AL) 3 – Cincinnati (NL) 2

1973 – Oakland (AL) 5 – New York (NL) 2

1975 – Cincinnati (NL) 4 – Boston (AL) 3

1979 – Pittsburgh (NL) 4 – Baltimore (AL) 1

1982 – St. Louis (NL) 6 – Milwaukee (AL) 3

1985 – Kansas City (AL) 11 – St. Louis (NL) 0

1986 – New York (NL) 8 – Boston (AL) 5

1987 – Minnesota (AL) 4 – St. Louis (NL) 2

1991 – Minnesota (AL) 1 – Atlanta (NL) 0 (10 innings)

1997 – Florida (NL) 3 – Cleveland (AL) 2 (11 innings)

2001 – Arizona (NL) 3 – New York (AL) 2 (decided in the bottom of the 9th)

2002 – Anaheim (AL) 4 – San Francisco (NL) 1

2011 – St. Louis Cardinals (NL) 6 – Texas Rangers (AL) 2

2014 – San Francisco Giants (NL) 3 – Kansas City Royals (AL) 2 (no scoring after the 4th inning)

2016 – Chicago Cubs (NL) 8 – Cleveland Indians (AL) 7 (decided in the 10th inning)

Summary statistics:

33 seven-game Series (29 percent of 112 series played, including 4 in a best-of-nine format, none of which lasted 9 games)

17 Series decided by 1 or 2 runs

12 of those 17 Series decided by 1 run (6 times in extra innings or the winning team’s last at-bat)

4 consecutive seven-game Series 1955-58, all involving the New York Yankees (8 of the Yankees’ 41 Series, about 20 percent, went to seven games)

Does the World Series deliver high drama? Seldom. In fact, only about 10 percent of the time (12 of 112 decided by 1 run in game 7). The other 90 percent of the time it’s merely an excuse to fill seats and sell advertising, inasmuch as it’s seldom a contest between both leagues’ best teams.

A Drought Endeth


Tonight the Chicago Cubs beat the Los Angeles Dodgers to become champions of the National League for 2016. The Cubs thus ended the longest pennant drought of the 16 old-line franchises in the National and American Leagues, having last made a World Series appearance 71 years ago in 1945. The Cubs last won the World Series 108 years ago in 1908, another ignominious record for an old-line team.


Facts about Hall-of-Fame Hitters


In this post, I look at the batting records of the 136 Hall-of-Fame position players who accrued most or all of their playing time between 1901 and 2015. With the exception of a bulge in the .340-.345 range, the frequency distribution of lifetime averages for those 136 players looks like a rather ragged normal distribution:

Distribution of HOF lifetime BA

That’s Ty Cobb (.366) at the left, all by himself (1 person = 0.7 percent of the 136 players considered here). To Cobb’s right, also by himself, is Rogers Hornsby (.358). The next solo slot to the right of Hornsby’s belongs to Ed Delahanty (.346). The bulge between .340 and .345 is occupied by Tris Speaker, Billy Hamilton, Ted Williams, Babe Ruth, Harry Heilmann, Bill Terry, Willie Keeler, George Sisler, and Lou Gehrig. At the other end, in the anchor slot, is Ray Schalk (.253); to his left, in the next slot, are Harmon Killebrew (.256) and Rabbit Maranville (.258). The group in the .260-.265 column comprises Gary Carter, Joe Tinker, Luis Aparicio, Ozzie Smith, Reggie Jackson, and Bill Mazeroski.

Players with relatively low batting averages — Schalk, Killebrew, etc. — are in the Hall of Fame because of their prowess as fielders or home-run hitters. Many of the high-average players were also great fielders or home-run hitters (or both). In any event, for your perusal here’s the complete list of 136 position players under consideration in this post:

Lifetime BA of 136 HOFers

For the next exercise, I normalized the Hall of Famers’ single-season averages, as discussed here. I included only those seasons in which a player qualified for that year’s batting championship by playing in enough games, compiling enough plate appearances, or attaining enough at-bats (the criteria have varied).
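As a rough sketch of what normalization accomplishes (my illustration, not the exact method behind the link above), one common approach is a z-score against the league’s single-season mean and spread, so seasons from different offensive environments can be compared on one footing:

```python
# A rough sketch of one common normalization: express a season's BA
# as a z-score against that same season's league mean and spread.
# All numbers here are invented for illustration.

def normalize(player_ba, league_mean, league_sd):
    """How many league standard deviations above the league mean."""
    return (player_ba - league_mean) / league_sd

# A .350 season in a high-offense league (mean .280, sd .030) ...
z_high_offense = normalize(0.350, 0.280, 0.030)
# ... versus a .320 season in a low-offense league (mean .250, sd .025).
z_low_offense = normalize(0.320, 0.250, 0.025)

# The .320 season turns out to be the more exceptional one.
print(round(z_high_offense, 2), round(z_low_offense, 2))  # 2.33 2.8
```

On this scale a lower raw average can outrank a higher one, which is the whole point of normalizing before comparing across seasons.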

For the years 1901-2015, the Hall-of-Famers considered here compiled 1,771 seasons in which they qualified for the batting title. (That’s 13 percent of the 13,463 batting-championship-qualifying seasons compiled by all major leaguers in 1901-2015.) Plotting the Hall-of-Famers’ normalized single-season averages against age, I got this:

HOF batters - normalized BA by age

The r-squared value of the polynomial fit, though low, is statistically significant (p<.01). The equation yields the following information:

HOF batters - changes in computed mean BA

The green curve traces the difference between the mean batting average at a given age and the mean batting average at the mean peak age, which is 28.3. For example, by the equation, the average Hall of Famer batted .2887 at age 19, and .3057 at age 28.3 — a rise of .0170 over 9.3 years.

The black line traces the change in the mean batting average from age to age; the change is positive, though declining, from ages 20 through 28, then negative (and still declining) through the rest of the average Hall of Famer’s career.

The red line represents the change in the rate of change, which is constant at -.00044 (-0.044 percentage points) a year.
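For the curious, the mechanics of recovering a peak age from a quadratic fit can be sketched in a few lines. The data below are synthetic, built to match the fitted values reported above (peak at age 28.3, constant change-in-change of -.00044 a year); the real fit, of course, used the 1,771 actual player-seasons:

```python
# Fit a quadratic to (age, BA) pairs and recover the peak age as the
# vertex -b/(2a).  The data are synthetic: a parabola peaking at 28.3
# with a = -0.00022 (i.e., a constant second difference of -.00044).
import numpy as np

ages = np.arange(19, 41, dtype=float)
ba = 0.3057 - 0.00022 * (ages - 28.3) ** 2  # synthetic "true" curve

a, b, c = np.polyfit(ages, ba, 2)  # BA = a*age^2 + b*age + c
peak_age = -b / (2 * a)            # vertex of the parabola
print(round(peak_age, 1))          # 28.3
```

With real, noisy data the fit won’t be exact, but the vertex formula is the same.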

In tabular form:

HOF batters - mean BA stats vs age

Finally, I should note that the combined lifetime batting average of the 136 players is .302, as against the 1901-2015 average of .262 for all players. In other words, the Hall of Famers hit safely in 30.2 percent of at-bats; all players hit safely in 26.2 percent of at-bats. What’s the big deal about 4 percentage points?

To find out, I consulted “Back to Baseball,” in which I found the significant determinants of run-scoring. In the years 1901-1919 (the “dead ball” era), a 4 percentage-point (.040) rise in batting average meant, on average, an increase in runs scored per 9 innings of 1.18. That’s a significant jump in offensive output, given that the average number of runs scored per 9 innings was 3.97 in 1901-1919.

For 1920-2015, a rise in batting average of 4 percentage points meant, on average, an increase in runs scored per 9 innings of 1.03, as against an average number of runs scored per 9 innings of 4.51. That’s also significant, and it doesn’t include the effect of extra-base hits, which Hall of Famers produced at a greater rate than other players.
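Those per-era run values follow directly from the batting-average coefficients of the regression equations reported in “Back to Baseball” (29.39 for 1901-1919, 25.81 for 1920-2015). A quick check of the arithmetic:

```python
# Multiply each era's BA coefficient by the .040 rise in average.
ba_coef_1901_1919 = 29.39  # Equation 5 in "Back to Baseball"
ba_coef_1920_2015 = 25.81  # Equation 2 in "Back to Baseball"
rise_in_ba = 0.040         # 4 percentage points

dead_ball_gain = ba_coef_1901_1919 * rise_in_ba
live_ball_gain = ba_coef_1920_2015 * rise_in_ba
print(round(dead_ball_gain, 2), round(live_ball_gain, 2))  # 1.18 1.03
```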

So Hall of Famers, on the whole, certainly made greater offensive contributions than other players, and some of them were peerless in the field. But do all Hall of Famers really belong in the Hall? No, but that’s the subject of another post.

Baseball’s Greatest 40-and-Older Hitters


Drawing on the Play Index, I discovered the following bests for major-league hitters aged 40 and older:

Most games played — Pete Rose, 732

Most games in starting lineup — Pete Rose, 643

Most plate appearances — Pete Rose, 2955

Most at-bats — Pete Rose, 2574

Most runs — Sam Rice, 327

Most hits — Pete Rose, 699

Most doubles — Sam Rice, 95

Most triples — Honus Wagner, 36

Most home runs — Carlton Fisk, 72

Most runs batted in — Carlton Fisk, 282

Most stolen bases — Rickey Henderson, 109

Most times caught stealing — Rickey Henderson, 34

Most times walked — Pete Rose, 320

Most times struck out — Julio Franco, 336

Highest batting average — Ty Cobb, .343*

Highest on-base percentage — Barry Bonds, .464*

Highest slugging percentage — Barry Bonds, .561*

Highest on-base-plus-slugging percentage (OPS) — Barry Bonds, 1.025*

Most sacrifice hits (bunts) — Honus Wagner, 45

Also of note:

Babe Ruth had only 6 home runs as a 40-year-old in his final (partial) season, as a member of the Boston Braves.

Ted Williams is remembered as a great “old” player, and he was. But his 40-and-over record (compiled in 1959-60) is almost matched by that of his great contemporary, Stan Musial (whose 40-and-older record was compiled in 1961-63):

Williams vs. Musial 40 and older
* In each case, this excludes players with small numbers of plate appearances (always fewer than 20). Also, David Ortiz has a slugging average of .652 and an OPS of 1.067 for the 2016 season (his first as a 40-year-old), but the season isn’t over.

Back to Baseball


In “Does Velocity Matter?” I diagnosed the factors that account for defensive success or failure, as measured by runs allowed per nine innings of play. There’s a long list of significant variables: hits, home runs, walks, errors, wild pitches, hit batsmen, and pitchers’ ages. (Follow the link for the whole story.)

What about offensive success or failure? It turns out that it depends on fewer key variables, though there is a distinct difference between the “dead ball” era of 1901-1919 and the subsequent years of 1920-2015. Drawing on the available statistics, I developed several regression equations and found three of particular interest:

  • Equation 1 covers the entire span from 1901 through 2015. It’s fairly good for 1920-2015, but poor for 1901-1919.
  • Equation 2 covers 1920-2015, and is better than Equation 1 for those years. I also used it to backcast scoring in 1901-1919 — and there it’s worse than Equation 1.
  • Equation 5 gives the best results for 1901-1919. I also used it to forecast scoring in 1920-2015, and it’s terrible for those years.

This graph shows the accuracy of each equation:

Estimation errors as a percentage of runs scored

Unsurprising conclusion: Offense was a much different thing in 1901-1919 than in subsequent years. And it was a simpler thing. Here’s Equation 5, for 1901-1919:

RS9 = -5.94 + BA(29.39) + E9(0.96) + BB9(0.27)

Where 9 stands for “per 9 innings” and
RS = runs scored
BA = batting average
E = errors committed
BB = walks

The adjusted r-squared of the equation is 0.971; the f-value is 2.19E-12 (a very small probability that the equation arises from chance). The p-values of the constant and the first two explanatory variables are well below 0.001; the p-value of the third explanatory variable is 0.01.
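Wrapped in a function (my packaging, with an invented stat line), Equation 5 looks like this:

```python
# Equation 5, 1901-1919, as a function.  The sample inputs are
# hypothetical, chosen to be roughly in line with dead-ball-era levels.
def rs9_dead_ball(ba, e9, bb9):
    """Runs scored per 9 innings: -5.94 + BA(29.39) + E9(0.96) + BB9(0.27)."""
    return -5.94 + 29.39 * ba + 0.96 * e9 + 0.27 * bb9

# A .254 team batting average, 2 errors and 3 walks per 9 innings:
print(round(rs9_dead_ball(0.254, 2.0, 3.0), 2))  # 4.26
```

Note how heavily the estimate leans on batting average and errors, i.e., on simply reaching base.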

In short, the name of the offensive game in 1901-1919 was getting on base. Not so the game in subsequent years. Here’s Equation 2, for 1920-2015:

RS9 = -4.47 + BA(25.81) + XBH(0.82) + BB9(0.30) + SB9(-0.21) + SH9(-0.13)

Where 9, RS, BA, and BB are defined as above and
XBH = extra-base hits
SB = stolen bases
SH = sacrifice hits (i.e., sacrifice bunts)

The adjusted r-squared of the equation is 0.974; the f-value is 4.73E-71 (an exceedingly small probability that the equation arises from chance). The p-values of the constant and the first four explanatory variables are well below 0.001; the p-value of the fifth explanatory variable is 0.03.

In other words, get on base, wait for the long ball, and don’t make outs by trying to steal or bunt the runner(s) along.
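That conclusion falls straight out of the signs of the coefficients. Here’s Equation 2 as a function (my packaging; the stat lines are invented), showing what doubling a team’s steal and bunt attempts does to expected scoring:

```python
# Equation 2, 1920-2015, as a function.  Note the negative
# coefficients on stolen bases (SB9) and sacrifice hits (SH9).
def rs9_live_ball(ba, xbh9, bb9, sb9, sh9):
    return (-4.47 + 25.81 * ba + 0.82 * xbh9 + 0.30 * bb9
            - 0.21 * sb9 - 0.13 * sh9)

# Hypothetical team: .260 BA, 2.5 extra-base hits, 3.2 walks,
# 0.5 steals, and 0.3 sacrifices per 9 innings ...
base = rs9_live_ball(0.260, 2.5, 3.2, 0.5, 0.3)
# ... versus the same team doubling its steal and bunt attempts:
aggressive = rs9_live_ball(0.260, 2.5, 3.2, 1.0, 0.6)
print(round(base - aggressive, 3))  # 0.144 runs per 9 innings lost
```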

Does Velocity Matter?


I came across some breathless prose about the rising trend in the velocity of pitches. (I’m speaking of baseball, in case you didn’t know. Now’s your chance to stop reading.) The trend, such as it is, dates to 2007, when the characteristics of large samples of pitches began to be recorded. (The statistics are available here.) What does the trend look like? The number of pitchers in the samples varies from 77 to 94 per season. I computed three trends for the velocity of fastballs: one for the top 50 pitchers in each season, one for the top 75 pitchers in each season, and one for each season’s full sample:

Pitching velocity trends

Assuming that the trend is real, what difference does it make to the outcome of play? To answer that question I looked at the determinants of runs allowed per 9 innings of play from 1901 through 2015. I winnowed the available statistics to obtain three equations with explanatory variables that pass the sniff test:*

  • Equation 5 covers the post-World War II era (1946-2015). I used it for backcast estimates of runs allowed in each season from 1901 through 1945.
  • Equation 7 covers the entire span from 1901 through 2015.
  • Equation 8 covers the pre-war era (1901-1940). I used it to forecast estimates of runs allowed in each season from 1941 through 2015.

This graph shows the accuracy of each equation:

Estimation errors as percentage of runs allowed

Equation 7, even though it spans vastly different baseball eras, is as good as or better than Equations 5 and 8, which are tailored to their eras. Here’s Equation 7:

RA9 = -5.01 + H9(0.67) + HR9(0.73) + BB9(0.32) + E9(0.60) + WP9(0.69) + HBP9(0.51) + PAge(0.03)

Where 9 stands for “per 9 innings” and
RA = runs allowed
H = hits allowed
HR = home runs allowed
BB = bases on balls allowed
E = errors committed
WP = wild pitches
HBP = batters hit by pitches
PAge = average age of pitchers

The adjusted r-squared of the equation is 0.988; the f-value is 7.95E-102 (a microscopically small probability that the equation arises from chance). See the first footnote regarding the p-values of the explanatory variables.
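Equation 7 is easy to put to work. Here it is as a function (my packaging), fed an invented but plausible stat line:

```python
# Equation 7, 1901-2015, as a function.
def ra9(h9, hr9, bb9, e9, wp9, hbp9, page):
    """Runs allowed per 9 innings."""
    return (-5.01 + 0.67 * h9 + 0.73 * hr9 + 0.32 * bb9
            + 0.60 * e9 + 0.69 * wp9 + 0.51 * hbp9 + 0.03 * page)

# Hypothetical season: 9 hits, 1 home run, 3 walks, 1 error,
# 0.3 wild pitches, and 0.3 hit batsmen per 9; pitchers average age 28.
print(round(ra9(9.0, 1.0, 3.0, 1.0, 0.3, 0.3, 28.0), 2))  # 4.51
```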

What does this have to do with velocity? Let’s say that velocity increased by 1 mile an hour between 2007 and 2015 (see chart above). The correlations for 2007-2015 between velocity and the six pitcher-related variables (H, HR, BB, WP, HBP, and PAge), though based on small samples, are all moderately strong to very strong (r-squared values 0.32 to 0.83). The combined effects of an increase in velocity of 1 mile an hour on those six variables yield an estimated decrease in RA9 of 0.74. The actual decrease from 2007 to 2015, 0.56, is close enough that I’m inclined to give a lot of credit to the rise in velocity.**
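The mechanics of that estimate amount to a dot product: multiply each pitcher-related coefficient from Equation 7 by that variable’s estimated change per extra mile an hour, then sum. The per-mph changes below are invented placeholders, not the estimates derived from the 2007-2015 correlations:

```python
# Equation 7 coefficients for the six pitcher-related variables.
coef = {"H9": 0.67, "HR9": 0.73, "BB9": 0.32,
        "WP9": 0.69, "HBP9": 0.51, "PAge": 0.03}

# Hypothetical changes per extra mile an hour of velocity -- these
# are placeholders, NOT the sensitivities estimated in the post.
change_per_mph = {"H9": -0.80, "HR9": -0.05, "BB9": 0.10,
                  "WP9": 0.02, "HBP9": 0.01, "PAge": 0.00}

delta_ra9 = sum(coef[k] * change_per_mph[k] for k in coef)
print(round(delta_ra9, 3))  # -0.522 with these placeholder values
```

Swap in the actual per-mph sensitivities and the same sum yields the estimated decrease in RA9 of 0.74.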

What about the long haul? Pitchers have been getting bigger and stronger — and probably faster — for decades. The problem is that a lot of other things have been changing for decades: the baseball, gloves, ballparks, the introduction of night games, improvements in lighting, an influx of black and Latin players, variations in the size of the talent pool relative to the number of major-league teams, the greater use of relief pitchers generally and closers in particular, the size and strength of batters, the use of performance-enhancing drugs, and so on. Though I would credit the drop in RA9 to a rise in velocity over a brief span of years — during which the use of PEDs probably declined dramatically — I won’t venture a conclusion about the long haul.
* I looked for equations where explanatory variables have intuitively correct signs (e.g., runs allowed should be positively related to walks) and low p-values (i.e., low probability of inclusion by chance). The p-values for the variables in equation 5 are all below 0.01; for equation 7 the p-values all are below 0.001. In the case of equation 8, I accepted two variables with p-values greater than 0.01 but less than 0.10.

** It’s also suggestive that the relationship between velocity and the equation 7 residuals for 2007-2015 is weak and statistically insignificant. This could mean that the effects of velocity are adequately reflected in the coefficients on the pitcher-related variables.