
Book Review
September 2023

Recontextualizing the relationship between statistics and economics

Exploring the History of Statistical Inference in Economics. Edited by Jeff Biddle and Marcel Boumans. Durham and London: Duke University Press, 2021, 332 pp., $18.00 paperback.

Anyone who has taken an econometrics course knows that statistical inference and probability theory are inextricably linked. But is that the whole story? How much do we know about the tools we use to explain economic phenomena? Exploring the History of Statistical Inference in Economics, a volume edited by Jeff Biddle and Marcel Boumans, helps us answer these questions. The volume outlines the history of statistical inference and traces how the economics profession has both molded the field and been molded by it. It accomplishes this by focusing on work in statistical inference that falls outside the field’s long-calcified standards.

The volume consists of 10 papers divided into three themed sections: “Inference in the field,” “Inference in time,” and “Inference without a cause.” This review briefly covers each section, discussing the papers in the order in which they are presented.

The first section highlights that, over time, statistical inference has varied in both complexity and adherence to theory, with the latter being conditional on data availability and inputs from many, often biased, actors. The section refers both to work done outside of academia and to work involving agriculture, a sector covered in two of the section’s three papers.

In the first paper, Paul Burnett details agricultural economist Theodore Schultz’s use of “statistical parables” to subvert the prevailing assumptions of development economics in the 1950s and 1960s. According to Burnett, these parables employed limited analysis and relevant examples to disprove these assumptions by counterexample.

In the next paper, Jeff Biddle relays the work of a team of economists at the U.S. Bureau of Agricultural Economics in the 1920s and 1930s. The team employed advanced techniques to forecast livestock and crop harvests, placing more emphasis on methodological rigor than on economic theory. For example, the team revised survey questions to account for recall bias and extensively used alternative data sources (“check data”) to improve its forecasts. Biddle argues that the team’s work exemplified inference as it existed at the time—using data to generalize about a statistical universe outside a sample, without employing probability-based inference.

In the section’s last paper, Boris Samuel discusses statistical misreporting of macroeconomic indicators to the International Monetary Fund (IMF) by the government of Mauritania in the early 2000s. Samuel argues that the IMF’s sanitized peer-review process and bureaucracy favored legibility, simplicity, and consistency over methodological rigor. As a result, IMF staff stuck to outdated economic models, allowing what the author calls a “statistical lie” to persist.

The volume’s second section, also composed of three papers, focuses on the chronological development of statistical techniques, using research into economic trends and business cycles as a touchstone. Here, reexamining the past does more than simply provide useful historical parallels and examples; it gets to the core of the economics profession—understanding and explaining economic phenomena—by highlighting the mutability of the profession’s tools.

In the section’s first paper, Mary S. Morgan uses the work of Thomas Malthus (late 1700s to early 1800s) and Nikolai Kondratiev (early 20th century) to introduce and highlight the act of “narrative making.” Morgan argues that, because of the rise of mechanized statistical inference from the 1970s onward, we have forgotten crucial components of the inferential process, namely, the deciphering of statistical phenomena and their placement in a larger context. The author recenters narrative making in the scientific process, showing how the scientist weaves together elements of a phenomenon to explain the whole.

Next, Laetitia Lenel discusses economist Warren Persons’ work and the development of the Harvard Index of General Business Conditions (an index for forecasting the business cycle on the basis of probability-based inference) in the early 20th century. Although Persons espoused probability-based inference well before it became dominant, he later renounced it as inadequate when the index failed to accurately forecast future economic conditions. Crucially, Lenel argues that the index’s eventual failure triggered a paradigmatic shift in the economics profession, whereby individual expectations and uncertainty—rather than natural, mechanistic laws—were seen to guide the business cycle and the greater economy. Besides confounding the “linear history” view of statistical inference in economics and calling into question the clean shift from nonprobabilistic to probabilistic inference, this development reframes the forecaster’s ever-changing toolkit as a reflection of shifting economic worldviews, helping contextualize the work of economists at the time, and even today.

Closing the second section, a paper by Thomas A. Stapleford further muddies history by bending it into a circle, comparing statistical inferential techniques actualized by economists of the “data revolution” (circa 2014) with those predicted by economist Wesley Mitchell 90 years prior. In this comparison, Stapleford highlights four shared traits: a deemphasis on probability-based inference, a shift from theoretical modeling to model building based on observational data, an adoption of a common data analysis technique to encourage interdisciplinary work, and a global shift toward promoting data collection and analysis.

The volume’s last section, composed of four papers, details how several parties used or developed statistical methods to advance biased argumentation, with varying results. The section highlights the interplay among economic research, the environment in which researchers collect and disseminate data, and the researchers themselves. The main takeaway from this discussion is that how we identify and navigate situations of potential bias has reverberating effects on our field of study, the larger community of applied research, and the broader society.

In the section’s opening paper, Marcel Boumans discusses Francis Galton’s composite photography experiment of the late 19th century. Galton, a eugenicist, used “pictorial statistics” in an attempt to prove objectively the inferiority of certain groups. Yet, instead of showing inferiority, his composite photographs revealed “beautiful” archetypes, undermining his beliefs. Boumans also exposes the bias in Galton’s procedure: it was the act of grouping photographs, not the taking of their composite, that constituted inference, and that act could not avoid bias.

Next, Aashish Velkar uses examples from Great Britain to highlight how economists involved in the production of price-index statistics contended with inferential gaps in the creation and presentation of information. Inferential gaps are chasms between a given phenomenon and what the scientist measures; they can arise at any point of the scientific process. However, these gaps may involve not just dynamics between people and phenomena but also interactions among people with differing aims. For example, Velkar discusses a cost-of-living index created in Great Britain in the early 1900s, showing that, despite the index’s use in inferential statistics at the government level, political factions ignored it and favored cherry-picked data designed to mislead the public and slander political opponents.

In the section’s third paper, Amanar Akhabbar details Nobel laureate Wassily Leontief’s early work in interindustry studies (1941), documenting the formation of what Leontief coined “direct inference” (which he considered superior to probability-based “indirect inference”). Specifically, Akhabbar shows how Leontief used sample data to infer the structural parameters of a model that applied to the entire economy. Crucially, Leontief derived these parameters directly from his mathematical model’s structural equations, not from a reduced-form equation. Akhabbar’s paper contributes to the early history of alternative statistical inference by demystifying Leontief’s key contribution: the field of input–output analysis.

In the section’s last paper, Harro Maas closes the volume with a history of contingent valuation, a survey-based inferential technique intended to ascribe value to nonmarket goods such as the environment. The author details a long-running disagreement between resource and environmental economists over the technique’s merits. Although a guiding framework for the technique was established from the 1970s to the 2000s, a series of court battles politicized the technique and crushed its reputation.

The aim of statistical inference in economics is to draw generalizations and conclusions about a population from limited data. Exploring the History of Statistical Inference in Economics contributes to this common goal by presenting nonstandard statistical techniques and important contributions to the field. It is equal parts history and apophatic inquiry. By highlighting nonstandard statistical inference, the volume deepens our understanding of both our profession’s tools and our place in their evolving applications.

About the Reviewer

Nicholas Catapano
catapano.nicholas@bls.gov

Nicholas Catapano is an economist in the Office of Prices and Living Conditions, U.S. Bureau of Labor Statistics.
