**The Development of Econometrics and Empirical Methods in Economics**

Economics is about events in the real world. Thus, it is not surprising that much of the debate about whether we should accept one economic theory rather than another has concerned empirical methods of relating the theoretical ideas about economic processes to observation of the real world. Questions abound. Is there any way to relate theory to reality? If there is a way, is there more than one way? Will observation of the real world provide a meaningful test of a theory? How much should direct and purposeful observation of economic phenomena, as opposed to informal heuristic sensibility, drive our understanding of economic events? Given the ambiguity of data, is formal theorizing simply game-playing? Should economics focus more on direct observation and common sense? In this chapter we briefly consider economists' struggles with questions such as these. Their struggles began with simple observation, then moved to statistics, then to econometrics, and recently to calibration, simulations and experimental work.

The debate about empirical methods in economics has had both a microeconomic and a macroeconomic front. The microeconomic front has, for the most part, been concerned with empirically estimating production functions and supply-and-demand curves; the macroeconomic front has generally been concerned with the empirical estimation of macroeconomic relationships and their connections to individual behavior. The macroeconomic estimation problems include all the microeconomic problems plus many more, so it is not surprising that empirical work in macroeconomics is far more contested than empirical work in microeconomics.

We begin our consideration with a general statement of four empirical approaches used by various economists. Then we consider economists' early attempts at integrating statistical work with informal observations. Next, we see how reasonable yet ad hoc decisions were made about the problems regarding the statistical treatment of data, leading to the development of a subdiscipline of economics—econometrics. Finally, we consider how those earlier ad hoc decisions have led to cynicism on the part of some economists about econometric work and the unsettled state of empirical economics today.

**Empirical Research in Economics**

Almost all economists believe that economics must ultimately be an empirical discipline, that their theories of how the economy works must be related to (and, if possible, tested against) real-world events and data. But economists differ enormously on how one does this and what implications can be drawn afterward. We will distinguish four different approaches to relating theories to the real world: common-sense empiricism, statistical analysis, classical econometric analysis, and Bayesian econometric analysis.

Common-sense empiricism is an approach that relates theory to reality through direct observation of real world events with a minimum of statistical aids. You look at the world around you and determine if it matches your theoretical notions. It is the way in which most economists approached economic issues until the late nineteenth century; before then, most economists were not highly trained in statistical methods, the data necessary to undertake statistical methods did not exist, many standard statistical methods that we now take for granted had not yet been developed, and computational capabilities were limited.

Common-sense empiricism is sometimes disparagingly called armchair empiricism. The derogatory term conveys a sense of someone sitting at a desk, developing a theory, and then selectively choosing data and events to support that theory.

Supporters of common-sense empiricism would object to that characterization because the approach can involve careful observation, extensive field work, case studies, and direct contact with the economic events and institutions being studied. Supporters of common-sense empiricism argue that individuals can be trained to be open to a wide range of real-world events; individuals can objectively assess whether their theories match those events. The common-sense approach requires that economists constantly observe economic phenomena, with trained eyes, thereby seeing things that other people would miss. It has no precise line of demarcation to ultimately determine whether a theory should or should not be accepted, but it does have an imprecise line. If you expected one result and another occurred, you should question the theory. The researcher's honesty with himself or herself provides the line of demarcation.

The statistical analysis approach also requires one to look at reality but emphasizes aspects of events that can be quantified and thereby be subject to statistical measure and analysis. A focus is often given to statistically classifying, measuring, and describing economic phenomena. This approach is sometimes derisively called measurement without theory. Supporters of the approach object to that characterization, arguing that it is simply an approach that allows for the possibility of many theories and permits the researcher to choose the most relevant theory. They claim that it is an approach that prevents preconsidered theoretical notions from shaping the interpretation of the data.

The statistical analysis approach is very similar to common-sense empiricism but unlike that approach, the statistical approach uses whatever statistical tools and techniques are available to squeeze every last bit of understanding from a data set. It does not attempt to relate the data to a theory; instead, it lets the data (or the computer analyzing the data) do the talking. As the computer has increased researchers' capabilities of statistically analyzing data, the approaches of common-sense empiricism and statistical analysis have diverged.
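A minimal sketch may make the statistical-analysis stance concrete. The numbers and variable names below are hypothetical, invented for illustration: the researcher summarizes and describes a data set with means, dispersion, and correlation, without committing beforehand to a theory of what drives the pattern.

```python
# Illustrative sketch: describe the data, let the data "do the talking".
import statistics

# Hypothetical annual index numbers for output and employment.
output = [100, 103, 101, 106, 110, 108]
employment = [50, 52, 51, 54, 56, 55]

mean_out = statistics.mean(output)
mean_emp = statistics.mean(employment)

# Sample correlation, computed directly from its definition.
cov = sum((o - mean_out) * (e - mean_emp)
          for o, e in zip(output, employment))
corr = cov / (sum((o - mean_out) ** 2 for o in output) ** 0.5
              * sum((e - mean_emp) ** 2 for e in employment) ** 0.5)

print(mean_out, statistics.stdev(output), corr)
```

The output is a measured pattern (here, a strong positive association between the two series), not a theory about why the pattern exists; choosing among theories comes later, if at all.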

The classical econometric approach is a method of empirical analysis that directly relates theory and data. The common-sense sensibility of the researcher, or his or her understanding of the phenomena, plays little role in the empirical analysis; the classical econometrician is simply a technician who allows the data to do the testing of the theory. This approach makes use of classical statistical methods to formally test the validity of a theory. The econometric approach, which developed in the 1930s, is now the approach most typically taught in modern economics departments. Its history is the primary focus of this chapter.
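The classical recipe can be sketched in a few lines. The data below are hypothetical, but the procedure is the standard one: specify a theoretical relation y = b0 + b1·x + error, estimate it by ordinary least squares, and formally test a prediction of the theory (here, that b1 is zero) with a t-statistic, leaving the researcher's sensibility out of the verdict.

```python
# Illustrative classical test: OLS estimation plus a t-test on the slope.
import numpy as np

# Hypothetical observations: x might be income, y consumption.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1, 11.9])

X = np.column_stack([np.ones_like(x), x])      # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # OLS estimates (b0, b1)

residuals = y - X @ beta
dof = len(y) - X.shape[1]                      # degrees of freedom
s2 = residuals @ residuals / dof               # residual variance
var_beta = s2 * np.linalg.inv(X.T @ X)         # covariance of the estimates

t_stat = beta[1] / np.sqrt(var_beta[1, 1])     # test of H0: b1 = 0
print(beta, t_stat)
```

A large absolute t-statistic leads the classical econometrician to reject the null hypothesis at a conventional significance level; the data, not the researcher's judgment, deliver the verdict.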

The Bayesian approach directly relates theory and data, but in the interpretation of any statistical test, it takes the position that the test is not definitive. It is based on the Bayesian approach to statistics, which interprets probabilities not as objective laws but as subjective degrees of belief. In Bayesian analysis, statistical analysis cannot be used to determine objective truth; it can be used only as an aid in coming to a subjective judgment. Thus, researchers must simply use statistical tests to modify their subjective opinions. Bayesian econometrics is a technical extension of common-sense empiricism. In it, data and data analysis do not answer questions; they are simply tools to assist the researcher's common sense.

These approaches are not all mutually exclusive. For example, one can use common-sense empiricism in the initial development of a theory and then use econometrics to test the theory. Similarly, Bayesian analysis requires that researchers arrive at their own prior belief by some alternative method, such as common-sense empiricism. However, the Bayesian and the classical interpretations of statistics are mutually exclusive, and ultimately each researcher must choose one or the other.

Technology affects not only the economy itself but also the methods economists use to analyze the economy. Thus, it should not be surprising that computer technology is making major differences in the way economists approach the economy and do empirical work. As one observer put it: Had automobiles experienced the same technological gains as computers, Ferraris would be selling for 50 cents. Wouldn't that change your driving habits? The computer certainly has changed economists' empirical work, and it will do so much more in the future.

In some cases technology has merely made it easier to do things we were already doing. Statistical tests, for example, are now done pro forma by computer programs. Recursive systems with much more complicated dynamics are finding a wider audience. Bayesian measures are beginning to show up in standard statistical software. Another group of economists is using a VAR (vector autoregression) approach; they simply look to the computer to find patterns in the data independent of any theory.
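The VAR idea can be sketched in miniature (the series and coefficients below are simulated for illustration): stack two time series, regress today's values on yesterday's, and let the estimated coefficient matrix, rather than a prior theory, describe the dynamics.

```python
# Illustrative VAR(1): the computer, not theory, summarizes the dynamics.
import numpy as np

rng = np.random.default_rng(0)
T = 200
A_true = np.array([[0.5, 0.1],
                   [0.2, 0.4]])            # hypothetical true dynamics

y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A_true @ y[t - 1] + rng.normal(scale=0.1, size=2)

# Estimation: regress y_t on y_{t-1}, one least-squares fit per equation.
Y, X = y[1:], y[:-1]
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T

print(A_hat)  # approaches A_true as the sample grows
```

Nothing in the estimation step encodes an economic theory; the coefficient matrix is whatever pattern the data happen to contain, which is both the appeal and the criticism of the approach.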

Another set of changes is more revolutionary than evolutionary. Recently a group of empirical economists has been focusing on agent-based modeling. In agent-based models, the local optimization goals of heterogeneous individual agents are specified, and instead of the results being deduced analytically, the model is simulated to determine which strategies survive. In these simulations individuals are allowed to build up institutions and enter into coalitions, providing a much closer parallel to real-world phenomena.

Another change that we have seen is the development and use of a technique called calibration in macroeconomic models. Models are not tested empirically; instead, they are calibrated to see whether the empirical evidence is consistent with what the model could have predicted. Calibration emphasizes simple general equilibrium models whose parameters are determined by introspection and by simple dynamic time-series averages; statistical "fit" is explicitly rejected as a primary goal of empirical work. There is debate about precisely what calibration shows, but if a model cannot be calibrated, it should not be retained.
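The logic of a calibration exercise can be sketched with a deliberately stylized model (the parameter values and the "measured" moment below are hypothetical): fix the parameters from outside the estimation, simulate the model, and ask whether its simple time-series moments are consistent with the data's, rather than maximizing statistical fit.

```python
# Illustrative calibration check: compare a model moment to a data moment.
import random
import statistics

random.seed(0)

rho = 0.9      # persistence, fixed by "introspection" (hypothetical value)
sigma = 0.1    # shock size, fixed from outside evidence (hypothetical value)

# Simulate a stylized output process y_t = rho * y_{t-1} + shock_t.
y, series = 0.0, []
for _ in range(10000):
    y = rho * y + random.gauss(0.0, sigma)
    series.append(y)

model_sd = statistics.stdev(series)   # model-implied output volatility
data_sd = 0.23                        # hypothetical measured volatility

# The model is judged by whether such simple moments line up with the
# data, not by a statistical goodness-of-fit criterion.
print(model_sd, data_sd)
```

If the model-implied volatility were far from the measured one, the calibrator would revise or discard the model; no regression is estimated and no significance test is run.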

A final change has been the development of a "natural experiment" approach to empirical work. This approach uses intuitive economic theory rather than structural models and uses natural experiments as the data points.
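The text does not name a specific estimator, but one common tool in the natural-experiment tradition is the difference-in-differences comparison, sketched below with invented numbers: the change in a group affected by some policy shift, minus the change in a comparison group that was not.

```python
# Illustrative difference-in-differences around a hypothetical policy change.

# Hypothetical average employment before and after the change.
treated_before, treated_after = 20.0, 24.0   # group exposed to the policy
control_before, control_after = 21.0, 23.0   # comparison group, unexposed

diff_in_diff = ((treated_after - treated_before)
                - (control_after - control_before))
print(diff_in_diff)  # 2.0: the estimated effect of the policy
```

The appeal to intuitive theory is that the control group's change stands in for what would have happened to the treated group anyway, so no structural model of the whole economy is required.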