CHAPTER FOUR
ESTIMATING AND FORECASTING DEMAND
OBJECTIVES
1. To introduce the “art and science” of empirical analysis.
2. To examine the sources of data. (Collecting Data)
3. To give the student an intuitive feel for regression analysis. (Regression Analysis)
4. To explain the statistics that are generated by regression analysis. (Interpreting Regression Statistics)
5. To introduce the student to some of the problems encountered in regression analysis. (Potential Problems in Regression)
6. To analyze time series models. (Time Series Models)
7. To introduce barometric models. (Barometric Models)
8. To discuss forecast accuracy and performance. (Forecasting Performance)
TEACHING SUGGESTIONS
I. Introduction and Motivation
In the previous chapters we took demand and cost equations as given. In real life,
however, they must be gleaned from experience. This chapter presents some
important techniques for estimating demand. Our approach is to emphasize that
empirical work is always both an art and a science. In
other words, the outcome of a study is determined in part by the design of the
study. If the study is poorly designed, it will lead to “false” conclusions.
There are (at least) three important considerations in study design. First, the study
design is motivated by the purpose to which the results will be put. Before
gathering data, it is useful to consider possible outcomes and how they will affect
decision making. If the study does not produce information relevant to the
decision-making process, then the study should be redesigned or abandoned.
Second, it is difficult to gather data without having some hunch about how things
are likely to turn out. This hunch can be informed, of course. Theory is, in some
sense, a well-thought-out hunch about the way the world works. Also, previous
empirical studies can be useful in designing new ones.
Third, empirical studies cost money.
The above considerations yield two conclusions: 1) Empirical studies should be
carefully designed with the decision-making process in mind, and 2) the cost of the
study should be compared to the expected benefits to determine whether the study
is worth undertaking. (We study this latter conclusion in some detail in Chapter 13,
The Value of Information, but a general discussion of the costs and benefits of
information is useful here.)
In analyzing empirical studies, we feel that, for the purposes of this course, the
ultimate goal is not to become an expert in regressions but rather to be skilled in
interpreting them. We spend more time on understanding what a regression is and
very little time on the formulae for computing the outputs of regression analysis.
Finally, it is important that students approach the results with caution and
exercise common sense. If the results do not make sense, then they should not be
believed. Estimates of the accuracy of the regression results should be used in
performing sensitivity analysis on the underlying problem.
In our discussion of forecasting, the main focus is on time-series models.
Time-series models have advantages and disadvantages when compared to
demand-based models introduced in Chapter 3 and estimated in the earlier sections
of Chapter 4. Time-series models do not require much ex ante theorizing about
economic relationships. On the other hand, they do not provide much information
about these relationships.
The result is that time-series models can be quite accurate (even more accurate
than structural models) provided that there are no bizarre twists in economic
conditions. When unusual events occur that have profound effects on the economy
(such as an oil embargo), time-series models are often unable to forecast with any
accuracy. Structural models, which allow the unusual events to be explicitly
incorporated, usually do much better.
Another disadvantage of time-series models is that they do not account for variables
that are under the control of the decision maker. For example, the trend models do
not allow the decision maker to ask the question “What would happen if I increased
my price?” or “What would happen if I increased advertising expenditures?”
Structural models, on the other hand, are designed to answer these questions.
Often, time-series models and structural models are used in conjunction with one
another. For example, an estimated demand equation may require information on
the explanatory variables: consumer income, price and advertising. Price and
advertising are under the control of the decision maker. But, a time-series model
may be used to predict consumer income.
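As a concrete sketch of this division of labor, the snippet below (in Python) fits a simple trend to a hypothetical income series and then feeds the income forecast into a hypothetical estimated demand equation; every number in it is invented for illustration.

```python
# A minimal sketch, assuming hypothetical data and coefficients throughout:
# a time-series trend forecasts income, which then feeds a structural
# demand equation whose price and advertising inputs the manager controls.
import numpy as np

# Hypothetical consumer income (index form) for periods t = 1..8
income = np.array([100.0, 103.0, 105.5, 108.2, 110.9, 114.1, 116.8, 119.9])
t = np.arange(1, len(income) + 1)

# Step 1: fit a linear trend, income_t = a + b*t, and forecast period 9
b, a = np.polyfit(t, income, 1)
income_hat = a + b * (len(income) + 1)

# Step 2: plug the forecast into an (invented) estimated demand equation
# Q = 500 - 10*P + 2*Y + 0.5*A, where P and A are set by the manager
P, A = 20.0, 30.0
Q_hat = 500 - 10 * P + 2 * income_hat + 0.5 * A
print(f"income forecast: {income_hat:.1f}, sales forecast: {Q_hat:.1f}")
```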
II. Teaching the “Nuts and Bolts”
Sources of Information. In covering this section we like to compare the
data-gathering methods. It is useful to put each method on the board and
have students call out the advantages and disadvantages of each.
Regression Analysis and the Interpretation of Statistics. Here we try to give the
student an intuitive feel for the least squares method. One possibility is to put a
scatter diagram on the chalkboard and to ask the students how we might go about
finding a line that best fits the points. The distance between each point and the line
can be shown and it is not too difficult to convince the student that squaring these
distances and adding them is a reasonable (though not the only conceivable) way
of measuring fit.
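For instructors who want a numerical companion to the chalkboard demonstration, here is a minimal sketch on made-up points; it fits the line and reports the sum of squared distances that least squares minimizes.

```python
# A minimal least-squares sketch on invented data: the fitted line is the
# one that minimizes the sum of squared vertical distances to the points.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1, 6.8])

slope, intercept = np.polyfit(x, y, 1)   # least-squares line y ≈ a + b*x
residuals = y - (intercept + slope * x)  # vertical distance of each point from the line
sse = np.sum(residuals ** 2)             # the quantity least squares minimizes
print(f"b = {slope:.3f}, a = {intercept:.3f}, sum of squared errors = {sse:.3f}")
```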
One way of explaining degrees of freedom is to use a diagram. With two data
points one can estimate a line. However, it is impossible to tell how well this line
fits the true underlying relationship. With three data points we can not only fit a
line but also determine something about the distribution around the line. That is,
with three data points we have a degree of freedom beyond what we need to
determine a line. The more degrees of freedom we have, the more we can determine
about the distribution.
The concept behind R² is intuitive for most students. Its main shortcoming can be
emphasized by exploring what happens when explanatory variables are added.
Ultimately, when there are as many explanatory variables (including the constant)
as there are data points, the fit becomes perfect. However, we have lost our
degrees of freedom and with them any confidence we have in the specification.
That is, adding explanatory variables has a cost in terms of lost degrees of
freedom. The adjusted R² is designed to take this cost into account.
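The standard formulas make the penalty explicit. A minimal sketch, reusing the invented data from above (here k counts the explanatory variables other than the constant):

```python
# A minimal sketch of R-squared versus adjusted R-squared on invented data.
import numpy as np

def r_squared_stats(y, y_hat, k):
    n = len(y)
    sse = np.sum((y - y_hat) ** 2)                 # unexplained variation
    sst = np.sum((y - np.mean(y)) ** 2)            # total variation
    r2 = 1 - sse / sst
    adj_r2 = 1 - (1 - r2) * (n - 1) / (n - k - 1)  # charges for each lost degree of freedom
    return r2, adj_r2

y = np.array([2.1, 2.9, 4.2, 4.8, 6.1, 6.8])
x = np.arange(1, 7, dtype=float)
slope, intercept = np.polyfit(x, y, 1)
r2, adj_r2 = r_squared_stats(y, intercept + slope * x, k=1)
print(f"R-squared = {r2:.3f}, adjusted R-squared = {adj_r2:.3f}")
```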
For the F statistic we emphasize that it has an F distribution under the
hypothesis that all of the coefficients are zero. Given that hypothesis, we can
ask how likely it is that we would observe a value of F this large. If it is
very unlikely, then we reject the hypothesis that all of the coefficients are zero.
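The mechanics reduce to a tail probability of the F distribution. A minimal sketch with an illustrative statistic and degrees of freedom (both assumed here, not drawn from any example in the chapter):

```python
# A minimal sketch: the p-value of an (assumed) overall F statistic is the
# probability of a value at least this large if all coefficients were zero.
from scipy import stats

F = 12.4         # illustrative F statistic from a regression printout
df_model = 3     # numerator df: number of explanatory variables
df_resid = 20    # denominator df: n - k - 1

p_value = stats.f.sf(F, df_model, df_resid)
print(f"p = {p_value:.4f}")  # a small p leads us to reject the all-zero hypothesis
```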
All spreadsheet-based regression programs now report p-values that directly
convey the statistical significance of the regression as a whole (the F statistic) and
of the individual coefficients of the potential explanatory variables (their respective
t-statistics). This frees students from having to pick critical values from statistical
tables to benchmark the significance of their regression results. For instance, suppose
that an estimated coefficient is .76, its t-statistic is 2.31, and its associated p-value
is .032 (for, say, a two-sided test). Then we immediately know that the coefficient
is significant at the 5% level, that is, statistically distinguishable from zero.
Remember to explain to students what the p-value means. Namely, if the true
coefficient were zero, the chance that the estimate would be as extreme as .76 (due
to luck) would be only .032. Because this chance is so small, we reject the null
hypothesis of a zero coefficient. This is the correct formal meaning of rejecting the
null hypothesis, but it’s a bit roundabout and not always easy to remember.
(Informally, we tell students that it’s OK to think of the p-value as reflecting the
validity of the null hypothesis, i.e., only a .032 chance in this case that the
coefficient is zero.)
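The example’s numbers can be verified with a one-line tail-probability computation. In this sketch the 20 residual degrees of freedom is our own assumption, since the example does not specify a sample size.

```python
# A minimal check of the example: t = 2.31 with an assumed 20 degrees of
# freedom yields a two-sided p-value of about .032.
from scipy import stats

t_stat = 2.31
df = 20                                   # assumed for illustration
p_two_sided = 2 * stats.t.sf(abs(t_stat), df)
print(f"p = {p_two_sided:.3f}")           # ≈ .032, matching the text's example
```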
Potential Problems in Regression. We spend some time on what can go wrong in
a regression. Our main emphasis is on equation specification and multicollinearity
since these are most easily grasped by the student. Misspecification can be
shown easily on the chalkboard by presenting a scatter diagram that is well
fitted by a curved demand curve. By comparison, a linear demand curve
specification would not fit so well (and would give quite different demand
predictions). Here, using a linear model would be a misspecification.
Omitting key variables can also bias results. For example, suppose that (1) one
incorrectly omits income from the regression equation, and (2) income happens to
be positively correlated with price. Doing a simple regression on price could then
show relatively inelastic demand when in fact it was quite elastic. (When price
increases, income also happens to increase, blunting the reduction in sales that
would otherwise occur.) This mistaken elasticity estimate could lead to a disastrous
pricing policy.
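This bias is easy to demonstrate by simulation. In the sketch below, all coefficients and distributions are invented: the true price effect is -5, but omitting the (positively correlated) income variable shrinks the estimated effect toward zero.

```python
# A minimal simulation of omitted-variable bias with invented parameters.
import numpy as np

rng = np.random.default_rng(0)
n = 200
income = rng.normal(100, 10, n)
price = 0.05 * income + rng.normal(0, 2, n)                    # price moves with income
quantity = 50 - 5 * price + 2 * income + rng.normal(0, 2, n)   # true demand relation

# Correct specification: regress quantity on price and income
X_full = np.column_stack([np.ones(n), price, income])
b_full = np.linalg.lstsq(X_full, quantity, rcond=None)[0]

# Misspecified: income omitted
X_short = np.column_stack([np.ones(n), price])
b_short = np.linalg.lstsq(X_short, quantity, rcond=None)[0]

print(f"true price effect: -5.00, full model: {b_full[1]:.2f}, "
      f"income omitted: {b_short[1]:.2f}")
# With income omitted, the price coefficient is biased toward zero, so
# demand looks less price-sensitive (less elastic) than it really is.
```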
Multicollinearity can also be demonstrated fairly easily. Again, the price and
income example can be used. Though we do not spend much time on simultaneity,
heteroscedasticity or serial correlation, the instructor may take the opportunity to
cover these issues in greater depth.
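A quick simulated demonstration (again with invented numbers) shows the classic symptom: when price and income move almost in lockstep, the individual coefficient estimates become imprecise even though the regression as a whole fits well.

```python
# A minimal multicollinearity sketch: nearly collinear regressors inflate
# the standard errors of the individual coefficients. Invented data.
import numpy as np

rng = np.random.default_rng(1)
n = 50
income = rng.normal(100, 10, n)
price = 0.5 * income + rng.normal(0, 0.5, n)            # corr(price, income) ≈ .99
quantity = 50 - 5 * price + 2 * income + rng.normal(0, 10, n)

X = np.column_stack([np.ones(n), price, income])
b = np.linalg.lstsq(X, quantity, rcond=None)[0]
sigma2 = np.sum((quantity - X @ b) ** 2) / (n - X.shape[1])
se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))  # coefficient standard errors

print(f"price: {b[1]:.2f} (se {se[1]:.2f}), income: {b[2]:.2f} (se {se[2]:.2f})")
# The standard errors are large relative to the coefficients, so neither
# effect can be pinned down individually despite a good overall fit.
```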
Simple Time Series Models (The Trend Model). In this section we try to give
the student a feel for time-series models. The models are very simple and build on
regression models. We let our students know that this is only the tip of the iceberg.
Seasonality. This is a refinement of the trend model that shows how the basic
model can be extended to take into account some commonly observed patterns;
a sketch combining trend and seasonality appears below.
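A minimal sketch, with an invented quarterly sales series: a linear trend Q_t = a + b*t is refined with dummy variables for quarters 2-4 (quarter 1 serving as the base).

```python
# A minimal trend-plus-seasonality sketch on invented quarterly sales.
import numpy as np

sales = np.array([110, 132, 125, 141, 118, 140, 134, 152], dtype=float)  # 8 quarters
t = np.arange(1, len(sales) + 1)
quarter = (t - 1) % 4 + 1                     # 1,2,3,4,1,2,3,4

# Design matrix: constant, trend, and dummies for quarters 2-4
X = np.column_stack([np.ones(len(t)), t,
                     (quarter == 2).astype(float),
                     (quarter == 3).astype(float),
                     (quarter == 4).astype(float)])
coef = np.linalg.lstsq(X, sales, rcond=None)[0]

# Forecast t = 9 (a quarter-1 period, so no dummy applies)
forecast = coef[0] + coef[1] * 9
print(f"coefficients: {np.round(coef, 2)}, forecast for t = 9: {forecast:.1f}")
```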
Barometric Models. In a way, these are hybrids between structural and
time-series models. They are more like time-series models, however, in that they
do not contain variables over which the decision maker has control. Students
should be encouraged to follow the index of leading indicators reported in the
news.
Forecast Accuracy. All forecasting methods provide a point estimate of the
forecast. It is equally important, however, to have some idea of the accuracy of
the forecast. Rather than a single number, the forecaster should report a range
within which he or she is reasonably confident that the outcome will fall. It is then
important for the decision maker to do sensitivity analysis over all reasonable
forecasts. Only then will the manager have some idea of whether a planned action
is worth the risk.
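The arithmetic of a forecast range is simple; a minimal sketch with an invented point forecast and standard error:

```python
# A minimal sketch: turn a point forecast and its (assumed) standard error
# into an approximate 95% forecast range for sensitivity analysis.
point_forecast = 155.0   # hypothetical point estimate
std_error = 6.0          # hypothetical standard error of the forecast

low = point_forecast - 1.96 * std_error
high = point_forecast + 1.96 * std_error
print(f"95% forecast range: [{low:.1f}, {high:.1f}]")
# The decision maker can then test the planned action at both ends of the range.
```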
Econometric Models. Though the text does not cover this important topic,
instructors who wish to do so should refer to the short module found on the
Samuelson and Marks website.
ADDITIONAL MATERIALS
I. Short Readings
Don Peck, “They’re Watching You at Work,” The Atlantic, December 2013, pp. 72-84.
Suzanne Kapner, “The Dirty Secret of Black Friday Discounts,” The Wall Street Journal, November 26, 2013, pp. B1, B4.
Steve Lohr, “More Data Can Mean Less Guessing About the Economy,” The New York Times, September 8, 2013, p. BU3.
Stephanie Clifford, “Using Data to Stage-Manage Paths to the Prescription Counter,” The New York Times, June 20, 2013, p. F2.
Carl Bialik, “And the Oscar-Pool Winners Are … the Stats Dudes,” The Wall Street Journal, February 23, 2013, p. A2.
John Tierney, “Refining the Formula that Predicts Celebrity Marriages’ Doom,”
The New York Times, March 13, 2012, p. D3.
Steve Lohr, “The Age of Big Data,” The New York Times, February 12, 2012, pp.
SR1, SR2.
J. V. Devries, “May the Best Algorithm Win,” The Wall Street Journal, March 16,
2011, p. B4.
C. Tuna, “When Combined Data Reveal the Flaw of Averages,” The Wall Street
Journal, December 2, 2009, p. A21.
A. Schwartz, “N.F.L.’s Dementia Study Has Flaws, Experts Say,” The New York Times, October 27, 2009, p. B10.
S. Lohr, “For Today’s Graduate, Just One Word: Statistics,” The New York Times,
August 6, 2009, p. A1.
E. Porter and G. Fabrikant, “A Big Star May Not a Profitable Movie Make,” The
New York Times, August 28, 2006.
“Economists Wrestle with the Olympics,” The Wall Street Journal, July 16, 2004,
p. A2. (Regression analysis predicts country medal counts.)
II. Longer Readings
Lucius Riccio, “Pothole Analytics,” OR/MS Today, June 2014, pp. 32-35.
Ray C. Fair, “Reflections on Macro-Econometric Modeling,” Cowles Foundation
Discussion Paper No. 1908, 2013.
J. H. Stock and M. W. Watson, Introduction to Econometrics, Boston: Addison-Wesley, 2007.
R. S. Pindyck and D. L. Rubinfeld, Econometric Models and Economic Forecasts,
New York: McGraw-Hill, 1997.
R. C. Fair, Predicting Presidential Elections and Other Things, Palo Alto, CA: Stanford University Press, 2002.
J. S. Armstrong, Principles of Forecasting, Kluwer Academic Publishers, 2001.
D. Lovallo and D. Kahneman, “Delusions of Success: How Optimism Undermines Executives’ Decisions,” Harvard Business Review, July-August 2003, pp. 56-63.
Symposium on Event Markets, Journal of Economic Perspectives, Spring 2004, pp. 107-142.
J. H. Stock and M. W. Watson, “How Did Leading Indicator Forecasts Perform During the 2001 Recession?” Economic Quarterly, Federal Reserve Bank of Richmond, Summer 2003, pp. 71-90.
V. Zarnowitz, “Theory and History Behind Business Cycles: Are the 1990s the
Onset of a Golden Age?” Journal of Economic Perspectives, Spring 1999, pp.
69-90.
