Let's consider each statement in turn.

"It is good practice to include a measure of expected forecast error with any forecast."

That one is clearly true. You should always include error measures in any statistical or forecasting work; no estimate is perfectly precise, and knowing just how precise an estimate is can prevent costly mistakes later on.

"In exponential smoothing, a lower smoothing constant will better forecast demand for a product experiencing high growth."

This one is a bit trickier. In exponential smoothing, you adjust a time series *x* by replacing each term with a smoothed term *s*, which is a weighted average of the current observation and the previous smoothed value, controlled by a smoothing constant *a*:

`s_{t} = a x_t + (1-a) s_{t-1}`

If the smoothing constant *a* is larger, that is, closer to 1, the smoothed series will track the original time series more closely. If it is smaller, that is, closer to 0, the smoothed series will be much more heavily smoothed. In the limit where a = 0, the "smoothed" series never updates at all: it just sits at its initial value and ignores every later observation.
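A minimal sketch of the update rule above in Python (the function name `exp_smooth` and the toy series are mine, chosen for illustration), showing the two extremes of the smoothing constant:

```python
def exp_smooth(x, a):
    """Exponentially smooth series x with smoothing constant a (0 <= a <= 1)."""
    s = [x[0]]  # conventionally, the first smoothed value is the first observation
    for v in x[1:]:
        s.append(a * v + (1 - a) * s[-1])  # s_t = a*x_t + (1-a)*s_{t-1}
    return s

x = [3, 5, 4, 6]
print(exp_smooth(x, 1.0))  # a = 1: identical to the original series
print(exp_smooth(x, 0.0))  # a = 0: stuck at the initial value forever
```

With `a = 1.0` the output reproduces `[3, 5, 4, 6]` exactly; with `a = 0.0` it is `[3, 3, 3, 3]`.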

If a product is experiencing high growth, do we want more or less smoothing? Probably *less* smoothing, because with too much smoothing we will systematically underestimate future demand by averaging in too many past values from when the product was small. Less smoothing means a *larger* smoothing constant (a bit counter-intuitively), so this statement is *false*.
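We can see that underestimation concretely with a quick sketch, using the update rule `s_t = a x_t + (1-a) s_{t-1}` from the formula above on a made-up, steadily growing demand series (the names `demand`, `heavy`, and `light` are my own, not standard terminology):

```python
def exp_smooth(x, a):
    # s_t = a*x_t + (1-a)*s_{t-1}, seeded with the first observation
    s = [x[0]]
    for v in x[1:]:
        s.append(a * v + (1 - a) * s[-1])
    return s

demand = list(range(1, 11))      # steadily growing demand: 1, 2, ..., 10
heavy = exp_smooth(demand, 0.2)  # small constant -> heavy smoothing
light = exp_smooth(demand, 0.8)  # large constant -> light smoothing

# Both lag the true demand of 10, but heavy smoothing lags far more.
print(round(heavy[-1], 2), round(light[-1], 2))  # 6.54 9.75
```

For a linearly growing series the heavily smoothed estimate settles well below the current value, while the lightly smoothed one stays close to it, which is exactly why a larger constant suits a high-growth product.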

So, we found one that is false. We could stop there, but let's make sure the other statements are true as well.

"It is good practice to use more than one forecasting model and then take a look at the results using common sense."

This is also definitely true. The reason we still have economists and statisticians rather than just throwing everything into big computer models is that computers have no common sense; they can't tell whether a result is reasonable or not. It's just garbage-in, garbage-out, as they say: a bad model can produce wildly and obviously wrong predictions, which a human would detect but a computer would not.

By comparing a variety of different models and applying known theory and individual intuition, we can arrive at better forecasts than we could by naively trusting a single model.

"A benefit of qualitative forecasts is that they take advantage of expert opinion."

This is also true. Qualitative forecasts are quite limited (which is why we use formal forecasting models in the first place), but they have their place: experts can draw on much richer sources of information---background knowledge, insights from other fields, recent developments in policy---that formal models can't capture. If qualitative forecasts differ greatly from quantitative ones, we know we have a problem, and that gives us reason to investigate further. (We don't necessarily know which is correct, though my money is usually on the quantitative forecasts.)