Here are 2 examples of published numbers that, for me, cross an important line of credulity (a line drawn in the mental sand of my mind).
These examples illustrate a bothersome and unfounded inflation of perceived
accuracy in printed statistics.
Hundredths of a Goal. First, I’ve
enjoyed watching the Women’s World Cup soccer matches being played in Canada,
and remain hopeful that the US team will again capture the Cup. There’s been
much to enjoy as favorites like the US and Germany, as well as “Cinderella” teams
like Canada, have navigated FIFA’s very strangely composed brackets towards the
July 5 championship game. One thing that’s distressed me, however, is related
to one of my eccentric pet peeves: unwarranted and misplaced accuracy.
Throughout this tournament,
the media has highlighted the lack of goals scored in the games. This isn’t
surprising because the US has scored a total of just 7 goals through its first 5
winning games. So much for being an offensive powerhouse (so far). The latest example
is the US 1-0 victory over China on June 26.
However, media stories about
this lack of scoring have routinely strained my trust. A piece in The Guardian
said, “The average goals per game at the end of the group stage was 2.97…”
A New York Times article stated, “The tournament was averaging 3.08 goals a game entering Monday (June 15)…
The first-round scoring average was only 2.24 goals a game entering Monday. The
2011 event finished with an average of 2.69 goals a game.”
Let’s take a minute of
stoppage time to think about these stated averages. Soccer goals only come in
whole numbers, e.g., 1, 2 (or 10 if you’re Germany). Thus there are only 1 or at
most 2 significant digits in a soccer team’s goal total for a game. To display
the average number of goals in hundredths of a goal with 3 digits (e.g., 3.08)
is absurd and actually no more accurate than stating the average in tenths of a goal
(e.g., 3.1). Come on: can Abby Wambach score a hundredth of a goal? Never; so
why add 2 decimal places to this average? It’s done so these averages appear to
be more precise and accurate than simply saying “about 3” or “3.1.” But they’re
not more accurate. With only 1 or at most 2 significant digits as inputs, the
quotient cannot meaningfully carry more than 1 or 2 significant digits either. This
escalation of false “precision” is readily available to anyone, including a
reporter, who has a calculator on their smartphone, which happily shows
1.66666667 as the result of dividing 5 by 3.
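For anyone who wants to see the arithmetic, here is a minimal sketch in Python (the goal totals are made up purely for illustration, not actual tournament data) of reporting an average with no more significant digits than its whole-number inputs can support.

```python
import math

# Illustrative only: these goal totals are hypothetical, not actual World Cup data.
goals_per_game = [1, 0, 2, 3, 1, 0, 2, 10, 1]  # goals come only in whole numbers

total_goals = sum(goals_per_game)   # 20
games = len(goals_per_game)         # 9
raw_average = total_goals / games   # 2.2222222222222223 on any calculator

def round_to_sig_figs(x, sig_figs=2):
    """Round x to a given number of significant figures."""
    if x == 0:
        return 0.0
    return round(x, -int(math.floor(math.log10(abs(x)))) + (sig_figs - 1))

# The whole-number inputs carry at most 2 significant digits,
# so the reported average shouldn't pretend to carry more.
print(f"Raw average:      {raw_average}")
print(f"Reported average: {round_to_sig_figs(raw_average, 2)}")  # 2.2, not 2.2222222222222223
```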
Will this trend continue, so that the average number of goals scored in the 2016
Olympic soccer tournament is presented with 3 decimal places, say 3.141? I hope
not. False precision is more widespread than soccer reporting alone. But the next time you read
any news article that offers numerical results, see if the number of decimals
printed exceeds the number of significant digits in the underlying data. My bet is
that the result’s decimals will exceed the relevant input digits, illustrating
unfounded accuracy. That’s bad numerical form.
Bridges to the Future. Second, precision
is an issue that forecasters constantly face. It’s the unstated underpinning
for the apocryphal quote, “if you have to forecast, forecast often.” The longer
the forecast period, the more likely the prediction is going to be erroneous.
We economists have difficulty accurately forecasting economic changes over the
next 12 months, let alone several years. Weather forecasters feel lucky when
they get their 5-day forecasts right.
In The Signal and the Noise, Nate Silver states that despite the
challenges, short-term weather forecasts are overall more reliable than other
types of predictions, including economic and financial forecasts. Yet complex environmental
models are somehow being used to predict impacts over the next 85 years. That’s
right, 85-year forecasts.
A recent New York Times story
cited an EPA report that provides long-term quantitative measures of the
consequences of not abating fossil-fuel consumption. The report states that by
2100 (85 years in the future), there would be 12,000 deaths from extreme heat and cold, and 720 to 2,200 bridges would
become structurally vulnerable, costing $1.1 billion (B) to $1.6B to fix.
Forecasting deaths, bridge collapses and costs 85 years in the future? Really?
As many people do, I believe global climate change (né global warming)
has been underway for a considerable time, increasingly due to humanity’s
actions. These induced and detrimental effects need to be mitigated now by a
series of specific public policies, such as imposing a meaningful tax on all carbon
use.
Ironically, justification for implementing such policies does not
require forecasting at all. Environmental policies can be rationalized based on
measured changes in what’s already
happened to our environment, and continues to happen. Nevertheless, I find it
highly doubtful that predictions of the really long-term effects of climate change have
much quantitative credence. Yet I haven’t seen much discussion about these
far-out forecasts’ precision. I
consider these multi-decade predicted deaths and costs to be decent examples of
imaginary numbers.
Think about it: who in 1930
(85 years ago) could have foreseen key details of our present economy, such as
its size and the composition of our economic output, let alone the number of
bridges now in operation and how many of them need repair? For some
perspective, the US real GDP last year was more than 16 times as large as it was
in 1930. In 1930, 21% of our labor force worked on farms, about 10 times the
share that does now.
The
use of such far-out forecasts to substantiate needed environmental policies unfortunately
gives climate change deniers an opportunity to be listened to. I don’t
understand how any scientist – environmentalist, climatologist, economist or
engineer – can place much credibility in such truly long-term forecasts. And yet,
these far-out forecasts are presented and discussed as if they accurately
represent what will actually be happening nearly 100 years in the future. It’s
all quite puzzling, and unfortunate.