You can’t argue with Carl Bialik when he says,
apropos iPhone
sales, that “conducting a flawed survey can be worse than not conducting
a survey at all”. He’s talking about the method that Piper Jaffray analyst
Gene Munster used to estimate that Apple sold half a million iPhones in
its first weekend: standing in a flagship Apple Store, counting the number
of iPhones sold over a given period of time, and extrapolating.
Munster is suitably sheepish, now:
“We definitely overshot,” Mr. Munster said, adding: “The
part we’re definitely guilty of is building an estimate from three people
visiting three stores over a three-day period.” He projected those sales
to hundreds of other Apple stores and nearly 2,000 stores for AT&T, the
only carrier to offer the iPhone.
The three stores Munster and his colleagues visited? New York, San Francisco,
and Minneapolis, at the Mall of America. It never seems to have occurred to
Munster that Apple would ship more units to its flagship stores, especially
in media-heavy New York and San Francisco, than it would to the
Apple Store in Lyndhurst, Ohio, or to some random AT&T store in Nebraska.
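Run the arithmetic and the bias is obvious. Here’s a toy Python sketch of the extrapolation: every figure is invented for illustration except the store counts, which come from the article, and the weighting fractions in the second estimate are pure guesses. The point is just that a sales rate sampled at unrepresentative stores, multiplied across the whole universe of stores, overshoots by construction.

```python
# A toy version of Munster-style extrapolation. All figures invented,
# except the store counts quoted in the article.

FLAGSHIP_RATE = 20      # iPhones per hour counted at a flagship store (invented)
HOURS = 30              # selling hours across the launch weekend (invented)
APPLE_STORES = 200      # "hundreds of other Apple stores"
ATT_STORES = 2_000      # "nearly 2,000 stores for AT&T"

# The naive method: assume every store sells at the flagship rate.
naive = FLAGSHIP_RATE * HOURS * (APPLE_STORES + ATT_STORES)

# A sanity check: assume an ordinary Apple store moves a quarter of the
# flagship volume and an AT&T store a twentieth (both fractions invented).
weighted = FLAGSHIP_RATE * HOURS * (APPLE_STORES * 0.25 + ATT_STORES * 0.05)

print(f"naive extrapolation: {naive:,} iPhones")
print(f"weighted estimate:   {weighted:,.0f} iPhones")
print(f"overshoot:           {naive / weighted:.0f}x")
```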
For every Gene Munster who comes clean, however, there are dozens of highly paid
equity analysts who will never admit how sketchy their research reports really
are. And analysts aren’t the only ones guilty of massive oversimplification:
the risk controls at small US banks, I know, are often laughably simplistic.
(When trying to work out what would happen to their balance sheets in the event
of a sharp drop in interest rates, for instance, they’re liable simply to ignore
the fact that many of their customers with fixed-rate loans would refinance.)
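To see what that omission does to a stress test, here’s a hypothetical Python sketch: a book of fixed-rate loans run through a 200bp rate drop, once pretending nobody refinances and once assuming some share of borrowers reprice at the new rate. The loan book size, rates, and refinancing share are all invented numbers, not anything from a real bank’s model.

```python
# A toy interest-rate stress test illustrating the refinancing blind spot.
# All figures are hypothetical.

LOAN_BOOK = 100_000_000   # fixed-rate loans outstanding (invented)
FIXED_RATE = 0.07         # rate the book currently pays (invented)
RATE_DROP = 0.02          # the stress scenario: rates fall 200bp
REFI_SHARE = 0.40         # share of borrowers who refinance (invented)

# The simplistic model: fixed-rate income is "fixed", so nothing changes.
naive_income = LOAN_BOOK * FIXED_RATE

# With refinancing: a chunk of the book reprices to the lower market rate.
new_rate = FIXED_RATE - RATE_DROP
refi_income = (LOAN_BOOK * (1 - REFI_SHARE) * FIXED_RATE
               + LOAN_BOOK * REFI_SHARE * new_rate)

print(f"income, ignoring refis: ${naive_income:,.0f}")
print(f"income, with refis:     ${refi_income:,.0f}")
print(f"income overstated by:   ${naive_income - refi_income:,.0f}")
```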
This is one of the reasons why academic papers are invariably more interesting
than Wall Street reports. They show not only their conclusions, but also how
they reached those conclusions. If Wall Street had to do that, I fear it would
lose a great deal of credibility.