Posts tagged with: prediction

At the beginning of human history, God gave mankind a mandate to “Be fruitful and increase in number; fill the earth” (Genesis 1:28). Sometime later—around the 19th century—people started wondering, “Is the earth close to being filled with humans?”

In 1798, Thomas Malthus predicted that if current birth rates persisted, many in Great Britain would starve to death. Instead, population growth was matched by increased agricultural yields, allowing more people to be fed from fewer land resources.

Despite Malthus’s failed predictions, others worried that population would eventually outgrow our resources. In 1838, the Belgian mathematician Pierre Verhulst calculated that his country could never support more than 9.4 million people. Verhulst was wrong; Belgium’s current population is more than 11 million.

In 1925, Raymond Pearl, head statistician for the U.S. Food Administration during World War I, calculated the maximum population limit of the U.S. to be 200 million. We reached that in 1968 and are currently at around 319 million. Pearl also predicted the world population limit would be 2 billion, a number that was surpassed in 1930.
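Both Verhulst and Pearl derived their ceilings from the logistic growth model, which assumes population rises quickly at first and then levels off at a fixed carrying capacity. A minimal sketch of that model is below; the parameters are illustrative only, not the historical fits, and the point is that the predicted "limit" is just whatever carrying capacity you fit to the data so far.

```python
import math

def logistic(t, p0, r, k):
    """Logistic population model: population at time t,
    given initial population p0, growth rate r, and
    carrying capacity k (the predicted 'maximum')."""
    return k / (1 + (k / p0 - 1) * math.exp(-r * t))

# Illustrative numbers (not Verhulst's actual fit): start at
# 4 million, grow at 3% per year, assume a 9.4 million ceiling.
for year in (0, 50, 100, 300):
    print(year, round(logistic(year, 4.0, 0.03, 9.4), 2))
```

The curve always flattens out at whatever `k` was assumed, which is why these forecasts kept failing: the carrying capacity is not observed, it is extrapolated, and changes in technology or behavior quietly move it.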

Other similar calculations and predictions followed—and they turned out to be just as faulty. Why do smart people get population limit predictions so wrong? As Adam Kucharski explains,

Blog author: jcarter
Tuesday, October 23, 2012

In March 2009 the deputy chief of Italy’s Civil Protection Department and six scientists who were members of a scientific advisory body to the Department held a meeting and then a press conference, during which they downplayed the possibility of an earthquake. Six days later an earthquake of magnitude 6.3 killed 308 people in L’Aquila, a city in central Italy. Yesterday, the seven men were convicted of manslaughter and sentenced to six years in prison for failing to give adequate warning to the residents about the deadly disaster.

The news reports imply that the scientists were sentenced because of their failure to predict the earthquake. But Roger Pielke, Jr., a professor of environmental studies at the University of Colorado, says “one interpretation of the Major Risks Committee’s statements is that they were not specifically about earthquakes at all, but instead were about which individuals the public should view as legitimate and authoritative and which they should not.”

Whether it was because of their predictions or because of the authority with which they made their claims, the scientists were sent to prison for making an erroneous prediction about how nature would act. Such a judicial ruling would strike most of us Americans as absurd. We would also rightly worry that it gives scientists an incentive not to make any predictions at all. As Thomas H. Jordan, a professor at the University of Southern California, says, “I’m afraid it’s going to teach scientists to keep their mouths shut.”

I like Robert Samuelson’s recent column about the difficulty (impossibility?) of accurately analyzing economic reality, let alone predicting its future. Over the past several months a few people, mistaking me for someone who knows a great deal about economics, have asked what I think about the financial crisis, the stock market, the recession, and so on. My response is usually along these lines: Anyone who pretends to completely know and understand the causes of the economic meltdown and/or how to “fix” it is either not very smart or is selling something (e.g., political schemes or financial advice).

It is a bitter pill for modern man (maybe contemporary Americans, especially) to swallow, but the fact is we can’t control the economy, even if we have “learned the lessons” of the Great Depression of the 1930s, the stagflation of the 1970s, and the tech bubble of the 1990s. And often enough our efforts to manage and control it aggravate whatever problem we’re trying to address.

Recognizing this truth can be depressing, or it can be freeing.

It’s a reminder to all who are even occasionally viewed, described, or invoked as “experts” to always offer our opinions with humility. It won’t stop us from pontificating, but it should prevent anyone from taking us too seriously.

To substantiate this claim about the ignorance of the experts, here is an enjoyable summary of the worst economic predictions of 2008, courtesy of Business Week. (My fave: “I think you’ll see [oil prices at] $150 a barrel by the end of the year” —T. Boone Pickens, June 20, 2008.)

Here’s to an equally unpredictable 2009.

I was reading about Bill Gates’ speech to the Northern Virginia Technology Council last week, which received a lot of media coverage (PDF transcript here).

In the speech about software innovation, Gates “speculated that some of the most important advances will come in the ways people interact with computers: speech-recognition technology, tablets that will recognize handwriting and touch-screen surfaces that will integrate a wide variety of information.”

“I don’t see anything that will stop the rapid advance,” Gates said. I appreciate the insight that a corporate mogul and business insider like Gates provides.

The predictions did make me think about this observation from Alasdair MacIntyre, however, which serves to temper some of the more audacious claims often made about technological progress.

MacIntyre writes,

Any invention, any discovery, which consists essentially in the elaboration of a radically new concept cannot be predicted, for a necessary part of the prediction is the present elaboration of the very concept whose discovery or invention was to take place only in the future. The notion of the prediction of radical conceptual innovation is itself conceptually incoherent.

To his credit, much of what Gates is describing doesn’t meet these criteria. The advances he envisions are not “radically new” concepts, but integrative alterations of already existing ones (some might argue that this has essentially been the modus operandi of Microsoft’s success: not innovation per se, but rather the innovative popularization of integration).

That said, we need to be cautious about the precision of our claims about future innovation. Statistically we can predict that radical innovations are quite likely to happen, but by definition we can’t know what they will be.