Maybe it's time to bring back the Pentagon's terror prediction market. Remember that much-maligned experiment? Toward the end of Bill Clinton's administration and into the early days of George W. Bush's, the Defense Advanced Research Projects Agency (DARPA) began studying the feasibility of establishing an electronic market where participants would risk money on geopolitical trends, mostly in and around the Middle East. Given the success of prediction markets in other realms, intelligence analysts at DARPA were interested in whether such a market might prove a useful tool in their work. The program was called the Policy Analysis Market, and it was a very good idea.
Prediction markets work by allowing investors to trade securities that are priced to reflect other investors' beliefs about the likelihood that a particular event will occur. If you are holding the security when the event occurs, you "win." They are generally regarded as useful tools for aggregating decentralized information. Electoral prediction markets, according to their supporters, are more accurate than political polling. If the markets are as good as claimed, one theory for why they work so well is that participants rely less on their own preferences than on their knowledge of how friends and neighbors plan to vote. And with money at stake, they "bet" with their heads rather than their hearts.
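The pricing mechanism behind such markets can be sketched with the logarithmic market scoring rule (LMSR), which Robin Hanson, the economist involved in PAM's development, devised for exactly this kind of thinly traded market. The sketch below is illustrative, not PAM's actual implementation; the class name and the liquidity parameter `b` are assumptions for the example:

```python
import math

class LMSRMarket:
    """Minimal sketch of an LMSR automated market maker.

    Prices always lie between 0 and 1 and sum to 1 across outcomes,
    so each price can be read as the market's probability estimate.
    """

    def __init__(self, outcomes, b=100.0):
        self.b = b                          # liquidity: higher b = prices move less per trade
        self.q = {o: 0.0 for o in outcomes}  # shares outstanding per outcome

    def _cost(self):
        # Hanson's cost function: C(q) = b * log(sum_i exp(q_i / b))
        return self.b * math.log(sum(math.exp(v / self.b) for v in self.q.values()))

    def price(self, outcome):
        # Instantaneous price = softmax of outstanding shares
        denom = sum(math.exp(v / self.b) for v in self.q.values())
        return math.exp(self.q[outcome] / self.b) / denom

    def buy(self, outcome, shares):
        # A trader pays the change in the cost function; buying pushes the price up
        before = self._cost()
        self.q[outcome] += shares
        return self._cost() - before
```

The key property for an intelligence tool is that any trader with private knowledge can move the price toward their estimate, but only by putting money behind it, and the size of the move is bounded by the liquidity parameter.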
The Pentagon wondered quite reasonably whether such an approach would help predict the course of major world events. As originally designed (it never got out of the test phase), PAM would have covered eight countries, for each of which traders would price such parameters as U.S. financial involvement and political stability. They would also price such matters as military casualties, U.S. GDP and total casualties from terrorism. Combinations of these factors would essentially allow prediction of how a particular U.S. policy change would affect a given country. As DARPA explained at the time, "The rapid reaction of markets to knowledge held by only a few participants may provide an early warning system to avoid surprise."
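The "combinations of these factors" idea rests on a standard prediction-market construction: a conditional, or "called-off," contract that pays out only if some triggering event occurs and refunds the stake otherwise. The ratio of two market prices then reads off a conditional probability. The functions below are a hedged illustration of that arithmetic, not a reconstruction of PAM's actual contract design:

```python
def conditional_price(p_joint: float, p_condition: float) -> float:
    """Implied conditional forecast from two market prices:
    P(event | condition) = P(event and condition) / P(condition)."""
    return p_joint / p_condition

def settle(paid: float, condition_occurred: bool, event_occurred: bool,
           payout: float = 1.0) -> float:
    """Settle a called-off conditional contract: if the triggering
    condition never happens, the trader's stake is refunded and no
    forecast is scored."""
    if not condition_occurred:
        return paid                      # bet called off, stake returned
    return payout if event_occurred else 0.0
```

So if a contract on "instability rises and the U.S. intervenes" trades at 12 cents while "the U.S. intervenes" trades at 30 cents, the market's implied estimate of instability given intervention is 0.40, which is the kind of policy-contingent reading PAM's analysts would have consulted.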
Would the idea have worked? The only way to find out would have been to give it a try.
Which the government didn't. In July 2003, the project was unceremoniously dumped.
What went wrong?
The main problem was the "terror market" meme. DARPA made what was perhaps a tactical error. A report to Congress on what was then styled "FutureMAP" (for "Futures Markets Applied to Prediction") took as its example the prediction of a bioweapons attack on Israel. At once the parade of horribles began. Critics, from Capitol Hill downward, trumpeted their moral repugnance at the notion that investors might profit from acts of terror. Some went so far as to suggest that terrorists might use the markets to speculate, betting on targets accorded low value in the markets, then striking those targets to get a big payout.
The public relations battle was over before it started. An embarrassed Defense Department killed the project without ever giving it a try. PAM, one of several efforts beneath the FutureMAP umbrella, died along with it.
But the criticisms were overblown. No investor, whether terrorist or innocent, would have made a killing by guessing right. The economist Robin Hanson, who was involved in PAM's development, later pointed out that the returns paid to winners would have been only "a few tens of dollars." Moreover, Hanson noted, there is empirical reason to doubt that speculators can move prices very much on prediction markets.
It's easy to understand the repugnance at the idea that investors might "profit" from the activities of terrorists. But the profits, if any, would have been tiny, and predicting terror attacks wasn't what the project was intended to do. At best it would have indicated what traders thought would happen in different parts of the world given some triggering event such as an oil price shock or a U.S. military intervention. PAM would have been a tool for analysts, nothing more -- that is, a tool that they would lay alongside existing intelligence sources, and consider or disregard depending in large part on its track record.
Since the collapse of the official effort, private vendors have moved into the void. Barack Obama's administration has also tiptoed, gently and hesitantly, in the same direction. Current efforts aim mostly at predicting long-term geopolitical trends, and are publicly described that way. No one dares claim to be trying to guess where the next manmade horror will occur.
Some say that warning will eventually be provided via complex algorithms. But forecasting the future with any accuracy has so far proved beyond the reach of the data miners -- otherwise we'd all be rich (or poor). Among those who study terrorist attacks, the software showing the most progress so far is aimed not at figuring out where the bad guy will strike next but at determining which terror group is responsible for a given attack.
In other words, our available tools are good at telling us who committed yesterday's atrocity, not who will commit tomorrow's. Figuring out where and when a particular foe is likely to strike next has so far proved as big a challenge for the mathematicians as it has for the intelligence analysts. Would an active prediction market help? We won't know unless we try.
At the very minimum, we should accept that we made a mistake by abandoning FutureMAP and PAM. And someday soon an entrepreneur -- or a government agency -- is bound to propose a prediction market that will include among its tradeable securities the likelihood of a terror attack in a particular place. When that happens, we should swallow our understandable revulsion and, with proper safeguards, let the project go forward. Yes, it's repugnant that an investor might profit from an atrocity. But the tradeoff might be an ability to keep more people safe. And on that vitally important project, we need all the help we can get.