The Machines Ate My Homework

By Niall Ferguson

Published Jan. 17, 2018

Are we living through the re-mystification of the world?

Much that goes on around us is baffling these days. Financial market movements, for example, seem increasingly mysterious. Why, after close to a decade of sustained recovery from the nadir of early 2009, did global stock markets sell off so sharply this month?

We who claim expertise in these matters can tell stories about what just happened, but the nasty feeling persists that we haven't a clue. Twelve weeks ago, I warned that "financial red lights" were flashing again. Was I prescient, or just lucky?

My argument was that, as central banks raised interest rates and wound down the asset purchasing programs known as quantitative easing, there was bound to be downward pressure on stock markets. I also argued that, for demographic and other reasons, the end of the prolonged bond bull market was nigh. I still like that story, as it's based on familiar patterns from financial history.

Yet the market gyrations of the past two weeks have elicited a host of more exotic explanations. Each stock market correction has its villains, the product or people whom everybody else can blame for their losses. This time around, there was a simple formulation: it was all the fault of "the machines" or "the algos" (short for algorithms).

Nobody doubts that computers play a far larger role in financial markets today than ever before. It seems reasonable to assume that automated transactions by index tracking funds, not to mention high-frequency trading by quant funds, tend to amplify market movements. Yet there is no need to invoke these novelties to explain the return of normal financial volatility. There is, to my mind, a superstitious quality to the phrase "It was the machines."

For most of human history, superstition was the dominant mode of human explanation. If the crops failed, it was the wrath of the gods. If a child died, it was the work of evil spirits, or witches.

The great German sociologist Max Weber argued that modernity was about the advance of rationalism and the retreat of mystery — what he called the "disenchantment of the world" ("Entzauberung"). People said goodbye to magic and entered an "iron cage" of rationality and bureaucracy. Weber borrowed the word Entzauberung from the poet and playwright Friedrich Schiller. I have always thought "de-mystification" a more precise, if clumsier, translation. The point is that this process may be reversible.

"The machines" are getting smarter every day. Computer scientists in the United States and China vie with one another to achieve breakthroughs in artificial intelligence that will not only make driverless vehicles the dominant transport system of the world, but also revolutionize almost every activity that currently depends on human pattern recognition.

Machine learning is already superior to human learning in numerous domains. The best human players of chess and the Chinese game Go no longer stand a chance against the computers of the pioneering British company DeepMind, which Google acquired in 2014.

As they try to understand the implications of the rapid advance of AI, people tend to think in terms of science fiction. The usual reference is to "2001: A Space Odyssey," the 1968 Stanley Kubrick film in which the "foolproof and incapable of error" computer HAL 9000 attempts to kill the entire crew of a spaceship.

But perhaps the right way to think of AI is historical — as a phenomenon that may return humanity to the old world of mystery and magic. As machine learning steadily replaces human judgment, we shall find ourselves as baffled by events as our pre-modern forefathers were. For we shall no more understand the workings of the machines than they understood the vagaries of nature. Already, many of us stand in the same relation to financial "flash crashes" as medieval peasants did to flash floods.

The point is that even the best software engineers in Silicon Valley no longer fully understand how their own algorithms work. At companies like Nvidia, engineers program self-driving cars to teach themselves how to drive. This "deep learning" goes deeper than our paltry human minds can fathom. How exactly is Deep Patient, a system developed at Mount Sinai Hospital in New York, able to predict which patients may succumb to schizophrenia? We don't really know. AI is no longer about getting computers to think like humans, only faster. It is about getting computers to think like a species that has evolved much bigger brains than ours — in other words, not like humans at all.

The question is: How shall we cope with this re-mystification of the world? Shall we begin to worship the machines — to propitiate them with prayers, or even sacrifices? Or shall we just lapse into fatalism? Perhaps we shall need to devise an AI equivalent of "Inshallah" — Insh-AI, perhaps.

Mankind — or peoplekind, as the Canadian prime minister Justin Trudeau has renamed us — stands on the threshold of a new era. I would like to believe that the sum of human happiness will be increased by deep learning. Perhaps it may. But I fear that the sum of human understanding may end up being reduced.

If the re-mystification of the world means a revival of magical thinking, then I'm staying put in Weber's iron cage.
