Cochrane’s Law
Regulating technology before we know what it will do? It has never worked for those who tried it. Maybe it will work for us.
John Cochrane wrote a wonderful piece in The Digitalist Papers. Its central thesis addresses whether we should regulate artificial intelligence preemptively. However, it is so rich with wisdom about regulation (and, by extension, bureaucracy) that I want to parse it in its entirety.
He summarizes the prevailing vibe of the public discourse this way: “AI poses a threat to democracy and society. It must be extensively regulated.” It’s as if he is proposing a resolution in a debate.
The central conclusion he reaches is that we should not regulate AI until we have experience of what the technology does and what potential costs it imposes.
The first argument he offers against the conventional wisdom is that the contemporary elite has never been able to forecast the effects of new technology on politics, economics, or society. Nor is the failure limited to technology; they are not especially adept at predicting dynamics of any kind. Even now, we cannot agree on the impact of shocks that took place hundreds of years ago.
“Have the chattering classes—us—speculating about the impact of new technology on economics, society, and politics, ever correctly envisioned the outcome? Over the centuries of innovation, from moveable type to Twitter (now X), from the steam engine to the airliner, from the farm to the factory to the office tower, from agriculture to manufacturing to services, from leeches and bleeding to cancer cures and birth control, from abacus to calculator to word processor to mainframe to internet to social media, nobody has ever foreseen the outcome, and especially the social and political consequences of new technology. Even with the benefit of long hindsight, do we have any historical consensus on how these and other past technological innovations affected the profound changes in society and government that we have seen in the last few centuries? Did the industrial revolution advance or hinder democracy?”
It’s not just technology. Our political and intellectual leaders have a dismal track record of predicting how events will evolve in response to all kinds of shocks. Who in World War I had a Russian communist totalitarian dictatorship on their bingo card? Who could have predicted that Donald Trump would hold such sway over the Republican Party and rise to the highest office in the land once (and possibly twice)? Malthus was wrong about overpopulation and mass starvation. Marx was wrong about what capitalism would do to the working class. China acted on revivified fears of overpopulation in the 1970s with its monstrous One Child policy.
These people don’t exactly have an inspiring track record.
He quotes the MIT economist Daron Acemoglu:
“We must remember that existing social and economic relations are exceedingly complex. When they are disrupted, all kinds of unforeseen consequences can follow…
“We urgently need to pay greater attention to how the next wave of disruptive innovation could affect our social, democratic, and civic institutions. Getting the most out of creative destruction requires a proper balance between pro-innovation public policies and democratic input. If we leave it to tech entrepreneurs to safeguard our institutions, we risk more destruction than we bargained for.”
The logic here seems to be: we live in a complex system that is impossible to predict; this thing (AI today) will have big, unforecastable consequences; therefore, we must try to predict those consequences, because only then can we put in place sensible policies that balance the benefits of this new tool against the costs we may incur from it.
This is a complete non sequitur. We must predict the thing we cannot predict.
By the way, the world is not standing still while we try to predict what will happen as we integrate AI into more and more of our daily lives. There are lots of other moving pieces. We are competing with other large geopolitical actors, not all of whom share our love of peace and freedom, for one. And we are told we are a dozen years away from the point of no return for climate hara-kiri.
This is not to dismiss the fact that AI will be disruptive. That is the only thing we can aver with certainty.
The second argument is that even as we potentially overstate the risks of deploying AI, we should not understate the risk of banning or delaying it.
“There are plenty of counterexamples—societies that, in excessive fear of such effects of new technologies, banned or delayed them, at great cost. The Chinese Treasure fleet is a classic story. In the 1400s, China had a new technology: fleets of ships, far larger than anything Europeans would have for centuries, traveling as far as Africa. Then, the emperors, foreseeing social and political change, “threats to their power from merchants,” (what we might call steps toward democracy) “banned oceangoing voyages in 1430.” (3) The Europeans moved in.
“Genetic modification was feared to produce “frankenfoods,” or uncontrollable biological problems. As a result of vague fears, Europe has essentially banned genetically modified foods, despite no scientific evidence of harm. GMO bans, including vitamin A-enhanced rice, which has saved the eyesight of millions, are tragically spreading to poorer countries. Most of Europe went on to ban hydraulic fracking. U.S. energy policy regulators didn’t have similar power to stop it, though they would have if they could. The U.S. led the world in carbon reduction, and Europe bought gas from Russia instead. Nuclear power was regulated to death in the 1970s over fears of small radiation exposures, greatly worsening today’s climate problem. The fear remains, and Germany has now turned off its nuclear power plants as well. In 2001, the Bush administration banned research on new embryonic stem cell lines. Who knows what we might have learned.”
Third, he points to “narrow-focus bias,” in which we ask about the dangers of AI in isolation. Instead, we should be asking about all of the large risks we face and ranking them by known potential impact. If we expanded the aperture of our vision, we would find many other dangers more worrying.
“We also suffer from narrow-focus bias. Once we ask “what are the dangers of AI?” a pleasant debate ensues. If we ask instead “what are the dangers to our economy, society, and democracy?” surely a conventional or nuclear major-power war, civil unrest, the unraveling of U.S. political institutions and norms, a high death-rate pandemic, crashing populations, environmental collapse, or just the consequences of an end to growth will light up the scoreboard ahead of vague dangers of AI. We have almost certainly just experienced the first global pandemic due to a human-engineered virus. It turns out that gain-of-function research was the one needing regulating. Manipulated viruses, not GMO corn, were the biological danger.”
Fourth, most regulation takes place only once we see the new technology in action, with a better comprehension of its costs and benefits. Making matters worse, centrally planned regulation is bedeviled by “limited information, unintended consequences, and capture.”
“Most regulation takes place as we gain experience with a technology and its side effects. Many new technologies, from industrial looms to automobiles to airplanes to nuclear power, have had dangerous side effects. They were addressed as they came out, and judging costs vs. benefits. There has always been time to learn, to improve, to mitigate, to correct, and where necessary to regulate, once a concrete understanding of the problems has emerged. Would a preemptive “safety” regulator looking at airplanes in 1910 have been able to produce that long experience-based improvement, writing the rule book governing the Boeing 737, without killing air travel in the process? AI will follow the same path.”
There are perfectly good regulations on the books that reflect this approach. Externalities do exist. There is such a thing as market failure. (Although not everything is a market failure, and not everything labeled as one meets the test.) Where we do find it, regulation has its place. To a point. But the regulator’s default perspective is to downplay and discount the upside while maintaining a maniacal focus on the downside. The benefits? That’s someone else’s part of the ship.
It should be an accepted truth by now, but he reiterates the case against the central planner because it needs to be repeated for those in the back.
“Scholars who study regulation abandoned the Econ 101 view a half-century ago. That pleasant normative view has almost no power to explain the laws and regulations that we observe. Public choice economics and history tell instead a story of limited information, unintended consequences, and capture. Planners never have the kind of information that prices convey. (4) Studying actual regulation in industries such as telephones, radios, airlines, and railroads, scholars such as Buchanan and Stigler found capture a much more explanatory narrative: industries use regulation to get protection from competition, and to stifle newcomers and innovators. (5) They offer political support and a revolving door in return. When telephones, airlines, radio and TV, and trucks were deregulated in the 1970s, we found that all the stories about consumer and social harm, safety, or “market failures” were wrong, but regulatory stifling of innovation and competition was very real. Already, Big Tech is using AI safety fear to try again to squash open source and startups, and defend profits accruing to their multibillion dollar investments in easily copiable software ideas. (6) Seventy-five years of copyright law to protect Mickey Mouse is not explainable by Econ 101 market failure.”
Fifth, the lessons of 2008 and 2020 are still fresh. Can we have faith in regulators’ ability to understand the problem and execute against it when they handled the financial system and the pandemic response so poorly?
The arguments for regulation often use the passive voice (“AI should be regulated”) without specifying who would do the regulating. There is a juvenile subtext suggesting the existence of a benign dictator with perfect knowledge who could pull off this masterstroke. Perhaps people were once willing to share that delusional assumption. The populism of the current day suggests this naïve faith has disintegrated.
Sixth, he notes the inextricable link between communication and liberty. AI is communication, at least when we think of generative AI text and image generation. Witness the recent case in California, in which a satirist used AI to create a video involving one of the presidential candidates. The state deemed it misinformation. The state lost.
“‘Regulating’ communication means censorship. Censorship is inherently political, and almost always serves to undermine social change and freedom. Our aspiring AI regulators are fresh off the scandals revealed in Murthy v. Missouri, in which the government used the threat of regulatory harassment to censor Facebook and X. (8) Much of the “misinformation,” especially regarding COVID-19 policy, turned out to be right. It was precisely the kind of out-of-the-box thinking, reconsidering of the scientific evidence, speaking truth to power, that we want in a vibrant democracy and a functioning public health apparatus, though it challenged verities propounded by those in power and, in their minds, threatened social stability and democracy itself. Do we really think that more regulation of “misinformation” would have sped sensible COVID-19 policies? Yes, uncensored communication can also be used by bad actors to spread bad ideas, but individual access to information, whether from shortwave radio, samizdat publications, text messages, Facebook, Instagram, and now AI, has always been a tool benefiting freedom.”
Call it Cochrane’s Law: the optimal regulatory policy is not to regulate a new technology until we have sufficient experience to understand its full impact.
The beauty of Cochrane’s essay is that it generalizes beyond just AI.