In our quest for new rules of bureaucracy that can help us predict the future and recommend policy, it is important to ask cui bono? Who benefits? A corollary question I like to ask when I observe an event is why now?
There is no shortage of regulation coming out of Washington, DC these days. As they say in the Navy, if it moves, salute it, and if it doesn’t move, paint it. There is nothing that cannot and should not be regulated by this federal government, or so it would seem, either in the form of legislation or, more likely, executive orders acting on some presumed extension of the enumerated powers of the President.
The latest version of this is artificial intelligence. The Executive Order dated October 30, 2023, purports to be one for “Safe, Secure, and Trustworthy Artificial Intelligence” because who doesn’t like safety, security, and trustworthiness?
There are several dimensions to the burden imposed upon the AI companies:
· Ensure that proper safety testing is in place
· Strengthen privacy protections including on data used to train AI models
· Advance social equity and civil rights
· Make Americans better off
· Support unionization and workforce development
· Drive innovation and competition
· Drive responsible adoption globally
· Modernize federal AI usage
If that seems like a lot, it is. It’s a floor wax. It’s a dessert topping. It’s both.
In essence, the order strives to solve problems that may or may not yet exist.
One implication of addressing these issues by executive order is that the private sector, acting independently, would dedicate insufficient resources to resolving them. Put another way, the private sector would discount the significance of safety and security, preferring instead to plunge ahead without controls. A corollary implication is that AI is mature: the regulator deems it to be at a point in its development where innovation is incremental, not revolutionary.
We’re from the government and we’re here to help.
There are costs associated with compliance, displacing funds that could be used for product development or marketing. The tradeoff, one presumes, is that the benefits from the order more than offset those we might expect from foregone (or delayed) features. We assume away any distortion our intervention might impose on the evolutionary path this new technology takes. Progress is presumed to be inelastic, an inevitability, even in the presence of significant gates.
Who has the easiest time bearing these expenses? The large incumbent players with the deepest pockets. Who has the most difficult time? Smaller companies or enterprise players looking to build products for customized use cases.
It’s almost as if we would expect to see the winners-to-date out there lobbying for more regulation.
Here’s OpenAI CTO Mira Murati in the Wall Street Journal one week before the order:
“It’s not a single fix. You usually have to intervene everywhere, from the data to the model to the tools in the product, and, of course, policy. And then thinking about the entire regulatory and societal infrastructure that can keep up with these technologies that we’re building.
“So, when you think about what are sort of the concrete safety measures along the way, No. 1 is actually rolling out the technology and slowly making contact with reality; understanding how it affects certain use-cases and industries; and actually dealing with the implications of that. Whether it’s regulatory, copyright, whatever the impact is, actually absorbing that, and dealing with that, and moving on to more and more capabilities. I don’t think that building the technology in a lab, in a vacuum—without contact with the real world and with the friction that you see with reality—is a good way to actually deploy it safely.”
If you weren’t clear on her message, here’s a single-sentence statement from the Center for AI Safety, signed by OpenAI:
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
This is like using a sledgehammer to play piano.
All of this Sturm und Drang coincided with an OpenAI funding round. Here’s Crunchbase, writing at the same time:
“Per a report in The Information, Thrive Capital will lead a deal to buy the OpenAI shares at a price that will value the artificial intelligence giant at at least $80 billion.
“The deal was first reported late last month and makes OpenAI one of the most valuable private companies in the world.”
One of the most valuable private companies in the world is advocating for greater regulation of its own industry.
Stratechery’s Ben Thompson observes that the signatories of the Center for AI Safety statement all come from Big AI.
“What is striking about this tally is the extent to which the totals and prominence align to the relative companies’ current position in the market. OpenAI has the lead, at least in terms of consumer and developer mindshare, and the company is deriving real revenue from ChatGPT; Anthropic is second, and has signed deals with both Google and Amazon. Google has great products and an internal paralysis around shipping them for business model reasons; urging caution is very much in their interest. Microsoft is in the middle: it is making money from AI, but it doesn’t control its own models; Apple and Amazon are both waiting for the market to come to them.
“In this ultra-cynical analysis the biggest surprise is probably Meta: the company has its own models, but no one of prominence has signed. These models, though, have been gradually open-sourced: Meta is betting on distributed innovation to generate value that will best be captured via the consumer touchpoints the company controls.
“The point is this: if you accept the premise that regulation locks in incumbents, then it sure is notable that the early AI winners seem the most invested in generating alarm in Washington, D.C. about AI. This despite the fact that their concern is apparently not sufficiently high to, you know, stop their work. No, they are the responsible ones, the ones who care enough to call for regulation; all the better if concerns about imagined harms kneecap inevitable competitors.”
Thompson goes on to note that the huge increase in consumer welfare from the Internet came about in the absence of regulation. Section 230 of Title 47 of the United States Code shields technology companies from liability for content posted by third parties.
“At its core, Section 230(c)(1) provides immunity from liability for providers and users of an "interactive computer service" who publish information provided by third-party users:
“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
“Section 230(c)(2) further provides "Good Samaritan" protection from civil liability for operators of interactive computer services in the good faith removal or moderation of third-party material they deem "obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected."”
One of the key features of the last thirty-five years is that global innovation has taken place disproportionately in technology. It appears as if regulators underestimated the role technology would play and so, in some cases, they couldn’t be bothered with it, only to realize too late that technology had become too big and too central to regulate.
AI is the opportunity to reset that conversation.
The bureaucrats aren’t going to make the same mistake twice.
They’re going to make entirely new ones.
Contrast this with “The Techno-Optimist Manifesto,” published by one of the most sophisticated venture capitalists in the world, Marc Andreessen. Here he talks about regulation.
“We have enemies.
“Our enemies are not bad people – but rather bad ideas.
“Our present society has been subjected to a mass demoralization campaign for six decades – against technology and against life – under varying names like “existential risk”, “sustainability”, “ESG”, “Sustainable Development Goals”, “social responsibility”, “stakeholder capitalism”, “Precautionary Principle”, “trust and safety”, “tech ethics”, “risk management”, “de-growth”, “the limits of growth”.
“This demoralization campaign is based on bad ideas of the past – zombie ideas, many derived from Communism, disastrous then and now – that have refused to die.
“Our enemy is stagnation.
“Our enemy is anti-merit, anti-ambition, anti-striving, anti-achievement, anti-greatness.
“Our enemy is statism, authoritarianism, collectivism, central planning, socialism.
“Our enemy is bureaucracy, vetocracy, gerontocracy, blind deference to tradition.
“Our enemy is corruption, regulatory capture, monopolies, cartels.
“Our enemy is institutions that in their youth were vital and energetic and truth-seeking, but are now compromised and corroded and collapsing – blocking progress in increasingly desperate bids for continued relevance, frantically trying to justify their ongoing funding despite spiraling dysfunction and escalating ineptness.
“Our enemy is the ivory tower, the know-it-all credentialed expert worldview, indulging in abstract theories, luxury beliefs, social engineering, disconnected from the real world, delusional, unelected, and unaccountable – playing God with everyone else’s lives, with total insulation from the consequences.
“Our enemy is speech control and thought control – the increasing use, in plain sight, of George Orwell’s “1984” as an instruction manual.
“Our enemy is Thomas Sowell’s Unconstrained Vision, Alexander Kojeve’s Universal and Homogeneous State, Thomas More’s Utopia.
“Our enemy is the Precautionary Principle, which would have prevented virtually all progress since man first harnessed fire. The Precautionary Principle was invented to prevent the large-scale deployment of civilian nuclear power, perhaps the most catastrophic mistake in Western society in my lifetime. The Precautionary Principle continues to inflict enormous unnecessary suffering on our world today. It is deeply immoral, and we must jettison it with extreme prejudice.”
Those are fighting words.
Who does Andreessen represent? He speaks on behalf of startups and others in the Silicon Valley ecosystem writ large who seek to develop AI. The compliance costs and other restrictions on development mean that we are less likely to come up with innovative solutions to our problems.
Explosive bureaucratic intervention ossifies the industrial organization of AI as a category.
His companies are less likely to succeed.
He decries regulation aimed at solving problems that have yet to appear. The response is that by the time these problems do manifest, it will be too late. It doesn’t help that OpenAI CEO Sam Altman throws around the term “AGI” like he’s discussing the weather, AGI being Artificial General Intelligence, a notion suggestive of such fictional nightmares as the Terminator with its spooky rise of the machines.
Thompson refers to this blog post from Steven Sinofsky.
“Instead, this document is the work of aggregating policy inputs from an extended committee of interested constituencies while also navigating the law—literally what is it that can be done to throttle artificial intelligence legally without passing any new laws that might throttle artificial intelligence. There is no clear owner of this document. There is no leading science consensus or direction that we can discern. It is impossible to separate out the document from the process and approach used to “govern” AI innovation. Govern is quoted because it is the word used in the EO. This is so much less a document of what should be done with the potential of technology than it is a document pushing the limits of what can be done legally to slow innovation.
“You have to read this document starting from the assumption that AI needs to be regulated immediately and forcefully and do so without the accountability of the democratic process. It doesn’t really matter what view you have of AI from accelerate to exterminate, but knowing the process one just has to be concerned. Is AI truly such an immediate existential risk that the way to deal with it is to circumvent the democratic process?”
His is a stinging indictment of the rush to regulation, one that goes so far as to point out the absurd generality of the order’s definition of AI.
Ironically, the order itself hard-codes limits with tremendous specificity, such as a reporting threshold pegged to 10^26 training operations, which he likens to regulating spreadsheets in the early days of office applications on the assumption that a spreadsheet would never have more than 255 rows.
“As a technologist one immediately sees the absurdity of this section. Who has not worked on a system that had to be completely rearchitected because it contained hard-coded assumptions. Conversely, how many computer systems of the past got left behind because they presumed limitations that were outmoded by the time the system was in widespread use.”
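To make the hard-coded-assumption point concrete, here is a minimal sketch in Python, assuming a hypothetical reporting rule keyed to a fixed training-compute constant; the names and the gate itself are illustrative, not taken from the order or from Sinofsky’s post.

```python
# Hypothetical illustration: a compliance gate keyed to a hard-coded capability
# threshold, the software equivalent of assuming a spreadsheet will never need
# more than 255 rows.

REPORTING_THRESHOLD_OPS = 1e26  # frozen at the moment the rule was written

def requires_reporting(training_ops: float) -> bool:
    """Return True if a training run crosses the hard-coded threshold."""
    return training_ops >= REPORTING_THRESHOLD_OPS

# A frontier-scale run trips the gate today...
print(requires_reporting(2e26))  # True

# ...while a more efficient model that reaches similar capability with far less
# compute slips under the constant entirely; the baked-in assumption goes stale
# the same way the 255-row limit did.
print(requires_reporting(5e24))  # False
```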
Beyond pointing out how poorly developed this regulatory approach is, he answers the question of why.
“If I remain unconvincing that this Order is either the product of incumbent regulatory capture or at the very least enormously beneficial to incumbents, then the last portion of the Order makes it abundantly clear that the regulatory framework that results from those charged with developing it will be the exclusive province of only the largest companies. The Order names 29 executive branch Secretary level (or close) positions that will have oversight or contribute to regulating AI innovation, including an open-ended invitation to add more. As anyone that has worked with new technology companies knows, the efforts that go into simply being able to sell software in the US that meets the basic needs of compliance across merely SOC2, FISMA, and often HIPAA are immense. Few new companies can even cross this threshold. Only the largest existing companies will have the wherewithal to deal with 29 different executive departments. Only the largest existing companies can pick up a phone and call the Deputy Chief of Staff for Policy to begin to deal with the onslaught of regulation and tune it to their capabilities.
“This approach to regulation is not about innovation despite all the verbiage proclaiming it to be. This Order is about stifling innovation and turning the next platform over to incumbents in the US and far more likely new companies in other countries that did not see it as a priority to halt innovation before it even happens.”
He highlights the speculative, pre-cognitive nature of the approach:
“The entirety of Section 4 “Ensuring the Safety and Security of AI Technology” is really a “guilty until proven innocent” view of a technology. It is simply premature. People are racing ahead to regulate away potential problems and in doing so will succeed in stifling innovation before there is any real-world experience with the technology. This section is a result of the fantastical claims of the technology doomsayers having won the ear of the White House. These are exactly the types of advocates that did not win over the White House at the dawn of the Information Age.”
“Regulatory capture” is defined as follows:
“Regulatory capture is an economic theory that says regulatory agencies may come to be dominated by the industries or interests they are charged with regulating. The result is that an agency, charged with acting in the public interest, instead acts in ways that benefit incumbent firms in the industry it is supposed to be regulating.”
I’ll leave it to you to decide whether the current rush to regulate is evidence of regulatory capture.
What should be clear from this analysis is that the large incumbents sought to put into place the current interventionist regime. They’re rational actors with a tremendous amount of money. They see themselves as beneficiaries of this bureaucratic bonanza.
Bureaucracy favors the incumbents because it imposes costs on new entrants and smaller competitors that are borne more easily by larger, more successful players. Also, regulatory capture emerges when larger, better-funded entities can lobby to influence policy decisions in their favor.
Bureaucracy is a tax on growth, or perhaps a store of potential energy, like some sort of Strategic Innovation Reserve for future administrations to unleash in times of national need.