AI
AI will convince some people that Weber's bureaucratic ideal is now feasible. They're wrong.
What is intelligence?
‘Intelligence has been defined in many ways: the capacity for abstraction, logic, understanding, self-awareness, learning, emotional knowledge, reasoning, planning, creativity, critical thinking, and problem-solving. It can be described as the ability to perceive or infer information and to retain it as knowledge to be applied to adaptive behaviors within an environment or context.’
When we speak of smart or intelligent people, often young people, we refer to people who can pick things up quickly. They have a base of knowledge. They can recognize patterns and make connections. They can think logically and explain their rationale for concluding as they do. They can learn.
It's easy to see why people characterize Generative AI large language models as intelligent. They're trained on massive troves of data, much of it text. They learn patterns, essentially mapping out connections among the concepts they ingest. They can reassemble these concepts in ways that mimic the kind of text on which they were trained.
It is artificial intelligence in the sense that it is a contrived intelligence, created to help machines model the world and reason about it. It generates answers, opinions, software code, you name it, based on what it learned in training. It is not a human intelligence that learns from being in the world and benefits from millennia of accumulated lessons in how to impart knowledge, logic, reasoning, and critical thinking.
One skeptic of the claims of what AI will be capable of doing is Emily Bender, a professor of linguistics at the University of Washington. Linguistics is at the heart of the development of the large language models that form the foundation of contemporary AI applications.
‘According to Bender, we are being sold a lie: AI will not fulfil those promises, and nor will it kill us all, as others have warned. AI is, despite the hype, pretty bad at most tasks and even the best systems available today lack anything that could be called intelligence, she argues. Recent claims that models are developing a capacity to understand the world beyond the data they are trained on are nonsensical. We are “imagining a mind behind the text”, she says, but “the understanding is all on our end”.’
She has a charming term for AI.
‘Her thesis is that the whizzy chatbots and image-generation tools created by OpenAI and rivals Anthropic, Elon Musk’s xAI, Google and Meta are little more than “stochastic parrots”, a term that she coined in a 2021 paper. A stochastic parrot, she wrote, is a system “for haphazardly stitching together sequences of linguistic forms it has observed in its vast training data, according to probabilistic information about how they combine, but without any reference to meaning”.’
I suppose “stochastic parrot” sounded more intellectual than “bullshit artist.”
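Bender's definition is concrete enough to sketch in code. What follows is a toy illustration of the mechanism she describes, a simple n-gram Markov chain that stitches words together purely from co-occurrence statistics. This is not how production LLMs work (they use neural networks trained at vastly greater scale), and the corpus and function names here are my own invention for illustration:

```python
import random
from collections import defaultdict

def build_model(text, n=2):
    """Map each n-word prefix to the words observed to follow it."""
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - n):
        prefix = tuple(words[i:i + n])
        model[prefix].append(words[i + n])
    return model

def parrot(model, length=12, seed=0):
    """Emit words by sampling successors of the current prefix --
    probabilistic stitching with no reference to meaning."""
    rng = random.Random(seed)
    prefix = rng.choice(list(model))
    out = list(prefix)
    for _ in range(length):
        followers = model.get(tuple(out[-len(prefix):]))
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

# A tiny invented corpus; the output is fluent-looking word salad.
corpus = ("the parrot repeats the phrase and the parrot repeats "
          "the pattern it has observed in the phrase")
model = build_model(corpus)
print(parrot(model))
```

The output recombines fragments of the training text according to how often words followed one another, which is exactly the "haphazard stitching according to probabilistic information" Bender describes, just at a trivially small scale.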
When I read this, I thought, aren’t we all “stochastic parrots” at least part of the time? How many meetings have we endured or papers have we struggled to read to completion in which people are making things up to sound smart or saying things to demonstrate their alignment with the high-status consensus? How common is it for people to espouse opinions or state facts to signal their membership in a desired group?
This is a cruel generalization, of course. While such behavior is commonplace, it’s also true that there are many, many people who are excellent at what they do. Whatever their function, their performance is almost poetry for its sublime efficiency.
How many times have you been in a meeting where you have longed to hear what that one person thinks because you know her opinion will cut to the core of the matter?
The difference between these people and the stochastic parrots is judgment. It is what Bender refers to as imagining the “mind behind the text.”
According to Bender, we may have artificial intelligence but we will never have artificial judgment. But it’s also true, I would suggest, that many people we meet in everyday life lack judgment, especially younger, inexperienced individuals. The owl of Minerva flaps her wings at dusk; we only acquire judgment after making a ton of decisions, some good and some bad. We learn more from the mistakes than we do from the successes.
The bedrock of judgment and wisdom may be the recognition of the limits to our intelligence.
AI, in its silicon-machined hubris (the principal manifestation of which seems to be Sam Altman’s every other word), will never recognize these limits. The inability to acknowledge this imperfection explains the quest for (and the attendant fear and loathing of) Artificial General Intelligence.
We’ll never see Skynet, will we?
In this arrogant self-deception, AI may turn out to be more human than we think. We can already see its willingness to peddle hallucinated answers with the supreme confidence of some Chad wearing a Patagonia vest presenting the projected cash flows for a private equity portfolio company.
If you think that AI is all that the Altmans of the world tell you it is, then it is natural to extend this belief to the possibility of realizing the Weberian view of bureaucracy.
You would be wrong.
‘AI will be the consummation of bureaucracy as regime-type. The official, Weberian appeal of bureaucracy is that it takes discretion out of the hands of individuals, who may abuse it, and subjects decisions to procedures that will be fair and neutral. It depends on having a comprehensive representation of the field to be governed, so one can subject its various parts to a rational calculus. But the conceit that one has such a representation in hand is almost always a fiction, nicely illustrated by the effectiveness of “work to rule” strikes.
‘These are labor actions where workers don’t walk out, instead they agree among themselves to scrupulously follow all company procedures to a T. The result is that production grinds to a halt, as intended. It does so because the indispensable lubricant that keeps the system running consists of all the informal accommodations that workers make among themselves, the work-arounds and horse-trading. You need to let Larry stretch his cigarette break out as long as he likes, because Larry is the only guy who can keep that one lathe running true, the one that is the real bottleneck in production. That is because Larry knows the exact spot you need to shim the tail stock with a .002 feeler gauge. (He brings one in his pocket, and removes it at the end of his shift — Larry is wise.) But the rule-book says nobody should modify the equipment without submitting a request through the proper channels. OK, then. Good luck, assholes.
‘The point is that bureaucracies build their legitimacy on the idea that they have rendered the field of forces perfectly legible, and can therefore exert a perfect mastery over them. It ain’t so.
‘The world of AI will be a world in which we have gotten rid of all the Larrys. Good luck, assholes.’
AI may ultimately make our lives more brittle and unreasonable because of the arrogant misconception that it can imagine and map the entire field of possibilities in ways that humans cannot. The irony of a machine created by the applied mathematics of probability assuming that it can convert our incomplete understanding of the world into some deterministic machine we can engineer is not lost on me.
AI may encourage bureaucracy. Worse, it will likely entrench it, because it will crowd out the Larrys, with their years of experience and attendant judgment. AI will be no better at mapping out the entire dynamic field of possibilities than humans are. It will likely be much worse. I'll bet it won't take criticism well, either.
That’s the true threat of Skynet. It’s not an all-encompassing set of machine overlords who conquer humanity. It’s going to be much worse. It will be the imposition of a misanthropic mediocrity. And we’ll have no one to complain to.
My term for AI is plagiarism engines.
Copying is not reasoning.