In computer programming, there is a technique called garbage collection.
“Garbage collection (GC) is a memory recovery feature built into programming languages such as C# and Java. A GC-enabled programming language includes one or more garbage collectors (GC engines) that automatically free up memory space that has been allocated to objects no longer needed by the program. The reclaimed memory space can then be used for future object allocations within that program.
“Garbage collection ensures that a program does not exceed its memory quota or reach a point that it can no longer function. It also frees up developers from having to manually manage a program's memory, which, in turn, reduces the potential for memory-related bugs.”
Before built-in garbage collection, developers managed memory themselves, writing code to explicitly free it when it was no longer needed. This introduces risk. The memory-management code they write might itself contain bugs. Developers might forget to free resources they no longer require, leading to performance issues. In the worst case, the program gradually consumes all available RAM with allocations it never releases. This is called a memory leak. Of course, manual memory management cuts both ways: one must also be careful not to free memory prematurely, lest the program later try to use something that has already been deleted.
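To make this concrete, here is a minimal sketch in Python, whose runtime includes a cyclic garbage collector. The `Node` class and all names are invented for the illustration; the point is that once the last external references are deleted, reference counting alone would leak the two objects, and the collector reclaims them automatically:

```python
import gc

class Node:
    """Two Nodes pointing at each other form a reference cycle."""
    def __init__(self):
        self.partner = None

gc.disable()   # pause automatic collection so the effect is observable
gc.collect()   # sweep away any pre-existing garbage first

a, b = Node(), Node()
a.partner = b
b.partner = a  # cycle: a -> b -> a

del a, b       # the names are gone, but the cycle keeps both objects
               # alive; reference counting alone would leak them here

collected = gc.collect()  # the cyclic collector finds and frees them
print(f"unreachable objects reclaimed: {collected}")
gc.enable()
```

In CPython the final `collect()` typically reports the two instances plus their attribute dictionaries as reclaimed. In a language without garbage collection, forgetting the equivalent of an explicit `free` at this point is exactly the leak described above.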
The purpose of a software program is to tell the hardware in a computer what to do. It translates our human-readable language into machine language, and it must account for every aspect of the machine, memory included. When computing resources were scarce, it was vital to ensure that our programs used the machine’s capabilities efficiently to obtain the best, fastest, most reliable performance.
In Pro Forma, we wrote about the fire-and-forget approach to regulation in modern institutions, citing the financial system as a central example. There is little to no reassessment of the efficacy or appropriateness of existing regulation. We just keep adding more. All of this takes place in a changing world, with new products, greater complexity, and larger scale. Yesterday’s regulation may not be relevant to today’s reality.
The natural tendency of bureaucracy is monotonic growth. As Elon Musk says, you need a referee, but at some point in these systems it seems as if the referees outnumber the players.
We need the equivalent of the computer programmer’s garbage collection for the bureaucracy in an organization. We suffer from a form of “resource leakage.” Regulation restricts or directs behavior to mitigate risks whose potential costs outweigh the benefits of leaving them unaddressed. We do so to improve outcomes for everyone in the organization. This is a noble end. However, we should understand that we impose distortions: people might not make the same choices in the absence of regulation. We impose compliance costs, and these costs may be opaque because we internalize them. At some point, the distortions and costs combine to slow our performance, or even to make the organization stop functioning altogether. We should monitor not only the individual impact of a specific rule but the systemic interaction of all of our regulations working together, to understand how they lead to sub-optimal outcomes. To the extent that we can, we need to remove the pieces that are not working, just as computer programs free up memory they no longer require.
Who regulates the regulators? Who tells the people and organizations that make and enforce the rules that what they are doing may not be effective in improving the overall picture?
Theoretically, this is the role of governance. But, as we have seen in the case of Disney, too often governance can become captive. The agent can subvert the principal-agent relationship.
How then should we control the monotonic spread of bureaucracy?
Perhaps the solution is competition.
To borrow another example from computer programming, consider the concept of a Generative Adversarial Network (GAN), a type of AI used in image generation. For example, we might want to generate a realistic image. In a GAN, there are two dueling models: a generator model and a discriminator model. The generator produces synthetic images and passes them to the discriminator, mixed in with real images from the training dataset. The discriminator then assesses the probability that each image it receives is real. With each iteration, the generator improves its approach until the discriminator can no longer identify made-up images as having been synthesized by the generator. The discriminator, too, improves its model to be more accurate in its predictions on every run. It’s spy vs. spy.
This is abstract, so here’s an example:
“Let's contextualize the above with an example of the GAN model in image-to-image translation.
“Consider that the input image is a human face that the GAN attempts to modify. For example, the attributes can be the shapes of eyes or ears. Let's say the generator changes the real images by adding sunglasses to them. The discriminator receives a set of images, some of real people with sunglasses and some generated images that were modified to include sunglasses.
“If the discriminator can differentiate between fake and real, the generator updates its parameters to generate even better fake images. If the generator produces images that fool the discriminator, the discriminator updates its parameters. Competition improves both networks until equilibrium is reached.”
Equilibrium in this example means that the machine has learned to produce realistic images.
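The dueling-models loop can be sketched in miniature. This is a toy illustration, not a real image GAN: the “data” are just numbers drawn from a target distribution, the generator and discriminator are one-line models, and all the constants are invented for the example. Still, the structure is the same: the discriminator learns to score realness, the generator learns to fool it, and the two ratchet each other toward equilibrium.

```python
import math
import random

random.seed(0)
REAL_MEAN = 4.0  # "real data" comes from N(4, 0.5)

def sigmoid(x):
    x = max(-60.0, min(60.0, x))  # clamp to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-x))

a, b = 1.0, 0.0  # generator: g(z) = a*z + b, starts far from the data
w, c = 0.0, 0.0  # discriminator: D(x) = sigmoid(w*x + c), P(x is real)

lr = 0.05
for step in range(3000):
    z = random.gauss(0, 1)
    real = random.gauss(REAL_MEAN, 0.5)
    fake = a * z + b

    # Discriminator step: logistic-regression ascent toward labeling
    # real samples 1 and generated samples 0.
    p_real, p_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * ((1 - p_real) * real - p_fake * fake)
    c += lr * ((1 - p_real) - p_fake)

    # Generator step: ascend log D(g(z)) (the non-saturating loss),
    # nudging its output toward whatever the discriminator calls real.
    p_fake = sigmoid(w * (a * z + b) + c)
    grad = (1 - p_fake) * w
    a += lr * grad * z
    b += lr * grad

fake_mean = sum(a * random.gauss(0, 1) + b for _ in range(500)) / 500
print(f"generator mean after training: {fake_mean:.2f} (target {REAL_MEAN})")
```

The generator starts out producing samples centered at 0; by the end of training its output mean has been dragged most of the way toward the real data’s mean — not because it ever saw the real data, but because the discriminator’s feedback told it what “real” looks like. That indirect pressure is the property the analogy borrows.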
How could we adapt this approach to bureaucracy in our organizations?
If the bureaucracy in our analogy is the generator model, producing rules intended to optimize the risk-adjusted performance of the organization, then we need a “contra-bureaucracy” to play the role of the discriminator model, giving the bureaucracy feedback on what is working and what is not. The bureaucracy refines its rules, and we obtain organizational equilibrium when the “contra-bureaucracy” runs out of critical suggestions. In our case, this would be a continuous process.
For it to work, the two organizations would need comparable influence. Neither could dominate the other. Disproportionate power would lead to too many rules or too few. The objective here is to strike the Golden Mean.
The best people to put into the contra-bureaucracy would be veterans of the generative bureaucracy. They know how the world works. They know where the bodies are buried. They have the experience to parse process from friction.
In the military, they talk about red teams: “a group that pretends to be an enemy, attempts a physical or digital intrusion against an organization at the direction of that organization, then reports back so that the organization can improve their defenses.” The contra-bureaucracy would be a dedicated red team for red tape.
It is an interesting question whether it is better to have an internal contra-bureaucracy or an external organization. The latter would be more adversarial in the GAN sense, and less vulnerable to being turned and made captive.
Perhaps companies full of contra-bureaucracies would be a good source of jobs for the elites that our universities continue to churn out. The existence of these organizations, in competing for talent, would weaken bureaucracies, in turn.
Is this possible or is this just a meaningless thought experiment? The best place to test this theory is in a small organization before the administrative overhead has metastasized throughout the organizational body beyond the point of easy reversibility. Perhaps this is something that companies in Silicon Valley should consider.
It’s in their DNA to think this way, after all.