Intelligence Deflation

I routinely find myself lost in thought - unable to escape a question. This question incessantly rings in the back of my mind, growing louder every time I begin to work, and louder still every time I crack open my laptop. The louder the ringing, the clearer the question: “Why do today what AI can do tomorrow?”

Why is this question so subtly sinister?

The improvements in Generative AI capabilities over the last few years have been nothing short of remarkable. Text generation models have gone from overly eager high schoolers to capable pair programming partners in the blink of an eye. Video generation models have nailed "Will Smith eating pasta" in less time than it took for Apple to upgrade from the iPod to the iPhone. This incredible, ever-accelerating pace of improvement points to only one possible outcome: AI will inevitably be better than humans at virtually every job requiring digital output.

In economics, inflation is a phenomenon that occurs when the purchasing power of money goes down over time. An over-simplified example: suppose one banana costs $2 today, and 6 months from now, the same type of banana will cost $3. Assuming the market for bananas has not experienced any change in supply or demand, the increase in the banana's price likely came from a decrease in the purchasing power of the dollars used to buy it.

Relatively low rates of inflation are typically good for the economy. Most rational investors would prefer to buy the banana when it costs $2 rather than $3, so they choose to buy the banana now instead of 6 months from now, injecting cash back into the market, which stimulates the economy. But what happens if the investors expect the banana to cost $1 in 6 months instead of $3?

This is known as deflation. It occurs when the purchasing power of money increases over time and occasionally has some nasty side effects... Now that the investors believe the banana will cost 50% less in 6 months, they hold their money out of the market. The money that was being spent on bananas simply sits on the sidelines - waiting for the discount they fully expect to come, slowing down everything else that requires the flow of money to operate.

This is happening right now - just not with bananas.

Our cognitive effort can sit on the sidelines just like the money earmarked for bananas. Let's play a game to demonstrate this:

Let's assume a knowledge worker has a set amount of cognitive load they can handle in any given day. We will call this “mental equity” and assign 100 points of mental equity to the knowledge worker.

Let's also assume more cognitively demanding tasks require more mental equity - in the same way that more complex projects typically require more funds.

Finally, let's assume that the worker wants to spend as little mental equity as possible in any given day.

Every time a new GenAI model is launched, it is lauded for its ability to handle more cognitively demanding tasks. And every time the new model surpasses its predecessor, it reinforces the idea that the next one will be better than the last.

At first this is innocent: a knowledge worker realizes they can save a few points of mental equity each day by off-loading low complexity, low cognitive load tasks.

Think of this as asking ChatGPT to organize your grocery list for you or using Claude to re-write your email to “sound more professional”. Our knowledge worker is saving 1-3 points of mental equity every day.

Soon the models get better, and the knowledge worker realizes they can off-load mildly complex tasks to the models to save even more mental equity points each day.

Think of this as in-line code edits to fix errors or asking ChatGPT to write out messages to clients for you. Now our knowledge worker is saving 5-10 points every day.

Eventually the models get even better. The knowledge worker is shocked by how much mental equity they are saving!

Think of this as developers relying on AI to build their products or entrepreneurs using ChatGPT to write their marketing plans. Each day our knowledge worker is offloading moderately cognitively demanding tasks and saves 15, 20, even 30 points in a day!

What do you believe the knowledge worker will expect from the next generation of models? You guessed it - pattern recognition has kicked in. It is natural for them to assume that the next models will be able to handle high cognitive load, high complexity tasks at some point soon. If our knowledge worker is anything like our rational banana investors, they will arrive at the same question I have: “Why do today what AI can do tomorrow?”

Let's add some other assumptions to our game. What if I told you that the knowledge workers of tomorrow have already had this pattern drilled into them? What if I told you that those saved mental equity points decay and atrophy if not spent?

Our only recourse is to break the assumptions of the game. Those of us who choose to act - to leverage models' capabilities to amplify our own - can set the pace for those who follow. We can accelerate human capital and pave the way for the creation of a better tomorrow.

We can instill a desire to "do" - a desire to utilize the mental equity allotted to us for good - instead of a desire to conserve it.

We can pivot our minds, lives, and labor to areas on the other side of the jagged edges of AI - areas that models have not yet conquered.

We can choose to be human. We can choose to dance, to feel, to love.

We can collaborate with AI on the frontiers of health, economics, peace, and governance.

Or we can simply do as much as we can today, living in the shadow of a crashing, yet-to-be-realized wave, yet living all the same.

I choose to work. I choose to crack open my laptop one more time.

I choose to do today, knowing that AI can do tomorrow.

What do you choose?

Acknowledgements: I’d like to give a shout out to Jessica, Adam, Brad, and Rachel for reading this over and offering wonderful ideas & suggestions. Your feedback means the world to me and I would not have posted this without your amazing advice.

Written by Ryan Hartman