A Journal from the AI Frontier: The Amplifier

March 9, 2026 · 6 min read · By Carl Eidsgard

If you've been following this series, you might be feeling a bit deflated at this point. I've spent the last three entries essentially arguing that the AI revolution, as it was sold to us, isn't happening. That the money doesn't add up, the benchmarks have a ceiling, and the dream of fully autonomous agents falls apart the moment you do the math on chaining probabilistic systems together.

All of that is true. And I stand by every word of it.

But here's the thing: something real is happening. Something that I think is genuinely important, maybe even transformative, just not in the way that the hype machine has been describing it. And I think the reason most people are missing it is that they're looking for the wrong thing.

They're looking for automation. What they should be looking for is amplification.

Let me explain what I mean by that.

If you had told me in 2023, when the magic of this technology was at its peak, that this AI revolution would end up being closer in scale to what the internet did for us, I probably would have disagreed with you. At the time, it felt bigger than the internet. It felt like the discovery of fire, the kind of fundamental shift that changes what humans are capable of at the most basic level. Cooking leads to better nutrition leads to better intelligence leads to civilization. That scale.

Turns out the internet people were right. But they were also wrong, and the distinction matters.

The internet democratized access to information. Before it, if you wanted to know something, you had to go find it in a library, or know someone who knew, or be lucky enough to stumble into the right context. After it, information was just... there. And that changed everything. Not overnight, and not in the way the 90s dot-com hype predicted, but fundamentally and irreversibly over the course of two decades.

AI is doing something similar, but different. The internet made information available. AI makes information actionable. It takes the gap between "I can look this up" and "I can actually do something with it" and compresses it dramatically. That compression is what I call amplification, and the frame I use for thinking about it is range of motion.

Here's what I mean by range of motion: every person has a set of skills, a set of things they know how to do, that exists within a natural range. You're good at some things, okay at others, and hopeless at the rest. What AI does, when used correctly, is expand that range. It unlocks capabilities that are adjacent to what you already know, by giving you access to information and partially automating the underlying tasks that are hard.

Notice I said "adjacent." AI doesn't turn a baker into a quantum physicist. It turns a baker who has a good idea for a website into someone who can actually build a basic version of that website. It turns a software developer who needs to write a legal brief into someone who can produce a reasonable first draft. It turns a marketer who wants to analyze data but doesn't know Python into someone who can actually run the analysis.

The range expands. The motion increases. And that, at scale, is enormous.

We can already see this happening most clearly in software development. The process of writing code has been massively simplified by AI assistants. Boilerplate that used to take hours now takes minutes. Documentation that nobody wanted to write gets generated. Debugging happens faster. But here is the curious thing, and this is important: when you look at the traditional KPIs, developers using AI don't appear to be significantly more productive. The numbers don't really move.

And I think this is where the entire conversation about AI and productivity has gone off the rails.

Productivity, in economics, is defined as how much output you get for a given amount of input. In theory, AI should be the ultimate productivity enhancer: more output, less input. Simple. But the way we actually measure productivity in modern corporate environments is input hours on one side and total output on the other, and when you measure it this way, the output doesn't seem to go up with AI. In fact, because of what I'd call the slop effect (AI makes it easy to produce mediocre work on menial tasks, which dilutes overall quality), the picture might actually look worse.

So is AI a net negative for productivity? The mounting evidence would suggest so, at least by traditional metrics.

But I would propose that we are thinking about this entirely wrong.

What's actually happening with developers is not that they're producing more code. It's that they're spending less time on the tedious, low-value parts of their work, and more time on the parts that actually matter. The architecture decisions. The system design. The creative problem-solving. From the developer's perspective, their work has gotten materially better and more interesting. But because the KPI doesn't capture "quality of thinking time," it doesn't show up.
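The mismatch can be sketched as a toy calculation. All numbers below are invented purely for illustration: the point is that the classic output-per-hour metric is identical before and after, even though the composition of the week has changed completely.

```python
# Toy illustration (invented numbers, not real data): measured productivity
# can stay flat while the shape of a developer's week changes entirely.

def measured_productivity(output_units: float, input_hours: float) -> float:
    """The classic economic definition: output per unit of input."""
    return output_units / input_hours

# Before AI: 40-hour week dominated by tedious work; 8 features shipped.
before = {"tedious_hours": 30, "design_hours": 10, "features_shipped": 8}

# With AI: same 40 hours and same 8 features, but the tedium is compressed
# and the freed time goes to design and problem-solving.
after = {"tedious_hours": 10, "design_hours": 30, "features_shipped": 8}

for label, week in [("before", before), ("after", after)]:
    hours = week["tedious_hours"] + week["design_hours"]
    p = measured_productivity(week["features_shipped"], hours)
    print(label, p)  # identical in both cases: 0.2 features per hour
```

The KPI sees 0.2 features per hour in both worlds. What it cannot see is that design hours tripled, which is exactly the quality-of-thinking improvement the metric fails to capture.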

And then there's the other side of the coin: the people who couldn't code at all before. Founders who can now build an MVP. Designers who can prototype interactive experiences. Domain experts who can test their ideas without waiting six months for an engineering team. This was simply unthinkable three years ago. The range of motion has expanded dramatically for these people, and the value created is real, even if it doesn't fit neatly into a traditional productivity spreadsheet.

(A caveat here, and I feel strongly about this: if you are one of the vibecoders out there who take a prompt-what-I-want-and-ship-it approach, please, I am asking sincerely, do not put your code into production by yourself. That likely won't end well. Leave that to the professionals. AI has expanded the range of motion, but it has not eliminated the need for people who actually understand the domain.)

This brings me to what I think is the most important shift that AI is actually driving, and it's not the one being talked about at conferences. The real shift is toward generalists.

We are moving toward a world where generalists, supported by domain specialists who are themselves generalists in areas outside their own expertise, become the norm. Each domain can now be populated by more people, because the barrier to entry has dropped. A specialist still knows things an AI-augmented generalist doesn't. But the generalist can now operate meaningfully in domains that were previously locked behind years of specialized training.

This is true information democratization. Not just access to information, but access to capability. And if done correctly, I truly believe this could be a real stepping stone to building a better world, because that is something we seem to have forgotten how to do.

But (and there is always a but) it needs to be done correctly. It needs good execution. And good execution relies on a good plan.

Which is what the next entry is about.

