How I Evaluate Technology Cycles

A framework built from watching infrastructure evolve across thirty years—from physical servers to autonomous agents.

The Pattern Recognition Problem

Every new technology arrives claiming to be fundamentally different. The vendors say "this time is different." The early adopters say "this changes everything." The analysts say "unprecedented disruption."

They're almost always wrong about that.

Not because the technology isn't new—it is. Not because it won't change things—it will. But because the patterns of adoption, the failures that emerge, and the governance problems that follow are remarkably consistent across cycles.

I've spent three decades watching these cycles repeat: physical servers to virtual machines, on-premises to SaaS, monoliths to microservices, datacenters to cloud, VMs to containers, managed services to serverless. Now: deterministic code to autonomous agents.

The infrastructure changes. The adoption curve doesn't.

* * *

The Framework

When I evaluate a new technology—whether I'm advising a client, designing an architecture, or building a platform—I ask the same questions. Not about the technology itself, but about where it sits in the cycle.

1. What abstraction is being sold?

Every technology cycle sells an abstraction. VMs abstracted hardware. Containers abstracted operating systems. Serverless abstracted servers. Cloud abstracted datacenters.

The abstraction is never the problem. The problem is what you lose when you adopt it—and whether you'll miss it when things break.

Ask: What am I giving up visibility into? What happens when I need that visibility back?

2. Where does optimization create lock-in?

The moment you optimize for a platform, you create coupling. Not at the layer being abstracted—at the layer above it.

Kubernetes abstracted the cloud provider, but optimizing for Kubernetes meant adopting CRDs, operators, and Helm charts that locked you into the orchestration model. Serverless abstracted servers, but optimizing for Lambda meant vendor-specific event models and IAM integration that made portability a rewrite.
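The Lambda case is easy to see in code. A minimal sketch (hypothetical handler, assuming the standard AWS S3 event envelope) shows how the vendor's event model leaks straight into application logic:

```python
# Hypothetical Lambda handler: the AWS-specific event shape is parsed
# inline, so the surrounding logic is coupled to the vendor's event model.
def handler(event, context):
    results = []
    for record in event["Records"]:              # AWS S3 event envelope
        bucket = record["s3"]["bucket"]["name"]  # vendor-shaped nesting
        key = record["s3"]["object"]["key"]
        results.append(process_object(bucket, key))
    return {"statusCode": 200, "processed": len(results)}

def process_object(bucket, key):
    # The portable core; everything above it must be rewritten to move.
    return f"{bucket}/{key}"
```

Porting this to another provider means rewriting the envelope parsing, the return contract, and the invocation model — the "rewrite" the portability promise glosses over.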

Ask: Where will I optimize? What coupling does that create? Can I afford that coupling if I need to move?

3. What governance problems are deferred, not solved?

New technology always defers governance to "later." Early adoption focuses on capability: can we do the thing? Governance comes after: should we have done it that way?

VMs deferred resource allocation governance until VM sprawl became unmanageable. Containers deferred security policy until production breaches forced runtime controls. Cloud deferred cost governance until bills exceeded budgets. Microservices deferred observability until distributed tracing became mandatory.

Ask: What governance problems does this technology assume someone else will solve? What happens when "later" arrives?

4. Who pays for portability?

Every platform promises portability. Few deliver it. Not because the technology can't be portable, but because portability has a cost—and most organizations discover that cost exceeds the risk.

Multi-cloud promised freedom from vendor lock-in. What it delivered was redundant toolchains, abstraction layers that broke under optimization, and infrastructure costs that exceeded single-cloud bills.

Ask: What does portability actually cost to maintain? Is that cost less than the risk I'm avoiding?

5. Where does value accrue—infrastructure or intelligence?

The winning platforms don't sell infrastructure. They sell managed intelligence.

AWS didn't win by selling cheaper VMs. It won by selling managed databases, auto-scaling groups, and intelligent routing. Kubernetes didn't win by being the first orchestrator. It won because Google and the cloud providers made orchestration invisible inside managed services.

Ask: Is this platform selling me infrastructure I manage, or intelligence that manages itself? Which do I actually need?

* * *

Applying the Framework to Autonomous Agents

Let's apply this to the current cycle: autonomous agents and AI orchestration.

What abstraction is being sold?
Agents abstract decision-making. You're giving up visibility into why a decision was made. When something breaks—or worse, when something works but shouldn't have—you'll need that visibility back. Governance must capture reasoning, not just actions.
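Capturing reasoning means recording the "why" at decision time, not reconstructing it later. A minimal sketch of what that could look like (a hypothetical `DecisionRecord`, not any specific framework's API; the policy name in the usage example is invented):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """Audit entry capturing why an agent acted, not just what it did."""
    agent: str
    action: str
    rationale: str   # the reasoning behind the decision, recorded at decision time
    inputs: dict
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

audit_log: list[DecisionRecord] = []

def record_decision(agent, action, rationale, **inputs):
    entry = DecisionRecord(agent, action, rationale, inputs)
    audit_log.append(entry)
    return entry

# Usage: the rationale survives alongside the action, so "why" is recoverable.
record_decision(
    "refund-agent", "approve_refund",
    rationale="order delayed past SLA; hypothetical policy R-12 permits auto-refund",
    order_id="A123", amount=49.00,
)
```

The design point is that rationale is a first-class field, written when the decision happens — once the action has executed, the reasoning is gone unless it was captured.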

Where does optimization create lock-in?
Optimizing for a specific agent framework (LangChain, AutoGPT, vendor platforms) creates coupling at the orchestration layer. You're locked into prompt patterns, tool integrations, and policy models. Moving means rewriting agent logic, not just redeploying code.

What governance problems are deferred?
Auditability, policy enforcement, and cost attribution are all deferred to "later." Early adoption focuses on "can the agent do the task?" The governance question—"should it have done it that way?"—comes after the first compliance audit, the first runaway cost event, or the first unintended consequence.
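Deferring these questions doesn't make them go away; they can be made explicit early. A minimal sketch of a pre-action policy gate with per-agent cost attribution (hypothetical names and limits, not any platform's API):

```python
# Hypothetical policy gate: every agent action passes a budget check
# before executing, and spend is attributed to the requesting agent.
class PolicyGate:
    def __init__(self, budgets):
        self.budgets = dict(budgets)   # agent -> remaining budget (USD)
        self.spend = {}                # agent -> attributed spend so far

    def authorize(self, agent, estimated_cost):
        remaining = self.budgets.get(agent, 0.0)
        if estimated_cost > remaining:
            return False               # the "later" question, answered up front
        self.budgets[agent] = remaining - estimated_cost
        self.spend[agent] = self.spend.get(agent, 0.0) + estimated_cost
        return True

gate = PolicyGate({"research-agent": 5.00})
gate.authorize("research-agent", 1.25)   # within budget: allowed
gate.authorize("research-agent", 4.00)   # exceeds remaining 3.75: denied
```

It's deliberately crude — the point is that cost attribution and enforcement are a dozen lines when built in from the start, and an organizational crisis when retrofitted after the first runaway cost event.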

Who pays for portability?
Agent portability across models, clouds, or frameworks costs more than single-platform lock-in. Maintaining abstraction layers, testing across providers, and operating redundant policy systems consumes budget faster than agent compute itself. Most organizations will choose strategic lock-in—the question is whether they're locking into infrastructure or intelligence.

Where does value accrue?
The platforms that win won't be the ones with the best agent execution engines. They'll be the ones that make governance invisible—automated policy enforcement, automatic audit trails, and intelligent cost controls that adapt without manual intervention.

* * *

Why This Matters

This framework isn't predictive. It doesn't tell you which technology will win or which vendor to bet on.

What it does is strip away the novelty and focus on the constants: abstraction, optimization, governance, portability, and value.

The technology changes. These questions don't.

If you're evaluating agent platforms, orchestration systems, or any infrastructure promising to "change everything"—ask these questions. Not about the features. About the patterns.

The answers won't tell you what to build. But they'll tell you what problems you'll face when you do.

And that's the pattern that repeats.
