
When Smarter AI Isn't Worth It

Most teams are asking the wrong question about AI architecture. The question that matters is not “what is the most capable system we can build?” but “what is the simplest system that can do this job reliably?”

In production AI, complexity is not progress. It is a tradeoff. Every extra layer (reasoning loops, agent handoffs, retrieval tricks, orchestration frameworks) buys you something, but it also charges rent in latency, cost, debugging difficulty, and failure surface. Most teams are paying that rent where they should not.

Over the past year, I’ve been building multi-agent AI systems in a regulated banking environment and testing when structured reasoning actually outperforms simpler approaches. The pattern is sharper than most teams assume: for routine tasks, extra intelligence is wasted. For non-routine, high-stakes tasks, it can be worth every token.

Three questions decide most AI architecture choices:

  1. Is the output space small and stable?
  2. Are there real sequential dependencies?
  3. Is the cost of being wrong high?

The rest of this piece unpacks each one with data, production patterns, and a framework you can apply on Monday.

The Complexity Spectrum

AI system architecture exists on a spectrum. At one end, a single model call with a well-crafted prompt. At the other, a multi-agent orchestration system with specialized delegates, structured reasoning protocols, and iterative refinement loops. Both ends are valid, for different tasks.

The mistake is treating this as a capability ladder where more is always better. It isn’t. Every layer of complexity adds latency, cost, failure surface, and debugging difficulty.

THE COMPLEXITY SPECTRUM

Architecture     Latency     Cost     Typical workload
Single prompt    ~1s         $        Classification, extraction
RAG pipeline     ~1-3s       $$       Knowledge retrieval, Q&A systems
Agentic RAG      ~5-15s      $$$      Multi-step retrieval, tool use
Multi-agent      ~30-120s    $$$$     Structured deliberation, complex reasoning

Moving right: higher cost, slower, more capable. Moving left: lower cost, faster, fewer failure modes.

Each step right adds latency, cost, and failure surface. Only move right when the task demands it.

The spectrum isn’t a maturity model. Moving right isn’t progress, it’s a tradeoff. The most mature teams I’ve seen use multiple points on this spectrum simultaneously, routing different tasks to different architectures based on what each task actually requires.

Where Teams Overspend

The more common mistake, by far, is over-engineering. It’s not because teams are careless. The incentive structures actively push in this direction. Framework documentation showcases the most powerful patterns. Conference talks demonstrate the most impressive architectures. Model providers price by token, and the more capable the model, the higher the per-token rate, so the “best” option is always the most expensive one. The result is a systematic bias toward complexity that has to be actively resisted.

Frontier models for pattern matching

A team routes every incoming customer message through a frontier reasoning model (Opus, o3, Gemini Ultra) to classify intent into five categories: billing, support, sales, technical, and general. The model works beautifully. Accuracy is high. Latency is 2–3 seconds. Cost per classification is $0.01–0.03.

Why this happens: The team built the prototype with a frontier model because it was fastest to get working. The prototype became the production system. Nobody went back to ask whether a cheaper model could achieve the same accuracy, because the frontier model’s accuracy was already good enough and there was no forcing function to optimise. The cost felt small per-request. But at 100,000 classifications per day, the difference between $0.02 and $0.0004 per call is $590,000 per year.

The causal problem: This is pattern matching, not reasoning. The output space is small and stable: five categories that haven’t changed in six months. A fine-tuned Haiku-class model hits 95%+ accuracy at 1/50th the cost and 10x lower latency. A distilled classifier trained on the frontier model’s own outputs gets even cheaper. The frontier model’s reasoning capability, the thing you’re paying the premium for, isn’t being used. It’s like hiring a structural engineer to count bricks.
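A minimal sketch of that distillation path, assuming you have already logged a few thousand messages with the frontier model’s labels (the file name, column names, and example query below are illustrative):

```python
# Sketch: distil the frontier model's intent labels into a cheap local classifier.
# Assumes labeled_messages.csv has "text" and "intent" columns produced by logging
# the frontier model's own classifications (file name and columns are illustrative).
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

df = pd.read_csv("labeled_messages.csv")  # a few thousand frontier-labeled examples
X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["intent"], test_size=0.2, random_state=42, stratify=df["intent"]
)

# TF-IDF + logistic regression: milliseconds per prediction, fractions of a cent per call.
clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=2),
    LogisticRegression(max_iter=1000),
)
clf.fit(X_train, y_train)

# If held-out accuracy matches what the frontier model was giving you,
# the reasoning premium was never being used.
print(f"held-out accuracy: {accuracy_score(y_test, clf.predict(X_test)):.3f}")
print(clf.predict(["I was charged twice this month"]))  # expect something like ['billing']
```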

How to detect it: Look at your output space. If it’s one of N known categories, a structured extraction with a predictable schema, or a binary decision, you’re pattern matching. Classification, entity extraction, intent routing, sentiment analysis, content moderation: all pattern matching problems wearing language-model clothing. The model doesn’t need to generate novel text. It needs to recognise patterns it’s seen before.

Multi-agent orchestration for linear workflows

A planner agent analyzes the request, delegates to a researcher agent that gathers information, which hands off to a summarizer agent that produces the final output. Three LLM invocations, inter-agent context passing, error handling at each handoff. All for a task that’s actually sequential with no branching.

Why this happens: Agent frameworks make orchestration easy to set up. The developer experience is compelling: you define agents with names and system prompts, wire them together, and the framework handles the plumbing. But the framework abstracts away the cost of coordination. Each handoff between agents requires serializing context, making an LLM call to process it, and deserializing the output for the next agent. Error handling multiplies: if any agent in the chain fails, you need retry logic, fallback behaviour, and timeout handling at every boundary. A three-agent chain has three points of failure where a single prompt has one.

The causal problem: The tell is in the topology. If your agent graph is a straight line (A calls B calls C) with no point where an agent chooses between paths, you don’t have an orchestration problem. You have a pipeline that would be simpler, faster, and more reliable as a single structured prompt with well-defined output sections. The three-agent chain takes ~8 seconds and costs 3x as much as a single call that produces the same output in ~2 seconds.
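For comparison, here is what the collapsed version tends to look like: a single structured prompt with labeled output sections in place of three handoffs. The `complete` helper and the section names are placeholders, not any specific framework’s API.

```python
# Sketch: one structured prompt in place of a planner -> researcher -> summarizer chain.
# `complete` stands in for a single-call LLM client; the section names are illustrative.
def complete(prompt: str) -> str:
    raise NotImplementedError("wire this to your LLM client")

STRUCTURED_PROMPT = """You are preparing a briefing on the request below.
Work through these sections in order and label each one:

## Plan
List the 3-5 pieces of information needed to answer the request.

## Findings
For each item in the plan, state what the provided context says about it.

## Summary
Write the final answer, using only facts that appear under Findings.

Request: {request}
Context: {context}
"""

def briefing(request: str, context: str) -> str:
    # One call, one failure point; the intermediate Plan and Findings sections
    # remain inspectable in the single response instead of lost between agents.
    return complete(STRUCTURED_PROMPT.format(request=request, context=context))
```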

When orchestration earns its complexity: Tasks that genuinely fork. When the system explores multiple approaches in parallel, like a code analysis agent and a documentation retrieval agent operating simultaneously on different data. When different paths require different tools. When intermediate results change what happens next, like when the output of a classification step determines which specialist agent handles the rest. If none of these apply, you’re paying a coordination tax for an architecture pattern you saw in a conference demo.

LINEAR PIPELINE vs GENUINE ORCHESTRATION

Disguised pipeline: Planner → Researcher → Summarizer. Three agents, no branching, three LLM calls, ~8s, $$$. Use one structured prompt instead.

Genuine orchestration: a Router that branches into parallel paths (code analysis, doc retrieval) feeding a Synthesizer. Parallel paths, real decisions. Orchestration is justified.

If your agent graph is a straight line, it’s a pipeline pretending to be an orchestration

RAG with every technique stacked on

A team layers HyDE query expansion, query decomposition, parent-document retrieval, cross-encoder reranking, recursive retrieval, and agentic follow-up on top of each other for an internal knowledge bot that serves 200 policy documents to a team of 50 people.

Why this happens: RAG frameworks ship with these techniques as configuration options. Turning them on is a one-line change. The mental model becomes additive: “more techniques = better retrieval.” There’s no immediate feedback that the additional layers aren’t helping. Reranking adds 200ms of latency. HyDE adds another LLM call. Query decomposition adds another. Each one individually seems small. Together, they 5x your latency and your cost, and the retrieval quality on your 200-document corpus is identical to what you’d get with a clean chunking strategy and a decent embedding model.

The causal problem: Each retrieval technique solves a specific failure mode. Reranking helps when your top-k results contain relevant documents but in the wrong order. That matters with large corpora, not 200 documents. HyDE helps when user queries are short and ambiguous. That matters for consumer search, less so for internal users who know their domain vocabulary. Query decomposition helps when questions span multiple topics, which is rare for policy lookups. When you stack techniques without diagnosing which failure mode you’re solving, you’re adding latency and cost to fix problems you don’t have.

I’ve benchmarked these patterns. From my work on enterprise RAG evaluation, the cost and latency differences are stark:

RAG PATTERN COST-LATENCY TRADEOFF

Pattern           Latency    Cost     When justified
Naive RAG         ~1s        $        Clean corpus, simple queries
Hybrid search     ~1.5s      $        Mixed keyword + semantic needs
Parent document   ~1.5s      $        Context-dependent answers
Corrective RAG    ~3-5s      $$$      High-stakes, needs self-check
Agentic RAG       ~5-15s     $$$$     Multi-document reasoning

LLM generation is ~70% of total RAG cost. Optimise model selection first, retrieval second.

Benchmarks from enterprise RAG evaluation across banking document retrieval workloads

The jump from naive RAG to agentic RAG is 5–15x in latency and 4x+ in cost. That tradeoff is worth it when queries require multi-step reasoning across documents, like “compare the margin requirements across these three jurisdictions and flag conflicts.” It is not worth it for “what is our refund policy?”

How to detect it: Run your retrieval pipeline with and without each technique on the same evaluation set. If removing a technique doesn’t change your recall or answer quality, it’s adding latency without value. The most effective pattern I’ve seen is starting with the simplest retrieval that works and adding complexity only when evaluation shows specific failure modes. Hybrid search handles 80% of enterprise knowledge retrieval (observed in production). Everything above it needs a documented reason tied to a specific retrieval failure you measured.
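A sketch of that ablation, assuming a `retrieve` function that exposes each technique as a flag and a hand-labeled evaluation set (both are placeholders for your own pipeline); recall@k is one reasonable quality signal, not the only one.

```python
# Sketch: ablate retrieval techniques one at a time on the same evaluation set.
# `retrieve`, its flags, and the data below are placeholders for your own pipeline.
from typing import Callable

def retrieve(query: str, use_rerank: bool = False, use_hyde: bool = False) -> list[dict]:
    raise NotImplementedError("wire this to your retrieval pipeline")

# Hand-labeled evaluation set: which document ids actually answer each query.
EVAL_SET = [
    {"query": "what is our refund policy?", "relevant_ids": {"policy-017"}},
    # ... a few hundred of these, sampled from production queries
]

def recall_at_k(run: Callable[[str], list[dict]], k: int = 5) -> float:
    hits = 0
    for item in EVAL_SET:
        top_ids = {doc["id"] for doc in run(item["query"])[:k]}
        hits += bool(top_ids & item["relevant_ids"])
    return hits / len(EVAL_SET)

def run_ablation() -> None:
    configs = {
        "baseline": lambda q: retrieve(q),
        "+ rerank": lambda q: retrieve(q, use_rerank=True),
        "+ hyde": lambda q: retrieve(q, use_hyde=True),
        "+ rerank + hyde": lambda q: retrieve(q, use_rerank=True, use_hyde=True),
    }
    for name, run in configs.items():
        print(f"{name:>16}: recall@5 = {recall_at_k(run):.2f}")
    # If a technique doesn't move recall on your corpus, remove it and keep the latency.
```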

Where Teams Underspend

Under-engineering is less common but more dangerous, because the failure mode is invisible. The system appears to work. It produces outputs, they look reasonable, nobody complains. But the quality ceiling is real and the errors accumulate where you can’t see them. Over-engineering wastes money. Under-engineering wastes trust.

Single prompt for multi-step reasoning

A compliance team asks an LLM to read a 40-page contract, check it against regulatory requirements across three jurisdictions, flag risks, rank them by severity, and draft remediation recommendations. All in a single prompt.

Why this happens: The single-prompt approach is the default. It’s how most teams start, and it works well enough on demo inputs that nobody questions it. The output looks professional: headers, bullet points, risk ratings, recommendations. Stakeholders see structured output and assume structured reasoning produced it. The gap between “looks right” and “is right” is where under-engineering lives.

The causal problem: Later conclusions depend on earlier analysis in ways the model can’t reliably handle in a single pass. When you ask one prompt to both identify jurisdictional requirements and assess compliance against them, the model has to hold the regulatory framework in working memory while simultaneously evaluating specific clauses. In practice, it takes shortcuts. Did it actually check the indemnification clause against APAC requirements, or did it pattern-match a plausible-sounding concern? In a flat prompt, you can’t tell. Neither can the model.

The failure compounds silently. Each skipped reasoning step doesn’t produce an error. It produces a plausible output that happens to be wrong. Over hundreds of analyses, you build a body of work that looks thorough but has an unknown error rate. The cost of discovering this in an audit is orders of magnitude higher than the cost of structuring the reasoning correctly in the first place.

When to invest in multi-step: Tasks with genuine sequential dependencies, where the output of step 3 changes what step 5 should do. Not because multi-step is fancier, but because it makes each reasoning step auditable and prevents the model from taking shortcuts you can’t see. If you can’t point to a specific intermediate result and verify it independently, your pipeline isn’t doing what you think it’s doing.
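A sketch of what auditable intermediate steps look like for the contract example. The `complete` helper, step prompts, and field names are illustrative rather than a prescribed pipeline; the point is that each step’s output is stored and can be verified before the next step consumes it.

```python
# Sketch: split the contract review into steps whose outputs can be checked independently.
# `complete` stands in for your LLM client; prompts and field names are illustrative.
from dataclasses import dataclass

def complete(prompt: str) -> str:
    raise NotImplementedError("wire this to your LLM client")

@dataclass
class ReviewTrace:
    # Every intermediate artifact is kept, so an auditor can verify step 1
    # against the actual regulations without rerunning anything downstream.
    requirements: str = ""
    clause_findings: str = ""
    ranked_risks: str = ""

def review_contract(contract: str, jurisdictions: list[str]) -> ReviewTrace:
    trace = ReviewTrace()

    # Step 1: enumerate the requirements per jurisdiction; checkable on its own.
    trace.requirements = complete(
        f"List the regulatory requirements relevant to this contract for {', '.join(jurisdictions)}."
    )

    # Step 2: evaluate clauses against the enumerated requirements only; no silent shortcuts.
    trace.clause_findings = complete(
        f"Contract:\n{contract}\n\nRequirements:\n{trace.requirements}\n\n"
        "For each requirement, quote the clause that addresses it or state that none does."
    )

    # Step 3: rank risks using only the findings from step 2.
    trace.ranked_risks = complete(
        f"Findings:\n{trace.clause_findings}\n\n"
        "Rank the gaps by severity and draft remediation recommendations."
    )
    return trace
```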

One model for everything

A platform team deploys a single frontier model behind an API gateway for every use case in the organisation. Customer FAQ responses, regulatory document analysis, code generation, data classification. All hitting the same endpoint, same model, same parameters.

Why this happens: Simplicity and speed to market. One API key, one integration, one set of rate limits to manage. The platform team ships faster because they have one thing to support. The organisation adopts faster because there’s one endpoint to learn. The cost looks manageable in early adoption when volume is low. This is a reasonable starting point. The problem is when it becomes the permanent architecture.

The causal problem: A single model creates a two-sided cost problem that gets worse with scale. The simple queries are overpaying: a refund policy lookup consumes the same compute as a multi-jurisdictional risk analysis, even though it needs 1/50th of the reasoning capability. The complex queries are underserved. They’re competing for the same rate limits, the same context window configuration, and the same system prompt as FAQ lookups. At scale, you’re simultaneously wasting money on simple tasks and degrading quality on hard ones.

There’s a deeper architectural issue. Different task types have different optimal parameters. Temperature, max tokens, system prompt length, context window usage. What works for creative generation actively hurts classification accuracy, and vice versa. A single model deployment forces every task into the same parameter regime, which means every task is slightly misconfigured.

What to do about it: Model routing (sending simple queries to fast, cheap models and reserving heavyweight models for tasks that need them) isn’t premature optimisation. It’s the equivalent of having different database indexes for different query patterns. In experiments on identity-aware agent protocols, routing queries to right-sized models cut latency by a factor of 12 on straightforward tasks with no quality loss (measured). The routing decision itself is cheap: a small classifier, a keyword heuristic, or even a regex on the incoming query can partition traffic effectively. You don’t need a sophisticated routing layer. You just need to stop sending every query to the same place.
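A minimal sketch of that partitioning, under the assumption that a keyword heuristic is enough to start with; the model names, keywords, and thresholds are illustrative.

```python
# Sketch: route queries to a cheap model by default, escalating on simple signals.
# Model names, keywords, and thresholds are illustrative placeholders.
import re

HEAVY_SIGNALS = re.compile(
    r"\b(compare|across jurisdictions|regulatory|risk assessment|remediation)\b", re.I
)

def pick_model(query: str) -> str:
    # Long, multi-part, or compliance-flavoured queries go to the heavyweight model;
    # everything else gets the fast, cheap one.
    if len(query) > 500 or HEAVY_SIGNALS.search(query) or query.count("?") > 1:
        return "heavyweight-reasoning-model"
    return "small-fast-model"

assert pick_model("What is our refund policy?") == "small-fast-model"
assert pick_model("Compare the margin requirements across jurisdictions and flag conflicts.") \
    == "heavyweight-reasoning-model"
```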

The 62x Question

How do you know when complexity actually pays? I ran a study on exactly this question.

In research on Deliberative Collective Intelligence, I compared structured multi-agent deliberation against free-form debate across different task types. The results drew a line I didn’t expect to be so clean: the structured approach consumed roughly 62x the tokens of the simpler baselines.

That 62x number is the cost of structure. Typed reasoning moves, differentiated cognitive roles, shared workspace maintenance, convergence algorithms. All of it costs tokens. And on routine tasks, every one of those tokens is waste.

But on non-routine tasks, the kind that actually matter in enterprise settings where wrong answers have consequences, that 62x investment bought real quality gains that cheaper approaches couldn’t match. It also produced structured deliverables that no baseline could: decision packets with 100% completion (vs. ≤16% for baselines), minority reports in 98% of sessions, and explicit reopen conditions that made every decision auditable.

The real boundary is not hard versus easy. It is routine versus non-routine. A hard classification problem is still routine: it has a correct answer and complete information. A simple-sounding policy question may not be, if it requires weighing competing frameworks with incomplete information.

A Decision Framework

After running these experiments and building production systems across both ends of the spectrum, I’ve landed on three diagnostic questions that reliably sort tasks into the right architecture:

COMPLEXITY DECISION FRAMEWORK

  1. Does the task have a small, stable output space? Yes → use simple models: a fine-tuned small model, a classifier, or a rules-based system.
  2. If not: does it have sequential dependencies? No → a single prompt with a good model (standard RAG if retrieval is needed; run steps in parallel if they are independent).
  3. If it does: is the cost of being wrong high? No → chain-of-thought: a multi-step prompt or agentic RAG. Yes → multi-agent structured deliberation.

The 62x cost is worth it only in that last branch. Everywhere else, simpler is better.

Three questions that reliably sort tasks into the right architecture tier

Question 1: Does the task have a small, stable output space?

If the answer is one of N known categories, or a structured extraction with a predictable schema, you’re doing pattern matching. Use the cheapest model that achieves your accuracy target. This handles classification, entity extraction, intent routing, and structured data parsing.

Question 2: Does the task have sequential dependencies?

If later steps depend on earlier analysis, not just needing more context but changing what happens next based on intermediate reasoning, then multi-step approaches earn their cost. If the steps are independent, run them in parallel with simpler models.
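When the steps are genuinely independent, fanning them out is straightforward; a sketch with asyncio and a placeholder async LLM call:

```python
# Sketch: independent analysis steps run concurrently instead of as a chain.
# `acomplete` stands in for an async LLM client call (illustrative).
import asyncio

async def acomplete(prompt: str) -> str:
    raise NotImplementedError("wire this to your async LLM client")

async def analyse(document: str) -> dict[str, str]:
    # None of these steps needs another's output, so total latency is roughly
    # the slowest single call rather than the sum of all three.
    entities, tone, summary = await asyncio.gather(
        acomplete(f"Extract the named parties from:\n{document}"),
        acomplete(f"Classify the overall tone of:\n{document}"),
        acomplete(f"Summarise in three sentences:\n{document}"),
    )
    return {"entities": entities, "tone": tone, "summary": summary}

# asyncio.run(analyse(document_text))
```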

Question 3: Is the cost of being wrong high?

If a wrong answer carries real consequences (regulatory exposure, financial loss, an audit finding), structured, auditable reasoning earns its premium. If errors are cheap to catch and correct downstream, a chain-of-thought prompt or agentic RAG is usually enough.

When all three conditions apply (open-ended output space, sequential dependencies, and a high cost of errors), that’s where the 62x investment in structured reasoning is justified. Regulatory compliance review, multi-jurisdictional risk assessment, complex contract analysis. These are the tasks where cheaper approaches silently fail and nobody notices until the audit.

Putting It Into Practice

The practical implication is that most production AI systems should use multiple points on the complexity spectrum simultaneously. A single system might route FAQ queries to a small model, use standard RAG for document questions, and escalate multi-document analysis to a structured reasoning pipeline. Here is how to get there, concretely.

Step 1: Audit your task distribution

Before changing any architecture, sample 200–500 recent queries from your production logs. Classify each one along two dimensions: output complexity (pattern matching vs. open-ended generation) and reasoning depth (single-step vs. multi-step dependencies). Most teams discover that 60–80% of their traffic is pattern matching or single-step retrieval: tasks that don’t need their current architecture’s full capability.

This audit produces a concrete breakdown: X% of queries are classification/routing, Y% are knowledge retrieval, Z% require multi-step reasoning. That breakdown is your architecture spec. If 70% of your traffic is simple retrieval, your primary optimisation target is making that 70% cheap and fast, not making the 5% of complex queries slightly better.
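A sketch of that audit, assuming your gateway writes queries as JSON lines (the file name, field names, and crude labeling heuristics are illustrative; in practice you would label the sample by hand or with a cheap model):

```python
# Sketch: sample production queries and bucket them along the two audit dimensions.
# Log format, field names, and the crude heuristics below are illustrative.
import json
import random
from collections import Counter

with open("query_log.jsonl") as f:
    queries = [json.loads(line)["query"] for line in f]
sample = random.sample(queries, k=min(300, len(queries)))

def bucket(query: str) -> str:
    # In practice you'd label the sample by hand or with a cheap model;
    # these rules just show the shape of the output.
    if len(query.split()) < 8 and "?" not in query:
        return "pattern_matching"          # classification / routing style inputs
    if any(w in query.lower() for w in ("compare", "assess", "across", "and then")):
        return "multi_step_reasoning"
    return "single_step_retrieval"

counts = Counter(bucket(q) for q in sample)
for label, n in counts.most_common():
    print(f"{label:>22}: {100 * n / len(sample):.0f}%")
# That breakdown is the architecture spec: optimise the biggest bucket first.
```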

Step 2: Build a routing layer

Routing doesn’t need to be sophisticated. Three approaches, in order of increasing complexity: keyword or regex rules, a small trained classifier, and identity-aware routing that factors in who is asking and what they are asking for.

TIERED ROUTING ARCHITECTURE

Incoming queries hit a routing layer (rules, a classifier, or identity-aware routing), which splits traffic roughly as follows:

Tier     Share   Workload                                      Model                      Latency    Cost/call
Tier 1   ~70%    Pattern matching (classification, routing)    Haiku / distilled model    ~100ms     $0.0004
Tier 2   ~25%    Standard retrieval (knowledge Q&A, lookup)    Sonnet / mid-tier + RAG    ~1-3s      $0.005
Tier 3   ~5%     Multi-step reasoning (compliance, risk)       Opus / o3 + structured     ~10-60s    $0.10

Most traffic lands in Tier 1. Most value comes from Tier 3. Optimise both.

Route 70% of traffic to cheap models, reserve expensive reasoning for the 5% that needs it

Step 3: Right-size each tier

Once you have routing, optimise each tier independently: the cheapest model that hits your accuracy target for Tier 1, chunking and retrieval quality for Tier 2, and reasoning structure and auditability for Tier 3.

Step 4: Measure what matters at each tier

Different tiers need different metrics. Measuring everything the same way hides the problems that matter.
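One way to pin that down is a per-tier metrics contract. The tier names follow the routing table above; which metric belongs to which tier is my reading of the earlier sections (accuracy targets for pattern matching, retrieval recall for RAG, auditable completeness for structured reasoning), so treat it as a starting point rather than a standard.

```python
# Sketch: each tier is judged on its own failure mode, with its own budget.
# Tier names follow the routing table above; the metric choices are assumptions.
TIER_METRICS = {
    "tier1_pattern_matching": {
        "quality": "accuracy against a labeled hold-out set",
        "budget": {"p95_latency_ms": 200, "cost_per_call_usd": 0.0004},
    },
    "tier2_retrieval": {
        "quality": "recall@5 and answer groundedness on the evaluation set",
        "budget": {"p95_latency_ms": 3_000, "cost_per_call_usd": 0.005},
    },
    "tier3_structured_reasoning": {
        "quality": "decision-packet completeness and human review pass rate",
        "budget": {"p95_latency_ms": 60_000, "cost_per_call_usd": 0.10},
    },
}
```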

Step 5: Set up cost monitoring by tier

Tag every LLM call with its routing tier. Track cost-per-query, latency-per-query, and quality-per-query for each tier independently. That breakdown shows you where the money is actually going, and whether it lines up with where the value is.
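A sketch of the tagging side, assuming every call goes through one wrapper (the wrapper, log fields, and per-call cost argument are illustrative); once each record carries a tier, the per-tier rollup is a simple aggregation.

```python
# Sketch: tag every LLM call with its routing tier so cost and latency roll up per tier.
# The wrapper, log fields, and per-call cost argument are illustrative.
import time
from collections import defaultdict

CALL_LOG: list[dict] = []

def call_with_tier(tier: str, model: str, prompt: str, llm_call, cost_usd: float) -> str:
    start = time.monotonic()
    result = llm_call(model, prompt)
    CALL_LOG.append({
        "tier": tier,
        "model": model,
        "latency_s": round(time.monotonic() - start, 3),
        "cost_usd": cost_usd,
    })
    return result

def spend_by_tier() -> dict[str, float]:
    totals: dict[str, float] = defaultdict(float)
    for record in CALL_LOG:
        totals[record["tier"]] += record["cost_usd"]
    return dict(totals)

# spend_by_tier() -> e.g. {"tier1": 5000.0, "tier2": 10000.0, "tier3": 35000.0}
```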

$50K/MONTH: TWO VERY DIFFERENT SYSTEMS

                  Tier 1    Tier 2    Tier 3
Misallocated      $45k      $3k       $2k      (overspending on simple pattern matching)
Well-allocated    $5k       $10k      $35k     (investing where complex reasoning pays)

Same total spend. Radically different value. Per-tier monitoring tells you which one you are.

Aggregate dashboards hide the signal. Break cost reporting down by complexity tier.

The Principle

Match architectural complexity to task complexity.

Not to what the framework can do. Not to what the demo looked like. Not to what the most expensive model makes possible.

Complexity is justified when the task is non-routine, the reasoning is sequential, and the cost of being wrong is real. Everywhere else, simpler systems win twice: once in cost, and again in reliability.

In production AI, the smartest system is rarely the most complex one. It is the one that spends complexity only where complexity earns its keep.


Related


The experiments behind this piece are published on arXiv: Deliberative Collective Intelligence (DCI) for the reasoning cost data and the Lightweight Delegation Protocol (LDP) for the routing latency findings. The evaluation framework is open-source at enterprise-rag-bench.