Essay --- March 21, 2026 --- MurrietaLabs

Why small teams will win the AI era

In 1975, Fred Brooks published The Mythical Man-Month and gave us one of software engineering’s few reliable laws: adding people to a late project makes it later. The reason was mathematical. Communication channels in a team grow quadratically --- n(n-1)/2 --- while productive capacity grows linearly. A team of 5 has 10 communication channels. A team of 50 has 1,225.
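The channel formula is easy to verify with a few lines of Python (a quick illustration, not part of the original essay):

```python
def channels(n: int) -> int:
    """Pairwise communication channels in a team of n people: n(n-1)/2."""
    return n * (n - 1) // 2

print(channels(5))   # 10
print(channels(50))  # 1225
```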

That book is fifty years old. The industry has spent five decades trying to work around Brooks’ Law with process: Agile, Scrum, SAFe, squads, guilds, chapters, tribes. Every one of these methodologies is, at its core, an attempt to manage the communication overhead that grows faster than the team.

None of them solved it. They managed it. They made the quadratic curve slightly less punishing. But the fundamental math never changed.

Until now. Not because someone solved communication overhead, but because AI changed what a single person can produce. And when individual output increases dramatically while communication overhead stays the same, the math starts favoring very small teams in ways that would have seemed absurd ten years ago.

The new math

Let’s make this concrete. Say a developer without AI tools produces X units of output per week. With current AI tools, the same developer produces somewhere between 2X and 5X, depending on the task and the developer’s skill at leveraging the tools. Call it 3X for a conservative average.

Now compare two teams building the same product.

Team A: 3 people. Communication channels: 3. Total output: 9X. Overhead: minimal. Three people can talk without scheduling a meeting. Decisions happen in minutes, not sprint cycles.

Team B: 30 people. Communication channels: 435. Total output: 30 people x 3X = 90X. But that raw output is theoretical. In practice, a significant portion of every person’s time goes to communication overhead: standups, sprint planning, backlog grooming, architecture reviews, cross-team syncs, Slack threads, status updates, documentation for other teams.

Empirical studies suggest that in large engineering organizations, individual contributors spend 30-50% of their time on coordination activities. In a 30-person team, let’s say 35% of capacity is lost to overhead. That drops effective output from 90X to roughly 58X.

Team A, with negligible overhead, delivers about 9X. Team B delivers about 58X. Team B still produces more in absolute terms. But Team B costs ten times as much. Per person, Team A delivers 3X against Team B's roughly 1.9X --- output-per-dollar more than 50% higher for the small team.

AI leverage scales linearly. Communication overhead scales quadratically. As AI multiplies individual output, the math tilts further toward small teams with every improvement.

And the multiplier is getting bigger every year. When AI tools improve from 3X to 5X --- which seems likely within a few years --- the small team equation becomes even more extreme. A 3-person team at 5X produces 15X. A 30-person team at 5X produces 150X theoretical, minus 35% overhead, equals roughly 97X. The small team is still cheaper per unit of output, and the absolute gap in per-person output --- 5X versus about 3.25X, up from 3X versus about 1.9X --- widens with every improvement in the multiplier.
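The back-of-envelope model above can be sketched in a few lines. This is a minimal illustration, assuming the essay's numbers: a flat 35% coordination overhead for the 30-person team, zero for the 3-person team, and AI multipliers of 3X and 5X.

```python
def effective_output(team_size: int, ai_multiplier: float, overhead: float) -> float:
    """Raw output (team_size * multiplier) minus the fraction lost to coordination."""
    return team_size * ai_multiplier * (1 - overhead)

# The essay's scenario: a 3-person team with negligible overhead
# vs. a 30-person team losing 35% of capacity to coordination.
for mult in (3, 5):
    small = effective_output(3, mult, 0.0)
    large = effective_output(30, mult, 0.35)
    print(f"{mult}X tools: small team {small:.1f}X total ({small / 3:.2f}X/person), "
          f"large team {large:.1f}X total ({large / 30:.2f}X/person)")
```

Note what the model shows: with a fixed overhead percentage, the *ratio* of per-person output stays constant as the multiplier grows, but the absolute gap per person keeps widening.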

What overhead actually costs

The communication overhead in large teams isn’t just meetings. It’s something more insidious: the progressive loss of shared context.

In a 3-person team, everyone knows everything. They know why a decision was made last Tuesday. They know the customer complaint that drove the redesign. They know which parts of the codebase are fragile and why. This shared context means decisions are fast and well-informed. Nobody needs to write a document explaining the rationale behind a choice because everyone was in the room when the choice was made.

In a 30-person team, shared context is a fantasy. Nobody knows everything. Knowledge lives in silos. The backend team doesn’t know why the frontend was built this way. The infrastructure team doesn’t know about the product constraint that makes their proposed architecture impractical. Information travels through documents, meetings, and Slack messages, losing fidelity at every hop.

This loss of context creates a specific kind of waste: decisions made with incomplete information. The backend team designs an API that doesn’t account for a frontend requirement they didn’t know about. The infrastructure team provisions resources based on projections that product quietly revised two weeks ago. These aren’t failures of competence. They’re failures of communication, and they’re inevitable in large teams.

The cost of these failures isn’t just the rework. It’s the invisible cost of everyone spending time on context-sharing activities instead of building things. It’s the architect who spends three hours writing an architecture decision record that a small team would have communicated in a five-minute conversation. It’s the project manager whose entire job exists because the team is too large to self-coordinate.

In a small team with AI leverage, those roles and activities simply don’t exist. The overhead evaporates. Every hour is a building hour.

The counterargument and why it’s weakening

The obvious objection: “Small teams can’t build complex systems.” Historically, that’s been true. A 3-person team couldn’t build a banking platform, or an operating system, or an enterprise SaaS product. Complex systems required many specialists (database experts, security engineers, frontend developers, backend developers, DevOps engineers, QA engineers) and you needed enough of each to cover the work.

AI is eroding this objection from two directions.

First, AI dramatically expands what a single person can do competently. A strong backend developer with AI assistance can now produce reasonable frontend code, write infrastructure-as-code, generate test suites, and handle security configurations that would have previously required a specialist. Not specialist-level quality on every task, but “good enough for most contexts.” The number of specialists a project needs is shrinking.

Second, AI handles the routine work that used to justify dedicated roles. You don’t need a full-time QA engineer if AI can generate and maintain a test suite. You don’t need a full-time DevOps engineer if AI can manage deployment pipelines and infrastructure configs. The specialist is still needed for hard problems, but the hard problems don’t fill a full-time role on most projects.

This means a 3-person team where each person is a generalist augmented by AI can cover ground that used to require 10-15 specialists. Not because each person became an expert in everything, but because AI filled the gaps between their expertise and the project’s needs.

The old model was: one specialist per domain, coordinate across specialists. The new model is: one generalist per product area, AI fills the gaps, coordination is trivial.

The remaining objection is that some systems are genuinely too large for a small team, regardless of AI leverage. This is true but rarer than people assume. Most “large” software projects are large because of organizational complexity, not technical complexity. They have many teams because the organization has many departments that each need input, not because the software actually requires that many people to build.

When a large project is decomposed into its truly independent technical components, most of them could be built by a small team. The reason they aren’t is that the organizational structure requires cross-team dependencies, review processes, and coordination that inflate the headcount.

The studio model

There’s a historical precedent for what’s about to happen. Architecture went through a similar transition. For most of the twentieth century, large projects required large firms. Skidmore, Owings & Merrill had thousands of employees because that’s what it took to design and manage complex buildings.

But a parallel model always existed: the small studio. A handful of architects with a strong point of view, outsourcing structural engineering and other specializations to consultants, maintaining creative control and moving fast. Peter Zumthor’s office, famously small, produced some of the most celebrated buildings of its era.

The difference was that studios could only take on certain kinds of projects. The large firms handled the large, complex, institutional projects that required huge teams.

AI is shifting that boundary. It’s expanding the range of projects that a small studio can handle. The kind of software that used to require a 50-person team for eighteen months can increasingly be built by a 5-person team in six months, with AI handling the work that previously required the other 45 people.

This doesn’t mean large firms disappear. It means the range of projects that justifies a large firm shrinks. And as that range shrinks, the advantages of the studio model --- speed, coherence, shared context, low overhead --- become available to a larger share of the market.

The companies that will thrive in this model share certain traits. They hire generalists who are strong in multiple domains, not specialists who are world-class in one. They value judgment over velocity, because AI provides velocity and humans provide judgment. They keep teams small enough that everyone shares context naturally, without process.

What changes and what doesn’t

Some things about software development don’t change with team size. You still need to understand users. You still need to make trade-offs. You still need to maintain systems over time. You still need to deal with ambiguity, changing requirements, and the gap between what stakeholders ask for and what they need.

What changes is the overhead required to coordinate those activities. In a small team, coordination is a conversation. In a large team, coordination is a system --- with meetings, documents, tools, and dedicated roles to manage it. AI doesn’t change the need for coordination. It changes how much coordination is needed, by reducing how many people are involved.

The second thing that changes is the coherence of the output. Software built by a small team has a consistency that software built by a large team almost never achieves. It’s the same reason a novel written by one person reads differently than a novel written by committee. There’s a unified sensibility. Every part of the system reflects the same set of values and priorities, because the same few people made all the decisions.

This coherence is something users feel, even if they can’t articulate it. Products built by small, opinionated teams (Basecamp, Linear, the original Notion) have a quality that large-team products struggle to match. Everything fits together. Nothing feels bolted on.

The AI era doesn’t reward the biggest team. It rewards the team with the highest ratio of judgment to overhead. That ratio is structurally better in small teams, and AI is making the gap wider every year.

The last decade of software was about scale: scaling teams, scaling processes, scaling organizations to match the complexity of the software they built. The next decade will be about compression. Same output. Radically fewer people. Less coordination. More coherence.

The 50-person agency isn’t dead. But it’s competing, for the first time, with a 3-person studio that has the same productive capacity and none of the overhead. And in that competition, the math is not on the agency’s side.