The Limits of Connectivity

AI · July 15, 2025 · 12 min read · Arjun Srivastava

Why connecting everyone might kill the very innovation it promises.

The Question

Here's something I used to take for granted: more communication is always better. More connection, more collaboration, more information flowing between more people. That's how you get the best outcomes. Right?

This post is about why that intuition is wrong, and when it's dangerously wrong.

The core problem is this: when everyone talks to everyone, entire groups of people collapse onto the same solution, and perfectly valid alternatives are never discovered. This is true of algorithms, companies, and civilizations. And as the world gets more hyperconnected, we are actively closing off pathways to innovation that isolation once naturally protected.

This post walks through that problem in four parts:

  1. Why groups converge on one solution (and why that's a problem)
  2. How to break the cycle (by deliberately cutting communication)
  3. How to fund the unknowns (the hardest management problem)
  4. How to lock in wins (where even the best companies fail)

Part 1: Why Groups Converge

Multiple Peaks

Consider the restaurant industry. Fine dining, fast food, and fast-casual are all billion-dollar industries. Each is a genuinely optimal model: a real peak built on fundamentally different tradeoffs of cost, speed, and quality. A Michelin-starred restaurant that served food in 90 seconds would gut its own value proposition. A McDonald's that slow-cooked every burger with high-end ingredients would go bankrupt by Tuesday.

This is what a multimodal landscape looks like. There are several right answers, each a genuine peak, each requiring completely different infrastructure, and each valid.

The interesting problem shows up when you zoom out. Inside any single restaurant, convergence is good: once you commit to fine dining, every hire and every decision should reinforce that peak. But at the portfolio level, when an investor, an industry, or a society needs all the viable peaks explored, that same convergence becomes dangerous. The whole system can lock onto one peak and never discover the others.

The Munger Egg

Charlie Munger captured this beautifully in his 1995 Harvard speech: the human mind works like the human egg. Once one sperm gets in, the egg shuts itself down so no others can follow. We lock onto the first good solution and stop looking.

The visualization below shows this in action. There are four equally good solutions (peaks) on the map: a multimodal landscape. A single agent walks uphill to the nearest one and stops. The other three are never discovered.

Interactive visualization loads below.
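Munger's Egg is easy to reproduce in code. Here is a minimal sketch, with a made-up four-peak landscape standing in for the visualization's map: a greedy hill-climber walks uphill from wherever it starts and stops at the first summit it reaches.

```python
import math

def landscape(x):
    """Four equally tall peaks at x = 1, 3, 5, 7: a multimodal landscape."""
    return sum(math.exp(-8 * (x - p) ** 2) for p in (1, 3, 5, 7))

def hill_climb(x, step=0.01, iters=5000):
    """Greedy local search: move to a neighbor only if it is strictly better."""
    for _ in range(iters):
        here = landscape(x)
        left, right = landscape(x - step), landscape(x + step)
        if here >= left and here >= right:
            break                      # local peak: the search stops for good
        x = x - step if left > right else x + step
    return x

# An agent starting at x = 2.8 climbs to the peak at 3 and never
# discovers the equally good peaks at 1, 5, and 7.
print(round(hill_climb(2.8), 2))  # → 3.0
```

Wherever the agent starts, it commits to the nearest basin; the other three peaks might as well not exist.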

The Power of Collaboration

So collaboration should fix this, right? If one person locks onto one peak, a team working together should do better.

And on a single peak, that's exactly right. In Differential Evolution, agents share mutation vectors — directed steps built from the difference between two peers. That creates much faster convergence than isolated hill-climbing. The visualization below shows both: connected agents and non-connected agents climbing the same peak. Watch how quickly the connected group reaches the top.

Interactive visualization loads below.
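A minimal, self-contained sketch of that sharing mechanism, using the classic DE/rand/1/bin scheme on a one-dimensional problem (the toy fitness function and constants are mine, not the post's visualization):

```python
import random

def de_generation(pop, f, F=0.8, CR=0.9):
    """One generation of DE/rand/1/bin: each agent's trial step is built
    from the difference between two peers, so information spreads fast."""
    nxt = []
    for i, x in enumerate(pop):
        a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
        mutant = a + F * (b - c)                     # peer-derived direction
        trial = mutant if random.random() < CR else x
        nxt.append(trial if f(trial) > f(x) else x)  # greedy selection
    return nxt

random.seed(0)
f = lambda x: -(x - 5) ** 2              # a single peak at x = 5
pop = [random.uniform(0, 10) for _ in range(10)]
for _ in range(80):
    pop = de_generation(pop, f)
print(round(max(pop, key=f), 1))         # the whole population piles onto the peak
```

The difference vector b - c is the collaboration: each agent's step is literally built out of where its peers stand, which is why the population converges so much faster than a lone hill-climber.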

This is collaboration at its best. When there's one right answer, connecting people accelerates everything. Teams discover and optimize solutions faster than any individual ever could.

The Mimetic Swarm

So if collaboration is so powerful, the natural instinct is: scale it up. If a connected team of ten converges on a peak faster, surely forty connected agents scattered across a complex landscape will find all the peaks?

Not even close.

This is René Girard's Mimetic Theory in action: we don't form desires independently. We imitate the desires of those around us. The moment one agent finds something slightly better than the rest, it broadcasts its position to the whole network, and everyone mimetically converges on that spot. The same mechanism that made collaboration so effective on a single peak becomes a trap on a landscape with many.

Try it below. Forty agents, all connected to each other, on a landscape with four equally good peaks. The moment one gets slightly ahead, the rest abandon their own exploration and pile on.

Interactive visualization loads below.
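If you'd rather see it in code, here is a toy version of the same dynamic (a hand-rolled model of mimesis, not the post's actual simulation): forty agents, each drifting toward the current global best with a little noise.

```python
import math, random

def landscape(x):
    """Four equally good peaks at x = 1, 3, 5, 7."""
    return sum(math.exp(-8 * (x - p) ** 2) for p in (1, 3, 5, 7))

random.seed(0)
agents = [random.uniform(0, 8) for _ in range(40)]

for _ in range(300):
    # every agent sees the global best and drifts toward it (mimesis)
    best = max(agents, key=landscape)
    agents = [x + 0.1 * (best - x) + random.gauss(0, 0.01) for x in agents]

# count how many of the four peaks ended up with any agent on them
found = {p for p in (1, 3, 5, 7) if any(abs(x - p) < 0.5 for x in agents)}
print(len(found))  # → 1
```

Four equally good peaks on the map, forty explorers, and the swarm discovers exactly one: the moment any agent pulls ahead, everyone else collapses onto its position.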

The problem is not communication or its absence. It is the architecture of communication: who talks to whom.


Part 2: Breaking the Mimetic Cycle

To discover all the peaks, you have to do something counterintuitive: cut the communication lines. But not all of them.

The insight is precise: cut communication between groups, not within them. A small team exploring a new idea needs intense, daily, free-wheeling internal debate to climb its peak effectively. What it doesn't need is a live feed to the rest of the organization telling it what "good" looks like.

Years ago, I encountered this exact problem and solved it mathematically in a research paper I published in college. At the time, I was just trying to solve an abstract optimization problem. But over the years, watching the same patterns repeat in tech companies, management theory, and geopolitics, I realized we had accidentally written the blueprint for how human organizations fall into groupthink, and how to structure them to actually innovate.

Safi Bahcall, in Loonshots, calls this Phase Separation: deliberately separating the "Artists" (people exploring wild, unproven ideas) from the "Soldiers" (people executing on a proven strategy).

Mahesh Balakrishnan, a systems researcher who led Skunkworks projects at Meta and Confluent, describes what this looks like in practice in his memo: inside the team, synchronous daily communication with no process and no external docs shaping the design. Outside the team, a hard boundary: minimize dependencies, control what crosses the wall, and critically, do not exit skunkworks mode prematurely. If the team integrates with the main organization before it has a shipped success to stand on, the incumbent culture drowns the new one.

In our algorithm, we formalized this with Nearest Better Clustering to detect natural groupings of agents, and a Temperature parameter to control how connected the groups are: a thermostat for inter-group communication.

Interactive visualization loads below.
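The clustering half can be sketched in a few lines. This is a deliberately simplified version of Nearest Better Clustering (1-D points, a fixed cutoff factor, invented test data), not the paper's implementation:

```python
import math

def nbc_cluster_count(points, f, phi=2.0):
    """Nearest-Better Clustering, simplified: link every point to its
    nearest strictly better point, cut links longer than phi x the mean
    link length, and count the connected components that remain."""
    edges = []
    for i, p in enumerate(points):
        better = [(abs(p - q), j) for j, q in enumerate(points) if f(q) > f(p)]
        if better:
            d, j = min(better)
            edges.append((d, i, j))
    mean_len = sum(d for d, _, _ in edges) / len(edges)

    parent = list(range(len(points)))          # union-find over short edges
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for d, i, j in edges:
        if d <= phi * mean_len:                # long links jump between basins
            parent[find(i)] = find(j)
    return len({find(i) for i in range(len(points))})

f = lambda x: sum(math.exp(-8 * (x - p) ** 2) for p in (1, 3, 5, 7))
pts = [0.9, 1.0, 1.1, 2.9, 3.0, 5.0, 5.1, 6.9, 7.0]
print(nbc_cluster_count(pts, f))  # → 4
```

Within a basin, nearest-better links are short; links that jump between basins are long and get cut, leaving roughly one cluster per peak. Those clusters are the natural groups whose inter-group communication the Temperature parameter then throttles.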

This is the key insight: the "right" amount of communication depends entirely on how many valid solutions exist. If there's truly one right answer, connect everyone and sprint. But if the problem is complex and has multiple valid approaches (which most interesting problems do), you need to actively enforce isolation between the groups exploring them.
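To make the thermostat concrete, here is the toy swarm model run twice, once fully connected and once split into four isolated groups. The dynamics and constants are invented for illustration, not taken from the paper:

```python
import math, random

def landscape(x):
    """Four equally good peaks at x = 1, 3, 5, 7."""
    return sum(math.exp(-8 * (x - p) ** 2) for p in (1, 3, 5, 7))

def peaks_found(groups, steps=300, lr=0.1):
    """Mimetic dynamics where each agent only sees its own group's best."""
    groups = [list(g) for g in groups]
    for _ in range(steps):
        for g in groups:
            best = max(g, key=landscape)
            for i, x in enumerate(g):
                g[i] = x + lr * (best - x) + random.gauss(0, 0.01)
    everyone = [x for g in groups for x in g]
    return {p for p in (1, 3, 5, 7) if any(abs(x - p) < 0.5 for x in everyone)}

random.seed(1)
agents = [random.uniform(0, 8) for _ in range(40)]
fully_connected = peaks_found([agents])                    # everyone sees everyone
separated = peaks_found([agents[i::4] for i in range(4)])  # four isolated teams

print(len(fully_connected), len(separated))
```

The fully connected run always collapses onto a single peak; the isolated groups typically end up holding two or more of the four peaks between them. Same agents, same landscape; only the communication architecture changed.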


Part 3: Funding the Unknown

Separating people into isolated groups creates a new problem, and it's the one that kills most organizations: how do you decide who gets the budget?

Imagine you're a CEO. You have one massive, proven Franchise generating reliable revenue. You also have three tiny Skunkworks teams, each exploring a completely different direction.

Here's the catch: you have no idea which Skunkworks team is sitting on a billion-dollar breakthrough and which is a dead end. Early on, they all look identical: small, unproven, and unprofitable.

If you allocate budget purely based on proven ROI, the Franchise wins every time. The Skunkworks teams starve, and you're right back to Munger's Egg: locked onto one solution, blind to the rest.

This is where our research made its core contribution. We realized this was a Multi-Armed Bandit problem: the classic dilemma of choosing between exploiting what you know works and exploring what might work better.

We used the Upper Confidence Bound (UCB) formula, which reframes the question. Instead of asking "who is performing best?", it asks: "who is most promising AND most unknown?"

Every team gets scored on two dimensions:

  1. Solid Reality ($f_{avg}$): How well is this team currently performing? This is their proven track record.
  2. The Fog of Potential ($\sqrt{2 \log N / N_i}$): How little do we know about this team? The fewer resources they've received, the bigger the fog: they could be hiding a breakthrough or a dead end, and you genuinely don't know.

The algorithm stacks the Fog on top of Solid Reality, and always funds whatever total bar is tallest. This means it actively funds unknown teams to buy information and burn away ignorance.
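Plugged into code, the scoring rule looks like this (the team returns and funding counts are hypothetical numbers for illustration):

```python
import math

def ucb_allocate(avg_reward, pulls, total):
    """Fund the team with the tallest bar: Solid Reality plus the Fog.

    avg_reward[i] = observed mean return of team i (proven track record)
    pulls[i]      = funding rounds team i has already received
    total         = total funding rounds made so far (N)
    """
    scores = [r + math.sqrt(2 * math.log(total) / n)
              for r, n in zip(avg_reward, pulls)]
    return scores.index(max(scores))

# One proven Franchise (high mean, heavily funded) vs. three unknown
# Skunkworks teams (modest means, barely funded).
avg = [0.9, 0.4, 0.3, 0.5]   # hypothetical observed returns
n   = [100, 2, 2, 2]         # funding rounds so far
total = sum(n)
print(ucb_allocate(avg, n, total))  # → 3: an unknown team wins the budget
```

The Fog term dominates for the barely-funded teams, so the algorithm buys information about them first. Once they've been funded enough for the Fog to burn off (say, 50 rounds each), the Franchise's proven returns win the budget again.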

Try playing the CEO below. Adjust your Uncertainty Tolerance (C) and watch how the algorithm balances the Franchise's reliable returns against the Skunkworks' fog.

Interactive visualization loads below.

Part 4: Locking In Wins (The Google Problem)

There's one final step. What happens when a Skunkworks team actually succeeds? When the Fog burns away and you realize you've found a Unicorn?

This is where even the best innovators fail. And the most instructive failure belongs to Google.

Google is arguably the greatest "Loonshot Nursery" in the history of Silicon Valley. Through Google X, DeepMind, and Google Brain, they are extraordinary at Phase Separation: creating isolated teams and funding Artists to explore. They've invented a staggering amount of world-changing technology.

But they have a fatal flaw: when a skunkworks project proves its worth, Google consistently fails to lock it in. Instead of transferring the breakthrough to the Soldiers who can scale it and defend it as a core franchise, the dominant peak absorbs it. Google invented the Transformer. But instead of letting it become its own peak, they folded it inside Search, their existing mountain. It took OpenAI, with no incumbent franchise to protect, to recognize it as the foundation for something entirely new.

This is the Franchise Transfer problem. Exploring the fog is only half the job. If you explore forever without locking in your wins, you end up doing R&D for your competitors.

In our algorithm, we solved this with Archiving. Once a tribe of agents reaches the top of a peak and stops improving, the algorithm marks them as "converged" and locks them in. It refuses to let them wander off or get reallocated. The win is preserved. And immediately, the freed-up budget goes to spawning new agents in unexplored parts of the map, restarting the cycle of discovery.

Interactive visualization loads below.
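In code, the archiving bookkeeping might look like this; the tribe records, the stall test, and the numbers are all invented for illustration:

```python
def archive_step(tribes, archive, eps=1e-3):
    """Freeze tribes that have stopped improving; free their budget.

    Each tribe carries its best-score history and a budget. A stalled
    tribe is moved to the archive so its peak is never lost, and its
    budget is returned for spawning new explorers elsewhere.
    """
    freed = 0
    still_active = []
    for t in tribes:
        hist = t["best_history"]
        if len(hist) >= 3 and hist[-1] - hist[-3] < eps:
            archive.append(t)        # lock in the win
            freed += t["budget"]     # reclaim resources for exploration
        else:
            still_active.append(t)
    return still_active, freed

tribes = [
    {"name": "tribe-A", "best_history": [0.95, 0.95, 0.95], "budget": 10},
    {"name": "tribe-B", "best_history": [0.20, 0.40, 0.60], "budget": 10},
]
archive = []
active, freed = archive_step(tribes, archive)
print([t["name"] for t in archive], freed)  # → ['tribe-A'] 10
```

Tribe A has stopped improving, so its peak is locked in and its budget is released; tribe B is still climbing and keeps its funding. The freed budget is what pays for the next round of exploration.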

The Shape of the Problem

The full lifecycle looks like this:

  1. Separate the explorers from the operators (Phase Separation)
  2. Fund the unknowns deliberately, buying information, not just returns (UCB)
  3. Lock in wins when you find them (Archiving / Franchise Transfer)
  4. Reinvest freed resources into new exploration

Each step has a real-world failure mode. Most companies never separate at all (Munger's Egg). The few that do often can't stomach funding uncertainty (starving the Skunkworks). The rare ones that fund exploration often can't lock in wins (the Google Problem). And almost none complete the full cycle consistently.

What We're Losing

This four-step algorithmic lifecycle doesn't just map to corporate tech labs. It is the exact mechanism by which human civilization scales.

For most of human history, geography did the phase separation for us. Oceans, mountains, and deserts cut the communication lines. Different cultures, facing the same fundamental problems of how to organize time, build trust, and make decisions, arrived at genuinely different solutions. Erin Meyer documents this in The Culture Map, mapping cultures along dimensions like Scheduling (linear-time vs. flexible-time) and Trust (task-based vs. relationship-based). They're different peaks on the same landscape, each internally coherent, each optimized for fundamentally different tradeoffs.

These peaks exist because the cultures that produced them developed in isolation. The distance between them is what allowed each to fully commit to its own logic without being pulled toward the other.

We see the exact same risk in technology. Every major lab right now is climbing the same mountain: Transformers, scaled up. While most of the industry is pouring billions into scaling standard Transformers, outliers like Sakana AI (exploring evolutionary techniques) or Yann LeCun with JEPA (exploring objective-driven AI) are deliberately climbing fundamentally different peaks. But if you dropped these labs inside DeepMind, they'd stop exploring. The dominant culture would pull them onto the existing mountain. The isolation is what makes the exploration possible.

As the world hyperconnects, as cultures blend, as companies centralize, as everyone benchmarks against the same "best practices," we are running the Mimetic Swarm experiment on civilization itself. The communication lines are being drawn between every node. And the math is clear about what happens next.

In a computer simulation, you can see the landscape. In the real world, the landscape is invisible. And that means the hardest part of this entire framework is having the courage to let some of your people explore in the dark. To fund them when they haven't found anything. To resist the pressure to pull everyone back onto the one mountain you can already see. Because the explorers won't always find something. But when they do, it changes everything.