How organizations shape their agentic systems

Agentic AI doesn’t just mirror code—it mirrors your org. By Conway’s Law, the agents you ship will look like your teams and silos. Design the agent ecosystem you want first, then align teams around it.

Abstract

Agentic AI systems—AI that can plan, act, and coordinate tools to pursue goals—are moving from research labs into real organizations. These systems don’t live in a vacuum: they are embedded in companies, teams, and communication structures. Conway’s Law, originally formulated in 1967, states that “any organization that designs a system… will produce a design whose structure is a copy of the organization’s communication structure.” (Wikipedia)

This post explores how Conway’s Law applies to agentic systems. First, it reviews Conway’s Law and modern interpretations. Second, it defines agentic AI and describes how agent architectures mirror organizational structures. Third, it examines the “Inverse Conway Maneuver”—deliberately reorganizing teams to achieve a desired system architecture—and how this idea extends to AI agents. Finally, it offers practical design guidelines and highlights open research directions for socio-technical alignment between human organizations and agentic systems.


1. Introduction

Organizations are starting to deploy AI that does more than answer questions: it sets subgoals, chooses tools, executes actions, and adapts based on feedback. This family of systems—often described as agentic AI—includes:

  • single agents that orchestrate multiple tools,
  • swarms of specialized agents collaborating on a workflow, and
  • AI services embedded into existing business processes. (Google Cloud)

At the same time, software engineering has long recognized a persistent pattern: systems tend to look like the organizations that build them. This is Conway’s Law, which Melvin Conway formulated in 1967 and published in his 1968 paper “How Do Committees Invent?” (Mel Conway)

Agentic systems sit exactly at this intersection:

  • They are technical (models, tools, orchestration, APIs),
  • but they are also deeply organizational (teams own them, processes govern them, human workflows depend on them).

The central idea is:

Agentic systems will mirror the communication structures, incentive structures, and power structures of the organizations that build and operate them.

Understanding this mirroring is essential for designing agents that are robust, safe, and actually useful.


2. Background

2.1 Conway’s Law

Conway’s classic formulation is often quoted as:

“Any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization’s communication structure.” (martinfowler.com)

Key ideas:

  • Communication structure: who talks to whom, how often, and through which channels.
  • System structure: modules, services, interfaces, and the “shape” of the technical solution.
  • Homomorphism: Conway describes a structure-preserving mapping from the graph of the organization to the graph of the system being built. (Mel Conway)

Modern summaries emphasize the same intuition: the way teams are split, and how they collaborate, tends to be reflected in how the product is decomposed into components, services, or modules. (Learning Loop)
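To make the homomorphism concrete, here is a toy Python sketch (all team and component names are invented) that checks whether an ownership mapping from an org graph to a system graph preserves communication edges:

```python
# Toy check of Conway's homomorphism: every communication edge between
# teams should map to an interface edge between the components they own.
# All team and component names are hypothetical.

org_edges = {("frontend_team", "api_team"), ("api_team", "data_team")}
system_edges = {("web_ui", "public_api"), ("public_api", "warehouse")}

# The structure-preserving mapping: each team -> the component it owns.
owns = {
    "frontend_team": "web_ui",
    "api_team": "public_api",
    "data_team": "warehouse",
}

def is_homomorphism(org_edges, system_edges, owns) -> bool:
    """True iff every org communication edge maps onto a system edge."""
    return all((owns[a], owns[b]) in system_edges for a, b in org_edges)

print(is_homomorphism(org_edges, system_edges, owns))  # True
```

If the check fails for some pair of teams, Conway’s Law predicts friction: those teams communicate, but the system gives their components no clean interface for it (or vice versa).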

2.2 Interpretations and evidence

In contemporary practice, Conway’s Law is used as:

  • A warning: siloed teams → siloed systems → brittle integrations. (Splunk)
  • A design heuristic: if you want a modular architecture, create modular, loosely coupled teams. (martinfowler.com)
  • A strategic lever: the so-called Inverse Conway Maneuver, which recommends changing the organization to get the architecture you want. (Thoughtworks)

Empirical work in software engineering has repeatedly found correlations between team boundaries, communication patterns, and system modularity. While causality can run in both directions (architecture can also reshape the organization), the mirroring effect itself is widely accepted. (Wikipedia)

2.3 Agentic AI systems

Agentic AI is typically defined as AI that autonomously pursues goals through planning and action, rather than just responding to single prompts. (Google Cloud) Definitions emphasize:

  • Autonomy: The system can decide what to do next to progress towards a goal.
  • Tool use and orchestration: Agents call APIs, run code, query databases, or interact with other systems.
  • Planning and memory: Multi-step reasoning, maintaining state across steps.
  • Multi-agent coordination: Several specialized agents cooperating to solve a complex task.

Recent industry and media coverage describe a shift from “co-pilot” systems (assistive but user-driven) to “autopilot” agents that can execute end-to-end tasks under supervision. (AP News)

For this blog post, we’ll use:

Agentic system: A socio-technical system in which one or more AI agents can autonomously select actions, call tools, and coordinate with humans or other agents to achieve organizational goals.
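To ground this definition, here is a minimal sketch of an agent loop. Everything is a stub: call_model stands in for any LLM API, and search_tickets and file_report are invented tools, not any particular framework’s interface. The point is the shape: plan, act via tools, remember, repeat.

```python
# Minimal agent loop: plan, act via a tool, remember, repeat.
# call_model stands in for any LLM API; the tools are stubs.

def search_tickets(query: str) -> str:
    return f"3 open tickets matching {query!r}"       # stub tool

def file_report(summary: str) -> str:
    return f"report filed: {summary}"                 # stub tool

TOOLS = {"search_tickets": search_tickets, "file_report": file_report}

def call_model(goal: str, memory: list) -> dict:
    """Placeholder planner: search first, then summarize and stop."""
    if not memory:
        return {"tool": "search_tickets", "arg": goal, "done": False}
    return {"tool": "file_report", "arg": memory[-1], "done": True}

def run_agent(goal: str, max_steps: int = 5) -> list:
    memory = []                                       # state across steps
    for _ in range(max_steps):
        action = call_model(goal, memory)             # planning
        result = TOOLS[action["tool"]](action["arg"]) # tool use
        memory.append(result)                         # memory
        if action["done"]:
            break
    return memory

print(run_agent("billing errors"))
```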

3. Conway’s Law applied to agentic architectures

3.1 From microservices to “micro-agents”

Conway’s Law is most familiar from microservice architectures: one team per service → one service per team. (martinfowler.com)

Agentic systems often evolve similarly:

  • A Customer Support team builds a support agent.
  • A Sales team builds a lead-qualification agent.
  • A Finance team builds an invoice-reconciliation agent.

Before long, the organization has a constellation of agents, each reflecting the boundaries and priorities of their home team.

Result: The graph of agents and their allowed interactions often resembles the org chart:

  • Agents owned by the same team share data and tools easily.
  • Cross-team agent collaboration is rare or brittle, mirroring limited cross-team communication.
  • Integration points between agents follow the same “fault lines” as human communication.

This is Conway’s Law in agentic clothing.

3.2 Communication patterns → agent protocols

Conway’s Law focuses on communication. For agents, this shows up in:

  • APIs and schemas: How agents talk to each other is shaped by how teams align on data contracts.
  • Escalation & handoff patterns: If humans escalate issues along a chain (support → tier 2 → engineering), agents are often wired to do the same.
  • Governance flows: Approvals, audits, and security reviews appear as “agent-level” permissions, mirroring internal processes.

If two human teams rarely talk, the agents they own are unlikely to have clean, shared interfaces or shared ontologies. Misaligned concepts (like “customer,” “case,” or “ticket”) create the same failures in agent communication that they do in human collaboration.
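Here is a small sketch of how a missing shared ontology surfaces as an integration bug (all field names are invented): a support agent’s payload fails a naive handoff to a sales agent until an explicit, jointly owned contract translates between their vocabularies.

```python
# Two teams' agents describe the same customer differently. The naive
# handoff fails; an explicit, jointly owned field map fixes it.
# All field names are invented.

support_payload = {"case_id": "C-42", "requester_email": "a@example.com"}
sales_schema_fields = {"account_id", "contact_email"}

def naive_handoff(payload: dict) -> dict:
    missing = sales_schema_fields - set(payload)
    if missing:
        raise KeyError(f"handoff failed, missing fields: {sorted(missing)}")
    return payload

# The shared contract Conway's Law points at: agree on a translation.
FIELD_MAP = {"case_id": "account_id", "requester_email": "contact_email"}

def contracted_handoff(payload: dict) -> dict:
    return {FIELD_MAP[k]: v for k, v in payload.items() if k in FIELD_MAP}

try:
    naive_handoff(support_payload)
except KeyError as err:
    print(err)
print(contracted_handoff(support_payload))
```

Negotiating that FIELD_MAP is organizational work, not technical work; if the two owning teams never talk, it never gets written.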

3.3 Ownership and incentives

Conway’s Law also implicitly touches power and incentives:

  • If one department has budget and authority to deploy agents, its workflows are heavily automated.
  • Other departments may remain manual or under-served.
  • Agents become “digital extensions” of the teams that built them—optimizing for that team’s metrics.

Over time, the organization can end up with agentic asymmetries: some functions enjoy near-autonomous execution; others rely on manual work, even when technically similar tasks could be automated.

In short, who owns and funds an agent strongly shapes what that agent optimizes for—and therefore how it behaves.


4. Organizational patterns and their agentic echoes

To make this concrete, consider several common organizational patterns and the agent architectures they tend to produce.

4.1 Siloed functional departments

Organization:
Separate departments: Marketing, Sales, Support, Finance, each with its own stack, KPIs, and budget.

Typical agentic outcome:

  • A support agent that knows about tickets and FAQs but not about churn risk or lifetime value.
  • A sales agent that can qualify leads but doesn’t see historic support issues.
  • A marketing agent that hyper-optimizes campaigns with little visibility into post-sale experience.

Here, Conway’s Law leads to:

  • Fragmented agents optimized for local metrics.
  • Limited ability to build a single “company brain” that understands the full customer journey.

4.2 Cross-functional product squads

Organization:
Squads composed of PM, designers, engineers, and sometimes data/ML, each owning a “vertical slice” (e.g., Checkout, Onboarding, Billing).

Typical agentic outcome:

  • Product-aligned agents: e.g., a Checkout Optimization Agent that handles promotions, payment retries, and upsell flows.
  • Agents that are end-to-end within a value stream, but narrow in scope.

The resulting architecture:

  • Aligns well to user journeys (“everything related to checkout is coherent and agentic”).
  • Still inherits Conway constraints: agents for adjacent journeys (Onboarding vs. Checkout) may struggle to share state or coordinate behavior unless the org sets explicit cross-squad collaboration patterns.

4.3 Platform + product teams

Organization:
Platform teams provide shared infrastructure; product teams sit on top.

Typical agentic outcome:

  • A platform agent layer (e.g., internal AI orchestration, shared tool registry, common memory store).
  • Multiple domain-specific agents that reuse platform capabilities.

If done well, this pattern can intentionally harness Conway’s Law:

  • The platform team’s charter is to create shared agent infrastructure.
  • Product teams easily plug into this, resulting in a coherent agentic ecosystem rather than scattered bots.
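As a sketch of what such a platform layer might expose (the interface is hypothetical), consider a shared memory store that agents owned by different teams read and write, instead of each team maintaining its own silo:

```python
# Hypothetical platform-layer memory store, owned by the platform team,
# so agents from different squads read the same customer context.

class SharedMemoryStore:
    """Namespaced key-value context store provided by the platform."""

    def __init__(self):
        self._data = {}

    def put(self, customer_id: str, key: str, value: dict) -> None:
        self._data[(customer_id, key)] = value

    def get(self, customer_id: str, key: str):
        return self._data.get((customer_id, key))

store = SharedMemoryStore()  # provisioned once, reused by all agents

# A support agent (Support squad) writes context...
store.put("cust-7", "open_issues", {"count": 2, "severity": "high"})

# ...and a checkout agent (Checkout squad) reads the same record.
print(store.get("cust-7", "open_issues"))
```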

5. The Inverse Conway Maneuver in the age of agents

The Inverse Conway Maneuver (ICM) is the idea of restructuring the organization to achieve a desired system architecture. Instead of systems accidentally mirroring the org, you shape the org to mirror the architecture you want. (Thoughtworks)

5.1 From architecture diagrams to team structures

For agentic systems, this suggests a workflow like:

  1. Design your ideal agent mesh:
    • What capabilities should be centralized vs. local?
    • Where should human-in-the-loop control sit?
    • How should agents share memory and context?
  2. Map that mesh to team ownership:
    • Create a central “agent platform” or “AI foundation” team responsible for safety, infra, and shared tools.
    • Assign value-stream teams to own the agents that directly impact end users in those flows.
    • Ensure cross-cutting teams (e.g., Security, Compliance) have explicit roles in governing agent behavior.
  3. Align communication paths:
    • If two agents need to coordinate tightly, ensure their owning teams have clear communication channels and shared rituals.

In other words, design the agent architecture first, then arrange teams so that Conway’s Law works for you, not against you.
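One way to operationalize step 3 is to declare the target agent mesh and team ownership as data, then flag agent pairs that must coordinate tightly but whose owning teams have no communication channel. A toy sketch (all names invented):

```python
# Declare the desired mesh and ownership as data, then flag agent pairs
# whose owning teams lack a communication channel. Names are invented.

agent_links = {("checkout_agent", "support_agent"),
               ("support_agent", "billing_agent")}

owner = {"checkout_agent": "checkout_squad",
         "support_agent": "support_squad",
         "billing_agent": "finance_team"}

# Human channels that actually exist today.
team_channels = {frozenset({"checkout_squad", "support_squad"})}

def conway_gaps(agent_links, owner, team_channels):
    """Agent pairs that must coordinate but whose teams never talk."""
    gaps = []
    for a, b in agent_links:
        teams = frozenset({owner[a], owner[b]})
        if len(teams) == 2 and teams not in team_channels:
            gaps.append((a, b))
    return gaps

print(conway_gaps(agent_links, owner, team_channels))
# [('support_agent', 'billing_agent')] -> create that channel first.
```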

5.2 Reorganizing around AI capabilities

Some organizations are already reorganizing around AI capabilities themselves:

  • Creating AI Centers of Excellence that own foundational models and agent frameworks.
  • Embedding AI specialists in product teams while relying on a central platform for common patterns.

This promotes:

  • Shared safety guardrails and common tooling.
  • Local autonomy for tailoring agents to domain-specific workflows.

However, if the Center of Excellence becomes a bottleneck, Conway’s Law reasserts itself: the systems it gates become exactly as slow-moving as the central team’s backlog and communication bandwidth allow.


6. Governance, safety, and sociotechnical alignment

Agentic systems introduce new governance and risk surfaces: automated actions, financial transactions, data access, and more. These concerns are also shaped by organizational structures.

6.1 Governance structures → control surfaces

If an organization has:

  • A central Risk & Compliance group, you often see:
    • centralized approval flows,
    • global policies enforced at the agent platform layer.
  • Decentralized autonomy with light governance:
    • teams may ship agents quickly,
    • but cross-agent consistency in safety and ethics is harder to enforce.

The governance model becomes visible as:

  • Permission systems (who can authorize agents to perform actions).
  • Review workflows (how new tools or behaviors are approved).
  • Logging and audit infrastructure (who operates and monitors it).

All of these design choices are heavily driven by how legal, risk, security, and engineering teams communicate and share responsibilities.
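As a sketch of how a governance model becomes a literal control surface (agents, actions, and limits are invented for illustration), a central, default-deny policy table enforced at the platform layer might look like this:

```python
# Central, default-deny policy table enforced at the platform layer.
# Agents, actions, and limits are invented for illustration.

POLICY = {
    ("support_agent", "refund"): {"max_amount": 100, "approver": "support_lead"},
    ("finance_agent", "refund"): {"max_amount": 10_000, "approver": "cfo_office"},
}

def authorize(agent: str, action: str, amount: float) -> str:
    rule = POLICY.get((agent, action))
    if rule is None:
        return "deny: no policy"                   # default-deny posture
    if amount > rule["max_amount"]:
        return f"escalate to {rule['approver']}"   # review workflow
    return "allow"                                 # logged for audit

print(authorize("support_agent", "refund", 50))        # allow
print(authorize("support_agent", "refund", 500))       # escalate to support_lead
print(authorize("support_agent", "close_account", 0))  # deny: no policy
```

Who gets to edit POLICY, and through what review process, is precisely the organizational question Conway’s Law says will shape the system.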

6.2 Human-in-the-loop as organizational compromise

Many agent deployments today use human-in-the-loop patterns: agents propose actions; humans approve or edit them. (AP News)

The placement of these humans mirrors Conway’s Law again:

  • If frontline staff are trusted and empowered, approval UIs are built for them.
  • If approvals must go through management, the agent is wired to escalate through that chain.
  • If certain departments need to “rubber-stamp” any automated decision, agents become constrained in exactly those places.

Agent autonomy levels (e.g., suggest-only vs. auto-execute) end up matching organizational comfort levels and trust relationships between teams.
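Those comfort levels can be encoded explicitly. A sketch (the levels, agents, and action names are hypothetical):

```python
from enum import Enum

class Autonomy(Enum):
    SUGGEST_ONLY = 1    # agent drafts, human executes
    APPROVE_FIRST = 2   # agent executes after human approval
    AUTO_EXECUTE = 3    # agent executes, human audits afterwards

# Autonomy per (agent, action), mirroring organizational trust levels.
TRUST = {("support_agent", "draft_reply"): Autonomy.AUTO_EXECUTE,
         ("support_agent", "issue_refund"): Autonomy.APPROVE_FIRST,
         ("finance_agent", "wire_transfer"): Autonomy.SUGGEST_ONLY}

def handle(agent, action, execute, ask_human):
    level = TRUST.get((agent, action), Autonomy.SUGGEST_ONLY)  # cautious default
    if level is Autonomy.AUTO_EXECUTE:
        return execute()
    if level is Autonomy.APPROVE_FIRST and ask_human():
        return execute()
    return "proposal sent for human review"

print(handle("support_agent", "issue_refund",
             execute=lambda: "refund issued", ask_human=lambda: True))
```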


7. Practical design guidelines

Bringing this all together, here are practical guidelines for designing agentic systems with Conway’s Law in mind.

7.1 Make your “agent organization chart”

Treat your agents like a parallel organization:

  1. List all current or planned agents:
    • Their goals
    • Their tools/data access
    • Their owners
  2. Draw their interaction graph:
    • Who calls whom?
    • Where are the bottlenecks, long chains, or missing connections?
  3. Compare this to your human org chart:
    • Which teams own which agents?
    • Where do misalignments appear (e.g., two agents need to collaborate but their teams barely talk)?

This exercise often reveals why agents behave in fragmented, inconsistent ways.
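Steps 2 and 3 lend themselves to simple graph analysis. A toy sketch (agent names invented) that spots bottleneck agents and checks whether two agents are connected at all:

```python
from collections import Counter, defaultdict

# Step 2 of the exercise: who calls whom among your agents? (invented)
calls = [("sales_agent", "crm_agent"), ("support_agent", "crm_agent"),
         ("billing_agent", "crm_agent"), ("marketing_agent", "crm_agent")]

# Bottlenecks: agents that many others depend on.
in_degree = Counter(callee for _, callee in calls)
print(in_degree.most_common(1))  # [('crm_agent', 4)] -> likely bottleneck

# Missing connections: is there any path between two agents at all?
graph = defaultdict(set)
for a, b in calls:
    graph[a].add(b)
    graph[b].add(a)

def connected(a: str, b: str) -> bool:
    seen, stack = set(), [a]
    while stack:
        node = stack.pop()
        if node == b:
            return True
        if node not in seen:
            seen.add(node)
            stack.extend(graph[node])
    return False

print(connected("sales_agent", "support_agent"))  # True, but only via crm_agent
```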

7.2 Design around value streams, not just functions

When possible:

  • Organize agents around end-to-end user journeys (signup → onboarding → purchase → support → renewal).
  • Ensure a single team or tight coalition owns each journey’s core agents.

This mirrors modern advice for microservices: align technical boundaries with business domains and customer-facing value streams. (Splunk)

7.3 Establish a shared agent platform

Create a platform layer responsible for:

  • Tool and API catalogs
  • Common memory / knowledge stores
  • Safety policies (e.g., red-teaming, rate limits, permissioning)
  • Monitoring and observability for agents’ actions

Give this platform team:

  • A mandate to build standard patterns (e.g., “how agents call internal APIs”).
  • Strong communication channels with product teams, so patterns aren’t designed in isolation.

This reduces duplication and encourages a consistent “grammar” for agent behavior.
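A minimal sketch of such a catalog (the interface is invented for illustration): product teams register tools once, with permissioning attached, and every agent discovers and calls them the same way.

```python
from typing import Callable

# Platform-owned tool registry: one consistent "grammar" for how agents
# discover and call internal capabilities. The interface is hypothetical.

class ToolRegistry:
    def __init__(self):
        self._tools = {}

    def register(self, name: str, fn: Callable, allowed_agents: set):
        self._tools[name] = (fn, allowed_agents)

    def call(self, agent: str, name: str, *args):
        fn, allowed = self._tools[name]
        if agent not in allowed:                      # permissioning
            raise PermissionError(f"{agent} may not call {name}")
        return fn(*args)

registry = ToolRegistry()
registry.register("lookup_invoice",
                  lambda inv_id: {"id": inv_id, "amount_due": 120.0},
                  allowed_agents={"finance_agent"})

print(registry.call("finance_agent", "lookup_invoice", "INV-9"))
# registry.call("marketing_agent", "lookup_invoice", "INV-9")  # PermissionError
```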

7.4 Use the Inverse Conway Maneuver intentionally

If your desired agent architecture is:

  • a hub-and-spoke model (central brain, many specialized executors),
  • a federation of domain experts (multiple semi-autonomous agents that negotiate),
  • or a layered model (core reasoning agent + task-specific workers),

then shape your teams to match:

  • Assign clear ownership for each major agent and its interfaces.
  • Align team OKRs to the end-to-end performance of their agent, not just local metrics.
  • Create cross-team rituals where closely interacting agents’ owners coordinate.

7.5 Build feedback loops between humans and agents

Agents that operate in complex organizations need constant tuning. To support that:

  • Embed agent feedback mechanisms where humans can:
    • report misbehavior,
    • suggest new tools,
    • correct plans or outcomes.
  • Route this feedback to the right team based on agent ownership.

If feedback routing is misaligned with the human org, Conway’s Law shows up as slow improvements, orphan issues, and unclear responsibility.
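Routing can be as simple as a well-maintained ownership table; the failure mode is that table drifting out of sync with the real org. A sketch (names invented):

```python
# Route human feedback to the team that owns the misbehaving agent.
# If this table drifts from the real org, issues become orphans.

OWNERSHIP = {"support_agent": "support_squad",
             "checkout_agent": "checkout_squad"}

def route_feedback(agent: str, report: str) -> str:
    team = OWNERSHIP.get(agent)
    if team is None:
        return f"UNROUTED: no owner for {agent!r}; report: {report}"
    return f"filed to {team}: {report}"

print(route_feedback("support_agent", "closed ticket prematurely"))
print(route_feedback("legacy_faq_bot", "gave outdated policy answer"))
```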


8. Open research and design questions

The intersection of Conway’s Law and agentic systems is still emerging. Some open questions:

  1. Quantifying “organizational-agentic alignment”
    Can we define metrics that compare:
    • the graph of human teams and communication, and
    • the graph of agents and their interactions?
    Such metrics could help predict failure modes (e.g., brittle agent interactions where human teams barely talk); a crude sketch follows this list.
  2. Agent ecosystems across organizational boundaries
    As agents increasingly call external tools and APIs, they span multiple organizations:
    • How does Conway’s Law extend to ecosystems, not just single firms?
    • Do API contracts embed the communication patterns between organizations?
  3. Distributed work and remote communication
    Modern organizations communicate via Slack, email, tickets, and async docs. (Medium)
    • How do these patterns translate into agent communication?
    • Do remote-first orgs produce more modular and loosely coupled agent architectures?
  4. Safety, ethics, and power structures
    • Which agents get more autonomy, and why?
    • How do internal power dynamics (e.g., dominance of certain departments) manifest in whose agents can take which actions?
  5. Co-evolution of org and agents
    Over time, agents can change how humans work; that change feeds back into communication structures, which then reshape agents again.
    • Can we model this as a co-evolutionary process?
    • What patterns lead to stable, resilient socio-technical systems?
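For question 1, a crude starting point (purely illustrative, with invented names) is to lift agent interaction edges to the teams that own the agents, then compute a Jaccard similarity against the team communication graph:

```python
# Crude alignment score: lift agent interaction edges to owning teams,
# then take a Jaccard similarity with the team communication graph.
# All names are hypothetical.

team_edges = {frozenset({"sales", "support"}), frozenset({"support", "eng"})}
agent_edges = {("sales_agent", "support_agent"),
               ("sales_agent", "finance_agent")}
owner = {"sales_agent": "sales", "support_agent": "support",
         "finance_agent": "finance"}

lifted = {frozenset({owner[a], owner[b]}) for a, b in agent_edges}

def jaccard(x: set, y: set) -> float:
    return len(x & y) / len(x | y) if (x | y) else 1.0

print(f"alignment = {jaccard(team_edges, lifted):.2f}")  # 0.33 here
```

A real metric would need to weight edges by interaction frequency and account for indirect paths, but even this toy version surfaces agent links with no human counterpart.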

9. Conclusion

Conway’s Law tells us that systems mirror the organizations that create them. Agentic systems are no exception; in fact, their autonomy and action-taking make the mirroring more visible and more consequential.

  • Agent architectures—which agents exist, how they communicate, what they optimize for—are shaped by team boundaries, communication patterns, ownership, and incentives.
  • The Inverse Conway Maneuver offers a way to design both sides together: create the agent architecture you want, then shape teams and collaboration patterns to support it.
  • Governance, safety, and human-in-the-loop oversight are likewise sociotechnical: they reflect the organization’s trust structures and risk appetite.

If you’re building agentic systems inside a real organization, you’re not just designing AI. You’re designing a second, digital organization of agents that will inevitably echo your human one. The most successful deployments will be those that intentionally align these two organizations—human and agentic—so that Conway’s Law becomes a tool, not a trap.

