From Talking to Networks to Thinking With Them

Introducing Structured Network Autonomy

I've been building AI into my network lab for a while now.

It started simply enough: natural language queries, a custom MCP server, letting a language model pull telemetry and answer questions about my infrastructure in plain English. It was impressive. It was useful. And the more I built, the more I realized something that changed the direction of everything I was working on.

We've spent the past year teaching engineers to talk to AI better. But nobody is teaching them how to think with it at scale.

That distinction — seemingly small — is the entire problem.

The Conversation We're Not Having

Right now the dominant narrative around AI in networking is about prompting. Get better at writing prompts. Learn how to ask the right questions. Use AI as a smarter CLI.

That's not wrong — but it's not enough.

The real shift isn't in how fluently you can prompt a model. It's in whether you can clearly articulate what your network is supposed to do and why, and then design systems that let AI operate purposefully within that intent. That's a different discipline entirely. One rooted not just in syntax, but in governance.

Most of the industry isn't talking about it yet.


What I Kept Running Into

As my lab builds grew more sophisticated — pyATS validation, gRPC telemetry, RAG systems for documentation, agentic workflows that could actually take action — I kept running into the same wall.

The AI could reason. It could act. But the moment I considered giving it real authority in a real network, I didn't have a framework for how to do that safely.

Not just technically safely. Organizationally safely. With governance. With accountability. With the ability to scale trust incrementally rather than flip a switch and hope.

I looked around and found a lot of vendor marketing and not much else.

So I built one.

Introducing Structured Network Autonomy (SNA)

SNA is a framework for deploying AI agents in enterprise networks with intentional governance, earned trust, and secure autonomy.

The best way I can describe it: SNA is air traffic control for autonomous systems. Strict lanes. Clearance protocols. Escalating authority based on track record. Continuous telemetry for trust.

As someone who spends real time in the flight simulator, that model makes complete sense to me — and I think it will to anyone who has managed a complex, high-stakes operational environment.

A pilot doesn't get an instrument rating overnight. They earn it through logged hours, demonstrated judgment, and formal review. An AI agent in your network should work exactly the same way.

The closer I got to building real autonomy, the clearer one thing became: technical capability isn't the barrier. Organizational trust is. That realization became the foundation of Structured Network Autonomy.

The Core Architecture

SNA is built on a Security Foundation — not as a layer you add later, but as the ground everything else stands on. Its primary emphasis is data integrity at rest and in transit.

An agent is only as trustworthy as its data. If your source of truth is compromised, your AI will act — confidently, and at speed — on bad information. That's a category of risk that has to be designed out from day one.


Above that foundation, SNA operates through four layers:

  1. Define — What do we want? Inventory, network intent and policy, risk tolerance, data sources, and governance roles established before the agent touches anything.

  2. Constrain — How do we enforce it? Organizational intent translated into machine-enforceable policies via a Policy Engine that evaluates every action request before execution.

  3. Act — The agent reasons over your source of truth and curated knowledge base, moving fast on low-risk certainty and automatically escalating when it approaches the edges of its authority.

  4. Audit — Everything logged, explainable, and reviewable. The Audit layer feeds directly back into Constrain — creating a continuous governance cycle, not a static compliance model.



The Audit layer feeding back into Constrain is what makes this a living governance system rather than a rulebook that goes stale.
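To make the Constrain and Audit layers concrete, here is a minimal sketch of a policy engine that evaluates every action request before execution and logs every decision. All of the names here (ActionRequest, Policy, PolicyEngine) and the allow/escalate/deny outcomes are hypothetical illustrations, not the framework's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class ActionRequest:
    agent_id: str
    action: str          # e.g. "shutdown_interface" (illustrative)
    target: str          # e.g. "core-sw1/Gi0/1" (illustrative)
    risk: str            # "low" | "medium" | "high"

@dataclass
class Policy:
    allowed_actions: set[str]
    max_risk: str
    allowed_targets: set[str]

RISK_ORDER = {"low": 0, "medium": 1, "high": 2}

class PolicyEngine:
    def __init__(self, policy: Policy):
        self.policy = policy
        # Every decision is recorded: this is what the Audit layer
        # reviews and feeds back into Constrain.
        self.audit_log: list[dict] = []

    def evaluate(self, req: ActionRequest) -> str:
        """Return 'allow', 'escalate', or 'deny', and log the decision."""
        if req.action not in self.policy.allowed_actions:
            decision = "deny"
        elif RISK_ORDER[req.risk] > RISK_ORDER[self.policy.max_risk]:
            decision = "escalate"   # edge of authority: hand to a human
        elif req.target not in self.policy.allowed_targets:
            decision = "escalate"
        else:
            decision = "allow"
        self.audit_log.append({"request": req, "decision": decision})
        return decision
```

The point of the sketch is the shape, not the details: the agent never executes directly against the network; it requests, the Policy Engine decides, and the audit trail captures both.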


The Governing Principle: Earned Autonomy

The idea that holds the entire framework together: agent authority expands based on demonstrated track record, not assumed capability.

SNA makes this measurable through two mechanisms:

AFM — Autonomy Feedback Metric. Scores individual agent actions across four dimensions: accuracy, scope adherence, escalation appropriateness, and recovery performance. Calculated continuously and fed back into the Policy Engine.

EAS — Earned Autonomy Score. The agent's cumulative flight record. Built from AFM data over time. Determines what maturity stage the agent occupies and what authority level it has earned. Rises with consistent performance. Falls with poor decisions or unexpected human interventions.

AFM is the input. EAS is the output. You cannot have a meaningful Earned Autonomy Score without a rigorous Autonomy Feedback Metric. Trust is earned — never assumed.


As the EAS grows, authority expands. If behavior degrades, authority contracts. For organizations, that means the path from assisted AI to governed autonomy is visible, defensible, and controlled — not a leap of faith.
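One way to picture the mechanics: score each action across the four AFM dimensions, then fold the result into a running EAS. The four dimensions come from the framework; the weights, 0-to-1 scales, and exponential-moving-average update below are my assumptions for illustration, not part of the SNA specification.

```python
def afm_score(accuracy: float, scope_adherence: float,
              escalation: float, recovery: float) -> float:
    """Score one agent action across the four AFM dimensions (each 0..1).
    The weighting is an illustrative assumption."""
    weights = (0.4, 0.3, 0.2, 0.1)
    dims = (accuracy, scope_adherence, escalation, recovery)
    return sum(w * d for w, d in zip(weights, dims))

def update_eas(eas: float, afm: float, alpha: float = 0.1) -> float:
    """Fold one AFM sample into the cumulative EAS via an exponential
    moving average: consistent performance raises it, a poor decision
    or an unexpected human intervention pulls it back down."""
    return (1 - alpha) * eas + alpha * afm
```

The exact math matters less than the property it guarantees: EAS can only rise through a sustained track record of good AFM scores, and a single bad stretch drags it down faster than it climbed.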

The Maturity Model: Where Does Your Organization Start?

Every organization is somewhere on this journey. SNA meets them where they are.

  1. Assisted — AI observes, analyzes, and recommends. No autonomous action. Every agent starts here — EAS at zero, regardless of organizational maturity.

  2. Supervised — AI takes limited pre-approved actions in low-risk scenarios. Human approval required beyond that.

  3. Delegated — AI operates autonomously within well-defined boundaries, escalating only at the edges of its authority.

  4. Governed — AI manages routine network operations autonomously with full audit trails, dynamic constraints, and continuous feedback into the governance cycle.



Trust is earned by the agent — not inherited.

Progression through stages requires Governance Officer sign-off. The first grant of real autonomous authority is a deliberate organizational decision, not an automatic one.
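The stage progression above can be sketched as a simple gate: an agent's EAS determines which stage its record supports, but promotion still requires an explicit sign-off. The thresholds and the boolean sign-off flag are illustrative assumptions; the framework specifies only that every agent starts Assisted at EAS zero and that promotion is a deliberate human decision.

```python
STAGES = ["Assisted", "Supervised", "Delegated", "Governed"]
THRESHOLDS = [0.0, 0.5, 0.7, 0.9]   # minimum EAS per stage (assumed values)

def eligible_stage(eas: float) -> str:
    """Highest stage the agent's track record supports."""
    stage = STAGES[0]
    for name, minimum in zip(STAGES, THRESHOLDS):
        if eas >= minimum:
            stage = name
    return stage

def promote(current: str, eas: float, officer_signed_off: bool) -> str:
    """Advance at most one stage, and only with Governance Officer
    sign-off: authority is granted deliberately, never automatically."""
    idx = STAGES.index(current)
    can_advance = idx + 1 < len(STAGES) and eas >= THRESHOLDS[idx + 1]
    return STAGES[idx + 1] if can_advance and officer_signed_off else current
```

Note the asymmetry: a high EAS makes an agent eligible for promotion but never promotes it, while (per the earlier section) degraded behavior contracts authority without waiting for a review.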

Why This Matters Now

We are at an inflection point. AI tooling has reached the point where giving an agent meaningful authority in a production network is technically feasible. What's lagging behind is the governance thinking — the frameworks that let organizations do this safely, incrementally, and with full accountability to the business.

SNA was designed from the ground up with enterprise and regulated environments in mind. The security foundation, governance roles, audit layer, and lifecycle management directly address NIST AI RMF's core functions — not as an afterthought, but as structural requirements.

The engineers and architects who develop this governance thinking now will define how this technology gets adopted across the industry.

I've been doing this work in my lab, in my content, and in the conversations I have with engineers trying to figure out what comes next. Those conversations have started opening doors at the enterprise AI infrastructure level that I couldn't have imagined a year ago. More on that when the time is right.

What's Next

This article is an introduction. Over the coming weeks I'll be going deeper on each layer of the SNA framework — the security foundation, RAG governance, the Policy Engine, the Trust Transparency Dashboard, and how SNA integrates with Zero Trust, ITIL, and the NIST AI Risk Management Framework.

If this resonates with you — if you've been building AI into your network practice and feeling like the governance piece is missing — I'd love to hear from you in the comments.

And if you want to start building the foundation yourself, grab the Network Automation Learning Path at The Tech-E.

The network doesn’t just need engineers who can talk to AI. It needs architects who can think with it — and govern it. Because the future of networking won’t just be automated. It will be accountable.

Elliot Conner · Network Automation Architect · Founder, The Tech-E · CCNP Enterprise


