Zero Trust Isn’t Neutral: The Ethics Hiding in Identity-First Security

From humans to AI agents: the access layer is now where ethics, oversight, and accountability collide.

Dec 30, 2025

Zero Trust is sold as a fix for an outdated assumption: that anything “inside the network” is safe. With cloud apps, remote work, stolen credentials, and contractors everywhere, that assumption was already broken long before anyone named the replacement.

The alternative sounds straightforward: don’t trust location, verify each request, limit access to what’s needed, assume breach. On paper it’s just better engineering — tighter controls, smaller blast radius, fewer catastrophic surprises.

What’s less obvious is what happens once security becomes identity-first. At that point it stops being purely technical. It becomes a system that decides who gets to participate, under what conditions, and how much scrutiny they’re under. That’s where the engineering decisions start carrying ethical weight, usually without anyone noticing.


Zero Trust as a technical idea (why it works so well)

From a purely technical perspective, Zero Trust solves real problems:

  • People log in from everywhere, not one office network.
  • Apps live in multiple clouds and SaaS platforms.
  • Credential theft is common, and “inside” isn’t safe.
  • Vendors, contractors, and partners need access, often temporarily.

In response, teams tighten controls around identity and context:

  • Stronger sign-in requirements when risk seems higher
  • Device checks (is it patched, encrypted, managed?)
  • Narrow permissions (least privilege)
  • Segmentation (so a compromise doesn’t spread)
  • Better logging and faster response when something goes wrong

If you’ve ever been on-call for an incident caused by “flat network + shared credentials + no visibility,” Zero Trust feels like sanity.
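
Those controls usually collapse into a single allow/step-up/deny decision per request. A minimal sketch of such a policy function, with hypothetical signal names (`mfa_passed`, `device_managed`, and so on are illustrative, not any vendor's API):

```python
from dataclasses import dataclass

@dataclass
class AccessContext:
    """Signals a Zero Trust policy engine might evaluate (illustrative names)."""
    mfa_passed: bool
    device_managed: bool
    device_patched: bool
    new_location: bool

def decide(ctx: AccessContext) -> str:
    """Return 'allow', 'step_up', or 'deny' for one request.
    Deliberately simple: deny unmanaged or unpatched devices,
    challenge unfamiliar locations, otherwise allow."""
    if not (ctx.device_managed and ctx.device_patched):
        return "deny"
    if ctx.new_location and not ctx.mfa_passed:
        return "step_up"
    return "allow"
```

Even at this toy scale, the point of the article is visible: every branch is a judgment about who deserves friction.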

But the moment you make identity the center of security, you’re also making it the center of power.


IAM is the plumbing that turns Zero Trust into reality

Most Zero Trust conversations stay abstract. In practice, Zero Trust is implemented through Identity and Access Management (IAM) decisions, often boring and always consequential:

  • How people sign in (SSO, MFA, phishing-resistant methods)
  • How accounts are created and removed (the joiner/mover/leaver lifecycle)
  • How permissions are granted (roles, groups, least privilege)
  • How high-risk access is handled (privileged admin access, “break glass” accounts)
  • How access is reviewed (who still needs what, and why)
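
The "leaver" step is where lifecycle hygiene shows up most clearly. A hypothetical sketch (the data shapes are invented for illustration, not a real IAM API):

```python
def offboard(user: str, directory: dict, grants: dict) -> list:
    """Handle the 'leaver' step: disable sign-in and revoke every grant.
    Returns the revoked entitlements so they can be logged and reviewed."""
    directory[user]["active"] = False          # block new sessions at the IdP
    revoked = sorted(grants.pop(user, set()))  # drop all entitlements at once
    return revoked
```

The consequential part isn't the code; it's that someone decided deprovisioning is atomic and auditable rather than a ticket that sits open for weeks.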

This is where the system becomes real. It is also where ethics sneak in, because IAM encodes answers to questions like:

  • Who is “inside” the organization?
  • Who is temporary?
  • Who is always under suspicion?
  • Who gets the benefit of convenience?
  • Who pays the cost of false alarms?

Identity is not just a login. It becomes an operating system for participation.


Identity is power, not just authentication

Identity sounds neutral: “Are you who you say you are?”

In modern systems, identity becomes much more than that:

  • A profile of what you’re allowed to do
  • A description of your device and environment
  • A history of your behavior (“normal” vs “anomalous”)
  • A label that can change your access in real time

The moment access depends on identity context, identity becomes a gatekeeping mechanism. It can enable productivity and safety. It can also create invisible hierarchies:

  • Employees vs contractors
  • “Managed” devices vs personal devices
  • “Trusted regions” vs “high-risk regions”
  • “Normal working hours” vs “unusual hours”
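
Those categories often end up as literal fields in a policy store. A hypothetical sketch of how the hierarchy gets encoded (field names are invented):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class IdentityContext:
    """Each field is an organizational judgment, not a neutral fact."""
    employment: str      # "employee" | "contractor"
    device: str          # "managed" | "personal"
    region_risk: str     # "trusted" | "high_risk"
    working_hours: bool  # inside "normal" hours?

def friction_score(ctx: IdentityContext) -> int:
    """Count how many categories push this person toward extra scrutiny."""
    return sum([
        ctx.employment == "contractor",
        ctx.device == "personal",
        ctx.region_risk == "high_risk",
        not ctx.working_hours,
    ])
```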

None of those categories are purely technical. They encode organizational assumptions about legitimacy, and then enforce them automatically.


Zero Trust shifts risk downward

A subtle but important shift happens in many Zero Trust rollouts: uncertainty is pushed down to the user.

Instead of the organization accepting risk (“we’ll allow it and investigate if needed”), the system increasingly says:

  • If anything looks off, access is denied or slowed until proven safe.
  • If your device or network doesn’t match expectations, you get friction.
  • If the system is unsure, you carry the cost of that doubt.

This can be appropriate. Security often requires caution. But it changes the failure mode.

In older models, security failures were often catastrophic (breach, lateral movement, data exposure). In stricter identity-first models, failures often become:

  • Lockouts
  • Work stoppages
  • Endless challenges
  • “I can’t do my job right now”

This is not just inconvenience. For many roles, access is labor. When access becomes conditional and fragile, the organization has moved operational risk onto individuals and frontline teams.


Continuous verification means continuous observation

“Continuously verify” sounds like a technical best practice. In practice, continuous verification usually implies continuous observation.

To decide whether a request is safe, systems evaluate signals such as:

  • Sign-in patterns and location changes
  • Device state and security posture
  • Network characteristics and reputation
  • Application usage and access history

Those signals can be defensible for security. The ethical tension is that consent often does not keep pace.

Verification becomes continuous. Consent and transparency often remain one-time:

  • An acceptable use policy nobody rereads
  • An onboarding document signed years ago
  • A “we monitor for security” banner with no real detail

Over time, security telemetry can expand quietly:

  • more data sources
  • longer retention
  • broader internal access
  • secondary uses that were not originally stated

Even if this remains legal, it can violate expectations. Expectations matter when identity systems become part of everyday life.


Bias enters through risk scoring

Many Zero Trust systems rely on some form of risk scoring. It may not be a single number so much as a logic layer that decides whether your situation “looks safe.”

Bias can enter through ordinary realities:

  • People who travel frequently (ask anyone a streaming service has flagged on the road)
  • People who work across time zones
  • People on mobile networks with shifting IPs
  • People in regions where routing is inconsistent
  • People using older devices they can’t easily replace
  • People who share networks (campuses, dorms, coworking spaces)

A model can be “technically correct” in its own terms and still distribute friction unevenly. That friction matters:

  • More prompts for some users than others
  • Higher lockout rates for certain roles or regions
  • Slower workflows for people already at the margins of access

This is not a moral failure of mathematics, but what happens when policy is encoded into automated decisions without feedback loops for fairness and lived reality.
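
One concrete feedback loop is simply measuring challenge rates per group. A minimal sketch, assuming an access log of `(group, was_challenged)` pairs where the group could be a region, role, or connection type:

```python
from collections import defaultdict

def challenge_rates(events: list) -> dict:
    """Per-group rate of step-up challenges from an access log."""
    totals = defaultdict(int)
    challenged = defaultdict(int)
    for group, was_challenged in events:
        totals[group] += 1
        challenged[group] += bool(was_challenged)  # bool counts as 0/1
    return {g: challenged[g] / totals[g] for g in totals}
```

If one cohort's rate is triple another's, that is a finding, whatever the model's intent was.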


Where AI fits: both inside the controls and inside the threat model

AI is changing Zero Trust in two directions at once.

1) AI inside Zero Trust: decisions become more automated and less explainable

More organizations now use AI-assisted detection to decide when to challenge a login, block a session, or escalate verification. That can help catch real attacks (stolen credentials, impossible travel patterns, suspicious access bursts).

But it also raises two ethical problems:

  • Explainability: “The model said so” is not a reason a human can act on.
  • Governance drift: when AI is tuned to reduce security risk, it may silently increase lockouts, surveillance, or inequality unless someone actively measures those outcomes.

The practical takeaway: if you use AI for access decisions, you need operational guardrails. Provide clear reasons, appeal paths, and metrics for false positives, because the system will be wrong sometimes.
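
"Provide clear reasons" can be as simple as forcing every decision to carry named reason codes that map to actions a person can take. A hypothetical sketch (the codes, messages, and appeal URL are placeholders):

```python
from dataclasses import dataclass, field

REASONS = {  # hypothetical, human-actionable reason codes
    "DEVICE_UNPATCHED": "Update your device, then retry.",
    "NEW_LOCATION": "Confirm the sign-in via your second factor.",
}

@dataclass
class Decision:
    """An access decision a person can act on: an outcome plus named
    reasons, never just 'the model said so'."""
    outcome: str                       # "allow" | "step_up" | "deny"
    reason_codes: list = field(default_factory=list)
    appeal_url: str = "https://example.test/access-appeal"  # placeholder

    def explain(self) -> str:
        steps = [REASONS[c] for c in self.reason_codes]
        return f"{self.outcome}: " + " ".join(steps)
```

The design choice is that a decision without a registered reason code simply cannot be constructed, which keeps "the model said so" out of the system by type, not by policy memo.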

2) AI as a forcing function: identity attacks get cheaper

AI also strengthens the attacker’s side:

  • More persuasive phishing
  • Better social engineering at scale
  • Voice/video deepfakes targeting help desks and managers
  • Faster credential stuffing and recon

That pushes organizations toward stronger identity controls (phishing-resistant MFA, stricter recovery processes, tighter privileged access). Those moves can be justified, but they also tend to increase friction and monitoring, which again shifts cost downward unless designed carefully.

3) AI agents become “new employees” (and they need identities too)

A newer shift is internal: teams are giving AI agents and automation tools real access to systems like tickets, runbooks, databases, cloud consoles, and customer support tools.

This creates a category that Zero Trust wasn’t originally designed around: non-human identities that act with human-like reach.

If you do nothing, AI agents often end up with:

  • shared API keys
  • broad permissions
  • unclear ownership
  • weak audit trails

Ethically and operationally, that is the worst of both worlds: high power, low accountability.

The better pattern looks like identity-first for agents too:

  • Give each agent its own identity, scoped permissions, and clear ownership.
  • Limit what it can access by default (least privilege).
  • Log actions at the agent level (not just “someone used the service key”).
  • Make “what data can the agent see” a design decision, not an accident.
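
The pattern above can be sketched as a tiny agent-identity record, with invented field names: its own name, an accountable owner, a default-deny scope list, and a per-action audit trail:

```python
from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    """A non-human identity held to the same hygiene as a human account."""
    name: str
    owner: str          # the human team accountable for this agent
    scopes: frozenset   # least privilege: explicit allow-list
    audit: list = field(default_factory=list)

    def act(self, action: str) -> bool:
        allowed = action in self.scopes
        self.audit.append((self.name, action, allowed))  # agent-level trail
        return allowed
```

Contrast this with a shared service key: here every denied action still leaves a record naming the agent, not just "someone used the key."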

Zero Trust and the normalization of surveillance

Identity-first security can normalize surveillance by changing the framing from “monitoring people” to “monitoring posture.”

But as the system expands, the distinction can blur:

  • Logs of what you access can become proxies for what you’re working on.
  • “Anomalous behavior” can become “unusual work style.”
  • Data-protection controls can become broad inspection of everyday content.

Many organizations draw lines here. Many do not, especially under pressure after incidents, audits, or high-profile breaches.

This matters more when we zoom out to governments and critical infrastructure. In those contexts, identity systems can become part of how society enforces access: to services, to information, to participation in digital life. That is not a speculative leap. It is what happens when identity becomes the primary control plane for everything.


Trust isn’t removed, it’s relocated: to vendors (and to infrastructure choices)

Zero Trust is often described as “removing trust from the network.”

In practice, trust does not disappear. It moves.

You reduce reliance on implicit trust (like being on a certain network), and increase reliance on:

  • The identity platform that issues sessions
  • The policy engine that decides “allow/deny”
  • The telemetry pipeline that feeds those decisions
  • The agents and endpoints that report device state

This can make systems safer. It can also concentrate dependency.

Questions that suddenly become existential:

  • What happens during an identity outage?
  • Who can change policy, and how quickly?
  • What visibility do you have into decision-making logic?
  • How portable are you if you need to switch providers?
  • Where does your security data live, and who can access it?

The ethical angle here is not “vendors are bad.” It is that Zero Trust can quietly turn private infrastructure into the arbiter of organizational access, and sometimes public access too.


Zero Trust in policy and regulation contexts

Once identity-first security becomes the default pattern, it intersects with public policy whether you intend it or not.

Governments and infrastructure operators adopt these models for good reasons:

  • Reducing national-scale cyber risk
  • Protecting critical services (energy, healthcare, finance, telecom)
  • Meeting procurement baselines and audit requirements
  • Responding to supply-chain threats

At the same time, cross-border data flows, privacy laws, and governance frameworks push back against unlimited collection and indefinite retention.

This is where principles often associated with OECD-style privacy governance become relevant, not as a citation exercise but as design pressure:

  • Purpose limitation: collect data for a specific security reason, not “just in case”
  • Data minimization: collect the least that works, not the most you can
  • Transparency: make monitoring understandable to the people affected
  • Accountability: someone owns the harm when systems misclassify or overreach
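
Those principles can be made concrete in a telemetry policy that every signal must pass. A hypothetical sketch (signal names, purposes, and thresholds are invented):

```python
# Hypothetical telemetry policy: each signal declares its purpose,
# retention, and readers, so "just in case" has nowhere to hide.
TELEMETRY_POLICY = {
    "signin_events": {"purpose": "credential-theft detection",
                      "retention_days": 90,
                      "readers": ["security-ops"]},
    "device_posture": {"purpose": "patch-level checks at sign-in",
                       "retention_days": 30,
                       "readers": ["security-ops", "it-helpdesk"]},
}

def validate(policy: dict) -> list:
    """Flag entries that drift toward over-collection."""
    problems = []
    for signal, rules in policy.items():
        if not rules.get("purpose"):
            problems.append(f"{signal}: no stated purpose")
        if rules.get("retention_days", 0) > 365:
            problems.append(f"{signal}: retention beyond one year")
    return problems
```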

In many organizations, those ideas get operationalized through management systems and audits: ISO 27001 (information security management) often becomes the place where “who is accountable for controls?” is answered, and ISO 27701 is commonly used to extend that thinking into privacy management (what data is collected, why, and under what rules). You don’t need a certificate to benefit from the mindset: define scope, document intent, and make accountability real.

Zero Trust sits at the intersection of “secure everything” and “collect less.” If you do not handle that tension explicitly, your tooling will handle it for you, and tooling tends to prefer more data.


How to design ethical Zero Trust

Ethical Zero Trust isn’t “trust everyone.” It’s limiting technical trust without quietly building a system of coercion.

Treat visibility as a cost, not a benefit. Only collect signals you can justify. Prefer coarse checks over invasive inspection. Separate security telemetry from performance data by policy. Set retention limits that match actual operational needs, not “just in case.”

Make access decisions legible. People put up with friction better when it’s understandable and fixable. Explain why access was blocked, in terms a human can act on. Provide clear steps for remediation. “Computer says no, open a ticket” is a bad default that breeds distrust.

Design for false positives. Assume the system will be wrong sometimes — especially if there’s AI involved in the decision. Build fast recovery paths. Create audited “break glass” access for real emergencies. Track lockout and step-up rates as operational metrics alongside security metrics.
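
An audited "break glass" path can be sketched in a few lines: the grant is time-boxed and every use leaves a record. Illustrative only; a real system would page reviewers and auto-revoke via the IdP:

```python
import time

def break_glass(requests_log: list, who: str, reason: str,
                ttl_seconds: int = 3600) -> dict:
    """Grant emergency access that is time-boxed and loudly recorded."""
    grant = {"who": who, "reason": reason,
             "expires_at": time.time() + ttl_seconds}
    requests_log.append(grant)  # audited: every use leaves a record
    return grant

def is_active(grant: dict) -> bool:
    return time.time() < grant["expires_at"]
```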

Consent should keep pace with verification. If monitoring expands, notice should too. Be explicit about what’s collected, why, and who can see it. After an incident, re-evaluate scope rather than permanently ratcheting up surveillance.

Govern non-human identities. Don’t let automation become a way around your own principles. Bots, pipelines, and AI agents should have their own scoped identities, clear owners, strong logging, and narrow data access — the same standards you’d apply to a human account.

Measure who carries the friction. Ask not only “did we reduce risk?” but also: who gets blocked most? Who gets challenged repeatedly? Do certain regions, roles, or connection types absorb disproportionate friction? If you don’t measure this, you’re letting the system choose outcomes without realizing it.


Closing: Zero Trust lives at Layer 9 whether we admit it or not

Zero Trust isn’t just an architecture diagram. It’s a governance system implemented in software, one that defines legitimacy, suspicion, and access at speed and at scale.

Protocols and physics set constraints. Institutions and incentives decide outcomes.

That’s why Zero Trust lives at Layer 9 — the layer of society, governance, and power — whether the architecture review mentions it or not.

Written by the Infra Atlas author

I work on infrastructure and software systems across layers: writing code, shipping products, and dealing with the practical trade-offs of hosting, memory, and network behavior in production. When this site says it covers “layer 3 to layer 9,” it’s half a joke and half a truth: from routing and packets, up through operating systems, applications, and the human decisions that actually cause outages.

Infra Atlas is a collection of field notes from that work. Some pages may include affiliate or referral links as a low-key way to support the site. Think of it as buying me a coffee while I write about why systems behave the way they do.