Why Latency is Political, Not Just Physical
The internet is routed by agreements and incentives, not maps, and that’s why “nearby” can still feel far.
Latency is usually explained like a physics problem: light travels fast through fiber, so farther means slower. That’s true, but it only explains part of what you actually see.
Two users 20 km apart can have a 5× latency difference to the same server — not because of fiber length, but because their ISPs buy transit from different networks and peer in different places. The physical path exists. Traffic doesn’t always take it.
Calling this “political” isn’t about ideology. It means the path your packets take is the outcome of contracts, cost negotiations, and business relationships, not a map.
The modern internet is a graph of networks, not a straight line
A practical mental model: you are not “on the internet.” You are on an ISP’s network, an autonomous system (AS, identified by an ASN) with its own routing policy, cost structure, and commercial relationships.
From there, your traffic reaches other networks through:
- Direct peering
- Paid transit
- Interconnection at Internet Exchange Points (IXPs)
Border Gateway Protocol (BGP) doesn’t find the shortest path. It finds an allowed path.
That’s why “the closest server” is not a coordinate problem. It’s a connectivity problem. The fastest route is usually the one with fewer intermediaries and uncongested handoffs, and those only exist where networks choose (or are able) to connect.
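The “allowed path” idea can be made concrete with a toy model of BGP’s decision process. This is a deliberate simplification (real BGP compares many more attributes), and the ASNs and numbers below are illustrative, but it captures the key point: policy (local preference) is evaluated before AS-path length, so a longer path through a preferred peer beats a shorter path through expensive transit.

```python
from dataclasses import dataclass

@dataclass
class Route:
    prefix: str
    as_path: list[int]   # sequence of ASNs the announcement traversed
    local_pref: int      # policy knob set by the receiving network

def best_path(routes: list[Route]) -> Route:
    """Simplified BGP decision: higher local_pref wins first;
    AS-path length only breaks ties among equally preferred routes."""
    return max(routes, key=lambda r: (r.local_pref, -len(r.as_path)))

# A short path through an expensive transit provider...
via_transit = Route("203.0.113.0/24", as_path=[64500], local_pref=80)
# ...loses to a longer path via a settlement-free peer.
via_peer = Route("203.0.113.0/24", as_path=[64501, 64502, 64503], local_pref=200)

chosen = best_path([via_transit, via_peer])
print(chosen.as_path)  # → [64501, 64502, 64503]
```

Nothing here is about distance: `local_pref` is set by operators based on cost and contracts, which is exactly where the “political” layer enters.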
“Physically connected” is not the same as “directly connected”
Even if two data centers are physically close and fiber clearly exists between them, traffic may still detour if the networks involved don’t have a direct interconnection agreement.
A simple analogy: two neighboring cities may be separated by a bridge, but if that bridge is private and closed to the public, travelers are forced onto toll highways through a distant hub.
On the internet, that “permission” is peering (often settlement-free) or a paid transit relationship. Without it, packets are handed off through intermediary networks that are allowed to carry the traffic, even if that means going the long way around.
The political layer of routing (without ideology)
Networks optimize for constraints that are not purely technical:
- Transit cost vs peering
- Capacity at interconnects
- Risk concentration
- Market power (who can say no)
This is where “politics” shows up. Political here means negotiated power, the outcome of contracts, leverage, and economic reality, not packet physics.
Cloudflare has been unusually explicit about this in public writing: bandwidth and transit costs vary dramatically by region, and some markets are dominated by incumbents that make local interconnection expensive or impractical.
When that happens, routing can look irrational on a map. The shortest physical path may not be the cheapest, the most reliable, or even available under current interconnection policies.
East Asia is close, but not always near
On a globe, East Asia is compact. Taipei, Hong Kong, Seoul, Shanghai, and Tokyo are geographically close compared to trans-Pacific routes.
But “close” doesn’t guarantee “near” in latency terms.
“Near” depends on where your ISP hands traffic off to the destination network or CDN, and that handoff depends on who peers with whom and where. If strong interconnects don’t exist locally, or are too expensive, traffic may hairpin through regional hubs that make sense economically, not geographically.
Taiwan as a case example: when “local” isn’t actually local
Taiwan is a clear illustration of how interconnection economics shape user experience.
Cloudflare has publicly described Taipei as one of the more expensive markets for bandwidth and transit, citing the influence of large incumbents (including HiNet). In the same discussion, Cloudflare explains that serving traffic in expensive markets has real cost implications, especially for free or low-margin traffic.
As a result, some users may not be served from the most local cache even if a Taipei presence exists, because anycast routing and traffic engineering still follow cost and interconnection realities.
The takeaway isn’t that any single provider is “good” or “bad.” The point is structural: the existence of a PoP in a city does not guarantee your users will reach it.
Why slowness feels random to users
From a user’s perspective, latency variance feels arbitrary because routing can change without your application changing.
BGP decisions shift. Interconnects congest. Traffic is rebalanced based on cost or availability. One overloaded handoff can dominate the entire end-to-end path, even if everything else is healthy.
Because peering and transit choices differ by ISP (ASN), two users in the same city, or even on the same street, can have very different “internet distances” to the exact same service.
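The “one overloaded handoff dominates the path” effect is visible in per-hop RTTs, the way you would read a traceroute. A toy sketch (hop names and numbers are illustrative, not measurements):

```python
# Cumulative per-hop RTTs in ms, as traceroute would report them.
hops = [
    ("isp-edge",    2.0),
    ("isp-core",    4.5),
    ("ix-handoff", 78.0),   # congested interconnect between networks
    ("cdn-edge",   80.5),
]

def worst_handoff(hops):
    """Return the adjacent hop pair contributing the largest RTT jump."""
    deltas = [
        (hops[i][0], hops[i + 1][0], hops[i + 1][1] - hops[i][1])
        for i in range(len(hops) - 1)
    ]
    return max(deltas, key=lambda d: d[2])

src, dst, delta = worst_handoff(hops)
print(f"{src} -> {dst}: +{delta:.1f} ms")  # → isp-core -> ix-handoff: +73.5 ms
```

Everything before and after the interconnect is healthy; one handoff accounts for over 90% of the end-to-end latency. A user on a different ISP that peers directly at the destination never crosses that hop at all.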
What this means for builders and operators
If you build for users across regions, “pick the nearest data center” is the beginning of the problem, not the solution. Measure latency by ASN, not just country. Reduce cross-region chattiness. Cache aggressively. Evaluate providers on their actual network connectivity, not just the VM specs on the pricing page.
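“Measure latency by ASN, not just country” can be sketched with a few lines of aggregation over real-user monitoring samples. The ASNs and RTT values below are illustrative (3462 is HiNet’s ASN, used here only as an example): a country-level median blends two very different populations into one misleading number.

```python
from collections import defaultdict
from statistics import median

# (country, asn, rtt_ms) samples; values are illustrative, not data.
samples = [
    ("TW", 3462, 120.0), ("TW", 3462, 135.0), ("TW", 3462, 110.0),
    ("TW", 9416,  18.0), ("TW", 9416,  22.0), ("TW", 9416,  20.0),
]

def median_rtt_by(key_index, samples):
    """Group samples by the given column and take the median RTT."""
    groups = defaultdict(list)
    for row in samples:
        groups[row[key_index]].append(row[2])
    return {k: median(v) for k, v in groups.items()}

print(median_rtt_by(0, samples))  # by country → {'TW': 66.0}, one blended number
print(median_rtt_by(1, samples))  # by ASN → {3462: 120.0, 9416: 20.0}
```

The country view says “Taiwan is at 66 ms”; the ASN view shows one network at 20 ms and another at 120 ms, which is the difference that actually tells you where to add a PoP or change providers.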
You can’t control BGP routing decisions. But you can stop being surprised by them.
Latency is about who is connected to whom, under what terms, and at what cost. That’s why it’s political, whether we use that word or not.
Written by the Infra Atlas author
I work on infrastructure and software systems across layers: writing code, shipping products, and dealing with the practical trade-offs of hosting, memory, and network behavior in production. When this site says it covers “layer 3 to layer 9,” it’s half a joke and half a truth: from routing and packets, up through operating systems, applications, and the human decisions that actually cause outages.
Infra Atlas is a collection of field notes from that work. Some pages may include affiliate or referral links as a low-key way to support the site. Think of it as buying me a coffee while I write about why systems behave the way they do.