The packet's journey: what actually happens
You open a website hosted in Frankfurt from your office in Raffles Place. In rough order:
- Your laptop sends the request over Wi-Fi to your office router.
- Your router sends it over fibre to your ISP's nearest aggregation point.
- Your ISP's network carries it to a major data centre — likely in Tai Seng, Jurong, or Loyang.
- At that data centre, your ISP either hands the packet directly to a network that can reach Frankfurt (via peering at an IXP), or pays an upstream transit provider (Tata, NTT, Telia, Cogent) to forward it.
- The packet rides one or more submarine cables (likely SEA-ME-WE 5/6, AAE-1, IMEWE, or similar) under the Indian Ocean and through the Red Sea or around the Cape, surfacing at a Mediterranean landing station such as Marseille.
- From the European landing station, terrestrial fibre carries it to the destination data centre in Frankfurt.
- At Frankfurt, it crosses one or more networks inside the data centre, reaches the server, and the server's response retraces a similar (but not necessarily identical) path back.
The whole round trip typically takes ~160 ms. You experience it as "the page loaded." Every term in that journey (peering, transit, IXPs, submarine cables, landing stations) is part of the system this article unpacks.
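That ~160 ms is mostly physics. Light in fibre travels at roughly two-thirds of its vacuum speed, about 200,000 km/s, and the cable path is much longer than the great-circle distance. A back-of-envelope sketch (the ~15,000 km path length is an assumed figure for a typical Singapore–Europe cable route, not a measurement):

```python
# Sanity-checking the ~160 ms round trip against physics. Light in
# silica fibre travels at ~2/3 its vacuum speed, and the cable path
# (hugging coastlines, transiting the Red Sea) is assumed at ~15,000 km,
# well beyond the ~10,300 km great-circle distance.

SPEED_IN_FIBRE_KM_S = 200_000   # ~0.66 c
CABLE_PATH_KM = 15_000          # assumed end-to-end fibre distance

one_way_ms = CABLE_PATH_KM / SPEED_IN_FIBRE_KM_S * 1000
print(f"Propagation alone: ~{one_way_ms:.0f} ms one way, "
      f"~{2 * one_way_ms:.0f} ms round trip")
# -> ~75 / ~150 ms. The rest of the observed ~160 ms is routing,
#    queueing, and serialisation at each hop.
```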
Submarine cables
Roughly 99% of intercontinental internet traffic travels on submarine cables — fibre-optic bundles laid on the ocean floor between coastal landing stations. Satellite is a niche by comparison; even Starlink ultimately hands most of its traffic into fibre at ground stations.
Some structural facts:
- A modern submarine cable carries multiple fibre pairs, each capable of 20+ Tbps; the total design capacity of any major cable runs into the hundreds of Tbps (a back-of-envelope check follows this list).
- Cables are amplified every 50–100 km by repeaters powered through a copper conductor running the length of the cable.
- Major cables cost USD 300 million to 1 billion+ to build, take 3–4 years from announcement to in-service, and are typically funded by consortia (carriers, hyperscalers) rather than a single owner.
- Cables get cut, most often by ship anchors and fishing trawlers in shallow water, and occasionally by undersea landslides or seismic events. Globally, there's a fault every few days on average. Operators maintain spare capacity, and traffic re-routes automatically via BGP when a cut happens.
- Hyperscalers (Google, Meta, Microsoft, Amazon) now finance most new builds. Examples landing in Singapore: Echo and Bifrost (US–SG–Indonesia), Apricot (Asia loop), SJC2 (Southeast Asia–Japan).
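To make the capacity and repeater figures concrete, here's a quick sketch; the fibre-pair count, cable length, and repeater spacing are illustrative assumptions within the ranges above, not the spec of any particular system:

```python
# Back-of-envelope numbers for a modern cable, using the ranges above.
# All inputs are illustrative assumptions.

FIBRE_PAIRS = 16            # modern builds run roughly 12-24 pairs
TBPS_PER_PAIR = 20          # the 20+ Tbps figure above
CABLE_LENGTH_KM = 12_000    # assumed: a Singapore-Europe-scale route
REPEATER_SPACING_KM = 80    # inside the 50-100 km range above

print(f"Design capacity: ~{FIBRE_PAIRS * TBPS_PER_PAIR} Tbps")   # ~320 Tbps
print(f"Repeaters on the route: ~{CABLE_LENGTH_KM // REPEATER_SPACING_KM}")
# -> ~150 amplifiers, every one powered over that copper conductor.
```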
Landing stations
A submarine cable terminates at a landing station — typically a low building on the coast where the cable comes ashore, is connected to terrestrial fibre, and the high-voltage cable power is terminated. Singapore's main submarine cable landings are at Tuas, Changi, and Katong, with a smaller historical site at Pasir Ris.
From the landing station, terrestrial fibre carries the traffic to the major data centre clusters (Tai Seng, Loyang, Jurong, and Tampines) where it can be cross-connected to the networks that need it. The physical fibre routes between landing stations and data centres are themselves planned for redundancy; carriers maintain at least two diverse paths.
Data centres & meet-me rooms
The big carrier-neutral data centres (Equinix SG1–SG6, Digital Realty / formerly Global Switch, ST Telemedia DCs, Singtel DC East & West) play a critical role: they're where many different networks physically meet. Each network rents a cage, runs its equipment inside, and pays the data centre to install short cross-connects (fibre patch leads) to other networks' cages.
These cross-connects sit in a specially managed area sometimes called the meet-me room (MMR). The denser a data centre's MMR, the more valuable that facility, because every additional network present is one more network you can reach directly instead of through an intermediary.
Equinix SG1 in Tai Seng is widely regarded as one of the most network-dense facilities in Asia. The reason cloud providers, big SaaS, and CDNs all put presence there is so they can hand traffic to each other and to carriers directly, avoiding the cost and latency of going through anyone else.
Internet exchanges (IXPs)
An Internet Exchange Point is a shared switching fabric inside a data centre where many networks can plug in once and exchange traffic with all the others. Instead of each pair of networks running a private cross-connect between them, they each connect to the IXP's switch and use it as a meeting point. Massive economy of scale.
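That economy of scale is easy to quantify: bilateral cross-connects grow with the square of the number of networks, while IXP ports grow linearly. A quick illustration:

```python
# Connecting n networks pairwise needs n*(n-1)/2 cross-connects;
# an IXP needs just one port per network.

def pairwise_links(n: int) -> int:
    return n * (n - 1) // 2

for n in (10, 100, 500):
    print(f"{n:>3} networks: {pairwise_links(n):>6} bilateral "
          f"cross-connects vs {n} IXP ports")
# ->  10 networks:     45 bilateral cross-connects vs 10 IXP ports
#    100 networks:   4950 bilateral cross-connects vs 100 IXP ports
#    500 networks: 124750 bilateral cross-connects vs 500 IXP ports
```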
Singapore hosts two major IXPs:
- SGIX (Singapore Internet Exchange) — neutral, member-governed, the largest by participant count.
- Equinix Internet Exchange Singapore — commercial, runs across Equinix's SG facilities.
Through these IXPs, hundreds of networks exchange traffic locally. When a Singapore eyeball ISP (StarHub, Singtel, M1) and a Singapore-hosted content network are both at the same IXP, traffic between them stays in Singapore and takes single-digit milliseconds — instead of traversing an undersea cable to a hub in Hong Kong or Tokyo.
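The saving is mostly distance. A rough propagation-only comparison, assuming a ~2,600 km one-way cable distance to Hong Kong and ignoring queueing and routing overhead:

```python
# Propagation-only comparison: local peering vs "tromboning" the same
# traffic via a regional hub like Hong Kong. Distances are assumptions
# (~50 km across metro Singapore, ~2,600 km one way to Hong Kong).

SPEED_IN_FIBRE_KM_S = 200_000

local_rtt_ms = 2 * 50 / SPEED_IN_FIBRE_KM_S * 1000
# Tromboned: the request travels SIN -> HK -> SIN, and so does the reply.
via_hk_rtt_ms = 2 * (2 * 2_600) / SPEED_IN_FIBRE_KM_S * 1000

print(f"Peered at a local IXP: ~{local_rtt_ms:.1f} ms")   # ~0.5 ms
print(f"Tromboned via HK:      ~{via_hk_rtt_ms:.0f} ms")  # ~52 ms
# Real-world figures are higher once routers and queueing are added,
# but the ratio is what matters.
```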
Peering vs transit — the two ways networks talk
Every independent network on the internet is identified by an Autonomous System Number (ASN). Singtel is AS7473, StarHub is AS4657, Google is AS15169, Cloudflare is AS13335, and so on. There are over 100,000 allocated ASNs globally.
Two ASNs can connect in two fundamentally different ways:
- Peering. Two networks agree to exchange traffic between their own customers for free (or for a flat port fee at an IXP). No money changes hands for the traffic itself, just the shared infrastructure cost. Works when both networks benefit roughly equally.
- Transit. One network pays another to carry its traffic to everywhere on the internet. Tier-1 carriers (NTT, Tata, Telia, Cogent, Lumen) sell transit; smaller networks buy it. It's priced per Mbps of throughput, usually with 95th-percentile billing (sketched below).
Most non-trivial networks have a mix: peering with as many networks as possible (to reduce transit cost and improve latency) plus transit from one or two upstreams to cover everything peering doesn't.
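95th-percentile (or "burstable") billing deserves a sketch, since it shapes how networks buy transit: the carrier samples port utilisation (commonly every 5 minutes over the month), discards the top 5% of samples, and bills the highest remaining one. A minimal illustration with made-up traffic data; the sampling interval and per-Mbps rate are assumptions:

```python
# 95th-percentile transit billing, sketched with made-up traffic data.
# The carrier samples utilisation every 5 minutes for a month, discards
# the top 5% of samples (forgiving short bursts), and bills the highest
# remaining sample at the contracted per-Mbps rate.

import random

random.seed(42)
samples = [random.gauss(400, 60) for _ in range(8640)]    # steady ~400 Mbps
samples += [random.gauss(2000, 200) for _ in range(100)]  # brief spikes

samples.sort()
billable_mbps = samples[int(len(samples) * 0.95)]  # 95th-percentile sample

RATE_USD_PER_MBPS = 1.00  # assumed contract rate, not a market quote
print(f"Billable: {billable_mbps:.0f} Mbps "
      f"-> USD {billable_mbps * RATE_USD_PER_MBPS:,.0f}/month")
# The ~2 Gbps bursts never show up on the bill: they all fall inside
# the discarded top 5%.
```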
BGP — the protocol that picks the route
BGP (Border Gateway Protocol) is how every network on the internet tells every other network "I can reach these IP address ranges via me." A router that speaks BGP holds the global routing table — which in 2026 contains roughly a million IPv4 prefixes and 200,000+ IPv6 prefixes — and uses a set of policy rules to decide which next-hop is best for each destination.
How BGP route selection works, in rough order of priority (a simplified code sketch follows the list):
- Longest-prefix match. A more specific prefix always wins (a /24 beats a /16 for the same addresses).
- Local preference (set by the network operator's policy) — usually used to prefer peering routes over transit routes.
- AS-path length. Fewer ASNs on the path = preferred. Approximates the "shortest" route.
- MED, IGP cost, and router-ID act as tiebreakers when everything else is equal.
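A toy version of that selection order, using hypothetical routes on documentation prefixes and private ASNs; real BGP implementations apply several more steps than this sketch shows:

```python
# Simplified BGP best-path selection following the order above. Routes,
# prefixes, and ASNs are all hypothetical (documentation ranges). Real
# BGP adds more steps: origin type, eBGP vs iBGP, IGP cost, router ID.

from ipaddress import ip_address, ip_network

# (prefix, local_pref, as_path, med, next_hop)
ROUTES = [
    ("203.0.113.0/24", 200, [64500, 64511],        0, "peer at SGIX"),
    ("203.0.113.0/24", 100, [64999, 64501, 64511], 0, "transit upstream"),
    ("203.0.0.0/16",   200, [64500],               0, "peer at SGIX"),
]

def best_route(dest: str) -> tuple:
    dest_ip = ip_address(dest)
    candidates = [r for r in ROUTES if dest_ip in ip_network(r[0])]
    # Most-specific prefix wins outright; among equals, prefer higher
    # local-pref, then the shorter AS path, then the lower MED.
    return max(candidates, key=lambda r: (ip_network(r[0]).prefixlen,
                                          r[1],          # local-pref: higher
                                          -len(r[2]),    # AS path: shorter
                                          -r[3]))        # MED: lower

print(best_route("203.0.113.10"))
# -> the /24 learned from the SGIX peer: most specific, highest local-pref
```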
BGP is the reason the internet routes around failures. When a submarine cable is cut, the routes that used it disappear from BGP, neighbouring routers re-converge on the next-best paths, and most traffic shifts within seconds, sometimes adding 20–100 ms of latency until repair but rarely going dark entirely.
BGP is also the reason for some of the internet's most spectacular outages — a single misconfigured route announcement (a "BGP leak" or "BGP hijack") can pull global traffic into the wrong network. Recent industry adoption of RPKI (Resource Public Key Infrastructure) and route filtering has reduced but not eliminated this risk.
CDNs & the long tail of "the cloud"
For high-traffic web services, the actual content rarely comes from the origin server you imagine. CDNs (Content Delivery Networks) like Cloudflare, Akamai, Fastly, and Amazon CloudFront cache content at hundreds of points of presence around the world — including multiple PoPs in Singapore.
This is why a website "hosted in Frankfurt" often feels fast from Singapore: it isn't actually being served from Frankfurt at all. It's being served from a Cloudflare edge in SG1 or SG3, three milliseconds from your office. The "origin" only sees traffic on cache misses.
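A minimal model of that hit/miss behaviour, assuming a plain TTL-based edge cache; real CDN logic adds cache keys, purging, and revalidation:

```python
# Toy edge cache: the origin only sees traffic on misses. The 60 s TTL
# is an assumption; real CDNs honour origin cache headers.

import time

TTL_SECONDS = 60
_cache: dict[str, tuple[float, str]] = {}  # url -> (fetched_at, body)

def fetch_from_origin(url: str) -> str:
    print("  MISS -> full round trip to the Frankfurt origin")
    return f"<html>content for {url}</html>"

def edge_get(url: str) -> str:
    now = time.time()
    if url in _cache and now - _cache[url][0] < TTL_SECONDS:
        print("  HIT  -> served from the Singapore edge")
        return _cache[url][1]
    body = fetch_from_origin(url)
    _cache[url] = (now, body)
    return body

edge_get("https://example.com/")  # the first visitor pays the ~160 ms
edge_get("https://example.com/")  # everyone after gets the local copy
```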
Cloud hyperscalers (AWS, Azure, Google Cloud) all operate Singapore regions with multiple availability zones for the same reason. "Migrate to the cloud" almost always means migrating to local cloud capacity, not to some distant US data centre.
Why Singapore matters in this picture
- Geographic chokepoint. Singapore sits at the eastern end of the Indian Ocean, the western entry to the South China Sea, and the natural shore landing for cables between East Asia, Australia, the Indian subcontinent, and Europe via the Middle East. Most major submarine cables in the region either land here or pass close by.
- Stable jurisdiction. Singapore has been the regional choice for hyperscaler and carrier capex for two decades. Predictable regulation, deep talent, strong rule of law: global networks trust their equipment here.
- Dense data centre footprint. 50+ commercial data centres in a small footprint, with the major carrier-neutral ones intensely cross-connected.
- Low latency to the rest of Asia. < 10 ms to KL, ~35 ms to Hong Kong, ~70 ms to Tokyo and Sydney, ~50 ms to Jakarta. Excellent for serving an ASEAN-wide customer base from one footprint.
The flip side: Singapore is also one of the most power-constrained markets for data centre growth in the region. The DC moratorium of 2019–2022 and the new sustainability standards have shaped where new capacity gets built — Johor, Batam and other regional sites are now part of most Singapore-anchored deployment plans.
What this means for buyers
Three practical takeaways:
- Diversity isn't just about line count. Two fibre lines from the same telco, riding the same submarine cable, fail together. Real diversity means different carriers, ideally with different cable systems on the international side.
- Pick data centres for their network density, not just floor space. If you're hosting infrastructure that needs cheap cross-connects to many networks (cloud on-ramps, CDN nodes, SaaS providers), put it where they already are.
- Performance is an architecture choice. Using a CDN, picking a cloud region in Singapore (not Tokyo or Sydney), peering with the right partners at SGIX — these are the levers that actually move user latency. Last-mile bandwidth alone won't.
Where to go next
- Step back: LAN vs WAN Basics for the wired and wireless side of how your traffic gets to the internet in the first place.
- Wireless layer: The Modern Wireless Stack — how the local radio fits into the bigger picture.
- Find a partner: our Telecom Provider buyer's guide and data centre vendor directory.
Browse data centre & carrier providers
Looking for a data centre, carrier or international connectivity partner in Singapore?