So, we rang up, a sales guy picked up the phone and got a million pound pay day, and resigned that evening.
But our customers were happy so that's what counts :-)
Good times - they were a wonderful company, thank you
Reference: https://en.wikipedia.org/wiki/T-carrier
The T1/T2/T3 and E1/E2/E3 hierarchies join at the STM-1 level: An STM-1 can be subdivided as 4 x E3s or 3 x T3s.
This means that on an EU<->US SDH link, an STM-1 can be demuxed into either E3s or T3s, so you can have both standards on the same fiber.
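For anyone who wants to check the arithmetic, here is a quick sketch using the standard nominal line rates (overhead accounting omitted); both groupings fit inside an STM-1:

```python
# Nominal PDH/SDH line rates in Mbps (standard ITU-T/ANSI figures).
E3 = 34.368     # European 3rd-level PDH
T3 = 44.736     # North American DS3
STM1 = 155.52   # SDH STM-1 line rate

print(f"4 x E3 = {4 * E3:.3f} Mbps")   # 137.472
print(f"3 x T3 = {3 * T3:.3f} Mbps")   # 134.208
# Both fit under the STM-1 rate, which is why one STM-1 on a
# transatlantic link can be demuxed either way.
assert 4 * E3 < STM1 and 3 * T3 < STM1
```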
Light only travels at around 2/3 of its vacuum speed within fibre.
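A rough back-of-envelope on what that 2/3 factor means for latency, assuming a refractive index around 1.5 and a hypothetical 6,600 km transatlantic route (both figures are assumptions, not specs for this cable):

```python
# One-way propagation delay through fiber, assuming light travels
# at roughly 2/3 of its vacuum speed (refractive index ~1.5).
C_VACUUM_KM_S = 299_792              # speed of light in vacuum, km/s
C_FIBER_KM_S = C_VACUUM_KM_S * 2 / 3

def one_way_delay_ms(route_km: float) -> float:
    """Propagation delay only; ignores equipment and queuing delay."""
    return route_km / C_FIBER_KM_S * 1000

# Hypothetical ~6,600 km US<->EU route.
print(f"{one_way_delay_ms(6600):.1f} ms")  # ~33 ms one way
```

Hollow-core fiber, where light travels close to vacuum speed, would cut that to roughly 22 ms on the same route.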
The previous decades have been about bandwidth. It's time we shift our focus to latency. 5G is already part of that, and 6G is pushing it further as a standard feature. I wish other parts of the network would start thinking about latency too.
Maybe not just the network, but everything, from our input devices to displays. We need to enter the new age of Minimal Latency Computing.
The most-likely outcome is a few happy geologists/geophysicists and a number of very-sad HFT underwriters.
https://en.wikipedia.org/wiki/Mohorovi%C4%8Di%C4%87_disconti...
I long for the days when the US loved building infrastructure.
Well, yes and no. I recall they wanted to pursue hollow cables in the early days of optical cabling, but it turned out solid fiber was the answer.
(sorry, can't find a good reference)
So FTTC (Fiber Through The Core) is what you want.
I don't know of any being deployed long distance, though in principle they'd be really valuable for intercontinental backbones. Starlink fills a huge gap in existing infra, and there are places that won't see any sort of fiber, let alone fancy microstructured fiber, for the foreseeable future (or ever, obviously in the case of ships/aircraft). But the bandwidth isn't great. Each current sat does I think 20 Gbps, and though no doubt that'll increase over time, that's literally orders of magnitude below this single cable alone. Having the sats support direct ground optical links for backbone usage might be interesting someday, but weather attenuation will never stop being a problem with that. Starlink is filling in the gaps for fiber infrastructure, not replacing it. They're complementary.
So I agree it would be great to see more advanced fiber deployed long distances and start to shrink latency for everyone, and interesting to know what technical obstacles remain if any (maybe a lot remain?). A 40% speed boost while still having massive bandwidth isn't nothing.
How do you splice a hollow optic fibre?
Each hop will add latency since signal needs regeneration. So it’s not clear to me a swarm of satellites is a real winner from a latency POV. Furthermore, given costs to put the constellation up there, it’s extremely expensive on a $/bit basis and not sure how it could compete against fiber.
The value of Starlink is providing service in areas lacking existing broadband infrastructure where the cost to provide service exceeds the cost of Starlink.
But Starlink will never match the bandwidth and reliability that fiber can do, nor is it meant to. So it's not a replacement, just another awesome option.
I mean input lag [1] is easily 50ms. But some of these fixes require software changes, and anything involving software is expensive. The cost of this new cable is only $300M. Hardware innovation is getting faster and cheaper than software.
Webdevs: is there a reason why a page would be designed so that JS being on is mandatory? Especially for something as prosaic as a couple of paragraphs of text.
If you meant mandatory in terms of the actual medium requiring it, I can only point to interactive applications; aside from that, I don't think it would actually be mandatory.
Gatsby + Netlify with a CMS-as-a-service like Contentful or Prismic will lead you to a good result. We made e.g. https://fox-it.com/ using that; its back-end is Wordpress but it's drained empty to rebuild the website. Note how it works without JS: the dropdowns don't work but they fall back to full-page navigation. Note how with JS enabled, all the content shows up instantly. This is how it's supposed to be done.
Here's an idea: add some HN logic to automatically move a comment that begins with "TL;DR" to the top of the thread.
I think in the case of Google, it's because they've been told they are the best developers, the top 1% of SWEs, they went through rigorous interviews, are paid a small fortune (twice as much as they would get at a regular coding job), etc.
So it's dick shaking. They need to show the world that they're better than plain HTML websites, that they have a massive schlong, that they out-chadded the vast majority of software devs. Plain HTML? Psht, we can invent our own language, gonna put those six years of uni to work! Wordpress? This is beneath us! It has to be a client-side rendered JS-pulled-through-GWT behemoth because on my system it's... wait, it's slower, but never mind that, it's technologically ALPHA.
edit: actually looked at the source, looks like a Polymer / Web Components website. I've had to work in that once, it was dreadful compared to libraries used by real people.
Is this due to more and more content simply generated by javascript frameworks?
I also wonder what kind of permissions and licenses you need to seek to run a cable across the ocean floor?
Yes it's long, but it's so worth it!
And there are likely laws about not obstructing the shoreline.
https://www.cnn.com/2019/07/25/asia/internet-undersea-cables...
As for physical security, there isn’t much on the sea floor. There are various instances of nation states tapping cables due to the ease of access when it comes to actually “listening” to the data. Obviously the issue there is getting to the undersea cable.
https://www.theatlantic.com/international/archive/2013/07/th...
Multiple nations have specialised subs to tap into them. I doubt you'd find anyone willing to make such a guarantee. They are impossible to secure in any way and you need to rely on security assurances at different layers instead.
Now we just compromise the servers/routers. https://gizmodo.com/the-nsa-actually-intercepted-packages-to...
It's very much still happening. Metadata is enough for intel purposes, storage is ridiculously cheap, and post-quantum breaks of key exchange are forever 20 years away, like fusion.
https://www.nytimes.com/2015/10/26/world/europe/russian-pres...
https://www.theatlantic.com/international/archive/2013/07/th...
https://www.zdnet.com/article/spy-agency-taps-into-undersea-...
[0] https://www.geoportail.gouv.fr/carte?c=5.372494831681247,43....
[1] https://www.sigcables.com/index.php/cableliste/fiche_cable/5...
[2] https://twitter.com/jlvuillemin/status/1238414261774401537
[3] https://twitter.com/jlvuillemin/status/1238433769935319042
[4] https://twitter.com/jlvuillemin/status/1238479381145751553
It is generally buried either under the sand or inside concrete. But yes, there are places where you can get very close to these things if you know what you are looking at.
https://en.wikipedia.org/wiki/Cable_landing_point
Here is a pic of the landing for the US base in Cuba.
https://www.dvidshub.net/news/186633/uct-1-unit-choice-gtmo-...
https://media.wired.com/photos/59546c71be605811a2fdcfd0/191:...
https://cloud.google.com/blog/products/infrastructure/announ...
I presume that the other trillion-dollar companies are getting in on the action too.
[0]: https://zeenea.com/metacat-netflix-makes-their-big-data-acce...
1 - https://blogs.loc.gov/thesignal/2012/04/a-library-of-congres...
What is the limit on how many fibres can go in a cable? Should we expect future cables to have 50 fibres, or 100, or 1000, or more?
I suspect that the repeaters and associated power equipment along the line are pretty big. So the fact that this cable is able to "share" that equipment across the 12 fibers is a breakthrough in technology.
Their Oregon to Japan cable, 9000km and laid in 2016, cost $300M.
https://www.computerworld.com/article/2939316/googles-60tbps...
The "state-of-the-art" AFAIK is to use many wavelengths per fiber (~192 of them, each transporting at up to 100Gbps); this is known as DWDM.
So with SDM, you just have more fibers? So what? It seems like I am missing something here. Why is "SDM" the key concept rather than "DWDM"? Why not just say DWDM with 12 fiber pairs?
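Plugging the thread's numbers into a back-of-envelope (the ~192 wavelengths, 100 Gbps per wavelength, and 12 fiber pairs are figures from these comments, not official specs):

```python
# DWDM sets the capacity per fiber; SDM ("just more fibers")
# multiplies it by the number of fiber pairs.
wavelengths_per_fiber = 192   # assumed, from the comment above
gbps_per_wavelength = 100
fiber_pairs = 12

per_fiber_tbps = wavelengths_per_fiber * gbps_per_wavelength / 1000
total_tbps = per_fiber_tbps * fiber_pairs
print(per_fiber_tbps, total_tbps)  # 19.2 Tbps per fiber, 230.4 Tbps total
```

Which lands in the right ballpark for the ~250 Tbps figure quoted for the cable.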
Multi-mode fibers are not feasible for long-distance transmission. For long-distance communications, the suggested approach may be better implemented with multi-core fibers.
[1] https://siliconangle.com/2013/07/19/how-the-nsa-taps-underse...
https://golem.ph.utexas.edu/category/2014/10/new_evidence_of...
If the encryption used is flawed, they could see whatever they want.
They can see who is talking to who, and when:
But Snowden showed us that a lot of it is scooped up and warehoused. Maybe they can see your traffic in a decade or two?
A lot of surveillance is done both *illegally* and secretly.
Forcing carriers to install black boxes next to their routers is not always the preferred choice.
While a cable is being tapped, there will be a suspicious change in signal strength, and various signal reflections will tell the cable operators where the tap is. That's bad for a spy agency that wants to remain undetected.
Instead, they break the cable at three points deliberately. The middle point is where they put the tap, and the spy agency will repair it themselves. The breaks on either side are simply so that the cable operators don't know where the tap has been inserted, and have to be repaired by the cable operator. That gets expensive, since it will typically happen 3 or 4 times for a new cable install (3 or 4 countries want access to the data).
Cable repair operations are typically public knowledge (they require specialized ships), so anyone who fancies can crunch the data and see how often a cable breaks in multiple places before being repaired to know how often it's tapped... Mediterranean cables seem to see the most taps.
I'm sure a Shakespeare play or The Great Gatsby is barely a few megabytes.
But if you asked Joe Shmoe on the street "In Great Gatsbys, how big was the last picture your iPhone took?", they would rightly have zero idea.
It's so useless.
I used to use that metric when folks asked why it took so long to debug. Like, our project is 600,000 LOC and more complicated than any of his works. He didn't have it all memorized and neither do I. It's a metric PMs can understand.
So only the raw texts, probably. 10TB sounds about right for that.
[1] https://nplusonemag.com/online-only/online-only/the-library-...
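For fun, a sketch of the "Gatsby" as a data unit, with assumed figures (~50,000 words of plain text at ~6 bytes per word, and ~3 MB for a typical phone photo; both are rough guesses, not measurements):

```python
# The "Great Gatsby" as a unit of data, with assumed sizes.
GATSBY_BYTES = 50_000 * 6          # ~300 KB of plain text (assumed)
iphone_photo_bytes = 3_000_000     # ~3 MB compressed photo (assumed)

print(iphone_photo_bytes / GATSBY_BYTES)  # ~10 Gatsbys per photo
```

So one snapshot is on the order of ten novels of raw text, which is exactly why it's a useless intuition pump for Joe Shmoe.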
Seems crazy, since overseas transit (TCP & single-channel) is usually latency (or loss) bound.
I would expect it's better than going over public transit and legacy subsea fiber, but it would have been useful to see some comparison tests between POPs.
What's terrifying is that Google described each of their B4 sites as having 60tbps uplinks in 2017, growing at 100x per 5 years. So a 250tbps undersea cable is nice but when you think about it probably not enough to make intercontinental transfer too cheap to meter.
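Taking the "100x per 5 years" claim at face value, a naive extrapolation (not a real forecast) shows why a single cable doesn't move the needle:

```python
# Naive extrapolation of B4 site uplink capacity, assuming
# smooth 100x growth every 5 years from 60 Tbps in 2017.
BASE_TBPS, BASE_YEAR = 60, 2017

def uplink_tbps(year: int) -> float:
    return BASE_TBPS * 100 ** ((year - BASE_YEAR) / 5)

print(uplink_tbps(2022))  # 6000.0 Tbps -- dwarfing a 250 Tbps cable
```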
Damn. Anyone else just agog at this figure?
(And I assume that Google has enough demand: If it didn't, why would they build such a large cable?)
No. A good market-socialist solution, in situations where network investment (electric grid, railway, telecom) creates natural monopolies, is forcing separation of the network and the content.
For example, the electric grid owner must allow others to sell and buy electricity through the network. They can only collect a maintenance fee, set so that it can't be used to distort energy markets in favor of the company owning the grid.
In telecom it usually applies only for the last mile.
Not a big deal, but...sheesh. It's not like it was a troll comment; just a relatively lighthearted poke.
What's the issue?
On another note - the third link captures the back button and doesn’t let you get back to hacker news (at least on mobile). What a shitty site.
Long-press the back button; a popup will show your navigation history, and you can click the last link before entering the broken site.
That said, the behavior is absolutely unacceptable.
[1]: And in Firefox desktop you can also do this but I can't remember if it is long-click or right click.
How the great have fallen...
They are losing billions because they are paying for growth. It is the proper strategy.
This is what happens when a marketing company starts a cloud, right? It turns it into a loss leader and everyone who buys it becomes an apologist at all costs.
I don't get it.
GCP is not a loss leader. The unit economics are fine.