Corruption and attempts to control markets are everywhere, and if you don't fight them, they will win.
In Switzerland, fiber needs to be accessible to all providers, which results in many places having fiber run by the local power provider and the large state-owned telephone company. This is what allows providers to offer 25 Gbit/s symmetric service for under 70 USD per month.
However, this did not stop the large state-owned telecom provider from attempting to kill competition by no longer running P2P fibers (1 or 4 fibers directly from a home to the local exchange building) and instead running P2MP (1 fiber to a splitter in the street with a backbone to the exchange), which requires active splitters (the environmental impact of this was completely ignored). This automatically prevents any other provider from offering a faster service than the phone company.
Even after a court case and then an injunction, they spent millions to expand this network, thinking they could somehow persuade the courts and use people complaining that they couldn't have fiber because of the injunction (they told customers on the phone that fiber was available but couldn't be connected due to a court case). In the end, however, they backpedaled, and it appears they will lose the case now.
Thanks to the small provider that took this to the courts (Init7), it appears we will keep a network that is open to competition and future-proof.
There are fines pending, but those are a double-edged sword. The taxpayer effectively pays them, since the majority stake belongs to the taxpayer. So a large fine is bad, and a small fine is bad because it's not a deterrent. The executives that caused the mess are already mostly gone (hence the backpedaling), but the correct action would be to claw back their pay and bonuses, or something like that, so the next "hot shots" don't try such shit again.
What use cases are there for a residential (or even most business) connection to be 25 Gb/s (or even >10 Gb/s)? Are there 'practical' applications for homes or offices to have this much?
At $WORK we have 10 Gbps to both our office and our DC, and we don't come close to saturating that.
There's really only so many Linux ISOs that you can download at home.
(I'm not "against" having it, just curious on possible uses.)
Games. These days the average new AAA game approaches 100 GB; the biggest ones, like Ark Survival, are 400 GB.
A 5 minute download vs a 50 minute download is a totally meaningful difference in quality of life. It might sound crazy but a top end gaming rig can definitely take advantage of that 10G connection.
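As a rough sanity check (back-of-the-envelope only, assuming the full nominal line rate is usable, which real-world protocol overhead and storage speed won't quite allow):

    # Back-of-the-envelope download times at different link speeds.
    def download_minutes(size_gb: float, link_gbps: float) -> float:
        return size_gb * 8 / link_gbps / 60

    for size in (100, 400):          # typical vs. worst-case AAA game, in GB
        for link in (1, 10, 25):     # link speed in Gbit/s
            print(f"{size} GB over {link} Gbit/s: "
                  f"{download_minutes(size, link):.1f} min")
    # 400 GB: ~53 min at 1 Gbit/s, ~5.3 min at 10 Gbit/s, ~2.1 min at 25 Gbit/s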
Honestly when it comes to bandwidth: if you build it, they will come? It's a chicken and egg problem most of the time. People aren't going to invent a new widget if there's no infrastructure and no sign of there ever being infrastructure to support it.
For instance: who in their right mind would have built Netflix in 1992?
We were experimenting with IGMP on the Mbone back then, in terms of teleconference and webinar capabilities. We sort of envisioned that large groups of people would tune in simultaneously to live events, but Netflix's VOD and YouTube made for a markedly different architecture.
“Futureproofing” feels like a lame answer but when you’re talking about laying cables in the ground it’s a good one. Just imagine a future where we’re streaming 4K 360 degree video for VR headsets or something.
I’m sure all the folks that have had to tear up 100Mbps LAN cables wished there was 1000Mbps cable in there instead.
The laying of cables is the same: it's single-mode fibre with a PON architecture. Once you have that possible future speeds are 'infinite' with end-point upgrades.
I'm asking: why even get the 25Gig service over the 10Gig? What are you doing with a 25G down/uplink that you cannot accomplish with 10G?
If the price is under $70 for 25G, I'd imagine most people pay half that for less bandwidth, but the 25G works for a number of people who need or want it, plus is great marketing. Also selling 25G that's underutilized is probably substantially cheaper.
I get that it's stated. It still seems awful. It's like owning a euro performance car in an area with exclusively straight roads. I mean yeah, you can go fast, but that's it. Most of the fun can't happen.
I used my symmetrical upload for work. I'd copy virtual machines home to fix and then copy them back. I hosted VMs that I used as a template. I'd host huge ISOs so I didn't have to carry install DVDs with me.
Past that, it's pretty sweet to be able to access every bit of my home network as if I were there.
But then I moved to an area with one internet option, a cable ISP. I now drive a LOT more than I used to.
Ever transfer a file bigger than a few hundred gigs? This makes that take less time. Even if I'm only pulling down files at 1800 Mbps, I've got plenty of headroom for everything else.
It's not PON; that's why they are able to provide 25 Gbit/s. And they chose 25 Gbit/s because the switches for it were only slightly more expensive than the 10 Gbit/s versions would have been. They did a talk about it some time ago.
True, but the "state-owned ISP" in Switzerland only has XGS-PON hardware. So Init7 could only provide 25 Gbit/s in the parts of the network that aren't PON.
Latency of fiber vs copper isn’t a thing you will notice within any cities. The latency you are probably thinking of is related to shared mediums and having to wait to transmit (e.g. DOCSIS).
>Latency of fiber vs copper isn’t a thing you will notice within any cities. The latency you are probably thinking of is related to shared mediums and having to wait to transmit (e.g. DOCSIS).
You're saying fiber's low latency (within a metro area) is more tied to its carrying capacity as opposed to its transmission nature. You're correct. Maybe also pedantic, but certainly correct.
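For a rough sense of scale (plain propagation math, not any particular provider's numbers): light in fiber travels at roughly two thirds of c, so within a city the medium itself contributes a negligible fraction of a millisecond.

    # One-way propagation delay over fiber: ~200,000 km/s in glass,
    # so distance dominates, not the choice of medium.
    SPEED_IN_FIBER_KM_PER_S = 200_000

    for distance_km in (2, 20, 200):
        delay_ms = distance_km / SPEED_IN_FIBER_KM_PER_S * 1000
        print(f"{distance_km:>4} km: ~{delay_ms:.2f} ms one way")
    # A 20 km metro run is ~0.1 ms; queueing and shared-medium scheduling
    # (e.g. DOCSIS) dwarf that.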
> I'd buy an old shed and turn it in to a make-shift data centre, plug an ethernet cable in to a mushroom.
My various jobs in recent years have been running HPC data centres: a little while ago, one with 12 PB of total storage (along with lots of Ceph storage for ~300 on-prem, private cloud OpenStack instances); a more recent one had about a thousand GPUs (our power usage was in the high five digits of kWh each month).
Whenever I looked at our routers/firewalls, we never came close to saturating 10 Gb/s, even with all the data sets we dealt with.
That sounds like a limitation on the remote end that doesn't support high bandwidth. If you have 10Gbps connections on both ends, your link should be saturated (minus some overhead).
Blu-ray 4K content is an absolute maximum of ~150 Mbps, but can be below 100 Mbps. So with a 1G/1000M connection you can have 7-10 streams of 4K Blu-ray quality video simultaneously.
What does going to 10G, let alone 25G, get you? Are you really planning on 70-100 simultaneous 4K movie streams on 10G, or 160-250 simultaneous streams on 25G?
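Plugging those bitrates in directly (a sketch that assumes every stream runs at the full Blu-ray bitrate and nothing else shares the link):

    # How many simultaneous 4K streams fit on a link, at the bitrates above.
    for link_mbps in (1_000, 10_000, 25_000):
        low = link_mbps // 150   # ~150 Mbit/s: 4K Blu-ray worst case
        high = link_mbps // 100  # ~100 Mbit/s: lighter 4K Blu-ray content
        print(f"{link_mbps / 1000:>4.0f} Gbit/s: {low}-{high} streams")
    #  1 Gbit/s:   6-10 streams
    # 10 Gbit/s:  66-100 streams
    # 25 Gbit/s: 166-250 streams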
That's missing some of the bigger picture. One of the reasons streaming 4k content is at a lower bitrate is because it has to deal with network hiccups and fit inside the buffer of most playback devices. A computer or phone has plenty of room, but streaming 4K content to a TV or chromecast/roku/firestick/etc. does not. Faster bandwidth means keeping the bucket full more reliably.
Are we sure that is true? Why would Netflix or similar pay out to give a better picture? Outside of a few AV aficionados, I doubt the average consumer would know or care.
For streaming, probably true. However, my 2016-era 4K camera records at up to 150 Mbps, and I noticed recently that, due to compression, I was totally unable to show off some gorgeous telephoto video of a hummingbird bathing in a river, because the fast-running water looked truly horrible under compression even though the original video looked great. I would love 100 Mbps streaming options.
Bitrate on a camera and on a streamed video aren't comparable. The camera has to use a compression algorithm that can run in real time on very low power (the CPU in a camera is probably ~5-10 watts). Anything non-real-time will have been re-compressed and can probably maintain quality at roughly half the bitrate.
Why do people want the extra bandwidth? Isn't latency to distant servers generally the bigger bottleneck than line speed, unless the concern is bandwidth for streaming?
This reminds me of "640K ought to be enough for anybody".
Right now I have 1 Gbps up and down at home. That was an upgrade I did from 50 down about five years ago. The reason we upgraded was that at night, when everyone was streaming, things would slow down and we'd be fighting with each other. The 1G stopped that issue.
Right now the 1G is more than enough. But I'm sure there will come a time when it won't be. Maybe we'll have 8K streaming from AR headsets that require one stream for each eye. Or who knows what else.
I'd rather get ahead of it.
And right now it still takes a few minutes to download a movie at full quality. I have to say that when we went from 50M to 1G, it was nice to be able to download TV shows in seconds.
High latency is not the same thing as low throughput.
It is wild how many people do not understand this.
Latency can inform throughput if your windows do not scale. But the whole reason we have window scaling schemes is to optimize throughput in the face of latency.
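A quick illustration of the window-scaling point (example numbers, not measurements): without the TCP window scale option the receive window tops out at 64 KiB, and that cap alone bounds single-stream throughput no matter how fast the link is.

    # Max single-TCP-stream throughput is bounded by window size / RTT.
    def max_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
        return window_bytes * 8 / (rtt_ms / 1000) / 1e6

    CLASSIC_WINDOW = 64 * 1024        # 64 KiB cap without window scaling
    SCALED_WINDOW = 16 * 1024 * 1024  # e.g. a 16 MiB scaled window

    for rtt in (5, 20, 100):  # round-trip time in ms
        print(f"RTT {rtt:>3} ms: "
              f"{max_throughput_mbps(CLASSIC_WINDOW, rtt):7.1f} Mbit/s unscaled, "
              f"{max_throughput_mbps(SCALED_WINDOW, rtt):7.0f} Mbit/s with 16 MiB window")
    # At 100 ms the unscaled cap is ~5 Mbit/s; with scaling the ceiling becomes
    # the link rate (or congestion control), not the latency.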
With regards to remote server performance - yeah - CDNs exist for this reason. I may not saturate my gigabit connection while downloading game patches, but I get close enough. I have also had the experience, on a different ISP, of spending more time downloading and installing updates than I ever did playing my PlayStation.
High latency means low throughput at the start of a connection, in addition to the latency itself, and for most things on the web, by the time the window has scaled up the request is already done. Sure, if you download 100GB games every day or have other special needs like torrenting <s>pirated movies</s> Linux ISOs all day, 10GbE (which also requires expensive equipment) helps, but for 99.9% of people, once you hit 1Gbps, latency is what affects your Internet experience the most; bandwidth is hardly ever an issue.
Latency drives throughput for a single session - but most people that want that kind of bandwidth don't care about a single stream going at the full 25Gbit. Things like torrents or lftp will allow you to create multiple data streams for a single file if you need higher throughput than you can get through a single session.
If you're self-hosting something like a web server, no one user is ever going to hit you with 25Gbit of requests, it'll be coming from multiple sources.
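A minimal sketch of the multi-stream idea (hypothetical URL; assumes the server supports HTTP Range requests, which is what tools like lftp or aria2 rely on):

    # Parallel ranged download: several TCP streams for one file, so no single
    # stream's window/RTT limits the aggregate throughput. Sketch only: it
    # holds all parts in memory, which is fine for the illustration, not for
    # a 400 GB game.
    from concurrent.futures import ThreadPoolExecutor
    import requests

    URL = "https://example.com/big.iso"   # hypothetical file
    PARTS = 8

    size = int(requests.head(URL, allow_redirects=True).headers["Content-Length"])
    bounds = [(i * size // PARTS, (i + 1) * size // PARTS - 1) for i in range(PARTS)]

    def fetch(rng):
        start, end = rng
        resp = requests.get(URL, headers={"Range": f"bytes={start}-{end}"})
        return resp.content

    with ThreadPoolExecutor(max_workers=PARTS) as pool:
        chunks = list(pool.map(fetch, bounds))

    with open("big.iso", "wb") as out:
        for chunk in chunks:
            out.write(chunk)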
Peaks. If you have a home with 5 people in it - watching 4K content, downloading games, etc. - there can be contention and performance gets degraded. This is a case where the size of the pipe matters more than latency.
I pay $90 a month for 1Gbps up/down fiber to the home. To pay $70 a month for 25x that speed is ludicrous. I imagine I could pay half of what I do or less to get the same speeds.
This still reads like a success story of municipal networks to me. It is much easier to hold public institutions like this accountable than private regional monopolies. A private company is not subject to the whims of the democratic process, and regional monopolies ensure that free-market forces have a much harder time cultivating competition or disruption. The fact that your courts were able to put an end to these practices and the executives responsible are gone, while Comcast continues to operate unchecked throughout large swaths of the US, really demonstrates this.
In Utah, we've been fighting the corruption and anti-competitive practices of Comcast and CenturyLink for over two decades. And despite many small victories like the recent one in Bountiful, many residents are still getting screwed over with no viable recourse. The city my parents live in fell for Comcast's intense lobbying a few years ago and now they have no real path towards getting a meaningful alternative.
>The taxpayer effectively pays them, since the majority stake belongs to the taxpayer. So a large fine is bad, and a small fine is bad because it's not a deterrent. The executives that caused the mess are already mostly gone (hence the backpedaling), but the correct action would be to claw back their pay and bonuses, or something like that, so the next "hot shots" don't try such shit again.
If it's state-owned, they should be allowed to determine how the penalty is levied. Make the fine directly payable by the executives in charge; it'll stop immediately.
Also, not to nitpick, as I'm guessing English is a second language and it's quite excellent -
I think this is the best kind of typo. The spelling given in the original comment is pronounced the same way as the correct spelling; we've just all memorized that one of them is the correct way to go from verbal to written.
IMHO many of these US and EU comparisons don't pan out due to scale.
Switzerland is a small, wealthy and densely populated country compared to the US.
Utah alone is five times the size of Switzerland. Swiss GDP per capita is ~20% higher and most importantly, the population density of Switzerland is 213 people per sq km versus 34 in the US (and undoubtedly that number is even lower in Utah which has one of the lowest population densities in the US).
It was calculated to save about 50 USD per connection (P2P vs P2MP). Why active splitters, I don't know; they may not all be. There were probably also other interests, which I would love to know about, but I wasn't a fly on the wall when those decisions were made.
The sheer amount of money spent to expand the network after the court injunction forbade connecting them seems sus to me. It will take many years and many more millions to undo.
I should also point out that a lot of this money to expand the fiber network comes from government grants.
If you're talking about 25 Gb/s specifically, the standards didn't exist until relatively recently (e.g., IEEE Std 802.3ca-2020, 25GS-PON/G.9804), so if you wanted to handle those speeds you had to go active.
If you were building out in late 2022 or 2023, you would have had (more) 25G PON parts available. Pre-2021, your options may have been more limited.