But there is something to be said for SPAs on slow connections. All the HTML and JS code is loaded beforehand and afterwards only raw data is exchanged. If the API calls can be limited to one call per user action/page visit, the experience would be better because only some API data has to be transferred instead of an entire HTML page. So your initial page load would be slow because of all the HTML and JS, but afterwards it should be faster compared to having server side rendered pages.
I very rarely see a "Web App" that's faster due to this. Take Gmail: plain-HTML Gmail transfers and renders a whole page faster than full-fat Gmail performs most of its actions, which involve merely "some API data". The activity/loading indicators in full-fat Gmail stay on screen longer than an entire page load takes in plain-HTML Gmail.
This had some validity in the early days of AJAX when common practice was to respond with HTML fragments and it was mostly just a work-around for Frames not being very good at a bunch of things they should have been good at. These days, not so much.
Makes sense. Roundtrips will kill you on slow connections, and the average SPA does a lot of them. Then the JS code has to tweak hundreds or thousands of DOM nodes, completely serially, to reflect the new data. In practice it's much faster to either download or generate (preferably via WASM) a single chunk of raw HTML and let the browser rerender it natively.
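To make the contrast concrete, a rough sketch (rows is assumed to be the freshly fetched data and tbody an existing table body; illustrative only, and real code would escape the values):

    // Option 1: tweak existing nodes one at a time - thousands of serial DOM mutations.
    rows.forEach((row, i) => {
      const tr = tbody.rows[i];
      tr.cells[0].textContent = row.name;
      tr.cells[1].textContent = row.total;
    });

    // Option 2: build one chunk of markup and let the browser parse and lay it out in a single pass.
    tbody.innerHTML = rows
      .map(r => '<tr><td>' + r.name + '</td><td>' + r.total + '</td></tr>')
      .join('');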
It's part of the cause of all the damn memory bloat, too. Receive some JSON (one copy of the data), parse that and turn it into JS objects (at least two copies allocated now, depending on how the parsing works), modify that for whatever your in-browser data store is, and/or copy it in there, or copy it to "local state", or whatever (three copies), render the data to the virtual DOM (four copies), render the virtual DOM to the real DOM (five copies). Some of those stages very likely hide even more allocations of memory to hold the same data, as it's transformed and moved around. Lots of library or pattern choices can add more. It adds up.
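Roughly, for one fetch-and-render cycle (store, normalize, renderRow, patch and listElement are placeholder names, not any particular library):

    async function refresh() {
      const text  = await (await fetch('/api/items')).text(); // copy 1: the raw JSON string
      const items = JSON.parse(text);                         // copy 2: parsed JS objects
      store.items = items.map(normalize);                     // copy 3: a normalized copy in the in-browser store
      const vdom  = store.items.map(renderRow);               // copy 4: virtual DOM nodes holding the same strings
      patch(listElement, vdom);                               // copy 5: the same data again, as real DOM text nodes
    }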
When 99% of the SPAs I've ever used are that painfully slow (sometimes even on 4G), that way of working obviously encourages bad design. So we should uproot the problem rather than dance around it.
Especially with GraphQL being the new hotness, I wonder if it's time to replace JSON with a binary format like Thrift. The old argument was that REST+JSON was simple and easy to debug, but that goes out the window with GQL anyway.
JSON is just a terrible fit for GQL schemas. I regularly deal with metadata enum fields that are repeatedly serialized causing massive bloat. Sure it gets gzipped away, but you still have all those copies after decompression and parsing.
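A contrived example of what that looks like on the wire (field names made up): every row repeats the full enum strings, where a binary format would send a small integer tag instead.

    { "items": [
      { "id": 1, "visibility": "VISIBILITY_LEVEL_INTERNAL_RESTRICTED", "status": "STATUS_PENDING_MANUAL_REVIEW" },
      { "id": 2, "visibility": "VISIBILITY_LEVEL_INTERNAL_RESTRICTED", "status": "STATUS_PENDING_MANUAL_REVIEW" },
      { "id": 3, "visibility": "VISIBILITY_LEVEL_INTERNAL_RESTRICTED", "status": "STATUS_PENDING_MANUAL_REVIEW" }
    ] }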
A reasonable POTS modem does OK on latency, but badly on bandwidth. So a round trip is fine until you send too much data, which is real easy: the modern initial congestion window of 10 segments combined with a 1500-byte MTU is more than a second of download buffer. If you kept the data small, many round trips would be OK-ish.
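Back-of-the-envelope for that buffer claim, assuming roughly 1460 bytes of TCP payload per 1500-byte frame and a nominal 56 kbit/s downstream:

    10 segments x ~1460 B ≈ 14.6 kB ≈ 117 kbit
    117 kbit / 56 kbit/s  ≈ 2.1 s   (over 3 s on a 33.6k modem)

So a server that bursts its full initial window has already queued up a couple of seconds of transfer time before the client can say anything about it.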
On the other hand, traditional geosynchronous satellite always has terrible latency, so many round trips is bad regardless of the data size... One big load would be a lot better there.
POTS modems don't do great on latency either. I never managed under about a quarter second of latency on dial-up. The first hop (your modem's DAC and the ISP modem's ADC) was usually around 75-100ms all by itself. So even with today's pretty fast networks behind the ISP (compared to the heyday of modems), you'd easily be looking at a base latency of a quarter second.
Modem latency was crap. That was another reason a 64K single channel ISDN connection felt so much faster, even though the bandwidth wasn't that much more. ISDN latency was wayyy better since it was digital: 10 to 20ms vs 200ms minimum with an analog modem.
Some early ISPs I worked with started with 56K leased lines. The latency there was like night-and-day compared to a 56k modem.
My first experience with an ISDN connection was mind blowing. I had only ever used analog modems prior, so I was just expecting faster downloads for large files and somewhat faster page loads. It was a 64k line, so I was expecting it to be about twice as fast as my modem at home.
Web pages just appeared (modulo Netscape connection limitations). A fresh page load felt as fast as a cached page load on my analog modem. It was nuts.
It was most noticeable in a handful of games of Quake I played. I got to experience the joy of being a Low Ping Bastard and actually landing a few hits on people.
They also handle errors by having the user reload the page, which means everything starts over.
My experience growing up is that you don't notice the issues on sane pages written by hobbyists/professors/researchers and then you go to something built by google and everything falls apart.
It only works if the webapp can keep its calls limited to one per page/user action. Lots of webapps make multiple roundtrips (additional fetches triggered by earlier responses, so they can't be done in parallel), which makes them slow even on fast connections (looking at you, QuickBooks Time).
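The waterfall tends to look something like this (endpoints made up; each request can't start until the previous response has arrived):

    async function loadTimesheets() {
      const user    = await fetch('/api/me').then(r => r.json());
      const account = await fetch('/api/accounts/' + user.accountId).then(r => r.json());
      const entries = await fetch('/api/accounts/' + account.id + '/timesheets').then(r => r.json());
      renderTimesheets(entries); // placeholder for whatever draws the page
    }
    // Three dependent round trips: on a 300ms RTT that's roughly a second of pure waiting,
    // versus a single round trip for one combined endpoint or a server-rendered page.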
There really is not. SPAs generally mean tons more assets being loaded and hard errors on unreliable connections.
On a web page, missing a bit of the page at the end is not an issue; you might have somewhat broken content, but that's not a deal-breaker.
With an SPA, an incompletely loaded API call is just a complete waste of the transferred bytes.
And slow connections also tend to have much larger latencies. Care to guess what's an issue on an SPA which keeps firing API calls for every interaction?
> So your initial page load would be slow because of all the HTML and JS, but afterwards it should be faster compared to having server side rendered pages.
The actual effect is that the initial page load is so slow you just give up, and if you haven't, everything afterwards is completely unreliable.
Seriously, try browsing the "modern web" and SPAs on something like a 1024 kbit/s DSL line with 100ms latency and 5% packet loss; it's just hell. And those are the sorts of connections you can absolutely find in rural places.
Not at all, unless there is an absurd amount of content on the page that is unrelated to the data being fetched (like title, footer, sidebar). An HTML table, for example, is in the same ballpark size-wise as a JSON representation of the same data. And that's without taking into account the fact that the JSON can potentially carry more information than necessary.
Facebook is an example of a website where there is such an absurd amount of content that's not the focus of the page: the sidebars, the recommendations, the friend list of the chat, the trends, the ads. It sorta makes sense for them to have an SPA (although let's be frank: most people in slow connections prefer "mobile" or "static" versions of those sites).
The impetus for SPAs was never really speed. The impetus for SPAs is control for developers, by allowing navigation between "pages" but with zero-reloads for certain widgets. It was like 90s HTML frames but with full control of how everything works.
In practice, there's rarely "afterwards". You visit some website once because someone sent you a link. You load that one page and that's it. You read the article and close it. By the time you visit that website again, if you ever do, your cache will be long gone, so you're downloading the 2-megabyte JS bundle again.
In other words, pretty often the initial load is the only one.
> So your initial page load would be slow because of all the HTML and JS, but afterwards it should be faster compared to having server side rendered pages
TFA is arguing that a user on a bad connection won't even make it to a proper page load event in the first place.
It's probably also worth mentioning that the "gains" from sending only data on subsequent user actions are subject to the devil in the details, namely that response data isn't always fully optimized (e.g. more fields than needed), and HTML gzips exceptionally well due to the repetitiveness of markup compared to the compression rate of arbitrary data. Generally speaking, you can rarely make up in small incremental gzip gains what you spent on downloading a framework upfront, plus JS parse/run time and DOM repaint times, especially on mobile, compared to the super fast native streaming rendering of pure HTML.
In theory I guess, but I'd bet that basically every SPA is using enough JS libs that the initial load is much bigger than a bunch of halfheartedly-optimized basic HTML. I bet somebody somewhere has written a SPA-style page designed to be super-optimized in both initial load and API behavior just because, but I don't think I've ever seen one.
I agree with the general sentiment, but if you used Facebook and YouTube you know they respond immediately on tap, even if the view hasn’t completely loaded. They are SPA-style pages.
Unfortunately they are the exception as there are a lot of awful SPAs that focus on looking cool while they’re barely usable. Looking at you, Airbnb.
Facebook and YouTube can afford to use SPAs without worrying too much about performance penalty because they invest massive amounts of effort into highly optimized and widespread content delivery networks, to the point where many ISPs literally host dedicated on-site caches for them.
Not at all: HTML compresses extremely well. CSS/favicon/etc are cached.
If you get rid of javascript frameworks used for SPA, the overhead of delivering a handful of HTML tables and forms instead of some JSON is negligible.
But that depends on the use case, doesn't it. Static sites can be huge too, and then you need to send all of the surrounding HTML over when only a small table or form would need updating. So I am not so sure about your point. The greater the complexity of the displayed page, the more sense it makes to use an SPA network-wise. (edit: mostly covered in sibling comments)
You have a point about compression though. I now wonder what the situation would look like if we had (had) native HTML imports, as that would greatly help with caching.
> now you need to send all of the surrounding html over, when only a small table or form would need updating
No, you don't. For something like the upvote button on HN you can do an ajax call with a line of javascript. In the context of the conversation this is very far from bloated SPAs.
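Something in this spirit, just to show the scale of JS involved (selector and markup are illustrative, not HN's actual page structure):

    document.querySelectorAll('a.upvote').forEach(link =>
      link.addEventListener('click', e => {
        e.preventDefault();
        fetch(link.href, { credentials: 'same-origin' }); // cast the vote in the background
        link.classList.add('voted');                      // flip the arrow without a page load
      }));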
HN is a good example: each thread page consists mostly of comments. Reusing the same DOM across different pages would do very little.
Gmail is another: the default UI is a heavy SPA. The "plain html" mode is not. And it's faster.
> If the API calls can be limited to one call per user action/page visit, the experience would be better because only some API data has to be transferred instead of an entire HTML page
HTML pages are not that big, though, unless you put a lot of content around the data. Not to mention JSON can be wasteful, and contain more data than needed. And lots of SPAs require multiple roundtrips for fetching data.
And even if you do have lots of content around your data, there are alternatives like PJAX/Turbolinks that allow you to fetch only partial content, while still using minimal JS compared to a regular JS framework.
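A bare-bones sketch of that approach, assuming the server is set up to return just the main content when asked (the ?partial=1 convention and data-pjax attribute are made up for the example):

    async function visit(url) {
      const res = await fetch(url + '?partial=1');                  // server returns only the fragment
      document.querySelector('main').innerHTML = await res.text();  // swap just the content that changed
      history.pushState({}, '', url);                               // keep URLs shareable
    }
    document.addEventListener('click', e => {
      const link = e.target.closest('a[data-pjax]');                // opt-in links only
      if (link) { e.preventDefault(); visit(link.href); }
    });
    // A real version would also handle popstate so the back button restores content.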
That’s already kind of happening. Google and Mozilla both have “accelerator” services that render pages in their data centers; something similar to what opera was doing years ago. I also think Node supports server side rendering. A parking web app I used out in Lincoln NE takes advantage of that.
I know about Google's compression proxy but what's Mozilla doing? I found something called Janus but it looks like it was discontinued. Opera Mini is still around and there's also the Firefox-based browsh.
Finally, an influential developer who cares about the other 99%.
I've lived in dozens of places. I've lived in urban areas, suburban areas, rural areas. I've even lived on a boat. With the exception of wealthy areas, reasonable internet is a constant struggle.
It's also never due to the speed though, in my experience. The biggest issues are always packet loss, intermittent outages, latency, and jitter. High speed internet doesn't count for much, aside from checking off a box, if it goes out for a couple hours every few hours, or has 10% packet loss. You'd be surprised how common stuff like that is. Try visiting the domicile of someone who isn't rich and running mtr.
Another thing I've noticed ISPs doing is they seem to intentionally add things like jitter to latency-sensitive low-bandwidth connections like SSH, because they want people to buy business class. So, in many ways, 56k was better than modern high speed internet. Because yes, connections had slow throughput, but even at 300 baud the latency was low and the reliability was good enough that you could count on it being something you could actually use when connecting to a central computer and running vi. Bill Joy actually wrote vi on that kind of internet connection, which deeply influenced its design.
I had an ISDN connection at home in the mid 90's. In many ways, it felt faster than today's broadband because the web was still built for dial up. I later upgraded to an early cable modem sometime around 1998 (3 megabits, I think.) The web was still, very much, built for dialup and things felt incredibly fast. Despite having a connection orders of magnitude faster today, things feel far more bloated and sluggish. Bloated web sites are everywhere.
Just get a Ryzen CPU with business class gigabit internet and block display ads and third party cookies. The wild-eyed web frameworks and advertisements of our age were designed to consume the glut created by the cheapest silver status symbol, on which they now press the limits of not only bandwidth, but also CPUs and RAM too. For me I think it was around 2015 when I started feeling the heat of my Macbook scorching my legs each time I loaded a news website. I thought, oh my, am I being swept away by the performance tide already? So if your box is twice as fast as those things, then the modern web will feel fast. What's sad is most people aren't even granted that choice.
The sad thing is that I have gigabit internet and a fast gaming PC, yet many websites are still unbearably slow.
I recently got the chance to speed up a website that I often use myself as a part of my consulting work, and despite my best efforts a further 10x speed up was left on the table because of insurmountable apathy by the developers.
In the last two decades, I've met one (1) web developer that cared about performance at all. One! The rest are blithely unaware of the profligate waste of CPU time and network bandwidth their "features" are consuming.
Things like two different third-party APMs on top of Google Analytics, the CDN analytics, and in-house analytics taking up 90% of both the bandwidth and server load.
I regularly see web sites with all caching, compression, and HTTP settings left on defaults. As in: private, off, and HTTP/1.1 only.
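For contrast, this is roughly what a static asset could be sending instead of the defaults (header values are just an example):

    Content-Type: text/css
    Content-Encoding: br
    Cache-Control: public, max-age=31536000, immutable

Plus HTTP/2 or HTTP/3, so the dozens of requests at least share one connection.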
It's mind boggling how slow these sites are, in an era where benchmarks show that modern servers can put out 100K responses per second!
My web server can put out 1000K gzip encoded responses per second on a cheapo $1,000 Core i9 PC. https://justine.lol/redbean/
Here's what you do. You start off by showing people papers like this https://static.googleusercontent.com/media/research.google.c... which say "users are 1.5 times more likely to choose the fast engine". That's how you get resources to do the 10x optimization. But doing the optimization isn't enough, if you can't tie the results into something people care about. To do that, you're already paying the price for Google Analytics telemetry. You can use its API to A/B test the slow vs. fast version of your website for conversions or whatever behavior you're trying to optimize.
People fall in love with stuff that goes fast in unpredictable ways. Be prepared for that. Sometimes it helps to just walk around the office and watch people using your website, to get a basic idea of how they feel about it and how they're engaging. For instance you might discover they're constantly alt-tabbing to some blog while they're waiting for pages to load. In that case you can quantify a productivity impact across everyone who's using the website that likely far exceeds your own salary. Then the people who didn't care might start paying attention. All you had to do was make a web page less sluggish.
Oh, I know. I've tried these things. It makes no difference in my experience, but that's probably mostly because I work for government departments. They have zero interest in customer engagement, or any similar metrics. The only thing they care about is not hearing internal complaints from more senior people. Those senior people don't use their own department's products! Hence... no complaints.
I vividly remember trying to explain these concepts to a librarian. She wanted to implement a hugely complex "search form" where you could narrow down the search to something like: "Show me all young adult fiction about dragons written by authors whose name starts with the letter A, published in the years 1997 to 1999".
But that's an absurd scenario, and no student in the history of the world will ever use it like that. Never. They'll type in "dragons" and read the first book that has a cover picture they like. I did this. My friends did this. All of the statistics I collected in the system shows that this is what everyone else does too.
The reason she wanted that search was because the old system would take tens of seconds to do any search. So you had to get the search "right" the first time, or you'd be there for tens of minutes.
My version did the search in under 15ms, and even did live tab complete. I copied the Google interface with quotes for exact search, negative sign prefix for exclude, etc...
Oh, and I made sure to get the highest quality book cover art for every title I possibly could. Kids love to see that, the artwork is engaging, and it motivates them to read more. Search forms? Not so much...
The kids loved it, but the librarian is still a little bit grumpy that I didn't build her the two page form that she wanted!
I searched for "dragons" at the Philadelphia Free Library. 5,802 results showed up. I doomscrolled through a few hundred and couldn't find Smaug, the most iconic and influential dragon of the twentieth century. https://catalog.freelibrary.org/Search/Results?lookfor=drago... Then I searched Google for "book with dragon" and The Hobbit is the fourth result.
Librarians are really intelligent people. All the recent engineering marvels that Google does under the hood to provide meaningful results, such as PageRank and machine learning, librarians have always been able to do in their heads. They are not unsophisticated users who click on the first picture they like. If your search algorithms are simpler than what Google does, then you can count on librarians to fill in the gaps for you. But in order to do that, they need control. Because when someone can't find something in the library, because full-text search isn't taking into consideration popularity, then what do kids do? They go to the librarian, who enters the complicated search for them, based on what they know, and they know a lot.
So I think the librarian was being perfectly reasonable. You should have empowered her with the tools she needs to do her job. You could have just as easily put it behind a drop-down that's hidden by default, so both her and the kids are happy.
I have a fast computer, a ridiculously fast internet connection (10000/10000), and adblock. It still doesn't feel as fast as when I had @Home (10/1 I think?) in 1999.
Depending on which adblocker you use, I'd guess it is the bottleneck.
Plugins running in the browser are not made for this. If you are using browser plugin-based adblocking, try disabling it temporarily for testing and replacing it with a host-based solution, for instance this https://github.com/StevenBlack/hosts no matter which OS you are on. Then see if it feels faster. If you are already doing it this way, forget what I wrote :-)
> High speed internet doesn't count for much, aside checking off a box, if it goes out for a couple hours every few hours
This adds an interesting dimension to things. In a roundabout way, for this kind of case, have you ever been glad to be using a PWA (with client-side caching) instead of a regular website?
I don't think I've ever used a website that's able to run offline before aside from Google Docs (and that requires a browser extension). When the Internet goes offline I close the browser and use regular software where my mind can context switch to offline mode to minimize any risk of being further distracted by an outage that's not within my power to address.
The way I see it, these PWA client-side caching technologies are mostly used by news websites to install permanent service workers on my PC that run in the background after the tab is closed to MITM HTTP requests, phone home every day, and have a kill switch over my data associated with their origin. What's my data? I don't even know. For example, I noticed the Western Digital forum (I don't even remember visiting it) had a service worker that managed to store 1 GB of content on my hard disk. What's in it? I don't know. Chrome doesn't let me see it. Maybe it made the whole forum available offline for me to enjoy. Or maybe I'm an unknowing member of their new cloud storage service. Can't tell. The lack of visibility, lack of consent, and lack of options to disable these emerging standards are in themselves an issue. We should be focusing on making local apps better, rather than having browsers foray into territories that are not at all in sync with expectations.
The problem with comparing the Internet now vs 25 years ago is back then you didn't live on the Internet all your waking hours. You jumped on, got what you needed and got off again otherwise you'd be paying a high hourly premium. And to top that off you'd power off your computer and cover it and the monitor with a dust cover.
Now with phones and always-on connections it's not even comparable. Back in the early 1990s, and even the later 1990s, I spent more time using my computer itself (programming, graphics, learning about it; yes, the physical thing in front of me) than on the Internet.
Actually, in terms of bandwidth JavaScript isn't the problem: you can fit an entire SPA into the bandwidth required for one large image (it is a problem in terms of CPU usage on underpowered devices, though)
Large images being “worth the trade off” is debatable depending on your connection speed, I think (though at least you can disable images in the browser?)
Some browsers used to have "click to show image" too. I know Dillo did. Actually, come to think of it, it's browsers that have really dropped the ball on helping out users with slow connections.
I was glued all the time to my computer on the internet as far back as 2002 - browsing forums and playing video games and self-learning programming from C++ tutorials here and there.
25 years ago I had pay-by-the-minute dial up, so I would connect, download a few pages and then hang up while I read them. It was epically slow: click a few links, go make a cup of tea, and come back to see if they'd loaded yet.
I find the modern web quite quick with an adblocker on but a bit horrendous with it off.
The performance numbers are a really helpful illustration of the problem with a lot of sites. A detail it misses, and one I see people constantly forget, is that any individual user might have one of those crappy connections at multiple points during the day, or even move through all of them.
It doesn't matter if your home broadband is awesome if you're currently at the store trying to look something up on a bloated page. It's little consolation that the store wrote an SPA to do fancy looking fades when navigating between links when it won't even load right in the store's brick and mortar location.
Far too many web devs are making terrible assumptions about not just bandwidth but latency. Making a hundred small connections might not require a lot of total bandwidth but when each connection takes a tenth to a quarter of a second just to get established there's a lot of time a user is blocked unable to do anything. Asynchronous loading just means "I won't explicitly block for this load", it doesn't mean that dozens of in-flight requests won't cause de facto blocking.
I'm using the web not because I love huge hero images or fade effects. I'm using the web to get something accomplished like buy something or look up information. Using my time, bandwidth, and battery poorly by forcing me to accommodate your bloated web page makes me not want to give you any money.
Yep... in large stores like IKEA, I've had huge problems with my connection falling down to 2G speeds, and finding anything online was a huge pain in the ass. The first thing to load should be the page layout with the text already inside, then images, then everything else... some pages ignore this, load nothing plus a huge JS library, then start with random crap, and I get bored and close my browser before I get the actual text.
I experienced this problem just yesterday which I found ironic since I had recently posted about it. I was in Home Depot and couldn't find what I was looking for. So I pulled up the website to see if it says they're in stock to ask an employee. It took me wandering the store for a few minutes to get a good enough signal to load the damn web page.
Home Depot web devs, make sure someone can load the Home Depot web page inside one of your stores filled with steel shelves and other cellular-hating materials. I can't imagine trying to use Home Depot's website inside a store is some super edge case that's not worth the effort.
One of the problems is that a lot of devs have very good connections at home. I've got 600 Mbit/s symmetric (optic fiber). Spain, France, now Belgium... it's fiber nearly everywhere. Heck, Andorra has 100% fiber coverage. Japan: my brother has got 2 Gbit/s at home.
My home connection is smoking some of my dedicated servers: the cheap ones are still on 100 MBit/s in datacenter and they're totally the bottleneck. That's how fast home connections are, for some.
I used to browse on a 28.8 modem, then 33.6, then ISDN, then ADSL.
The problem is: people who get fiber are probably not going back. We're going towards faster and faster connections.
It's easy, when you've been on fiber for years and years, to forget what it was like. To me that's at least part of the problem.
Joel is not loading his own site on a 28.8 modem. It's the unevenly distributed future.
I think the problem is the vast majority of web developers don't care. It's true. Sure, having nice connection helps. In the sense that having a meal helps you forget about world hunger... if you cared about it in the first place.
Sans multimedia consumption, the modern web is fine on a reasonable connection provided you're using noscript or something like that. If you're not, then well, you're already screwed either way.
What's crazy to me is not that regular users put up with a ton of bullshit - they have to. It's that lots of fellow developers do and they most certainly don't have to. They simply don't care.
> What's crazy to me is not that regular users put up with a ton of bullshit
The cynic would view society as an optimization problem between bullshitters and those willing to tolerate bullshit. I say introduce a little anarchy into the equation, using the clean and pristine.
I guess most of the time it's not about a dev not caring about the issue - it's about not being paid to address these scenarios.
The default for most of the tools/frameworks is to generate this bloat without a nice fallback for slow connections. Because the industry is focused on people with good connections, because that's where the money is.
Whenever I develop a webapp whose users won't be having a good Internet connection, there's a requirement to support that use case, and I spend time making sure the thing is usable on bad connections, and it's OK to sacrifice some UX to get there.
But in most cases, customers (both end-users and companies that pay me to code something for them) prefer shiny & cheap rather than "works OK for faceless people I don't know who still live like I did 15 years ago".
TL;DR: it's an economics issue, as usual.
----
PS: I spent five hours yesterday working around 70% packet loss to the router in the place I've recently moved to. There's a (top) 6Mbps connection on the other side of the router. I'm suffering from not being on a top-class connection - but that's _my_ issue, not my customer's, nor my customer's customers'.
On the other hand, go to a Pret in central London and use the cafe's wifi. It's a bit like the old dial-up days. Due to historic buildings and such, they don't have fiber in many places, so you get 8Mb ADSL shared between maybe 20 customers when it's busy.
" Dualmode faxmodems provide high-quality voice mail when used with a soundcard. They also have both Class 1 14.4 Kbps send/receive fax for convenient printer-quality faxing to any faxmodem or fax machine. You can even "broadcast" your faxes to multiple recipients, schedule fax transmission, or forward them to another number. "
Kind of a tangent but (for reasons) I have a workstation on which I didn't want to install a full-fledged browser so I'm using Dillo, https://www.dillo.org/
> Dillo is a multi-platform graphical web browser known for its speed and small footprint.
> Dillo is written in C and C++.
> Dillo is based on FLTK, the Fast Light Toolkit (statically-linked by default!).
Dillo doesn't have a JS engine. (This is a "pro" in my opinion.)
Using it, the web is divided into three equivalence classes:
1) Works. (defined as, the site loads, the content is accessible and it looks more-or-less like the author intended.) As a rule such sites load lightning fast.
2) Broken but the content can still be read. Usually the site layout is messed up.
3) Broken completely. Typically a blank page, or garbage without visible content. (There is a new failure mode: sites that won't reply to browsers without server name indication (SNI https://www.cloudflare.com/learning/ssl/what-is-sni/ ) Dillo doesn't (yet) support SNI, so those sites are "broken" too. Typically these are Cloudflare-protected sites that give me a 403, but e.g. PyPI recently adopted SNI and went from category 1 to 3.)
(HN is in category 2 FWIW: you can read the content but the "bells and whistles" don't work, login, voting, etc.)
I don't really have much to add, just that A) enough of the web works that I find it useful to use dillo for my purposes, B) the web that does work with dillo is much less annoying than the "modern" web, C) it kinda sucks IMO that most of the web is junk from the POV of dillo user agent.
Not supporting SNI is the browser's problem IMO, because SNI is required for a single IP to host multiple HTTPS websites. SNI is not just for Cloudflare.
You could also accomplish this via uBlock Origin with increased granularity and it has SNI capability. UO in hard mode can for instance block 3rd party and/or 1st party JavaScript, and whitelist permissible scripts.
Between 2010 and 2016, I lived in a rural area where we had one option: satellite internet from HughesNet. Prior to that, the only option was dialup, and since it was such a remote area, the telephone company had grandfathered a few numbers from outside the service area as free-to-call, to allow residents to use the nearest ISP without extra toll charges.
So we went from paying $9.95/month for average 56k service to $80/month for a service that was worse than that.
To add insult to injury, a local broadband provider kept sticking their signs at our driveway next to our mailbox, and we would call to try and get service, but we were apparently 200 feet past the limit of their service area. People who lived at the mouth of our driveway had service, our neighbors had service, but we were too far out they said.
I repeat: as late as 2016 I WOULD HAVE KILLED TO BE ABLE TO JUST USE FREAKING DIALUP!
In rural Virginia I had a very similar experience during the exact same time frame as you. Verizon and Comcast would say we could be connected over the phone, send equipment (which I'd pay for), then turn around and say it was too remote. The neighborhood down the street had their service, though. The ISP we ended up with was a couple with an antenna on top of a local mountain. Our service was capped at 2GB per day, and blowing through it (which was very easy) meant being throttled to the point of nothing working anymore. It was several long years of frustration.
I have been using another person's starlink beta terminal since November of last year, and have had my own since late January. It's at a latitude sufficient that packet loss and service unavailability is averaging about 0.25% (1/4th of 1 percent) over multi day periods.
It's a real 150-300 down, 16-25 Mbps up. In many cases it actually beats the DOCSIS3 cable operator's network at the same location for jitter, packet loss, downtime.
The unfortunate economics of building 4000-6000 kg sized geostationary telecom satellites with 15 year finite lifespans, and launching them into a proper orbit ($200 million would not be a bad figure for a cost to put one satellite in place) mean that the oversubscription/contention ratios on consumer grade VSAT are extreme.
Dedicated 1:1 geostationary satellite still has the 492-495ms but is remarkably not bad, but you're looking at a figure of anywhere from $1300 to $3000 per Mbps, per direction, per month for your own dedicated chunk of transponder kHz for SCPC. You're also looking at a minimum of $7000-9000 for terminal equipment. That's the unfortunate reality right now.
I feel sorry for both viasat/hughesnet consumer grade customers, who are suffering, and also the companies, who are on a path dependent lock in towards obsolescence. Even more so if various starlink competitors like Kuiper, Telesat's LEO network and OneWeb (not exactly a competitor since it won't be serving the end user, but same general concept) actually launch services.
> It's at a latitude sufficient that packet loss and service unavailability is averaging about 0.25% (1/4th of 1 percent) over multi day periods.
Do you have any more specific data you could share there? For example, how much of that percentage is caused by downtime and how much is caused by random lost packets outside of downtime? Or how long does it go unavailable?
That's total unavailability time for a basic series of 20 icmp pings sent to a target in Seattle very close to the cgnat gateway exit point. On a 60s Cron job. Very close to a default smokeping installation. My 0.25% would be even less if I could remove one problematic tree in a 1/12th quadrant of the cpe's obstruction measurement system.
Not within the past 3-4 weeks, but there have been previous times of 30 to 120 minute downtime at 0200 in the morning local time while the starlink people do terminal firmware updates and other changes in the network.
No neighbor that let you piggyback on their connection? (That of course doesn't change the fundamental shittyness of the overall lack of available access)
This is how many of the local wireless networks here in the Czech Republic started - someone who could get connectivity got together with others, put up an omni-directional antenna on the roof and had others connect to it, then split the cost of the connection. This often made it possible to share much more expensive connections than any one of the participants would otherwise have been able to afford.
This often started with off-the-shelf wireless APs (like the venerable D-Link DWL-900AP+), often with home-made antennas connected. If the network had more knowledgeable Unix people, they might have used an old PC running Linux or BSD with hostapd and a PCI wireless card. The most advanced ones even had home-built optical links (seriously, 10 Mbit/s full duplex by light in 2001 was super cool! https://en.wikipedia.org/wiki/RONJA), though our network was not that hardcore. :P We did lay some cable via agricultural pipe and build a makeshift com tower next to a vineyard. :)
Over time the networks grew much larger via bridging and wired or even fiber segments, and a lot of the hardware was replaced by high-performance MikroTik and Ubiquiti devices.
Also the networks often merged into bigger ones, covering whole cities and their surrounding villages, with APs on top of grain silos and water towers.
Some of the networks are still independent and quite big, some were bought by big "traditional" telecom companies and some still operate as user coops to this day.
Yes that was my first thought. Offer to pay half the bill and pick up a giant yagi antenna for twenty bucks. Only downside is it's technically illegal to combine that much power and that much gain but I've known people who did it for a decade with no FCC call.
For a link, you mainly need signal-to-noise ratio (SNR).
With high-gain¹ antennas you simultaneously increase mutual reception and decrease reception of other signals, giving a triple boost to your signal-to-noise ratio if you can use them on both sides (sender: +1 txpower; receiver: +1 signal reception, +1 interference rejection), so that is what you should aim for.
Transmit power does comparatively little (sender only: +1 txpower, -? amplifier noise), while blanketing the area with useless power (both in-band and out-of-band²) that can get you detected and affect other devices. Especially if you drive your transmitter near the limit (or use a shitty amplifier or broadband antenna), further increasing out-of-band transmissions, which is the part that is likely to get you a visit from regulators.
¹ gain ~ directionality; symmetric in both reception and transmission
² out-of-band - frequencies other than intended ones
Oh man, I empathize. I'm writing this from the literal end of the internet: our cable (Comcast) comes off the last pole with coax on it.
We spent today taking a look at properties, and the internet situation goes downhill fast from where we presently rent from DSL to "unknown". HughesNet is an option, but it's an iffy option for latency reasons, and Starlink is crazy pricey. Cellular is a total non-starter: there is none.
Our realtor lives on a road with a bunch of people willing to pony up to get decent internet, but they can't get the ISP to actually run media down the road because it's too few houses, never mind that there's money on the table.
So if you're a web developer, please know that there are real folks in the USA who have legitimately lousy internet options in 2021. In our case, it's self inflicted, but I'm going to make some sweeping assumptions based on the houses we drove by and say it isn't a matter of choice for everybody.
They're limiting the number of CPEs per cell to a maximum number of sites, and volume of traffic, such that the service won't degrade to a viasat/hughesnet like terrible consumer experience. I can't say how I know, but some very experienced and talented network engineers are designing the network topology and oversubscription/contention ratio based out of their Redmond, WA offices.
In rural areas, contention should stay low enough that the speeds will continue to be at least decent. And of course the latency is much lower than geosynchronous satellites, independent of bandwidth.
It’s a common English idiom, and like “raining cats and dogs,” the meaning of the phrase doesn’t correspond literally to the words. The parent comment isn’t actually joking about committing murder.
As someone whose only internet options for a few years were satellite and WISP that cost several hundred dollars for 8 Mbps down, I understand where they’re coming from :)
My mom's cellular data plan (used for rural internet access through a cellular/wifi router) has a 128kbps fallback if you use up your main data allotment.
128kbps isn't so bad, is it? More than 3x the speed she used to get with a dialup modem.
But no. We ran it into the fallback zone to try it out. And half the sites (e.g. the full Gmail UI or Facebook) wouldn't even load - the load would time out before the page was functional.
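The arithmetic at that speed is brutal; take a 2 MB JS bundle (not unusual for that kind of site) as an example:

    128 kbit/s ≈ 16 kB/s
    2 MB of JavaScript / 16 kB/s ≈ 125 s, before parsing, images, or any API calls

Most loading spinners and request timeouts give up long before the two-minute mark.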
The 128kbps fallback is meant as a lifeline, for email and instant messaging. And that's really all it's good for any more.
That's last year's modern web. This year's modern web splits up the JS bundle by page so you only load what's required for each page. So we're basically back to square one.
> I was thinking about converting my old server rendered web site into modern web. Still wondering if it's worth it.
The usual guideline I tend to use: are you building a website or a web application? If you're building a website, you're best off with just static pages, or pages generated dynamically on the server. If you need lots of interactivity and the like, you're better off building a web application, and React fits that purpose well.
I really like the idea of server side rendering. Flask/Django style backend and use Jinja2 templates, sprinkle some vanilla JS for interaction if needed.
I wonder if it is safe to say the vast majority of websites are simple enough to use the aforementioned pattern? There are so many "doctor's appointment"-type dynamic websites that I don't think need anything like React or Angular.
I think React is great if you're building the next Notion or a web-based application such as Google Sheets.
Edit: Yeah, I am new to webdev and I find server side rendering "refreshing" :-)
It’s not just an idea. It’s tech that has been around for decades now. Rails, Django, Spring, Laravel, Sinatra, Flask and hundreds of others.
They work fine for a large chunk of modern websites. And server side templating is not the only concept from that era that is much simpler than what is popular now. Those frameworks were primarily synchronous instead of asynchronous. And they worked in pretty much every browser. Without breaking the back button. With shareable links. And no scrolljacking.
For me personally the sweet spot for many applications/websites is still just that: A synchronous MVC framework with serverside rendering. With a sprinkle of vanilla JS or Vue on select pages for dynamic stuff.
I do this. It's remarkable this is considered to be a novel pattern! (Note that some of the advances in modern CSS and JS made this more feasible than it used to be)
Since you say you're new, it might be worth looking at Laravel if you learned with the componentized JS approach. The Blade templating language they've been perfecting for years now has started to embrace composable components very similar to a JS framework, but all server-side.
Doesn't it also break basic caching? That is, I can't download a "modern" website to view it offline because it's actually just a rendering shim that needs to phone home for the content?
That's only when done poorly, which unfortunately has become the norm. Properly done modern websites load things incrementally, as and when they're needed, while cleanly separating the front end logic from the back end.
> For another level of ironic, consider that while I think of a 50kB table as bloat, this page is 12kB when gzipped, even with all of the bloat. Google's AMP currently has > 100kB of blocking javascript that has to load before the page loads! There's no reason for me to use AMP pages because AMP is slower than my current setup of pure HTML with a few lines of embedded CSS and the occasional image, but, as a result, I'm penalized by Google (relative to AMP pages) for not "accelerating" (deccelerating) my page with AMP.
This is cute. Somehow it's like the Compton limit, where at a certain scale you just can't make measurements accurate enough because the very act of measuring interferes with the system.
What do people think is the best approach to incentivise lean web design? The bloat of the modern web is absolutely ridiculous, but it seems to be the inevitable result of piling abstraction on top of abstraction, so that you end up with a multi-megabyte news article just to keep up with the latest fad framework.
What every large company does - make an employee VPN (for laptops and phones) that simulates a poor internet connection (slow, packet loss, lag), to let the developers feel the pain.
The joke is on them (well, yeah, but figuratively too). The VPN is only slow for external sites that the developers can not fix, while internal ones load at the full speed the middlebox can handle.
Companies are grooming the developers so that they will never have difficulty hiring people that can diagnose a broken firewall, but they are not getting faster web pages out of the deal.
>You can see that for a very small site that doesn’t load many blocking resources, HTTPS is noticeably slower than HTTP, especially on slow connections.
Yet another reason not to get rid of HTTP in the HTTP+HTTPS world just because it's hip and what big companies are doing.
That's a really bad take on HTTPS. HTTP helps to enable a bunch of trivial but damaging attacks on an end user. If you run anything maybe more complicated than a static blog, and even then for your own security, HTTP is the wrong choice.
0-RTT only applies to connections after the first load. And what Cloudflare does is certainly not what most people on the internet (as opposed to most mega-corporations) should do.
A realisation I had a few years ago was that differential accessibility to websites is quite likely a market segmentation technique, whether used intentionally or otherwise.
A website that only works on recent kit, in high-bandwidth locales with low ping latencies and little packet loss, acts much the same way a posh high-street address does in dissuading the people one would prefer not to have to deal with.
Interesting that it mentions the book "High Performance Browser Networking". My feeling after reading it was that latency is all that matters, not page size.
Is there any cellular phone service that throttles to a usable speed (say at least 1.5 Mbps)? I was looking forward to the spread of 4G because I naively thought throttled speeds would also increase to "3G equivalent" instead of "2G equivalent".
I use Mint Mobile and last month I hit the 4G data cap of my "unlimited" plan (30 Gb). I tried to buy more data, but it can't be done on the "unlimited" plan. I could not buy more data for my "unlimited" data plan after my "unlimited" data plan ran out of data. Other plans let you purchase additional data at an expensive price, only the "unlimited" plan doesn't.
Some don't throttle at all. But they aren't as cheap as Mint Mobile. However, Mint says that videos will stream at 480p after you've used 35GB/mo. If that's true, isn't that a lot faster than 2G?
This article gave me a random push to nuke the size of my site - just brought it down from ~50 kB on text pages all the way to ~1.5 kB now, and from 14 files to 1 HTML file.
Part of the problem is that writing html is lost knowledge. Extinct.
Before you think out loud about how simple HTML is and how you still remember writing it a million years ago, ask yourself this: do you write it now, and would you pay to hire someone who does?
It's not a job or skill you can be hired for. Instead you have to use React if you want employment. So that's an immediate 50x swell right out of the gate, and we haven't even gotten to poor coding practices.
We've invented abstractions that exceed our common ability to reason about their base building blocks. I think most websites should be just pure HTML, minimal inline styling so you don't get page flicker, and a small sprinkling of <script async/defer> to give you some async server functionality. That's it.
This is all possible, it's just that collectively, web developers have tried so hard to make their discipline as 'complex' as other software engineering domains, that we have destroyed our sense of efficiency, speed, and catering to the worst off end users.
It may be extinct in some places, but in many others HTML knowledge is quite alive.
React is great for some tasks, but it's ridiculous for others. Its use can be overengineering, and it presumes that JavaScript execution is always allowed for all websites (which is not true).
Plain HTML, pre-generated as a static site or generated server-side at runtime, is a better solution in many cases. React is great for the things it was designed for.
Fad engineering is just that, a fad. All technologies have situations or cases where they should not be used. People should never use a technology until they understand at least some situations where it should not be used.
From a capitalist devil's advocate point of view: does it really matter that your website works poorly for 90% of web users when those 90% either don't have any disposable income (there's got to be a correlation) or are too distant geographically and culturally to ever bring you business? "Commercial quality" web development (i.e. not looking worse than the competition) has become unbelievably complicated and expensive, so in reality probably only individual enthusiasts and NGOs should care about the long tail. Local small businesses will just use some local site builder or social media, so even they don't matter.
These sorts of characterizations are just way, way, way too simplistic.
Assume that I earn a six figure income and live in a major city with reasonably fast (though not necessarily top tier) Internet. So, y'know, theoretically a fairly desirable customer and not a subsistence farmer or some other demographic that's easy to give the cold shoulder. But still --
Bad Internet day, maybe someone's getting DDoS attacked? If your website isn't lean, I'm more likely to get frustrated by the jank and leave.
It's Friday evening and the Internet is crammed full of Netflix? If your website isn't lean, I'm more likely to get frustrated by the jank and leave.
Neighbor's running some device that leaks a bunch of radio noise and degrades my wifi network? If your website isn't lean, I'm more likely to get frustrated by the jank and leave.
I'm connecting from a coffee shop with spotty Internet? If your website isn't lean, I'm more likely to get frustrated by the jank and leave.
I've got 50,000 tabs open and they're all eating up a crapton of RAM? If your website isn't lean, I'm more likely to get frustrated by the jank and leave.
I'm browsing while I've got some background task like a big compile or whatever chewing up my CPU? If your website isn't lean, I'm more likely to get frustrated by the jank and leave.
I'm accessing from my mobile device and I'm in a dead zone (they're common in major cities doncha know)? If your website isn't lean, I'm more likely to get frustrated by the jank and leave.
etc. etc. etc.
Meanwhile, while it may not be the style of the times, it's absolutely possible to make very good looking websites with no or very little JavaScript. Frankly, a lot of them look and feel even better than over-engineered virtual DOM "applications" do. Without all that JavaScript in the way, the UX feels downright snappy. https://standardebooks.org/ is a nice example.
In another thread I just read this[1]: "Using a systems language where it's not required is just burning time and money." and was considering a response like yours. Are there actually going to be any noticeable consequences to the jank of Slack, Teams, Gmail, and so on bringing laptops to their knees, damaging user morale, and denting the company's brand reputation?
After you get frustrated by the jank and leave, is there anything else you can do except go back because the people and customers and services you need to deal with are using the janky systems, and all the systems are janky these days - and finding ones which aren't, and predicting they will stay that way, and moving to them, is non-trivial?
This falls down because people with lots of money have cabins in the woods with garbage internet. If they can't load your page from there, there goes that lucrative customer. Or they may be on a ferry, or a small aircraft or etc.
That assumes you can appropriately target the 10% that are profitable. If you can't reliably do that, or the profitable 10% isn't the 10% with the highest-performance computers and connections, then you're probably better served by casting a wide net.
What if, to take advantage of the profitable 10%, you are better off providing a rich page, albeit with a large filesize? I have no evidence to support that claim, other than that large profitable companies generally seem to think this is the case.
The assumption you're making here is that the preferred customers you're talking about are always on quality connections, and that poor connections are limited to undesired customers. The same user who accesses your site over an 800 Mbps wifi connection may need to access it in a spotty 4G scenario.
It does if your business is low margin high volume and the 90% you reference have _enough_ disposable income to buy the basics. I.e. literally all of retail and consumer banking.
Most businesses fall into this category. Facebook, Amazon, and Netflix fall into this category. That’s why their reliability engineers are amongst the highest paid engineers in those businesses. They literally cannot afford to be down.
The ironic thing about your argument is that I’ve found recently that the more something costs, the more difficult it is to procure, ESPECIALLY online. Some of the most expensive items out there simply cannot be purchased online end-to-end.
Looking to rent an apartment online? Easy. Looking to rent or buy a house? A billion moving parts, with half a million of those parts needing to be done face to face.
Software bloat works by the same principle as dynamic range compression does for rock musicians. DRC creates fake loudness by removing a dimension of complexity from audio tracks. Rock musicians complain all the time about how they need to do it in order to sound professional, because everyone else does it. With software it's the same way. Many consumers have come to associate bloat with value. If you sell someone a 300 kB exe file, they might feel robbed if your competitor ships a 30 GB installer that in essence does the same thing.
What's the solution? Don't write rock music. Focus on classical, jazz, opera, ballet, etc. which has a fewer number of fans with more money who have a better education in appreciating your music. That same concept can be applied to software.
Depends on the site. It'd be nice if somebody's individual blog about the cool tech thing they built was accessible to everyone everywhere. If you are a business that can fundamentally only serve customers in city X, then it's perfectly reasonable to use tech that might not work for people outside of city X. There's a lot of space in between those, so use your judgement I guess.
From the other capitalist point of view, wouldn’t it be more ideal to squeeze every single percentage of the market if you could? I think most companies would gladly take a 10% boost in sales.
Furthermore, that’s only the bottom 10% of America, appealing to their audience appeals to maybe the top percentage of India and Asia as well which are giant markets
That depends on how much money it takes to get a bit more of the market and how much money you expect to get from them. Ideally you'd reach the point where those two derivatives are equal, then stop.
I am pretty sure it would, and that is one of the reasons Opera Mini is still very popular in Africa.
One of the things it does is remove the need for hundreds of requests to fetch every single image/script on the page (from the client, that is). Instead there is only one file to fetch over HTTP. That alone makes a huge difference.