"... generally infinite willingness to accept web apps..."
Interesting.
I stay in textmode. Hence I do not need an emulator.
If I need graphics I access the files over a VLAN from another computer designed for mindless consumption of graphics, like the locked-down ones they sell today with touchscreens, etc.
My understanding is that emulators like xterm can redraw the screen faster than VGA. I remember that this can make textual interfaces feel snappier.
But I doubt that jobs execute any faster in X11/Wayland/whatever than they do in textmode. I cannot see how the processes would be completing any sooner by virtue of using a graphics-accelerated emulator.
But I could be wrong.
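One rough way to test this, for what it's worth: time the same output-heavy job with its output discarded and with it drawn on screen. Any gap is terminal rendering overhead, not the job itself completing any sooner. A minimal sketch:

time ls -lR / > /dev/null 2>&1    # job time alone
time ls -lR / 2> /dev/null        # job time plus whatever the terminal adds drawing it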
I sometimes use tmux for additional virtual consoles because on the computers I control (custom kernel, devices and userland) I do not use multiple ttys, just /dev/console.
I rarely ever work within tmux. I only use it to run detached jobs. I view screen output from my tty with something like
# Snapshot the pane into a tmux buffer, then replace this shell
# with "tmux showb" to print that buffer.
case $1 in
-B|-E|-S|-t)
    tmux capturep "$@" &&
    exec tmux showb
    ;;
esac
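Hypothetical usage, assuming that snippet lives in a script named, say, "peek" on PATH:

peek -t mysession:0            # print the pane's current contents
peek -S -100 -t mysession:0    # include the last 100 lines of scrollback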
I'm not a seasoned tmux user, though I was a very early adopter. tmux is useful, high-quality software IMHO.
Not sure why I would ever need these slow "web apps".
I guess the third parties controlling the endpoints might be able to utilise the data they gather about users. And I am sure some users appreciate the help. Thus it is a symbiotic relationship.
I am continually making my "tooling" faster by eliminating unnecessary resource consumption. It is an obsession of sorts. Constant improvement.
But given that I am working with text, graphics processing is not something I need. I would not mind being able to run my non-graphical jobs on a fast GPU, but my understanding is that the companies making these processors are not very open.
For example, the GPU in the Raspberry Pi.
Always interesting to hear how others are meeting their computing needs.
The Web is two things. First, it's the promise that a certain runtime with a specific minimum set of capabilities is available almost anywhere. Secondly, it's a staggeringly-huge installed base of stuff written for that runtime.
I don't think there's anything out there that matches the volume of deployed HTML, CSS and JS in the wild.
The horribly sad part is that HTML, CSS and JS are a gigantic Rube Goldberg implementation of "run arbitrary code in a safe sandbox," because the Web is also the world's biggest collection of legacy dependencies.
IMHO, the source of the engineering cringe making everything so much sadder and less than what it could be is that the W3C/WHATWG/IETF/etc are consortiums of large, foghorn-equipped corporations - corporations with vested interests in advertising, consumer retention, and strong guarantees of indefinite consumption.
I've never really gotten the reasoning behind the technical directions the Web's gone in; a lot of things have stuck and worked, but so many more have flopped, yet the associated implementations for both the successes and failures have to be maintained going forward indefinitely.
The iterative pace on the various Web standards is another problem - things go so fast that the implementations can never get really really good, and Chrome uses literally all of your memory (whether you have 2GB or 20GB, apparently!) as a result.
---
Regarding $terminal_emulator being faster than VGA, I can emphatically state that virtually all of them are disastrously slow. aterm had some handcoded SSE circa 2001 to support fake window shadowing (fast-copy the portion of the root window image underneath the terminal window whenever the window is moved; use SSE to darken the snagged area; apply it as the terminal window's background), but besides that sort of thing, terminal emulators have more or less never been bastions of speed.
If by VGA you mean true textmode (the 720x400 kind, generated entirely by the video card), I don't think there's much that's faster than that. Throw setfont and the Cyr_a8x8 font in there to get 80x50 (I think it is, or 80x43) and you have something hard to beat, since spamming ASCII characters at the video card's memory will always be faster than addressing pixels in a framebuffer.
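If anyone wants to reproduce that, it's a one-liner on a Linux VT, assuming the kbd package's console fonts are installed:

setfont Cyr_a8x8    # load an 8x8 cell font; the text grid grows to match
stty size           # now reports 50 80 (or 43 80, depending on the mode)
setfont             # no argument restores the default console font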
Which is why GPU-accelerated terminal emulators are so interesting: they're eliminating as many software/architectural bottlenecks as possible to make those expensive framebuffer updates as quick as possible. It's definitely the way to go; games are generally rated on their ability to hold >60fps at 1080p (and increasingly 2K/4K/8K), so the capacity is really there.
The i3 window manager could be considered one of many implementations comparable to tmux. It's not perfect (it's not as configurable as I'd prefer), but it'd get you X and the ability to view media more easily.
I do really appreciate the tendency to want to view a computer as an industrial terminal appliance though. Task switching is still best done by associating tasks with different objects in physical space, so keeping the computer for terminal work and keeping tablets (et al) for other tasks does make legitimate sense.
---
Regarding data usage, that's a tricky one - most successful Internet companies provide some kind of service that necessarily requires the collection of arguably private information in exchange for a novel convenience. As an example, mapping services don't truly need your realtime location but having that means that they can stream the most relevant tiles of an always-up-to-date map to you. The alternative is storing an entire world map, or subsetted map(s) for the locations you think you'll need, but that'll kill almost all the storage on phones without massive SD cards.
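To put a hedged number on that (15KB per 256x256 tile is an assumed average, so treat this as a sketch): a slippy-map tile pyramid quadruples at every zoom level, and full world coverage adds up fast.

echo '(4^16 - 1) / 3' | bc                     # tiles for zoom levels 0-15: ~1.4 billion
echo '(4^16 - 1) / 3 * 15 / 1024 / 1024' | bc  # at 15KB each: ~20000GB, i.e. ~20TB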
---
I find elimination of unnecessary resource consumption a fun concept to explore, almost to the point of obsession. In this regard I often come back to Forth. I was reading this yesterday - http://yosefk.com/blog/my-history-with-forth-stack-machines.... - and it explores how Forth is essentially the mindset of eliminating ALL but the smallest functional expression of the irreducible complexity of an idea, often to the point of insanity. It's not a register-based language so it's never going to beat machine code for any modern processor, but it's a very very interesting concept to seriously explore, at least. (And I say that as someone interested in actually using Forth for something practical, as described in that article.)
---
AFAIK, the RPi actually boots off the GPU, or at least the older credit-card-sized ones did. I'm not sure about the current versions.
ATI released some documentation about their designs a while back with the subtext of enabling open-source driver development. I don't think that panned out as much as was hoped.
My understanding is that Intel has both NVIDIA and AMD beat nowadays when it comes to Linux graphics support; the two former vendors still heavily rely on proprietary drivers (on Linux) for a lot of functionality.
Sadly, since they both have to successfully compete in the market, they're unlikely to release their hardware designs in significant detail anytime soon. (Even if they liked the idea of merging, a single gigantic monopoly carries a lot of risk, and the resulting behemoth would likely be impossible for Intel to compete with.)
So, learning OpenCL and CUDA (depending on the GPU you have) is likely your best bet. There are extant established ecosystems of resources and domain knowledge for both implementations, and the relevant code is not too tragically licensed AFAIK.
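If you go down that road, a quick sanity check of what a given box exposes might look like the following (clinfo and nvidia-smi are separate, optional packages, so this assumes they're installed):

clinfo | grep -iE 'platform name|device name'    # enumerate OpenCL platforms/devices
nvidia-smi    # NVIDIA-only management tool; its presence implies a working CUDA stack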
> The alternative is storing an entire world map, or subsetted map(s) for the locations you think you'll need, but that'll kill almost all the storage on phones without massive SD cards.
And that's what HERE Maps does best, without needing massive storage.
Do you know of any forks or clean implementations of browsers which cut out legacy support more aggressively and/or are tuned for performance? Something like Chrome with less overhead because it doesn't bother to support deprecated features.
Unfortunately there's currently nothing out there that generally meets all of the points you've touched on. There are some projects that tick one or two boxes, but not all of them.
Dillo parses a ridiculously tiny subset of HTML and CSS, and I used it to browse the Web between 2012 and 2014, when my main workstation was an 800MHz Duron. Yes, I used it as my main browser for two years. Yes, I was using a 19-year-old computer 2-4 years ago. :P
Its main issue was that it would crash at inopportune times, taking all my open tabs with it... :D
The one thing it DID do right (by design) was that the amount of memory it needed to access to display a given tab was largely isolated per tab, and it didn't need to thrash around the entire process space like Chrome does, meaning 5GB+ of process image could live in swap while the program remained entirely usable. This meant I could open 1000+ tabs even though I only had 320MB RAM; switching to a tab I'd last looked at three weeks ago might take 10-15 seconds (because 100MHz SDRAM) but once the whole tab was swapped in everything would be butter-smooth again. (By "butter-smooth" I mean "20 times faster than Chrome" - on a nearly-20-year-old PC.)
I will warn you that the abstract art that the HTML/CSS parser turns webpages into is an acquired taste.
---
Another interesting project in a significantly more developed state is NetSurf, a browser that aims to target HTML5, CSS3 and JS using pure C. The binary is about 3MB right now. The renderer's quality is MUCH higher than Dillo's, but it's perceptibly laggier. This may just be because it's using GTK instead of something like FLTK; I actually suspect firmly kicking GTK out the window will improve responsiveness very significantly, particularly on older hardware.
I have high hopes for this project, but progress is very slow because it's something like a 3-6 person team; Servo has technically already superseded it and is being developed faster too. (Servo has a crash-early policy instead of trying to be a usable browser, which is why I haven't suggested it.)
---
The most canonical interpretation of what you've asked for that doesn't completely violate the principle of least surprise ("where did all the CSS go?!?! why is the page like THAT? ...wait, no JS!?? nooo") would have to be stock WebKit.
There are sadly very few browsers that integrate canonical WebKit; GNOME's web browser (Epiphany) apparently does, as does Midori. Thing is, you lose WebRTC and a few other goodies, and you have to lug around a laundry list of "yeah, Safari doesn't do that" (since you're using Safari's engine), but I keep hearing stories of people who switch from Chrome back to Safari on macOS with uniformly positive noises about their battery life and system responsiveness.
I've been seriously think-tanking how to build a WebKit-based web browser that's actually decent, but at this exact moment I'm keeping a close eye on Firefox. If FF manages to keep the bulk of its extension repository in functioning order and go fully multiprocess, the browser may see a bit of a renaissance, which would be really nice to witness.