I was actually a bit curious how much HN uses, since it's probably the lightest site that I frequent.
According to Brave's dev tools, it looks like just shy of 90 KB on this comment page as of this writing.
Obviously some of that is going to be CSS rules, a small amount of JS (I think for the upvotes and the comment-collapse), but I don't think anyone here called HN "bloated". Even that one page wouldn't fit on Voyager.
Our comments don't really contradict each other. The page size without any linked documents like an external style sheet grew to 140 KB after your comment. But just the text is 30 KB.
HN used to work fine on a Nokia classic phone until last year. Sadly it doesn't any more, since they switched the CA to one that is not in the OS root trust store. If HN didn't enforce HTTPS, it would still work fine.
Nice. Do you just use your 5 as a stationary iPod, or do you dual-carry with a modern device as well? Curious whether you also browse the web on it over your local Wi-Fi now and then, or if that was just a one-off test to check if HN worked.
I use it around the house to Airplay music to various devices.
A number of things don't work, or work in unexpected ways, mostly because Apple doesn't allow me to log in to iCloud with such an old phone.
I can't control lights with the Home app. But Airplay works fine. The phone doesn't know what a HomePod is, but it shows up with a regular generic speaker icon, like the AirMac I have hooked up to my stereo.
Sometimes I have a few minutes to kill, and I pick it up to look at HN. The New York Times web site starts to work, but the login page doesn't load at all. WSJ blocks me at a "verifying the device" screen. WaPo half works. eBay works some, but no pictures. Ditto for Wikipedia.
There are a lot of things you take for granted on a new phone that you only notice when you're using an old one. For instance, in iOS 10 you can't quickly scroll through an entire web page; it's one screen at a time. You can't grab the scroll bar on the side and drag it, either.
And 99.9999% of people don't realize the genius of the camera island. It makes it so much easier to pick up the phone if one end is elevated a bit. With a completely flat phone, you end up dragging/scraping it along the table in order to grip it, which scuffs the surface. And if the table is really smooth, it's surprisingly difficult to lift the phone straight up.
Why can't you log into iCloud? Unless something's changed in the past year, or something broke between iOS 6 and 10, it should work. I'm still signed in on my iPad 2 running iOS 6 (granted, IIRC the root cert expired a while ago, so you need to update that). The 2FA is also a bit weird: you have to append the code after your password (e.g. if your password is password123 and the code is 789, you'd submit password123789).
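To make that quirk concrete, here's a toy sketch (the function name is made up; this just illustrates the append-the-code behavior described above, assuming it works as I remember):

```python
def legacy_2fa_password(password: str, otp_code: str) -> str:
    """Older Apple clients with no separate 2FA prompt expect the
    one-time code concatenated directly onto the password field."""
    return password + otp_code

# legacy_2fa_password("password123", "789") -> "password123789"
```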
I think that might be a thing with Apple's Advanced Data Protection if you have it enabled, which is understandable since the software needs to know how to decrypt the data. If you don't have that enabled, then ignore this and assume Apple decided to kill a whole lot of devices (particularly their Macs; I know a surprising number of people still on 10.15).
There is more information in a typical, single page of comments here than there is on the average webpage. And I'd say a far higher signal to noise ratio (though depending on the topic discussed some will disagree).
A low refresh rate probably still requires the same display-side framebuffer as PSR.
With conventional PSR, I think the goal is to power off the link between the system framebuffer and the display controller and potentially power down the system framebuffer and GPU too. This may not be beneficial unless it can be left off long enough, and there may be substantial latency to fire it all back up. You do it around sleep modes where you are expecting a good long pause.
Targeting 1 Hz sounds like actually planning to clock down the link and the system framebuffer so they can sustain low bandwidth in a more steady-state fashion. Presumably you also want to clock down any app and GPU work so you don't waste time rendering screens nobody will see. This seems just as challenging, i.e. having a "sync to vblank" that can adapt all the way down to 1 Hz?
But why 1 Hz? Can’t the panel just leave the pixels on the screen for an arbitrary length of time until something triggers a refresh? Only a small amount of my screen changes as I’m typing.
When PSR or adaptive-refresh systems suspend or re-clock the link, this requires reengineering of the link and its controls. All of this evolved out of earlier display links, which evolved out of earlier display DACs for CRTs, which continuously scanned the system framebuffer to serialize pixel data into output signals. This scanning was synchronized to the current display mode and only changed timings when the display mode was set, often with a disruptive glitch and resynchronization period. Much of that design cruft is still there, including the whole idea of "sync to vblank".
When you have display persistence, you can imagine a very different architecture where you address screen regions and send update packets all the way to the screen. The screen in effect becomes a compositor. But then you may also want transactional boundaries, so do you end up wanting the screen's embedded buffers to also support double or triple buffering and a buffer-swap command? Or do you just want a sufficiently fast and coordinated "blank and refill" command that can send a whole screen update as a fast burst, and require the full buffer to be composited upstream of the display link?
This persistence and selective addressing is actually a special feature of the MIP screens embedded in watches etc. They have a link mode to address and update a small rectangular area of the framebuffer embedded in the screen. It sends a smaller packet of pixel data over the link, rather than sending the whole screen worth of pixels again. This requires different application and graphics driver structure to really support properly and with power efficiency benefits. I.e. you don't want to just set a smaller viewport and have the app continue to render into off-screen areas. You want it to focus on only rendering the smaller updated pixel area.
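To illustrate why this saves link bandwidth, here's a toy encoding of such a partial-update packet. The command byte and header layout are entirely made up for illustration; real MIP/memory-LCD controllers each have their own wire protocol:

```python
import struct

CMD_RECT_UPDATE = 0x01  # hypothetical command code, not a real protocol

def partial_update_packet(x: int, y: int, w: int, h: int,
                          pixels: bytes) -> bytes:
    """Encode an update for a w*h rectangle at (x, y), 1 byte per pixel."""
    assert len(pixels) == w * h
    # little-endian header: command, x, y, width, height (9 bytes total)
    return struct.pack("<BHHHH", CMD_RECT_UPDATE, x, y, w, h) + pixels

def full_frame_bytes(w: int, h: int) -> int:
    """Cost of resending the entire screen at the same 1 byte/pixel depth."""
    return w * h
```

Updating one 16x24 digit cell on a hypothetical 400x240 panel costs 393 bytes on the wire instead of 96,000 for a full frame, which is the whole point of selective addressing.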
> This seems just as challenging, i.e. having a "sync to vblank" that can adapt all the way down to 1 Hz?
I was under the impression that modern compositors operated on a callback basis where they send explicit requests for new frames only when they are needed.
There are multiple problems here, coming from opposite needs.
A compositor could request new frames when it needs them to composite, in order to reduce its own buffering. But how does it know one is needed? Only in a case like window management, where it decided to "reveal" a previously hidden application output area. This is like the older "damage" signals that told an X application to draw its content again.
But for power-saving, display-persistence scenarios, an application would be the one that knows it needs to update screen content. It isn't because of a compositor event demanding pixels, it is because something in the domain logic of the app decided its display area (or a small portion of it) needs to change.
In the middle, naive apps that were written assuming isochronous input/process/output event loops are never going to be power efficient in this regard. They keep re-drawing into a buffer whether the compositor needs it or not, and they keep re-drawing whether their display area is actually different or not. They are not structured around diffs between screen updates.
It takes a completely different app architecture and mindset to try to exploit the extreme efficiency realms here. Ideally, the app should be completely idle until an async event wakes it, causes it to change its internal state, and it determines that a very small screen output change should be conveyed back out to the display-side compositor. Ironically, it is the oldest display pipelines that worked this way with immediate-mode text or graphics drawing primitives, with some kind of targeted addressing mode to apply mutations to a persistent screen state model.
Think of a graphics desktop that only updates the seconds digits of an embedded clock every second, and the minutes digits every minute. And an open text messaging app only adds newly typed characters to the screen, rather than constantly re-rendering an entire text display canvas. But, if it re-flows the text and has to move existing characters around, it addresses a larger screen region to do so. All those other screen areas are not just showing static imagery, but actually having a lack of application CPU, GPU, framebuffer, and display link activities burning energy to maintain that static state.
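As a toy sketch of that mindset (all names here are hypothetical, not a real toolkit API), a clock widget that only reports dirty rectangles for the digit cells that actually changed:

```python
class DamageTracker:
    """Collects dirty rectangles; a display-side compositor would receive
    only these regions instead of full-frame redraws."""
    def __init__(self):
        self.dirty = []

    def mark(self, x, y, w, h):
        self.dirty.append((x, y, w, h))

    def flush(self):
        regions, self.dirty = self.dirty, []
        return regions

class ClockWidget:
    """Damages only the character cells that differ from the last tick."""
    DIGIT_W, DIGIT_H = 16, 24  # pixel size of one character cell

    def __init__(self, tracker):
        self.tracker = tracker
        self.shown = None  # last rendered "HH:MM:SS" string

    def tick(self, now: str):
        if self.shown is None:
            # first draw: the whole clock area is dirty
            self.tracker.mark(0, 0, len(now) * self.DIGIT_W, self.DIGIT_H)
        else:
            for i, (old, new) in enumerate(zip(self.shown, now)):
                if old != new:
                    # only this one cell needs to go over the link
                    self.tracker.mark(i * self.DIGIT_W, 0,
                                      self.DIGIT_W, self.DIGIT_H)
        self.shown = now
```

Ticking from "12:04:59" to "12:05:00" damages three cells (the minute digit and both second digits); everything else stays in the panel's persistent buffer untouched.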
I mean sure, you raise an interesting point that at low enough refresh rates application architectures and display protocols begin needing to explicitly account for that fact in order for the system as a whole to make use of the feature.
But the other side of things - the driver and compositor and etc supporting arbitrarily low frequencies - seems like it's already (largely?) solved in the real world. To your responsiveness point, I guess you wouldn't want to use such a scheme without a variable refresh rate. But that seems to be a standard feature in ~all new consumer electronics at this point. Redrawing the entire panel when you could have gotten away with only a small patch is unfortunate but certainly not the end of the world.
I'm glad someone else said this because I was right about to.
One of the things I love about Rama 1 is how it squashes the idea of a human-centric universe where everything has to occur for reasons knowable by us. Rama is truly alien, inscrutable and fulfilling a purpose we don't get to understand. Almost as soon as it enters our solar system, it's gone for good, leaving a lot unanswered.
I wonder if this can help with the extremely irritating bug (intentional?) on the X270 where, if you give it a third-party 9-cell battery, it raises CPU_PROCHOT all the damn time and the processor drops below 1 GHz.
Back when I used to have an X270 I had a shell script that ran on boot which poked a register to disable thermal throttling handling. Not at all ideal, but it made the machine usable in the absence of official Lenovo batteries which they stopped manufacturing pretty damn quickly.
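For the curious, a minimal sketch of what such a boot script can do, assuming the usual approach of clearing the bi-directional PROCHOT bit (commonly bit 0 of MSR_POWER_CTL, 0x1FD, on these Intel parts; check the Intel SDM for your exact model) via Linux's /dev/cpu/*/msr interface:

```python
import struct

MSR_POWER_CTL = 0x1FD
BDPROCHOT_BIT = 1 << 0  # assumed bit position; verify against Intel's SDM

def clear_bit(value: int, mask: int) -> int:
    """Return value with the given bit(s) cleared."""
    return value & ~mask

def disable_bd_prochot(cpu: int = 0) -> None:
    """Needs root and the 'msr' kernel module (modprobe msr).
    Reads the 64-bit MSR, clears the BD PROCHOT enable bit, writes it back."""
    path = f"/dev/cpu/{cpu}/msr"
    with open(path, "r+b", buffering=0) as f:
        f.seek(MSR_POWER_CTL)
        (val,) = struct.unpack("<Q", f.read(8))
        f.seek(MSR_POWER_CTL)
        f.write(struct.pack("<Q", clear_bit(val, BDPROCHOT_BIT)))
```

Firmware or the EC can set the bit again at any time, which is why tools like this (and ThrottleStop on Windows) end up re-applying it in a loop.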
You can use ThrottleStop[1] to disable PROCHOT on a non-standard battery. I encountered a similar throttling issue on my Dell Precision laptop when charging it via a 60 W USB-C charger instead of the proprietary 130 W barrel plug. The system triggered a low-power-charger warning and initiated aggressive CPU frequency scaling. With ThrottleStop, I was able to use the 60 W USB-C charger for lightweight tasks (such as web browsing and older games) just fine.
Dell likes to pull this stunt on other devices too, like the 1L OptiPlex desktops I managed for many years. Even though we were using genuine Dell power adapters, if an adapter became slightly unplugged but remained powered, the machine would assert PROCHOT.
This was fine until the machines randomly started setting PROCHOT on genuine power adapters that were fully plugged in. Eventually I just deployed a configuration with PDQ to all the machines that ran ThrottleStop in the background with a configuration that disabled PROCHOT on login.
Unfortunately, I couldn't get it to consistently disable PROCHOT pre-login, so students and teachers in my labs would consistently wait 3-4 minutes while the machines chugged along at 700 MHz as they prepared their accounts.
Nice to finally know what was happening to my X270 after so many years. Good thing it doesn't happen when connected to power, since nowadays it's my home server.
I was the happy owner of a ThinkPad X1 Extreme Gen 1. It had that bug out of the box, with a new original battery. Once it thermal throttles, it never goes back to full GHz. It throttled pretty soon, because of the big CPU in a small chassis. Yes, I had a script like that.
It is still somewhere on a shelf, so maybe its day will come again.
Possibly. Usually this is handled by the embedded controller, and I'm not sure if that has been reverse-engineered or not. You may be able to tristate the GPIO line that tells the CPU that a pin means PROCHOT, which would allow you to ignore the EC's attempts to do this.
Speaking as a former T470 owner: Lenovo included a pretty beefy component from Intel that was supposed to be feature-complete by itself for dynamically managing thermals, including funky ideas like detecting whether you were using the laptop on your legs and reducing thermals then, but giving full power when running plugged in at the desk.
Come delivery time, Lenovo found out that Intel did a half-assed job (not the first time; compare the earlier Rapid Start "hibernation" driver), and the result is the Kaby Lake T470 (and the X270, which shares most of the design) having broken thermals when running anything other than Windows without a special Intel driver. This leads to funny tools that run in a loop poking at an MSR in the CPU, in a constant game of whack-a-mole with a piece of code deep in the firmware.
I feel like taking the approach of ramming the entire current desktop userspace into a phone is a misguided one. I can fully see now why Android reinvented the wheel across the board.
If I were to do a Linux Phone platform, I'd be targeting feature phone levels of functionality to begin with, with a focus on battery life and actually working telephony. I'd be aggressively throwing Wayland/GTK and all that nonsense in the bin just to get something basic working well. Draw straight to the framebuffer if you have to. This doesn't help with the app problem, but it sets a tide mark for quality & performance, and it can be iterated on.
With not-quite current hardware as supported by Pocketblue, performance is not that much of an issue, despite the OnePlus 6 being introduced in 2018. GNOME Shell mobile is quite smooth on it.
That said, if you want to start without the entire Linux desktop stack, you can, and there's even a project that already does something like that IIUC: https://sr.ht/~mil/framebufferphone/
I got mine 2nd hand on eBay as new old stock. £300 for a 55" 4K panel. The only thing I can ding it for is that the backlight local dimming is done in columns which is extremely distracting, so I turn it off. You have to remember this thing is designed to sit in a shop window in direct sunlight.
Ticks all my other boxes though: it powers on as soon as my finger leaves the button on the remote, and the same goes for input switching and any other interactions with the OSD. It's completely braindead, just how I like it.
Oh, they also sent me the model with the touch digitizer installed. So I've got capacitive touch and pen input, it has a USB-B port on the side to connect to a computer.
I've switched to a low carb diet this year and have cut out just about all processed foods. I am considering getting a GLP1 injection privately in the near future. I'm hopeful that when I do get down to my target weight, my diet will remain changed, my habits will have improved and I'll be putting my new mobility to some use.
I don't plan on going cold turkey, I'll taper off the dose slowly and see what happens.
I don't think the bad sound is necessarily deliberate; it's more a casualty of TVs becoming so thin that there's not enough room for a decent cavity inside.
I had a 720p Sony Bravia from around 2006 and it was chunky. It had nice large drivers and a big resonance chamber, it absolutely did not need a sound bar and was very capable of filling a room on its own.
A dedicated GPU is a red flag for me in a laptop. I do not want the extra power draw or the hybrid-graphics silliness. The Radeon Vega in my ThinkPad is surprisingly capable.
Dedicated GPUs in gaming laptops are a necessity for the IT industry: they force manufacturers, assemblers and software makers to be more creative and ambitious about power draw, graphics software, and optimal use of the available hardware. For example, better batteries and multiple performance modes to compensate for the GPU's higher power consumption: a low-power mode enabled by a casual user disables the dedicated GPU and makes the OS and apps run on the integrated GPU instead, while the same user on the same PC can switch to the dedicated GPU when playing a game or doing VFX or modeling work.
Without dedicated GPUs, we consumers will get only weaker hardware, slower software and the slow death of the graphics software market. See the fate of the Chromebook market segment: it is almost dead, and ChromeOS itself got abandoned.
Meanwhile, the same Google which made ChromeOS as a fresh alternative OS to Windows, Mac and Linux, is trying to gobble the AI market. And the AI race is on.
And the result of all this AI focus and veering away from dedicated GPUs (even by market leader Nvidia, for which GPUs are no longer the priority) is not only skyrocketing prices for hardware components but other side effects too. New laptops are being launched with NPUs, which are good for AI but bad for gaming and VFX/CAD-CAM work, yet they cost a bomb. The budget laptop segment has suffered as a result: new budget laptops ship with just 8 GB RAM, a 250/500 GB SSD and a poor CPU, hardware so weak that even basic software like MS Office struggles, and yet even these laptops cost more these days. This kind of deliberate market crippling affects hundreds of millions of students and middle-class customers who need affordable PCs with decent performance.
Yeah, I agree it's not worth it to have both an iGPU and a dedicated GPU, if I'm correct about what you're describing. There are always issues with that setup in laptops. But I'd stay away from all laptops at this point until we get an administration that enforces antitrust. All manufacturers have been cutting so many corners that you're likely to have hardware problems within a year unless it's a MacBook or a business-class laptop.