I know nothing about PowerPC, and I base that solely on the information on how the MMU works in this blog post, but is just taking the first set of pages found really the correct fix? Shouldn’t it instead look up the VSID from the segment register for the lowest address, and use the PTEG matching that?
Again, I never looked at how Power works in that regard, so I might have entirely misunderstood.
But a great and fun read nonetheless! (I believe this is evidenced by me thinking about it in such detail :) )
Yes, this seems highly dubious. Basically, instead of properly figuring out how to locate its own page table mapping, the code takes a shortcut: it blindly searches the entire page table for all mappings from any process to the target pages, assuming it will be the sole owner. Then it finds there's another one.
The other mapping is an identity mapping (PA==EA), as evidenced when the attempt to map EA 0 using that VSID found a PTE that mapped it to PA 0. This screams kernel-internal mapping. Lots of OSes map all RAM 1:1 to some extent.
The correct fix here is obviously to read the segment register instead of relying on this whole linear-search heuristic. However, if that is not trivial for some reason, then the next best solution would be to assume that the only other possible mapping for the pages is indeed a 1:1 identity map (they should really investigate where it comes from), and filter out any PTEs found during the search that map 1:1, since no legitimate userspace process should have these freshly allocated pages mapped down there anyway. Of course, this then fails if SheepShaver ends up with EA==PA on the malloc out of sheer luck :-)
Why did this work with less RAM? Well, PPC uses hashed page tables, so they do not contain a full view of all MMU mappings. Instead, the mappings thrash on collisions. My guess is that boxes with less RAM (and correspondingly smaller hash tables) have enough page table churn that those fixed mappings often get evicted. Either that or the fixed mapping doesn't cover all RAM, and for whatever reason never collides with the allocated pages on lower-RAM systems.
When he launched SheepShaver after playing Doom for a while, either the game had caused enough page table churn to evict those identity mappings, or the allocation ended up somewhere else where it didn't collide with any identity mappings that had ever existed, and thus SheepShaver worked.
Here's the thing though: without knowing how the rest of the OS handles the HPT, this entire approach is unlikely to be stable. The author says "we will take over that memory mapping for the life of the process", but unless something guarantees those PTEs will never be evicted, the OS is well within its rights to throw them out. Then when SheepShaver next tries to use those addresses, they will fault, the OS will know there is no legitimate mapping for them, and things explode.
At least if my memory serves me right; last time I had to deal with hashed page tables was with PS3 Linux (and that was 64-bit PPC), and I've only ever used BAT mode on 32-bit (Wii) so I'm a bit rusty on this stuff :-)
Thanks for confirming! I haven't thought much more about this, but even the "let's confirm that this is indeed a 1:1 identity map" approach seems brittle. And I don't know if the first few pages of physical memory are used for anything special on this architecture, but if not, you might run into the (extraordinarily rare) situation where the first 2 virtual pages are legitimately mapped to the first 2 physical pages simply because those were free. Granted, that's a bit like winning the lottery... But all in all, barring any "for some reason"s you mentioned, properly reading the segment register and going through all the motions really seems like the best approach.
It's fun what can be inferred from just a few paragraphs in a blog, I hope I didn't fail to take super important details of the architecture into account. :)
And good point with the eviction by the OS, I had somehow assumed that the driver was invoking steps to lock those PTEs down, but that may not actually be the case...
Fair enough 8) A few years back I recall Alan Cox (et al.) demonstrating Linux booting on a hard disc. Not from a hard disc but actually on one - i.e. most peripherals have enough RAM etc. to boot and run a shell.
That aside, have you seen what Jeff Minter and co could do with a C64? You have around 40KB of RAM to play with. You've got the SID for sound, and sprites are a thing. Thank the good $DEITY that your storage isn't as noisy as a Spectrum's and you have a decent amount of memory to work with. Now make a game to run on this thing!
I can see a Quickshot II across the dining room (WFH.) Hmmmm, when lunch time turns up I think some Mutant Camels will need a good kicking.
Ha, yes, I have fond memories of the days when people wrote a serviceable chess program in 1KB that you typed in from a magazine [1], and I compare that to nowadays, when a single Firefox tab can use 1GB+.
Also some of the optimisations (The Hobbit on the BBC Micro actually using screen memory to load the game) and the "friendly rivalry" between developers and copiers (I remember trying to hack Frak! on the BBC Micro and it detecting this and playing the Jolly Roger).
No I didn't see that from Jeff Minter, I was never a C64 guy. I had an Atari 800XL in those days (and in fact I kept it all this time - though it needs new RAM chips). I did have Mutant Camels though!!
I did see the harddrive Linux thing yes, I think it was shown at SHA2017 too.
And I was just joking about the controller having a C64 indeed :) Of course functionally the C64 is more versatile despite having less resources.
Consumer-grade RAM was ~$50/MB for a hot minute in the early-to-mid '90s. Lots of machines shipped with 4MB, but 8 was the sweet spot for basic tooling around in Windows. I worked in tech support for a small laptop manufacturer in Ohio and we sold systems with 64MB. Nearly $5k for a laptop; it blew my mind that people would pay that much.
I had that Voodoo 2 card as well, it was the bees knees.
With my earliest computers, I kept buying laser printers with more RAM than the computer attached to them (I did subsequently upgrade the computers). I still remember what a strange and amazing thing it was when I first had a computer with 1G of RAM.
> With my earliest computers, I kept buying laser printers with more RAM than the computer attached to them (I did subsequently upgrade the computers).
I always thought it was funny that the original Apple LaserWriters had more RAM and a faster processor than the computers they were attached to: https://en.wikipedia.org/wiki/LaserWriter
Ahhh yes. I loved BeOS. I remember how exciting I thought it was when it came to Intel x86, for free. As a 12 year old, I was an early promoter of platform agnosticism.
I installed it on a hand-me-down eMachines; aside from figuring out some voodoo magic to get my 33K modem to work, it was a good OS. I do miss technology in the 90s.
I too miss the Cambrian explosion of OSs in the 90s. It felt like so much potential before it all became dominated by Windows and then the OS itself began to vanish with sealed devices like smartphones.
Would you rather have a NeXTCube instead of your Mac, or rather a BeBox? Run some networking servers on your Solaris machine and do visual rendering on your SGI? Ah, it was fun times.
Haiku is a modern successor to the BeOS. It is backwards compatible, but future-looking as well. I'm optimistically expecting that it will be an option alongside Linux as a production alternative OS for users, one day.