I feel like we're missing some of the details/steps here, perhaps because the folks at Corellium have a bunch of existing experience with Apple SoCs and as a result had non-public knowledge about the platform.
For example: determining that interrupts were being routed to FIQ. It's unclear how one would have figured out that was occurring. Or that there was a special AIC (Apple interrupt controller) that needed talking to, and how to talk to it. It feels like there's a lot of "reverse engineer some Apple kernel bits" that's potentially being glossed over.
Edit: or alternately, some debug interface that isn't being mentioned (SWD/JTAG/ITM).
Presumably, they used the macOS/iOS kernel source code as a reference? I'm no expert on kernels, but a quick grep reveals plenty of references to "AIC" and "FIQ", in osfmk/arm64 for example.
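That grep is easy to reproduce, since Apple publishes the XNU source. A sketch, assuming a local checkout at a hypothetical path (the `~/src/xnu` location is made up; Apple's official mirror is github.com/apple-oss-distributions/xnu):

```shell
#!/bin/sh
# Hypothetical checkout location; adjust to wherever you cloned xnu.
XNU="${XNU:-$HOME/src/xnu}"
if [ -d "$XNU/osfmk/arm64" ]; then
    # Which files mention the Apple Interrupt Controller?
    grep -rl "AIC" "$XNU/osfmk/arm64" | head
    # And where does the kernel talk about FIQs?
    grep -rn "FIQ" "$XNU/osfmk/arm64" | head
else
    echo "xnu source not found; clone https://github.com/apple-oss-distributions/xnu"
fi
GREP_DONE=1
```

The guard just keeps the sketch from erroring out on a machine without the source handy.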
I think there's probably some sort of understanding between them and Apple which allows them to do their thing as long as they don't publish too much publicly.
EDIT: In fact, it might not even be that. It doesn't exactly make commercial sense for a company whose entire existence is based on their extensive IP knowledge of Apple hardware to post every detail publicly for others to copy. They are a for-profit organization after all.
> "I think there's probably some sort of understanding between them and Apple which allows them to do their thing as long as they don't publish too much publicly."
You know the macOS kernel for M1 hardware is open source, right? This stuff isn't exactly a proprietary secret.
> To actually connect the USB port inside the M1 to the USB type-C connectors on the back of the Mac Mini, we had to interact with a chip on I2C (which means GPIO and I2C drivers) which has customized firmware. We've seen the protocol for these while building our virtualized models; nothing is a big surprise if you have a bird's eye view of the system.
You're not kidding. This is more of a flex than an explanation.
You start with a problem, then you think about which subsystem might be interacting with that particular behavior. Then you start to poke around in there, trying to intercept, dump code, add debugging, and so on.
Side channels, like timing during different tests, are one possibility. You can also reverse engineer the operating system to understand more about how it may be handling interrupts or unique hardware. Then there's also dumping any onboard firmware memory to learn about the boot process. With some creativity, there are loads of ways to infer enough functionality of the processor/system to develop for it.
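To make the timing-side-channel idea concrete, here's a toy illustration (pure Python, nothing to do with real hardware probing): the "black box" does extra work on one input, and an observer detects that special case purely by measuring elapsed time.

```python
import time

def black_box(value):
    """Pretend opaque system with a hidden special case."""
    if value == 7:               # hidden behavior, unknown to the observer
        sum(range(200_000))      # extra work leaks through as extra latency
    return value * 2

def measure(value, trials=5):
    """Time the black box; take the minimum to reduce scheduler noise."""
    best = float("inf")
    for _ in range(trials):
        t0 = time.perf_counter()
        black_box(value)
        best = min(best, time.perf_counter() - t0)
    return best

slow = measure(7)
fast = measure(3)
print(slow > fast)               # the special-cased input stands out
```

Real reverse engineering is vastly messier, but the principle (infer hidden behavior from externally observable effects) is the same.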
It would be interesting if this project and the Asahi one would post a "wish list" of actions, documentation, and so forth that they wished they had from Apple. Even if Apple never responded, it would help clarify what the challenges are, and what issues might pop up for future variations of Apple silicon. Or maybe some Apple devs would offer up some helpful pointers outside of official sanction.
Here is how they actually set the kernel for booting. It's hidden behind a `curl <url> | sh` in their post:
#!/bin/sh
bputil -d | grep "CustomerKC" | grep -v "absent"
KC=$?
if [ $KC -eq 1 ]
then
    bputil -n -k -c -a -s
    csrutil disable
    csrutil authenticated-root disable
fi
cd /Volumes/Preboot
curl https://$LONGURL/linux.macho > linux.macho
kmutil configure-boot -c linux.macho -v /Volumes/Macintosh\ HD/
echo "Kernel installed. Please reboot"
So, it looks like they download their pre-compiled kernel onto a Preboot volume. This is then set as the booting kernel? How long until there is a grub-like option?
I should add -- I don't know what most of these commands are. It would probably be helpful if they had spelled these out more, but this is all a great place to start.
bputil - Utility to precisely modify the security settings on Apple Silicon Macs.
csrutil - Configure System Integrity Protection (SIP)
kmutil - replaces kextload, kextunload, and other earlier tools for loading and managing kexts
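If you want to see what these tools report before letting that script change anything, here's a hedged sketch of a read-only inspection pass. These commands only exist on (Apple Silicon) macOS, so the sketch bails out gracefully elsewhere:

```shell
#!/bin/sh
# Read-only look at the current boot security state; changes nothing.
if command -v bputil >/dev/null 2>&1; then
    bputil -d                           # display the current boot security policy
    csrutil status                      # is System Integrity Protection enabled?
    csrutil authenticated-root status   # is the sealed system volume enforced?
else
    echo "bputil/csrutil not available (not running on Apple Silicon macOS)"
fi
CHECK_DONE=1
```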
The plan, IIUC, is to use PongoOS[0] as a bootloader which will then be able to load XNU/macOS or Linux. PongoOS already works on M1, so once the Linux support is done there will be a GRUB substitute.
And in doing so it turns off SIP. That's probably necessary to boot a non-Apple kernel, but it means that if you do want to switch back to the Mac side, you've got one of the main security mechanisms turned off.
Security settings are stored per partition, not per machine, so you can have a Linux partition where SIP is disabled and a macOS partition where it is not on the same machine.
From the sounds of it, the Corellium guys and Marcan are being a bit more cooperative now. Also, it sounds like much of the Corellium code is going to be pushed upstream, so there won't be two Linux M1 branches for long.
This hopefully means more of the effort going into Marcan's project can be spent on higher-level things like getting the GPU, power management, and other important bits working well.
> You should also consider supporting the work being done by the folks over at Asahi Linux.
This seems like quite a nice gesture after the bit of drama that apparently happened (and made Corellium look bad, imo).
I hope that eventually they won't duplicate effort as much as they likely are now, but then again it seems like Marcan thinks that their approach (with "insider knowledge") is an inherent risk to the project's legality.
With or without Corellium Asahi will eventually get there, though, and they're pretty certainly the ones who will make the GPU work (thanks to the amazing Alyssa Rosenzweig of Panfrost fame).
I thought the main issue with nouveau at this point was that Nvidia is locking the open drivers out of switching the GPU into its higher performance power states? No amount of optimization is going to be able to get around that.
I love this for a few reasons. First off, we get to see a deeper look into what Apple did, and I'm enjoying how they went outside the established norms... Sometimes the Apple cart (pun not intended) needs overturning.
Secondly, the fact that they are able to dig into this much depth bodes well for competitors reverse engineering similar performance into other platforms.
Computers used to be a "commodity of commodities". The motherboard was made by one manufacturer, the memory was made by another one, the CPU by a different one, etc. etc.
Unless a competitor controls and owns their supply chain like Apple does, I don't see how they can compete; they need those "established norms" otherwise they won't be able to find commodities that work with their final product.
Computers being "commodities of commodities" is kind of a recent phenomenon. For a while, the norm was to do exactly what Apple is doing. It was much more common to design and market your own custom processor or, at the least, custom chipsets coupled with your own firmware/OS, like most unix workstations, non-IBM-PCs like the Amiga, Atari, Macs...
It'll be interesting to see if any other players try to move back to this style in the desktop/laptop PC space.
Personally I think that could be a good thing if it allows us to once again become experimental with personal computing operating systems, an area that has stagnated significantly in the past two decades, due in part to the fact that any new contender has to support a ludicrous amount of hardware to be usable on a wide variety of PCs.
Then again it could all go horribly wrong and leave us with nothing but highly locked-down terminals suitable only for connection to our glorious cloud masters.
The flip side to that is that OSes that are built for specific hardware are only useful so long as that hardware continues to be produced. I don't think we'd have Linux as a useful contender today if it had been locked to a specific company's hardware; there's a reason we're not still using BeOS today for example (Haiku aside).
> there's a reason we're not still using BeOS today for example (Haiku aside).
You're blaming BeOS's failure on the BeBox? Every release of BeOS that was marked as a "full release" (as opposed to a developer preview or a "preview release") could run on Intel hardware, and the BeBox itself was discontinued long before then.
This is a tangent, but I don't understand people who think BeOS was the heir-apparent to consumer/home/personal OSes. It had super half baked networking support and absolutely no concept of multi-user or privilege separation. Similar home OSes didn't enforce privilege separation or multiple users, but at least had the framework -- BeOS essentially just runs everything as root.
It had some neat tricks, but was a pretty underdeveloped OS in some key areas. The fact it never took off isn't that surprising to me, and it's not just because Be was a small company or that it's hard for an underdog to get market share in a crowded space (both of which are also true). It just wasn't really well suited for "general purpose computing" in the same way that e.g. Linux, WinNT, or Darwin are.
> This is a tangent, but I don't understand people who think BeOS was the heir-apparent to consumer/home/personal OSes. It had super half baked networking support and absolutely no concept of multi-user or privilege separation.
What need does a personal computer OS have for multi-user support? The median number of users on a personal computer at any given time is 1. Even in the rare case that two or more people want to use the same computer at separate times and desire to protect each other's files from each other, they can use encryption rather than fake OS-enforced security.
Privilege separation, in order to keep malicious applications from compromising the user's files, has become significantly more important but it wasn't that important back then.
Malicious applications and internet malware weren't raging problems in the late 90s, but they were known and predicted dangers.
You really don't need "user accounts" per se, but a way to restrict process privileges, of which user accounts is just a decent abstraction. On WinNT, user accounts and groups are just abstractions for providing SIDs for your access token -- there are SIDs like NT SYSTEM which don't have a correlated user account.
The important bit is you can create a process that can't accidentally fandango on core and overwrite all the system files. You have to elevate privileges -- even if there's no prompt for the user, the application dev still has to intentionally request admin rights to touch sensitive files.
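That distinction (restricting what a process can touch, separate from any notion of "user accounts") can be illustrated with a toy mediator. This is just an analogy in Python, nothing like a real kernel mechanism; `Token`, `home`, and `system` are all made-up names for the sketch:

```python
import os
import tempfile

class Token:
    """Toy access token: grants file access only under approved directories."""
    def __init__(self, allowed_dirs):
        self.allowed = [os.path.realpath(d) for d in allowed_dirs]

    def open(self, path, mode="r"):
        real = os.path.realpath(path)
        if not any(real.startswith(d + os.sep) for d in self.allowed):
            raise PermissionError(f"token does not grant access to {path}")
        return open(real, mode)

home = tempfile.mkdtemp()     # stands in for the user's home directory
system = tempfile.mkdtemp()   # stands in for protected system files

token = Token([home])         # "unprivileged": can only touch "home"
with token.open(os.path.join(home, "notes.txt"), "w") as f:
    f.write("ok")             # allowed

try:
    token.open(os.path.join(system, "kernel"), "w")
except PermissionError as e:
    print("blocked:", e)      # denied without any "user account" involved
```

The point of the analogy is that the check happens at the boundary, regardless of who "the user" is; user accounts are just one way to decide what goes in the allow list.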
All that aside, though, it's a lot easier to fake "no user accounts" like Windows did and just have no password, than to genuinely have no privilege separation and try to retrofit it on later. If you're trying to market a brand new OS with new APIs that has no compatibility with anything, even in the 90s, I'd expect some kind of plan for at least optional security. Anything at all!
> You really don't need "user accounts" per se, but a way to restrict process privileges, of which user accounts is just a decent abstraction
I contend that they really aren't.
> The important bit is you can create a process that can't accidentally fandango on core and overwrite all the system files.
And here's why: the system files are easily replicable, there's a copy on every OS installation media. My personal files exist in far fewer places and the OS does absolutely nothing to stop a malicious application from damaging them. See also: https://xkcd.com/1200/
We call them user accounts because they are designed around isolating users from each other and the system, but that isn't what we care about in a personal computing context. It is possible to jerry-rig the kind of permission system we actually want using the user account system, but it has a lot of problems in practice because it isn't designed for that. It is not a good abstraction.
Yeah, I'll concede that "decent" is too much credit. Even on "true multi-user systems" the simplistic *nix models don't really work, and you end up seeing people use combinations of virtual machines and containers as ham-fisted sandboxes. I was thinking of it more along the lines that it's at least kind of understandable for the average person. More so than some alternatives, at least.
Still, I think it was a big oversight to provide nothing at all in BeOS. Maybe they could have come up with a better solution than the backwards-compatible status quo.
Or maybe personal computers would have just been better off in an alternate timeline where networking wasn't as pervasive and we still got software on disks, and mailed off for a hard copy of GNU...
AFAIK the original releases of OS X by default did not require a login and simply booted to desktop, same as Win 9x/XP.
The distinction is that these systems, even if they would never ask you for a password or would allow you to forgo a password altogether (like WinXP), still have the notion of user accounts and privilege separation/process isolation. It's a lot easier to fake them not being there than to retrofit them.
Well, I don't know if I'd say it was an "heir apparent" to anything in particular, but I think BeOS was promising. You're looking back at it like an engineer, which is fair; I'm looking back at it as, well, a user, someone who ran it full-time for nearly two years. Sure, I ran into things it couldn't do, but most of the time I was kept enchanted by all the things it could already do.
At the time it was being worked on its main competitors were Windows 95/98 and (pre-Unixification) Mac OS; multiuser capability just wasn't that important. If development had continued, I don't think it would have been out of the question for a future version to include multiuser support. In any case, I really do wish we'd been able to see where BeOS might have been in 2009 if things had gone differently.
>You're blaming BeOS's failure on the BeBox? Every release of BeOS that was marked as a "full release" (as opposed to a developer preview or a "preview release") could run on Intel hardware, and the BeBox itself was discontinued long before then.
Okay, that's fair - BeOS was a bad example. Let's say AmigaOS then, which largely seems to be limited to Amiga hardware. (Though amazingly it still seems to be maintained?)
Depends on who you ask, I think. I'm not into Amiga stuff or that knowledgeable, but I understand AROS is essentially AmigaOS 3 reimplemented on x86/ARM/PPC/M68K. However, it's not seen as being "Amiga" because it doesn't primarily run on "real Amiga hardware"?
Be Inc. launched BeOS R3 in the era of Win95/98 and classic Mac OS; both were commercially successful and had no multiuser support and poor TCP/IP stacks.
Win9x did actually have multiuser support -- it didn't enforce permissions between users locally, but it could at least have multiple profiles with different settings, and could restrict network share access to user accounts with NetNTLM style authentication. The important part is the API was "multiuser aware".
MacOS was inferior to basically every other alternative but it got by because of backwards compatibility with a really large catalogue of Mac software. I don't think an OS like MacOS classic could have made it from scratch.
I didn't mean to sound so negative about BeOS, I just think it could have had a better chance if it had a more solid technical base, especially for an OS made from scratch in the mid 90s. Especially when Linux and BSD were accelerating. I like to imagine something like NeXTSTEP but based on a FreeBSD or Linux core with a from-scratch userland/runtime and API... mmm...
Also innovation in the PC space has pretty much been limited by whatever Intel is willing to accommodate. I'm happy we're beginning to move away from this.
>It'll be interesting to see if any other players try to move back to this style in the desktop/laptop PC space.
It's possible. I wouldn't necessarily bet on it, but it's possible.
Qualcomm is just licensing ARM's standard core designs. If I were a PC OEM, designing my own custom silicon around the ARM core might be tempting, if it allowed me to better differentiate my product.
> If I were a PC OEM, designing my own custom silicon around the ARM core might be tempting, if it allowed me to better differentiate my product.
There's a problem there, though (that Apple doesn't have): a random PC OEM doesn't control the OS. So that means they have to beg Microsoft to add support for their custom architecture, which MS may not want to do, and instead say "we're standardizing on what Qualcomm does, so do that or get bent".
My machines aren't that old, and I'm fairly certain that's the case for both of them. In fact, I'm thinking about upgrading the memory in one of them; the usage indicator on my screen is at 50% right now with only 12 tabs open in a web browser and not much other heavy lifting going on.
What norms are there that actually make any difference? The cool stuff in M1 is the design work into the processing hardware, aren't the peripherals a bit of a hodge-podge of old this and non-standard that?
> aren't the peripherals a bit of a hodge-podge of old this and non-standard that?
There are certainly "old" peripherals in the SoC (the UART comes to mind). There's probably not a lot of norm-busting in this.
There are certainly "non-standard" peripherals in the SoC (if by "non-standard" you mean "nobody else has them"). As the article notes, many if not most other folks "just" check off a couple dozen boxes on the ARM PrimeCell Shopping List (plus a few third-party vendors) and stack the resulting blobs together.
Going your own way and building major components yourself (and all of the major components, including the fabric / interconnects) is very definitely not the "norm".
Much worse, I'd expect. I doubt their porting work does all that much with power management. And there's no GPU driver, so whatever power state it starts up in is what you're going to get.
Also they seem to be booting Ubuntu off of a USB stick, so it's possible that they haven't figured out how to support the internal storage yet, let alone things like audio, suspend/resume, ethernet/WiFi, keyboard/trackpad (for the laptops), etc. So those things would either come up completely shut down (and give you better power stats than you'd expect under normal usage), or come up active, and possibly not have power management enabled at all, killing your battery.
I read pretty recently that Linus wanted to run Linux on the new MacBook Air with M1, but he thought it would be too much of a pain in the ass. I imagine we will see it soon.
> we've been tracking the Apple mobile ecosystem since iPhone 6, released in 2014 with two 64-bit cores
It sounds like they were well prepared. More and more I see preparation as make or break for engineering success, not budget or talent or tech, but silent persistence.
A nice article, but one nitpick. The authors repeatedly describe Apple's approach as "non-standard" when what they mean is "not what Linux does". That's pretty grating. It's also worth mentioning that in several cases (such as Mach-O), Apple's approach predates the very existence of Linux.
I don't think that's entirely accurate. Yes, the bit about flat image vs. PE vs. Mach-O is guilty of that.
But things like their SMP bringup process, the AIC, using FIQs to deliver interrupts, etc. are things that Apple is doing that's not what most ARM SoC vendors do.
Very few drivers have been written, beyond handling low-level SoC stuff. The ones that have been are explicitly mentioned in the linked article, stuff like USB which leverages existing Linux code.
No keyboard, no trackpad, no GPU drivers (besides an unaccelerated framebuffer, setup and left by iBoot). There's some indication of Wi-Fi support, as it's a Broadcom FullMAC device (brcmfmac on Linux). But yeah, I'd say unusable without a USB hub and a myriad of external devices. Again, said as much in the article.
Very recently as well. It's nice to see some rapid progress, certainly coming together. Good to hear the Ethernet is working on the M1 Mac Mini. Do you happen to know what device it is?