Hacker News

I love this for a few reasons. First off, we get to see a deeper look into what Apple did, and I'm enjoying how they went outside the established norms... Sometimes the Apple cart (pun not intended) needs overturning.

Secondly, the fact that they are able to dig into this much depth bodes well for competitors to reverse engineer similar performance into other platforms.



Computers used to be a "commodity of commodities". The motherboard was made by one manufacturer, the memory was made by another one, the CPU by a different one, etc. etc.

Unless a competitor controls and owns their supply chain like Apple does, I don't see how they can compete; they need those "established norms" otherwise they won't be able to find commodities that work with their final product.


Computers being "commodities of commodities" is kind of a recent phenomenon. For a while, the norm was to do exactly what Apple is doing. It was much more common to design and market your own custom processor or, at the least, custom chipsets coupled with your own firmware/OS, like most unix workstations, non-IBM-PCs like the Amiga, Atari, Macs...

It'll be interesting to see if any other players try to move back to this style in the desktop/laptop PC space.


Personally I think that could be a good thing if it allows us to once again become experimental with personal computing operating systems, an area that has stagnated significantly in the past two decades, due in part to the fact that any new contender has to support a ludicrous amount of hardware to be usable on a wide variety of PCs.

Then again it could all go horribly wrong and leave us with nothing but highly locked-down terminals suitable only for connection to our glorious cloud masters.


The flip side to that is that OSes that are built for specific hardware are only useful so long as that hardware continues to be produced. I don't think we'd have Linux as a useful contender today if it had been locked to a specific company's hardware; there's a reason we're not still using BeOS today for example (Haiku aside).

I'm more excited about things like rump kernels for hardware support across disparate OSes: https://en.m.wikipedia.org/wiki/Rump_kernel


> there's a reason we're not still using BeOS today for example (Haiku aside).

You're blaming BeOS's failure on the BeBox? Every release of BeOS that was marked as a "full release" (as opposed to a developer preview or a "preview release") could run on Intel hardware, and the BeBox itself was discontinued long before then.

This is a tangent, but I don't understand people who think BeOS was the heir-apparent to consumer/home/personal OSes. It had super half-baked networking support and absolutely no concept of multi-user operation or privilege separation. Similar home OSes didn't enforce privilege separation or multiple users either, but at least had the framework -- BeOS essentially just runs everything as root.

It had some neat tricks, but was a pretty underdeveloped OS in some key areas. The fact it never took off isn't that surprising to me, and it's not just because Be was a small company or that it's hard for an underdog to get market share in a crowded space (both of which are also true). It just wasn't really well suited for "general purpose computing" in the same way that e.g. Linux, WinNT, or Darwin are.


> This is a tangent, but I don't understand people who think BeOS was the heir-apparent to consumer/home/personal OSes. It had super half baked networking support and absolutely no concept of multi-user or privilege separation.

What need does a personal computer OS have for multi-user support? The median number of users on a personal computer at any given time is 1. Even in the rare case that two or more people want to use the same computer at separate times and desire to protect each other's files from each other, they can use encryption rather than fake OS-enforced security.

Privilege separation, in order to keep malicious applications from compromising the user's files, has become significantly more important, but it wasn't that important back then.


Malicious applications and internet malware weren't raging problems in the late 90s, but they were known and predicted dangers.

You really don't need "user accounts" per se, but a way to restrict process privileges, of which user accounts is just a decent abstraction. On WinNT, user accounts and groups are just abstractions for providing SIDs for your access token -- there are SIDs like NT SYSTEM which don't have a correlated user account.

The important bit is you can create a process that can't accidentally fandango on core and overwrite all the system files. You have to elevate privileges -- even if there's no prompt for the user, the application dev still has to intentionally request admin rights to touch sensitive files.
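The token-and-SID model described above can be sketched as a toy access check. This is an illustrative Python model only, not the real Windows API; the SID strings, ACL layout, and helper names are all invented for the example:

```python
# Toy model of NT-style access checks: a process carries an access token
# holding SIDs (security identifiers); objects carry an ACL granting
# rights to specific SIDs. SID strings here are invented for illustration.

READ = "read"
WRITE = "write"

class Token:
    def __init__(self, sids):
        self.sids = set(sids)

def access_check(token, acl, desired_right):
    """Grant access if any SID in the token is allowed the desired right."""
    return any(desired_right in acl.get(sid, set()) for sid in token.sids)

# System files: only the SYSTEM-like SID may write; everyone may read.
system_file_acl = {
    "S-1-5-18": {READ, WRITE},  # modeled loosely after NT AUTHORITY\SYSTEM
    "S-1-1-0": {READ},          # modeled loosely after the Everyone group
}

user_token = Token(["S-1-5-21-1000", "S-1-1-0"])  # ordinary user process
elevated_token = Token(["S-1-5-18", "S-1-1-0"])   # after requesting elevation

print(access_check(user_token, system_file_acl, WRITE))      # prints False
print(access_check(elevated_token, system_file_acl, WRITE))  # prints True
```

The point the model captures is that "user account" never appears in the check itself: elevation just means the process's token picks up a more privileged SID before it can fandango on the system files.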

All that aside, though, it's a lot easier to fake "no user accounts" like Windows did and just have no password, than to genuinely have no privilege separation and try to retrofit it on later. If you're trying to market a brand new OS with new APIs that has no compatibility with anything, even in the 90s, I'd expect some kind of plan for at least optional security. Anything at all!


> You really don't need "user accounts" per se, but a way to restrict process privileges, of which user accounts is just a decent abstraction

I contend that they really aren't.

> The important bit is you can create a process that can't accidentally fandango on core and overwrite all the system files.

And here's why: the system files are easily replicable; there's a copy on every OS installation medium. My personal files exist in far fewer places, and the OS does absolutely nothing to stop a malicious application from damaging them. See also: https://xkcd.com/1200/

We call them user accounts because they are designed around isolating users from each other and the system, but that isn't what we care about in a personal computing context. It is possible to jerry-rig the kind of permission system we actually want using the user account system, but it has a lot of problems in practice because it isn't designed for that. It is not a good abstraction.
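For illustration, here's roughly what that jerry-rigging looks like; Android does something similar by giving each installed app its own Unix UID. This is a toy Python sketch, with all paths, UID ranges, and names invented:

```python
# Toy sketch of repurposing a per-user permission model for per-application
# isolation, roughly in the style of Android's per-app Unix UIDs.
# The UID base, app names, and paths are invented for illustration.

APP_UID_BASE = 10000  # pretend UIDs from here up are reserved for apps

installed_apps = ["browser", "editor", "game"]

# Assign each application its own synthetic "user account".
app_uids = {app: APP_UID_BASE + i for i, app in enumerate(installed_apps)}

def can_write(app, path, file_owners):
    """An app may write only files owned by its own synthetic UID."""
    return app_uids[app] == file_owners[path]

file_owners = {
    "/data/browser/cookies": app_uids["browser"],
    "/data/editor/notes.txt": app_uids["editor"],
}

print(can_write("browser", "/data/browser/cookies", file_owners))  # prints True
print(can_write("game", "/data/editor/notes.txt", file_owners))    # prints False
```

The gap is visible even in the toy: this isolates apps from each other's private data, but the human's own documents would still all belong to one owner, so nothing in the user-account machinery itself mediates which app may touch them.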


Yeah, I'll concede that "decent" is too much credit. Even on "true multi-user systems" the simplistic *nix models don't really work, and you end up seeing people use combinations of virtual machines and containers as ham-fisted sandboxes. I was thinking of it more along the lines that it's at least kind of understandable for the average person. More so than some alternatives, at least.

Still, I think it was a big oversight to provide nothing at all in BeOS. Maybe they could have come up with a better solution than the backwards-compatible status quo.

Or maybe personal computers would have just been better off in an alternate timeline where networking wasn't as pervasive and we still got software on disks, and mailed off for a hard copy of GNU...


I know people who were very upset that Mac OS X required login/password. The "personal" part of "personal computer" was important to some people.


AFAIK the original releases of OS X by default did not require a login and simply booted to desktop, same as Win 9x/XP.

The distinction is that these systems, even if they would never ask you for a password or would allow you to forgo a password altogether (like WinXP), still have the notion of user accounts and privilege separation/process isolation. It's a lot easier to fake them not being there than to retrofit them.


Well, I don't know if I'd say it was an "heir apparent" to anything in particular, but I think BeOS was promising. You're looking back at it like an engineer, which is fair; I'm looking back at it as, well, a user, someone who ran it full-time for nearly two years. Sure, I ran into things it couldn't do, but most of the time I was kept enchanted by all the things it could already do.

At the time it was being worked on its main competitors were Windows 95/98 and (pre-Unixification) Mac OS; multiuser capability just wasn't that important. If development had continued, I don't think it would have been out of the question for a future version to include multiuser support. In any case, I really do wish we'd been able to see where BeOS might have been in 2009 if things had gone differently.


>You're blaming BeOS's failure on the BeBox? Every release of BeOS that was marked as a "full release" (as opposed to a developer preview or a "preview release") could run on Intel hardware, and the BeBox itself was discontinued long before then.

Okay, that's fair - BeOS was a bad example. Let's say AmigaOS then, which largely seems to be limited to Amiga hardware. (Though amazingly it still seems to be maintained?)


Depends on who you ask, I think. I'm not into Amiga stuff or that knowledgeable, but I understand AROS is essentially AmigaOS 3 reimplemented on x86/ARM/PPC/M68K. However, it's not seen as being "Amiga" because it doesn't primarily run on "real Amiga hardware"?


Be Inc. launched BeOS R3 in the era of Win95/98 and classic Mac OS, both of which were commercially successful and had no multiuser support and poor TCP/IP stacks.


Win9x did actually have multiuser support -- it didn't enforce permissions between users locally, but it could at least have multiple profiles with different settings, and could restrict network share access to user accounts with NetNTLM style authentication. The important part is the API was "multiuser aware".

MacOS was inferior to basically every other alternative but it got by because of backwards compatibility with a really large catalogue of Mac software. I don't think an OS like MacOS classic could have made it from scratch.

I didn't mean to sound so negative about BeOS, I just think it could have had a better chance if it had a more solid technical base, especially for an OS made from scratch in the mid-90s, when Linux and the BSDs were accelerating. I like to imagine something like NeXTSTEP but based on a FreeBSD or Linux core with a from-scratch userland/runtime and API... mmm...


Also innovation in the PC space has pretty much been limited by whatever Intel is willing to accommodate. I'm happy we're beginning to move away from this.


>It'll be interesting to see if any other players try to move back to this style in the desktop/laptop PC space.

It's possible. I wouldn't necessarily bet on it, but it's possible.

Qualcomm is just licensing ARM's standard core designs. If I were a PC OEM, designing my own custom silicon around an ARM core might be tempting, if it allowed me to better differentiate my product.


> If I were a PC OEM, designing my own custom silicon around an ARM core might be tempting, if it allowed me to better differentiate my product.

There's a problem there, though (that Apple doesn't have): a random PC OEM doesn't control the OS. So that means they have to beg Microsoft to add support for their custom architecture, which MS may not want to do, and instead say "we're standardizing on what Qualcomm does, so do that or get bent".


Is that not still the case?

My machines aren't that old, and I'm fairly certain that's the case for both of them. In fact, I'm thinking about upgrading the memory in one of them: the usage indicator on my screen is at 50% right now with only 12 tabs open in a web browser and not much other heavy lifting going on.


What norms are there that actually make any difference? The cool stuff in the M1 is the design work that went into the processing hardware; aren't the peripherals a bit of a hodge-podge of old this and non-standard that?


> aren't the peripherals a bit of a hodge-podge of old this and non-standard that?

There are certainly "old" peripherals in the SoC (the UART comes to mind). There's probably not a lot of norm-busting in this.

There are certainly "non-standard" peripherals in the SoC (if by "non-standard" you mean "nobody else has them"). As the article notes, many if not most other folks "just" check off a couple dozen boxes on the ARM PrimeCell Shopping List (plus a few third-party vendors) and stack the resulting blobs together.

Going your own way and building major components yourself (and all of the major components, including the fabric / interconnects) is very definitely not the "norm".



