Yeah, it's the question I guess: too little, too late?
Because it's hard to beat the x86 mammoth for so many reasons (off the top of my head):
- huge market share in servers/workstations
- Intel has more resources than pretty much anyone else
- AMD is now back in the game and has started a core/performance/price war with Intel
- x86 is "cheap"
- market share for "cheaper" stuff will probably be taken by ARM and RISC-V
- so much time has been invested in optimizing compilers, code, and so on for x86, because that's what everyone has
- the Torvalds argument, which is that developers "will happily pay a bit more for x86 cloud hosting, simply because it matches what you can test on your own local setup, and the errors you get will translate better." So as long as you don't have cheap POWER workstations, it'll be a moot point. I remember working on an AlphaPC, and pretty much nothing was 64-bit clean back then; it was a huge mess (there's a sketch of that bug class right after this list). Now that part is solved, but not everything else...
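To make "64-bit clean" concrete, here's a minimal sketch of the classic bug class from that era; it's a hypothetical example (not from any particular codebase), but it's exactly the kind of code that worked on 32-bit x86 and silently broke on LP64 systems like the Alpha:

```c
/* Hypothetical "not 64-bit clean" bug: assuming a pointer fits in an int.
 * Fine on ILP32 x86; silently truncates on LP64 systems (Alpha back then,
 * x86-64/POWER/AArch64 today). */
#include <inttypes.h>
#include <stdio.h>

int main(void) {
    char buf[16];
    char *p = buf;

    int bad = (int)(intptr_t)p;    /* compiles, but drops the upper 32 bits */
    uintptr_t good = (uintptr_t)p; /* an integer type that can actually hold a pointer */

    printf("truncated: 0x%x\n", (unsigned)bad);
    printf("full:      0x%" PRIxPTR "\n", good);
    return 0;
}
```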
I definitely get the appeal for the Googles of the world to challenge Intel, and for niche (internal) products, and for myself, because honestly I don't really need an Intel-compatible CPU. But in the long run, I'm not sure it'll go anywhere...
Well, the local machines are coming, it's totally feasible to have a Blackbird at home and host at IntegriCloud…
but both of those are really expensive, so instead I have a MACCHIATObin at home & AWS Graviton in the cloud. ARM is winning :P
> optimizing compiler, code and so on
Fun fact, IBM is paying large amounts of cash on BountySource for SIMD optimizations of various things for POWER: https://www.bountysource.com/teams/ibm
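For a sense of what those bounties are after, here's a minimal sketch of a VSX (POWER SIMD) version of a trivial scale-by-k loop; it assumes GCC or Clang on ppc64le with -mvsx, the function names are made up, and tail handling is omitted to keep it short:

```c
/* Scalar baseline vs. a VSX version processing 4 floats per iteration.
 * Assumptions: ppc64le, compiled with -mvsx (or -mcpu=power9); n is a
 * multiple of 4 in the VSX version to keep the sketch short. */
#include <altivec.h>
#include <stddef.h>

void scale_scalar(float *dst, const float *src, float k, size_t n) {
    for (size_t i = 0; i < n; i++)
        dst[i] = src[i] * k;
}

void scale_vsx(float *dst, const float *src, float k, size_t n) {
    vector float vk = vec_splats(k);
    for (size_t i = 0; i < n; i += 4) {
        vector float v = vec_xl(0, &src[i]);  /* unaligned VSX load  */
        vec_xst(vec_mul(v, vk), 0, &dst[i]);  /* unaligned VSX store */
    }
}
```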
But ARM is winning again: many things, especially the more user-facing ones, are already optimized thanks to smartphones. For POWER, the TenFourFox author is, I think, still working on SpiderMonkey's baseline JIT. For ARM (AArch64), IonMonkey (the full optimizing JIT) is already enabled and developed by Mozilla itself, thanks to both Android phones and the new Windows-on-Qualcomm laptops: https://bugzilla.mozilla.org/show_bug.cgi?id=1536220
> Well, the local machines are coming, it's totally feasible to have a Blackbird at home and host at IntegriCloud…
Yeah, but there's a HUGE but: the motherboard, CPU (1S/4C/16T), and heatsink alone are $1.4k, with no RAM, no case, no HD, no nothing (I found a guy who spec'ed one out at $2.1k with everything you'd need for a reasonable workstation). So unless you have a really good reason or interest (political, because POWER, your company runs on POWER, "f*ck x86", ...) to run your code on POWER, I don't see why you'd spend that much when you could get better for a lot less.
And the only way it'll get cheaper is to mass-produce it. Let's be realistic: as much as I'd want a POWER workstation or laptop (hey, there were SPARC and ThinkPad PowerPC laptops, so why not), I won't be holding my breath...
Yeah, I already mentioned that because of that price, my "fuck x86" machine is ARMv8 :)
(okay, not only because of the price, also because I just like the A64 ISA and UEFI)
The SolidRun MACCHIATObin is not nearly as powerful — it's ultrabook-grade performance, not server-grade — but it works fine for coding and browsing, and it's also quite open: the only blob in the firmware is something tiny and irrelevant (and, I'm pretty sure, for some secondary processor); everything that runs on the ARM cores post-ROM (including the RAM training code) I have built from source.
Well, the CPUs are in the same ballpark (<500 bucks for the 4-core POWER9 ~= launch price of the Ryzen 7 1800X) but the boards are outrageous — $1100 for the Blackbird is almost twice as expensive as ridiculous E X T R E M E boards like the ASUS ROG ZENITH EXTREME for Threadripper (and a normal decent board for desktop Ryzen is in the $250 area).
Yeah, it's low volume and Raptor needs to pay their employees — but $1100 for a mainboard? Come on. Maybe they should have dropped PCIe Gen 4 from the Blackbird at least.
I think you are overlooking one of the main reasons for avoiding Intel and AMD -- the ME and PSP, respectively. Intel and AMD each hold a master "skeleton" key that can basically unlock any of their computers post-sale, while simultaneously using that key to ensure that you cannot modify, replace, or remove the black-box firmware in question.
If you trust Intel and AMD, without an SLA, to keep your data private, all I'll say is that's quite naive. Even the HDCP master key leaked; do you really expect the ME and PSP signing keys never to fall into the wrong hands?
Yes, the mainboards are expensive. That's the price of making them blob-free and still retaining high performance. Blackbird lowers that barrier to entry some as well.
Again, Rome has a mandatory PSP blob that cannot be removed (any UEFI toggles that say otherwise are not accurate -- the PSP must run before the x86 cores even come out of reset). If you're OK with that loss of control, my gut impression is that use of Linux etc. is just being done to avoid Microsoft licensing fees, not because of security or owner control concerns ;). At that point, why not just lease cloud space on a major provider that can offer that compute power even cheaper than a local machine which sits idle overnight?
I know you like to play up the privacy angle in your marketing… that wouldn't work on me. I mostly work on public/FOSS stuff; about the only really private data on my PC is my access credentials. I don't want them stolen, but someone targeting me with a low-level exploit to get them is a ridiculous moonshot scenario; they're a million times more likely to leak from the actual service itself.
> local machine which sits idle overnight
Um, I thought we were talking about workstations here. I power mine off when it's unused.
> use of Linux etc. is just being done to avoid Microsoft licensing fees, not because of security or owner control concerns
This is based on two rather odd assumptions:
- Microsoft as the default: no, I grew up with Unix; Unix is my default choice simply because I know it and I'm used to it;
- owner control at all levels being equally important: meh, there's a lot more that you'd want to tweak in the kernel and up the stack. I wouldn't know what to change in the firmware. I have changed many little things in the FreeBSD kernel (and contributed them); the only thing I've ever changed in the UEFI firmware on my ARM box is some ACPI tables, to fix compatibility.
> That's the price of making them blob-free and still retaining high performance
That sounds vague ;)
Also, what's "high performance" about the board anyway? PCIe Gen 4? On a typical developer workstation that's kind of a waste; Gen 3 is plenty.
While the machine is off, it's a paid-for resource sitting unused. A cloud provider would lease that resource (so to speak) to someone else during that time, meaning in theory they can offer a lower cost than you will ever see, unless you can somehow get the hardware cheaper than they can.
Good providers will still allow you to run an accelerated VM inside the leased VPS, so you could still do your kernel hacking there.
I'm simply saying there's something interesting here -- you care enough about owning (I use that term loosely) a machine to spend more on a local system, but not enough to obtain one that you can freely modify as desired. Clearly there is a threshold, and I'm curious where it lies. :)
The threshold is not spending all my savings on an additional computer "for science" :)
> accelerated VM inside the leased VPS
Does that work on POWER?
> they can provide lower cost
They can, but they won't. They like having huge profits. Even if they offer the base VPS for cheap (spot instances), they rip you off on storage, bandwidth, IP addresses, etc.
Also, again, desktops. I like developing directly on a desktop workstation. I can't exactly insert my Radeon into a PCIe slot in the cloud and run a DisplayPort cable from the cloud to my monitor :)
Yeah, POWER has basically unlimited nested virt from POWER9 on. And unlike x86 you don't get the massive slowdowns past a level or two of nested virtualization.
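If you want to sanity-check that from inside a guest, here's a minimal sketch against the KVM API; it assumes reasonably recent kernel headers (the KVM_CAP_PPC_NESTED_HV capability was added around Linux 4.20, so it's guarded with an #ifdef):

```c
/* Sketch: ask KVM whether nested HV virtualization is available.
 * Run inside a POWER9 guest (or on the host) with access to /dev/kvm. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/kvm.h>

int main(void) {
    int kvm = open("/dev/kvm", O_RDWR | O_CLOEXEC);
    if (kvm < 0) { perror("open /dev/kvm"); return 1; }

#ifdef KVM_CAP_PPC_NESTED_HV
    int nested = ioctl(kvm, KVM_CHECK_EXTENSION, KVM_CAP_PPC_NESTED_HV);
    printf("nested HV: %s\n", nested > 0 ? "available" : "not available");
#else
    puts("KVM_CAP_PPC_NESTED_HV not defined in these kernel headers");
#endif

    close(kvm);
    return 0;
}
```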
Stadia seems to think it can push a high-resolution, monitor-like stream over a network interface. I'm playing devil's advocate here, of course, but fundamentally, if you don't have control of the hardware, there's no long-term advantage to local compute, at least not with current market trends. Everything points to a move back to dumb terminals for consumer use at this point -- in the past it would at least have been possible to hack those terminals to run some minimal (for the time) OS, but cryptographic locking of the terminal hardware stops that cold.
Yeah, I'm still working on it. But I'm just one guy. Hopefully this gains interest from others in working on it too (and I will _not_ be butthurt if someone gets a fully functional one submitted before I do -- in fact, I'll probably be relieved).
Yeah, Intel is ahead of everyone in SIMD width (until someone makes a chip with like 2048-bit ARM SVE :D).
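Tangent on that 2048-bit SVE quip: SVE is vector-length agnostic, so the same binary would use whatever width a future chip provides, no recompile needed. A minimal sketch of the same scale-by-k loop as in the VSX example above, assuming GCC/Clang with -march=armv8-a+sve (function name made up):

```c
/* Vector-length-agnostic SVE loop: svcntw() reports how many 32-bit lanes
 * the hardware has (anything from 128-bit to 2048-bit vectors), and the
 * predicate from svwhilelt handles the tail automatically. */
#include <arm_sve.h>
#include <stdint.h>

void scale_sve(float *dst, const float *src, float k, uint64_t n) {
    for (uint64_t i = 0; i < n; i += svcntw()) {
        svbool_t pg = svwhilelt_b32_u64(i, n);   /* active lanes for this chunk */
        svfloat32_t v = svld1_f32(pg, &src[i]);  /* predicated load             */
        svst1_f32(pg, &dst[i], svmul_n_f32_x(pg, v, k));
    }
}
```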
But still, that didn't prevent POWER9 from being in one of the largest supercomputers. And super-wide SIMD has its disadvantages (hello, AVX offset downclocking).
> In the latest TOP500 rankings announced this week, 56 percent of the additional flops were a result of NVIDIA Tesla GPUs running in new supercomputers – that according to the Nvidians, who enjoy keeping track of such things. In this case, most of those additional flops came from three top systems new to the list: Summit, Sierra, and the AI Bridging Cloud Infrastructure (ABCI).
>
> Summit, the new TOP500 champ, pushed the previous number one system, the 93-petaflop Sunway TaihuLight, into second place with a Linpack score of 122.3 petaflops. Summit is powered by IBM servers, each one equipped with two Power9 CPUs and six V100 GPUs. According to NVIDIA, 95 percent of the Summit’s peak performance (187.7 petaflops) is derived from the system’s 27,686 GPUs.

(emphasis mine; Summit is a POWER9 supercomputer with 4,608 nodes, each with 2 POWER9 CPUs and 6 V100 GPUs)
I think that's more reflective of Intel cornering at least 85% of virtually every market (except, notoriously, mobile) than of some special suitability for HPC.
I agree; I was just saying that all the traditional supercomputer manufacturers lost that battle, POWER included, but it's the last member of the old guard still standing...
From a Power9 cluster with an array of GPUs? For workloads like HTTP servers/PHP code/MySQL/etc., an Intel CPU will be much faster. We have CPU samples from all vendors; unfortunately, Intel is the best platform right now.
That didn't answer my question. It's fine if you think Intel is better (I don't, but that's orthogonal), but that doesn't make PowerPC any less general-purpose.
It's entirely possible that the x86 might still have been very successful even if the IBM PC had never happened. Back then, there were a lot of vendors involved in the 8080/Z80 ecosystem, and the 8086/8088 was a logical path to the future for those vendors – an easier migration path than e.g. the Motorola 68K, and cheaper than the 68K as well.
I was an active Z-80 user in those days and I respectfully disagree with that assessment. There was nothing logical about going from Z-80 to 8088/86 (and 8080 didn't matter nearly as much).
Seeing the number of design wins m68k racked up, it would have been the logical choice (and ISTR that IBM actually liked it better, but it and its peripherals were more expensive). Disclaimer: not a fan of any of these architectures.
1) Open-source from the start. Develop in the open. Maybe lock some features behind a paywall.
2) Open-source when something is not hot anymore. Take the formerly proprietary stuff you charged a lot of money for, and because so much better stuff has come out... meh, let's open-source it.
While I agree that there's a whiff of IBM trying to offload responsibility for older tech, I think some credit has to be given to projects which pre-dated the current industry attitudes towards open source. A lot of things had to happen (probably including people of a certain generation/era passing the torch) before companies felt comfortable with the idea that they could open source their tech, and still have a competitive advantage over anyone else who would use it.
In the late '80s/early '90s, when POWER appeared and RISC fever was in full swing, if a VP or C-suite-level decision maker at a mega-corp like IBM had declared "let's just release all the IP of our high-performance processor design to anyone who wants it!", their coworkers and superiors would have questioned their sanity, at the very, very least.
> if a VP or C-suite-level decision maker at a mega-corp like IBM had declared "let's just release all the IP of our high-performance processor design to anyone who wants it!", their coworkers and superiors would have questioned their sanity, at the very, very least.
Well, that was basically the PowerPC consortium. I'm not sure how much Apple/Motorola/etc. paid to be part of it, but the idea was to build a common ISA across multiple vendors.
Hardware is different than software though: producing actual physical chips is extremely expensive.
POWER is hot; IBM is probably just confident that someone else producing competing compatible chips and taking all their customers is not a real threat, or at least is completely outweighed by the benefits of an open ecosystem (including compatible chips aimed at different segments, e.g. low-power).
Linux built up over time with zero upfront investment, starting with Torvalds in his bedroom. Silicon fabrication requires gigantic amounts of cash upfront.
That's not what "barrier to entry" means. Entry is not making 2019 Linux. Entry is making something.
I can write a tiny blog engine in a day on my existing computer. I can't walk into GlobalFoundries and ask them to make me a single wafer of my tiny microcontroller on their 14nm process.
Hobby software is on the same playing field as pro software, and can smoothly become pro software, like Linux did. Hobby silicon is on the 1970s playing field - Jeri Ellsworth and Sam Zeloof making a handful of transistors with feature sizes measured in micrometers. There is literally no way to make your own "hello world" in modern high-performance silicon.
Now with EPYC Rome, I wonder just how many takers IBM will have.