IBM Opens Power Microprocessor Architecture (nytimes.com)
160 points by kjhughes on April 23, 2014 | 101 comments


A particularly desperate move from IBM.

This is uncomfortably close to SPARC's story. Sun opened up SPARC designs in 2005. Four years later, the "Rock" project was canceled, marking an end for world-beating SPARC performance.

(Later Sun/Oracle chips have all been based on "Niagara," a low-end chip that didn't even hope to compete on performance. It was intended to be massively multicore and inexpensive, and it was at least one of those things.)


I think your conclusion, "end for world-beating SPARC performance", is perhaps based on out-of-date information:

http://www.oracle.com/us/solutions/performance-scalability/s...

Also, if you look at the roadmap, you'll see there's even more performance improvements scheduled in the near future:

http://www.oracle.com/us/products/servers-storage/servers/sp...


Oracle's current chips are derived from the massively-multicore Niagara, not from Rock or UltraSPARC III/IV. As a result, you might notice that the benchmarks Oracle brags about are SPECjbb and TPC-C. These are benchmarks that reward integer perf and parallelism, not single-thread or floating-point performance.

On SPEC's general purpose benchmark, SPECcpu, Oracle published only three SPARC results. All three are from Fujitsu chips in re-branded Fujitsu systems. Oracle just doesn't publish results for their in-house chips. I imagine they ran the benchmarks and decided the poor results did not fit their marketing message.

http://www.spec.org/cpu2006/results/cpu2006.html


Oracle has actually published other SPARC results that are not for Fujitsu chips for SPEC CPU:

https://blogs.oracle.com/BestPerf/entry/20130326_sparc_t5_sp...

http://www.spec.org/cpu2006/results/cpu2006.html

While I can't provide you my own detailed benchmarks, I can tell you anecdotally that the most recent SPARC hardware has significantly better single-threaded performance than the early T1, T2 series.

As an example, one of the projects I work on is written almost entirely in Python, and so is almost completely single-threaded (only the transport is multi-threaded using libcurl).

I had an opportunity to compare the performance of the T5 to the T1 and found the difference was practically night and day; the program feels just as snappy as it does on any Xeon system I've used.

Anecdotally, I can tell you that given a choice between a fully-loaded Xeon box or one of the newest SPARC servers, most of the developers I work with will choose the SPARC server simply because builds take significantly less time.


Those are the SPECint_rate benchmarks, for throughput. These are different from the regular SPECint, and they're published separately. Oracle has published no regular SPECcpu scores.

Even the rate benchmarks are not very impressive. Oracle rigged the comparison in their press release. They compare a 16-core SPARC to an 8-core Intel. If we do apples-to-apples, the Oracle result is pretty humdrum:

- Oracle T5-1B, 16-core SPARC, 436 / 467

- Dell M520, 16-core Intel, 533 / 553

The M520 contains 2x E5-2450@2.1 GHz. This is far from the fastest Intel chip. You can get them up to 3.6 GHz. It's just a common and inexpensive configuration. Let's not even talk about the respective pricing.

Personally, I haven't used anything newer than a T1. Because I haven't seen anyone buy a new Sun box in that long!


Given the vastly different design philosophies of these two architectures, why is a comparison of a "16-core SPARC to an 8-core Intel" inherently rigged? Aren't those SPARC cores a lot smaller than x86_64 cores? I mean, the first SPARC CPU was implemented on two 20,000-gate Fujitsu gate arrays (the second held the floating-point unit)....


If you normalise the SPEC results to per-thread per-clock, you get:

- SPARC T5, 128 threads @ 3.6GHz, 436 / 467 -> 0.946 / 1.01 result/thread/GHz

- Intel Xeon E5-2690, 16 threads @ 2.9GHz, 357 / 343 -> 7.69 / 7.39 result/thread/GHz

Or to put it another way, the SPARC needs eight times the threads and a 24% higher clock to come out only ~30% ahead of the Intel. POWER does somewhat better, but not that much, at 2.54 / 2.24.
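
If you want to reproduce the normalisation, it's just score / (threads * GHz). A minimal C sketch using only the figures quoted above:

    #include <stdio.h>

    /* Normalise a SPEC rate score to result per thread per GHz. */
    static double per_thread_ghz(double score, int threads, double ghz) {
        return score / (threads * ghz);
    }

    int main(void) {
        /* The two rate scores for each system, as quoted upthread. */
        printf("T5:   %.3f / %.3f\n",
               per_thread_ghz(436.0, 128, 3.6), per_thread_ghz(467.0, 128, 3.6));
        printf("Xeon: %.3f / %.3f\n",
               per_thread_ghz(357.0, 16, 2.9), per_thread_ghz(343.0, 16, 2.9));
        return 0;
    }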


The Intel chip, as benchmarked, is 2.1 GHz. The same design is available in 3.6 GHz, as of Q2 '12. We're comparing a top-clocked, circa 2013 SPARC to a 2012 Intel product running at 60% of the commercially available clock rate, and the SPARC comes up short.

I used this poor comparison because that was what was readily available in SPEC's published results for single-processor (SPECcpu) and parallel (SPECcpu_rate) benchmarks.


Your initial implication was that SPARC wasn't setting any world records anymore. When I showed that wasn't the case, you proceeded to complain about an arbitrary benchmark.

When I pointed out that results were provided for that benchmark, you then complained that it wasn't for the general benchmark but a portion of it.

You then proceeded to claim it wasn't an apples-to-apples comparison, but Oracle doesn't offer anything less than 16 cores for a T5. I don't think comparing products that don't exist is very useful, so attempting to extrapolate what an 8-core version might do seems silly, especially since there's not a one-to-one correlation between cores and performance.

In addition to that, at last check, you can't pick the precise number of cores you want a processor to have when purchasing, so it doesn't make sense (in my personal opinion) to strictly compare core-to-core performance; as the other poster pointed out, a "core" is essentially whatever a vendor defines it to be.

So, I'll just stick to refuting your original implication -- that SPARC isn't setting world records anymore; it is in fact doing so. And SPARC continues to have far greater memory bandwidth, I/O bandwidth, and memory capacity compared to general Intel offerings.

So if you want to find out how fast a T5 will actually run your application, try one out and get real data instead of relying on benchmarks to make your decision. Personally, I think you'd be shocked at just how well most workloads perform if you actually tried a T5.

As to the price argument, the companies I've worked for or with in the past generally didn't care about that so much as the reliability of the system and its capability. They have workloads that consume multiple terabytes of memory, and the servers process transactions that net them millions of dollars, so saving a few thousand bucks doesn't matter to them.

In the end, you have to use the right tools for the right job. There's a reason that Oracle sells x86 servers too.

Personally, I don't care which architecture is being used as long as I get to use Solaris/ZFS.


I mentioned that SPARC performance stopped being a central focus with the death of Rock, in favor of massively-multicore, low-cost designs derived from Niagara.

You pasted an Oracle press release that focused on parallel performance.

I complained, and pointed out SPECcpu as a common measurement.

You responded with SPECcpu_rate, a different benchmark focused on parallel performance measurement. It is not surprising that Niagara-derivative chips do well. It is also not surprising that, core for core, they can't match modern commodity systems for density, performance, or cost.

p.s. The "reliability" argument went out the window with VMware. Every Fortune 500 company is using x86 with VMware HA to provide the redundancy that would once have come from enterprise RISC. (Given how few RISC systems were ever configured with redundant memory or CPUs, VMware-on-commodity is probably offering a substantially better service level.)

p.p.s. A basic 1U x86 server will typically have between 0.75 and 1.5 TB of RAM in it. Yes, TB, as in terabytes. Virtualization provides a market for compact, low-wattage systems with significant memory. That's just the 1Us. In 2014, "Large" x86 boxes are very large indeed.

Lastly, I, too, miss Solaris/SPARC. It was a great platform that I enjoyed working with. I don't miss the associated hardware support contracts that came floating by after the Oracle buyout. It has been several years since I found a contract where SPARC or Solaris were anything but legacy platforms. It's sad, but it's not a mystery.


SPEC CPU Rate is part of the SPEC CPU benchmark; it is not a "different" benchmark.

The reliability argument doesn't "go out the window"; what do you think that software is virtualised on? And to top it off, you're losing a significant amount of performance by using a virtualisation solution like VMware.

In fact, there are entire segments of the industry that won't accept the latency typical virtualisation technologies bring.

As for the 0.75 to 1.5 TB of RAM argument, perhaps you missed what I said about terabytes. Do you have any Intel boxes with 32 terabytes lying around?

While you may not have seen any contracts floating around for SPARC hardware, I have, quite recently.

Finally, as for "legacy" status, SPARC and Solaris have features not found anywhere else that are continuing to be developed and added even today. It's only a "legacy" platform if you completely ignore the technology there. And Solaris runs just fine on x86 thank you very much.


This sucks.

It means IBM doesn't want to do HARD engineering anymore. They just want to do sales and handling of stuff for high markup. This is the last really cool thing to go before they finish dismantling that former technical giant.

POWER was never about flops/watt; it was about the IBM ecosystem. It's calling in for support on your hardware/software and getting escalated to somebody in the same department who will actually write the fix. It's having actual field engineers who went to the factory and learned how the stuff is put together. It was never cheap... But it was good.

IBM's POWER hardware is the last of the "old iron" lines. x86 stuff just doesn't hold a candle in the "well built" category. My company went from an "open platform" IBM solution back to old-fashioned POWER hardware because it was just better built, and the "open" platform just resulted in finger-pointing between the blade, SAN, LAN, and OS people... And that was IBM support with their name on the box... When it's truly mixed support, it's going to be very terrible.

It's not like AIX or System i will ever get ported to non-IBM machines for better value. It's not like Apple is gonna dust off OS X and let me build racks of POWER servers either. Betting on "Linux" is a cop-out. People use Linux on POWER when they have investments in POWER iron for their mainline business apps... RPG, COBOL, etc... and they don't want to pay AIX/enterprise licensing for web/email servers.

This is just the "last call" to show they tried to let the world love them more... And nobody will come. Then they close it down.


This is not actually "open" as in free and open source software.

https://en.wikipedia.org/wiki/OpenPOWER_Foundation


"Open" here seems to mean that other companies can license IBM's core designs to use as part of a larger chip, and also that the instruction set itself will be public (although IBM's implementation will not be).


That's what we really need: An ISA with a GPL-style copyleft licence. Not that being Intel's competitor is all that profitable, but the x86 market would probably be more diverse if everyone could use the ISA.


http://en.wikipedia.org/wiki/LEON

http://space.stackexchange.com/questions/729/why-did-the-esa...

The SPARC ISA is completely unencumbered as far as I know, and there are several copyleft implementations of the architecture that are synthesized into interesting products, often from overseas vendors (for example Navspark: https://www.indiegogo.com/projects/navspark-arduino-compatib... )


Yeah, but ISAs exhibit such strong network effects that anything not x86/amd64 or ARM is irrelevant, sadly.


Well, there's OpenRISC[1], and I believe SPARC is open too. At least there are open-source implementations, including Sun's first Niagara.

[1]http://www.openrisc.net/


I miss Sun's brand of idealism before profits sometimes.


I miss Sun's "use our enterprise software, only pay if you need support" licensing. Oracle deep-sixed that as soon as possible, along with the pseudo-hobbyist licensing terms for the OS, effectively killing a large portion of the hobbyist and "sysadmin with a setup at home" market.

I ran (well, still do) a website dedicated to Sun/Solaris hardware and software users, was on the OpenSolaris external pre-release beta testing team/group, and heard lots of moaning about this, even from sales people within Oracle itself.

Pre-Oracle Sun was gracious and gave me a loaded SunFire T1000 to run the site and mailing lists, etc, on. Post-Sun Oracle gave me the finger.


Yea, it is sad that Jonathan Schwartz had problems too.


If we are really talking about free and open-source software, the ISA is irrelevant. Over the years, I've used Emacs on Alpha, SPARC, PA-RISC, POWER, MIPS, ARM and x86 under at least six different operating systems and it works flawlessly on all of them. I never had the chance to confirm that, but I suspect it works very well on IBM zSeries mainframes too.


It was about open-source hardware: monopolies can limit freedom in hardware too. The Pentium FDIV bug was trivial to validate at the time, since it was common even for an undergraduate at a small university to have accounts on Sun SPARCs, HP PA-RISC, and DEC Alpha in addition to POWER Macs and Windows x86. And there were other choices in the ecosystem, like SGI MIPS.

Granted, that may seem like ancient history, but RDRAND is an indication that Intel still makes mistakes.

Less fragmentation is more efficient in many ways, but just like any monoculture, it is also fragile.


Serious question: can the zSeries do full-duplex terminal communications or the modern equivalent?

I first started using EMACS shortly after using various IBM systems, and it's hard to express how obnoxious it was to have your keyboard lock while the computer was sending you stuff....

The question is serious because it's not clear to me that the zSeries use cases include supporting this sort of thing (vs. editing your files on another computer, like your PC, before submitting them to the mainframe ... but I haven't touched anything in that domain since ... 1978, I think).


You can just SSH into a modern z. It has a full POSIX environment, so you could run real EMACS.

http://www-03.ibm.com/systems/z/os/zos/features/unix/library...

http://pic.dhe.ibm.com/infocenter/zos/v1r11/index.jsp?topic=...


I don't think a 3270 would be very comfortable for Emacs. I once saw a 3278 hooked to a 390 running Unix (I don't think it was AIX), and seeing text scroll on a 3278 was interestingly alien. Running Emacs would be doable, but most 3270 editors I can remember (early '80s) took advantage of the local terminal smarts and worked on a form/page-based concept. Emacs was designed for the VT-100 world, where terminals were not particularly smart.

Having said that, you can host Linux VMs under zVM. Binaries will use the zSeries ISA, and the whole guest will run under the zVM environment. You can easily ssh to it. IIRC, Debian, Red Hat, and SuSE support it.

This would probably be the least cost-effective way to edit text. On the other hand, few terminals were able to render monospaced text so beautifully.


But we're not talking about FOSS. We're talking about software in general, and some types of software can't be sustained by any traditional FOSS monetization model. These kinds of software will usually be distributed as binaries, and you'll just have to switch to whatever OS and ISA the developers care to compile for.


For better or worse, the ISA wars are in one sense over. There's relatively little benefit to be gained from using an architecture other than amd64 or ARM at this point; silicon has evolved to the point where the line between RISC and CISC is pretty much nonexistent, save for some minor benefits around code density etc. The benefits of using amd64 are unmatched optimisation experience, commodity pricing, and legacy compatibility; the costs are negligible.

However, I'm somewhat uncomfortable with Intel's increasingly complete domination of the "not-low-power" CPU market. I'd love to see other high-end silicon designers start to develop amd64-compatible processors that would encourage competition in the market. AMD's struggling with it, but there's plenty of scope for looking at interesting approaches to increasing performance or reducing power use.


(Mill mod)

Ahem, sorry to those suffering Mill Fatigue on HN, but any conversation about the ISA wars being over is a red rag to a bull with us! :)

Hope you like the talks on http://millcomputing.com/docs

NOTE: bulls react to movement not colour


Apologies, I shouldn't have been quite so dismissive.

I mean that the ISA wars are over in the sense of multiple relatively similar architectures competing against one another for negligible gain.

There's still plenty of scope for nontraditional architectures to come in and offer something different. We've already seen that in the more general availability of general-purpose GPU programming. As an architecture nerd, I'm already super-excited by Mill.


Yea, thinking about it, many of the old UNIX workstation vendors used the same 68K CPUs, but then they decided to develop their own RISC architectures, fragmenting the market and reducing the economies of scale.


Well, it's hard to say that Motorola had its heart in the 68K CPUs; it was a wildly diversified and not entirely focused electronics conglomerate back then.

And their product plan was very conservative, with the "odd numbered" chips not being great advances on the even-numbered ones, e.g. the 68030, which came out the same time as the first SPARC, used the 68020 microarchitecture with a 256-byte instruction cache and used the design shrink to put the MMU on the chip (but not the FPU). The 68040, three years later, wasn't a stunning success, was it? And the 88100, which came out a year after the 68030, certainly sucked up a lot of corporate oxygen, not to mention put into question the company's commitment to the 68K family.

And it's very dangerous for a company to depend on another company for one of the most critical components of its products, isn't it?

But, yeah, the economies of scale (much lower unit sales to spread your non-recurring engineering costs over) eventually doomed them to mostly bureaucratic and installed-base niches, once AMD successfully innovated for a short period and then Intel got its act together due to that threat.


Yea, I know the 68040 was late. On the other hand, not having SPARC, for example, would have meant less competition for HP etc. to deal with, making this less of a problem.


Which does a good job of proving my case. Sun very much didn't want to provide "less competition for HP" (albeit I very seldom heard a good word for HP-UX).


And my point is that in the long term this was probably a bad idea.


Indeed, and it's something I've thought about, imagining I were in charge of Symbolics at the beginning.

But it's also in part 20/20 hindsight, e.g.:

Lots of people refused to believe that Moore's Law would last as long as it has; the corollary that you'd get higher speeds purely from design shrinks did end about a decade ago.

I'm not sure very many people "got" Clayton Christensen's disruptive-innovation thesis from The Innovator's Dilemma prior to his publishing the book in 1997. He really put it all together: how companies with initially cruddy products could in due course destroy you seemingly overnight.

In this case, how a manufacturer of rather awful CPUs (the 286 in particular, but the 8086/8 was no prize except in cost; caveat, Intel support to people who designed in their chips was stellar back then) could start getting its act together in a big way in 1985 with the 386, then seriously crack their CISC limitations with the P6 microarchitecture (Pentium Pro), etc.

And note their RISC flirtation with the i860 in 1989, their Itanium debacle, etc. etc. More than a few companies would have committed suicide before swallowing their pride and adopting their downmarket copycat's 64-bit macroarchitecture, which competed with the official 64-bit one.

And that's not even getting into all the mistakes they made with memory systems, million-part recalls on the eve of OEM shipments, etc. What allowed Intel to win? Superb manufacturing, and massive Wintel sales, I think.


crikey, you gave me a platform to get Mill into yet another HN thread, and you apologise? Are you mad? I'm the one who should thank you :)


Traditional "ISA wars" were about the cost of instruction decode & context-specific functionality. This really is over now: instruction decode costs hardly anything and context specific functionality is rarely used.

As far as I can tell, Mill have realised that the real fight is for cache; the most expensive thing you can do on a modern CPU is a data stall. So the intent is to change the execution model enough to change data access patterns and make prefetch work properly, along with more efficient use of the cache.

The next most expensive thing is a branch mispredict, and Mill attacks that strongly as well.
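
To make the data-stall point concrete, here's a minimal C sketch of my own (an illustration, not anything from the Mill material): it sums the same values once by walking an array sequentially, and once by chasing pointers through a shuffled linked list. The dependent loads in the chase defeat the prefetcher, so that loop spends most of its time stalled on cache misses:

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N (1u << 22)   /* 4M nodes: much larger than any cache */

    struct node { struct node *next; long val; };

    int main(void) {
        struct node *a = malloc(N * sizeof *a);
        size_t *perm = malloc(N * sizeof *perm);
        for (size_t i = 0; i < N; i++) { perm[i] = i; a[i].val = 1; }
        /* Fisher-Yates shuffle, so the pointer chain visits nodes in a
           cache-hostile order. */
        for (size_t i = N - 1; i > 0; i--) {
            size_t j = rand() % (i + 1);
            size_t t = perm[i]; perm[i] = perm[j]; perm[j] = t;
        }
        for (size_t i = 0; i + 1 < N; i++) a[perm[i]].next = &a[perm[i + 1]];
        a[perm[N - 1]].next = NULL;

        long s = 0;
        clock_t t0 = clock();
        for (size_t i = 0; i < N; i++) s += a[i].val;   /* prefetch-friendly */
        clock_t t1 = clock();
        for (struct node *p = &a[perm[0]]; p; p = p->next) s += p->val;  /* stalls */
        clock_t t2 = clock();
        printf("sum=%ld seq=%.3fs chase=%.3fs\n", s,
               (double)(t1 - t0) / CLOCKS_PER_SEC,
               (double)(t2 - t1) / CLOCKS_PER_SEC);
        free(a); free(perm);
        return 0;
    }

On typical hardware the chase runs several times slower despite doing the same number of additions; that gap is the stall time being fought over.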


No, Yes, and Yes. The instruction decode wars aren't over, it's just that the competition has plateaued :) One of the really big hurdles was decoding enough ops/cycle to compete, and the Mill's 33 ops/cycle is way, way more than everyone else's, thanks to the split-stream and multi-phase decoding cunning.

And of course this is intimately dovetailed to cache, so Yes Yes Yes to everything you said too.


What do you actually do with the 33 ops/cycle? How many do you issue, by comparison? Or is it used to follow both sides of the branch?


We aggressively speculate the cheap branches (which are the common type) and we aggressively inline, but mostly it's to software-pipeline loops (which make up 80% of most code).
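
For anyone unfamiliar with the term, here's a hand-rolled C illustration of the software-pipelining idea (just the concept; on the Mill the compiler does this scheduling, and this sketch is mine, not theirs). The next iteration's load is issued while the current iteration's arithmetic runs, so load latency overlaps compute instead of serialising with it:

    #include <stddef.h>

    /* Plain loop: each iteration waits for its own load to complete. */
    long sum_squares_plain(const long *a, size_t n) {
        long s = 0;
        for (size_t i = 0; i < n; i++)
            s += a[i] * a[i];
        return s;
    }

    /* Hand-pipelined: iteration i's multiply overlaps iteration i+1's load. */
    long sum_squares_pipelined(const long *a, size_t n) {
        if (n == 0) return 0;
        long s = 0, cur = a[0];
        for (size_t i = 1; i < n; i++) {
            long next = a[i];   /* stage 1: load for the next iteration */
            s += cur * cur;     /* stage 2: compute on the previously loaded value */
            cur = next;
        }
        s += cur * cur;         /* drain the pipeline */
        return s;
    }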


Speaking for myself here – the Mill material is some of the most interesting that's appeared on HN in recent memory and I don't think we're close to being fatigued by it.


People still forget how ARM just came out of nowhere to dominate the low power space. Makes me wonder if the CPU wars aren't really over yet.


ARM didn't "come out of nowhere", it's been used in embedded and mobile systems as a low power CPU for decades now. The earliest mobile use I can find is the Apple Newton in 1993. DEC was marketing their StrongARM product as a low power CPU back in 1996.


And it goes even further back than that: when Acorn got first silicon back for the ARM1, they were hoping for anything less than 1W so that affordable plastic packaging would be sufficient. When they booted up that processor, their multimeter on the power supply line didn't register anything. Further investigation showed that the supply line wasn't actually connected, and the processor had booted using a total of about a tenth of a watt of leakage from other components on the board.


I remember that. ARM was doing great with the Newton until Intel bought and killed StrongARM.


Intel was almost dumb enough to put those on the used manufacturing lines from x86 chips... Almost. Then Intel "knifed the baby" by moving those designers to Atom and selling off their ARM stake... At one point StrongARM was the top of the hobbyist boards available, long before the Raspberry Pi folks got their idea.


They're not, and never will be. Especially if there were a concerted effort to put together a GNU/Linux-like open-source hardware stack to get rid of the fear, uncertainty and doubt about what's inside commercial processors and chipsets. Also, there needs to be more decapping of commercial chips to see what's actually in them.


And I guess a lot of people forget that the PowerPC is in a lot of embedded designs.


Yeah, but that's strongly in the direction of the "Zero Cost" category of microprocessors, isn't it?

An IEEE article, I think, laid them out nicely into 4 categories:

Zero cost: what you put in microwaves, every cent counts.

Zero power: used to be obscure, the mobile market is of course making it very much less so, although those have elements of:

Zero time: speed is what counts, and plenty are willing to pay a hefty premium for it.

Zero units: say the military needs a CPU for a combat airplane which won't be made in more than 100s of units. Perhaps worth it for the prestige and getting the government to pay you to figure out neat things that might be usable in your bread and butter channels.

One modern example, though not so much with custom designs, is radiation-hardened CPUs for space applications: a unit cost of, say, $100K is rather small in the bill of materials when the cost of failure is in the high millions at minimum and could top a billion or more; e.g., it would be very, very bad if the computing subsystem(s) of the James Webb Space Telescope failed on the way to the Earth-Sun L2 point....


> silicon has evolved to the point where the line between RISC and CISC is pretty much nonexistent, save for some minor benefits around code density etc

There's one strategy for increasing performance on a CISC that can't really be done with RISC: making existing complex instructions execute faster. This is particularly attractive since it means all existing software gets a boost without any software-level work. So in some sense, having complex instructions that would not be fast at the time the ISA was designed is like future-proofing. And x86/amd64 still has many places where this sort of optimisation can be done.
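
A concrete example (my illustration, not the parent's): the ancient x86 rep movsb string-copy instruction has been re-optimised in microcode over the years (Intel even advertises an "enhanced REP MOVSB" CPUID feature), so old binaries that use it simply got faster on newer chips, with no recompile. A GCC-style inline-asm sketch for x86-64:

    #include <stddef.h>

    /* memcpy via 'rep movsb': one instruction whose microcode the vendor
       can keep making faster underneath existing binaries. */
    static void copy_rep_movsb(void *dst, const void *src, size_t n) {
        __asm__ volatile ("rep movsb"
                          : "+D" (dst), "+S" (src), "+c" (n)
                          :
                          : "memory");
    }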

> I'd love to see other high-end silicon designers start to develop amd64-compatible processors that will encourage competition in the market.

Absolutely. It's unfortunate that many regard the ISA as being too complex, since hidden within that complexity is a wealth of opportunities for optimisation.


Agreed. I worked on PPC on the Xbox 360, and rarely had to worry about the underlying ISA, although when you did have to worry about it, things got pretty hairy and ugly, fast (think: DMA, cache coherence and other low-level stuff all ganging up to ruin your day).

You also had to worry about the ISA when you were doing something performance intensive (that's when you start taking advantage of multimedia instructions and so forth; ideally you have abstractions for this, but they'll only take you so far).


Intel has dominated for quite some time.

http://en.wikipedia.org/wiki/Intel#Market_share

In all honesty, beyond the cheap CPU market, most people have gone with Intel for a decade or more.


Not really.

Intel has been dominating the Windows-compatible desktop computer space and recently made significant progress in the server space, but if you lump together mobile and embedded processors and units sold, you'll see it is nowhere near a leadership position in overall processor sales.


ARM CPUs/SoCs ship in billions of units.

The latest production ARM chips in the Nexus 5 and 7 have 4 cores and an integrated GPU and VPU (video processing unit), and run from battery power. IMO, Intel is losing the CPU war to ARM.

We are also seeing Qualcomm's 64-bit, 8-core parts going into next-gen phones soon.


Intel's position in CPUs is analogous to Apple's position in phones: Less than 15% of the units, but over 50% of the profit.

ARM is a commodity market. Nvidia makes its money on GPUs and runs Tegra at a loss. ARM has a huge market cap but barely makes any profit. Samsung makes its money on the phone and other chips. TI found ARM so cutthroat that they've given up and quit. Qualcomm has an enormous market cap (larger than Intel's), but they make almost all of their money on the wireless side.

This is why ARM is so dangerous to Intel. While ARM moves upmarket and Intel moves downmarket, Intel is unable to compete on price because the competitors are operating so close to breakeven.


Last time I checked, and I can see it having changed since then, TI decided playing the wireless phone game wasn't worth it but was still pursuing other mobile markets, e.g. WiFi SoCs, which at the time were really popular.

Has this changed?


As TI phrased it in their press release back in 2012, they were focusing on "embedded" instead of "mobile." The way I understood it was that this covered everything mobile: not just phones, but also tablets.

In any case, TI makes almost all of its money on analog chips. "Embedded processing" was less than 8% of profits in Q1 2014. And that division includes microcontrollers.


Please don't say "market cap" when you mean revenue or market share. (Market capitalization of a stock is the number of shares of the stock multiplied by the price per share.)


> Please don't say "market cap" when you mean revenue or market share. (Market capitalization of a stock is the number of shares of the stock multiplied by the price per share.)

And please don't downvote without first spending ten seconds to look up the companies' financials.

I meant exactly what I wrote: "market cap." Qualcomm had a larger market cap than Intel (at least it did yesterday). It did not and does not have greater revenue than Intel, especially when you strip out the wireless side and only look at CPUs.

The point of bringing up market cap is to show how huge these companies are that Intel is competing with. Everyone knows Samsung is enormous, so I didn't bother pointing it out. A lot of people don't realize how huge Qualcomm has gotten. But it's not because of Snapdragon.


sorry. (upvoted your reply to cancel out my downvote.)


You are only looking at a tiny subsection of CPUs, which happens to be the niche Intel dominates. CPUs are in a lot more devices than just PCs and laptops.


I keep hearing "IBM is opening POWER", but I still don't see any obvious route to dial up Asus or Tyan and order a cheap ATX motherboard that I can slap a POWER processor on, and build a machine to play with. Until that day comes, it's hard for me to get excited about POWER, as much as I might otherwise want to.


SME (Sun MicroElectronics) did exactly this in the 1990s. You could buy a standard ATX motherboard with an UltraSPARC II and PC DIMM slots on it. It was called the Ultra AXmp.

Unfortunately, the internal cost structure was such that the Ultra AXmp cost only a few percent less than a "proprietary" workstation/server from the parent company.

The moral of the story: sticking a high-margin, low-volume chip in a commodity board doesn't put it on a commodity cost basis.


Yeah, I was gonna say earlier, but got distracted... it would also help if one could call up IBM (or a reseller) and actually buy a POWER chip - for a reasonable price - to plug into said motherboard.

In the end, it's the entire ecosystem... it's either affordable and accessible or it isn't. It's sad that IBM keeps coming out with pretty cool hardware (Cell, POWER, etc.) that nobody can afford to get convenient access to. :-(


Alpha and SPARC both had moderately affordable hardware (ATX boards and such). Outside the geeks, what's the compelling story? I owned a couple of those devices; the fact of the matter was you paid more to be on an isolated, moderately performing island. To be interesting, you need an ATX board or something with a better chip at the same price, or the whole thing has to be cheaper. Basically, when Cell came out they should have put PS3-spec workstations on the market for $300 or maybe even less. One thing ARM really does right is low cost.


In that case you should probably avoid the Open Compute Project, too.


I don't really understand this comment. OCP, from what I've seen, is mostly dealing with designs for servers built using x86 and/or ARM processors, for which there are plenty of off-the-shelf motherboards already available. If people are building (or start building) OCP-spec motherboards, so much the better. But with POWER, it's always been just short of impossible to buy an "off the shelf" motherboard with a normal form factor, that uses normal RAM, etc., and put a box together.

And, IMO, when an architecture is that inaccessible to the hobbyists and experimenters and tinkerers, it really hinders it. I know it definitely diminishes my personal interest, and I want to be an IBM fan in many ways.


I get the impression that you can buy 1,000 OCP servers but you can't buy one; I wouldn't be surprised if OpenPower is the same way. But if you want one Power8 maybe you can just buy a server from IBM.


> I get the impression that you can buy 1,000 OCP servers but you can't buy one;

Ah, gotcha. Yeah, that's kind of a bummer if that is actually the case. I haven't really looked into buying any OCP servers, so I wasn't aware of that situation.


The major Power licensees were Freescale and AMCC; the rest are too small to be meaningful, if any. And both Freescale and AMCC are shifting towards ARM and aren't in the foundation?

With or without them, it's probably too little too late for this architecture, sigh.

MIPS, UltraSPARC, and Power are sadly fading away.


If they want adoption, they will have to bring the price of entry-level POWER systems down.

Also, I wonder if it's a year or two too late. AMD seems to have picked ARM as their plan-B strategy, realising that competing with Intel on x86 is hard to do with reasonable margins. I wonder if AMD would have picked POWER had it been available as an alternative a couple of years back.


No, they still would have picked ARM. Why? Because every mobile device with uptake uses ARM, and Intel has no play there.


AMD is not playing in the mobile space with ARM. They realise they are not large enough, as Intel is, to spend their way into the ARM market. ARM is interesting because you have Apple and Qualcomm at the high end, but you also have MediaTek and Rockchip at the low end. AMD can't play the "more value for money" game in ARM mobile that they do with x86, because there are already people doing it.


No, but following the disruptive-innovation theory, ARM should soon work its way up, while it cannibalizes Intel and forces it to move increasingly upmarket (servers, etc.) to chase profits. Intel is already losing billions subsidizing Atom chips [1], their "real" competitors to ARM chips, to make them at least semi-attractive to OEMs (which will really get the short end of the stick if they go with Intel anyway, because if they do help Intel succeed, Intel is going to immediately increase prices on them to turn the division profitable again).

ARM chips don't have to "beat" Intel. They just have to become good enough for desktop performance, while costing much less.

[1] - http://www.electronicsweekly.com/news/business/intel-lose-1b...


I can't actually see anything on Tyan's site about a board. Elsewhere Google says it is going to be cheap. But it would be nice if it was available.


Can someone explain Google's involvement?


Sure, the article itself can explain it: "But analysts say the embrace of Power has two crucial advantages for Google. First, the Internet giant builds its own data centers and tweaks the technology in its server computers, and the licensing regime in the Power foundation is hacker-friendly in a way Intel’s handling of its intellectual property is not. The second advantage for Google is negotiating power..."


I read the article and didn't find their little explanation to be that good, that's why I asked. Why PowerPC? Why not SPARC or ARM? What got them this involved?


Given the hatred between Google and Oracle, I expect that even if SPARC were a better solution, Google would look everywhere else first.


Power8 has performance equal to or better than Intel's; ARM isn't even close.


PowerPC and POWER are different architectures.


The PowerPC chips fall under the Power ISA standards documents.


That is very cool. I know that John Cocke, who invented the architecture (and was an old drinking buddy of mine), would definitely approve.

It was an outgrowth of his work developing the first optimizing compiler (Fortran) with Fran Allen, which earned him a Turing Award. Good on ya, John, wherever and with whomever you might be sipping now. :-)


What can POWER do today that neither Intel nor ARM can? What is its relevance?


I'm not sure it can do anything that ARM and x86 can't. But the relevance is that it provides competition to both of those, and competition is good because it keeps things moving forward. And diversity and choice are (mostly) Good Things.


It's unclear. Diversity helps innovation, but it also adds complexity to everyone who builds on top of it. If there's room for significant innovation, it can be worth the complexity. But if there isn't, and there are many signs that there's not much left to do in the traditional general-purpose ISA space, then we're just living with a bunch of complexity for no benefit.


The performance is in the SYSTEM architecture. Even a mid-sized POWER box can address hundreds of gigs of RAM, petabytes of direct-attached disk, and rooms full of tape-stored data. You can make an x86 box do that, but it's not really supported at the "plug bits and go" level. You put POWER boxes in place and don't reboot them for years. (Unless you are smart and do your HA/DR tests like a good kid.)

For example, Google plays around with racks of disposable x86 boards like candy. When their app becomes "fixed" rather than growing exponentially, they'll want to move to something like POWER because it's DESIGNED to work with dozens of CPUs sharing petabytes of attached disk easily. Not the silly kludges like blades, SANs, iSCSI, or virtual machines people play with now to hide x86 OS vendors' scaling limitations.


What you're describing sounds more like a combination of better HW (bus, components) and SW (operating system). I agree, IBM delivers superb systems.

However: are there any inherent limitations in the x64 architecture which would make it impossible to achieve the same as with POWER, if you designed it from scratch for that kind of robustness?


any sign of this transition happening at Google?


Run a PS3.


I honestly thought that the Power architecture was already open. Guess not.


It's been varying degrees of open all along; there was AIM and Exponential in the 90s and then PA Semi and AMCC ten years ago. But marketing/branding/strategy people need to stay employed.


Does anybody know the architectural advantages that POWER has over x86?


Depending on what you mean by architecture, there probably aren't many. Maybe reliability and decimal floating point. But IBM's Power server processors tend to be bigger and higher-performance than Intel's; e.g. Power8 has four times the threads, four times the cache, and twice the memory bandwidth of Ivy Bridge-EP.


POWER (Performance Optimization With Enhanced RISC) is an extremely nice architecture compared to x86. Its firmware system was far ahead of the BIOS disaster (EFI might change this a bit, yet). I remember loading and booting kernels from NFS because you could load the network card drivers[1]. This was 2006; EFI was not widely known or used. The CPU itself has amazing features too.[2]

I guess we are seeing another occurrence of Gresham's law applied to the technical field: "bad architecture drives out good." In this case bad is cheaper, and that is the only thing we care about, after all.

1. Open Firmware allows the system to load platform-independent drivers directly from the PCI card, improving compatibility. http://en.wikipedia.org/wiki/Open_Firmware

2. http://www.spscicomp.org/ScicomP16/presentations/Power7_Perf...


You cannot praise POWER for Open Firmware. Open Firmware came out of Sun, and allows for the development of CPU-independent drivers. IIRC, Sun needed that because they shipped workstation variants that had either a 68k CPU or a SPARC CPU.


> EFI might change this a bit, yet

As far as I can tell the only reason EFI even exists is NIH syndrome. Intel could have just adopted OpenFirmware for x64, and they still should.



Everything seems like a nice architecture compared to x86. I do love the PowerPC instruction set though.

I'm not a hardware guy, but it always seemed like PowerPC chips underperformed others and ran as hot as hell while doing so.


I seem to remember an article that pointed out that some of the POWER addressing modes made it hard to optimize, so it would always run slower than other chips. I would really love to hear what the story actually is.



