Your initial implication was that SPARC wasn't setting any world records anymore. When I showed that wasn't the case, you proceeded to complain about an arbitrary benchmark.
When I pointed out that results were provided for that benchmark, you then complained that it wasn't for the general benchmark but a portion of it.
You then claimed it wasn't an apples-to-apples comparison, but Oracle doesn't offer anything smaller than 16 cores for a T5. I don't think comparing products that don't exist is very useful, so attempting to extrapolate what an 8-core version might do seems silly, especially since there isn't a one-to-one correlation between cores and performance.
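One standard way to see why halving or doubling core counts doesn't scale performance proportionally is Amdahl's law. The sketch below is purely illustrative; the 90% parallel fraction is an invented assumption, not a measured property of any T5 workload.

```python
# Hedged sketch: Amdahl's law, a standard model for why performance
# doesn't scale one-to-one with core count. The parallel fraction p
# used below (0.9) is an invented assumption for illustration.
def amdahl_speedup(cores: int, p: float) -> float:
    """Speedup over 1 core when fraction p of the work parallelizes."""
    return 1.0 / ((1.0 - p) + p / cores)

# With 90% of the work parallel, 16 cores are nowhere near 2x of 8:
s8 = amdahl_speedup(8, 0.9)     # ~4.71
s16 = amdahl_speedup(16, 0.9)   # ~6.40
print(f"{s16 / s8:.2f}x")       # prints 1.36x, not 2x
```

So even if a hypothetical 8-core T5 existed, dividing the 16-core result by two would tell you very little about it.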
In addition, at last check you can't pick the precise number of cores you want when purchasing a processor, so in my opinion it doesn't make sense to compare strictly core-to-core, since, as the other poster pointed out, a "core" is essentially whatever a vendor defines it to be.
So I'll just stick to refuting your original implication, that SPARC isn't setting world records anymore; it is in fact doing so. And SPARC continues to have far greater memory bandwidth, I/O bandwidth, and memory capacity than typical Intel offerings.
So if you want to find out how fast a T5 will actually run your application, try one out and get real data instead of relying on benchmarks to make your decision. Personally, I think you'd be shocked at just how well most workloads perform if you actually tried a T5.
As to the price argument, the companies I've worked for or with in the past generally didn't care about that so much as the reliability of the system and its capabilities. They have workloads that consume multiple terabytes of memory. They're using those servers to process transactions that net them millions of dollars, so saving a few thousand bucks doesn't matter to them.
In the end, you have to use the right tools for the right job. There's a reason that Oracle sells x86 servers too.
Personally, I don't care which architecture is being used as long as I get to use Solaris/ZFS.
I mentioned that SPARC performance stopped being a central focus after the death of Rock, in favor of massively multicore, low-cost designs derived from Niagara.
You pasted an Oracle press release that focused on parallel performance.
I complained, and pointed out SPECcpu as a common measurement.
You responded with SPECcpu_rate, a different benchmark focused on parallel performance measurement. It is not surprising that Niagara-derivative chips do well. It is also not surprising that, core for core, they can't match modern commodity systems for density, performance, or cost.
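The speed-versus-rate distinction at issue here can be made concrete. SPECspeed times one copy of a benchmark; SPECrate runs many concurrent copies and scores throughput. The sketch below uses invented runtimes purely to show how the two metrics diverge; none of these numbers come from any published T5 or Xeon result.

```python
# Hedged illustration of SPEC CPU's two metric families.
# All runtimes below are invented for illustration only.
ref_time = 1000.0         # reference-machine runtime, seconds (invented)
single_copy_time = 250.0  # one copy on the system under test (invented)
copies = 32               # concurrent copies in the rate run
parallel_time = 300.0     # wall time for the 32-copy run (invented)

# SPECspeed-style metric: how fast one task finishes.
specspeed = ref_time / single_copy_time

# SPECrate-style metric: aggregate throughput of many copies.
specrate = copies * ref_time / parallel_time

print(f"speed ratio: {specspeed:.1f}")  # prints speed ratio: 4.0
print(f"rate ratio:  {specrate:.1f}")   # prints rate ratio:  106.7
```

A many-core chip can post a huge rate number while its speed number stays modest, which is exactly why the two posters keep talking past each other.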
p.s. The "reliability" argument went out the window with VMware. Every Fortune 500 company is using x86 with VMware HA to provide the redundancy that would have once come from enterprise RISC. (Given how few RISC systems were ever configured with redundant memory or CPUs, VMware-on-commodity is probably offering a substantially better service level.)
p.p.s. a basic 1U x86 server will typically have between 0.75 and 1.5 TB of RAM in it. Yes, TB, as in terabytes. Virtualization provides a market for compact, low-wattage systems with significant memory. That's just the 1Us. In 2014, "Large" x86 boxes are very large indeed.
Lastly, I, too, miss Solaris/SPARC. It was a great platform that I enjoyed working with. I don't miss the associated hardware support contracts that came floating by after the Oracle buyout. It has been several years since I found a contract where SPARC or Solaris were anything but legacy platforms. It's sad, but it's not a mystery.
SPEC CPU Rate is part of the SPEC CPU benchmark; it is not a "different" benchmark.
The reliability argument doesn't "go out the window"; what do you think that software is virtualised on? And on top of that, you're losing a significant amount of performance by using a virtualisation solution like VMware.
In fact, there are entire segments of the industry that won't accept the latency that typical virtualisation technologies bring.
As for the 0.75-to-1.5 TB of RAM argument, perhaps you missed what I said about terabytes. Do you have any Intel boxes with 32 terabytes lying around?
While you may not have seen any contracts floating around for SPARC hardware, I have, quite recently.
Finally, as for "legacy" status, SPARC and Solaris have features not found anywhere else that continue to be developed and added even today. It's only a "legacy" platform if you completely ignore the technology there. And Solaris runs just fine on x86, thank you very much.