Thanks for sharing some programming-centric benchmarks using modern CPUs. I always wish that CPU reviewers would stop churning out pages of graphs with synthetic benchmark workloads and instead post things that actually matter, like compile times and other programming benchmarks.
The M1 processor is extremely impressive for the price and power envelope. However, I'm noticing a lot of people have taken Apple's marketing material a bit too literally and assumed that it somehow beats any and every desktop CPU out there, which is clearly not the case.
Moreover, as the author mentions: A significant portion of Apple's lead came from buying exclusivity on TSMC's 5nm process through the end of the year. It will be interesting to see how AMD stacks up as they roll out 5nm parts in the future, compared to Apple's scaled up M1 successors.
Exciting times. It's good to have some progress in CPU technologies again after years of Intel stagnation.
Not everyone is a programmer though. Synthetic benchmarks don't just compute prime numbers or render Mandelbrot sets. They test things like video compression, image processing, zip compression and decompression, etc. - things that most people do.
> Geekbench 5 measures the performance of your device by performing tests that are representative of real-world tasks and applications.
I'd rather rely on a Geekbench score than some programmer's compilation pipeline. Not saying programming benchmarks aren't important, just that you can't blame reviewers for testing comprehensively because your use case is specific.
And CPU/GPU companies have been caught multiple times in the past optimizing for these particular benchmarks. To the point where if you changed the name of the executable, the performance would drop significantly.
Has the situation changed since then? I mean, there's a reason why e.g. Nvidia drivers know and care about what games you have installed. I can also imagine quite a bit of rationale behind this from a positive angle - paying knowledgeable people to extensively hand-tweak settings for popular games, and then shipping those settings as part of the driver, seems somewhat reasonable. The above example, on the other hand, was probably done for nefarious purposes. But how would we know from the outside, when it seems to have become standard practice, and for good reason?
Emery Berger [0] at UMass Amherst does great work in this area, and it made me very wary of a lot of performance benchmarks. A big source of variance no one considers is alignment: things like the executable name (as you mentioned), the username, or anything else in the shell environment (which IIRC is copied into the first few pages of a process) can shift memory layout. I think in one case, one of Emery's students with a very long username ran the same application significantly slower because the username pushed the pages of code over a boundary.
I would highly recommend Emery Berger's talks [1] for anyone interested in benchmarking software. The alignment and code layout in memory that huayra mentions has shockingly large performance implications.
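A minimal sketch (my own, assuming a Unix-like system; `BENCH_PADDING` is a made-up variable name) of how one might probe this sensitivity: rerun the same workload while padding the environment block by different amounts, which shifts where the copied environment - and everything placed after it - lands. Berger's Stabilizer work does this properly by randomizing layout.

```python
import os
import subprocess
import sys
import time

def time_with_env_padding(cmd, pad_bytes):
    """Run cmd with an extra env var of pad_bytes size and time it.

    The environment block is copied into the new process's address
    space, so changing its size can shift the alignment of what
    follows it.
    """
    env = dict(os.environ)
    env["BENCH_PADDING"] = "x" * pad_bytes  # hypothetical padding var
    start = time.perf_counter()
    subprocess.run(cmd, env=env, check=True)
    return time.perf_counter() - start

# Sweep padding sizes; large swings for identical code hint at
# alignment sensitivity rather than a real code-quality difference.
for pad in (0, 1024, 4096):
    t = time_with_env_padding([sys.executable, "-c", "pass"], pad)
    print(f"padding={pad:5d}  time={t:.4f}s")
```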
Do you have sources? I have never heard of any CPU manufacturer optimizing Geekbench scores.
Consumer-grade CPUs are driven by a variety of things - marketing, brand loyalty, gaming benchmarks, reviewers such as AnandTech and ExtremeTech, and yes, Geekbench scores. Datacenter processors are a whole other beast, driven by vendors' proprietary benchmarks.
Geekbench runs many different tests - at least a couple dozen. So if CPU manufacturers are optimizing for those tests, it's hard to believe that's a bad thing; the optimizations would apply to pretty much whatever an average user does in the first place.
There are multiple instances of phone manufacturers boosting CPU performance when a specific app is loaded - e.g. OnePlus and Geekbench on the OnePlus 5 [1].
There’s the saying that Intel used to employ one compiler engineer per line of SPECint/SPECfp source code; and I heard it from an Intel engineer!
(This is not something that should be taken literally, but rather seriously; a lot of money was hanging on these particular benchmarks at some point - when enterprises were buying serious enterprise hardware - so it made sense to invest in compiler tech to make sure you’re not wasting all the hard work done by hardware designers on stupid code generation.)
> A significant portion of Apple's lead came from buying exclusivity on TSMC's 5nm process
It's hard to estimate how much of Apple's advantage came from the 5nm process, but we can get a guess by comparing the A13 and A14. One of the biggest differences between last year's A13 and this year's A14 is the 5nm process (vs 7nm+). The performance increase seems to be on the order of 15% [1], and Apple was able to fit in a lot more transistors- 11.8B vs. 8.5B, though we don't know how many of those transistors were spent on the CPU, and how many were spent in other areas like the GPU or ISP.
Seems like the additional transistors and lower power consumption were used to increase performance in a bunch of ways: higher clock speeds, more cache, and a bigger reorder buffer being the obvious improvements. Perhaps AMD can pull off a similar improvement when they switch processes.
How much of that added density is used for what kind of performance and how much is used to prioritize efficiency is up to the chip designers -- for the M1 you get a couple cores that focus on performance and a couple cores that focus on efficiency, plus a lot of area for the GPU and a not insignificant area for the mysterious (to me at least) 16-core neural engine (aside: I wonder if e.g. the graphics pipeline is able to make use of that, as it seems like it should be able to perform matrix multiplications, and it'd be a bit of a waste if that just sat there most of the time).
So you'll probably see some variation but each time a process is scaled down to 0.7x of the previous size, you'll get smaller transistors that use less power individually and you could expect a "40% performance boost for the same amount of power and a 50% reduction in area" (according to https://semiengineering.com/5nm-vs-3nm/)
90 nm (2003) * 0.7 = 63 nm
65 nm (2005) * 0.7 = 45.5 nm
45 nm (2007) * 0.7 = 31.5 nm
32 nm (2009) * 0.7 = 22.4 nm
22 nm (2012) * 0.7 = 15.4 nm
14 nm (2014) * 0.7 = 9.8 nm
10 nm (2016) * 0.7 = 7 nm
7 nm (2018) * 0.7 = 4.9 nm
5 nm (2020) * 0.7 = 3.5 nm
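The 0.7x rule in the list above can be sketched directly; note that a 0.7x linear shrink gives roughly 0.7² ≈ 0.49x the area, which is where the "about half the area per node" figure comes from:

```python
# Marketing node names (nm) from the list above; each full shrink
# targets ~0.7x linear scaling, i.e. ~0.49x area.
nodes = [90, 65, 45, 32, 22, 14, 10, 7, 5]

for n in nodes:
    linear = 0.7 * n
    print(f"{n:3d} nm * 0.7 = {linear:4.1f} nm "
          f"(area scale ~{0.7 ** 2:.2f}x)")
```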
To me, it's a bit of a miracle that Intel is still able to sort of compete on mostly 14 nm nodes, but maybe that's because "node size" basically just means "smallest feature size", and their 14 nm or new 10 nm process is a little better than e.g. competing 14/10 nm processes, or maybe their chip designs just prioritize different things that have a decent real world effect (e.g. Intel CPUs have AVX-512, but AMD CPUs don't).
Minor correction: the geometric mean of the uplift of the SPECfp and SPECint benchmarks is 21.3%, which is perhaps notably better than 15% (though that's up to interpretation).
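For reference, a geometric mean of speedup ratios is just the n-th root of their product; a tiny sketch with made-up per-suite uplifts (not the actual SPEC numbers):

```python
from math import prod

def geomean(ratios):
    """Geometric mean of a list of speedup ratios."""
    return prod(ratios) ** (1.0 / len(ratios))

# Illustrative (made-up) per-suite uplifts of 1.18x and 1.25x:
print(round(geomean([1.18, 1.25]), 3))
```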
A few caveats: A compiler change made the libquantum uplift (a part of the SPEC suite) huge, and non-representative of hardware changes. Conversely, the Geekbench 5 multicore results are actually worse for the iPhone 12 Pro vs. the 12, hinting that they're over-boosting the iPhone 12 Pro, so peak perf is likely actually a little better (from the hardware's perspective; not that an app can do much about it).
In any case, let's call it a 20% uplift with the same amount of threads, very respectable.
Another caveat: AMD may not be able to get the same 20% uplift from 5nm that Apple does. Without any expertise in this matter I can only repeat what others claim, but I've heard people say that it's hard to make x86 wider (i.e. extract more ILP) because the instructions are variable length, making lookahead more difficult. I guess we'll see next year with Zen 4 if that's true!
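A toy illustration of the decode-width problem (real decoders use speculation and length-predecode tricks, so this is only the naive picture): with fixed-width instructions every decoder knows its start offset immediately, while with variable-length encoding each start depends on all previous lengths.

```python
def starts_fixed(count, width=4):
    # AArch64-style: instruction k always starts at k * width,
    # so many decoders can begin in parallel.
    return [k * width for k in range(count)]

def starts_variable(lengths):
    # x86-style: instruction k's start depends on lengths 0..k-1,
    # an inherently sequential dependency chain.
    starts, pos = [], 0
    for length in lengths:
        starts.append(pos)
        pos += length
    return starts

print(starts_fixed(4))             # [0, 4, 8, 12]
print(starts_variable([1, 3, 2]))  # [0, 1, 4]
```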
The only part of Apple's marketing material that even references desktops are related to integrated graphics.
So what's clear is that a lot of people are acting irrationally, especially since the M1 is their low-end laptop CPU. At least wait for their mid-range laptop/desktop CPUs scheduled for next year before making proper judgements.
A lot of that is the IO die. The mobile equivalent will have 85-90% of the single-core performance with a 15W TDP. Apple still has better perf-per-watt, but it will be a much more apt comparison.
Also the MSRP is $450. It may be a bit more expensive now, but in a month or two the price will drop.
The most impressive part to me is how the m1 compares to the 3900X. I’ve got a mere 3600X and every laptop I owned or worked on over the past year is noticeably and painfully slower than the 3600X. It’s been a relief to get home and turn on my desktop. It doesn’t matter if the laptops I’m using are very recent i7’s and i9’s, the desktop is always very noticeably faster.
I got my m1 MacBook Pro 16GB yesterday and was pretty confused to find that Rust compilation felt faster than on the desktop. To go from recent Intel laptops taking two to three times longer to compile Rust than my budget desktop despite the laptops' whiny fans and extreme heat, to having my desktop be slightly outclassed by an ice-cold laptop on battery (which got at least 15 hours of use, including said compiling, with no need to charge) is a world changer. I can't remember the last time a laptop was this close to desktop performance for my everyday workflow.
Now that I think about it, I haven’t even tried optimising compile times on the MacBook like I usually do with Rust projects. My desktop would have been running lld at least to make its compilation significantly faster, and the MacBook more than kept up in spite of the handicap.
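For anyone curious, the usual way to opt a Rust project into lld on Linux is a `.cargo/config.toml` entry along these lines (assumes clang and lld are installed; the target triple will differ on other platforms):

```toml
# .cargo/config.toml
[target.x86_64-unknown-linux-gnu]
linker = "clang"
rustflags = ["-C", "link-arg=-fuse-ld=lld"]
```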
No doubt that the 3600x is impressive- but Desktop class CPUs will outperform laptop ones almost always (due to the significantly higher power draw and thermal envelope).
My 7 year old desktop goes toe-to-toe with my top of the line MacBook Pro from 2020 and in almost all benchmarks crushes it.
Your 4930K from 7 years ago is a 130W processor for PC enthusiasts; it was a top-of-the-line product back in its day with a $600 price tag. Most regular desktop users wouldn't buy that processor; e.g. I bought a 4770K at half the 4930K's price in 2013.
On the other hand, the 1065G7 you mentioned is a 15W processor that doesn't really represent what a laptop processor can do in 2019/2020. A laptop processor like the i7-10875H would be a far better apples-to-apples comparison if you still want to make your claims above. The reality here is that at a much lower power draw and thermal envelope (45W vs 130W), a laptop processor like the 10875H would crush the 4930K in both single-core and multi-core settings.
I don't have an m1 yet but have a similar experience with laptops performing extremely poorly compared to desktops (to be expected) but also poorly compared to my older Apple MacBooks. I have a 2015 13" MBP that will perform a compilation task quicker than my 2019 XPS.
The much older CPU in the MBP has a much faster base clock of 2.4GHz vs 1.3GHz in the XPS. What I find frustrating is that the XPS will throttle down to base clocks for any workload sustained for more than about 10 seconds, even if the temperature is reasonable, e.g. 50C.
In Windows I can work around this a little using ThrottleStop but I'm primarily a Linux user and haven't found a reliable way to bypass Intel's turbo limits with my Ice Lake CPU. Linux also has annoying bugs where the CPU will limit itself to base clock when connected to AC but then turbo up to 4.1GHz when the AC is unplugged, baffling.
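For what it's worth, a small sketch (my own, Linux-only, assumes the `intel_pstate` driver is in use) of how to check the turbo knob that tools like ThrottleStop poke on Windows; writing `0` to the same file as root re-enables turbo:

```python
from pathlib import Path

# sysfs attribute exposed by the intel_pstate driver on Linux
NO_TURBO = Path("/sys/devices/system/cpu/intel_pstate/no_turbo")

def turbo_status():
    """Report the intel_pstate turbo state, if the driver is active."""
    if not NO_TURBO.exists():
        return "intel_pstate not active"
    # "1" means turbo is disabled, "0" means enabled.
    return ("turbo disabled"
            if NO_TURBO.read_text().strip() == "1"
            else "turbo enabled")

print(turbo_status())
```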
Glad to hear about your positive results with m1, it's for sure a purchase I'll be making in 2021.
> Linux also has annoying bugs where the CPU will limit itself to base clock when connected to AC but then turbo up to 4.1GHz when the AC is unplugged, baffling
I had the same issue with my Dell before; it was caused by a faulty charger (I've replaced it 3 times). I've actually had a lot of power issues with Dell laptops; another Dell of mine can't enter deep sleep. I won't pick a Dell anymore >:<
I come away from this way more impressed by the M1 than I was going in. It's a comparison of a ~$700 stock, low-end complete computer vs. $700 for a high-end CPU alone. And the high-end CPU has overclocking tweaks enabled and overclocked memory.
The M1 generally keeps up quite well, which blows my mind.
The power use difference is pretty big as well.
Yeah, it would be great if Apple was totally open source.
But this is going to push the entire industry to do way better.
The M1 also has overclocked memory (by spec, DDR4 tops out at 3200MHz), and the 5800X isn't a $700 CPU; it's actually $450 (and it will fall even further soon - I bought a 3900X recently for below $400).
If that's true, then it actually is entirely in-spec. I had read several articles claiming it was running at 4266MHz, which would definitely have been extremely out of JEDEC spec, but not unusual for current-gen DDR4 XMP profiles.
Some higher-end RAM even gets above 2550MHz / 5100MT/s https://www.crucial.de/memory/ddr4/blm2k8g51c19u4b, so I had expected Apple to get somewhere around 7600-8400MT/s at 16-19-19-36 or better timings.
PSU: $90 (silverstone sfx 500w 80 plus gold - there's probably cheaper options but most PSUs seem sold out on amazon)
RAM: $70 (16gb 3200mhz CL16, gskill or team group)
Case: $70 (Silverstone tek)
Motherboard: $120 (asrock b550-m itx)
There's your $1000 Mac killer. I wouldn't call it a sensible build, but it should outperform the Mac across the board. Notably the GPU is significantly better; the closest GPU the M1 trades blows with is the Radeon 560, which is only $30 cheaper. Also, the Mac comes with 8GB RAM in the base model, and 16GB is a $200 upgrade, but I could not find a decent-speed 2x4GB DDR4 kit, as nobody wants that little RAM anymore in a custom build. I could save $40 by going with a 1x8GB kit for $30, but single channel would hurt performance.
I'd also consider paying $30 more to go from the 250GB SSD to a 1TB variant, something Apple charges $400 for.
The reason I wouldn't call it sensible is that the 1650 Super is a budget gaming card while the 5800X is a higher-tier CPU. If you were actually planning to build this for gaming, I'd suggest dropping down to the 5600X and putting the extra $100 toward the GPU. Actually, for gaming I'd even suggest squeezing the budget for a 2060 (or better, waiting for the standard 3060) by dropping to basic DDR4-2133 and going for a standard-sized case/PSU rather than the small form factor to chase the Mac.
On the other hand, if you're not gaming and just need a video output, save like 120 bucks and just get a GT 1030. Use that to buy some nicer RAM or more cores via a 5900X, depending on what suits your workload better. Sadly you have to get _a_ GPU, as Intel is falling behind and AMD won't give us high-end APUs in the retail market.
Doesn't include monitor, keyboard, mouse, speakers, webcam, microphone, battery backup, and fingerprint scanner which would be necessary to make it "comparable" to a base model Macbook Air.
It also isn't a portable, fanless laptop which has its perks.
In most programming benchmarks, single-core performance and RAM are the game. That's why the machine in this post has overclocked RAM and an undervolted CPU. In that area the 4750U loses badly. In Node, for example, you will see nearly 60% better performance on the M1.
(I am wondering why that got downvoted; perhaps it upset someone.) The 4750U sits at around a 1050-point GB5 single-threaded score, while the 5800X (with the tweaks detailed in the article) scores ~1800. The M1 is 1700 or so. The 4750U just isn't there for things like Node, and it scores similarly to Intel's best desktop CPUs, which in the real world are 50% slower at these very tasks than the M1 (or, I imagine, this 5800X).
Dunno, maybe because compared to Node.js developers, C/C++/Rust developers have more time to burn on HN (their multithreaded builds still take ages), so they are more likely to show up and downvote?
Joking aside, claiming "in most programming benchmarks, single core and RAM is the game" is far from the truth.
Also, benchmarks are biased towards multicore use cases, probably because slow tasks that feel like heavy computation are the ones that commonly get parallelization effort put into them, and they are fun to benchmark.
It may outclass it in performance. However, all my life I've been waiting for something that runs over 10h on a single charge. I'm genuinely surprised that this is not something people truly appreciate in a laptop. Hell, I even bought a cheap Atom laptop once just to have more battery time.
So as of now, there's literally no decent competitor to M1 laptops. One must be living under a rock to buy anything but Apple and this is coming from someone who doesn't have Apple products and always hated their walled off ecosystem. I am reconsidering my life choices :)
What if I want to use the same machine both at my desk and at work? The ability to use the machine on-the-go is one thing, but the actual portability is second.
But you already knew that, didn't you.
Not saying I agree with the "One must be living under a rock to buy anything but Apple" part, that's nonsense.
Sure, having a laptop in that case is unavoidable, but you won't _require_ 10 hours of battery life either.
In the long term I don't think laptops are exactly the right answer for portability. I think the ideal would be that when we get up from our desks, all of our running programs (even the whole OS) would migrate to our phones. As soon as we open our laptop, they would all migrate there.
That’s just your opinion. You don’t need large monitors to code (lines of code are 80-100 characters), and moving around and changing positions (desk to couch etc) every couple hours while working is generally better for health and concentration. Not having to hunt around for a power adapter makes it that much easier and better.
Using a laptop for 10h is super useful for days where you have to travel somewhere, have an appointment, and travel another 5h by train back. Avoids having to constantly have it plugged in in the train, and allows using it even in trains without power sockets
Neither does the Mac Mini, what's your point? If you're building your own desktop PC, then you're clearly not comparing it to a laptop.
On top of that, Apple charges $300 (in Sweden) for every 8GB of RAM, and $300 for every 500GB of SSD storage you add. The costs quickly even out. Unless you buy the absolute cheapest Mac Mini version, a self-built PC will be cheaper, at least according to the PCs I've built.
>Doesn't include monitor, keyboard, mouse, speakers, webcam, microphone, battery backup, and fingerprint scanner which would be necessary to make it "comparable" to a base model Macbook Air.
Let us not pretend that you don't need to also buy a keyboard, monitor, speakers, and mouse for the Macbook Air to be in anyway usable.
@dang Someone flagged my comments. I think they were very normal, about the value of a MacBook air M1 compared to buying parts and assembling a PC, but perhaps they were against the rules.
The parent post appears to be discussing a MacBook Air though. The specified machine doesn’t do much good for someone looking for a laptop; I’d be very surprised if the average customer compares a MacBook Air type device with a PC desktop when shopping for a new computer.
By the time you get the mobo, CPU, and PSU you're already over the price of an M1 Mac Mini, and even then, to get the results seen here you need to overclock the RAM and undervolt the CPU in a big way, putting you at the mercy of the silicon lottery (a good bin vs. a bad one). Personally, the fact that they had to start with some serious tweaking on a desktop to even compete with a laptop CPU seems messy.
First, the article is about performance benchmarks and how the M1 is not some magical entity like the Apple hype train has been screaming. Second, the comment I'm replying to is:
>for most use cases the apple's MBA offers much more value and will have slower depreciation.
You can build a fine computer with a $400 CPU and $500 left for the rest. One that will absolutely dominate any laptop over a long time span, because laptops cannot dissipate heat forever (it's simply a matter of mass and shape).
I got an M1 MBA to use until the real pro machines come out. It's much faster than the 2017 MBP it replaced, and does it while staying completely silent. The lack of fan noise is uncanny... a sort of "is this thing even on?" feeling.
Not scientific at all, but one of my large java app test suites takes ~7 minutes to run on my 2017 MPB and ~3 minutes to run on the M1 MBA using the Azul ARM build.
Worth noting that the M1 MBA doesn't have a fan at all, noise-making or otherwise. But the M1 MBP has been described as similarly silent, even with its fan running.
Why? A comparison on a single-core benchmark is at the disadvantage of the bigger CPU, not the opposite; the M1 would have seemed even better against a $7k 128-core Epyc.
Wait a few weeks and see how it compares to a 5x00U. Also consider that Apple is "cheaper" now because they're trying to bring people over quickly to help the transition to ARM; then everyone will be locked into the ecosystem, happy to pay the 30% Apple tax for the entire life of the hardware.
That sounds kind of conspiratorial. Apple has to charge more than a PC maker which spends almost nothing on build quality, design, or developing and maintaining its own OS and software.
Charging more in the premium segment is normal in any industry, because you have a smaller volume to spread fixed costs over.
I was saying the opposite: Apple could charge even more, IMO, but then leave the hardware open like others do. If you solder everything, firmware-lock everything, sell upgrades at 5x the price (and quote the baseline price to say "wow, it's cheap"), and then apply a tax on all software, then when we compare prices we should account for all that too.
The entry level is cheap so Apple can win some price-benchmark comparisons. In general I can understand why it's marketed like that, but I don't have to appreciate it.
Why don't we compare the 16GB RAM / 1TB SSD Mac at $2.5k against a $700 4800U machine with the same specs? If you are comparing programmer productivity... or for a better monitor... OK, let's use a $900 4800U machine then: 80% of the performance in single core and 140% in multi core for less than half the price. And wait for the 5800U and see. When you compare, you have to be fair between the components; a single-core benchmark from a 12-core/24-thread part vs. a 4-core part is the start of every biased benchmark.
Last year I replaced my aging Mac Pro with a 3900X AMD Hackintosh. The CPU upgrade was hard to believe. Everything felt fast in a way even my 2019 Air failed to achieve.
The idea that the laptop-grade M1 is keeping pace in these tests with the processor in my desktop is mind blowing. Sure the 5800x was tuned for these tests. Sure the M1 is on 5nm. All of this is missing the point: Intel created a huge market opportunity and 2 of the best poised companies to take advantage of it have.
Computer consumers are winning big thanks to TSMC in 2020. Let's all hope intel can turn things around and keep the space competitive.
The 3900X really is an incredible processor. I switched from mostly crappy Dell work laptops to a proper desktop with a 3900X late last year, and the compile times and general responsiveness are night and day (not unexpectedly).
I went from an i5-2500K to a 2700X, I was going to replace it with a 3xxx series (3700X or better) but there is simply no rush, the 2700X is a monster on my workloads.
So I can sit and wait for the 5000x series to be available at which point the 2700X/2080 goes to the boy and the 5950X/3080 goes in my work machine.
Hopefully sometime in the first half of next year when stock levels stabilise.
> Let's all hope intel can turn things around and keep the space competitive.
Although they have a lot of work to do, I can't help but think people writing off Intel are way too keen. Intel's annual dividend is still bigger than AMD's revenue IIRC, so at worst they should be able to buy themselves out of the hole they're in.
That’s not how that works. The dividend may be larger, but that only is somewhat realistic if AMD were trading at 1x revenues (P/S ratio)...which they aren’t. Intel can’t afford AMD without huge financing or share issuance.
Intel would have to be literally batshit to try to buy AMD, unless their market position was so thoroughly lost that the deal wouldn't be shat on by regulators.
I'm glad someone finally started comparing state of the art vs. state of the art.
Most comparisons have been M1 vs the frankly dated Intel offerings with the previous Mac product line.
Things to remember: power/performance is not linear, so a 5800X-like CPU could be scaled back in power consumption without sacrificing the same proportion of its performance. Also, the 5800X is on TSMC's 7nm process whereas the M1 is on 5nm; this difference will have a measurable impact on performance and power consumption.
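The non-linearity comes from dynamic power scaling roughly with C·V²·f, where voltage must also drop as frequency drops. A back-of-the-envelope sketch (illustrative numbers, not measured 5800X behavior):

```python
def relative_dynamic_power(f_scale, v_scale):
    # Dynamic power ~ C * V^2 * f; capacitance C cancels in the ratio.
    return (v_scale ** 2) * f_scale

# Backing a core off to 80% frequency at ~90% voltage:
p = relative_dynamic_power(0.8, 0.9)
print(f"~{p:.2f}x power for 0.80x frequency")  # ~0.65x power
```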
This isn't an apples to apples comparison like the author says, because the package power (37W) of the Ryzen is still quite a bit higher than the M1's. You can see it right there in the graph from Anandtech.
The 5800X is a beast of a CPU. The IPC improvements over the previous generation really are incredible. It's not surprising to me at all that it beats the M1 in several benchmarks. However, it's not the same class of device at all.
In the real world, there aren't many people making purchasing decision between an M1 Apple laptop or Mac Mini and an AMD 5800X desktop machine anyway. Still interesting to see the comparisons.
The leaked Geekbench results of AMD's upcoming 5800H laptop parts are about 85% of the M1's single-core score and matching the M1's multi-core score. TDP is still higher than the M1's, obviously, but AMD is also using a 7nm process while Apple got the jump on 5nm by buying exclusivity on TSMC's 5nm process through the end of the year.
I'm extremely impressed with the M1 and I'll be buying an Apple Silicon device as soon as they have something with 32GB or more of RAM. However, the sentiment that the M1 chip obliterates any and every desktop part on the market is getting kind of silly at this point.
> However, the sentiment that the M1 chip obliterates any and every desktop part on the market is getting kind of silly at this point.
1. Nobody ever stated that. If anything, it was all of the Intel and AMD users who created that straw man. "See, it's slower than a 64-core Threadripper at 280 watts". No kidding.
2. The M1 is clearly not a desktop-class SoC nor was it intended to be. However, it's more than competitive with many desktop x86 processors that run faster and hotter while consuming much more power.
3. As a first attempt, the M1’s performance per watt is very impressive. Obviously the 5nm process helps but it's never one thing with Apple: it's the unified memory, the 8 instruction decoders (AMD and Intel max out at 4), and the integration of the 16-core Neural Engine, among other features.
Look, it doesn't have to work for everyone. Of course there are workloads where it's not going to be enough. That's fine. My personal workload requires 128GB of ram but I don't go around telling people 8GB laptops are pointless.
I don't think that's the point they were trying to make. I think they were saying that just because the OS has RAM compression doesn't mean it's like doubling the RAM on other operating systems, and that's even before you consider that those other operating systems have RAM compression too. Saying something is like having 16 GB of RAM when you can't actually use it as 16 GB of RAM deserves to be called out.
It's helpful, but it's nowhere near doubling your memory. Especially when you have an SSD to swap inactive data to, the benefit of compression is equivalent to adding something like 0-2GB of RAM.
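The reason compression isn't "free doubled RAM" is that the win depends entirely on how compressible the resident pages are. A rough illustration using zlib (real OS memory compressors use much faster codecs such as WKdm or LZ4, but the principle is the same):

```python
import os
import zlib

PAGE = 4096

patterned = b"\x00" * PAGE       # zeroed/patterned pages compress well
random_page = os.urandom(PAGE)   # random (or already-compressed) data doesn't

print(len(zlib.compress(patterned)))    # tens of bytes
print(len(zlib.compress(random_page)))  # ~4 KiB, no savings
```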
> In the real world, there aren't many people making purchasing decision between an M1 Apple laptop or Mac Mini and an AMD 5800X desktop machine anyway
I have a 15 inch 2018 i9 MBP that's objectively a terrible device (throttling/keyboard).
With corona and WFH I'm weighing my options - right now everything is out of stock, but early next year I'd be interested in building a desktop. My problem is that the M1 is so good that if they do an 8-performance-core version for the 16 inch MBP in spring, there would literally be no point in having a desktop - I could have a single device that outperforms the desktop machine. Frankly I don't see why they couldn't offer a 13 inch variant with more cores - thermals are not a constraint.
Does Apple ever announce anything before the day it happens? This is the company that drops iOS, macOS, and the M1 on developers the same day it gives them to the public.
I'm not sure where people are pulling all these Apple silicon timelines from, but I can guess :)
There have been no leaks, but there's also no reason to believe it isn't coming soon. Hypothetically the latest it would come out is the first half of 2022, but I'd be surprised if it took that long for the 16 inch MacBook Pro.
Depends. If you go for the base model, yes. If you need more memory, a lot of fast storage, gobs of GPU horsepower, or a lot of I/O, the 5800X will definitely be cheaper by a mile.
That being said, at the same config, the 5800X system would indeed only barely be cheaper.
Yep it's probably the first at this size and packaging that you can just hook up to a TV and replace a decent desktop computer with (like I did; goodbye late 2015 iMac 5K...).
> please keep in mind that for single-threaded tasks, a single core doesn’t consume all 95W TDP headroom of the CPU. Based on Anandtech’s analysis, a single 5800X core only consumes 17.3W at 4850 MHz
Yeah, that's irrelevant. One M1 core is not going to use its entire package power either, because there are other powered cores in there like the GPU.
Look at the chart again. One Ryzen core active, 37 watts total package power. It doesn't matter what the per core wattage is, because you can't get that one core without the package.
> One Ryzen core active, 37 watts total package power. It doesn't matter what the per core wattage is, because you can't get that one core without the package.
This is the desktop Zen 3 which is still using the 12nm I/O die. It uses more power as a result, which nobody really cares about on the desktop. The Ryzen laptops use a single die which is entirely 7nm and thereby less power with no corresponding reduction in performance.
Interestingly, the conclusion states the following:
> While M1 is indeed very powerful for its size, when comparing it to the high-end x86 desktop, it is still slower.
Okay, great. I'm not sure what to expect when you compare Apple's lowest-end, first-generation processor to one of the more high-end x86 parts.
Edit: Forgot to mention that the power draw on the M1 is still significantly less; the M1 with all cores @ 100% is ~20W, whereas the x86 Ryzen was at 17.3W for one core.
> To conclude that it performs better than the existing x86 CPUs, is a mistake.
True, no denying that. Exciting times in this market; I wonder how much better gen 2 of Apple's chips will be if the lowest end can do this.
The reason people are making these comparisons between CPUs that are basically in entirely separate market segments is Apple's marketing suggesting that the M1 beats everything, rather than just other low-power CPUs, and people taking said marketing at face value.
Can you provide a link where they say it beats everything ?
Because on their marketing page they specifically compare it to laptops and refer to low-power use cases. I have never seen them compare it against HEDT platforms.
Here's the initial announcement video with a timestamp where they called it the "world's fastest CPU core". They've very quickly stopped using that as the tagline, but it was part of their initial marketing push.
https://youtu.be/5AwdkGKmZ0I?t=529
True, the text on screen did have that phrase without qualifiers, but what he actually said is: “…when it comes to low-power silicon, our high-performance core is the world’s fastest CPU core.”
Well, not quite. They claimed that their own processor designs will beat everything. They made significantly less bold claims about the M1.
While obviously neither of us can predict exactly what they’re planning for the rest of the product line, I think it would be naive to assume they haven’t at least internally validated that they can improve performance above these entry level offerings. Otherwise they just axed almost their entire Mac brand, and quite likely the rest would falter before too long.
I think it's very naive to assume that the marketing department of a megacorporation isn't going to say anything to make more sales.
It's likely that no-one has validated anything when it comes to future tense comparisons.
Read all, believe nothing, especially don't believe the future predictions of someone trying to sell you a laptop today.
Apple has future M1 samples for sure, but they don't have future AMD samples to benchmark against, unless somehow Apple has done some industrial espionage...
In car terms, it's like Tesla trying to persuade someone not to buy a Porsche Taycan. Of course Tesla's going to say their battery lasts longer, even longer than the second generation Taycan no-one has seen that's coming out in a few more years.
> I think it's very naive to assume that the marketing department of a megacorporation isn't going to say anything to make more sales.
Huh? I was just saying the claim (whether it turns out to be true or not) wasn’t about only the M1, and that it’s more reasonable to assume that Apple is confident they can release something above their lowest range offerings.
> The reason people are making these comparisons between CPUs that are basically in entirely separate market segments is because of apples marketing suggesting that the M1 beats everything
I thought it was pretty clear they were talking about mobile with the m1.
I think we're seeing posts around desktop CPUs and AMD's upcoming chips because non-Apple people are concerned/annoyed, possibly subconsciously, that Apple Silicon is within striking distance of being some of the best hardware out there. I'm not saying that Apple has earned that crown yet, but if they do, and it's only available on Apple machines, it will annoy a lot of power users who want the best.
>apples marketing suggesting that the M1 beats everything
Nope—Apple’s marketing never said that. Everything they said in their November 10, 2020 press release has been backed up by reviewers and testers. It's you guys that blew their claims out of proportion.
For example, there's no reason to disbelieve that the M1 13-inch MacBook Pro is up to 3x faster than the best-selling laptops (of the 9 months leading up to November 10, 2020) in its class—13″ to 14″ laptops that cost around $1200.
Is there a popular PC desktop that cost $600-$699 that's faster than the Mac mini? I've already seen the M1 Mac mini discounted to $625…
From the Apple press release [0]:
And in MacBook Air, M1 is faster than the chips in 98 percent of PC laptops sold in the past year. [1]
And with M1, the 13-inch MacBook Pro is up to 3x faster than the best-selling Windows laptop in its class. [2]
And when compared to the best-selling Windows desktop in its price range, the Mac mini is just one-tenth the size, yet delivers up to 5x faster performance. [3]
[1]: Testing conducted by Apple in October 2020 using preproduction 13-inch MacBook Pro systems with Apple M1 chip and 16GB of RAM. Performance measured using select industry-standard benchmarks. PC configurations from publicly available sales data over the last 12 months. Performance tests are conducted using specific computer systems and reflect the approximate performance of MacBook Pro.
[2]: Testing conducted by Apple in October 2020 using preproduction 13-inch MacBook Pro systems with Apple M1 chip, as well as production Intel Core i7-based PC systems with Intel Iris Plus Graphics and the latest version of Windows 10 available at the time of testing. Best-selling system based on publicly available sales data over the last nine months. Tested with graphics-intensive workloads in commercial applications. Performance tests are conducted using specific computer systems and reflect the approximate performance of MacBook Pro.
[3]: Testing conducted by Apple in October 2020 using preproduction Mac mini systems with Apple M1 chip, as well as production Intel Core i5-based PC systems with Intel UHD Graphics 630 and the latest version of Windows 10 available at the time of testing. Best-selling system based on publicly available sales data over the last nine months. Tested with select industry-standard graphics benchmarks. Performance tests are conducted using specific computer systems and reflect the approximate performance of Mac mini.
Of course it's more reasonable to compare it with other low power CPUs in laptops, and not desktop CPUs with 5x the TDP that cost as much as the mac mini before you even buy the rest of the PC, I agree with you on that - I'm just pointing out that people are making these comparisons because Apple themselves did it first.
I'm just pointing out that people are making these comparisons because Apple themselves did it first.
This is slightly disingenuous—I think most people know they meant the design of their cores is faster than any other core, GHz for GHz. They didn't say "the M1 is faster than any other processor out there".
The 5800X runs at 3.7 GHz, peaking at 4.8 GHz with all 8 cores.
The M1 runs at 3.2 GHz and 4 of the cores are low-power cores. Of course the 5800X is faster; this shouldn't be a surprise.
However, as the AnandTech review pointed out with their benchmarks, if the M1 ran at the same speed as AMD's processors, the M1 would be faster. That's what Apple implied with the "world's fastest core" thing. Even now, the M1 has 8 instruction decoders vs. AMD's 4, allowing it to process more instructions per clock cycle, and it has faster RAM.
They have cranked-up M1s in the lab running at faster speeds that they've benchmarked, so they know what they said is true, even if they can't say how they know yet.
The proper way to understand Apple-speak is "even though the M1 is an entry-level chip, it's more than competitive with Intel and AMD's newest chips at a fraction of the power and heat. Wait until we crank up the speed and add more performance cores in future products to see what they can really do."
>if the M1 ran at the same speed as AMD's processors, the M1 would be faster. That's what Apple implied with the "world's fastest core
That's a ridiculous claim. If the AMD CPU ran at 10 GHz it would be even faster. Neither one does, because neither CPU is designed to run at those frequencies.
The M1 is an amazing CPU and it is extremely fast given its power consumption and the frequency it runs at, but a lot of the claims that run around are straight out of Apple Reality Distortion Field.
You might want to watch that video you’re spamming links to. Note the words immediately prior to what you quoted are: “when it comes to low power silicon”. Your entire narrative falls apart if you don’t quote like a creationist.
> But this device wasn't built for use cases where power consumption doesn't matter.
This makes less difference than you'd think, because power consumption matters all over. If you use too much power you get high temperatures and have to clock down to avoid overheating.
The core designs for desktops and laptops are basically the same. The desktops use more power because it allows them to hold higher clock speeds -- the base clock of the 4800U (8C, 10W) is 1.8GHz, the base clock of the 3945WX (12C, 280W) is 4.0GHz.
This is also why desktops and laptops have about the same single thread performance. Running a single core at full speed is within the laptop power budget.
I care how it compares to other products on the market for my use case. It doesn't matter what it's built for. This is a relevant comparison from that perspective.
That's cool, you shouldn't buy one if you care about portability. I personally love not being tethered to my desk, so I haven't really owned a desktop in a decade. The M1 for me was just so perfect I had to jump out and buy one!
What’s crazy is the 5800X destroys my Threadripper 1950X, with 35% better performance even on highly parallelized workloads! On the other hand, my Sandy Bridge 2600K CPU still trades blows with 2020 so-called “10nm” desktop processors.
Yeah, it's hard to compare. This makes sense if the question is "what is the most powerful CPU, regardless of power usage", but there isn't anything close to the M1 when it comes to high performance and long battery life. Even Ryzen 4000 series laptops throttle a lot on battery power, while M1 performance doesn't change. I don't expect this to change with Zen 3 laptops either.
...which is entirely reasonable. Laptops are heavily customized/optimized for power/heat, whereas off-the-shelf CPUs (like the 5800x) are very much general-purpose devices but can absolutely be customized to be fit-for-purpose (and the entire hardware ecosystem is built around that).
> Java Renaissance: Ryzen 5800x is faster than M1 in most tasks by a large margin.
> Java SciMark 2.0: In the SOR benchmark, the 5800X is more than twice as fast. For the others, the 5800X is slightly faster, with the exception of Monte Carlo integration, where it scored 2.7% lower than the M1.
> Java DaCapo: The 5800X is mostly faster by a significant margin, except for the H2O benchmark, where it is more than twice as slow.
> Python PyPerformance: Overall, the execution time is roughly the same, with the 5800X slightly faster. Probably a faster Python implementation like PyPy can highlight the differences better.
> golang.org/x/benchmarks: The 5800X performs significantly better in all benchmarks, around 30% faster in most benchmarks and some are twice as fast.
> Redis: The 5800X performs significantly better in all benchmarks
> JavaScript Web Tooling Benchmark (v8): The 5800X is significantly faster in most benchmarks
> Java Renaissance: Ryzen 5800x is faster than M1 in most tasks by a large margin.
You can also write some thing like following
- In 6 of 24 cases M1 is faster than 5800X
- In 7 of 24 cases 5800X is faster by a margin
- Rest are toe to toe and 5800X is slightly faster
The 5800X and M1 are different classes of CPU with different constraints. The M1 is designed for low-end notebooks, and this comparison is enough to show us that the M1 is fast enough.
----
I think the M1 Mac mini was released partly so developers could use it as a CI workstation to improve the porting situation for Apple Silicon. Lots of open source projects can't add Apple Silicon support because CI/CD services don't support it, so unit tests don't run for arm64-darwin.
The author seems to consider “large margin” and “significantly better” to be around 25%. Not sure most people would agree with that. Also, can you imagine what would happen if the m1 was overclocked and actively cooled?
What are you basing “twice as fast” on? The margins shown are nowhere near that across the board, and it took overclocking a system with 4 times the RAM to produce the most decisive wins. Java is sensitive to memory pressure, so the RAM alone is a significant factor.
It was a Mac Mini, but yeah, I can't find it in the article, did the tests account for cooling? I assume the M1, like any other modern SoC, will throttle as the temperature goes higher.
It has a fan and heatsink identical to the intel Mac mini, which was a 45w part. It was insufficient for that task but more than enough for the M1 running all out.
lol judging from this post, my MBP m1 is pretty much better than my desktop 3900x...
Not disappointed! The only time the fan even spun up was when I built nodejs from source (which took 10mins). And even then it ran at like ~10% fan speed or something.
It wasn't really very fast. Intel got away with releasing basically the same CPU year over year with minor incremental improvements for almost a decade before anyone caught up.
Tiger Lake has pretty impressive single-core and GPU performance. Tiger Lake H also promises to be very interesting.
Honestly, 2021 is going to be a good year for laptops and mini-pcs. I don't think any of the chip manufacturers have objectively bad silicon for those form factors anymore.
Please note that M1 MBP with 16GB of RAM is being compared to Ryzen 5800X machine with 64GB of RAM. GC-driven memory management performance (I am looking at you Java) can tangibly affect the benchmarking results.
Those are single-core benchmarks. The wow effect comes from comparing a 4-core part against a 24-thread CPU on single-core benchmarks; the 5800X with only 4 cores enabled would probably still beat the M1, probably even with one.
Yeah that's the impression I get every time. Just think about it. The most power hungry desktop core needs 20W and going all the way from 4Ghz to 4.8Ghz is easily the difference between 10W and 20W. Apple's TDP for the entire package is 30W. They can trivially just overclock their CPU to the same performance level of any desktop. Nothing surprising here but everyone sells this like some amazing thing. In practice they use less power because the SoC is fully integrated, they have the 5nm process advantage and there are special processes for mobile chips that consume less power below 4GHz.
Consider for one second that in theory the M1 is technologically superior because it can decode double the instructions per cycle, and yet it still only consumes half the power while delivering worse performance on a superior manufacturing process. A lot of hard work with very little to show for it.
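The clock-versus-power claim above can be sanity-checked with a back-of-the-envelope model. Dynamic power scales roughly with C·V²·f, and voltage has to rise roughly in step with frequency near the top of the V/f curve, so power grows close to cubically with clock. A rough illustrative sketch, not measured data:

```python
# Rough dynamic-power model: P ~ C * V^2 * f. Near the top of the V/f
# curve, voltage rises roughly linearly with frequency, so P grows
# roughly with f^3. Purely illustrative; real curves vary per chip.
def relative_power(f_new_ghz, f_base_ghz):
    return (f_new_ghz / f_base_ghz) ** 3

# Boosting from 4.0 GHz to 4.8 GHz under this model:
print(round(relative_power(4.8, 4.0), 2))  # ~1.73x the power
```

Even this crude model shows why the last few hundred MHz of boost are so expensive, and why laptop parts give up a little clock speed for a large power saving.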
Regarding the 5nm vs 7nm thing, I expect Apple to stay at least one node or intra-node improvement (e.g. 5nm vs 5nm+ or 4nm) ahead for the foreseeable future. They pay TSMC a pretty penny for access to cutting-edge nodes. Not much different from how Intel had a node advantage over others, albeit here Apple is paying for it, so at least others aren't locked out of it for too long.
They'll presumably be the first to use each node, but that only gives them a number of months before competitors are on it. I would expect to see 5nm Ryzen before 3nm Apple Silicon.
There's also a major question about what happens with Intel. If they ever get their process advantage back then it's not clear what Apple's response is. But if they implode then AMD takes over the PC market and probably becomes TSMC's biggest customer, which could put them in a position to get on newer nodes at the same time as Apple.
I think it's interesting that Apple is continuing what they started with Intel. Remember Apple's deal with Intel, where Intel would hold back their newest CPUs from the rest of the market so Apple could have their big reveal and "world's fastest blahblahblah" blurb to push their Mac refreshes? But Apple never really upgraded their processor offerings, and you couldn't upgrade the processor on the machine you bought, so after 6 months or whatever they would be lagging behind in performance vs. what you could build or buy elsewhere.
Well, I guess once AMD whooped Intel that was no longer going to be an option. Not only would Intel not be able to deliver the "world's fastest blah blah" marketing claim, but Intel couldn't afford to hold anything back from the general market. I guess their little deal with TSMC lets them clear the cobwebs off this strategy and continue it for a bit longer going forward.
Yeah, I think this argument about node size comes up because in this case you can't buy just the SoC, only the whole Mac. People didn't care that some of Intel's advantage came from their process superiority, because you didn't have to make any choices other than the CPU itself.
AMD would rather spend money on getting more capacity for the current 7nm products, that have seen a couple of years of severe shortages at launch, and gain market share. It's a problem that Apple doesn't seem to have, though maybe with desktop/server chips they might if they also start to gain market share.
I'm a huge AMD fan. Been waiting all year for my dream Ryzen laptop but just given in to a MacBook Air M1.
I have no regrets, but this gives me hope that a true Ryzen ultrabook might exist next year, now that the industry has to take AMD seriously. I will sell my MacBook in a heartbeat if that happens.
The plus is that Apple hardware has low depreciation, so it might be worth getting the M1 if you're a dev and appreciate rapid compile times. My webpack build times are less than half what they were on the 7700HQ I was using, which makes me way more productive. Also just got a nice Ubuntu VM going last night using vftool. :-)
It is explained in the article: undervolting the CPU allows it to run longer at higher frequency; otherwise it will be thermally throttled. RAM has no such problem, so overclocking improves performance.
Yes, but is that fair in a benchmark? The RAM maybe because all high end RAM speeds are technically out of spec and are overclocking. But messing with the CPU voltage to juice the benchmark results seems like cheating to me.
> Apple's chip has 4200Mhz RAM. Wouldn't it be fair to have both ram speeds the same?
No it wouldn't, because Apple is selling that as their supported configuration. If that crashes due to ram / chip instability you can take it back to Apple to get a new one with the expected performance. That's completely different from your homebrew ram overclock, if that config doesn't stay stable, tough luck.
For Zen, you actually want to run your memory clock at a multiple of the FCLK and not as high as possible. So making the RAM frequency the same as the M1 might have actually harmed performance.
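For readers unfamiliar with the coupling rule being referenced: DDR4's advertised transfer rate is double the real memory clock (MCLK), and Zen is happiest when the Infinity Fabric clock (FCLK) runs 1:1 with MCLK. A minimal sketch of the arithmetic (values illustrative):

```python
# DDR4 is double data rate, so the real memory clock (MCLK) is half the
# advertised transfer rate. Zen's latency is best when FCLK == MCLK (1:1).
def mclk_mhz(ddr_rate):
    return ddr_rate // 2

def coupled_1to1(ddr_rate, fclk_mhz):
    return fclk_mhz == mclk_mhz(ddr_rate)

print(mclk_mhz(3600))            # 1800
print(coupled_1to1(3600, 1800))  # True: DDR4-3600 pairs with an 1800 MHz FCLK
print(coupled_1to1(4266, 1800))  # False: FCLK can't match MCLK, fabric decouples
```

This is why simply matching the M1's 4266 MT/s rate on a Zen system can add fabric latency rather than help, unless the chip can also hold an unusually high FCLK.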
I've read various theories about the right memory clock speed to use for Zen, but in all the gaming benchmarks I've seen, faster RAM clock speed ends up resulting in higher performance.
> I've read various theories about the right memory clock speed to use for Zen, but in all the gaming benchmarks I've seen, faster RAM clock speed ends up resulting in higher performance.
Got citations? Better yet if there are any developer-oriented benchmarks.
I'm building a Zen 3 machine to use as a workstation. I just ordered DDR4-3600 RAM (but can still probably change the order if I made the wrong call). The specifications [1] only mention 3200 MHz, so I think even that is overclocking the I/O chiplet to run at 1:1. I want a stable development machine rather than a toy, so I wasn't planning to go further.
I also read an interview [2] (most interesting part quoted below) which suggests there's diminishing utility in going beyond 3600.
> Okay, so what’s the best price/performance?
> DDR4-3600 continues to be a “sweet spot.” The kits are inexpensive, widely available, perform well, and have good compatibility. Is it the best in every category? No, but that’s not what the sweet spot is. 3600 is a good bet because it’s a good value in perf/$ for someone who wants to plug and play. Is it the best possible performance? No. Is it close? Yes, and without tinkering.
> What’s the best memory, even if i have to overclock?
> Probably very tight timing 3600 or 3800, just like the Ryzen 3000 Series. The timings on these memory bins can be super aggressive versus higher memory speed grades, and that usually overpowers frequency.
My RAM is actually CL18 DDR4-3600. Maybe I should have gone with CL16...I'm not sure how much it really matters for development though, and I'm getting 64 GiB of RAM so the cost per byte adds up a bit more than it does for a gamer buying 16 GiB.
Here's one side-by-side you can watch with the same games being played on the same Zen-based system, but with varying RAM clock speeds: https://youtu.be/8H0DEkpEDCE Generally, the highest-clocked RAM got the best average and min FPS in the games tested.
I'm having trouble finding it now, but IIRC I also once saw a side-by-side comparing different RAM latencies, and the differences were not enough to outweigh clock speeds; i.e. overclocking with worse latencies was better than lower latency but lower clock.
That being said, I've seen non-gaming benchmarks that imply there is a 3600 MHz sweet spot. So it may vary based on workload. I haven't looked into that closely though — I was looking at benchmarks when building a gaming PC, so I had a specific focus :)
The M1 beats my Xeon E-2176M @2.7GHz with 64GB of memory in every test but Jython. (I'm on OpenJDK 15, which probably doesn't matter). Later tomorrow I'll try it on my 2700x based system.
45W vs 10W... Running these tests may have changed my mind on this little CPU!
I have a little doubt about the results of the Java benchmarks.
I think the JVM tests should have been executed with the same amount of heap allocated for each platform in order to get the internal dynamics/heuristics of the JVM to be comparable. IMHO all JVM tests should have been executed with -Xmx 7G
(8 Gigs maximum as per the M1 MacBook Air minus a little something for the OS and the system buffers)
Why? You cannot buy 8gb of dual channel high performance ddr4 - it's a capacity with such a small market and low expected price that manufacturers won't bother. It's only Apple that insists on $200 extra for 16gb.
I'm sorry, if I wasn't making this clear enough. I'm not proposing to remove DIMMs. I'm merely suggesting to run the Java VM with the same amount of memory/heap for all the platforms. That is purely a software setting. The command-line switch to configure the heap size of a JVM is -Xmx
The reason why I'm suggesting the same heap size for the benchmarks is that the maximum heap size is the single most important (tuning) setting for the JVM. Based on this setting, the VM sizes its internal data structures and adapts its behavior.
Also, garbage collection times are typically* dependent on the size of the heap. With most* garbage collectors collecting a 32gb heap takes longer than collecting a 8gb heap. If the workload of the benchmark allows for the heap to be used entirely, then garbage collection overhead is directly related to the heap size.
* unless a "big heap" garbage collector like ZGC or Shenandoah is used
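To make that concrete, pinning the heap is just a matter of passing the same flags on every machine. A hypothetical launcher sketch (the jar name and the 7 GiB figure are assumptions taken from the comment above, not the article's actual setup):

```python
# Hypothetical helper: build an identical JVM invocation for every
# platform under test, so the GC sizes its internal structures against
# the same ceiling (-Xmx) and initial heap (-Xms).
def jvm_command(jar, heap_gb=7):
    return ["java", f"-Xmx{heap_gb}g", f"-Xms{heap_gb}g", "-jar", jar]

print(jvm_command("renaissance.jar"))
```

Setting -Xms equal to -Xmx also removes heap-growth heuristics as a variable between runs.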
Can anyone explain the Redis numbers, specifically the 3900X versus 5800X [1]? The 5800X is showing numbers easily more than 2x the 3900X. I understand this is a single-threaded test, so the additional cores of the 3900X (12C/24T versus 8C/16T for the 5800X) do it no advantage. But I wouldn't expect the Zen 3 architecture to be that dramatically superior to Zen 2.
I would bet that the redis-benchmark and the redis-server were running across CCDs, not pinned to CPUs on the same chiplet. I have a 3970X; I'll run this test on different chiplets and on the same chiplet...
$ taskset -c 1 redis-server
$ taskset -c 2 redis-benchmark -p 6379 -P 8 -q -c 1 -n 100000
PING_INLINE: 775193.81 requests per second
PING_BULK: 869565.19 requests per second
SET: 724637.69 requests per second
GET: 787401.56 requests per second
INCR: 775193.81 requests per second
LPUSH: 684931.50 requests per second
RPUSH: 699300.69 requests per second
LPOP: 680272.12 requests per second
RPOP: 704225.31 requests per second
SADD: 769230.81 requests per second
HSET: 675675.69 requests per second
SPOP: 819672.12 requests per second
LPUSH (needed to benchmark LRANGE): 680272.12 requests per second
LRANGE_100 (first 100 elements): 107758.62 requests per second
LRANGE_300 (first 300 elements): 30721.96 requests per second
LRANGE_500 (first 450 elements): 20040.08 requests per second
LRANGE_600 (first 600 elements): 14658.46 requests per second
MSET (10 keys): 317460.31 requests per second
$ taskset -c 20 redis-server
$ taskset -c 2 redis-benchmark -p 6379 -P 8 -q -c 1 -n 100000
PING_INLINE: 469483.56 requests per second
PING_BULK: 505050.50 requests per second
SET: 446428.56 requests per second
GET: 465116.28 requests per second
INCR: 458715.59 requests per second
LPUSH: 425531.91 requests per second
RPUSH: 438596.50 requests per second
LPOP: 429184.56 requests per second
RPOP: 434782.59 requests per second
SADD: 460829.50 requests per second
HSET: 421940.94 requests per second
SPOP: 478468.88 requests per second
LPUSH (needed to benchmark LRANGE): 425531.91 requests per second
LRANGE_100 (first 100 elements): 96899.23 requests per second
LRANGE_300 (first 300 elements): 29019.15 requests per second
LRANGE_500 (first 450 elements): 19109.50 requests per second
LRANGE_600 (first 600 elements): 14035.09 requests per second
MSET (10 keys): 253807.11 requests per second
As the sibling comment said, this is most likely a methodology error. The difference is due to the 3900X benchmark goes plainly wrong. I guess the take home lesson is if someone is benchmarking a Zen-like CPU without mentioning "NUMA" at least once in their post, just don't trust it.
A very important reason this CPU benchmarks well in these tests has to do with the fast SSD and SSD controller. One cannot ignore the I/O factors in these tests, which often involve reading and writing to lots of little files.
An additional observation I haven't read here is that all of these languages (except Python) are JITted, and the ARM backend may be less mature than the x86 equivalent.
The speculation I've read is that the Pro will look similar to current styling but smaller, and the iMac will be a screen on a stand without the chin from the current model. More like a big iPad Pro. I wouldn't be surprised to see FaceID make its first Mac appearance with those.
Very interesting observation. Is that the end game? An iPad Pro to rule all? It can be docked on a nice stand, powered from the mains for more performance and charging when required, docked to external monitors when needed. Now they have chip parity across the line, what stops this from happening?
Who said it is stopped from happening?
It's happening right under our eyes, at a slow pace, because you can't just make so many changes in terms of software, ui, perspective, etc.
OP, it may be worth downclocking the RAM to 3600MHz (while tightening timings) unless you have a golden chip with >1800MHz Infinity Fabric. Decoupling IF and memory speed can cause huge latency (90ns on my 3950x).
Obviously you want a baseline and benchmark from there, but you clearly know what you are doing.
As a developer, I honestly don't care if my CPU has 2 cores or 20 cores as long as it gets the workload done quickly.
In the real world, I only care about how it performs and how much it costs. The implementation details are interesting, but they don't matter when I'm trying to get work done as fast as possible.
The M1 comes in at a great price point for what it is and it's obviously your only option if you want to run macOS (hackintosh isn't an option for production use). Good to have options.
> I only care about how it performs and how much it costs.
> I honestly don't care if my CPU has 2 cores or 20 cores
So the first means: "my only concerns are performance and cost". What determines performance and cost? A primary answer: the number of cores. If you care about performance, you care whether the CPU has 2 or 20 cores. If you care about cost, you care whether the CPU has 2 or 20 cores. You might not articulate your concerns in this way, but they are intimately bound together.
If you only care about "good enough" performance and "in my range" cost, then your concerns are still bound up with core count.
So what's your apparently more correct understanding of some of those words?
> So what's your apparently more correct understanding of some of those words?
My guess would be "I care about results, not how you get them". Which is a completely acceptable policy if your workloads line up exactly with the benchmarks.
The benefit of trying to "peek behind the curtain" is mostly for trying to extrapolate anything else from the benchmark results (how would it perform on my different workload? which setups are worth benchmarking anyway? what could I change to improve the performance?).
Correlations don't make it wrong to say "I only care about X, not correlated thing Y." Don't try to impress them with stats about cores, because cores can vary enough to make those numbers meaningless in a vacuum. You need to put them in the proper context, which... gives you performance and cost numbers.
This guy really doesn't get the point - he's testing an embedded part with 16GB RAM against an overclocked desktop part with 64GB RAM (and a load of the benchmarks are garbage collected so the extra RAM is significant).
It's also the case that none of these benchmarks are Apple-specific: users buying a Mac mini for development are likely to be developing in Swift, for which the M1 has special optimizations.
Need to wait for Apple to release a part for an iMac or MacBook Pro to inform his conclusion that x86_64 is going to be able to outperform Apple's ARM chips going forward.
Ryzen and M1 both had design wins this year I feel because of broader and faster access to memory. In the next few years L1 & L2 caches might be getting bigger.
I think a lot of commenters ar forgetting another fact about the M1: It also offers low power cores, neural engine, a GPU and RAM all on a single die. Therefore die sizing comparisons to most desktop class cpus make little sense and money comparisons too. (Of course the article was about something different, I’m referring to the comments)
It really doesn't matter what kind of stat you bring out against the M1. People have made up their minds, and either the marketing worked or it didn't. Only a minority will be swayed by facts; the rest will cling to whatever preconceptions they have or have been told to have. That's just how the world is (now?).
You’re making a big assumption that may or may not be true: that the M1 will scale well to 8 performance cores. x86 vendors (and POWER) and Linux have put a lot of effort into multi-core scaling in the last 15 years or so. Building a fast and efficient core is not the same thing as building one that scales well. I’m sure that Apple is capable of making an effective 8- or 16-core machine, but it may not be as simple as just adding cores. Look at the recent AMD machines’ complicated core organization for examples.
As far as I know, other than Graviton and other very new N1 designs, there haven’t been many big ARM systems at all.
But this is an exception. Fujitsu has a long history building supercomputers, and before adopting ARM they were building their own high performance SPARC machines with their custom Tofu interconnect.
Re: thermals, they’ve been pushing those limits since the PowerPC era, and that includes high-TDP systems like the Xserve and Mac Pro packed with CPUs, GPUs, and custom accelerator chips. The big problem was that Jony Ive kept wanting to make ultra-thin showpieces; if they’ve learned from that debacle they’ll produce something 30% bigger with a robust safety margin.
The M1 Mac mini is air cooled, and I believe that's the setup most are benchmarking against? The chip pulls just over 20W at max load IIRC (based on estimates, because Apple never talks about it, so everyone just assumes all M1s perform at the Mac mini level but draw the Air's power).
Yeah, but what about having it pull 120W to get better performance? It’s always plugged in. They don’t have experience there (but will hopefully figure it out quickly). This is the right example, thanks.
First, many of these benchmarks are single threaded. Secondly, in the same power envelope as the 5800X you can also find the 16 core 5950X. So perhaps Apple would have to add even more cores.
Finally, in a core-count war AMD has the easier task. They scale up their designs with multiple chiplets. A Zen 3 CCD is about 80mm² on 7nm, which is already much smaller than the M1 SoC on 5nm. If Apple adds 4 more cores, their chip size will increase, whereas AMD can add more cores without growing any single die, unlike Apple's current integrated approach. This is why AMD is already shipping 64-core CPUs (EPYC 7763) using multiples of the same Zen 3 CCD as these desktop CPUs.
Yeah, and since Ryzen exists with 8 and 16 cores and EPYC with up to 64, it's an obvious loss for Apple, since they have yet to release products in these categories.
How many times does it take for people to understand this...
Comparing single-core performance between laptops and desktops is an apples-to-apples comparison. The Ryzen uses 20W on a turbo-boosted core and less when you utilize all cores. Same for Apple, which needs 10W for a single core while the whole package is 35W. There is more than enough headroom to turbo boost to 20W.
The reason there's a 20W vs 10W difference between the Ryzen and the Apple chip is that Apple uses 5nm and the Ryzen clocks way higher (and thereby performs better). There is no free lunch where Apple can double their performance and mop up everything, because AMD can just manufacture on 5nm and reduce their clock boost to reach the same per-core power consumption. And it turns out this is exactly what they do on their laptop chips: just reduce the clock speed and watch the power efficiency pour in.
If anything it's Apple that is backed into a corner because they are in a local optimum from which they can only escape by sacrificing their primary competitive advantage. Ramping up the clock speed ruins power efficiency.
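The clock-versus-efficiency tradeoff above is easy to see with the standard dynamic-power approximation P ≈ C·V²·f. The sketch below uses made-up constants; the capacitance term and the linear voltage-with-frequency scaling are illustrative assumptions, not measured values for any real chip:

```python
# Illustrative sketch of why lowering clocks buys outsized efficiency.
# Dynamic power is roughly P = C * V^2 * f, and since voltage must rise
# with frequency, power grows much faster than linearly with clock speed.
# All constants below are invented for illustration.

def dynamic_power(freq_ghz: float, base_freq: float = 3.0,
                  base_voltage: float = 1.0, capacitance: float = 6.7) -> float:
    # crude model: assume voltage scales linearly with frequency,
    # which makes power scale roughly with the cube of the clock
    voltage = base_voltage * (freq_ghz / base_freq)
    return capacitance * voltage ** 2 * freq_ghz

p_boost = dynamic_power(4.7)   # desktop-style boost clock
p_slow = dynamic_power(3.0)    # laptop-style sustained clock

print(f"{p_boost:.1f} W at 4.7 GHz vs {p_slow:.1f} W at 3.0 GHz")
print(f"power ratio: {p_boost / p_slow:.1f}x for {4.7 / 3.0:.2f}x the clock")
```

Under this model, roughly 1.6x the clock costs nearly 4x the power, which is the mechanism behind both AMD's efficient laptop parts and the claim that Apple can't just ramp clocks without sacrificing its efficiency advantage.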
The original test linked at the beginning of the article suggests that multicore is not that relevant for the tested performance. The article mentions the power envelope and the fact that 5nm is temporarily exclusive to Apple, but not for long; once AMD utilizes it, that would be a closer test. In the end, if you want to compare performance before you buy, you care less about the internals and more about performance and price.
The first test listed is Renaissance, which, from https://renaissance.dev/resources/docs/renaissance-suite.pdf, is a concurrency benchmark. It likely benefits from multiple cores. All in all I'm not sure the author has the relevant expertise.
I look forward to an M1 processor comparison to something with a similar core count, similar total power envelopes, and similar cost. The Ryzen processor benchmarked here costs more than the entire Mac Mini M1 system.
They're two different CPUs for different markets. The only reason I think this comparison is being made is that it's a response to an avalanche of articles claiming that the M1 beats everything speed-wise and is the best thing since sliced bread.
The major difference between Ryzen 5000-series and Apple M1 is the former is sold out worldwide and the latter is on the shelf at every Apple Store in the world. Comparing the M1 to something that nobody can buy is dumb.
Just checked and I could see some Ryzen 5000 available on Amazon (not the 5900X though).
Honestly I expect the price and availability to suck big time for both. AMD can't manufacture half the volume they should to satisfy demand, while Apple is a poster child of $900 laptops costing €1000 when they finally cross the Atlantic long after release.
"Available at an Apple store" might be the biggest joke one could possibly make. There's only a handful of stores worldwide.
I don't know if you're just being obtuse or what but there are over 500 Apple stores in 25 nations, that's not a handful. If you don't feel like visiting one, Apple offers same-day delivery for only $5 if you live near one. The product is readily available.
I'm not trying to be obtuse or anything, I'm just saying they have very few physical shops. There might be 10 in Paris for example but if you're in the rest of the country, tough luck!
Well, I am double-checking and I see they have shops in the top 5 cities now. Maybe things have changed for the better. Last time I tried to buy an Apple product there was simply no shop within a hundred-mile radius.
Well, you're not wrong, but there are huge wait times for the M1 MacBooks all over the world. My M1 Air 16G was shipped after three weeks, and some US people over at Macrumors have been waiting for a month. The Mini seems to be more available.
There is a key aspect to take into consideration: the price of the CPUs and energy. For the price of two or three Ryzens you can buy a complete PC. Anyway, really useful info.
Benchmarking Ryzen on ClearLinux is problematic, Fedora 33, Arch or Gentoo would be much better.
Intel has explicitly declined to ship AMD-specific optimizations, in order to favor its own chips.
This is all splitting hairs. The major manufacturers are in active competition, and barring temporary hiccups they will keep leapfrogging each other while offering more or less the same deal over time in regards to performance per watt.
In the end it'll just be personal preference whether to go with Apple, AMD, or Intel.
Well, I suspect that due to the loyalty factor, Apple as usual may be able to extract more money from customers.
> While M1 is indeed very powerful for its size, when comparing it to the high-end x86 desktop, it is still slower
I too am astonished that a high-end x86 desktop could possibly be faster than a Mac mini or a laptop without a fan. Now I will have to burn my Apple card (if I had one) in protest.
I call on Apple to immediately retract all of its marketing that says that the M1 is the fastest CPU in the universe.
> obscene power draw
Who cares about power in a desktop (or laptop for that matter)? These things aren't solar powered - just plug it in like a normal person! Duh.
I also note that a single Ryzen 5800X processor is actually cheaper than the Mac mini, and is readily available from many helpful and enterprising resellers on ebay at special holiday pricing.
Many other clickbait articles claimed that the M1 is better than the best desktop-class CPUs. I only tested the waters and found otherwise. The 5900HX is planned and should be within 90% of my 5800X's performance within a 45W TDP. Not close on power consumption, but then Zen 3 is only on 7nm, while the M1 is on 5nm.
The Mac mini is $699 with only 8GB RAM, which runs out after a few browser tabs. So for a work machine you would need at least the 16GB model, adding $200 to that. Though the build quality and the panel are excellent.
You could opt for the 5600X. The core count is not relevant in the article, as the benchmarks are mainly single-threaded.
This is comparing the M1, a mobile-focused chip in a tiny power envelope, to the pinnacle of AMD CPU design in a (relatively) giant power envelope. My point was that the article could have stood on its own without even benching against the M1. Just show the performance of the 5800X on these tasks; the M1 and the 5800X are not in the same league.
I don't know what you are talking about. The problem with fake information is that you have to spend more resources debunking it than it takes to create fake information in the first place.
Anecdotal, of course, but I currently have 42 tabs open in Firefox 84 (native M1 build) on an 8gb M1 Mac Mini, as well as two VS Code windows and a bunch of other things like Mail, Calendar, Authy, Bitwarden, etc. As far as memory management is concerned, user experience is indistinguishable from my 32gb Ryzen 3700X desktop with a Samsung NVMe.
8GB is plenty for most people right now. A browser won’t eat that much memory for normal sites and reasonable tab counts.
An interesting thing about the M1’s RAM limit, though, is that it literally can’t get any bigger. 16GB is pushing it, but fitting 32GB into a package-on-package simply can’t be done with our current tech. Apple is solely relying on Moore’s law for future advances because, with the way these processors are designed, external infrastructure would wreck performance.
The unified memory does go to great lengths to make memory not such an issue. It also pages a ton. I’ve read that people can throw up hundreds of tabs in safari on the 8GB model and be just fine.
The claims were fastest per-core performance, not fastest absolute performance in a single system. Heck, there are desktops out there with hundreds of cores in them, drawing hundreds of watts of power. I haven't had time to delve into most of these benchmarks, but they look multithreaded to me.
Here's a datapoint. I just got an HP Omen gaming laptop for $1100 on Amazon. Pretty much equivalent to the 3800X mentioned in the article, but it came with 16GB of RAM, a 1TB SSD, and 6GB Nvidia 1660 Ti discrete graphics. I can seemingly run unlimited browser tabs, video conference, and play a game across 3 monitors (the built-in one is 144Hz) simultaneously with no latency whatsoever. I upgraded to 32GB of RAM for $144 and added another 512GB PCIe SSD I had lying around, because it had an extra slot. The fans do run (quietly) all the time, and it has a battery life of about 4 or 5 hours. Not in the same league as the M1 in performance per watt, but interesting performance for the dollar nonetheless.
The article mentioned faster, not cheaper. A performance-per-dollar and per watt is certainly valuable, but for a forum that always mentions that a couple hundred bucks are worth spending for shaving off time from a developer workflow (SSDs, RAM, etc, assuming they earn market rates) this benchmark is worth it.
As a desktop user, I certainly am interested in these numbers.
Why would you buy a Mac mini for performance critical work? I could see being bound to a laptop for some workflows, but is that Ryzen available in laptops?
Yes, but only the Zen 2 flavor, which is significantly slower per core. The M1 competitor with similar thermals, similar per-core IPC, and an on-board GPU is widely rumored to be announced in January at CES.
> The M1 competitor with similar thermals, similar per-core IPC, and an on-board GPU is widely rumored to be announced in January at CES.
Ok, so what you’re saying is that AMD is wildly rumored to have a processor that you might be able to find as an option on a niche (term not used in a negative sense) brand laptop in Summer 2021 that directly competes with Apple’s M1 processor available in Fall 2020 on their highest sold models which are their lowest price point machines?
Have you looked at the laptop market lately? AMD has had quite a few design wins. Brands like Lenovo (including the Thinkpad, flex, and legion), HP (pavilion), Dell (including inspiron, gaming, and Alienware lines), Asus (Studiobooks, zenbooks, TUF, and ROG), Acer Aspire, and MSI, and the Microsoft Surface.
I wouldn't call all of those "niche". AMD has made a surprisingly quick move from the bargain basement laptops to the Thinkpad and Microsoft Surface lines in a single generation.
The nice thing is the new chips are pin compatible, so every current design win should move over to the new chips quickly. In fact, half of the Ryzen 5000 laptop chips are renamed and slightly tweaked Zen 2 chips from the previous gen. Based on the number of leaks, benchmarks, photos, etc., it seems like the channel might well be full.
So I'm comparing laptops available in November (although people I know couldn't get theirs till December) to laptops I expect to be available in February, though that's speculative of course. Even the previous-gen (Zen 2) APUs are getting similar write-ups to the M1, benchmarking against the i9-9980HK and i7-10750H and winning. Zen 3 is quite a bump over Zen 2, so I'm optimistic it will go well.
I'm hoping to get a few: maybe a Mac mini-like widget and another for a NAS. I don't expect them to crush an M1, but to be reasonably competitive, and while I contributed to the Patreon to get Linux on the M1, I expect the Zen 3 chips to have much better Linux support.
> AMD has made a surprisingly quick move from the bargain basement laptops
I specifically called out niche as not a negative term for this reason. I didn’t mean to imply bargain basement, but more that high end processors like this would only be included on a few models. Things like gaming or developer focused machines vs something built for a wider audience (like the current M1 machines are).
> So I'm comparing laptops available in Nov (although people I know couldn't get theirs till Dec) to laptops I expect to be available in Feb, but that's speculative of course.
I walked into an Apple store in November and walked out (in under 3min I might add) and had an M1 13” MBP. I think them taking my temperature at the door took almost as long as checkout. Having closely followed CES announcements over the years, I respectfully disagree that there will be laptops in February available for delivery/pickup using the processor just announced the month before, but there is a slim chance so I won’t bet against it. Much more likely to be May/June/July based on past experience.
Wow, lucky. I needed to arrange for some family travel on short notice and wanted a MBP (13", 16GB ram, 1tb storage). Current delivery times start on Jan 21st.
Possible, and I've seen similar. Just seeing a surprisingly wide number of announcements, leaks, and multiple brands with multiple different Zen3 APUs.
How is it clickbait? The title says it compares a specific CPU model vs another specific CPU model and that is exactly what it does. Where do you see deception in the title?
>Show me an x86 laptop in the same power envelope that can match the fanless MacBook Air with the m1. I’ll wait. Oh and do so at roughly 1k.
How does this prove that the title is lying?
>On the desktop the Mac mini is 699 USD in its base spec. The 5800X is 500 euros on its own before motherboard, case, ram etc etc.
Again, how is this relevant to whether the title is clickbait or not?
>Apples to apples benchmarks either controlling for power or price would make for a more relevant article.
Seriously, what does this have to do with clickbait?
Here is my perspective. I saw lots of articles pretending that the M1 is better than all x86 CPUs, including Ryzens (one of them is linked in the article). This was getting on my nerves because everyone was being dishonest. I clicked on this article because the title compared a Ryzen desktop CPU with the M1, with the expectation that these benchmarks would disprove the garbage journalism, and that's exactly what I got. The Ryzen 5800X performs better than the M1; x86 is still strong in single-core performance. Those real "clickbait" articles were just lazy, trying to hype something up.
This article is not clickbait and you are just being dishonest.
For my work machine I don't care much about the price (to a point) or the power consumption. I want the fastest compiles money can buy. This article is exactly what I want to see, and confirms my decision to go for the 5900x. Now if AMD can just get them in stock. The local waiting list is months long.
Depends on the language, of course. Most JS tools are strongly single-threaded. Rust is quite good at using multiple cores and benefited hugely from my 3900X, but even that only keeps all the cores busy for the first 60% of a compile; then we're limited by the slowest chain of dependencies.
The compiles are parallel, but not all work can be done in parallel or divided equally among 32 threads. So going from 24 to 32 threads is not typically a 33% increase unless your compiles are huge (like the Linux kernel).
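Amdahl's law makes the same point with numbers. In the sketch below, the 90% parallel fraction is an assumed figure for a typical large build (roughly matching the "busy for the first 60%, then dependency-limited" observation above), not a measurement:

```python
# Amdahl's-law sketch of why 24 -> 32 threads rarely gives a 33% speedup:
# the serial tail (the slowest dependency chain at the end of a build)
# caps the achievable gain. The parallel fraction is an assumption.

def speedup(threads: int, parallel_fraction: float = 0.90) -> float:
    """Amdahl's law: speedup = 1 / (serial + parallel / threads)."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / threads)

s24 = speedup(24)
s32 = speedup(32)
print(f"24 threads: {s24:.2f}x, 32 threads: {s32:.2f}x over single-threaded")
print(f"extra gain from 8 more threads: {100 * (s32 / s24 - 1):.1f}%")
```

Even with a generous 90% parallel fraction, adding 33% more threads yields only a single-digit percentage improvement, which is why the jump from a 5900X to a 5950X pays off mainly on huge, highly parallel builds.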
I'm opting for the 5950X. Yeah, it's 33% more cores for 45% more money, but in for a penny... The 5900X is already split across 2 core complexes, and the 5950X just adds 2 more cores to each. Cinebench multi-core does show exactly what you would hope from adding the extra cores: the 5900X gets 75% of the score with 75% of the cores vs the 5950X.
On the other hand there have been plenty of benchmarks reported of like to like and many have been asking what M1 would look like against the best AMD has to offer.
There’s room for plenty of comparisons. It’s a big world.
I'm not sure why you're getting downvoted. The M1 is ahead by 2-3x in performance per watt and highly competitive in performance/dollar. This result is indeed quite meaningless as Apple will have 16 high-performance core variants of the M1 soon.
> Apple will have 16 high-performance core variants of the M1 soon
I doubt that’s coming soon. The M1 is a HUGE chip. The decoders and pipelines take up a massive amount of space, and the L2 cache is apparently unwieldy, too. They’ve focused on making a limited number of cores perform very well, but there isn’t enough room. They need another die shrink to add any more.
I think the upcoming pro hardware will just be tweaks to the current formula.
It’s not that big; it’s ~120 mm^2. You can go to Intel or AMD and find 750 mm^2 chips...
Apple have 2 advantages here.
- They don’t play in the low-cost arena, people expect a Mac to be a bit more expensive, so they have a larger cost budget to work with
- They don’t have to make a profit on the CPU in particular, whereas Intel and AMD do.
Apple want a 38-40% margin on the whole shebang, and if it needs a more expensive chip, then that’s what’ll happen, maybe taking cost savings from (2), and if necessary adding price because of (1).
I would be shocked if Platform Architecture at Apple didn’t already have 8, 16, and 32-core variants of the M series up and running, or at the very least in simulation. They’re coming, and the world will change. Again.
I have mentioned this in several comments, and no, it's not that simple. Worse, there is zero money to be made even if they did release such a product, because Apple would never sell their chips as an OEM.
The M1 is certainly impressive, but how much does the same configuration, say with 16GB and a 1TB SSD, cost? Most enthusiasts aren't going to buy an AMD Ryzen 5800X and limit themselves to 8GB RAM and a 256GB SSD. A Mac mini 8GB + 512GB SSD configuration is 900 USD. Can you imagine what 16GB + 1TB SSD costs, which is not an uncommon configuration? My guess is in the 1300 USD range.