M1 and M2 actually are not produced on the exact same process node. M1 is N5 and M2 is N5P, an optimized version of N5.
I think Kuo might be misinterpreting the statement from TSMC regarding revenue from N3. The key is that they said it won't "substantially" contribute to revenue until 2023. Of course processors like the M2 Pro/Max/Ultra won't ship in anywhere near the volume of something like an iPhone, so in the grand scheme of things they can't represent a substantial contribution to TSMC's revenue.
The fact is TSMC said they'll start N3 HVM in September. So they are producing something and we know Apple is expected to be the first customer for this node. It's too early for the A17 so either it's the M2 Pro/Max/Ultra or something new like the VR headset chip. Can someone see another possibility?
Apple still btw has to replace the Mac Pro with an Apple Silicon based model, and their own deadline (2 years from the first M1) is running out. It could make sense that they want to bring this one out with a "bang" and claim the performance crown just to stick it to Intel :)
I would expect Apple to push its golden child (the iPhone) onto the node first. It's a small chip which they can use as a pipe cleaner, making sure they can get the yields up and optimise the process before pushing a larger die onto the node.
They easily could have been allocating risk production to the iPhone for the past couple of months, ready for the launch. Apple being like "yes, we will take lower yields for less cost".
I do not expect any company to announce a production N3 product until Apple has had one out for at least 6-12 months. Look how long it took the rest of the industry to move to N5. I swear part of that reason was an exclusivity agreement with Apple, and it massively paid off for their CPUs. Having a node advantage is always massive in terms of the price / performance / power matrix.
Are you suggesting they might have produced millions of A16 chips on N3 during the risk production phase and launched them before TSMC even reaches HVM? Highly unlikely. Risk production is a phase where they still make changes and fix issues; it's like a beta phase. It does not come at a lower cost either, since throwing out a big chunk of chips would make it more expensive. The iPhone chips are very high volume; you can't produce them before reaching... the high volume manufacturing phase.
The iPhone contributes to TSMC revenue in a substantial manner so that also would totally not fit what TSMC said.
The M2 Pro/Max/Ultra are much lower volume and higher margin. It makes sense to start with them.
The iPhone contributes the most to Apple's revenue and margin. It wouldn't be a great pipe cleaner because they need a ton of chips on a committed launch schedule and can't afford any yield screw-up there.
With the Mac, they could probably afford a 10% yield, can extend ship times, and in the worst case could even push back a launch.
While the bigger chip does push down yields, my bet is they have more wiggle room than needed to compensate.
Except that they are much larger chips, that will be much more sensitive to yield issues. They could do that, but they will be expensive. Maybe that's ok.
Their biggest (M2 Max in the Studio) are more like chiplets with interposers, which exponentially lowers the yield problems depending on how much you can split things up. Also, larger chips can still set a threshold on GPU cores to disable due to defects and that kind of thing, whereas a mobile chip might just get thrown out with a much lower number of (or any?) defects.
Nvidia can make tons of tiers out of the same chip by just setting different thresholds on the number of usable cores, it isn't all just price discrimination (though sometimes I think they have been found to be fusing off much more than needed for the number of defects as a pure price discrimination play, or that might have been Intel with cache size).
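To put rough numbers on the die-size point, here's a toy Poisson defect-yield sketch in Python; the defect density is a made-up illustrative figure (not anything from TSMC), the point is just the shape of the curve and how much salvaging buys back:

    import math

    def poisson_yield(area_mm2, d0_per_cm2, salvageable_defects=0):
        # Fraction of usable dice under a simple Poisson defect model.
        # salvageable_defects = how many defects can be tolerated by fusing
        # off cores/cache instead of scrapping the whole die.
        lam = (area_mm2 / 100.0) * d0_per_cm2   # expected defects per die
        return sum(math.exp(-lam) * lam**k / math.factorial(k)
                   for k in range(salvageable_defects + 1))

    D0 = 0.1  # defects per cm^2 -- illustrative only
    for area in (120, 432):  # smallish die vs an M1 Max-sized (~432 mm^2) die
        print(f"{area} mm^2: {poisson_yield(area, D0):.2f} perfect dice, "
              f"{poisson_yield(area, D0, 2):.2f} usable if 2 defects can be fused off")

Real defect densities and clustering models are messier than this, but the shape is the point: perfect-die yield drops fast with area, and being able to eat a defect or two by disabling units recovers most of it.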
M2 Pro/Max chips will be huge. Only the Ultra uses an interconnect between dies, but that's two full Max dies. The M1 Max is 432 mm^2, which is enormous. The M2 is 25% bigger than the M1.
There aren't a lot of lower core count SKUs for these chips either. There are a few GPU tiers and CPU tiers but not a lot of room to go down. The Pro has 8/10 CPU cores and 14/16 GPU cores. The Max has 10 CPU cores and 24/32 GPU cores.
There are no 4-7 CPU core segments like AMD does with their chiplets. Intel has much lower tiers for their dies.
I think a lot of people who watch these things suspect exactly what you wrote, at the least. Apple funds much of TSMC's research and development for a new node and pays for production for use in iPhones; both sides make tons of money and repeat the cycle on the next node, while other companies come in and buy up capacity on that cutting-edge node, seemingly coasting comfortably behind Apple. I guess now Apple may use some of that capacity themselves for their Mac CPUs after the newest iDevices prove the tech.
Starting HVM in September does not mean you get revenue in September. It takes months before volume is reached and testing, packaging and shipping are done. TSMC isn't being unusual in stating they won't get revenue from N3 until 2023.
Yes. So either way, Kuo cannot deduce that the M2 Pro won't be on N3. Whether the revenue is realized later or the numbers are too low to justify calling it a substantial contribution to TSMC's revenue... same result. Kuo's argument does not seem to hold water. Now, that does not mean the inverse is true and the M2 Pro is guaranteed to be on N3. I can only come up with the VR chip as an alternative, and so far I think nobody else has come up with a suggestion.
The current assumption and his prediction, along with other data from inside the supply chain, suggest the new MacBook Pro, assuming it uses the M2 Pro, would come out in Oct / Nov.
And if the new MacBook Pro indeed uses the M2 Pro, and the M2 Pro uses N3, it would be classified as substantial revenue. Hence his claim that the M2 Pro won't be on N3.
TSMC isn't normally one to spin words about a substantial contribution to revenue. At least until now it has meant actual product shipment, since they do get some N3 revenue in terms of pilot projects and product R&D.
Edit: There were rumours of Intel being the first customer for N3 with their GPU, using it as tiles on their next-gen Meteor Lake SoC. Personally I think that is likely the case.
It's more complicated than this; accounting as a field exists pretty much because there are intricate sets of rules and ways to interpret them. Here, my understanding is a good accountant would say not to recognize the revenue until you consider it shipped -
i.e. if you agree to pay me a bajillion dollars for a time machine with an out clause of no cash if no delivery, that doesn't mean I get to book a bajillion dollars in revenue
over the top example, but this was the general shape of much Enron chicanery: booking speculative revenue based on coming to terms on projects they were in no shape to deliver. So it's very much an accounting 'code smell', if 'code smell' meant 'attracts regulators' attention'.
Given that there is an iPad with an M1 chip, I think Apple is smart to produce broad chips that fit in all their devices and potentially the VR glasses.
It makes sense that their more expensive M2 Pro chip is made on the 3nm process, which is more expensive and has less capacity. It would go into more expensive MacBooks, because those are lower volume and have a higher price. For the VR glasses it makes sense to release an expensive developer unit and wait for 3nm to ramp up for the mass consumer version.
> Apple still btw has to replace the Mac Pro with an Apple Silicon based model
At about the same price, I think 10 Mac Studio Ultras count as replacing much more than a single maxed-out Mac Pro. Though I do expect an Apple Silicon Mac Pro is still coming, I do not see any need for Apple to rush to meet some marketing deadline from almost 2 years ago.
They don't "have to" be ready in 2 years. The M2 numbers from the N5P process were underwhelming; I wouldn't replace my M1 MacBook pro without seeing significantly superior performance / watt numbers, and I'm happy to wait for the N3 process to be in production, however long it takes.
Of course nothing forces them to be ready within 2 years, but alas, that's what Apple said they'd do. I agree the M2 numbers were not amazing. I guess after the big M1 shock it's hard to follow up with something that comes even close. You can't get gains like the transition from x86 to an integrated ARM-based SoC brought, doubly so when there's no substantial process node improvement (N5 -> N5P is a minor optimization). In the end they mostly bought better performance with a bigger die and increased power consumption. I'm pretty convinced they'll need N3 for the next jump, but even that won't be on the level of the Intel -> M1 step.
The revolution has happened, now it's all about evolution.
BTW, if Apple wants to increase the prices of the Pro MacBooks like they did with the M2 Air due to inflation, then they'd better justify it with some good gains. The big changes in terms of hardware redesign already happened last time.
What other CPU core design iteration managed to improve performance while also cutting power draw?
Anandtech's deep dive on the performance and efficiency cores used in the A15 and M2:
Performance:
>In our extensive testing, we’re elated to see that it was actually mostly an efficiency focus this year, with the new performance cores showcasing adequate performance improvements, while at the same time reducing power consumption, as well as significantly improving energy efficiency.
Efficiency:
>The efficiency cores have also seen massive gains, this time around with Apple mostly investing them back into performance, with the new cores showcasing +23-28% absolute performance improvements, something that isn’t easily identified by popular benchmarking. This large performance increase further helps the SoC improve energy efficiency, and our initial battery life figures of the new 13 series showcase that the chip has a very large part into the vastly longer longevity of the new devices.
The report you quoted and linked to is about the A15, not the M2. The M2 is based on the A15, but from what I've seen it does use quite a bit more power (~30%?) than the M1 when loaded. Anandtech has not analyzed the M2 yet as far as I can see.
Whether the increase in power consumption comes from the additional GPU core, from increased frequencies in the CPU cores or from other parts added to the chip is imho not that important for users (and depends on what they are doing). They see the system as a whole: they get x% more performance for y% more power usage. For the CPU, x is smaller than y. This is totally normal when increasing frequencies.
Note: I'm not saying the M2 is bad. It's a very good chip indeed. All I said was it was not amazing. It was an iterative, yet welcome, improvement. And I think one couldn't expect anything amazing quite so quickly.
Would we say the Zen 4 core design is less efficient because AMD is going to start bundling an integrated GPU with Ryzen chips, or would we just talk about Zen 4 core power draw vs Zen 3?
Apple's performance cores managed to improve performance while cutting power.
What other iterative core design did this?
It helps to remember that Apple isn't playing the performance via clock increases no matter what happens to power and heat game.
I guess that's where the misunderstanding comes from. I was not talking about CPU cores alone. Only M1, M2 as a whole.
But I still am not sure if I can believe that the M2 CPU improved performance while at the same time cutting power. Can you link to some analysis? Would be very interesting. Though please not the A15 one, the cores are related but not the same and the CPUs have big differences.
Apple hardware updates are always and only moderate incremental improvements, since the very beginning. Apple is reasonably predictable in this regard. It is unrealistic to expect a generational and exponential leap in performance or efficiency in any of Apple's hardware refreshes. That has never happened, and it likely never will. What happens instead is whatever model gets a little bit better than the last revision. The M1 was not a massive leap forward, but instead an impressive lateral move. Subsequent Apple Silicon chips will only be a smidgen better than the newest previous revision.
> And I think one couldn't expect anything amazing quite so quickly.
I'm not expecting to see such a performance jump from another CPU transition again in my lifetime. The jump from x86 to M1 was a boost because of TSMC's fab process compared to Intel's, yes, but it was also from the ISA change, leaving behind a lot of the x86 cruft.
So how do you explain the M2 13-inch MBP throttling severely under load while the M1 13-inch MBP doesn't as heavily? It's the same chassis. It's impossible for the M2 to use less power; the heat has to come from somewhere. It's not the GPU, since many benchmarks use a CPU-only load to show the throttling behaviour.
Are you European? The price of the M2 Air did not increase. The USD price was exactly the same as for the M1 Air. Both debuted at $1199. The price went up in Europe because of a drastic reduction in the EUR/USD exchange rate.
The M1 Air launched at a price of $999. The increase to $1199 happened with the launch of the M2 Air.
> With its sleek wedge-shaped design, stunning Retina display, Magic Keyboard, and astonishing level of performance thanks to M1, the new MacBook Air once again redefines what a thin and light notebook can do. And it is still just $999, and $899 for education.
Actually the dollar is falling as well (in real purchasing power) due to inflation. It's just doing quite a bit better at the moment compared to e.g. the Euro. One factor could be the increased interest rates: why would I keep Euros and still get nothing when I could earn at least a little bit on USD? Also, the war in Europe does not help their currency.
> I wouldn't replace my M1 MacBook pro without seeing significantly superior performance / watt numbers
What makes you think that this will happen in one generation? The point of the M2 is not to get M1 users to migrate, it’s to keep improving so that MacBooks are still better products than the competition. Apple does not care that you don’t get a new computer every year, they are most likely planning for 3 to 5 years replacement cycles.
> The M2 numbers from the N5P process were underwhelming.
No, they weren't. Performance improvements are precisely in line with every previous hardware update. Apple hardware updates are always and only incremental improvements, since the very beginning. Apple is reasonably predictable in this regard. Expecting more than what Apple has always given in hardware updates is overtly unrealistic.
After the November 2020 launch day chaos, with not much existing software working on those machines at the time (Docker, Java, Android Studio / Emulator, VSTs, etc.), a typical developer would have had to wait more than 6 months just to do their work with fully supported software on the system and to take full advantage of the performance gains rather than using Rosetta.
At that point, they might as well have skipped the M1 machines and instead waited to purchase the M1 Pro MacBooks. There isn't a rush to get an M1 MacBook anymore now that Apple is already moving to the M2 lineup.
By the time they have made an Apple Silicon Mac Pro, they will already be planning ahead for the next series of Apple Silicon chips, probably the M3, which will come after the M2 Pro/Ultra products.
After that, it will be the beginning of the end of macOS on Intel.
What’s the point of this comment? Every consumer electronic product has a new version a year or 2 away.
Apple products also have a long reputation of having a sweet spot for buying a new product. The Mac Buyers guide has existed for like a decade or more.
> What’s the point of this comment? Every consumer electronic product has a new version a year or 2 away.
So 9 months after releasing the M1 MacBooks, the M1 Pro MacBooks came out, already replacing the old ones in less than a year. Given this fast cycle, there is a reason why the Osborne effect applies precisely to Apple's flagship products rather than to 'every consumer electronic product'.
This is a new system running on a new architecture, and it must run the same apps as the user's previous computer. Unfortunately, much of the software was simply not yet available for the system at the time, and where it was, it often didn't run at all in Nov 2020. Even a simple update could brick the system.
What use is a system that bricks on an update, losing your important files, or, for power users, having to wait 6 months for the software they use every day to be available and supported for their work?
Going all in on the hype fed by the Apple boosters and hype squad doesn't make any sense as a buyers guide.
>So 9 months after releasing the M1 MacBooks, the M1 Pro MacBooks came out, already replacing the old ones in less than a year.
The M1 Air and 13" Pro are really entry level machines. The first model with a M1 Pro costs $700USD over the base model 13" M2 MBP. The M1 Pro still has much better performance compared to a base M2. The M1 Pro, Max and Ultra didn't replace anything. No one with a budget is going "Oh, the M1 Pro only cost an extra $700USD, I'll get that".
>What use is a system that bricks on an update, losing your important files, or, for power users, having to wait 6 months for the software they use every day to be available and supported for their work?
What's the point of this comment? Things happen. It sucks. Apple isn't the first and won't be the last company to make a mistake. Don't get sucked into the shininess of their latest product.
> The M1 Pro still has much better performance compared to a base M2. The M1 Pro, Max and Ultra didn't replace anything.
Exactly. Hence why many skipped the M1 and bought the 14-inch MacBook Pro with the M1 Pro instead. By the time all the existing software worked properly on Apple Silicon, the 14-inch M1 Pro was available, and little to no one bothered with getting the old broken 13-inch M1 MBP.
> No one with a budget is going "Oh, the M1 Pro only cost an extra $700USD, I'll get that".
No one on a "budget" would get a computer that would cost more than $1,000 and it bricks on a system update / restore or chooses an Apple machine in the first place. Plenty of money saved up by then or financing options for the next version, instead of wasting it all on launch day and losing all your files next week.
> What's the point of this comment? Things happen. It sucks. Apple isn't the first and won't be the last company to make a mistake. Don't get sucked into the shininess of their latest product.
It is the truth of the matter, and it happened very frequently on launch day, with lots of users complaining that their shiny new computer bricked on an update / restore, that they lost all their files and that they were unable to use the computer. So once again...
What use is a system that bricks on an update / restore, losing all your files, while you wait months for the software you use every day to be available and supported for basic work?
> Don't get sucked into the shininess of their latest product.
Don't tell that to me, tell that to these people who fell for it. [0] [1] [2]
Photoshop worked on M1 on day 1. What is the argument even about here? Someone is upset that all software developers didn’t port everything to M1 overnight?
As of March 2021, Photoshop now runs natively on Apple computers using the Apple Silicon M1 chip with 1.5X the speed of similarly configured previous generation systems.
Even before that it wasn't supported or designed to run on M1, hence the frequent crashes and freezing users were getting. Therefore, it wasn't working.
So how exactly is waiting until March 2021 for a stable M1 version of Photoshop, "working on day 1"?
Working != being a native app. The latter may be an interesting technical detail, but absolutely no user of the app will really care. They care about “does it open and not crash” (which _everything_ did b/c Rosetta) and - in a very distant second place - is it faster than before (which it also was).
So no: There was no one saying “but no software works”
This is just nonsense, there is tons of performance critical software.
A lot that hasn't been ported to native still doesn't run properly or at all.
DAWs that run on Rosetta have many problems with VST plugins and performance.
Just because something opens and doesn't crash, doesn't mean it works properly at all.
I’m pretty sure I saw this somewhere, but now I can’t find it, so I accept the possibility (though I think it’s low probability) that I’m misremembering.
The Mac Studio is the obvious Mac Pro replacement. It's hard to do a fixed system and also charge much more than $5k for it, and Apple has failed over and over at any kind of non-fixed Mac Pro.
They might just leave it as is. The studio is a very capable machine at the high end, and if given a beefier GPU/CPU and _maybe_ more DRAM it probably replaces the bulk of the professional use cases.
At the same time, not being part of the Apple ecosystem, should I be worried about the closed nature of this? I have been using Linux for over two decades now, and Intel seems to be falling behind.
(I do realize Linux runs on the M1. But it's mostly a hobby project, the GPU is not well supported, and the M1/M2 will never(?) be available as open hardware.)
I've not tried the webcam and microphone, which I guess I could from Firefox.
Battery life is less than it will be once the drivers evolve further, because I think sleep mode isn't working perfectly yet.
The distro is Asahi Linux, which is ARM Arch Linux. All ARM binaries.
If you follow the Asahi Linux page, it updates super frequently, as drivers get tuned and so on.
Not the person you are responding to, but I was looking into it today. Webcam/mic/speakers don't work but Bluetooth does. There are ARM to x86/x86_64 translation tools akin to Rosetta 2, but they have a lot of warts and are not well supported yet. The most promising one in my opinion is called FEX.
Not in the public release yet. An experimental Mesa driver is running (https://rosenzweig.io/blog/asahi-gpu-part-6.html), but the kernel driver is still a work in progress (though it's making quick progress!). The demo at the end of the article is a proof-of-concept with the M1 acting as an eGPU for another computer; not something usable for a desktop environment yet.
In my case I could use it as a daily driver, since I just need a fast browser and Linux with compilers etc. But I've been using macOS as a daily driver despite loathing (since the dawn of time) its font rendering.
>despite loathing (since the dawn of time) its font rendering.
Could you expand on that a little please? I've always found the Mac's fonts & font rendering to be most pleasing so I'm interested to hear a different opinion - what annoys you about it?
I've got super sharp vision, fortunately, so I see the half shading and such, and it strains my eyes, which otherwise "expect" to bring edges into sharp contrast.
My eyes love the rendering engine on Win11, or whatever trickery they're using for fonts, and similarly on ArchLinux.
Oh I see, I thought it was the actual font rendering you disliked, I hadn't considered the smoothing being an issue! Between Apple's high DPI displays and my own eyesight I don't notice it (although I do remember when I was younger hating subpixel anti-aliasing when it was still in use, because of the rainbow around characters)
For anyone who's interested, in macOS you can disable this font smoothing with:
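    defaults -currentHost write -g AppleFontSmoothing -int 0

(From memory, so double-check the key: AppleFontSmoothing takes 0-3 with 0 turning smoothing off, and on newer macOS versions the toggle people usually point to is CGFontRenderingFontSmoothingDisabled instead. You'll likely need to restart apps or log out and back in to see the change.)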
It's great for you. But some of us are using Linux in industrial applications. You can't really put an Apple laptop inside e.g. an MRI machine. It may run highly specialized software, needs specific acceleration hardware, etc.
It's going to be a very sad day when consumer electronics win over industrial applications.
Apple hardware has never been about non-consumer, server or industrial applications outside of some film, music and movie studios using mac pros and the Xserve long time ago.
And if you're making an MRI machine or other industrial equipment that consumes a huge amount of power, the fact that your attached computer uses 300W vs 600W doesn't really seem like much of a big deal.
Apple has a head start with their ARM machines, but I'm also not really worried that the rest of the industry won't catch up in a few years eventually. You can only really pull off the new architecture trick once or twice, and being a leader has a way of inspiring competitors.
Apple's software and OS are also horrible to use in server applications; you only do it if you need to, such as for iOS CI, device testing and the like. Otherwise you avoid it as much as you can.
What are you on about? A €5M MRI machine will have whatever computer its manufacturer will want to support. Which will probably be something like a Core 2 running Windows XP.
None of these machines have used Macs, ever. Why would anything Apple does affect this market?
I don’t think you need to worry about that, those are completely different use-cases and markets. ARM CPUs will be available and widespread in other applications soon enough, and Linux support is already strong in that regard.
No. This is not a good universal solution. What if the machine needs more processing power than one laptop can provide?
Do you want to put a rack of laptops inside the machine, wasting several screens and keyboards? Log into every laptop with your Apple ID before you can start your machine? It's such an inelegant solution.
Instead, the x86/Linux environment lets you put multiple mainboards in a machine, or you can choose a mainboard with more processors; it is a much more flexible solution in industrial settings.
It would be a gimmick given that real-time workloads can't be offloaded via some serial connection to consumer laptops. You'd still need hardware and software capable of driving and operating the machines embedded in the machines themselves.
No. You want the computer running the thing to be as simple, known, and predictable as possible. So that is necessarily going to be a computer provided by the manufacturer, and not whatever a random doctor feels like using. Consumer devices are completely irrelevant for that use case.
While MRIs don't use ionizing radiation like the Therac-25 did, I can think of a few bad outcomes from someone finding a 0-day on anything that can control the machine. And of course if it's read only it still has sensitive medical info we wouldn't want leaked.
You are probably right. But computers were not like that for the last 40 years. I wonder about an alternative history without the IBM PC compatible. Maybe we just hit the performance wall and now the only way forward is the system on chip. Anyway, better to move on and start thinking about your computer as an appliance.
Power use tends to scale non-linearly past a point - disabling turbo modes would likely significantly reduce the peak power use, and an ~18% performance difference is a pretty big buffer to lose.
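Back-of-the-envelope version of that, using the usual CMOS dynamic-power relation P ~ C * V^2 * f; the frequency/voltage pairs below are made-up placeholders, not measurements of any specific chip:

    # Toy model: power scales with V^2 * f, and near the top of the curve
    # the voltage has to rise with frequency, so power grows much faster
    # than performance. Numbers are illustrative placeholders only.
    points = [  # (frequency in GHz, core voltage in V)
        (3.0, 0.90),
        (4.0, 1.05),
        (4.8, 1.25),  # a "turbo"-ish operating point
    ]
    base_f, base_v = points[0]
    for f, v in points:
        perf = f / base_f                                # pretend perf tracks clock
        power = (v * v * f) / (base_v * base_v * base_f)
        print(f"{f} GHz: ~{perf:.2f}x perf for ~{power:.2f}x power")

So the last ~20% of clock can easily cost a disproportionate chunk of the power budget, which is why dropping turbo hurts peak power much more than it hurts performance.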
The 6850u also beats it rather comprehensively according to those same results, and that's only 18-25w.
Really, you'd need everything power-normalized, and even the rest of the hardware and software normalized, to compare "just" the CPU, which is pretty much impossible due to Apple and their vertical integration - which is often a strength in tests like this.
The 6850U is comparable in power use and still has a big perf gap against the M2 in most tests. Though there are some tests where the M2 leads with a big gap too, so maybe it comes down to software in a lot of these. Still, it seems to me like Apple is not leading.
>Unfortunately for testing, as mentioned, right now there is no Linux driver exposing the M2 SoC power consumption under Linux. Hopefully this will be addressed in time but unfortunately meant not being able to deliver any accurate performance-per-Watt / power consumption benchmarks in this article. But when such support does come, it will likely show the M2 indeed delivering much better power efficiency than the Intel and AMD laptops tested. Even under demanding multi-threaded workloads, the M2 MacBook Air was not nearly as warm as the other laptops tested. It's a night and day difference of the M2 MacBook Air still being cool to warm compared to the likes of other notebooks like especially Dell XPS laptops that get outright hot under load. The power consumption metrics should also be more useful/relevant once Linux has working M1/M2 GPU support in place too.
I mean, you shove a fan on the M2 and it beats itself...
According to this it should be an average of 19.3 Watts with a peak of 31.85 Watts.
Apple also exceeds the stated TDP during peaks as well, but we don't have that information atm. And remember there's a 14% perf gap between the two.
My purpose isn't really to say AMD is definitely better since apple still probably takes the win in overall product, I think the MBA is thinner and that's important to me. But it's to show that x86 isn't behind in performance and that you're not making sacrifices in that department to maintain software compatibility with the x86 ecosystem.
That average is over the entire benchmarking suite, including single thread tests and when tests are loading from disk or otherwise not fully saturating the CPU. Some of those benchmarks in that power consumption number are GPU only!
> And remember there's a 14% perf gap between the two
Like I said, power is not equal at all.
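Just to make the normalization point explicit, the arithmetic is simple; the scores and watts here are placeholders, not measurements:

    # A 14% performance lead flips to a perf/W deficit as soon as the faster
    # chip draws more than 14% more power. Numbers below are hypothetical.
    chip_a = {"score": 100, "watts": 15}   # hypothetical baseline
    chip_b = {"score": 114, "watts": 20}   # hypothetical: 14% faster, 33% more power
    for name, c in (("A", chip_a), ("B", chip_b)):
        print(name, "perf/W:", round(c["score"] / c["watts"], 2))

Which is why the multi-threaded wins don't tell you much about efficiency until someone measures package power on both machines under the same workload.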
> x86 isn't behind in performance and that you're not making sacrifices in that department to maintain software compatibility with the x86 ecosystem
Comparing the lowest end chip from one vendor to the highest end chip from another is not exactly a great look. Especially when the Arm chip is basically matching the x86 one while having only a few years of software optimization work.
I do think the M2 is more power efficient, but it seems close enough to me. The ThinkPad has very good battery life in real use-case testing, 15 hours or so doing regular work. I just don't have the perspective that its ability to scale up in power should be held against it. It's pretty typical to be plugged in when you're doing some super computationally expensive processing, while it's the casual emails etc. that need the great battery life.
> Comparing the lowest end chip from one vendor to the highest end chip from another is not exactly a great look.
Is it anyone else's fault that Apple only has one SKU? The M2 is a 20 billion transistor chip while Rembrandt is a 13 billion transistor chip; I'd argue that the M2 is the higher-end one. The laptops compared (MBA/ThinkPad) are the same price.
> Especially when the Arm chip is basically matching the x86 one while having only a few years of software optimization work.
So we agree it matches, lol? That's what I was arguing for. Nowhere did I say Apple sucks. I default to using Apple products and have been for almost all my life. I was just trying to make the case that x86 is good enough too, hardware-wise.
One could argue that Ryzen's biggest pitfall is that it hasn't adopted a big.LITTLE configuration yet. Alder Lake keeps its thirsty TDPs while staying relatively respectful of your temps and battery life. It's not quite as granular as Apple's core clusters, but the work with Thread Director is a promising start. Seeing AMD push heterogeneous systems so far down the roadmap virtually guarantees that they won't get Apple-level power efficiency for a while.
On the bright side, AMD has carte-blanche to design whatever they want. Not only can they one-up Intel by implementing core clusters, but they could also one-up Apple by adding power management per-logical-core, or some weird chiplet optimizations. The sky is the limit, really.
Alder Lake is much worse temp-wise. Look at the new Dell XPS design: they literally had to remove the F keys to make room for an additional heatsink to get the newer Alder Lake CPUs to work in a reasonable way.
Those Dell XPS are no better than an Intel MacBook; they're designed by people who can't put function before form and who consistently screw up their hardware design enough to be avoided like the plague. I'm not the least bit surprised they didn't pick the right chip for the job; two years ago Dell was sending out emails to XPS owners warning them not to leave the laptop asleep in a bag for risk of permanent damage...
I've tried a few Alder Lake laptops now (and daily-drive a 12700K desktop), and I don't really have any complaints about the thermals. Gaming, music production, video editing, none of it can seem to push the CPU past 40C under extended load. It's a solid chip that stands toe-to-toe with its contemporaries, and I reckon it's going to get scarily good once Intel transitions it from 10nm++ to 5nm.
I agree, but that still doesn't invalidate my point: they had to significantly overhaul the thermal system for Alder Lake. I'm not disputing that it uses less power than the prior Intel gens.
As the other reply mentioned, they are testing against the M2, and they are also testing the lower-powered AMD part, the 6850U, which does beat the M2 in some tests.
Not sure why you came out so strong with such a false statement.
Me too. I really wish I could buy a Samsung Galaxy Book Go 360 which is ARM and has amazing battery life, and install Ubuntu on it, but I don't think there's a known possible way to do so.
I really want a competent, high-end ARM Ubuntu laptop to happen. The Pinebook Pro has shitty specs and looks like shit with the 90s-sized inset bezels and 1080p screen.
I just spent a lot of time looking around for a laptop that had good battery life to develop on (i.e. ssh).
I eventually went with the MacBook pro with M2 because - *it actually is amazing*.
It lasts for like 3-5 days of all day use in typical vim/firefox use for me on a single charge.
I debated going with a System76, Falcon Northwest TLX, etc. for more power and x86, so that Arch Linux would be more compatible, but most laptops with x86 processors only get ~1-2 hours with a dGPU, or maybe <10 hours with Windows as the OS (and that typically drops significantly with Linux).
It's unfortunate, but x86 is really awful in this area - so I went for ARM, and the best ARM-based computer I could find (aluminum chassis / great durability) is the M2-based MacBook Pro (slightly larger battery than the Air).
What's nice is it completely beat my expectations. I have a nice and fairly new desktop with an i7 on Arch. My desktop takes 12 minutes to compile duckdb.
The M2? 6 minutes.
Color me impressed.
Just got it recently, and I'm looking forward to putting Asahi Linux on it tomorrow.
Along with Linus's recent push to the Linux kernel from an M2, I think it's likely that a very large portion of Linux users will be using Apple silicon soon.
Yeah this doesn't work for me. I develop with a lot of sensors and hardware and drivers are a pain in the ass.
I have a box of 30 cameras and exactly zero work on Mac.
Also, fuck Mac keyboards, I can't develop with them, and the constant quacking noises and spinning beachballs that I have to meditate to while I have absolutely NO idea what's causing the delay.
Even Alt+Tab doesn't work correctly, tiling shortcuts don't work consistently, and sending bytes over USB HID API to switch logitech devices isn't reliable either.
(I own zero Macs, all of my personal machines are Linux, I was given a Mac M1 for work and it's inefficient as hell, productivity-wise.)
Likewise, I'd be on that for sure. Right now I'm using older MacBook Airs running Ubuntu as my daily drivers, and a big Dell at the home office for other work.
Longer battery life and something like the Galaxy Book Go would definitely make me happy.
Apple isn't going to somehow make 64-bit ARM into something proprietary. Sure, they have their own special instructions for stuff like high-performance x86 emulation, but aarch64 on Apple only means more stuff gets optimized for ARM, which is good not only for Linux but for other open source OSes like the BSDs.
It happens to be a standard ARM instruction, Apple pushes some bits into ACTLR_EL1 (Auxiliary Control Register, EL1, "Provides IMPLEMENTATION DEFINED configuration and control options for execution at EL1 and EL0") in the kernel on context switch. The DTKs used a proprietary register and touched it using msr, but again, no custom instructions.
Apple does in fact ship custom instructions on their silicon, but where those are used, how they work, and how ARM lets them get away with it is a story for another day :)
https://aws.amazon.com/pm/ec2-graviton/ is an indication that Amazon cares about Linux support for the arm64 architecture. So the question is how much variance there is in the M1 relative to that.
x86 processors will be produced on the same nodes. Many ARM SoCs require binary blobs or otherwise closed source software, so they are not the best choice to run Linux on if you're approaching it from a longevity and stability perspective.
I think the concern is there is currently no 'IBM-compatible'-like hardware ecosystem around ARM. Raspberry Pi is closest, but nothing mainstream yet. And it looks like RISC-V will have a better chance than ARM.
RISC-V barely has any end-user visible deployment yet. Despite that, it has strong platform standardization (the OS-A Profile, RVA22, and a standardized boot process through the SBI and UEFI specs).
This is all just in time for VisionFive2, just announced. I suspect it will ship in large amounts.
Linux support is about much more than instruction set support. Most ARM chips are shipped on SoCs which can take a lot of work to get Linux running on, and even then it might not run well.
It used to be the gate length. But eventually improvements were made that didn't neatly map to gate length yet still doubled transistor density, so they transitioned to just dividing the number by sqrt(2) each generation (since chips are two-dimensional, making everything sqrt(2) smaller in each dimension increases density by a factor of 2).
Until 14nm the relation to transistor density was kind of true: all 14nm processes have a similar density that is roughly double that of 22nm, which is roughly double that of 32nm. But now even that's meaningless, and all they're doing is taking steps of a factor of sqrt(2) out of habit.
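A toy illustration of that naming cadence (the numbers are marketing labels, not measurements of anything); dividing by sqrt(2) each step lands roughly on the familiar 22 / 16 / 11 / 7 / 5 / 3 sequence:

    import math

    # Each "generation" divides the node name by sqrt(2), which is what a
    # literal 2x density increase per generation would imply if the names
    # still measured a feature size (they don't).
    name = 22.0
    for _ in range(6):
        nxt = name / math.sqrt(2)
        print(f"{name:4.1f} nm -> {nxt:4.1f} nm  (x2 density if it were literal)")
        name = nxt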
If the M2 Pro is similar to the M1 Pro (two M1s duct-taped together with very fancy duct tape), this is interesting because usually chips need to be significantly reworked for a newer process, and this implies an M2 core complex will be printable both at 5nm and 3nm. It would be interesting to know how much of this is fabrication becoming more standardized and how much is Apple's core designs being flexible. If it's the latter, then Apple has a significant advantage beyond just saturating the most recent process node.
The M1 Pro was not two M1s duct taped together. Their core configurations do not share the same proportions (8+2 vs 4+4).
You may be thinking of the GPUs? Each step in M1 -> M1 Pro -> M1 Max -> M1 Ultra represents a doubling of GPU cores.
Or you may be thinking of the M1 Max and Ultra. The Ultra is nearly just two Maxes.
Regarding your point about flexibility, it’s hardly unprecedented for the same core to be used on different processes.
Apple has at times contracted Samsung and TSMC for the same SoC. Qualcomm just recently ported their flagship SoC from Samsung to TSMC. Even Intel backported Sunny Cove to 14nm. And of course there’s ARM.
meta: HN constantly feels the need to be maximally pedantic even when what someone is trying to say was already covered, and it's just very tedious and leads to an exhausting style of posting to try to prevent it.
that's really why the "be maximally generous in your interpretation of a comment" rule exists, and the pedantry is against the spirit of that requirement, yet it's super super common, and I think a lot of people feel it's "part of the site's culture" - but if it is, that's not really a good thing, it's against the rules.
Just waiting for the pedantic "ackshuyally the rule says PLEASE" reply.
Well actually, it's not just HN, I see this pattern all over tech Twitter, programming subs on Reddit etc too. I think it happens when people want to participate in the conversation but don't have anything actually worthwhile to say, so rather than say nothing they nitpick.
I want to generously overlook any particular words you used and totally disagree with your main point. :) I think that it’s a positive feature of threaded comments to spin off side discussions and minor corrections. In this case, the correction was wrong, but if it was right, I’d have appreciated it in addition to whatever else ended up being written by others.
What’s bad for discussion is when those receiving the reply feel attacked, as if the author of the minor point was implying that nothing else was worth discussing. I wish that neither parent nor child comment authors felt the urge to qualify and head off critical or clarifying responses.
> I wish that neither parent nor child comment authors felt the urge to qualify and head off critical or clarifying responses.
I actually would go the other way and say that preemptively laying out a rebuttal to a common/superficial counterargument is an important supportive component of an argument. I personally wish that comment authors wouldn't take it as a personal slight when a common/weak counterargument is addressed preemptively.
Any scientific paper has a "what came before and why it's wrong and sucks, and why my approach is awesome and cool and better in every way" section and those are really the same thing - preemptive addressing of criticisms that reviewers/etc would make. You wouldn't say those are dismissive and superficial because they "belittle the previous authors and dismiss their work and arguments", or that "preemptively rebutting an argument is offensive to the reviewer". Why is that a bad thing?
"I am aware of a common concern X and I think it doesn't hold water because..." is a pretty reasonable thing to post in a casual debate and it really diminishes the discourse for people to take offense to it and for people to have to excessively censor or limit their discussion as a result. It's not good for the community.
Anyway, the other thing is, a lot of debates really come down to "values differences", which is a kind of X-Y problem. For example, in a political debate, a lot of debates over a policy aren't really about the policy, but rather a fundamental disagreement about whether government can (or even should attempt to) efficiently perform an action or regulate a certain kind of conduct. And preventing certain underlying ideals or principles from being surfaced in the discussion tends to lead to rather pointless debates where nothing is agreed, because it was never about the policy in the first place. So the policy actually worsens that problem as well, you just end up with tilting at policy windmills instead of addressing the actual area of disagreement (which is not the policy at all).
I wonder if they'll also bump the M2 machines to 3nm silently, if the efficiency bump is minor? Apple previously split the A9 between TSMC and Samsung at two different node sizes, so it wouldn't be completely crazy.
Or perhaps they're content to leave the M2 as 5nm for easy performance gains in the M3 next year. It also has the advantage of keeping the cheapest machines off of the best node size, which is surely more expensive and more limited than 5nm.
There's no reason to assume that a 3nm and 5nm M2 core is identical in that way. It's probably similar to the changes Intel used to do for die shrinks when they were doing tick-tock.
The headline is just blatant speculation. It was written before Ming-Chi Kuo tweeted about the M2 Pro being on enhanced 5nm. Once he said that, the article was updated to include his tweet, adding an element of uncertainty. Ming-Chi Kuo is a credible news source; cultofmac is an aggregator - they don't care if they're wrong and they optimize for eyeballs.
This post reeks of fanboy-ish excitement; there's nothing to see here.
So we'll see at least 1-2 years of Apple Silicon being at least one node ahead of the competition. I am curious how long Apple will be able to maintain this lead, and what the perf/watt will look like when (if?) AMD reaches node parity with Apple in the near future. Or when, perhaps, Intel uses TSMC as well, and the same process node.
I think this has been Apple's game for a LONG time. They have led in mobile chips to the point where they are sometimes 2 years ahead of the competition.
They do this using their monopsony power (they will buy all the fab capacity at TSMC and/or Samsung, well before the competition even aims to do so).
> They do this using their monopsony power (they will buy all the fab capacity at TSMC and/or Samsung, well before the competition even aims to do so).
It's not just buying power - Apple pays billions of dollars yearly to TSMC for R&D work itself. These nodes literally would not exist on the timelines they do without Apple writing big fat checks for blue-sky R&D, unless there's another big customer who would be willing to step up and play sugar-daddy.
Most of the other potential candidates either own their own fabs (Intel, Samsung, TI, etc.), are working on stuff that doesn't really need cutting-edge nodes (TI, ASMedia, Renesas, etc.), or simply lack the scale of production to ever make it work (NVIDIA, AMD, etc.). Apple is unique in that they hit all three requirements: fabless, cutting-edge, massive scale - plus they're willing to pay a premium to not just secure access but to actually fund development of the nodes from scratch.
It would be a very interesting alt-history if Apple had not done this - TSMC 7nm would probably have been on timelines similar to Intel 10nm, AMD wouldn't have had access to a node with absurd cache density and vastly superior efficiency compared to the alternatives (Intel 14nm was still a better-than-market node compared to the GF/Samsung alternatives in 2019!), etc. I think AMD almost certainly goes under in this timeline: without Zen2/Zen3/Zen3D having huge caches and Rome making a huge splash in the server market, and without TSMC styling on GF so badly that GF left the market and let AMD out of the WSA, Zen2 probably would have been on a failing GF 7nm node with much lower cache density and would just have been far less impressive.
AMD of course did a ton of work too - they came up with the interconnect and the topology - but it still rather directly owes its continued existence to Apple and those big fat R&D checks. You can't have AMD building efficient, scalable cache monsters (CPU and GPU) without TSMC being 2 nodes ahead of the market on cache density and 1 node ahead of the market on efficiency. And they wouldn't have been there without Apple writing a blank check for node R&D.
They absolutely use their power (aka money) to buy fab capacity, but they are also responsible for a ton of investment in fabs (new fabs and new nodes). Because of that investment they get first dibs on the new node. In the end it's up to the reader to decide whether this is a net positive for the industry (would we be moving as fast without Apple's investment, even accounting for the delay in getting fab time until after Apple gets a taste?).
What would motivate TSMC to choose to only have 1 customer?
TSMC is known as "huguo shenshan" or “magic mountain that protects the nation”. What would motivate TSMC to choose to have their geopolitical security represented by only 2 senators?
IIRC they were using TSMC before TSMC had a material process lead, and supported them (and moved away from Samsung) with big contracts and a long-term commitment. Hardly surprising that they get first go at a new process. Not a riskless bet, but one that has paid off.
Exactly. You cannot look at that as if they decided 2 years ago to just buy all the capacity. Their relationship with TSMC goes back way further than that, and there have been several ups and downs along the way.
Yeah, this is what I am wondering as well. If nobody else ends up switching to ARM in the laptop/desktop space and eventually AMD and Intel are making 5 or 3nm chips, then surely this massive lead in power efficiency is going to close. At current levels the new Apple computers seem awesome - but what if they end up only 10-20% more efficient?
You do have ARM in Chromebooks. Any wholesale switch for Windows seems problematic given software support. But beyond gaming, a decent chunk of development, and multimedia, a lot of people mostly live in a browser these days.
Everybody except Intel and Samsung is screwed if TSMC stops making chips.
Apple (and the rest of the mobile industry) would try to move to using Samsung's fabs and Intel would go back to being the undisputed king on desktops, laptops, and servers.
I think TSMC has like 2-3 times the fab capacity that Samsung does right now for modern chips, so there would be a huge chip shortage.
Apple's $200 billion cash pile would come in handy when trying to buy fab capacity from Samsung so they might come out ahead of less cash-rich competitors.
There would be a significant hit to processor performance. Samsung fabbed the recent Snapdragon 8 Gen 1, which has comparable single core performance to the iPhone 7's A10 fusion chip.
The thing is that 4nm doesn't actually mean anything. Intel's 10nm node is mostly on par with TSMC N7, and it caused quite a bit of confusion, so Intel renamed a slightly improved version (what they would have called 10nm++) to Intel 7. It's all just marketing and has been for 15 years or so.
I don’t think that is fair. ARM and Apple have very different objectives. Apple does not care if its CPU design division makes any money and they can get away with very large SoCs that rely on cutting edge manufacturing. ARM on the other hand needs to sell designs that manufacturers can use to make money. So there is a tendency to go towards simpler designs that are easier to manufacture. So sure, Apple has an edge in performance. But they don’t have the same business model or optimise for the same things.
Well, they would be seriously hurt. However, does that matter when almost every tech company (including Qualcomm, MediaTek, AMD, Apple, ARM, Broadcom, Marvell, Nvidia, Intel, so forth) would also be harmed?
TSMC going down is basically the MAD (Mutually Assured Destruction) of tech companies. Kind of a single point of failure. Intel would probably weather it best but would still be hurt because they need TSMC for some products. Plus, well, in the event of TSMC's destruction (most likely by a Chinese invasion), Intel might raise prices considerably or even stop sale to consumers especially as their chips would now have major strategic value for government operations. NVIDIA might also survive by reviving older products which can be manufactured by Samsung in Korea, but same situation about the strategic value there, and getting chips from Korea to US might be difficult in such a conflict.
Does anyone have any resources that explain the historical reason(s) TSMC became what it is? How did the world's most important hardware manufacturer manage to get constructed on a geopolitical pinchpoint?
Way down the road I hope Tim Cook writes a memoir. I’m curious as to his unvarnished thoughts about doing business in (and being so reliant on) Taiwan and China. I’m sure he can’t publicly express some of those thoughts without unnecessarily adding risk for Apple but he must have lots of interesting opinions about things like being reliant on TSMC vs trying to build their own fabs, etc.
I recall Tim Cook actually speaking about this. He said Apple is there not because it's cheap but because they have unmatched engineering capacity. He was talking about how much quicker everything can get moving and how much accessible engineering talent there is.
It makes sense; after all, there are much poorer countries in the world, but you can't expect Afghanistan, for example, to be like "okay okay Apple, we will do the chips and the phone at a 25% discount". In fact, you can't expect even the EU or USA to easily start doing that, because the manufacturing know-how is long gone with the de-industrialisation of the West. Sure, there's some talent and capacity left, but it's not anywhere near the scale of what exists in Taiwan and China.
Based on how Jobs' life and legacy are discussed at Apple, I doubt Cook will ever speak frankly about matters that affect Apple.
Kara Swisher is interviewing him and Laurene Powell Jobs soon. I expect that interview to continue to focus on only the best possible characterizations of Steve.
If the economic situation we are all in at the moment is because of the Ukraine war, then I think if WW3 happens, slower iPhones will be the least of our concerns.
If TSMC poofed out of existence because of a few bombs from China or a freak natural disaster, global GDP would drop significantly and quickly. It's one of the biggest SPOFs in the world that I worry about.
Repeat of the car market? I'm going to make a bunch of money off the old computers collecting dust here, and Apple's going to have to unlock support for new OSes on them, making all its money on services.
By that time it'll be far from bleeding edge. The Taiwan fabs will be close to N2 while the Arizona fab will be able to produce 5nm-generation chips; that'll be 4-5 year old tech by then.
For what reason would TSMC abruptly stop supplying chips short of war? There's nothing other than war that would cause it. And if there's a war, Apple's profits are the least of problems.
Meteor, large tsunami, earthquake, solar flare, an extremely infectious disease that isn't as weak as Covid-19. There are so many natural disasters that could cripple or outright destroy TSMC's production facilities.
A meteor or solar flare big enough to cause problems would be a global problem.
A large tsunami is possible, but I doubt they built these things right next to the ocean, and as the other poster says, they're built to withstand large earthquakes.
Climate change impacts or global pandemics could also significantly impact TSMC's operations. Or, Chinese actions that are somewhat short of full-on war.
Also relevant here: TSMC is building a chip fab in the US.
I wonder how many people have to collaborate to get a 3nm semiconductor out the door. TSMC has 65,000 employees. ASML has 32,000 employees and 5000 suppliers. The complexity of it all is unimaginable!
It is possible it is for a Mac Pro, but that would be just for bragging rights: a halo processor that beats everything AMD and Intel have in terms of power and single-threaded and multithreaded workloads. It all depends on how good the yields are on 3nm.
As an aside, I am not sure who a Mac Pro computer would be for at this stage. I wonder if they would design it to be able to slide into a rack mount. How many Mac workloads need a computer more powerful than a Mac Studio at this point? It used to be that you needed a pro machine to properly use Photoshop or Illustrator, and now a MacBook Air is fine most of the time and a Mac Studio is perfectly capable of doing even the most demanding video editing. The only things that come to mind would be render workloads for animation houses and for video game developers. Will the chassis design be tailored for that? Will it be a good value for those types of customers, who are not going to care what it looks like because it is going to sit in a rack or in a corner?
I'm entirely unclear whether an ARM-based Mac Pro would look anything like the old Mac Pro.
Well. Actually, whether it would look anything like the late, lamented cheese grater Mac Pro. I worry it would be a lot more like the trashcan Mac Pro.
Because for a long time, the Mac Pro has meant expandability. Lots of RAM slots. PCIe slots. Drive bays. Big-ass GPUs. Something adaptable to pro work flows, namely audio and video capture and production.
Is anything like that possible with Apple Silicon? I think it's extremely unlikely they'd support GPUs. And I really don't think they will support old-fashioned RAM - they seem to be all-in on memory directly on the package. And PCIe cards? Shrug? No idea, because we've not seen anything like that yet.
Is it just a case and a logic board that you then plug SoC modules onto? Would Apple ever sell such a thing? Better question: would Apple ever sell the individual modules?
On one hand I don't think this will ever make them money; on the other, it might just be a halo product that is worth the cost in good press. A shiny expensive thing that lets Apple say, "look, see, we have the fastest personal computer you can buy."
So it seems like the M2 is really an "M1+" or "M1X", whereas the M2 Pro/Max/Ultra are really the second-generation Apple Silicon.
That's fine, in my opinion. M1 is still an amazing chip, and if that product class (MacBook Air, entry iMac, etc.) gets even marginal yearly revisions, that's still better than life was on Intel.
Actually I get a different impression. Although the M2 test results have been impressive nonetheless (the M2 being based on the A15 and not on the A14 makes it more than an M1X imho), the issues around throttling and thermals with the MacBook Air make it seem to me that the M2 was actually designed to be on the 3nm node - which then seems to have been delayed by TSMC. That the rest of the M2 line will presumably be made with the 3nm process reinforces this impression for me.
I was planning on getting the redesigned M2 Air, but with the above in mind (which is just speculation) it got me thinking again.
I know I'm being irrational about this, but for some reason it makes me lean toward getting an M1 Air or 13-inch Pro rather than an M2: it's like, with the M2, performance gains are being squeezed out of the same (or similar enough) process as the M1 rather than moving to a significantly better process, at the cost of efficiency.
I almost did this, but the return of MagSafe (which frees up a USB port) and the display improvements were worth it to me. Oh, how I’ve missed MagSafe.
An important note: the M2 MacBook Pro and M2 Air appear to be essentially the same hardware at the same price - but the MacBook Pro does have a slightly larger battery.
And it is quite incredible. I've been very impressed that I can work all day in vim on an M2 MacBook Pro - for several days - on only one charge.
The battery life is incredible.
I'm about to switch to Asahi Linux, so I hope it stays that way, but I am almost certain the battery life will still be better than on any x86 computer running Linux I could've gotten.
They're also available used and a lot cheaper now.
I'm plenty happy with mine and don't plan to switch any time soon. Yeah, the M2 Air looks a bit nicer, but it's more of a Pro follow-up with its boxy design and... eh, the M1 Air is totally fine in all aspects I can spontaneously come up with. It's a really good device and the laptop I might recommend for years to come. It getting cheaper and cheaper will only increase the value you'll get.
I'm curious what actually unifies the "MX" chips for a given X. There are different chips in the series, and apparently they can even be on different-sized processes and keep the name.
M1 uses the same Firestorm and Icestorm cores as the A14 SoC. M2 uses the same Avalanche and Blizzard cores as the A15. So one can argue about the importance of the differences, but they are clearly two different generations.
M1 pro, max, and ultra still have the same cores, just a different number of them. One would assume that M2 derivatives will be the same: different combinations of the same cores.
No, not really. The "3nm" in the "3nm process" is not a measure of anything in particular, and even if it is a measure, the measure may or may not be in the neighborhood of 3nm.
Several years ago, fabs started naming each next-gen process with a smaller number of nanometers, even if the process size didn't change. It's just marketing now.
Node process is determined by the physical manufacturing. Architecture is determined by the design templated onto it during manufacturing. You could make an ARM core on an Intel process (which I think even happens in some of their testing phases). So yes.
The nm in the process names is more marketing than a reference to something physical.
For example:
> The term "5 nanometer" has no relation to any actual physical feature (such as gate length, metal pitch or gate pitch) of the transistors. According to the projections contained in the 2021 update of the International Roadmap for Devices and Systems published by IEEE Standards Association Industry Connection, a 5 nm node is expected to have a contacted gate pitch of 51 nanometers and a tightest metal pitch of 30 nanometers. However, in real world commercial practice, "5 nm" is used primarily as a marketing term by individual microchip manufacturers to refer to a new, improved generation of silicon semiconductor chips in terms of increased transistor density (i.e. a higher degree of miniaturization), increased speed and reduced power consumption compared to the previous 7 nm process.