Apple M2 Pro to use new 3nm process (cultofmac.com)
285 points by nateb2022 on Aug 25, 2022 | 231 comments


M1 and M2 actually are not produced on the exact same process node. M1 is N5 and M2 is N5P, an optimized version of N5.

I think Kuo might be misinterpreting the statement from TSMC regarding revenue from N3. The key is that they said it won't "substantially" contribute to revenue until 2023. Of course processors like the M2 Pro/Max/Ultra won't ship in anything like the volumes of something like an iPhone, and in the grand scheme of things can't represent a substantial contribution to TSMC revenue.

The fact is TSMC said they'll start N3 HVM in September. So they are producing something and we know Apple is expected to be the first customer for this node. It's too early for the A17 so either it's the M2 Pro/Max/Ultra or something new like the VR headset chip. Can someone see another possibility?

Apple, btw, still has to replace the Mac Pro with an Apple Silicon based model, and their own deadline (2 years from the first M1) is running out. It could make sense that they want to bring this one out with a "bang" and claim the performance crown just to stick it to Intel :)


I would expect that Apple would push its golden child (the iPhone) onto the node first. It's a small chip which they can use as a pipe cleaner, making sure they can get the yields up and optimise the process before pushing a larger die onto the node.

They easily could have been allocating risk production to the iPhone for the past couple of months, ready for the launch. Apple being like: yes, we will take lower yields for less cost.

I do not expect any company to announce a production N3 product until Apple has had one out for at least 6-12 months. Look how long it took the rest of the industry to move to N5. I swear part of the reason was an exclusivity agreement with Apple, and it massively paid off for their CPUs. Having a node advantage is always massive in terms of the price / performance / power matrix.


Are you suggesting they might have produced millions of A16 chips on N3 during the risk production phase and launched them before TSMC even reaches HVM? Highly unlikely. Risk production is a phase where they still make changes and fix issues. It's like a beta phase. It does not come at a lower cost; it would be more expensive to throw out a big chunk of chips. The iPhone chips are very high volume; you can't produce them before reaching... the high volume manufacturing phase.

The iPhone contributes to TSMC revenue in a substantial manner, so that also would not fit what TSMC said.

The M2 Pro/Max/Ultra are much lower volume and higher margin. It makes sense to start with them.


The iPhone contributes the most to Apple's revenue and margin. It wouldn't be a great pipe cleaner because they need a ton of chips on a committed launch schedule and can't afford any yield screw-up there.

With the Mac, they could probably afford a 10% yield, can extend ship times, and in the worst case could even push back a launch.

While the bigger chip does push down yields, my bet is they have more wiggle room than needed to compensate.


Except that they are much larger chips, that will be much more sensitive to yield issues. They could do that, but they will be expensive. Maybe that's ok.


Their biggest (the M2 Max in the Studio) are more like chiplets with interposers, which exponentially lowers the yield problems depending on how much you can split it up. Also, larger chips can still set a threshold on how many GPU cores to disable due to defects and that kind of thing, where a mobile chip might just be thrown out with a much lower number of (or any?) defects.

Nvidia can make tons of tiers out of the same chip by just setting different thresholds on the number of usable cores; it isn't all just price discrimination (though sometimes I think they have been found to be fusing off much more than needed for the number of defects, as a pure price discrimination play, or that might have been Intel with cache size).
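
For intuition on why splitting and binning helps, here's a minimal sketch of the textbook Poisson yield approximation (the defect density and die split are made-up illustrative numbers, not TSMC data; the 432 mm^2 figure is the one quoted elsewhere in the thread):

    import math

    def fraction_good(area_cm2, d0, tolerated_defects=0):
        # Poisson model: probability a die has <= tolerated_defects defects.
        lam = area_cm2 * d0
        return sum(math.exp(-lam) * lam**k / math.factorial(k)
                   for k in range(tolerated_defects + 1))

    D0 = 0.1                  # assumed defect density, defects per cm^2
    monolithic = 4.32         # ~432 mm^2 monolithic die, in cm^2
    chiplet = monolithic / 4  # same silicon split into four smaller dies

    print(f"perfect monolithic dies:  {fraction_good(monolithic, D0):.0%}")     # ~65%
    print(f"perfect chiplets:         {fraction_good(chiplet, D0):.0%}")        # ~90%
    print(f"monolithic, 2 defects ok: {fraction_good(monolithic, D0, 2):.0%}")  # ~99%

Same wafer area, but far less silicon gets thrown away when a defect only kills one small chiplet, or only costs a fused-off GPU core, instead of a whole monolithic die.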


M2 Pro/Max chips will be huge. Only the Ultra uses an interconnect between dies, but that's two full Max dies. The M1 Max is 432 mm^2; that's enormous. M2 is 25% bigger than M1.

There aren't a lot of lower core count SKUs for these chips either. There are a few GPU tiers and CPU tiers, but not much room to go down. The Pro has 8/10 CPU cores and 14/16 GPU cores. The Max has 10 CPU cores and 24/32 GPU cores.

There are no 4-7 CPU core segments like AMD does with their chiplets. Intel has much lower tiers for their dies.

The dies are just huge huge huge.


I think a lot of people who watch these things suspect exactly what you wrote, at the least. Apple funds much of TSMC's research and development for a new node and pays for production for use in iPhones; both sides make tons of money and repeat the cycle on a new node, while other companies come in and buy up capacity on that cutting-edge node, seemingly comfortably coasting behind Apple. I guess now Apple may use some capacity themselves for their Mac CPUs after the newest iDevices prove the tech.


Starting HVM in Sept does not mean you get revenue in Sept. It takes months before volume is reached and testing, packaging and shipping are done. TSMC isn't unusual in stating that they won't get revenue from N3 until 2023.


Yes. So either way, Kuo cannot deduce that the M2 Pro won't be on N3. Whether the revenue is realized later or the numbers are too low to justify calling it a substantial contribution to TSMC revenue... same result. Kuo's argument does not seem to hold water. Now, that does not mean the inverse is true and the M2 Pro is guaranteed to be on N3. I can only come up with the VR chip as an alternative, and so far I think nobody else has come up with a suggestion.


The current assumption and his prediction, along with other supply-chain data, suggest the new MacBook Pro, assuming it uses the M2 Pro, would come out in Oct / Nov.

And if the new MacBook Pro indeed uses the M2 Pro, and the M2 Pro uses N3, it would be classified as substantial revenue. Hence his view that the M2 Pro won't be on N3.

TSMC isn't normally one to spin words about a substantial contribution to revenue. At least until now it has meant actual product shipment, as they do get some N3 revenue from pilot projects and product R&D.

Edit: There were rumours of Intel being the first customer for N3 with their GPU, using it as tiles on their next-gen Meteor Lake SoC. Personally I think that is likely the case.


Well, doesn't Apple usually prepay? That's normally why they get preferential treatment.


In accrual accounting payment has very little connection to when revenue is recognized. In order for revenue to be recognized the product has to ship.


It's more complicated than this; accounting as a field exists pretty much because there are intricate sets of rules and ways to interpret them. Here, my understanding is a good accountant would say not to recognize the revenue until you consider it shipped -

i.e. if you agree to pay me a bajillion dollars for a time machine with an out clause of no cash if no delivery, that doesn't mean I get to book a bajillion dollars in revenue

over-the-top example, but this was the general shape of much Enron chicanery, booking speculative revenue based on coming to terms on projects they were in no shape to deliver, so it's very much an accounting 'code smell' if 'code smell' meant 'attracts regulators' attention'


I think this case is: how do you book paying one billion in cash today for a time machine delivered sometime next Tuesday of next year?

Accrual accounting says even if you have the money it’s not “yours” until the product ships.
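
A toy sketch of that rule (a purely illustrative pseudo-ledger, not real accounting software): the prepayment sits on the books as a liability, deferred revenue, and only moves to revenue when the product actually ships.

    # Hypothetical, heavily simplified ledger for illustration only.
    ledger = {"cash": 0, "deferred_revenue": 0, "revenue": 0}

    def receive_prepayment(amount):
        ledger["cash"] += amount
        ledger["deferred_revenue"] += amount   # a liability: not "yours" yet

    def ship_product(amount):
        ledger["deferred_revenue"] -= amount
        ledger["revenue"] += amount            # recognized only on shipment

    receive_prepayment(1_000_000_000)   # customer prepays for the time machine
    print(ledger["revenue"])            # 0 -- nothing shipped, nothing recognized
    ship_product(1_000_000_000)         # next Tuesday of next year, hopefully
    print(ledger["revenue"])            # 1000000000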


> something new like the VR headset chip

My bet is on this one.


Given that there is an iPad with an M1 chip, I think Apple is smart and produces broadly usable chips that fit in all the devices and potentially the VR glasses.

It makes sense that their more expensive M2 Pro chip is made on the 3nm process, which is more expensive and has less capacity. It would go into the more expensive MacBooks because those are less in demand and have a higher price. For the VR glasses it makes sense that they release an expensive developer unit and wait for 3nm to ramp up for the mass consumer version.


> Apple still btw has to replace the Mac Pro with an Apple Silicon based model

At about the same price, I think 10 Mac Studio Ultras count as replacing much more than a single maxed-out Mac Pro. Though I do expect an Apple Silicon Mac Pro is still coming, I do not see any need for Apple to rush to meet some marketing deadline from almost 2 years ago.


They don't "have to" be ready in 2 years. The M2 numbers from the N5P process were underwhelming; I wouldn't replace my M1 MacBook Pro without seeing significantly superior performance / watt numbers, and I'm happy to wait for the N3 process to be in production, however long it takes.


Of course nothing forces them to be ready within 2 years but alas, that's what Apple said they'd do. I agree the M2 numbers were not amazing. I guess after the big M1 shock it's hard to follow up with something that comes even close. You can't get gains like those the transition from x86 to an integrated ARM-based SoC brought, doubly so when there's no substantial process node improvement (N5 -> N5P is a minor optimization). In the end they mostly bought better performance with a bigger die and increased power consumption. I'm pretty convinced they'll need N3 for the next jump, but even that won't be on the level of the Intel -> M1 step.

The revolution has happened, now it's all about evolution.

BTW, if Apple wants to increase the prices of the Pro MacBooks like they did with the M2 Air due to inflation, then they had better justify it with some good gains. The big changes in terms of hardware redesign already happened last time.


> I agree the M2 numbers were not amazing.

What other CPU core design iteration managed to improve performance while also cutting power draw?

Anandtech's deep dive on the performance and efficiency cores used in the A15 and M2:

Performance:

>In our extensive testing, we’re elated to see that it was actually mostly an efficiency focus this year, with the new performance cores showcasing adequate performance improvements, while at the same time reducing power consumption, as well as significantly improving energy efficiency.

Efficiency:

>The efficiency cores have also seen massive gains, this time around with Apple mostly investing them back into performance, with the new cores showcasing +23-28% absolute performance improvements, something that isn’t easily identified by popular benchmarking. This large performance increase further helps the SoC improve energy efficiency, and our initial battery life figures of the new 13 series showcase that the chip has a very large part into the vastly longer longevity of the new devices.

https://www.anandtech.com/show/16983/the-apple-a15-soc-perfo...

Intel and AMD seem to have both returned to the Pentium 4 days of chasing performance via increased clock speeds and power draws.


The report you quoted and linked to is about the A15, not the M2. The M2 is based on the A15, but from what I've seen it does use quite a bit more power (~30%?) than the M1 when loaded. Anandtech has not analyzed the M2 yet as far as I can see.


As previously noted, those core designs are used in both the A15 and the M2.

Just as the same cores were used in the A14 and M1.

Using more power overall comes from adding additional GPU cores and other non-CPU core functionality.


Whether the increase in power consumption comes from the additional GPU cores, from increased frequencies in the CPU cores, or from other parts added to the chip is imho not that important for users (and depends on what they are doing). They see the system as a whole. They get x% more performance for y% more power usage. For the CPU, x is smaller than y. This is totally normal when increasing frequencies.

Note: I'm not saying the M2 is bad. It's a very good chip indeed. All I said was that it was not amazing. It was an iterative, yet welcome, improvement. And I think one couldn't expect anything amazing quite so quickly.


We're talking about CPU core design.

Would we say the Zen 4 core design is less efficient because AMD is going to start bundling an integrated GPU with Ryzen chips, or would we just talk about Zen 4 core power draw vs Zen 3?

Apple's performance cores managed to improve performance while cutting power.

What other iterative core design did this?

It helps to remember that Apple isn't playing the "performance via clock increases no matter what happens to power and heat" game.


I guess that's where the misunderstanding comes from. I was not talking about CPU cores alone, only the M1 and M2 as a whole.

But I still am not sure if I can believe that the M2 CPU improved performance while at the same time cutting power. Can you link to some analysis? Would be very interesting. Though please not the A15 one, the cores are related but not the same and the CPUs have big differences.


Apple hardware updates are always and only moderate incremental improvements, since the very beginning. Apple is reasonably predictable in this regard. It is unrealistic to expect a generational and exponential leap in performance or efficiency in any of Apple's hardware refreshes. That has never happened, and it likely never will. What happens instead is whatever model gets a little bit better than the last revision. The M1 was not a massive leap forward, but instead an impressive lateral move. Subsequent Apple Silicon chips will only be a smidgen better than the newest previous revision.


> And I think one couldn't expect anything amazing quite so quickly.

I'm not expecting to see such a performance jump from another CPU transition again in my lifetime. The jump from x86 to M1 was a boost because of TSMC's fab process compared to Intel's, yes, but it was also from the ISA change, leaving behind a lot of the x86 cruft.


So how do you explain the M2 13-inch MBP throttling severely under load while the M1 13-inch MBP doesn't as heavily? It's the same chassis. It's impossible for the M2 to use less power; the heat has to come from somewhere. It's not the GPU, since many benchmarks use a CPU-only load to show the throttling behaviour.


Are you European? The price of the M2 Air did not increase. The USD price was exactly the same as for the M1 Air. Both debuted at $1199. The price went up in Europe because of a drastic reduction in the EUR/USD exchange rate.


The M1 Air launched at a price of $999. The increase to $1199 happened with the launch of the M2 Air.

  > With its sleek wedge-shaped design, stunning Retina display, Magic Keyboard, and astonishing level of performance thanks to M1, the new MacBook Air once again redefines what a thin and light notebook can do. And it is still just $999, and $899 for education. 
https://www.apple.com/newsroom/2020/11/introducing-the-next-...


The euro is not doing great, but the dollar is falling as well.


The dollar is not falling. It has been extremely strong against any standard world currency basket. So strong it hurts exporters.


Actually the dollar is falling as well (in real purchasing power) due to inflation. It's just doing quite a bit better at the moment compared to e.g. the Euro. One factor could be the increased interest rates: why would I keep Euros and still get nothing when I could get at least a little bit of interest on USD? Also, the war in Europe does not help their currency.


> I wouldn't replace my M1 MacBook pro without seeing significantly superior performance / watt numbers

What makes you think that this will happen in one generation? The point of the M2 is not to get M1 users to migrate, it’s to keep improving so that MacBooks are still better products than the competition. Apple does not care that you don’t get a new computer every year, they are most likely planning for 3 to 5 years replacement cycles.


Almost nobody should update from one generation of CPUs to the next one though. Incremental upgrades are fine.


idk, I feel like most people don’t replace their expensive MacBook Pros every single time there’s a new one


> The M2 numbers from the N5P process were underwhelming.

No, they weren't. Performance improvements are precisely in line with every previous hardware update. Apple hardware updates are always and only incremental improvements, since the very beginning. Apple is reasonably predictable in this regard. Expecting more than what Apple has always given in hardware updates is overtly unrealistic.


After the November 2020 launch-day chaos, with much existing software (Docker, Java, Android Studio / Emulator, VSTs, etc.) not yet working on those machines, a typical developer would have had to wait more than 6 months just to do their work with fully supported software on the system and to take full advantage of the performance gains rather than using Rosetta.

At that point, they might as well have skipped the M1 machines and instead waited to purchase the M1 Pro MacBooks. Now there isn't a rush to get an M1 MacBook anymore, as Apple is already moving to the M2 lineup.

By the time they have made an Apple Silicon Mac Pro, they are already planning ahead for the new series of Apple Silicon chips; probably M3, which will be after the M2 Pro/Ultra products.

After that, it will be the beginning of the end of macOS on Intel.


What’s the point of this comment? Every consumer electronic product has a new version a year or 2 away.

Apple products also have a long reputation of having a sweet spot for buying a new product. The Mac Buyers guide has existed for like a decade or more.


> What’s the point of this comment? Every consumer electronic product has a new version a year or 2 away.

So 9 months after releasing the M1 MacBooks, the M1 Pro MacBooks came out, already replacing the old ones in less than a year. Given this fast cycle, there is a reason why the Osborne effect applies precisely to Apple's flagship products rather than to 'every consumer electronic product'.

This is a new system running on a new architecture, and it must run the same apps as the user's previous computer. Unfortunately, much of the software for it was simply not available at the time, and when it was there, it often didn't run at all in Nov 2020. Even a simple update could brick the system.

What use is a system that bricks on an update, losing your important files, or that makes power users wait 6 months for the software they use every day to be available and supported for their work?

Going all-in on the hype fed by the Apple boosters and hype squad doesn't make any sense as a buyer's guide.


>So after 9 months releasing the M1 Macbooks, the M1 Pro Macbooks came out afterwards, already replacing the old ones in less than a year.

The M1 Air and 13" Pro are really entry-level machines. The first model with an M1 Pro costs $700 USD over the base model 13" M2 MBP. The M1 Pro still has much better performance compared to a base M2. The M1 Pro, Max and Ultra didn't replace anything. No one with a budget is going "Oh, the M1 Pro only costs an extra $700 USD, I'll get that".

>What use is a system that bricks on an update; losing your important file or for power users having to wait 6 months for the software they use everyday to be available and supported for their work?

What's the point of this comment? Things happen. It sucks. Apple isn't the first and won't be the last company to make a mistake. Don't get sucked into the shininess of their latest product.


> The M1 Pro still has much better performance compared to a base M2. The M1 Pro, Max and Ultra didn't replace anything.

Exactly. Hence why many skipped the M1 and bought the 14-inch Mac with the M1 Pro instead. With the time it took for all the existing software to work properly on Apple Silicon, the 14-inch M1 Pro was available by then and next to no one bothered getting the old broken 13-inch M1 MBP.

> No one with a budget is going "Oh, the M1 Pro only cost an extra $700USD, I'll get that".

No one on a "budget" would get a computer that costs more than $1,000 and bricks on a system update / restore, or choose an Apple machine in the first place. Better to have plenty of money saved up by then, or financing options for the next version, instead of wasting it all on launch day and losing all your files the next week.

> What's the point of this comment? Things happen. It sucks. Apple isn't the first and won't be the last company to make a mistake. Don't get sucked into the shininess of their latest product.

It is the truth of the matter, and it happened very frequently on launch day, with lots of users complaining that their shiny new computer bricked on an update / restore, that they lost all their files, and that they were unable to use the computer. So once again...

What use is a system for users that bricks on an update / restore, losing all your files, and makes you wait months for the software you use every day to be available and supported for basic work?

> Don't get sucked into the shininess of their latest product.

Don't tell that to me, tell that to these people who fell for it. [0] [1] [2]

[0] https://news.ycombinator.com/item?id=28090774

[1] https://news.ycombinator.com/item?id=28940358

[2] https://news.ycombinator.com/item?id=27065953


Well actually, the Osborne effect is a myth

http://www.thesoftwareunderground.com/2005/06/osborne-effect...


>…not much existing software was working [lists a few nerdy dev tools used by 0.01% of the Mac user base]…


Surely software developers (and other people using x86-only software like Photoshop) are more than 0.01% of the mac user base.

Apple has specifically said that vim users are the reason they put back the physical escape key…


Photoshop worked on M1 on day 1. What is the argument even about here? Someone is upset that all software developers didn’t port everything to M1 overnight?


> Photoshop worked on M1 on day 1.

Complete and absolute nonsense.

From [0]:

As of March 2021, Photoshop now runs natively on Apple computers using the Apple Silicon M1 chip with 1.5X the speed of similarly configured previous generation systems.

Even before that it wasn't supported or designed to run on M1, hence the frequent crashes and freezing users were getting. Therefore, it wasn't working.

So how exactly is waiting until March 2021 for a stable M1 version of Photoshop, "working on day 1"?

[0] https://helpx.adobe.com/uk/photoshop/kb/photoshop-for-apple-...


Working != being a native app. The latter may be an interesting technical detail, but absolutely no user of the app will really care. They care about “does it open and not crash” (which _everything_ did b/c Rosetta) and - in a very distant second place - is it faster than before (which it also was).

So no: There was no one saying “but no software works”


This is just nonsense; there is tons of performance-critical software. A lot that hasn't been ported to native still doesn't run properly or at all. DAWs that run on Rosetta have many problems with VST plugins and performance. Just because something opens and doesn't crash doesn't mean it works properly at all.


> Apple has specifically said that vim users are the reason they put back the physical escape key…

Do you have a source for this ?


I’m pretty sure I saw this somewhere, but now I can’t find it, so I accept the possibility (though I think it’s low probability) that I’m misremembering.


I thought the same thing, but I can't find the source. Berenstain Bears effect.


The Mac Studio is the obvious Mac Pro replacement. It's hard to do a fixed system and also charge much more than $5k for it, and Apple has failed over and over at any kind of non-fixed Mac Pro.

They might just leave it as is. The studio is a very capable machine at the high end, and if given a beefier GPU/CPU and _maybe_ more DRAM it probably replaces the bulk of the professional use cases.


Apple explicitly said that a Mac Pro that isn't the Studio is coming.


> The Mac Studio is the obvious Mac Pro replacement.

No it is not. Mac Pro users wouldn't migrate to a system that doesn't allow removable or extra drives.


The trash can Mac Pro was exactly that. It had very little expandability.


The Mac Studio is a good trash can replacement. The trash can was a poor cheese grater replacement.

(speaking as a happy user of both a trash can Mac Pro and a [non-Apple] full-tower workstation)


This is cool!

At the same time, not being part of the Apple ecosystem, should I be worried about the closed nature of this? I have been using Linux for over two decades now, and Intel seems to be falling behind.

(I do realize Linux runs on the M1. But it's mostly a hobby project, the GPU is not well supported, and the M1/M2 will never(?) be available as open hardware.)


I'm running Asahi Linux on the M2 and it's great. Drivers are not all complete yet, but it's awesome.


How is webcam and microphone support? Web conferencing always killed me running Linux even on well supported hardware

How is battery life?

What distro are you running?

Are you doing all ARM binaries or is there some translation layer that works?

Sorry to bombard you, but I’m really curious about the support


I've not tried the webcam and microphone, which I guess I could from Firefox. Battery life is less than it will be once the drivers evolve further, because I think going into sleep mode is still imperfect.

The distro is Asahi Linux, which is ARM ArchLinux. All ARM binaries.

If you follow the Asahi Linux page, it updates super frequently, as drivers get tuned and so on.


Not the person you are responding to, but I was looking into it today. Webcam/mic/speakers don't work, but Bluetooth does. There are ARM to x86/x86_64 translation tools akin to Rosetta 2, but they have a lot of warts and are not well supported yet. The most promising one in my opinion is called FEX.


If you buy hardware supported by Linux, Zoom works well on it, including screen sharing.

I've been using Zoom on my desktop with a USB camera for video calls and screen sharing for a while now.

I was pleasantly surprised it actually worked and worked well.

Edit: to clarify, this post is about Linux in general, I don't use M1 or M2 Macs with Linux.


Why are you answering a question specifically made about M1 and M2 with a generic answer about a Desktop computer?

What is the point?


Here's what the GP says and what I'm discussing:

> Web conferencing always killed me running Linux even on well supported hardware

This is a forum for discussion, and that's what I'm participating in, a discussion.

I am unsure why you believe that my reply must strictly address a question in manner that you deem suitable.


Does it have GPU acceleration yet?

EDIT: apparently it does!!! https://asahilinux.org/2022/07/july-2022-release/ Not perfect of course but this is surprisingly good progress in a short time.


OpenGL ES 2.0 could happen as early as the end of the year. Vulkan is a long way off.


Apparently some old OpenGL games are already playable!

https://rosenzweig.io/blog/asahi-gpu-part-6.html


That is running in macOS not Linux.


> apparently it does!!!

Not in the public release yet. An experimental Mesa driver is running (https://rosenzweig.io/blog/asahi-gpu-part-6.html), but the kernel driver is still a work in progress (though it's making quick progress!). The demo at the end of the article is a proof-of-concept with the M1 acting as an eGPU for another computer; not something usable for a desktop environment yet.


As your daily driver machine?


I've not switched over to try that yet.


In my case I could use it as a daily driver, since I just need a fast browser and Linux with compilers etc., but I've been using macOS as a daily driver despite loathing (since the dawn of time) its font rendering.

I'll switch at some point, probably.


>despite loathing (since the dawn of time) its font rendering.

Could you expand on that a little please? I've always found the Mac's fonts & font rendering to be most pleasing so I'm interested to hear a different opinion - what annoys you about it?


I've got super sharp vision, fortunately, so I see the half shading and such, and it strains my eyes, which otherwise "expect" to bring edges into sharp contrast.

My eyes love the rendering engine on Win11, or whatever trickery they're using for fonts, and similarly on ArchLinux.


Oh I see, I thought it was the actual font rendering you disliked, I hadn't considered the smoothing being an issue! Between Apple's high DPI displays and my own eyesight I don't notice it (although I do remember when I was younger hating subpixel anti-aliasing when it was still in use, because of the rainbow around characters)

For anyone who's interested, in macOS you can disable this font smoothing with:

    defaults -currentHost write -g AppleFontSmoothing -int 0
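
If you don't like the result, deleting the same key should restore the default smoothing behaviour (you may need to restart the affected apps): defaults -currentHost delete -g AppleFontSmoothing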


It's great for you. But some of us are using Linux in industrial applications. You can't really put an Apple laptop inside e.g. an MRI machine. It may run highly specialized software, needs specific acceleration hardware, etc.

It's going to be a very sad day when consumer electronics win over industrial applications.


Apple hardware has never been about non-consumer, server or industrial applications, outside of some film, music and movie studios using Mac Pros and the Xserve a long time ago.

And if you're making an MRI machine or other industrial equipment that consumes a huge amount of power, the fact that your attached computer uses 300W vs 600W doesn't really seem like much of a big deal.

Apple has a head start with their ARM machines, but I'm also not really worried that the rest of the industry won't catch up in a few years eventually. You can only really pull off the new architecture trick once or twice, and being a leader has a way of inspiring competitors.

Apple's software and OS is also horrible to use in server applications, you only do it if you need to do it, such as iOS CI, device testing and such. Otherwise you avoid it as much as you can.


It's not about the electrical power use. It's about one market player taking out the air from the market by integrating the entire supply chain.


> You can only really pull off the new architecture trick once or twice

Apparently not.

The Mac has gone from Motorola 68000 to PowerPC to Intel to ARM.

And if needed, they could do it again.


What are you on about? A €5M MRI machine will have whatever computer its manufacturer will want to support. Which will probably be something like a Core 2 running Windows XP.

None of these machines have used Macs, ever. Why would anything Apple does affect this market?


> Why would anything Apple does affect this market?

Apple is gobbling up its supply chain and using precious fab slots. It affects others.


I don’t think you need to worry about that, those are completely different use-cases and markets. ARM CPUs will be available and widespread in other applications soon enough, and Linux support is already strong in that regard.


> can't really put an Apple laptop inside e.g. an MRI machine. It may run highly specialized software, needs specific acceleration hardware, etc.

This sounds more like a pitch for letting the MRI machine talk to the laptop than putting redundant chips in every device.


No. This is not a good universal solution. What if the machine needs more processing power than one laptop can provide?

Do you want to put a rack of laptops inside the machine, waste several screens and keyboards? Log into every laptop with your AppleID before you can start your machine? It's such an inelegant solution.

Instead, the x86/Linux environment lets you put multiple mainboards in a machine, or you can choose a mainboard with more processors; it is a much more flexible solution in industrial settings.


It would be a gimmick given that real-time workloads can't be offloaded via some serial connection to consumer laptops. You'd still need hardware and software capable of driving and operating the machines embedded in the machines themselves.


No. You want the computer running the thing to be as simple, known, and predictable as possible. So that is necessarily going to be a computer provided by the manufacturer, and not whatever a random doctor feels like using. Consumer devices are completely irrelevant for that use case.


> letting the MRI machine talk to the laptop than putting redundant chips in every device

This sounds like a huge security risk.


While MRIs don't use ionizing radiation like the Therac-25 did, I can think of a few bad outcomes from someone finding a 0-day on anything that can control the machine. And of course if it's read only it still has sensitive medical info we wouldn't want leaked.

https://en.wikipedia.org/wiki/Therac-25


Eventually non-Apple laptops will be sold with silicon from this process node. You just won’t be the first to use it, which is fine.

Also, Asahi is getting closer to “generally usable” at an astounding pace, so who knows.


I think that is what the parent meant by feeling left behind. It is either Apple or something underwhelming like ThinkPad with Snapdragon.


But this is how everything works. I’m sure Ferrari has certain technology before normal cars do.

Sure, you will not get computers with N3 chips for a year or two if you don’t want to use Apple, but I just don’t see why that’s a huge problem.


You are probably right. But computers were not like that for the last 40 years. I wonder about an alternative history without the IBM PC compatible. Maybe we just hit the performance wall and now the only way forward is the system on a chip. Anyway, better to move on and start thinking about your computer as an appliance.


AMD is not behind at all. Have you seen the latest benchmarks?

https://www.phoronix.com/review/apple-m2-linux/15


5900HX TDP: 35-80W depending on boost setting. Most gaming laptops set it at 60W+.

M2 TDP: 20W


Power use tends to scale non-linearly past a point - disabling turbo modes would likely significantly reduce the peak power use, and an ~18% performance difference is a pretty big buffer to lose.

The 6850u also beats it rather comprehensively according to those same results, and that's only 18-25w.

Really, you'd need everything power normalized, and even the rest of the hardware and software used normalized to compare "just" the CPU, which is pretty much impossible due to Apple and their vertical integration - which is often a strength in tests like this.
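
To make the power-normalization point concrete, here's a trivial sketch (all numbers are invented placeholders, not measurements from the linked benchmarks): a healthy raw-score lead can evaporate once you divide by the power actually drawn during the run.

    # Invented example numbers -- the point is the arithmetic, not the values.
    laptops = {
        "chip_a": {"score": 118, "avg_package_watts": 45},
        "chip_b": {"score": 100, "avg_package_watts": 20},
    }

    for name, d in laptops.items():
        ppw = d["score"] / d["avg_package_watts"]
        print(f"{name}: {d['score']} pts at {d['avg_package_watts']} W -> {ppw:.2f} pts/W")
    # chip_a "wins" on raw score but delivers ~2.6 pts/W vs ~5.0 pts/W for chip_b.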


The 6850U is comparable in power use and still has a big perf gap against the M2 in mosts tests. Though there are some tests where the M2 leads with a big gap too so maybe it comes down to software in a lot of these. Still it seems to me like Apple is not leading.


>Unfortunately for testing, as mentioned, right now there is no Linux driver exposing the M2 SoC power consumption under Linux. Hopefully this will be addressed in time but unfortunately meant not being able to deliver any accurate performance-per-Watt / power consumption benchmarks in this article. But when such support does come, it will likely show the M2 indeed delivering much better power efficiency than the Intel and AMD laptops tested. Even under demanding multi-threaded workloads, the M2 MacBook Air was not nearly as warm as the other laptops tested. It's a night and day difference of the M2 MacBook Air still being cool to warm compared to the likes of other notebooks like especially Dell XPS laptops that get outright hot under load. The power consumption metrics should also be more useful/relevant once Linux has working M1/M2 GPU support in place too.

I mean, you shove a fan on the M2 and it beats itself...


It is not, the laptop tested with the 6850U has a 30W PL2 and 50W PL1.


I don't know where you got those numbers. That it can sometimes hit higher peaks is a good thing and shouldn't be counted against it.

https://www.phoronix.com/review/ryzen7-6850u-acpi/6

According to this it should be an average of 19.3 Watts with a peak of 31.85 Watts.

Apple also exceeds the stated tdp during peaks as well but we don't have that information atm. And remember there's a 14% perf gap between the two.

My purpose isn't really to say AMD is definitely better since apple still probably takes the win in overall product, I think the MBA is thinner and that's important to me. But it's to show that x86 isn't behind in performance and that you're not making sacrifices in that department to maintain software compatibility with the x86 ecosystem.

No sacrifices imo, AMD is just fine.


That average is over the entire benchmarking suite, including single-threaded tests and tests that are loading from disk or otherwise not fully saturating the CPU. Some of the benchmarks in that power consumption number are GPU-only!

Take a look at the AOM AV1 power consumption graph (https://openbenchmarking.org/result/2208044-NE-6850U700026&s...), it clearly displays huge drops when the test stops running. The CPU is turboing to 30 watts in 75% of the run.

If you use the 15W hard cap performance numbers (low power mode), performance drops 20-40%.

> Apple also exceeds the stated tdp during peaks as well but we don't have that information atm

We actually do. Notebookcheck recorded 20W usage on a peak MT load (https://www.notebookcheck.net/Apple-MacBook-Air-M2-Entry-Rev...). The MBA is fanless so it's physically impossible for it to substantially exceed that amount without frying your lap.

> And remember there's a 14% perf gap between the two

Like I said, power is not equal at all.

> x86 isn't behind in performance and that you're not making sacrifices in that department to maintain software compatibility with the x86 ecosystem

Comparing the lowest end chip from one vendor to the highest end chip from another is not exactly a great look. Especially when the Arm chip is basically matching the x86 one while having only a few years of software optimization work.


I do think the M2 is more power efficient, but it seems close enough to me. The ThinkPad has very good battery life in real use-case testing, 15 hours etc. doing regular work. I just don't share the perspective that the fact it can scale up in power should be held against it. It's pretty typical to be plugged in when you're doing some super computationally expensive processing, while it's the casual emails etc. that need great battery life.

> Comparing the lowest end chip from one vendor to the highest end chip from another is not exactly a great look.

Is it anyone else's fault that Apple only has one SKU? The M2 is a 20 billion transistor chip while Rembrandt is a 13 billion transistor chip. I'd argue that the M2 is the higher end one. The laptops compared (MBA/ThinkPad) are the same price.

> Especially when the Arm chip is basically matching the x86 one while having only a few years of software optimization work.

So we agree it matches lol? That's what I was arguing for. Nowhere did I say Apple sucks. I default to using Apple products and have been for almost all my life. I was just trying to make a case that x86 is good enough too hardware wise.


One could argue that Ryzen's biggest pitfall is that it hasn't adopted a big.LITTLE configuration yet. Alder Lake keeps its thirsty TDPs while staying relatively respectful of your temps and battery life. It's not quite as granular as Apple's core clusters, but the work with Thread Director is a promising start. Seeing AMD push heterogeneous systems so far down the roadmap virtually guarantees that they won't get Apple-level power efficiency for a while.

On the bright side, AMD has carte-blanche to design whatever they want. Not only can they one-up Intel by implementing core clusters, but they could also one-up Apple by adding power management per-logical-core, or some weird chiplet optimizations. The sky is the limit, really.


Alder Lake is much worse temp-wise. Look at the new Dell XPS design. They literally had to remove the F keys to make room for an additional heatsink to get the newer Alder Lake CPUs to work in a reasonable way.


Those Dell XPS machines are no better than an Intel MacBook; they're designed by people who can't put function before form, and they consistently screw up their hardware design badly enough to be avoided like the plague. I'm not the least bit surprised they didn't pick the right chip for the job; two years ago it was Dell sending out emails to XPS owners warning them not to leave the machine asleep in a bag for risk of permanent damage...

I've tried a few Alder Lake laptops now (and daily-drive a 12700K desktop), and I don't really have any complaints about the thermals. Gaming, music production, video editing - none of it can seem to push the CPU past 40C under extended load. It's a solid chip that stands toe-to-toe with its contemporaries, and I reckon it's going to get scarily good once Intel transitions it from 10nm++ to 5nm.


I agree, but that still doesn't invalidate my point. They had to significantly overhaul the thermal system for Alder Lake, invalidating the point that it uses less power than the prior Intel gens.


It's behind; it looks like they are comparing the high-end 5900HX with the low-end M1 in multi-core tests.


As the other reply mentioned they are testing against the M2, and they are also testing the lower powered AMD part 6850U which does best the M2 in some tests.

Not sure why you came out so strong with such a false statement.


Dunno what you're talking about it's definitely the M2 in the test. It's also the same price.


Me too. I really wish I could buy a Samsung Galaxy Book Go 360 which is ARM and has amazing battery life, and install Ubuntu on it, but I don't think there's a known possible way to do so.

I really want a competent, high-end ARM Ubuntu laptop to happen. The Pinebook Pro has shitty specs and looks like shit with the 90s-sized inset bezels and 1080p screen.


I just spent a lot of time looking around for a laptop that had good battery life to develop on (i.e. ssh).

I eventually went with the MacBook pro with M2 because - *it actually is amazing*.

It lasts for like 3-5 days of all day use in typical vim/firefox use for me on a single charge.

I debated going with a System76, Falcon Northwest TLX, etc. for more power and x86 such that Arch Linux would be more compatible, but most laptops with x86 processors only get ~1-2 hours with a dGPU or maybe <10 hours with Windows as the OS (and that drops significantly with Linux typically).

It's unfortunate, but x86 is really awful in this area - so I went for ARM, and the best ARM-based computer I could find (aluminum chassis / great durability) is the M2-based MacBook Pro (slightly larger battery than the Air).

What's nice is it completely beat out my expectations. I have a nice and fairly new desktop with an i7 on arch. My desktop takes 12 minutes to compile duckdb. The M2? 6 minutes. Color me impressed.

Just got it recently, and I'm looking forward to sourcing Asahi Linux on it tomorrow.

Along with Linus' recent push to Linux kernel from the M2, I think it's likely that a very large portion of Linux users will be using apple silicon soon.


Yeah this doesn't work for me. I develop with a lot of sensors and hardware and drivers are a pain in the ass.

I have a box of 30 cameras and exactly zero work on Mac.

Also, fuck Mac keyboards, I can't develop with them, and the constant quacking noises and spinning beachballs that I have to meditate to while I have absolutely NO idea what's causing the delay.

Even Alt+Tab doesn't work correctly, tiling shortcuts don't work consistently, and sending bytes over USB HID API to switch logitech devices isn't reliable either.

(I own zero Macs, all of my personal machines are Linux, I was given a Mac M1 for work and it's inefficient as hell, productivity-wise.)


Likewise, I'd be on that for sure. Right now I'm using older MacBook Airs running Ubuntu as my daily drivers and a big Dell at the home office for other work.

Longer battery life and something like the Galaxy Book Go would definitely make me happy.


Ubuntu on MacBook M1 is a horrible experience. Screen tearing and lots of other issues.


Yes, I'm not touching that until it is at least a few more years older. Being on the bleeding edge doesn't pay off if you just need stuff to work.


M1 with NixOS through VMware is my daily driver because they have graphics sorted out well enough.

Otherwise I'd just use qemu.

I'm happy to say I forget i'm even on a Mac.


That's a nice trick!


Apple isn't going to somehow make 64-bit ARM into something proprietary. Sure, they have their own special instructions for stuff like high-performance x86 emulation, but aarch64 on Apple is only going to mean more stuff is optimized for ARM, which is good not only for Linux, but for other open source OSes like the BSDs.


Apple are, if anything, more helpful to the Linux community than Qualcomm.


There are no special instructions for x86 emulation.


There's a whole special CPU/memory mode for it, actually.

https://twitter.com/ErrataRob/status/1331735383193903104


I don't really need Rob to explain to me how Apple's processors do TSO ;) There are no special instructions for Rosetta regardless.


>I don't really need Rob to explain to me how Apple's processors do TSO ;)

Lemme just look up TSO and...

https://github.com/saagarjha/TSOEnabler

...Oh. Fair enough, my mistake :P

Is there not an instruction to switch into TSO mode, though? Wouldn't that technically count? :P


It happens to be a standard ARM instruction, Apple pushes some bits into ACTLR_EL1 (Auxiliary Control Register, EL1, "Provides IMPLEMENTATION DEFINED configuration and control options for execution at EL1 and EL0") in the kernel on context switch. The DTKs used a proprietary register and touched it using msr, but again, no custom instructions.

Apple does in fact ship custom instructions on their silicon, but where those are used, how they work, and how ARM lets them get away with it is a story for another day :)


You know what I mean ;)


GPU progress looks pretty great, here is the latest post:

https://rosenzweig.io/blog/asahi-gpu-part-6.html


There are other ARM providers, and personally I expect to see some of them ramping up significantly in the next couple years.

Qualcomm's taking another shot, for example. https://www.bloomberg.com/news/articles/2022-08-18/qualcomm-...


Good question

https://aws.amazon.com/pm/ec2-graviton/ is an indication that Amazon cares about Linux support for the arm64 architecture. So the question is how much the M1 varies relative to that.


x86 processors will be produced on the same nodes. Many ARM SoCs require binary blobs or otherwise closed source software, so they are not the best choice to run Linux on if you're approaching it from a longevity and stability perspective.


There is... another


What would be your concern? M1/M2 is just arm which Linux has run on for decades?


I think the concern is there is currently no 'IBM-compatible'-like hardware ecosystem around ARM. Raspberry Pi is closest, but nothing mainstream yet. And it looks like RISC-V will have a better chance than ARM.


You're right.

RISC-V barely has any end-user visible deployment yet. Despite that, it has strong platform standardization (OS-A Profile, RVA22 and a standardized boot process through the SBI and UEFI specs).

This is all just in time for VisionFive2, just announced. I suspect it will ship in large amounts.


Linux support is about much more than instruction set support. Most ARM chips are shipped on SoCs which can take a lot of work to get Linux running on, and even then it might not run well.


According to Wikipedia:

> The term "3 nanometer" has no relation to any actual physical feature (such as gate length, metal pitch or gate pitch) of the transistors.

I thought it at least maps to something physical. But it's just a marketing term.


It used to be the gate length. But eventually improvements were made that didn't neatly map to the gate length yet still doubled transistor density, so they transitioned to just dividing the number by sqrt(2) each generation (since chips are two-dimensional, making everything sqrt(2) smaller in each dimension increases density by a factor of 2).

Until 14nm the relation to transistor density was kind of true: all 14nm processes have similar density, roughly double that of 22nm, which is roughly double that of 32nm. But now even that's meaningless, and all they're doing is steps of a factor of sqrt(2) out of habit.
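
A minimal sketch of that cadence (the starting point and rounding are mine, just to show the pattern, not an official definition):

    import math

    # Each "full node" divides the marketing number by ~sqrt(2), because doubling
    # 2D transistor density means shrinking each linear dimension by sqrt(2).
    node = 28.0
    for _ in range(6):
        print(f"{node:.0f} nm")
        node /= math.sqrt(2)
    # Prints 28, 20, 14, 10, 7, 5 -- the familiar foundry ladder. The next step,
    # ~3.5, gets marketed as "3 nm" even though no physical feature measures 3 nm.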


If the M2 Pro is similar to the M1 Pro (two M1s duct-taped together with very fancy duct tape), this is interesting, because usually chips need to be significantly reworked for a newer process, and this implies an M2 core complex will be printable both at 5nm and 3nm. It would be interesting to know how much of this is fabrication becoming more standardized and how much is Apple's core designs being flexible. If it's the latter, then Apple has a significant advantage beyond just saturating the most recent process node.


The M1 Pro was not two M1s duct taped together. Their core configurations do not share the same proportions (8+2 vs 4+4).

You may be thinking of the GPUs? Each step in M1 -> M1 Pro -> M1 Max -> M1 Ultra represents a doubling of GPU cores.

Or you may be thinking of the M1 Max and Ultra. The Ultra is nearly just two Maxes.

Regarding your point about flexibility, it’s hardly unprecedented for the same core to be used on different processes.

Apple has at times contracted Samsung and TSMC for the same SoC. Qualcomm just recently ported their flagship SoC from Samsung to TSMC. Even Intel backported Sunny Cove to 14nm. And of course there’s ARM.


>> Apple has at times contracted Samsung and TSMC for the same SoC

That was only once at 14nm(Samsung)/16nm(TSMC) as Apple outsourced US chip production in TX to Taiwan.

Qualcomm uses both TSMC and Samsung on rotational basis to this date.


[flagged]


That's what the parent said


meta: HN constantly feels the need to be maximally pedantic even when what they're trying to say was already covered, and it's just very tedious and leads to an exhausting style of posting to try and prevent it.

that's really why the "be maximally generous in your interpretation of a comment" rule exists, and the pedantry is against the spirit of that requirement, yet it's super super common, and I think a lot of people feel it's "part of the site's culture" - but if it is, that's not really a good thing, it's against the rules.

Just waiting for the pedantic "ackshuyally the rule says PLEASE" reply.


Well actually, it's not just HN, I see this pattern all over tech Twitter, programming subs on Reddit etc too. I think it happens when people want to participate in the conversation but don't have anything actually worthwhile to say, so rather than say nothing they nitpick.


I want to generously overlook any particular words you used and totally disagree with your main point. :) I think that it’s a positive feature of threaded comments to spin off side discussions and minor corrections. In this case, the correction was wrong, but if it was right, I’d have appreciated it in addition to whatever else ended up being written by others.

What’s bad for discussion is when those receiving the reply feel attacked, as if the author of the minor point was implying that nothing else was worth discussing. I wish that neither parent nor child comment authors felt the urge to qualify and head off critical or clarifying responses.


> I wish that neither parent nor child comment authors felt the urge to qualify and head off critical or clarifying responses.

I actually would go the other way and say that preemptively laying out a rebuttal to a common/superficial counterargument is an important supportive component of an argument. I personally wish that comment authors wouldn't take it as a personal slight when a common/weak counterargument is addressed preemptively.

Any scientific paper has a "what came before and why it's wrong and sucks, and why my approach is awesome and cool and better in every way" section and those are really the same thing - preemptive addressing of criticisms that reviewers/etc would make. You wouldn't say those are dismissive and superficial because they "belittle the previous authors and dismiss their work and arguments", or that "preemptively rebutting an argument is offensive to the reviewer". Why is that a bad thing?

"I am aware of a common concern X and I think it doesn't hold water because..." is a pretty reasonable thing to post in a casual debate and it really diminishes the discourse for people to take offense to it and for people to have to excessively censor or limit their discussion as a result. It's not good for the community.

Anyway, the other thing is, a lot of debates really come down to "values differences", which is a kind of X-Y problem. For example, in a political debate, a lot of debates over a policy aren't really about the policy, but rather a fundamental disagreement about whether government can (or even should attempt to) efficiently perform an action or regulate a certain kind of conduct. And preventing certain underlying ideals or principles from being surfaced in the discussion tends to lead to rather pointless debates where nothing is agreed, because it was never about the policy in the first place. So the policy actually worsens that problem as well, you just end up with tilting at policy windmills instead of addressing the actual area of disagreement (which is not the policy at all).


There was some speculation in another thread not too long ago that the M2 design was originally 3nm, and was backported to 5nm after the fact.


I mean, Intel used a "tick-tock" model for a decade. (New microarchitecture, then die shrink, then new arch...) https://en.wikipedia.org/wiki/Tick%E2%80%93tock_model


I wonder if they'll also bump the M2 machines to 3nm silently, if the efficiency bump is minor? Apple previously split the A9 between TSMC and Samsung at two different node sizes, so it wouldn't be completely crazy.

Or perhaps they're content to leave the M2 as 5nm for easy performance gains in the M3 next year. It also has the advantage of keeping the cheapest machines off of the best node size, which is surely more expensive and more limited than 5nm.


I think you're thinking of M1 Ultra - which is 2x M1 Max on an interconnect.

M1, M1 Pro and M1 Max are separate dies.


There's no reason to assume that a 3nm and 5nm M2 core is identical in that way. It's probably similar to the changes Intel used to do for die shrinks when they were doing tick-tock.


You are mixing up the M1 Ultra, which is two M1 Max dies taped together, with the M1 Pro, which is a weaker variant of the M1 Max.


The headline is just blatant speculation. It was written before Ming-Chi Kuo tweeted about the M2 Pro being on enhanced 5nm. Once he said that, the article was updated to include his tweet - adding an element of uncertainty. Ming-Chi Kuo is a credible news source; cultofmac is an aggregator - they don't care if they're wrong and they optimize for eyeballs.

This post reeks of fanboy-ish excitement; there's nothing to see here.


So we'll see at least 1-2 years of Apple Silicon being at least one node ahead of the competition. I am curious how long Apple will be able to keep this lead, and what the perf/watt will look like when (if?) AMD reaches node parity with Apple in the near future. Or when perhaps Intel uses TSMC as well, on the same process node.


I think this was Apple's game for a LONG time. They have led in mobile chips to the point where they are sometimes 2 years ahead of the competition.

They do this using their monopsony power (they will buy all the fab capacity at TSMC and/or Samsung, and well before competition is aiming to do so either).


> They do this using their monopsony power (they will buy all the fab capacity at TSMC and/or Samsung, and well before competition is aiming to do so either).

It's not just buying power - Apple pays billions of dollars yearly to TSMC for R&D work itself. These nodes literally would not exist on the timelines they do without Apple writing big fat checks for blue-sky R&D, unless there's another big customer who would be willing to step up and play sugar-daddy.

Most of the other potential candidates either own their own fabs (intel, samsung, TI, etc), are working on stuff that doesn't really need cutting-edge nodes (TI, Asmedia, Renesas, etc), or simply lack the scale of production to ever make it work (NVIDIA, AMD, etc). Apple is unique in that they hit all three: fabless, cutting-edge, massive-scale, plus they're willing to pay a premium to not just secure access but to actually fund development of the nodes from scratch.

It would be a very interesting alt-history if Apple had not done this - TSMC 7nm would probably have been on timelines similar to Intel 10nm, AMD wouldn't have access to a node with absurd cache density and vastly superior efficiency compared to the alternatives (Intel 14nm was still a better-than-market node, compared to the GF/Samsung alternatives in 2019!), etc. I think AMD almost certainly goes under in this timeline, without Zen2/Zen3/Zen3D having huge caches and Rome making a huge splash in the server market, and without TSMC styling on GF so badly that GF leaves the market and lets AMD out of the WSA, Zen2 probably would have been on a failing GF 7nm node with much lower cache density, and would just have been far less impressive.

AMD of course did a ton of work too (they came up with the interconnect and the topology), but it still rather directly owes its continued existence to Apple and those big fat R&D checks. You can't have AMD building efficient, scalable cache monsters (CPU and GPU) without TSMC being 2 nodes ahead of the market on cache density and 1 node ahead of the market on efficiency, and TSMC wouldn't have been there without Apple writing a blank check for node R&D.


I do sometimes wonder if we could ask, and get an honest answer to, "OK, well then who wants to pay for all this from step 1?"


China.


They absolutely use their power (aka money) to buy fab capacity, but they are also responsible for a ton of investment in fabs (new fabs and new nodes). Because of that investment they get first dibs on the new node. In the end it's up to the reader to decide if this is a net positive for the industry (would we be moving as fast without Apple's investment, even accounting for the delay in getting fab time until after Apple gets a taste?).


> they will buy all the fab capacity at TSMC

What would motivate TSMC to choose to only have 1 customer?

TSMC is known as "huguo shenshan", or the "magic mountain that protects the nation". What would motivate TSMC to choose to have their geopolitical security represented by only 2 senators?


Because Apple is willing to pay a premium that it easily passes on to its loyal customers.


IIRC they were using TSMC before TSMC had a material process lead, and supported them (and moved away from Samsung) with big contracts and a long-term commitment. Hardly surprising that they get first go at a new process. Not a riskless bet, but one that has paid off.


Exactly. You cannot look at that as if they decided 2 years ago to just buy all the capacity. Their relationship with TSMC goes back way further than that, and there have been several ups and downs along the way.


Yeah, this is what I am wondering as well. If nobody else ends up switching to ARM in the laptop/desktop space, and eventually AMD and Intel are making 5 or 3nm chips, then surely this massive lead in power efficiency is going to close. At the current levels the new Apple computers seem awesome - but what if they end up only 10-20% more efficient?


You do have ARM in Chromebooks. Any wholesale switch for Windows seems problematic given software support. But beyond gaming, a decent chunk of development, and multimedia, a lot of people mostly live in a browser these days.


Can anyone speculate what will happen to Apple if for any reason TSM abruptly stops supplying chips? Will their revenue just drop by 80%?


Everybody except Intel and Samsung is screwed if TSMC stops making chips.

Apple (and the rest of the mobile industry) would try to move to using Samsung's fabs and Intel would go back to being the undisputed king on desktops, laptops, and servers.

I think TSMC has like 2-3 times the fab capacity that Samsung does right now for modern chips, so there would be a huge chip shortage.

Apple's $200 billion cash pile would come in handy when trying to buy fab capacity from Samsung so they might come out ahead of less cash-rich competitors.

There would be a significant hit to processor performance. Samsung fabbed the recent Snapdragon 8 Gen 1, which trails Apple's contemporary A-series chips by a wide margin in single-core performance.


>There would be a significant hit to processor performance.

You are probably right. Look at the gains Qualcomm saw by migrating from Samsung to TSMC: https://www.anandtech.com/show/17395/qualcomm-announces-snap...


I never thought you’d get gains by simply switching vendor. 4nm to 4nm and they still saw gains.


The thing is that 4nm doesn't actually mean anything. Intel's 10nm node is mostly on par with TSMC N7, which caused quite a bit of confusion, so Intel renamed a slightly improved version (something they would have called 10nm++) to Intel 7. It's all just marketing and has been for 15 years or so.


The performance of that core is not necessarily all dependent on the node... Apple have been lauded for their designs. Samsung / arm, not so much.


> Samsung / arm, not so much.

I don’t think that is fair. ARM and Apple have very different objectives. Apple does not care if its CPU design division makes any money and they can get away with very large SoCs that rely on cutting edge manufacturing. ARM on the other hand needs to sell designs that manufacturers can use to make money. So there is a tendency to go towards simpler designs that are easier to manufacture. So sure, Apple has an edge in performance. But they don’t have the same business model or optimise for the same things.


With $200 billion, why can't Apple just fabricate their own chips?


You mean TSMC.

Well, they would be seriously hurt. However, does that matter when almost every tech company (including Qualcomm, MediaTek, AMD, Apple, ARM, Broadcom, Marvell, Nvidia, Intel, and so forth) would also be harmed?

TSMC going down is basically the MAD (Mutually Assured Destruction) of tech companies. Kind of a single point of failure. Intel would probably weather it best but would still be hurt because they need TSMC for some products. Plus, well, in the event of TSMC's destruction (most likely by a Chinese invasion), Intel might raise prices considerably or even stop sale to consumers especially as their chips would now have major strategic value for government operations. NVIDIA might also survive by reviving older products which can be manufactured by Samsung in Korea, but same situation about the strategic value there, and getting chips from Korea to US might be difficult in such a conflict.


> TSMC going down is basically the MAD (Mutually Assured Destruction) of tech companies.

This is why it's appropriately called the "Silicon Shield", within Taiwan: https://semiwiki.com/china/314669-the-evolution-of-taiwans-s...


Does anyone have any resources that explain the historical reason(s) TSMC became what it is? How did the world's most important hardware manufacturer manage to get constructed on a geopolitical pinchpoint?


This podcast covers most of the important points:

https://www.acquired.fm/episodes/tsmc


Way down the road I hope Tim Cook writes a memoir. I’m curious as to his unvarnished thoughts about doing business in (and being so reliant on) Taiwan and China. I’m sure he can’t publicly express some of those thoughts without unnecessarily adding risk for Apple but he must have lots of interesting opinions about things like being reliant on TSMC vs trying to build their own fabs, etc.


I recall Tim Cook speaking about it, actually. He said Apple is there not because it's cheap but because they actually have unmatched engineering capacity. He was talking about how much quicker everything can get moving and how much accessible engineering talent there is.

It makes sense; after all, there are much poorer countries in the world, but you can't expect Afghanistan, for example, to say "okay, okay Apple, we will do the chips and the phone at a 25% discount". In fact, you can't expect even the EU or the USA to easily start doing that, because the manufacturing know-how is long gone with the de-industrialisation of the West. Sure, there's some talent and capacity left, but it's not anywhere near the scale of what exists in Taiwan and China.


Based on how Jobs' life and legacy are discussed at Apple, I doubt Cook will ever speak frankly about matters that affect Apple.

Kara Swisher is interviewing him and Laurene Powell Jobs soon. I expect that interview to continue to focus on only the best possible characterizations of Steve.


If WW3 goes hot beyond the regional proxies, we all lose pretty hard.

A new iPhone 14 or M2 Mac will be the kind of thing we all chuckle about having once cared about.


If the economic situation we are all in at the moment is because of the Ukraine war, then I think if WW3 happens, slower iPhones will be the least of our concerns.


If TSMC poofed out of existence because of a few bombs from China or a freak natural disaster, global GDP would drop significantly and quickly. It's one of the biggest SPOFs (single points of failure) in the world that I'm worried about.


Repeat of the car market? I'm going to make a bunch of money off the old computers collecting dust here, and Apple's going to have to unlock support for new OSes on them, making all its money on services.


TSMC announced its new Phoenix Arizona fab would begin mass production in the first quarter of 2024.


By that time it'll be far from bleeding edge. The Taiwan fab will be close to N2, while the Arizona fab will be able to produce 5nm-generation chips; that'll be 4-5 year-old tech by then.


Similarly, Intel just announced a new partnership to accomplish something similar. Also in Arizona:

https://money.usnews.com/investing/news/articles/2022-08-23/...


That's funny; we are literally about to run out of water over here in AZ.


For what reason would TSMC abruptly stop supplying chips, short of war? There's nothing other than war that would cause it. And if there's a war, Apple's profits are the least of our problems.


A meteor, a large tsunami, an earthquake, a solar flare, an extremely infectious disease that isn't as mild as Covid-19. There are so many natural disasters that could cripple or outright destroy TSMC's production facilities.


Meteor and Solar Flare that are big enough to cause problems would be global problems.

Large tsunami is possible, but I doubt they built these things right next to the ocean, and as the other poster says they're built to withstand large earthquakes.

Infectious diseases are another global problem.


The claim was "There are no things except war that can disrupt TSMC". There was nothing about the cause having to have no global effect.


Climate change impacts or global pandemics could also significantly impact TSMC's operations. Or, Chinese actions that are somewhat short of full-on war.

Also relevant here: TSMC is building a chip fab in the US.


> For what reason would TSMC abruptly stop supplying chips short of war?

Earthquakes would be top of my list of things that would cause problems.


The facilities are made to mitigate and reduce damage due to earthquakes.

https://esg.tsmc.com/en/update/governance/caseStudy/1/index.....


I wonder how many people have to collaborate to get a 3nm semiconductor out the door. TSMC has 65,000 employees. ASML has 32,000 employees and 5000 suppliers. The complexity of it all is unimaginable!


It is possible it is for a Mac Pro, but that would be mostly for bragging rights: a halo processor that beats everything AMD and Intel have in power efficiency and in single-threaded and multi-threaded workloads. It all depends on how good the yields are on 3nm.
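
As a rough illustration of why 3nm yields matter so much more for a big desktop-class die than for a small phone SoC, here's a minimal sketch of the classic Poisson defect-density yield model. The defect density and die areas below are made-up placeholders, not TSMC figures.

    import math

    def poisson_yield(defect_density_per_cm2: float, die_area_cm2: float) -> float:
        """Fraction of dies expected to be defect-free under a Poisson defect model."""
        return math.exp(-defect_density_per_cm2 * die_area_cm2)

    d0 = 0.5  # assumed defects per cm^2 on an immature node (placeholder value)
    for name, area_cm2 in [("~100 mm^2 phone-class SoC", 1.0),
                           ("~400 mm^2 desktop-class SoC", 4.0)]:
        print(f"{name}: ~{poisson_yield(d0, area_cm2):.0%} defect-free")
    # Yield falls roughly exponentially with die area at a fixed defect density,
    # which is why a large die on an immature node costs far more per good chip.

Real-world models (Murphy, negative binomial) and defect clustering change the exact numbers, but not the basic sensitivity to die area.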

As an aside, I am not sure who a Mac Pro computer would be for at this stage. I wonder if they would design it to be able to slide into a rack mount. How many Mac workloads need a computer more powerful than a Mac Studio at this point? It used to be that you needed a pro machine to properly use Photoshop or Illustrator, and now a MacBook Air is fine most of the time and a Mac Studio is perfectly capable of doing even the most demanding video editing. The only things that come to mind are render workloads for animation houses and for video game developers. Will the chassis design be tailored for that? Will it be a good value for those types of customers, who are not going to care what it looks like because it is going to be in a rack or in a corner?


I’m entirely unclear if an ARM-based Mac Pro would look anything like the old Mac Pro.

Well, actually: whether it would look anything like the late, lamented cheese grater Mac Pro. I worry it would be a lot more like the trashcan Mac Pro.

Because for a long time, the Mac Pro has meant expandability. Lots of RAM slots. PCIe slots. Drive bays. Big-ass GPUs. Something adaptable to pro workflows, namely audio and video capture and production.

Is anything like that possible with the Apple Silicon system? I think it's extremely unlikely they'd support discrete GPUs. And I really don't think they will support old-fashioned socketed RAM; they seem to be all-in on on-package memory. And PCIe cards? Shrug? No idea, because we've not seen anything like that yet.


Is it just a case and a logic board that you then plug SoC modules onto? Would Apple ever sell such a thing? Better question: would Apple ever sell the individual modules?

On one hand, I don't think this will ever make them money; on the other, it might just be a halo product that is worth the cost in good press. A shiny, expensive thing about which Apple can say, "look, see, we have the fastest personal computer you can buy."


It's for Mr. John Siracusa. Plus people who want a modular computer, though I don't know how modular an M* Mac Pro would be.


One missing piece for a new Mac Pro is RAM. The current Mac Pro can be equipped with 1.5 TB; the Mac Studio's limit is 128 GB.


So it seems like the M2 is really an "M1+" or "M1X", whereas the M2 Pro/Max/Ultra are really the second-generation Apple Silicon.

That's fine, in my opinion. M1 is still an amazing chip, and if that product class (MacBook Air, entry iMac, etc.) gets even marginal yearly revisions, that's still better than life was on Intel.


Actually, I get a different impression. Although the M2 benchmarks have been impressive nonetheless (the M2 being based on the A15 and not the A14 makes it more than an M1X, imho), the issues around throttling and thermals with the MacBook Air make it seem to me that the M2 was actually designed to be on the 3nm node, which then seems to have been delayed by TSMC. That the rest of the M2* line will presumably be made on the 3nm process reinforces this impression for me.

I was planning on getting the redesigned M2 Air, but with the above in mind (which is just speculation) it got me thinking again.


I know I'm being irrational about this, but for some reason this makes me lean toward getting an M1 Air or 13-inch Pro rather than an M2: it feels like, with the M2, performance gains are being squeezed out of the same (or similar enough) process as the M1 rather than from a significantly changed process, at the cost of efficiency.


I almost did this, but the return of MagSafe (which frees up a USB port) and the display improvements were worth it to me. Oh, how I’ve missed MagSafe.


Yeah the things you mention definitely enter into the equation.


The hardware is much nicer on the M2 Air though.


An important note is that the M2 MacBook Pro and the M2 Air appear to be essentially the same hardware at the same price - but the MacBook Pro does have a slightly larger battery.

And it is quite incredible. I've been very impressed that I can work all day on an M2 MacBook Pro in vim - for several days - with only one charge.

The battery life is incredible.

I'm about to switch to Asahi Linux, so I hope it stays that way, but I am almost certain the battery life will still be better than on any x86 laptop running Linux I could've gotten.


You'd definitely save $200 since base M1 Air is $999 vs M2 Air being $1199.


They're also available used and a lot cheaper now.

I'm plenty happy with mine and don't plan to switch any time soon. Yeah, the M2 Air looks a bit nicer, but it's more of a Pro follow-up with its boxy design and ... eh, the M1 Air is totally fine in all aspects I can spontaneously come up with. It's a really good device and the laptop I might recommend for years to come. It getting cheaper and cheaper will only increase the value you'll get.


I'm curious what actually unifies the "MX" family for some X. There are different chips in the series, and apparently they can even be on different process nodes and keep the name.

Anybody know more detail?


M1 uses the same Firestorm and Icestorm cores as the A14 SoC. M2 uses the same Avalanche and Blizzard cores as the A15. So one can argue about the importance of the differences, but they are clearly two different generations.

M1 Pro, Max, and Ultra still have the same cores, just different numbers of them. One would assume that the M2 derivatives will be the same: different combinations of the same cores.
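
For what it's worth, the relationship described above can be summarized roughly like this. It's purely illustrative (core pairs as stated in the parent comment), not an exhaustive spec:

    # Rough, illustrative mapping of chips to their performance/efficiency core pair.
    core_generation = {
        "A14": ("Firestorm", "Icestorm"),
        "M1": ("Firestorm", "Icestorm"),
        "M1 Pro": ("Firestorm", "Icestorm"),
        "M1 Max": ("Firestorm", "Icestorm"),
        "M1 Ultra": ("Firestorm", "Icestorm"),
        "A15": ("Avalanche", "Blizzard"),
        "M2": ("Avalanche", "Blizzard"),
    }

    def same_generation(chip_a: str, chip_b: str) -> bool:
        """Two chips share a generation if they use the same core pair."""
        return core_generation[chip_a] == core_generation[chip_b]

    print(same_generation("M1 Max", "A14"))  # True: same Firestorm/Icestorm cores
    print(same_generation("M2", "M1"))       # False: different core generations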


Microarchitecture.


Marketing?


That's what the M is for


It's possible it's nothing but marketing, but I didn't want to assume that without knowing what I'm talking about


Is the node process size comparable across architectures?


No, not really. The "3nm" in the "3nm process" is not a measure of anything in particular, and even if it is a measure, the measure may or may not be in the neighborhood of 3nm.

Several years ago, fabs started naming each next-gen process with a smaller number of nanometers, even if the process size didn't change. It's just marketing now.


Process and architecture are mostly independent.

The process node is determined by the physical manufacturing; the architecture is the design that gets laid out on that process. You could make an ARM core on an Intel process (which I think even happens in some of their test phases). So yes.


No. At these nodes it's all marketing.


"HVM" = High-Volume Manufacturing

Used in the comments here... I had to think for a minute to get it.

HVM is full production status, as opposed to early risk or testing.


Has this analyst ever been correct?


hey, anyone at Apple or able to influence there:

my use cases for machines expand when they can address 128 GB of RAM and above.

everything between 32 GB and 96 GB covers the same use cases, for me. nice to have, but it doesn't dramatically change anything.

just in case anyone needed to know


Thanks I will let Tim know.


Excellent. I've been waiting for an M2 MacBook Pro.


That already exists.

https://www.apple.com/shop/buy-mac/macbook-pro/13-inch

Did you mean an M2 Pro MacBook Pro?


It's so weird that they made another one, keeping the old design and the Touch Bar alive.

I wonder whether they did focus group testing and found that some significant minority likes them enough for it to be worth it.


womp womp. hard dislike


What will the industry do when it reaches 1nm?


The nm in the process names is more marketing than a reference to something physical.

For example:

> The term "5 nanometer" has no relation to any actual physical feature (such as gate length, metal pitch or gate pitch) of the transistors. According to the projections contained in the 2021 update of the International Roadmap for Devices and Systems published by IEEE Standards Association Industry Connection, a 5 nm node is expected to have a contacted gate pitch of 51 nanometers and a tightest metal pitch of 30 nanometers. However, in real world commercial practice, "5 nm" is used primarily as a marketing term by individual microchip manufacturers to refer to a new, improved generation of silicon semiconductor chips in terms of increased transistor density (i.e. a higher degree of miniaturization), increased speed and reduced power consumption compared to the previous 7 nm process.

https://en.wikipedia.org/wiki/5_nm_process
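
To make the gap concrete, here's a trivial back-of-the-envelope comparison using only the IRDS figures quoted above; nothing here is TSMC-specific:

    # The "5 nm" marketing name vs. the actual pitches quoted for that node class.
    node_marketing_nm = 5
    pitches_nm = {
        "contacted gate pitch": 51,
        "tightest metal pitch": 30,
    }

    for label, pitch in pitches_nm.items():
        ratio = pitch / node_marketing_nm
        print(f"{label}: {pitch} nm, roughly {ratio:.0f}x the marketing number")

Neither physical feature is anywhere near 5 nm, which is the whole point of the Wikipedia passage.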


There's plenty of room at the bottom. (pm would be next).


9A, 8A, 7A, etc. (A for angstrom)

1 nm = 10 A



