TSMC cancels chip price cuts and promises $100B investment surge (nikkei.com)
297 points by baybal2 on April 1, 2021 | 245 comments


1. TSMC were spending ~$30B per year on capital expenditure already.

2. Since the leading-edge node is forever increasing in cost, the next three years, i.e. 3nm in 2022 and 2nm in 2024, are expected to cost even more. A ~$100B investment is aligned with their initial plan / trend / target anyway.

3. So really the major news is stopping (or, to be precise, delaying) the price reduction. Which is unusual, but understandable given the current demand situation. And no, it is not TSMC's fault. You should ask how every single fabless semiconductor company failed with its demand and supply chain analysis and projections. Although one could argue it is not their fault either, since their clients gave them wrong projections. The only company that is not affected is possibly Apple.

4. There has been mounting pressure from investors, politicians and the mainstream media over Intel IDM 2.0 and the supply problems. This announcement feels more like it's addressing those concerns.

5. Remember that both Samsung and Intel are expanding capacity, and even GF and many other smaller players. It took DRAM and NAND three years to catch up with demand (and then oversupply), which in hindsight is pretty damn impressive, although during ~2016-2019 everyone felt awful.


> The only company that is not affected is possibly Apple.

This is an interesting point. Apple is pretty hard-nosed about their predictions and capacity reservations, to the point where they could be funding part of this expansion at TSMC (as they have done for Hon Hai, among others, for decades).

I continue to be astonished by Apple’s ability to manage their supply chain so tightly (which requires not just an iron fist but very very intense internal process). They seem to be the only ones who can manage to do that.

And it’s not like other big companies are lazy; the part that really amazes me is that their process hasn’t leaked out to other companies, as so much else does in the Valley.


I just saw something about this. The reason Apple hasn't been impacted is that their sales have been dropping for the past few years on iPhones and other devices, and that trend didn't change during the pandemic. Many of the other players predicted drops during the pandemic, and initially demand did drop, but then it dramatically increased, especially in the desktop market (AMD, Nvidia, etc). It is hard to fault companies for not predicting a massive increase in a market that had been decreasing for decades(?). There is limited capacity at TSMC and Samsung. They leave buffer, but that was quickly consumed because almost everyone needed more capacity. Since it can take years to increase capacity (build a new fab, etc), they have no room to allow any companies to adjust their capacity reservations.

So at the end of the day, it isn't that Apple was necessarily better... their demand just didn't change; it continued the downward trend in certain product areas, giving them breathing room.


It does look like Apple's unit sales (of phones at least, which are the lion's share of their products) have been declining (conveniently, they stopped reporting unit sales long ago), but I don't think that really applies here. Apple's share of the total semiconductor market is far from dominant.

What's really interesting is how well they couple demand to shipment. For one, they appear to have low levels of unsold inventory, both on the incoming (BOM) side and the output side (unsold manufactured output, such as phones), reflecting some incredible discipline. They also manage to insure themselves against supply shock (because such a tight tolerance for overpurchase and overproduction makes your supply chain more brittle) by making big moves in their supply chain: financing their suppliers, doing manufacturing R&D on their suppliers' technology and then supplying that tech back to the suppliers, taking big positions in commodity markets (e.g. famously in DRAM a few years ago), and other such things you can do when you have so much cash on the balance sheet.


If they have so much cash and margin, why don't they keep more inventory instead of all this supply chain black magic?


Better use of capital and better supplier flexibility (why consume your suppliers’ capacity on stuff you don’t need when you could switch them to the next gen thing you need?).

You could think of it as avoiding bufferbloat in the physical domain.


Also refreshes are really annoying/unprofitable when you have millions of units of old tech sitting in warehouses. Apple has pretty good science on the sneak peek / release / fulfill cycle of new tech. You can basically get day-1 or at least week-1 delivery of any new product.


One thing to remember is that the current CEO Tim Cook started in the supply chain management side of the company...

And did that job for Steve Jobs during the growth of the iphone.

It would be expected that Apple would have extensive insight into supply chain and capacity, seeing as the head of the company was/is the one who did that job.


I'm constantly surprised by how many people seem to forget that Tim Cook got to where he is on the basis of being a supply chain logistics whiz.

If there's anything I expect Apple to excel at, almost without question, under his leadership, it's supply chain management.


The comment reminds me of Nokia's end times. Everyone said they had mastered logistics.


I wasn't commenting on Apple as a whole; I'm actually not a huge fan of the company overall under Cook's leadership.

But supply chain management? They excel at that, which doesn't surprise me given who the CEO is.


It's also a strength on the demand prediction side, which is typically done by marketing, not the manufacturing side of the business.

Yes, I know Cook came from Compaq for his supply chain chops, but there are amazing supply chain folks elsewhere in the industry too. On this dimension Apple is in a class by itself. Perhaps that fixation does come from Cook, as supply chain wizards typically don't rise that high in the corporate hierarchy.


Apple is experiencing supply chain issues too: there is a shortage of their laptops right now if you try to make bulk orders (>1000 units). I have no visibility into their phones.


They tend to be out of stock when the next product is coming. I wish they saw that as a problem too.


There aren't many companies that can compete with $100 billion capex demands. While the loans are freely available, the development risk remains. TSMC has a proven track record of pushing into the latest node - Intel does not. Samsung has been a little behind for a while, while still keeping pace.

Following this trend, 2nm will require something on the order of $200 billion in capex outlays. Given that being first to market carries a premium, the loan will be high risk. If you borrow thinking you'll be first and end up late by 4 years, then you and your bank are going to have a problem.


On the bright side, the new processes will have a longer life. When your 130nm process is replaced by 14nm within less than a decade, there's a huge gap there and the old process isn't worth much. If you go from 14nm to 2nm, you can still use 14nm for a lot of price-sensitive customers.


The density increase is similar between those two jumps, isn’t it? Density significantly drives the price of chips since you can use it to create more chips per wafer. Maybe developing chips at 2 nm will be much more difficult due to quantum effects?
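On the density point, here's the usual back-of-envelope cost-per-die arithmetic (a rough sketch; the wafer cost, die area, and yield below are all invented for illustration):

    # Rough cost-per-die arithmetic; every number here is made up.
    import math

    wafer_cost = 10_000.0         # $ per 300mm wafer (assumed)
    wafer_diameter_mm = 300.0
    die_area_mm2 = 100.0          # assumed die size
    yield_rate = 0.8              # assumed fraction of good dies

    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    dies_per_wafer = wafer_area / die_area_mm2   # ignores edge loss
    cost_per_good_die = wafer_cost / (dies_per_wafer * yield_rate)
    print(f"{dies_per_wafer:.0f} dies/wafer, ${cost_per_good_die:.2f} per good die")
    # ~707 dies/wafer, ~$17.68 per good die. Halve the die area via a
    # density shrink and the cost per good die roughly halves, as long
    # as wafer cost and yield hold steady - which is exactly what gets
    # harder at the leading edge.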


Before you reach quantum effects, you'll run into crosstalk problems where electrons on one circuit path can jump through the silicon substrate to other paths, effectively causing a short. There is a limit to how much you can do with silicon as your substrate, simply because of the size of the atoms.


That’s quantum tunneling, right? That’s one of the quantum effects I was referencing.


The "x nm" label now tends to be a marketing number, so it's not really comparable to the old numbers anyway.


The law of diminishing returns. Shrinking brings lower and lower benefits because the jumps may be hefty in relative values (10% smaller) but in absolute values (nm) they don't bring the same benefit as they did 15-20 years ago.

On the other hand, even in practical terms the older nodes are more than adequate for most customers these days simply because they are far closer to a modern node than the equivalent would have been 20 years ago (for the reason stated above).


Every single fabless semiconductor designer failed to predict demand because their customers lied to them. The car industry pulled a really dick move last year, and then pulled an even more dick move in the opposite direction, and now everyone is screwed. Back in early 2020 the car manufacturers decided that demand for cars would go down, and because they have a religious aversion to keeping any stock, cancelled lots of orders with their vendors, screwing said vendors over. Their vendors could not afford their fab slots because automotive is such a big part of their revenue, so they cancelled their fab slots. Those slots were happily sold on to entertainment and computing customers, who anticipated a jump in demand due to people staying home more. So far so normal.

However, the car industry got it badly wrong - people, afraid of public transit, started buying more cars rather than less. The car industry, being screwed due to their just-in-time religion of zero stock, was faced with their production lines stopping so they called up all their vendors, and asked for those orders back, and some more on top. The vendors then tried to get their fab slots back, and were told to come back next year. Some of them ended up buying other fabless IC designers out of their slots, causing the problem to spread. Others cancelled their existing orders to other customers, and auctioned off their existing inventory to increasingly desperate car manufacturers at a 6x to 8x premium. Anyone who was not prepared to pay that or didn't act fast enough was screwed. From that point on, a bunch of companies that depend on those lines of microcontrollers had to rapidly redesign their product to use another device, taking even more devices off the market with unplanned demand. The users of those devices then had to move to others, causing even more availability cascades. This is how two nasty moves by the car industry caused global market disruption in a number of industries that depend on electronics. This is not a normal "demand has increased, and industry can't keep up" event, it's elephants dancing and trampling everyone else underneath.

This is further aggravated by the top three automotive semiconductor suppliers (NXP, Renesas, Infineon) having their facilities destroyed in two unrelated disastrous events - a fire at Renesas' wafer processing plant, and Texas freezing over, destroying NXP's and Infineon's fabs through cleanroom contamination and process interruption. Those events took out months' worth of production, and destroyed product that had already been sold before manufacture. This would have been recoverable in a normal market, because distributor stock could hold a couple months, but in this case it was game over for non-automotive customers as all distributor stock was already gone by then.

I see this in my work every day now - customers coming to me for help with redesigning products to use a different microcontroller, or help with sourcing parts from unusual sources because their normal channels are gone. I've been in this industry a long time and never seen anything like this before. This is not a failure of supply chain analysis and projection on the part of the fabless semicon vendors. This is their biggest customers fucking their vendors over not once but twice by lying to them about their own demand.


Now I know why my business, which has nothing to do with semiconductors, is booming too.

I own a store that sells parts and tools to attach things to each other (originally we sold nuts and bolts but those are unprofitable without gigantic volumes).

In the last months suddenly we started to get an unusually high amount of orders from factories intending to use our products in manufacturing, while until then all we ever got was orders for replacements parts and maintenance.

Since we are a store, not a manufacturer, our prices aren't as low as possible... so we are very confused about why Toyota/Honda and others, for example, called us wanting parts instead of calling our supplier, since we know they have their contact anyway (I won't say who it was, but for example one time a manufacturer asked us if a product would help them, and asked us to design something for their production line... we did, then they ordered the product from our supplier and never paid us anything for all the "free" engineering work we did for them).

So now I can guess what happened: the car industry cancelled non-semiconductor orders too, their slots got sold, and now they want them back... so my store, which tends to hold higher stock than others, keeps getting new clients willing to pay through the nose for parts, because our own supplier doesn't have them in stock and can't deliver any in the short term...


The thing about a car is that its BOM is monstrously large.

If you have 1000 ICs in a car, and one of them is missing, that's a $100,000 car you can't sell.

The car industry haphazardly buying out the last stocks of ICs will not help them work around that "one missing chip" problem, and production lines are potentially stuck for many more months.

The panic was undue, or better said, of no use. They are screwed, and no desperate move will improve the situation now if they can't assure 100% availability of each and every component on their BOM.
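To put rough numbers on that (a toy model assuming each part is independently in stock with the same probability; all figures invented):

    # Toy model: a car ships only if every one of its N parts is in
    # stock, each independently available with probability p.
    def buildable_fraction(n_parts: int, p_each: float) -> float:
        return p_each ** n_parts

    for p in (0.999, 0.9999):
        print(f"1000 parts at {p:.2%} each -> "
              f"{buildable_fraction(1000, p):.1%} of cars buildable")
    # 1000 parts at 99.90% each -> 36.8% of cars buildable
    # 1000 parts at 99.99% each -> 90.5% of cars buildable

Even at 99.9% per-part availability, nearly two thirds of cars can't ship.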

I have a few buddies who went to work on an ECU for MTU/Siemens. They ran exactly into that when their companies went for a complete redesign of their ECU to run on consumer STM32 parts. They had hoped that such old 180nm-130nm CMOS chips would easily tolerate around 130°C; they did tests, and it worked fine, but then they ran into an undocumented high-temperature protection kicking in on a slightly newer chip revision - after they had already bought a few million of them, and the other ICs for the new design, into inventory.


And now STM32s are really tough to find for hobby electronics, when one used to find them everywhere, and cheap.


I was looking yesterday, the STM32 chip used in the typically $5 blue pill now goes for $16 a piece.


Dude, I ended up buying like 10 or 15 blue pills a year or two ago with genuine STM32 chips for like $2 each. If they are clones, they have all the peripherals and the complete RAM/flash, from my testing.


What’s a $5 blue pill?


It's a small development board (similar to an Arduino but based on the STM32F103C8T6). It's often blue, so people call it the "blue pill".


It's a microcontroller board similar to others like the Teensy, Elite-C, Proton C, Pro Micro, etc...

All of which have been used in custom keyboard builds but are capable of doing other things as well.


Ouch.

It seems like the way to reduce risk would be interchangeable parts from multiple suppliers?


>undocumented high temperature protection

sneaky market segmentation by STM?


I doubt it. You want a part to shut down if it knows it is going to fail, not pull everything high and cause the device to catastrophically malfunction.


>However, the car industry got it badly wrong

You write at the beginning and end of your post that the car industry lied to their vendors, but then you also write they got it badly wrong, which means they were incorrect about their projections of demand. Surely it can't be both, and it sounds like the car industry did not lie, but simply were wrong about their predictions for the future.

Or am I misunderstanding?

Nevertheless, thanks for providing context for the whole situation.


The car industry got their demand planning wrong, so they lied to their vendors that they wouldn't need that inventory. They then turned around and went "hey actually screw that we want the inventory after all", making their previous promise that they wouldn't buy it a lie.


I would not characterize that as a lie, which typically involves an intent to deceive.

Rather, the only people with intent to deceive might be the chip vendors. They either sold chip capacity that the automakers still had the legal rights to, or they reneged on sales to non-automakers to resume supplying the automakers.


No, the chip vendors had their orders cancelled by Ford/GM/etc., so they cancelled their orders with the fabs, who then sold the fab time to other chip vendors for different products.

Why would you build chips you know you can't sell?

Also, people talk about how big an impact car manufacturing has on the economy, and while that is true, carmakers aren't the chip manufacturers' biggest customers. Apple spends more per year on semiconductors than the ENTIRE auto industry. To top it off, automotive ICs aren't high-margin stuff, so if I were a fab or chip vendor, I would be focused on higher-margin products.

The blame for this is SOLELY on the auto manufacturers.


That's not what "lie" means.


The most surprising thing about this story is the apparently massive number of people who didn't have cars, relied entirely on public transport and then decided to buy cars? How certain are you of that analysis? Surely that is a rather rare case, as the typical public-transit-only person tends to live in a city where parking space is at a premium. Whole tower blocks cannot easily switch from transit to cars overnight just because they prefer it.

Or is this the mass exodus from American cities I keep reading about? They're buying cars because they're moving from NY to Florida, that sort of thing?


It could be both. I imagine public transit is something people would try to avoid in a pandemic. When I lived in NYC I always thought about getting a car but it never made sense to spend that money. Bring in COVID and that might have done it.

My running theory is that manufacturers and retailers continue to underprice their goods, likely because they expect to eventually return to normal. This ends up effectively hiding inflation.

Many buyers have more discretionary cash than usual due to reduced spending from lockdowns. Demand is therefore increased on the things people can and want to buy. Then resellers/scalpers see the margin and buy up inventory to arbitrage. The scarcity compounds.


Just-in-time delivery has its downfalls when vendors have issues, but this is a known trade-off of the model. If a company keeps inventory on hand, then they have the worry of owning excess parts that they can never use and that become a sunk cost later. So these companies can lose money either way; it just depends on the circumstances.

I'd be interested to learn more about the order cancellations you say the fabs were doing to non-car manufacturers. Shouldn't contracts prevent that sort of behavior (without proper compensation)? And if they did that, they are burning bridges in a way that would make people less likely to do business with them in the future. Maybe the car chip business is enough money to warrant such moves, but it seems potentially short-sighted, depending on exactly what happened.


> Shouldn't contracts prevent that sort of behavior (without proper compensation)?

That depends on how big the customer/order is, no? If (as a manufacturer) you could get away with not having it in a contract, why include it?

Similarly, instead of actually cancelling, they could also "lie": say that there are capacity issues, etc., and leave out that the capacity issue exists mostly because they resold the existing manufacturing capacity.

> Just in time delivery

The well-known example of this is Toyota. Interestingly, they've started ensuring they do have stocks. See e.g. https://www.reuters.com/article/us-japan-fukushima-anniversa.... I found that rather interesting.


As with all things, an optimization (e.g. agile, JIT/lean manufacturing) is predicated on certain assumptions. In this instance, "we exist in an ecosystem where our suppliers can rapidly respond to volatile demand from us."

If the assumptions fail, then it's not a failure of the optimization, but an incorrect application of it. Maybe we don't currently live in a world in which elastic semiconductor availability at scale can be assumed.


When I first studied JIT in college, I asked my professor: doesn't this manufacturing methodology lack fault-tolerance, and isn't it deeply susceptible to natural calamities or economic failures? He replied that it was proven, adaptable and reliable. Other suppliers would come in if some suppliers failed.

JIT (aka lean manufacturing) has now permeated pretty much most of the global supply chain. Human civilisation is far more fragile at the moment than in the last few hundred years.


The true argument should have been that it's more efficient (for everyone involved). Which would have invited the honest debate between the relative merits of resilience vs efficiency.


That reminds me of Taleb's idea of comparing certain investment strategies to "picking up pennies in front of a steam roller" (or insuring mortgage backed securities). It's easy money until it isn't. But when it isn't, it is really bad.


>Human civilisation is far more fragile at the moment than in the last few hundred years.

It was less fragile when there were famines all the time?

I don't understand how people can repeat this rhetoric when it seems obvious to me that however fragile it is, it's less so than in all of history. Even if things rapidly get much, much worse, it wouldn't change my opinion.

How do you think we would determine which is the correct perspective?


There's a distinction between technology level and time disposition, I think.

Compared to a self-sufficient small-scale agrarian society (say, 10th century Europe), what would have caused famine in their time would not cause it for us.

At the same time, we allocate our time differently than they did -- few of us actually farm for ourselves.

If we allocated time more similarly + applied current technology, it'd be pretty hard for people to starve (between improved long-term food storage, GMO crop yields, and environmental mitigation).

Side note: the always educational Bret Devereaux lays out a solid argument for why famines were the result of an underdeveloped monetary and trade system that led to fragile choices being optimal for individual farmers. [0]

[0] see "Risk Control" section https://acoup.blog/2020/07/24/collections-bread-how-did-they...


Nearly all industries have moved to Just-In-Time to reduce costs, at the cost of making the entire economy more fragile. We saw this highlighted in the pandemic as suddenly there were no reserves of hospital capacity due to JIT optimization.


Heck even consumer goods like toilet paper were impacted.

JIT is useful when the end product has a high depreciation rate and/or you foresee yourself frequently changing the part out for a newer version.

Another tendril of the issue: modern accounting practices tend to prefer JIT to the costs/overhead of storing inventory.


>fabs were doing to non-car manufacturers

Fabless design houses sign wafer agreements with fabs like TSMC. I'm guessing they hadn't signed such an agreement, and thus could cancel their slots.


There is no such thing as an excess part, since you can always offer an extended warranty or something to buyers and sell the parts off at least at cost.


People forget why "just in time" was invented and why it became a religion...

I do not fault companies for not planning for a worldwide pandemic.


I think "lying" is pretty harsh. The pandemic caused many, many people and companies to have trouble predicting future demand. To me, it's a miracle that the American economy has kept going as strong as it has. I know it's very important for car manufacturers to predict future demand, and I know it's an extreme frustration for many people whose logistics are screwed up, but really, it shouldn't be a surprise that the car manufacturers mispredicted the impact of a once-in-a-lifetime pandemic.


Maybe we need a futures market for fab slots. Then we could just blame speculators for price swings instead of a particular industry.


A futures market here could help investment in fabs.

It would allow manufacturers to hedge / lock in future value.


You left out the AKM fire in October, which caused some essential production to get offloaded to Renesas. I think it may actually have gone to the same Naka plant that had the more recent fire.


> because they have a religious aversion to keeping any stock

In the US people love to keep huge numbers of cars in stock. Apparently most people buy from stock there, and almost nobody does JIT, for some reason. So this isn’t the case everywhere.


I had taken parent poster to mean that car manufacturers hate keeping large stocks of their component parts, not of finished cars.


If you're talking about dealers, they're very different from the manufacturers. Pretty much every car has someone who's bought it already (whether that be an end consumer or a dealer) by the time it rolls off the line.


Creates an interesting dynamic where the dealership has to sell what's parked on its lot right now.


Really great comment... I appreciate it when people do this.


Just wanted to say that I really appreciate your comment.


If you wrote a book about it, I'd buy it.


Top comment. Really appreciate this analysis. I think there is a tangential question to be asked: are consumers really demanding "software defined cars" in the first place?

4 Forces Changing Automotive Electronics Systems

https://www.eetimes.com/4-forces-changing-automotive-electro...


> "the next 3 years, i.e 3nm in 2022 and 2nm in 2024"

I wish the industry would standardize on using "transistors per square millimetre" as the metric for their node size instead of "nm".


Even that is not exactly the same. Cells, or logic gates per mm², is also rather ambiguous, as different implementations of the same logic families do things differently, and the metric gets even less meaningful across different logic families.

The best, I believe, would be a whole cell-library metric using some lowest-common-denominator blocks like registers, adders, bus pieces, etc.


I'm going to be that person and say it doesn't matter. Either you're a consumer who really only cares about a finished product's performance or you're a chip designer who has the time and money to research past the nm number.

All us armchair semiconductor fabrication experts can have a good old time arguing over whose 2nm is better. The real world will continue doing its thing.

*I know this is Hacker News and some of you might be real experts. I know nothing. Please don't feel disrespected.


You're basically right. Nobody doing actual chip design cares about process nm numbers. Process selection done right is a fairly tedious comparison of dynamic power consumption, static power consumption, achievable operating frequencies, wafer cost, SRAM density, IP availability, process maturity/yield, fab capacity/availability and about a dozen other factors.

There will be a large-ish Excel spreadsheet somewhere to do the comparison.

The nm number is for press releases and non-technical investors.
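As a toy illustration of what that spreadsheet boils down to (the factor names, weights, and scores below are all invented, not real process data):

    # Sketch of a weighted process-selection comparison. All numbers
    # are made up; a real one has dozens of factors and much nuance.
    weights = {"dyn_power": 3, "static_power": 2, "fmax": 2,
               "wafer_cost": 3, "ip": 2, "maturity": 2}
    candidates = {
        "process_A": {"dyn_power": 8, "static_power": 7, "fmax": 7,
                      "wafer_cost": 8, "ip": 9, "maturity": 9},
        "process_B": {"dyn_power": 9, "static_power": 8, "fmax": 9,
                      "wafer_cost": 4, "ip": 6, "maturity": 5},
    }
    for name, scores in candidates.items():
        total = sum(weights[f] * scores[f] for f in weights)
        print(name, total)   # higher total wins; "nm" never appears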


It's basically a version number.


Yes, that's a good analogy, although the nm number by itself isn't the full version; e.g. for 28nm alone, TSMC had 28HP, 28HPL, 28HPM, 28HPC and 28HPC+ variants, of which only HPM and HPC (but not HPC+!) are at all compatible.

28HPC+ and 28HP have very different performance, even if they're both "28nm".

And if you really want to specify a process you also need to know the metal stack (lots of options there), Vt selection(s) (most processes have 2-5 options) & high-voltage device support option.


Honestly, SRAM bits per square mm seems like a perfectly fine metric for me. While I recognize that the memory-to-logic density varies, sometimes significantly, between processes, (a) SRAM cells are optimized to within an inch of their lives ("within a nm of their lives"?), including often being revised in minor process updates, so accurately reflect the capabilities of the process; (b) many chips, including the ones I personally tend to be interested in (processors, etc) are SRAM-heavy, and get more so each generation; (c) it's a single number with some connection to reality, which is better than both a single number with limited connection to reality ("process"), or a whole suite of numbers that require deep interpretation (cell library metrics).


And it is exactly because SRAM gets so intensely optimised with each generation that I did not want to list it. SRAM density and performance are not characteristic of how the rest of the logic will perform; no other logic component gets squeezed so much.


Ironically, it seems random logic may have been scaling better than SRAM recently...


Because SRAM was already squeezed to the max generations ago.


I found this article on the topic really informative:

https://read.nxtbook.com/ieee/spectrum/spectrum_na_august_20...


I find it concerning how few companies exist to do this work. I really hope Intel's new foundry business is successful.


Their $20B investment into their Arizona campus has me convinced. They're doing the right thing (if you live in the US); my 5900X is pretty unreliable, and I never had issues with my Intel systems. The stars have aligned, so I'm going to buy their chips from here on out.


What issues have you been having with your 5900X? I've been using mine without issue, but only for a short amount of time.


Ever since installing a BIOS with AGESA 1.2.0.0 (and now 1.2.0.1) my machine locks up on multicore loads. It also will lock up while navigating in the BIOS. My original BIOS that supported Zen3 was 1.1.8.0 and I had no problems whatsoever, and I was tinkering with it then far more than now, since the build was new and I was excited for it. The problems started immediately upon the newer BIOS being installed, and they put a read/write lock in the 1.2.0.0 release so I can't flash back.

I normally don't update unless I need to, but they advertised so many fixes that I felt compelled. It's reproducible: I can lock it up immediately by starting Prime95, and it'll occasionally crash running CBR20. I would hope they'll get this sorted out over time, probably a year or two... but I may just order an i9-10850K and a Z490 and be done with this. I run bone-stock UEFI BIOS settings, no PBO or other overclocking.

I run a small business off this machine, and it has to be reliable, performance doesn't matter if you have stability issues.


>You should ask how every single Fabless Semiconductor company has failed with their demand, supply chain analysis and projection.

That's an oversimplification. As lead times increase, overbooking increases, and that gives rise to further extension of lead times. Delays also become more prevalent at high utilization because there's no capacity in reserve. It's very difficult to predict where demand will go in a year of totally abnormal market behavior and unpredictable helicopter money.

In other words, fabless chipmakers are asked to do the impossible and some of them, predictably, failed to accomplish that task. Imagine asking AMD in April of 2020 to predict how many PS5s will sell during Christmas.
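The utilization effect alone is brutal. A textbook M/M/1 queue gives the flavor (purely illustrative numbers, not fab data):

    # M/M/1 intuition: mean time in system is 1/(mu - lambda), so lead
    # times explode as utilization approaches 100%. Illustrative only.
    service_rate = 1.0  # normalized capacity: one "order" per unit time
    for utilization in (0.5, 0.8, 0.9, 0.95, 0.99):
        arrival_rate = utilization * service_rate
        lead_time = 1.0 / (service_rate - arrival_rate)
        print(f"{utilization:.0%} utilization -> {lead_time:.0f}x baseline lead time")
    # 50% -> 2x, 80% -> 5x, 90% -> 10x, 95% -> 20x, 99% -> 100x

Once fabs run with no reserve, small forecasting errors turn into year-long queues.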


Selfishly... so will all this CapEx by TSMC, Samsung, and Intel put pressure on wages? I’m in the industry at a large company getting 3% raises, 10% of comp as stock, and a salary about 1/3 less than a software dev. All while being highly reviewed.

I write a lot of software for my job (although it’s not my title) and I’ve been thinking of trying for pure software jobs even if the work is more boring (I do semiconductor R&D)


I also find it surprising that hardware engineers have a lower pay scale (although I haven't heard of differences as big as you describe). Is there some kind of supply-demand dynamic behind this? The same seems true for engineers on car-related things. Anecdotally, I think you could in theory get a title change to software engineer, while doing the same thing, to get more pay.


There are big differences. A close friend who's a hardware engineer shared with me her salary. With five years of experience and a MS, her salary is the same as a fresh-out-of-undergrad junior software engineer at the same company. That's $120k in the valley.


Huh, I'm surprised. As a software guy, I think the hardware engineer has a tougher job that requires more skill.


It is tougher, but how tough a job is has little to do with the pay.

It's all about supply/demand ratio, which is higher for hardware engineers. Hence they get paid less. There are simply far fewer HW jobs than SW ones. While there are more SW engineers, it doesn't make up for the delta in number of jobs available.

Even within the SW world, I've jumped around jobs of varying difficulty. The tougher jobs did not pay more.


If you don’t switch for a higher salary, you are part of the problem. I can understand video game makers, but if your work is boring and low-paid, and you have a great software skillset, improve the pay of your colleagues by switching.


> I write a lot of software for my job (although it’s not my title) and I’ve been thinking of trying for pure software jobs even if the work is more boring (I do semiconductor R&D)

This sounds like me in a past life. I assure you: It's worth the change. (Often) Easier work. Paid more. Treated better. Nicer work environment. It's worth it even if the pay were the same.

Also, with SW roles in your resume, you can then find jobs outside the semiconductor industry. Lots more options available to you.


A 3% raise is pretty typical for software engineers as well.


A 3% raise on 50k is much less than 3% of 150k though. GP could get a hella big pay bump and then much larger raises if he switched, even if on paper it's still the same percentage.


I switched for the flexibility (only a few hardware players versus a seemingly endless list of software shops) but the pay bump has been nice.


I switched for the money


Maybe we should start pulling chips off trashed electronics. There must be hundreds of thousands of decently fast chips going into our trash.

How many dead Xboxes and PlayStations have chips in them that are still good?

I'm thinking of a reverse pick-and-place machine that pulls chips off boards (boards that cannot be used to fix other dead devices) and is able to test each valuable component for later reuse.


> Maybe we should start pulling chips off trashed electronics. There must be hundreds of thousands of decently fast chips going into our trash.

This has already been happening for decades on an industrial scale in countries doing e-waste recycling. Just a decade ago, you could still find people selling refurbished chips in Shenzhen on every corner. Now it has become a much more low-profile enterprise.

The US Department of Defense, colloquially known as the Pentagon, ran a wargame scenario a few years ago to see what would happen if the US were hit with a worst-case scenario: years-long semiconductor unavailability due to sabotage of domestic fabs, a massive industry-wide cyberattack, or something happening to Taiwan.

One of the commissioned think tanks recommended exactly that: the military going door-to-door to gather old gaming consoles, smartphones, and PCs for conversion to use in munitions and military equipment.


My first reaction to this plan is that it’s for the birds. The government would be better off just building a fab themselves. The raw materials aren’t that hard to get hold of; integrating consumer electronics into military systems would likely take just as long, if not longer (with all sorts of other negative effects on quality and security). I guess the analogy is with scrap metal drives in WWII, but this situation is nothing like that at all. It’s a fantasy scenario that makes no sense.

We defend Taiwan with everything we’ve got. There is no plan B, and it’s not just about semiconductors.


> The government would be better off just building a fab themselves.

But you can't just 'build a fab', can you? That's the point. You need a shipment of ASML's magic EUV machines and a bunch of their engineers, plus a bunch more technicians and managers with the right kind of skills for running a large cutting-edge fab.


You only need those for the latest and greatest. A lot of options open up if the priority is any decently fast processor over the latest. Multi-CPU systems fell out of favor in the consumer space because single CPUs got so fast with these processes, but multiple CPUs with an older process is fine in this scenario.


...or just be content with a slower chip.


Military-grade equipment and ammunition don't require cutting-edge manufacturing nodes. There are fabs for older nodes aplenty, and new ones don't require cutting-edge equipment from ASML. Also, since older fabs are usually kept around, there is going to be demand for older tech and spare parts.

To sustain AI and other applications that require cutting-edge tech, the best strategy would be either to stockpile chips (not really feasible, since the tech depreciates) or to build spare capacity, along with the necessary supply chain.


> a bunch of their engineers and a bunch more technicians and managers with the right kind of skills

In a WWII scenario, how long does it take to rapidly retrain a bunch of people with these skills? We have no shortage of brilliant, quantitatively minded people trading derivatives and writing ML models to recommend TV shows. In a total war scenario, I assume most would be drafted for the war effort. Semiconductor engineering is complex, but probably not so complex that your average Math Olympiad medalist couldn't pick it up in a month or two.


I did some materials engineering coursework in college, and I think your comment significantly underestimates the technical complexity of <100nm process node manufacturing.

We had many brilliant minds working on these problems for decades, but it took time, skill, effort, and money over these decades to achieve each milestone of development. Now each of these firms (and each of their suppliers) has trade secrets that go deep, and barring some unlikely breakthrough of brilliance, these secrets can't be independently discovered by the brightest minds in any less than the time it already took to discover and develop them in the first place.

There's also the problem of supply chains -- modern fab tech requires a ton of upstream tech: the fab machines, the parts in them, and the places those parts were developed and manufactured... and the machines that were required there as well (repeat...). Also the raw materials, especially rare earth minerals.


> Semiconductor engineering is complex, but probably not so complex that your average Math Olympiad couldn't pick it up in a month or two.

We have a chip shortage going on for months now, if all it took was two months and a bunch of smart people, those billions of dollars in chip orders would've made it happen. That tells me it has to be a bit more complex than you think.


Has any major organization started or entered the semiconductor fab space in the past year? From speaking with industry analysts, my understanding is we could spin up additional capacity. The issue is that most everyone fears the shortage will be over after the Covid supply chain disruptions, and therefore any new investments will not recoup their necessarily long payback periods.

This is a very different calculus than a WW2 type scenario. Fully mobilized, the government would almost certainly pay up for microchip capacity today, regardless of the long-term payback residual.

If there was evidence of deep-pocketed firms trying and failing to get into semiconductor fab in 2020, I'd change my opinion. Like if Google or Amazon were trying to open their own fab plants, but were failing, then I might suspect that a WW2 fully mobilized war effort would run into difficulties. But as far as I can tell, the issue is that nobody wants to jump into the fab business, regardless of the temporarily higher profits over the next 2-4 quarters.


A 10nm/7nm fab costs $10B and takes years to build, assuming you've got a process which works - developing that from scratch will take you a decade before you can even design the fab. You could compare the enterprise to a smaller scale NASA.


It's not a question of knowing how to build industrial capacity, it is actually building that capacity. Before you can build a fab, you need to build all the machines that go into a fab, and before that all the machines for building those machines, and so on and so forth. 9 women can't make a baby in a month, at some point you come across steps that just take time.


If you think there's a scenario that justifies it in 10 years, you start today. In our quest for ever-more-just-in-time-efficiency we've forgotten that it doesn't work for certain things.


IIRC the US Navy is currently using Xbox controllers:

https://www.theverge.com/2017/9/19/16333376/us-navy-military...

Of course this is a far cry from ripping a SoC from a smartphone and putting it into e.g. a bomber drone, but I assume there's much more being done - just not published.


One of the reasons cited for using Xbox controllers was availability: if one breaks, you can find a replacement at any port. Very apropos in this context...

I have a vision that on a submarine there is a cable box, not unlike the one most of us probably have, filled with random cables with connectors we haven't used in years. And somewhere in that tangled mess is a spare Xbox controller. I can imagine some poor Seaman trying to untangle the controller cord so that they can use a periscope. I'm sure the Navy is far more organized, but it's funny to think about.


I absolutely believe it - I use one professionally to maneuver an underwater ROV. They're intuitive, already familiar to a lot of people, and surprisingly reliable.


Wow. As long as they’re not using joy cons I guess!


>The government would be better off just building a fab themselves

OP: >years long semiconductor unavailability due to sabotage of domestic fabs

OP said "sabotage of domestic fabs". What the government laid out is what happens when all else has failed, hence going door-to-door to collect semis.


[flagged]


There’s nothing macho about a conflict of this magnitude, it would be absolutely horrible. But we have to set clear expectations and unambiguous boundaries to avoid a dangerous miscalculation, not pretend we have contingency plans that don’t actually exist.

There is nothing inevitable about a Chinese invasion of Taiwan. My sense is that the Chinese leadership are far too insightful to actually start that war, but you never know.


They haven't been churning out amphibious ships at the rate you would expect if they thought an invasion was imminent.


You should look at the stockpile sizes. Unless Russia gets involved it would not be a balanced situation at all.


Seriously. We're talking about horrible things, here.

But if it came to a nuclear exchange, China has ~300, and the US and Russia ~6,000 each. Total yield numbers are harder to come by(?), so those are +/- considering tactical-scale inventory.

Historically, this was because China's nuclear doctrine was to limit their stockpile to a minimal deterrent.

Personally? Sitting about 40km from a priority US target, I'd support the US risking a strike to guarantee Taiwan's freedom.

Either you stand for something, or you don't. And China doesn't get to claim modern territory by dusting off historical documents. Hong Kong was untenable, but Taiwan should be a line. If its citizens don't want to be Chinese, they get to make that choice.

PS: Russia would probably care less about a strike on China, unless China negotiated their support via treaty. That ship sailed in the early 60s.


>"Personally? Sitting about 40km from a priority US target, I'd support the US risking a strike to guarantee Taiwan's freedom."

Personally I'd support the US never letting people with this kind of thinking anywhere near decision making. Let them play stocks.


Taking a firm stand in international politics always carries risk. If you can't tolerate risk, you'll get pushed around by those who are willing to.


Respond with overwhelming force if you're being pushed. If, however, you are the one who starts pushing, you are just a bully - or in this particular case, a fuckin' war criminal.


It’s not “dusting off historical documents”; it’s official government policy on both sides of the strait that there’s “one China”.

One could argue that the only reason the ROC hasn’t declared independence is that they fear it would provoke an attack from across the strait, but the reality is that independence is not a cut-and-dried issue in Taiwan.

Take some time to actually educate yourself on the issue before advocating for nuclear war.


Taiwan has been Dutch, Han, Qing, Japanese, and ROC. Any appeal to history is nine^H^H^H^Hten dashes of propaganda.

> it’s official government policy on both sides of the straight that there’s “one China”

That's an oversimplification. The KMT / Pan-Blue believe there's "one China" in the sense that they believe the PRC is an illegitimate government, currently occupying mainland Chinese territory that belongs to the ROC.

The DPP / Pan-Green believe there are two Chinas, with Taiwan as an independent entity from the mainland PRC. Their current official position is that this state of affairs already exists, and therefore there is no need to make any declarations or changes from the status quo.

So in summary, about one half of Taiwanese political power considers the communist government as illegitimate, and the other half considers Taiwan independent. Which I would guess is fairly different from what the PRC defines "one China" as.


This is the nuance I was calling for.

Yes, there are different definitions on both sides of the strait. And while the independence side has gained strength in recent years, the official stance has been that China and Taiwan are part of one country, however defined, and, to my knowledge, no official government act of the ROC has been promulgated to the contrary.

As I stated in my original reply, independence isn’t a cut-and-dried issue. To assert that these are merely “dusty” historical claims, in light of what you’ve written here, is disingenuous.

Edit: I should add that there is only one government in Taiwan, the ROC. The stances of the individual political parties cannot be taken as synonymous with the stance of the ROC itself. To do so would be as ludicrous as saying that there’s no right to an abortion in the United States because one of the two major political parties opposes it. The most that can be said is that both abortion in the US and independence in Taiwan are controversial and delicate political issues within each respective jurisdiction.


> to my knowledge, no official government act of the ROC has been promulgated to the contrary

The 1999 "Resolution on Taiwan's Future," since elevated to DPP party platform, is crystal clear on "two Chinas."


Party platforms are not official government policy.


In representative democracies with regular elections? They kind of are.


> Personally? Sitting about 40km from a priority US target, I'd support the US risking a strike to guarantee Taiwan's freedom

What? That doesn’t make sense. If the US triggered nuke slinging over Taiwan the Taiwanese would be ash, not free.


Why would China nuke Taiwan?


Out of spite. They'd get glassed anyway, so why not use those nukes for something?


Because that's literally the land the hypothetical war would have been started to capture? Because the casus belli to their public has been that it's (PRC) Chinese land by historical and cultural right? Because the entire domestic and international point of provoking a Strait war would be to demonstrate China's strength?

It's a pretty bad look if your "winning" looks like "killed millions of people and rendered the land in question uninhabitable for a decade+." (To say nothing of the potential non-Taiwan collateral damage)


And retaliation would be immediate and devastating.

Mutually assured destruction


Better Dead Than Red is alive and well, hahahaha.


We can shoot down some of those nukes. Maybe China has to fire 20 at Taiwan before one of them gets through the anti-ICBM weapons on our naval ships.

Means that we might come out of a war with China with only a few cities lost. Might actually be worth it.


The stockpile is a ridiculous measurement. How many would it take to have mutually assured destruction? A dozen? One of the most worthless dick measuring contests in history.


A 100kt nuclear warhead airburst will cause moderate damage to an area of 33.5km^2. Beijing has an area of 16800km^2, 501 times larger. China has 160 cities with over 1 million inhabitants. Even if they were all used against major population centers (which they wouldn't be) and all of them made it to their targets (which they wouldn't), thousands of nuclear warheads would still not be enough to destroy a large nation.
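A quick back-of-envelope check of those numbers (rough figures only, taken from the claims above):

    # Sanity check of the arithmetic above; all figures approximate.
    moderate_damage_km2 = 33.5   # 100 kt airburst, moderate-damage area
    beijing_km2 = 16_800
    warheads = 6_000             # approx. US or Russian stockpile size
    print(beijing_km2 / moderate_damage_km2)   # ~501 bursts for Beijing alone
    print(warheads * moderate_damage_km2)      # ~201,000 km^2 damaged in total
    # China's land area is ~9.6 million km^2, so even a full stockpile
    # of that size covers roughly 2% of it.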


What leads you to the estimate of a dozen? Very few modern warheads have multiple-megaton yields, it's less efficient than many small-yield warheads.


> How many would it take to have mutually assured destruction?

In case of China, or Russia, thousands.


[flagged]


So far, we have had only _two_ instances of nuclear weapons fired in anger. Post-WW3 (if civilization survives) would be a world where nuclear strikes between countries are a reality and recent history. Not sure whether such a world is worth living in either.


I see that they have internet access in those institutions.


I can kinda see the value of getting a million PS3 CPUs for your new drone swarm or industrial controllers. But how is gathering generic decades-old hardware going to help? For computation-as-a-service it's not that valuable or easy to install. And for reuse in 'new' devices it's even more questionable. All this stuff barely works even in its specific original configuration.


I'm only doing a few things on a machine with around 32GHz worth of processing power (# of processors * GHz) that I couldn't do on my 80486 running at 33MHz with a few MB of RAM. Emacs starts up a little bit faster. Pages in Chrome are more interactive than in Lynx, Mosaic, or Netscape. My word processor fits more text.

On the whole, that 1000x increase in performance mostly bought the ability to have most of the code that runs be interpreted or JITed rather than statically compiled, to play videos, to do some heavy numerics and graphics (including things like family photos and videos), and so on.

It's hard to think of anything I really /need/ in a life-or-death sense or an economic-survival sense that couldn't be done on older hardware with appropriate software. There's a ton of things I want, that help, or where migrations would be massive projects.

If computers were to entirely disappear, and I couldn't automate things, communicate digitally (emails, messaging), word process, write code, I think that'd be a major systemic-collapse-level implosion of society. A lot of people would simply die.

On the other hand, if computers were to regress to 33MHz-level performance, Youtube isn't sticking around, but we'd likely adapt as a society with some structural change, but without such a collapse.


Older levels of performance wouldn't cripple humanity. But putting together something that actually works out of a big parts bin is a crapshoot, and supporting a deployment of machines which are each completely unique is an IT nightmare.

We could do it if we had to, of course.


I think the best chance of getting something useful would be to collect x86 consoles. x86 means you've got a well-supported target, and consoles mean you've got limited hardware variation.

The military could probably pressure Microsoft to give them some way to turn an original Xbox into a PC; and if you give people $50 for an original Xbox, most will be happy.


There are a number of commonly-used microcontrollers. Z80 comes to mind. So does Atmel. So do a few common ARM variants. If you can cover those, you're not in bad shape.


> Youtube isn't sticking around

As an end user it would be manageable, but don't forget that the whole world runs on servers, and many industries would collapse if that computing power were gone.


Computational power < network connectivity

If we lost a few decades of processor improvements, it would be about as parent noted.

If we lost worldwide connectivity or all undersea backbones between some terrestrial networks... it would be civilization-altering.


I feel like BBSes did okay for most purposes over 300 baud modems. I mean, I'm not getting a picture of what I'm buying on Amazon, unless I want to wait 5 minutes for it to download, but I can still make the purchase.

On the server side, you'll be running C code rather than a nice high-level language, but it will get the job done. You might not have all the ML which lets me get the most relevant product recommended, but....

Emails definitely won't be HTML, but they'll come through.

The world would work.


Because for a lot of things you don't need that much CPU power.

Creating something that can read several sensors (switches and buttons are also sensors) and put aggregate status updates on a serial line, display status lights, trigger a relay, or do something similar can be done on decades-old gaming consoles, and is probably one of the most common uses of processors in the world today.

In fact it is easier than trying to use a modern CPU that wasn't designed for that.

Not everything is about computation; in fact, in terms of units sold, it's probably a minority.


Military equipment is often more than a decade old and uses far older electronics.


"The US defence department, colloquially known as the pentagon, ran a wargame scenario a few years ago to see what will happen if US will be hit with a worst case scenario, years long semiconductor unavailability due to sabotage of domestic fabs, massive industry wide cyberattack, or something happening to Taiwan."

There is also the danger of counterfeit chips. It is already a plague for those of us hoping to get some good deals from Chinese vendors, but the US Navy too was hit at least once with fakes.

https://www.justice.gov/sites/default/files/criminal-ccips/l... (.pdf document)


Aren't these BGAs? Safely removing and then also re-balling large non-socketed chips is tricky.

Do they actually have machines for this? I understand that the re-work house we use does it by hand and also spends a chunk of time inspecting the chip once removed, since it's very easily damaged. Maybe there are more automated places doing this on a mass scale; we don't do it very often, which might explain the generally poor results.

This video is pretty cool showing the process https://www.youtube.com/watch?v=TIPO4Q9k1Zo


It's tricky to do as a one-off, but once you get the profiles down it's very automatable.


Xboxes and PlayStations have cryptographic chains of trust going back to their on-die boot ROM. You can't just pull a chip out and use it in another Xbox or PlayStation, much less use them for generic computation.

I think the Xbox One is the first console to have had no unsigned code execution at all during its lifetime.


You could trivially run whatever code you wanted if you seized the signing keys; I suspect that in such a "collect all the general-purpose computers for general-purpose computing" scenario, that would be the first order of business.

Not that the government doesn't already have copies, of course.


I guess I didn't read a .gov mandate in what the parent was suggesting.


It's technically possible for sure, but I would be surprised if the total cost of such a process was worth it compared to buying new chips in bulk.


>How many dead xboxes and PlayStations have chips in them that are still good?

Those things have one big APU (with weak CPU cores), some VRMs and memory: GDDR5 + useless DDR3. I'd discount the APU, as it'd need someone to write OS support for it with the blessing of Sony/Microsoft. That leaves the GDDR5 chips, which can be used by low-end GPUs - I don't think they'd ever recoup the investment, and cheap manual labor with preheat + hot air is likely the easier option. No idea if anyone would get a license from AMD or Nvidia for such a GPU.

Edit: the capacitors would have the best value, but well... I can't see anyone relying on old caps for anything but repair.


You can buy a motherboard with the Xbox One APU in China and run Windows 10 on it. https://www.anandtech.com/show/16336/installing-windows-on-a...


Kettle chips are really good and made in the UK.


NEW 5nm Kettle Chips! I'd buy some.


Get out!


Serious answer: reusing old components makes no sense, as the changes in technology are too big (say, from a PS3 with its 45nm chip to a 7nm PS5 CPU). However, you can keep using your PlayStation for longer, as the new one will be more expensive or harder to get.


I bought a Tesla T4 on eBay. Many people will buy a set of broken laptops and use the parts to make one working machine, like Luke Miani does. Louis Rossmann constantly has to find donor boards to fix laptops.


Moore's law rules this out.

A PS3 only has 256 MB of RAM, so you need to get it out of the case, remove the chips from the board, check they still work, package them up, and sell them.

But I can buy 2 GB for 5 GBP. Can you do all the aforementioned things for less than that?

I wonder if we could sort and melt down chips as a source of raw semiconductor material (I hear rare earth mining/refining is very dirty). Maybe that would be economical?
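A back-of-envelope version of that comparison; the labor time and wage are my own guesses rather than sourced figures:

    # Reclaiming a PS3's 256 MB of RAM vs. buying new, per the numbers above.
    new_ram_gbp_per_gb = 5 / 2        # 2 GB for 5 GBP
    reclaimed_gb = 0.25               # PS3 main RAM: 256 MB
    labor_hours = 0.5                 # teardown, desolder, test, package (guess)
    wage_gbp_per_hour = 10            # illustrative wage (guess)

    value = reclaimed_gb * new_ram_gbp_per_gb
    cost = labor_hours * wage_gbp_per_hour
    print(f"reclaimed RAM worth ~{value:.2f} GBP, labor alone ~{cost:.2f} GBP")

Even before shipping and failure rates, the labor alone costs roughly an order of magnitude more than the part is worth.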


What would you put them in? Everyone wants the latest and greatest PlayStation or Xbox with the fastest chips.


If latest chips were unavailable (war, civilisation collapse), we'd do just fine with existing chips for decades or maybe even centuries.

With clever software, you can eke much more real-world use out of hardware. For example, your PS3 is probably sitting with its CPU idle right now, when it could be doing useful computations for someone else who needs more computing power.


> existing chips for decades or maybe even centuries.

Chips experience solid-state diffusion, so I'd expect the survival probability of chips (ignoring any infant-mortality effects) to go something like feature_size / sqrt( time ). If this is true, then I'd guess modern process nodes would have very low survival rates out at 100 years. Though, it's been over 20 years since I had any solid-state chemistry, and most of my experience with diffusion modeling is in financial models, so take my wild guess with a grain of salt.
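Taking that guess literally, here's what it implies; the proportionality constant is unknown, so only the ratios between nodes mean anything:

    import math

    # Toy model from the comment above: survival ~ feature_size / sqrt(time).
    def survival_score(feature_nm, years):
        return feature_nm / math.sqrt(years)

    baseline = survival_score(350, 100)   # a 350 nm part at the 100-year mark
    for nm in (350, 90, 28, 7, 3):
        ratio = survival_score(nm, 100) / baseline
        print(f"{nm:>4} nm at 100 years: {ratio:.3f} of the 350 nm score")

Under this toy model a 3 nm part scores about 1% of what a 350 nm part does at the same age, which is the intuition behind modern nodes having very low survival rates at 100 years.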


But the observed failure rate for non-NAND components is very, very low.

Take a five-year-old phone: the chances of the main board being bad might be 10%, but the chances of the main CPU being bad are probably under 0.1%. The other 99% of failures are mostly water damage, physical damage, bad soldering, fatigue failure, bad NAND, etc.

Maybe they'll all fail at once, but I somehow doubt it. ICs from the 1970s are still going strong, also with a low failure rate.


You don't have to be clever (unless you're talking about video games), just not completely reckless.


Completely pie-in-the-sky. Chips aren't fungible.


Yet here we are: https://www.youtube.com/watch?v=qNje63vx73s A Chinese "motherboard" made using salvaged VRMs and transistors, salvaged capacitors, even a salvaged Intel chipset, and made to work with retired Intel Xeons and older DDR3 ECC memory.


Yeah, we would have to assume a post-fab apocalypse where we dig electronics out of a landfill for it to be economically viable.


From the article:

"However, Liu said that it is "economically unrealistic" for all countries to "onshore" additional chip production, warning that this could lead to more unprofitable capacity."

Well, the world is clearly not listening to that. So the question is: how will the business of chip manufacturing look in 2 or 3 years?


He may well be right, but concerns about supply chain security might make it worthwhile for countries to subsidise capacity that would not otherwise be economically viable.


It makes sense for China to overinvest in national chip manufacturing due to sanctions. For the West, of which Taiwan is here a part, it is very risky, however: if the West no longer supplies China's chip market, it would need significantly less chip manufacturing capacity.


If the west can’t supply China, the west should ban Chinese imports.


You assume there would be malice involved. Even today TSMC prioritises some customers (Apple) over others (car manufacturers). In a future where Taiwan is part of China, TSMC might have a new pecking order where Huawei is at the top and Apple is now chip-constrained. Would you ban all Chinese imports unless one Chinese company rewrites its contracts? Of course not.


1. Can we afford to just build fabs without consideration of profit? What are you going to do when you can't turn a profit with all the fabs in the US?

2. TSMC and the rest of the foundry industry are exposed to the highly cyclical nature of the semiconductor industry. We are seeing high demand for chips because of covid and the car vendors' f'ck-up. What are you going to do with all the fabs built in the US when you are hit with a downturn?

We saw Intel struggle with its 10nm and 7nm delays. I think it's wiser to invite fab companies like TSMC and Samsung to build fabs in the US, even with smaller capacity.


Some choice quotes from Mark Liu, chairman of TSMC this week agreeing with you:

* "It's economically unrealistic for all the countries to build additional chip production capacity,"

* “Uncertainties led to double booking, but actual capacity is larger than demand.” How quickly those concerns are resolved “really depends on future U.S.-China negotiations.”

* each country developing its own domestic semiconductor industry would lead to a lot of “nonprofitable” capacity.

>What are you going to do with all the fabs built in the US when you are hit with a downturn?

Priority sourcing from domestic fabs; let TSMC wither, by design. Semiconductors are the new oil: there are more strategic/geopolitical considerations here than a supply/demand curve. The US/EU/China do not want Taiwan to have semiconductor dominance; it's not in anyone's interest. TSMC's current position is happenstance, due to poor industrial policies that countries are now scrambling to address. IMO the press releases and capex spending around the Arizona announcement suggest TSMC wasn't prepared to build a US fab, let alone six. The US pressured them; the EU failed to. There's a good chance TSMC/Taiwan will try to delay the evaporation of their silicon shield for as long as they can. Probably not a coincidence that big-ticket US weapons sales are scheduled around when the fabs would be up.


Speaking as someone in AZ, the fact that we can get a 5nm fab going here in the US speaks volumes about how effective that pressure was. I actually disagree with Mark Liu's view that it will lead to a lot of nonprofitable business... I think that for the US and EU it is actually critical, from a national security and economic standpoint, to develop new semiconductor capabilities. I think there will be profit in domestic semiconductors, but only because the government is going to have to subsidize the buildout of this capability.

I do think it's going to become a race to the bottom on wafer/IC costs, but that is actually a really good thing. I think there are going to be more companies developing in-house technology and more focus on things like FPGAs and edge-based compute. If TSMC, GlobalFoundries, UMC, and Intel are smart, they will focus on pivoting towards the software/simulation side of things, IP cores, and flexible logic like FPGA-type solutions. Ideally they would want every big company developing its own CPUs, edge devices, etc., optimized for their process nodes.

I suspect this won't be the last major fab push, especially since people are seeing how fragile the supply chain is.


1. Yeah, we certainly could, but profit is what got us from 50um to 3nm.


This reminds me of a fun fact I learned many years ago, which is that there's a corollary to Moore's law: the same exponential growth curve applies to the cost of building new fabs.
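That corollary is usually attributed to Arthur Rock (Rock's law): the cost of a leading-edge fab doubles roughly every four years. A quick sketch of the compounding, with the starting point as a round-number assumption:

    # Rock's law: leading-edge fab cost doubles roughly every four years.
    # The ~$1B 1995 starting point is a round-number assumption.
    cost_b, year = 1.0, 1995
    while year <= 2023:
        print(f"{year}: ~${cost_b:.0f}B")
        cost_b *= 2
        year += 4

The naive extrapolation overshoots (it predicts a ~$128B fab by 2023, where real leading-edge fabs run in the low tens of billions), a hint that the doubling has slowed even as absolute costs keep climbing.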


How much would it cost to start a semiconductor design business? On the low end of the budget.


Manufacturing would need billions and the support of your government. But a design business? There are quite a lot of those around.

Bare minimum estimates, working backwards (roughly tallied in the sketch after the list):

- $? for marketing and distribution, physical inventory costs

- $100k/y for one field engineer (FAE)

- $250k for first successful manufacturing run

- $250k for first full mask run with bugs

- $100k bringup boards, test equipment, engineering time fixing it

- $25k shuttle run for initial testing. This will find at least one bug.

- $100-200k outsourced layout: this is boring, specialised, and low value-add, so get someone else to do it

- $500k/y misc software and testing staff or consultants

- $100-500k/y each: 3-5 senior design engineers. For best results, these are people you already know and are spinning out of their job at Big Boring Semi Co

- optional $100k really big FPGA + software + custom boards

- $250k/y software licenses from Cadence or Synopsys, unless you're very brave and want to try the open source flow

- IP licenses. Not just obvious things like ARM cores, but analogue or semi-analogue IP like high-speed transceivers.
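A rough tally of those line items, using midpoints for the ranges and leaving out the unpriced ones (marketing, IP licenses); a sketch, not a quote:

    # One-off costs from the list above (midpoints taken for ranges).
    one_off = {
        "first successful manufacturing run": 250_000,
        "first full mask run with bugs": 250_000,
        "bringup boards and test equipment": 100_000,
        "shuttle run": 25_000,
        "outsourced layout": 150_000,          # midpoint of $100-200k
        "big FPGA plus boards (optional)": 100_000,
    }
    # Annual run-rate items.
    annual = {
        "field engineer": 100_000,
        "software/testing staff": 500_000,
        "4 senior design engineers": 4 * 300_000,   # midpoint of $100-500k each
        "EDA licenses": 250_000,
    }
    print(f"one-off: ~${sum(one_off.values())/1e6:.2f}M")
    print(f"annual:  ~${sum(annual.values())/1e6:.2f}M/yr")

Call it very roughly $1M up front plus $2M per year before IP licensing and go-to-market costs.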


Excellent description; I wish I could push your comment higher.

People are always shocked when I tell them how tiny the semi industry is, but it really is.

Besides the super-concentration of semi manufacturing, which is starting to get more coverage, design needs some exposure too.

When Apple bought PA Semi, it went rather unnoticed, but people didn't realize that the number of logic designers of that calibre - people who can design cores like Zen or Firestorm - is probably less than 100 in the whole of North America.

It sounds very dramatic, but America is fewer than 100 senior logic designers away from getting out of the design business too.


Do you know what it is like to work on software in this industry? Be it the tools, embedded software programming, or VHDL/Verilog.

I'm still in university and I've always liked hardware and low-level stuff, but I've heard bad things about software at hardware companies like Qualcomm.


The wage disparity between the semi industry at large and almost everything else is the stuff of legends...

Almost every part of the semi industry is very bad on effort-to-salary ratio, and players like Intel or Qualcomm are far from the worst; they actually do very well on the salary front and at attracting talent. It's the Asian companies that score the worst on that.

A TSMC process development job is a 100:1 lottery win plus 20 years of your life for a $50k-a-year salary. I know people with first-hand experience of that.


I'm quite happy at Cirrus doing tools, but I'm aware that they are an outlier who put effort into staff happiness.

In many other places software is an afterthought to the hardware, and it shows.


The software license number is off by a large factor for leading-edge nodes. Large!


The professional tools are licensed at 100k-200k/year per seat each, there is no way around them if you want to actually tape out the chip. An experienced backend engineer needs 3-4 months to take a design from Verilog to a file that can be sent to a foundry. You also need access to a PDK from a manufacturer like TSMC, and you will probably have to license some of the digital-analog components (PLLs, IO pads) from someone. Manufacturing itself is fairly expensive as well: we taped out via europractice (https://europractice-ic.com/schedules-prices/), where you pay about 12k per square millimetre in TSMC 65nm. Older processes are more affordable, but keep in mind that each of them requires at least some customisation and access to blocks specific to the process (memories, for instance).

On the design and verification side the story is slightly better: verilator (https://www.veripool.org/wiki/verilator), for instance, is an excellent tool that lets you simulate SystemVerilog code with high performance. It has seen wide industry adoption in the last few years, and a couple of companies like Google, SiFive, etc. are now investing in open-source tooling for hardware. You can go a long way towards building your design without having to pay for expensive licenses. The downside is that most of the interface-type components (PCI Express, DDR4, etc.) are prohibitively complicated to build yourself, so you will need to rely on external IP at some point.
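For a sense of scale on the manufacturing side, here's the MPW arithmetic using the europractice figure quoted above (~12k per mm² in TSMC 65nm); the die sizes are purely illustrative:

    # Multi-project-wafer cost at the quoted ~12k/mm2 for TSMC 65nm.
    price_per_mm2 = 12_000
    for area_mm2 in (1, 4, 9, 25):
        print(f"{area_mm2:>3} mm^2 die: ~{area_mm2 * price_per_mm2:,}")

A few mm² is plenty for a small core plus some SRAM at 65nm, so a university-scale test chip lands in the tens of thousands rather than the millions.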


> The professional tools are licensed at 100k-200k/year per seat each, there is no way around them if you want to actually tape out the chip.

Unless you're talking about countries where people simply don't pay for software, no matter what.

Some companies are just fine with 10-year-old pirated Virtuoso in places like China.

It was a big surprise to me that even software which probably has fewer than 100 licensees globally still gets warezed.


Yes, I can totally believe that. The software itself is actually distributed in a fairly convenient format: given that some of it is only compatible with some obscure RedHat version, installation amounts to copying files from an ISO image and pointing the result at a license server. It is probably also easy to patch out the license-server code. I would be fairly surprised if those places also had access to current-gen PDKs and the required memory compilers, though; we basically have to order memory blocks by mail and don't get access to the tools to generate them ourselves.


To build chips similar to what TSMC produces (i.e. some of the most cutting-edge fab tech in the world)? Hundreds of billions of dollars. Decades of experience. Growing silicon crystals and turning them into transistors barely bigger than a few dozen atoms is immensely difficult. And then turning that into a process you can repeat with precision and churn out at high yield is another enormous hurdle.


Serious question from someone totally naive on this: a few hundred billion dollars seems like an extremely good deal to lead this area of technology - why don't we see funding for this at a government level? I mean, we now have a $2 trillion infrastructure plan.

Is there an impediment I'm not aware of? Or maybe it's not as good of a deal as I think?


> why don't we see funding for this at a government level

Because America fell asleep at the wheel while Taiwan did not. TSMC and other Taiwanese chip manufacturers benefited from a government that saw the need to be a part of high-value manufacturing. Meanwhile, the America of the 90s and 2000s assumed it would be the most powerful country in perpetuity, so it wouldn't matter where something was produced as long as it could pay for it.

The events of the 2010s have shown the flaws in this thinking. It is now possible for American firms to be cut off completely from semiconductor manufacturing, just as Huawei was cut off. There is now support from the American government for restoring semiconductor manufacturing: they are footing part of the bill for TSMC's new plant, and I'm sure Intel's lobbyists are skilled enough to get part of their new plants paid for as well. China is doing the same - the state has deployed all its resources behind SMIC to ensure that what happened to Huawei never happens again.

It's more complicated than this, though. You don't just need semiconductor fabs; you also need rare earth metals, almost all of which is mined and refined in China. America is attempting to reshore this too (https://www.economist.com/finance-and-economics/2021/03/31/g...)


> need rare earth metals, almost all of which is mined and refined in China

This is usually presented as an ace up the sleeve that China has secured through shrewd strategic thinking. In reality, rare earth metals are neither particularly rare, nor expensive, nor in high demand.

Rare earth metals are more common than silver or mercury, and only somewhat rarer than cobalt. The ones with excellent magnetic qualities go for ~$50 per kilo (allegedly only 200g is required per electric car); the ones used for catalysts and alloying go for as low as $2 per kilo, cheaper than copper.

They're not nearly as much of a constraint as claimed.
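The per-vehicle arithmetic using the numbers above makes the point:

    # ~$50/kg for the good magnetic material, ~200 g needed per electric car.
    price_per_kg = 50
    kg_per_car = 0.2
    print(f"rare earth magnet material per EV: ~${price_per_kg * kg_per_car:.0f}")

About $10 of material per car: a real supply-chain headache if it's cut off, but nothing like a binding cost constraint.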


My understanding is that they are hard to extract economically without destroying the environment. Also, I believe more come from Brazil now anyway?


This is my understanding as well; one of the reasons China has such a high percentage of that market isn't that no one else can do it, but that no one else wants to. The environmental damage is often horrendous [1][2], so everyone who can outsources it. I believe there's also been a lot of work on reducing or replacing reliance on those materials in quite a few areas, as a parent comment mentioned.

[1] https://e360.yale.edu/features/china-wrestles-with-the-toxic...

[2] https://www.bbc.com/future/article/20150402-the-worst-place-...


Intel's new fab is in the US (where I reside); I hope they get subsidized, as I'm worried about our manufacturing capability across the board. It's affecting what I'm looking to purchase as well: I'm having a few issues with my Ryzen 5900X system, and stability matters far more to me than performance, so I'm thinking hard about an Intel alternative. "Slow" but stable, and investing in the US? Sign me up.


Modern semiconductors were invented in America - it's literally why we call it Silicon Valley. We used to have the best fabs in the world, but 80s and 90s corporate American business culture decided it was better to shut down all that manufacturing and move it overseas for more short-term profit. They're now coming to the hard realization that maybe that was a massive mistake.


The story is not anywhere near that straightforward for semiconductor fabrication. A huge portion of that fabrication, for example, just moved out of California to other places in the US. There has also been a general consolidation of fabrication plants over time, as more and more advanced nodes push the price of a plant up by orders of magnitude. There just isn't enough money for the same number of fabs at the bleeding edge. Also, Intel is still US-based manufacturing, and they have only quite recently been toppled from the throne of the most advanced process. They simply lost to the advancement of overseas tech companies (generally, it seems, because they made a bad bet on their strategy for further process shrinks) - nothing to do with US companies outsourcing overseas.


> There just isn't enough money for the same number of fabs at the bleeding edge.

There definitely is, but it would come at the cost of share buybacks and of propping up the share price, which affects management compensation. Failing to keep the share price up by instead investing in US manufacturing would also invite predatory shareholders to topple the management. The cult of "free" markets driving wealth growth at the top of society seems to have trumped strategic common sense.


Tech employees get share compensation too, even though HW engineers get paid less than SW engineers.


20 years ago my line was that American business and political leaders are stupid, because they think they own those factories in China and the Far East.


Huh?

Samsung and TSMC are home-grown in their respective countries and Intel still makes chips. Together, they make the majority of the world's chips.


One word: Financialization.

The 70s started the trend of financializing (is that the word?) everything: at every step of every productive process, someone packaged it into a tradable "paper", which got bundled with paper from other productive processes and then re-sold and repackaged again and again to the market as a proxy for the value of those processes.

Holding paper assets turned out to be much more lucrative than actual production for most companies, not to mention far, far easier than competing in the market on products and innovation, so it naturally began to chip away at the quality of production in favor of holding paper that represented production. Outsourcing became possible, and they noticed it didn't matter to the ones buying the paper who actually did the work, so it skyrocketed.

Financialization required an immeasurable amount of debt: they began extending credit to anyone so they could buy said paper, and then turned around and made the debt itself into financial paper.

That got us the stock market booms of the 80s, the 90s, the dot-com era, and the financial crisis - and of course their busts.

Now we are at the end, and the signal? Companies are adding bitcoin to their balance sheets. The cycle of financialization is complete: paper is no longer required, and you don't even have to outsource your production (not that you have any anyway), as all you need to post profits and pretend you are a good CEO is to buy crypto and hodl.

Production for a "modern" business is rapidly becoming a thing of the past. Governments can try to buy their way into any market, but it is useless: most businesses have no real capacity to compete on cutting-edge products with actual, profitable "bare-metal" businesses, because they can only make profits "on paper".

(sorry if financialization is not how it's written)


Doom doom money doom doom. Nonsense.

Top 10 companies

https://fxssi.com/top-10-most-valuable-companies-in-the-worl...

Most of these are providing goods and services that hundreds of millions of people get value from, with the possible exception of Facebook :p

As an aside, I must admit I had never heard of Delta Electronics before at all; nearly 50 years old and the 5th most valuable company.

Out of the top 10 you do get some pure-play financial companies like Berkshire, JPMorgan Chase, and Visa (maybe).

Cutting-edge semiconductor manufacturing is arguably some of the most advanced technology and manufacturing we do at scale. It's just plain hard. The equipment, the expertise, the lead time to manufacture - they're all big-scale problems.

Relatively speaking, getting manufacturing going at old node sizes would likely not be remotely as hard - 32nm, maybe even 16nm.


Those companies, maybe with the exception of Facebook and Microsoft, are literally only possible because of paper money. Take the paper they hold out of their balance sheets and their size drops by half. Take away their ability to own their markets by issuing debt bonds - which would mean competing on the revenue generated by their own products - and you kill them.

Apple became the biggest company in the world not only because they create products everyone wants, but equally because they are masters at managing money and debt. That is what gives them the edge in production, as they barely produce anything themselves. They are probably the best at managing financial assets, by far.

And Tesla - well, Tesla is very bad at building cars; they'd be bankrupt without paper profits. They shouldn't be on that list.

Semiconductors are not harder than any other cutting-edge industry that has come before. Taiwan is not special; they are 20 million people, a good chunk of them trained and educated in the US.


Also note that, as far as hardware production is concerned, Tesla is the only company on that list that still manufactures anything at scale in the US.


> Apple became the biggest company in the world not only because they create products everyone wants, but equally because they are masters at managing money and debt.

What, by not having any? The correct amount of cash for a company to hold is $0, because it's not a productive asset, as you say - but Apple's strategy ended up with them holding hundreds of billions of it overseas. That is literally the opposite of MBA philosophy, not an even better version of it.

(And they own a factory in Ireland.)


> That is literally the opposite of MBA philosophy, not an even better version of it.

Who said anything about MBAs? The Apple way is clearly the better way to do it at the moment, and has been for the last decade. The MBAs at Intel are not even in the same league.

And Apple owns factories in the US too. They own them for other reasons, not because they "need" them to make products or are profitable on their own.


That list is wildly incorrect. Delta Electronics doesn't have anything remotely close to a 1.4T USD market cap.


> Now we are at the end, and the signal? Companies are adding bitcoin to their balance sheets. The cycle of financialization is complete: paper is no longer required, and you don't even have to outsource your production (not that you have any anyway), as all you need to post profits and pretend you are a good CEO is to buy crypto and hodl.

This literally isn't possible, because of how GAAP accounting rules work for bitcoin: you can't mark the value up while you hold and it rises, but you have to mark it down if it falls. It can only hurt you.


You don't need to report it to the SEC for the market to "get" (wink wink) that your bitcoin holdings went up.


The dominant US ideology is private enterprise, not government enterprise. This is why the US does not even own its high-tech weapons manufacturers, preferring to keep Boeing, Raytheon, and Lockheed Martin at one remove. Of course they wouldn't exist without government money, but they're private companies with private shareholders.

So when you're talking about a few hundred billion dollars for a US-first technological capability, the important question is who will get the money and how they will be held accountable for delivery. What, even, are they expected to deliver? It's no use if the plant is more expensive than TSMC, so that everyone chooses TSMC anyway.

The history of these initiatives is not great: https://www.wyomingpublicmedia.org/post/foxconn-promised-130... (Republican)

https://fortune.com/2015/08/27/remember-solyndra-mistake/ (Democrat)


>why don't we see funding for this at a government level?

China's 2014 Big Fund yielded nothing; SMIC still had to poach TSMC engineers to get to 14nm. We'll see how China's Big Fund II does.

The US government is dumping money into semis by asking TSMC and Samsung to open fabs in the US, and Intel is gunning for US government money by going IDM 2.0.


The US is investing heavily in this; there are recent DARPA programs (POSH, IDEA, SHIELD, SAHARA, ...) addressing it, plus continuous government investment in semiconductor technologies.


I'm not sure if you are referring to the US specifically, but there are governments doing this.

https://www.reuters.com/article/eu-tech-semiconductor-idUSKB...


Having skilled people at every org level who are capable of doing the job is the most difficult part. Any government could spend trillions and still get little in return.


Yes, but the question was about a design business, not starting a new fab.


You can read around the Parallax Propeller 2 forums for an idea of what it takes on the low end - they were very open about the process. IIRC they have one guru who does software and Verilog, one who does RTL layout (is that the term?), and the rest are support staff who do other things at the company most of the time (PCB design, assembly, sales, HR, etc.).

The Raspberry Pi Pico team also did a good podcast episode talking about this - I think it was a podcast called Innovation Coffee or something like that, hosted by a guy from ARM Europe.


> How much would it cost to start a design semiconductor business? On the low end of the budget.

A decade ago, it would have taken you $1M in China for a 50/50 shot at it.

If your first tapeout works, you make money; if not... you've wasted $1M.

$1M looks like pocket change to most American tech people now, but the fabless train has long since departed.

You will need a sum with a few more zeroes to get a jumpstart in the industry now.

There was a 3-4 year long extreme consolidation push in the industry, with the big players swallowing dozens of producers of commodity products. It is very reminiscent of how the demise of the American oil industry happened, when the industry went from thousands of oil producers to fewer than 10 in one decade. Everybody now bets that there will be 10-15 or so mega-fabless companies running the industry in the coming decade.


It massively depends on what you want to build...

There are plenty of designs where just $10k will get you an ASIC built in some university. It won't be anything like leading-edge silicon, but for some products that's all you need.


> There are plenty of designs where just $10k will get you an ASIC built in some university

With a few-micron university process? Yes. Can you commercialise it? No.

I believe 180nm has long been the limit of what counts as "commercially viable", even for stuff like analogue and discrete logic ICs.


Equipment = billions.

A bunch of talented people with experience who can use the equipment productively = priceless. They probably can't be bought, otherwise some countries would already have done it.


The (mainland) Chinese are buying up as much TSMC talent as possible [0], so don't count them out just yet. It turns out everyone has a price.

[0] https://www.google.com/search?q=china+hiring+tsmc


The question is about designing chips, not manufacturing them. That is quite the difference in the market that has developed.


There was this a while back:

https://news.ycombinator.com/item?id=23755693

Google offers free fabbing for 130nm open-source chips (fossi-foundation.org)


It depends pretty heavily on the semiconductors you want to produce; if you want to replicate the technology of the early 60s, people have done that in their kitchens (see Jeri Ellsworth's "Cooking with Jeri" for an example).


I don't know how much it would normally cost, but the EU is spending €145 billion to kick-start it: https://www.eetimes.eu/eu-signs-e145bn-declaration-to-develo...


Careful setting next year's quotas based on last year's shortages...


True, but where do you see demand dropping off? I would actually assume demand will go up even more, since the economy was slowed down by the pandemic.


I think the changes pushed people to invest in new devices etc. for a device-per-user lifestyle, while with the return to restaurants, meeting rooms, etc., there will be an excess of second-hand inventory.


Oh, I bet many are going to find that the equipment gives them a lot of flexibility for work, but also for communicating with friends and family. Many had never considered using Zoom, their iPads, and other means to keep in touch. The visual aspect may finally lead people to want phone calls that are both audio and visual.

Maybe the video phone will become an actual reality instead of something most people only see on TV and in movies.


Honestly, I think it should already be dropping off.

Everyone, their kids, and their pets have updated their electronics this year. That made sense while work/school from home was a thing. Now they have new electronics and no need for more. Plus, pretty soon things will reopen, and people will have opportunities to spend money on beer and restaurants and events.

Even if covid 2.0 launches this year, I already have a new iPad, etc.

I think there'll be very low sales for most electronics for a while.


> True, but where do you see demand dropping off?

Likely December.

Right now, because everyone is fearful of shortages, they've overbought or are in the process of overbuying.

At some point, everyone is going to realize that, in the US, carrying inventory has tax implications, and everybody is going to try to dump a bunch of inventory back into the system that they didn't really need and aren't prepared to pay taxes on.

Which will be fine for a while - until the middlemen get back up to what they consider useful inventory levels and quit buying. And then all the prices will crash.


For TSMC there's very little risk.

This is because a newly competitive AMD takes up a ton of wafers: TSMC can't make them fast enough, and in 2022 AMD is releasing a completely new platform with DDR5 on 5nm, and presumably a new GPU line.

This should again clobber Intel's 10nm products. Even if there's no chip shortage, the customers of TSMC are very healthy (AMD, Apple, other phone manufacturers).


Who's keeping inventory? Certainly for consumer channels there's basically no inventory to be had.


End companies themselves. Any inventory that shows up gets overbought immediately. Two companies I know have 10x the microcontrollers they need for the year because they overbought when they could.

If they pushed those microcontrollers back into the market right now, they'd make quite a nice profit. But they won't.

Instead, those companies will hit the tax implications starting in December and then try to push their overbought inventory. Of course, at that point everybody will be trying to push their inventory, and prices will crash.


Crypto bans would free up a large portion of GPU demand.


This isn't about next year though, this is about planning capacity for the next decade at least.


I don't think the shortage will be over in the next 2-3 years at the least. This is a huge problem that the media only covers briefly. Biden just announced $25B for semiconductors because Homeland Security knows we have major flaws in the supply chain.


The expected price reductions will have been part of the financial modelling for products. One big customer of TSMC is Apple, which traditionally builds the best products with the best inputs and then sells them for quite some time, benefiting from falling input costs.


How do you think the company will pay for this capital expenditure outlay – with cash, debt financing, selling additional shares of stock, or something else?

What makes you think the company will use this method of financing?

Do you think this is the right approach? Why or why not?


I am wondering if this concentration of capacity in Taiwan with China looming over it is really such a great idea.


TSMC is building a fab in AZ.


While true, if I recall correctly that fab will account for a tiny minority of TSMC's overall output.



