This must be scary reading for Intel, still battling to make anything decent at 10nm (which I know is more like TSMC's 7nm, but still, this is another two steps on from their 7).
It's a bit worrying how all this innovation, with the possibility (perhaps even likelihood) of completely cornering the market, is all in one country, that is very close to another country, that doesn't want it to be its own country.
> all in one country, that is very close to another country, that doesn't want it to be its own country.
That's actually good news for Taiwan. TSMC is already very important for everyone. If it becomes the only player in high-end chip making, there is a good chance western countries won't let China invade Taiwan.
It's a good start. They need to move carefully while balancing their assets.
They also need to train the new employees in the US, it'll take time to get great chip engineers there, and get Taiwanese volunteers to move to the US.
It is automated in the sense that silicon goes in one end, and ideally no human being touches it until it comes out of the other end in a finished package. It is not at all automated in the sense that there is a ton of maintenance work on the production line.
I love reading through random tech journals, the type with highly industry-specific ads in them, as an insight into the worlds other people live in. Jobs and everyday problems I will never experience first-hand.
One of the most eye-popping experiences like that was an ad in a chip-industry quarterly I found somewhere, breathlessly advertising that their latest tool (a machine the size of a garage) now had an MTBF of "just" 27 hours instead of the previous 2-3 hours -- a massive tenfold improvement! Cutting-edge stuff, apparently, compared to the competition. The really impressive part was that by cutting down on maintenance time, it could be "online" for up to 90% of the time instead of just 30-50%, or somesuch. Apparently that's impressively good in this space.
The rest of the ad (and much of the journal) kept going on and on (and on!) about how their tool has easy-swap parts, can keep the vacuum during repair, has tool-less panels, quick access, no need for heavy lifting, etc...
I don't think I'm exaggerating too much by saying that nearly 50% of all technical development in the fab industry is about reducing the maintenance effort, and hence staffing costs.
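For intuition, those uptime numbers follow from the standard availability formula. The mean time to repair (MTTR) below is an assumed figure for illustration only; the ad didn't state one.

```python
def availability(mtbf_hours, mttr_hours):
    """Fraction of time the tool is online: MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Assumed MTTR of 3 hours (illustrative, not from the ad):
print(f"old: {availability(2.5, 3.0):.0%}")   # MTBF of 2-3h -> ~45% online
print(f"new: {availability(27.0, 3.0):.0%}")  # MTBF of 27h  -> 90% online
```

With that assumed repair time, a tenfold MTBF improvement is exactly what moves the tool from the 30-50% range to 90% online.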
I can only wonder why one would use ads in an industry that is so specialized and has so few players. Wouldn't it be more efficient to just talk to these people?
The moving part? Probably because of better salaries. I don't know much about Taiwan and how nationalistic they are, but for most of the world working in the US is seen as the career endgame (or means to get there).
If the question was why Intel would invest in making the chips in the US -- from recent news it seems like the US understands how dependent it is on other nations for silicon manufacturing, which poses many questions. The biggest one seems to be national security, which, looking from the outside, is the only factor that gets things done in the US.
> I don't know much about Taiwan and how nationalistic they are
You will want to read up on the Republic of Formosa, WWII, and the history of Taiwan at some point. There are enough Chinese nationals on this website to downvote us all, but suffice to say, it is widely regarded at the federal level that Taiwan could come under siege by the Chinese Communist Party at any time and be used as a bargaining chip with the West, along with Hong Kong, the DPRK armistice, and Western financial interests in Macau. We need to begin evacuating Hong Kong and Taiwan immediately back to the USA -- in fact, China just seized a boat of people fleeing Hong Kong last night, the first time they have done that (and on what charges, nobody knows, not even the Chief of Police of Hong Kong). As the President said today, we are going to break it off cold with China, and as you expand your knowledge of the situation, you will realize that China claims Taiwan as part of its territory and the people of Taiwan do not.
Hong Kong was legally handed over to China, but with a 50(?) year handover period which they're breaking. Taiwan was never China's. A bunch of Chinese nationals fled to Taiwan during the Chinese Civil War, and China would love to have Taiwan, but Taiwanese people are basically Chinese refugees and do not want that. China has been posturing to conquer them for years. My wife's parents (Taiwanese) immigrated to Canada during one of the periods of high tension between the countries. China was flying fighter jets over Taipei as an intimidation tactic.
Lose its competitive advantage, slowly consolidate remaining assets into private hands with borderless migration capacity, elect a government composed entirely of con men, and five to ten years from now, begin losing every violent international confrontation that it tries to start?
I agree with everything but...the losing of every violent international confrontation.
Maybe you'd lose against other modern, well-integrated forces. But in much of the world, forts, trenches, and gun emplacements are still the best they have. And those are simply butter to the modern military knife.
Barring a major catastrophe (WW3, another American civil war, a pandemic that kills a large percentage of the younger population, etc.), it will take far more than 10 years to decline past that point.
The Iraq Military Exercise (I can't call that a war) is a clear example of that.
What war could the US start without the usual and immediate involvement of Russia, China, or Iran -- I mean, except for a war with its own citizens, that is.
The US lost the Afghanistan war, is slowly losing Iraq to Iran, lost all face and is getting humiliated in Syria, failed to prevent Russia from imposing its presidential candidate, is alienating its European allies (see the Iran sanctions debacle), and is supporting savage Saudi Arabia, which is losing the war in Yemen.
The US won Afghanistan handily, they just didn’t know what to do with it when they had it. I think America would have been better off placing Afghanistan under a military governor for a few years while they built up the country’s institutions.
Most powers make the same false assumption that they "easily won" an insurgency/guerrilla war.
No, you didn't win anything: you temporarily held territory, you wasted enormous resources, and you were in fact beaten by goat herders in the end.
Without putting the whole country in a concentration camp and re-educating it for 20 years, and without providing an alternative economic model, a central administration in Afghanistan is irrelevant whether you run it for "a few years" or decades.
I think you’re conflating Afghanistan with Vietnam. America never fully committed to Afghanistan as they did with Vietnam. From the very outset they tried to do it on the cheap by relying heavily on the northern alliance. Subsequent to the invasion, America was far more interested in pursuing the Iraq war, so Afghanistan quickly became a backwater.
This meant that America became heavily reliant on local war lords (some of whom had dealings with the Taliban) to ensure security and maintain order. This undermined the government they were trying to build in Kabul and contributed to a culture of corruption. None of which endeared the common afghan to their newly formed institutions. The Taliban exploited these weaknesses with classic insurgency tactics and gradually took territory from the weak central authorities.
This was all entirely avoidable, America just didn’t stay focused on the mission.
Russia completely achieved its objectives in Ukraine, Georgia, and Syria, without putting on a "we have won" show.
Sure, there are sanctions, because Russia actually annexed Crimea.
Russia knows very well that territory grabbing in itself is pointless in the 21st century; Russia grabbed just enough territory to nullify any chance that Ukraine or Georgia will join NATO.
Don't get me wrong, as an Eastern European, I hate what Russia stands for, but militarily and geostrategically, they know very well what they are doing, because they cannot afford not to, unlike the USA.
When's the last time we won a violent international confrontation we were involved in? Also, yeah, the rest of that is both incredibly depressing and of such high plausibility that I can't see another way of it turning out. Pretty much like the US in Neal Stephenson's Snow Crash.
Idk, the US "won" in Iraq very quickly (toppled the head of state, dissolved the official military). It was the occupation and rebuilding efforts afterwards that it failed at, IMO.
I was talking about the rational objectives of the US as a state, if we agree that the US is a captured state by criminals with unknown strategic objectives or contrary to the US as a country, then, all bets are off.
1. The US was in no danger of being left without oil; the price of oil skyrocketed after the US invasion, and the "access" was certainly not worth the trillions of dollars that the war cost, not to the US as a whole.
2. The US already had a military presence in Saudi Arabia, and without the vicious circle of violence it itself caused, the US had no reason to maintain a permanent military presence in Iraq.
Just see the recent Soleimani debacle -- the US is one such debacle away from being kicked out of Iraq after blowing those trillions, killing hundreds of thousands of civilians, helping spawn Daesh, and causing economic devastation in Iraq and the region.
But the US could have just bought the oil on the open markets.
In 2008, the US imported 600,000 barrels of oil per day from Iraq -- that's when oil peaked at ~$150/barrel -- but even if oil had stayed at $150/barrel for the whole year and the US had just stolen it without paying Iraq anything, that's just ~$33 billion.
It would take the US 60 years to recoup all the money it spent on the Iraq war if it were to steal all the oil from Iraq at 2008 import levels, or 380 years at today's import levels and valuations (assuming the oil was war booty -- it is not).
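The back-of-the-envelope math works out roughly like this (the ~$2 trillion total war cost is an assumed, commonly cited figure, not one stated above):

```python
barrels_per_day = 600_000     # 2008 US imports from Iraq
price_per_barrel = 150        # ~2008 peak price, USD
war_cost = 2e12               # assumed total Iraq war cost, USD (illustrative)

annual_value = barrels_per_day * 365 * price_per_barrel
print(f"annual oil value: ${annual_value / 1e9:.2f}B")         # $32.85B
print(f"payback period: {war_cost / annual_value:.0f} years")  # ~61 years
```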
There is no way the war was started over oil, that would be beyond dumb.
My personal theory is it was just the industrial-military oligarchs who started the war as to forever saddle the US taxpayer.
The US used to have tens of thousands of troops in Taiwan. The US military formally withdrew in 1979, although it has quietly retained small numbers of US military personnel in Taiwan ever since, responsible for training and liaison with Taiwanese military forces, but not enough to be militarily significant.
A quick, unexpected, massive deployment of significant US military assets to Taiwan would put Beijing in a very difficult position. Either Beijing attacks, and starts a shooting war with the US – which would do massive damage to the Chinese economy; or else, Beijing doesn't and loses a lot of face in the process.
It would be a rather risky, high stakes gamble, but one in which the US might come out in front.
And what happens to the US (and world) economy if there's a war? And how would that war end without taking the world with it?
I think COVID has shown that Chinese society/government/people are much better equipped for big shocks than the US is, so if it's a protracted thing they would have a huge advantage.
You're saying unexpected, but China has likely simulated this situation a million times and prepared a plan. I would bet China is constantly monitoring for such a deployment and would react long before the troops set foot on Taiwan. AFAIK the nearby islands are already heavily militarized.
Also, I'm not sure how the people of Taiwan would react to such a deployment. Neither Taiwan nor China recognizes the other as a separate country. They both claim to be the rightful government of one unified China.
IIUC Graphcore chips are very different from Intel chips, so idk if their manufacturing can be compared. Or does chip manufacturing not depend on what chip is being manufactured?
I read everything I could find, and even contacted MS to get a demo of the Graphcore IPU, with no success. So from my perspective something smells fishy: you can rent a quantum computer for one minute, but you cannot get access to IPUs -- the only available access is $10K a month.
Recommenders, speech to text, text to speech, page ranking in search engines, fraud detection, video analytics, content filtering (from dick pics to censoring news), etc.
Getting very close to the end now. The nearest-neighbor distance between Si atoms in a crystal is 0.235 nm, so a feature with 3nm width is 12.7 Si atoms across. It is amazing that this can be done, and the physics are increasingly weird at this level of scaling.
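A quick sanity check of that figure:

```python
si_si_nm = 0.235   # Si nearest-neighbor distance in the crystal, nm
feature_nm = 3.0
atoms_across = feature_nm / si_si_nm
print(f"{atoms_across:.2f} Si atoms across")  # -> 12.77
```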
Once upon a time (the "Dennard Scaling Era") VLSI circuit design used the same relative geometry at different "feature sizes" - all dimensions of the design(wire width, wire spacing, gate length, gate pitch, ..., etc.) scaled by the same amount from generation to generation, so it was possible to completely specify the transistor layout with a single dimension (generally called 'L'), and derive all other measurements from that as a multiple of 'L'. Transistor density was proportional to 1/L^2.
The measurement that was used originally for 'L' was the gate length.
As designs shrunk below 40nm, it became impossible to shrink _every_ dimension proportionally. In particular, for planar silicon, gate length stops shrinking around ~30nm, but other things could still shrink. This meant that transistor density could still increase, but the relative geometry of wires/gates/spacing/etc. had to change, so it was no longer possible to specify the full geometry with a single number.
But people liked the single number as a handy way of comparing processes, so marketing kept using it. The way they decided to do that was mostly to try to keep the proportionality between a process's transistor density and 1/L^2.
To the extent a "feature size" number of a process means anything, it means "the relative transistor density of this process is equivalent to what you would get if you had used the old (>40nm) geometry, and shrunk 'L' to the specified feature size". Even that relationship has degraded in recent years - now it's more like "we calculate the new feature size as the size of the previous process divided by sqrt(2)".
Regardless, as stated in the parent, there is no single dimension of any recent process that corresponds to the '3nm' number.
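The naming convention described above can be sketched in a few lines: since density goes as 1/L^2, dividing the "node name" by sqrt(2) each generation implies a doubling of density, and starting from 40nm it reproduces the familiar marketing sequence.

```python
import math

# Each marketing "node name" divides the previous one by sqrt(2),
# implying a doubling of transistor density under the old 1/L^2 rule
# (density ~ 1/L^2, so L -> L/sqrt(2) means density x2).
node = 40.0
names = []
for _ in range(6):
    node /= math.sqrt(2)
    names.append(round(node))
print(names)  # -> [28, 20, 14, 10, 7, 5]
```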
What would be even more useful is an actual answer to the underlying question the OP seems to be asking: how much further until one of the many dimensions you are talking about simply runs out of Si atoms?
In other words, however "made-up" the 3nm marketing number may be, physics limits should still dictate a lower bound for it, and the OP seems to be wondering what that is.
There isn't an answer to that. We've already hit physical scaling limits for the old style (>40nm) of transistor manufacturing. Each successive generation since then has employed new tricks to improve performance, and each jump in performance is labeled with a smaller process node size number. This can keep going so long as there are more optimizations to be found. And since not all process optimizations involve shrinking dimensions, it's not necessarily the case that we can predict from physical principles when this will end.
For example, stacked chips are increasingly being used but are fundamentally limited by heat transport. Maybe when we get into sub-1nm "sizes" the process nodes will be defined by how well they transport heat out of volumetric chip designs? Or we'll switch to twisted graphene superconductors for certain components which increases efficiency without necessarily shrinking feature sizes. Etc.
I'm just throwing those possibilities out. The point is we can't predict when scaling will ultimately end.
That's still not a straightforward question to answer, because transistor shrinks aren't just about shrinking some dimensions while others stay at their limits. Transistor geometry has changed in more fundamental ways. Beyond ~28nm, the industry switched from planar transistors to FinFETs, so now instead of gate width we have an extra dimension and have to consider stuff like fin height and fin thickness and pitch. Starting around 3nm, we'll be seeing "gate all around" transistors—GAA FETs, in the form of nanowire, nanoribbon or nanosheets.
TSMC's 5nm 'N5' process -- the highest-density process currently shipping -- has a raw transistor density of 173 million per millimeter^2. (https://en.wikipedia.org/wiki/5_nm_process)
If the transistors were laid out on a square grid (they aren't - it's rectangular), each square would be 76nm on a side. This area includes the transistor itself, the contact area (to connect the transistor to wires) and the required spacing to prevent the transistors from interfering with each other.
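That 76nm figure falls straight out of the density number:

```python
density_per_mm2 = 173e6                    # TSMC N5 raw transistor density
area_nm2 = (1e6 ** 2) / density_per_mm2    # 1 mm = 1e6 nm, so 1e12 nm^2 per mm^2
side_nm = area_nm2 ** 0.5                  # side of a square holding one transistor
print(f"{side_nm:.0f} nm per grid-square side")  # -> 76 nm
```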
> "Regardless, as stated in the parent, there is no single dimension of any recent process that corresponds to the '3nm' number."
That's interesting - are they just entirely making that up then? What's the 3nm supposed to represent?
It seems like it's one thing to pick a specific dimension length to measure even if it's not proportional to all of the others, and another to just pick one that isn't represented at all.
3nm and 5nm are just marketing names; they do not represent any geometry of the transistor. Probably the best analogy is that 3nm would be the average length of the side of a pixel. One draws a transistor, or anything else, from many pixels.
The exact details are under NDA but to get a 'very' approximate idea of the scale of things one can look at the 5nm Wikipedia page.
They list the metal pitch as 30nm for TSMC's N5 node, so in general two pieces of metal cannot be within 6 'pixels' of one another. One gets a rough guess at the distance between transistors by looking at the gate pitch (roughly 10 pixels in this case), but that measurement comes with a lot of caveats too.
Keep in mind this is when you're going out of your way to make something tiny; there are many good electrical-engineering reasons to make transistors larger still, and quite a lot of them are.
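The 'pixel' arithmetic above can be sketched like so; the gate-pitch value here is an assumed, approximate figure, as the comment warns.

```python
pixel_nm = 5.0          # the "5nm" marketing name, treated as pixel size
metal_pitch_nm = 30.0   # N5 metal pitch, per the Wikipedia page
gate_pitch_nm = 51.0    # assumed approximate N5 gate pitch (illustrative)

print(metal_pitch_nm / pixel_nm)   # 6.0 "pixels" between metal lines
print(gate_pitch_nm / pixel_nm)    # ~10 "pixels" between gates
```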
I hate when people complain about how uninformed everyone is, but don't put in even the tiniest amount of effort to help people become more informed. At the very least, include a link.
It's not fair to blame people for not knowing that "3nm" does not mean "3nm". Knowledge about measurement units is more prevalent than knowledge about marketing practices of semiconductor companies. Blame those misleading practices instead.
Exactly -- one vendor's 7nm may well contain more atoms than another vendor's 10nm node. Node numbers mean nothing beyond comparisons within the same vendor's lineup.
Yes, it's sad that node sizes are marketed in the same way the GHz race was at one stage, and here the disparity is even greater when comparing nodes like for like.
The narrowest dimension of a FinFET is roughly the process node size, within a small factor, so my comment applies. In fact, the first FinFETs were actually narrower than the node size when they were introduced.
It would be nice if we could standardize on some physical feature size and then name everything consistently. It's silly to have a measurement system which is calibrated differently for what should be apples-to-apples comparisons. Imagine if some car manufacturers listed the length of their car from the rear bumper to the steering wheel while others measured from the rear bumper to the front bumper.
The comment is not in the gray as I write this, but one tends to get better results generally by providing more information in a positive light, rather than simply saying that something is wrong without providing anything better.
For example, if the parent comment had said, "Yes, the control over small numbers of atoms is interesting. Although transistors are going to be much larger than that, it is cool that the shrinking feature size allows [making up a "fact" here for demonstration purposes] edges of transistors to be sharper and a little closer together, so yields are higher for a given transistor density," then it would be more useful and better received.
But keep in mind, the parent poster likely didn't mean his words in a positive light. To portray them as such would be dishonest, which is unethical, confusing to all involved, and would lead to further confusion and incoherence as the conversation progressed.
They are more like the minimum possible feature size, not the average feature size. The real metrics you want to look at are things like transistor density, power density, power use, etc.
I'm not sure why I was downvoted, but my comment is technically correct. (I work in ML and supercomputing; nearly all modern large ML and supercomputing devices are effectively a tradeoff between how much heat-producing CPU you can fit in a space and how many latency-killing long cables you have.)
It was far worse then, as copper places serious upper limits on long-distance high-speed transport. Data transmission latency over copper tends to be 2x that of light (a physical bound, not just empirically observed), and making very long cables with copper is impractical due to signal losses. Once long fiber was cheaply available, people started to use it for the long links (imagine a toroidal mesh with wrap-around links -- the wrap-around links are very long, 100+ m, and need to be fiber).
However, long fiber causes latency problems: those wrap-around links will slow down any global reductions you need to do.
It's getting worse for AI chips and better for memory. GPU-type devices doing machine learning have all that compute silicon running flat out all the time, all emitting heat. Flash memory, at least so far, doesn't seem to have the data rate to have a heat problem. At any one time, most of the device is inactive.
It's amazing that there are 2 terabyte USB sticks for US$40.
It is getting worse for Flash too. There are consumer M.2 SSDs that have huge heatsinks, and some even have fans!
Mark Cerny says this is a concern for PS5 which will have user expandable SSD storage. Unlike SATA drives, the M.2 standard doesn't define z-height, and the drives that meet the PS5 min spec are too thick right now.
The main problem there is the SSD controller interfacing between the flash memory and the CPU. Right now, the only SSD controller that supports PCIe gen4 speeds and is small enough to fit on a M.2 card is a controller made on TSMC's 28nm process. Everybody else in the industry decided to move to TSMC's 16/12nm processes before trying to ship a high-performance PCIe gen4 SSD controller. Doing really advanced error correction at 5+ GB/s takes some juice, so SSD controllers have to follow in the wake of CPUs and smartphone SoCs by moving to smaller but more expensive (at least up front) process nodes.
It's not going to happen any time soon, but I'm looking forward to "computronium" as described in sci-fi - a computing substrate optimized at the molecular level to the point that the performance becomes the function of its volume.
I wonder if we can go to even higher density. From entropy constraints, computing performance asymptotically scales with surface area [0,1]. The general bound is the Bekenstein bound [2].
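For reference, the Bekenstein bound in its usual form limits the information I storable in a sphere of radius R containing total energy E:

```latex
I \le \frac{2 \pi R E}{\hbar c \ln 2} \ \text{bits}
```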
It'll be interesting to see how/if quantum effects will be used in future computing tech (well, aside from the usual quantum bandgaps in semiconductors). Effects like thermal superconduction could enable moving much more heat out of a given area, perhaps built with quantum dots on semiconductors (IIRC, some researchers tried that).
I guess if we hit the edge of feature density, we might start building things out of 3D FPGA-type designs of homogeneous modules, but because of heat, the capacity of such a design is ultimately going to be limited by its surface area, not its volume, no?
Indeed. I know there was a working 1nm node transistor out of Berkeley a couple years ago, but as far as I know that's pretty close to the limit. I'm really curious what will come after we hit that in production in ~10? years.
Is no one interested in the stats listed on the card? 200TFlops for 300W? That is incredible power efficiency! What kind of connectivity solution is needed to feed it?
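Quick arithmetic on those claimed figures, taking both at face value:

```python
tflops = 200.0   # claimed peak throughput, TFLOPS
watts = 300.0    # claimed board power, W
gflops_per_watt = tflops * 1000 / watts
print(f"{gflops_per_watt:.0f} GFLOPS/W")  # -> 667
```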
Also, the A100 is cheaper, and its perf is verified by MLPerf.
Graphcore's claims are just that, claims. Some of these have been debunked already by people with access to the hardware (e.g. promised 10x speedups were more like 1x).
Do you have a link to those results by independent researchers? I have also been very skeptical of GraphCore’s claims (and have talked about it on reddit) but haven’t had access or heard of anyone’s experience.
It's 21 minutes, not 5 minutes, but here's a lecture by Chris Mack on EUV from 2013. He's fantastic. (Skip around or increase playspeed if you don't have more than 5 minutes to spare.)
Chips produced by EUV work the same as any other chip. And even chips that are made with EUV are only using EUV for the very finest features - they're still using plenty of 'regular' lithography for the dozens of other lithography steps.
I don't know of a video but I'll try to summarize.
How to make a chip:
1. Slice pure silicon into a wafer.
2. Apply a thin coating of a light-activated chemical which does something to the wafer (etch, build up metal, dope[1] silicon, etc.).
3. Shine light through a mask which exposes a pattern into the silicon wafer.
4. Rinse off the un-activated chemical.
5. Repeat steps 2-4 many times for each layer.
6. Slice the wafer up into rectangular chips.
7. Package the chips into a plastic, metal, or ceramic package with exposed metal contacts that can be soldered to a circuit board.
Why EUV?
Extreme Ultraviolet -- a really small wavelength (13.5 nm for the EUV tools in production). When you're trying to expose really small structures you need a really small wavelength.
Why is EUV so hard?
EUV is extremely high energy so it destroys everything, and it takes a huge amount of energy to create the light source. Also: quantum things.
Apparently that’s according to Intel folks discussing it before they can produce it though. We’ll only really know when it’s ready. A lot of times, you make compromises to get something to market... so maybe even Intel 7nm won’t end up being “Intel 7nm” ;)
Gentle reminder that 3nm is a meaningless marketing number that represents neither transistor size nor element density, and it does not correlate in any way with the Xnm numbers of other companies.
>> ... it does not correlate in any way with all other Xnm production of other companies.
True. But it should be significantly better/smaller than TSMC 5nm, which is claimed to give 1.85x the density of TSMC 7nm -- the process they are using now to make a chip with 59 billion transistors.
The only other players to compare to are Intel and Samsung.
It does correlate... generally the numbers reflect actual end performance relative to one another; nobody is shipping a smaller-named node that's actually worse than a bigger-named node from a different company.
In many cases, as with Samsung, it does not even refer to the feature size, and just denotes a generation of the process or some arbitrary incremental improvements.