How is your automotive eFlash at 28nm... oh, you're still working on it (since 2018)?
Well, guess we won't have many display drivers (or displays) or autos then, or maybe marketing should pull their heads out and smell the roses.
I mean, I get it. Most things should transition to 28nm on 300mm wafers for process & equipment reasons, but in order to do that, many of them need the right process to exist, and the foundries are concentrated on the latest nodes that make margin $$, so they don't develop critical features even for 28nm. Could your customers redesign their entire architecture and packaging? Yes, but it would take years to decades to prove reliability.
I'll note that Apple's rumored OLED on Si for MR/AR is likely on a giant 80-90nm process in 300mm at TSMC so for money and volume, they'll do most anything... even build capacity.
If TSMC has enough demand to sell everything they make, they don't really need to take specific client needs into account.
I think the auto industry is looking to see if they can bypass the whole above mess with their own fabs. They don't need fancy processes; they need something reliable that they can depend on for years. The cost of a fab, though, means they would need to worry about anti-trust, as they can't go it alone.
This thread is a good example of the legacy auto mentality of blaming a supplier, instead of taking responsibility for the situation they are in.
Too many mainstream articles on semiconductors are just vague, uncheckable facts combined with filler that seems to have been suggested by Intel's PR department, even when Intel is totally irrelevant. Intel needs delicious subsidies!
That will leave some decent spare capacity for automotive.
And maybe when a lot of such chips would be done in 28nm, it would make sense for TSMC to invest more in optimizing the process for them.
But yes, many nodes are long-lived. The nodes they are no longer expanding will have been around for quite some time, and I can imagine the equipment producing current output carries repair/servicing and materials costs that are starting to make smaller nodes more cost-effective for them as a manufacturer, so legacy nodes may well start becoming more expensive for suppliers to access going forward. For some chips that may prove less suitable, and existing designs may not accommodate a simple shrink: from my understanding, if you have a 40nm chip design, just running it on 28nm without any changes is not possible, or certainly not straightforward. Then there is the validation/testing for certification the customer will need to do, and for some chips that may prove more costly than sticking with the proven existing nodes.
So it will be interesting how this plays out. I'm not aware of any real standouts, but I'm mindful that for some companies using existing older nodes, things will not be as clear-cut as many think.
From what I understand, a "node" in this context isn't like a printer resolution; it's more like a lego kit of transistors and such that TSMC has tuned to the layer heights / counts / materials they plan to deposit. Since these details change between nodes, the 2D shapes aren't portable. You have to swap components.
But these are services, not products. There's no one-size-fits-all way to move design Z from process X to process Y, for arbitrary values of X, Y and Z, nor is there ever likely to be.
And even once it's done, there are still costs to verify and test the results, which isn't cheap either.
We may well end up, in a decade or two, with suppliers having to use China or Russia for production as the only way to get access to the nodes they need. It's not as if things like that haven't happened before; think of how NASA was for a while dependent on old, tried-and-tested Russian rockets for space launches. I just hope this is not a future problem that is allowed to creep up and hit us all.
I'm disappointed that nobody really bothers to work on the cost reduction side of older nodes. I kind of thought we could easily and cheaply turn out, say, 32nm chips by the million.
The price of ongoing power consumption probably dwarfs the price of the chip itself in terms of cost of ownership in most cases.
E.g. an i7 that draws 65W at idle uses about 1.5kWh/day. That's ~$0.20/day where I live, about $6/month or ~$73/year. Max draw is ~4x that. I've probably paid more in power to run my CPU than I paid for the CPU itself.
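The arithmetic above, spelled out (taking the claimed 65W idle draw at face value, and an illustrative ~$0.13/kWh rate, which is an assumption, not the commenter's stated tariff):

```python
# Rough cost-of-ownership arithmetic for an always-on CPU.
# Assumptions (illustrative): 65 W idle draw, $0.13/kWh electricity.
idle_watts = 65
rate_per_kwh = 0.13

kwh_per_day = idle_watts * 24 / 1000        # ~1.56 kWh/day
cost_per_day = kwh_per_day * rate_per_kwh   # ~$0.20/day
cost_per_year = cost_per_day * 365          # ~$74/year at idle alone

print(f"{kwh_per_day:.2f} kWh/day, ${cost_per_day:.2f}/day, ${cost_per_year:.0f}/yr")
```

At max draw (~4x) the yearly figure would approach $300, which is indeed in the price range of the CPU itself.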
I don't know which chip you're talking about but a 12900K idles at ten watts.
There are things like PMICs that get little to no benefit from moving to 28nm; I doubt that will happen. But there are also plenty of designs stuck on older mature nodes that would get some benefit but have no financial incentive to move, or that require specialty nodes not yet on offer, which TSMC is currently working on. (Contrary to what comments here suggest, i.e. that they don't give a toss about it.)
So the whole thing is basically about balancing fab capacity, and there is no better time to do it. You are either stuck waiting for capacity on a node, or you move to a 2Xnm node where new GigaFabs are being built and capacity planning is much better. Do you want your thousands-to-millions-of-dollars product to be on hold because a (what used to be) $2 chip can't be fabbed?
A very small set of processes allows one-way reuse: e.g. you can build a tsmc28hpm design directly in the tsmc28hpc process (but you can't build a 28hpc design in the 28hpm process). Yet, for instance, neither of those is directly compatible with the tsmc28hpc+ (note the 'plus' in the name) process. And all three of those have the same feature size and are made by the same foundry.
You can't take a tube radio and manufacture it with opamps without substantial design changes.
Samsung is starting to use GAA right now, and TSMC will use it for the next process node.
https://www.theregister.com/2021/05/06/ibm_2nm_semiconductor...
https://www.ibm.com/ibm/history/ibm100/us/en/icons/copperchi...
So at 30% margin on a $0.33 part, you can expect to break even 30M sales later!
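The implied break-even math, made explicit (the ~$3M one-time porting/NRE cost is a hypothetical back-solved from the comment's "30M sales" figure, not a number from the thread):

```python
# Break-even unit count for a node port, given per-unit margin.
# Assumptions (hypothetical): $0.33 ASP, 30% margin, ~$3M one-time port cost.
price = 0.33
margin = 0.30
nre_cost = 3_000_000

margin_per_unit = price * margin               # ~$0.099/unit
breakeven_units = nre_cost / margin_per_unit   # ~30M units

print(f"break even after ~{breakeven_units / 1e6:.0f}M units")
```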
Memory is a delicate balance between "we need to store the data strongly" and "we need to write the data quickly" and "make it small".
Memory is one of the things that fell off of Moore's Law quite a while ago.
From what I understand, doing so was no small effort and is considered a strong triumph for the team that accomplished it.
Did you throw RTL at a layout engine and let it figure it out? Pretty damn close to automated and could get to automated with a little upfront elbow grease by a company specializing in such things.
Heavy analog design? It's going to be a lot more work.
I.e. the traces would have essentially the same standing-wave tuning requirements when modelled as waveguides; would catch a harmonic of the original frequency when acting as antennae; etc.
Even if you ignore all the other complications when switching nodes (and there are a LOT!), this alone prevents simple downscaling of circuits. It is very likely that after downscaling at least part of the interconnects have to be rerouted.
As for automation of that task: it's the traveling salesman problem in disguise. Which means that you CAN automate it, and there exists software for that purpose, but the results are hardly optimal and most likely leave quite a lot of possible performance on the table.
Add to that all the other necessary changes when switching nodes, and it becomes fairly obvious that switching nodes, even if only porting 1:1, is a massive effort that can easily span YEARS.
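The "automatable but suboptimal" point can be felt with a toy version of the TSP analogy the comment invokes: a greedy nearest-neighbour heuristic finishes instantly but leaves distance (standing in for performance) on the table versus exhaustive search. This is a sketch of the analogy only, not how real EDA routers work.

```python
# Toy TSP: greedy nearest-neighbour vs. brute force on a few points,
# showing a cheap heuristic leaving "performance on the table".
from itertools import permutations
from math import dist

points = [(0, 0), (1, 0), (3, 0), (-4, 0)]  # contrived to trap the greedy

def tour_length(order):
    """Total length of an open path visiting points in the given order."""
    return sum(dist(points[a], points[b]) for a, b in zip(order, order[1:]))

def greedy(start=0):
    """Nearest-neighbour heuristic: always hop to the closest unvisited point."""
    left, order = set(range(len(points))) - {start}, [start]
    while left:
        nxt = min(left, key=lambda j: dist(points[order[-1]], points[j]))
        order.append(nxt)
        left.remove(nxt)
    return order

best = min(permutations(range(len(points))), key=tour_length)  # exhaustive
g = greedy()
print(f"greedy: {tour_length(g):.1f}, optimal: {tour_length(best):.1f}")
```

Here the greedy path runs to the nearby cluster first and then has to double back, ending up ~40% longer than the optimum; brute force is exact but blows up factorially, which is exactly the tension real place-and-route tools have to manage with heuristics.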
They focus on high-end, high-margin products and segment their customers accordingly.
EDIT ADD https://www.techspot.com/news/94233-russia-plans-manufacture... "Russia plans to manufacture chips locally on a 28 nm node by 2030" currently they are on 90nm