Discussion in 'Article Discussion' started by bit-tech, 18 Jul 2019.
So the goal was too ambitious but then they go on to say:
So much for lessons learned, ehh?
Aggressively swapping sockets for chips that LITERALLY RUN ON THE SAME ARCHITECTURE.
Aggressively milking their dominant position with MINIMUM INNOVATION FOR BEST PART OF A DECADE.
Aggressively losing the IPC crown because of the 14nm roadblock and COUNTING THEIR LUCKY STARS FOR BS TDP RATINGS AND HIGH CLOCKS.
"Now AMD have jumped to 7nm we can't really keep milking you on 14nm for the rest of forever."
I'm not the only one that thinks when anyone says, "We have listened/we have learned our lessons" that they haven't in the slightest.
IIRC the army calls them "learnings identified", probably because they know that no bugger will ever listen and will keep making the same mistakes. "Lessons learned" also assumes that (i) an organisation has actually taken the learning onboard and (ii) actually done something to prevent/mitigate a repeat.
You know we are on a downward slope when marketing/PR/political droids take over what was previously carefully defined professional language and twist the meaning so much that no-one anywhere can ever use the same language professionally without being laughed at.
No surprise to anyone: Intel remain the only fab to move to Cobalt interconnects (lowermost metal layers), everyone else kicked the can down the road and are still using Copper, and are thus stuck at minimum interconnect widths due to minimum liner thickness. Samsung are the only other fab to even release papers on FEOL Co, but aren't even close to implementing it (still grappling with getting EUV to an actually usable state).
Yeah, but they only really did that because some 'study' sort of showed that copper interconnects became a problem under a certain size. The theory didn't stand up when put to the test; basically they discovered really small copper wires didn't misbehave as they should have.
Not true, the limitation in Copper scaling is not the copper itself, but the lining necessary to prevent migration from the surrounding die into the Copper. That lining has a minimum effective thickness, and that is what limits further scaling. For example: if your minimum lining thickness is 2nm then a 1nm Cu trace gives a total width of 5nm. Cutting that 1nm Cu trace width to a teeny 0.25nm (4x scaling) only reduces your total occupied width to 4.25nm (<1.2x scaling). Co being much less sensitive to migration means that even doubling trace width to accommodate Co's higher resistance means your occupied width goes down to 2nm once the liner is omitted (2.5x scaling).
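The arithmetic above is easy to check. A minimal sketch (the `occupied_width` helper and the specific nm figures are just the example numbers from the post, not real process parameters):

```python
# Back-of-envelope check of the liner arithmetic above (all widths in nm).
# A Cu trace needs a diffusion-barrier liner on both sides; linerless Co needs none.

def occupied_width(trace_nm, liner_nm=0.0):
    """Total width a trace occupies: conductor plus liner on each side."""
    return trace_nm + 2 * liner_nm

cu_wide   = occupied_width(1.0,  liner_nm=2.0)   # 1nm Cu + 2x2nm liner -> 5.0
cu_narrow = occupied_width(0.25, liner_nm=2.0)   # 4x narrower Cu       -> 4.25
co        = occupied_width(2.0)                  # 2nm linerless Co     -> 2.0

print(f"4x Cu shrink buys only {cu_wide / cu_narrow:.2f}x overall")  # ~1.18x
print(f"linerless Co buys {cu_wide / co:.2f}x overall")              # 2.5x
```

The fixed-liner term dominates, which is the whole argument: shrinking the conductor alone barely moves the occupied width.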
That's just semantics, I would've thought explaining about diffusion barriers and electromigration was a bit redundant considering the level of understanding, and the effective area of the diffusion barrier + the Cu interconnect doesn't change the substance of what I said.
The total effective width is the entire point: if you can't scale that down, then it doesn't matter how narrow you can lay down your Cu lines themselves if the line-to-line pitch isn't getting any narrower.
No it's not, that's like saying the total effective width of the cable I use to plug in my TV is the entire point. The TV doesn't care if there's 1000x more insulating material than wire or no insulating material whatsoever; what matters is the gauge of the wire, not what surrounds it (excluding the obvious electrocution hazard from having no insulation and the unsuitability of using a cable that's 3 feet in diameter).
Like I said, they discovered really small copper wires didn't misbehave as they predicted. That's got nothing to do with not being able to lay down Cu lines at whatever size; they're perfectly capable of making the interconnects thinner.
The way I perceive it is that complacency has come to bite Intel on the bum and they're trying to put a positive spin on it as being because they aimed too high.
For me as a consumer thankfully AMD are back on the scene, the constant security issues that have dogged Intel don't make me feel comfortable (be that right or wrong).
I don't know, maybe I feel the wrong way about it all but, as we all know, business is all about feelings.
I don't think the struggle to hit 10nm has anything to do with complacency, but rather my understanding is that the methods Intel chose to achieve 10nm proved unworkable, and it took them a long time to realise that they were just throwing good money after bad.
IC scaling is based on packing density, not just feature size. Interconnect density is determined by minimum pitch, i.e. how close you can pack traces together. If how close the traces can get is limited by the isolation area, and the isolation area cannot be shrunk further, then you are limited in how close the traces can get regardless of the width of the Cu portion of those traces. It's a similar limit to the 2nm gate oxide thickness that everyone hit (leading to CPU/GPU driving voltages sitting around 1V), leading to all the designs using multiple gates per transistor to drive sufficient current as switching frequency increases (i.e. why increasing clock speed increases die area), and why everyone is working on adding Z-height to transistors to gain gate area.
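The pitch point can be illustrated with made-up numbers (the values below are purely illustrative, not real process figures): density is set by centre-to-centre pitch, which is trace width plus the isolation/liner-limited spacing, so shrinking the trace alone gives diminishing returns.

```python
# Illustrative only: lines of wiring per micron is 1000 / pitch (pitch in nm),
# where pitch = trace width + spacing. If the spacing is floor-limited by the
# liner/isolation, shrinking the Cu trace 4x doesn't come close to 4x density.

def lines_per_um(trace_nm, spacing_nm):
    pitch = trace_nm + spacing_nm      # centre-to-centre pitch in nm
    return 1000.0 / pitch

print(lines_per_um(20, 20))   # 25.0 lines/um at 40nm pitch
print(lines_per_um(5, 20))    # 40.0 lines/um, not 100: spacing dominates
```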
Close, they appear to have gotten the Co deposition issue solved sufficiently to ship chips (so are just marginally ahead of EUV here) but it took them a LOT longer than they expected, to the point that parallel work in other areas intended for future processes caught up.
Interconnect density is not determined by minimum pitch; do you really think that even though they're fabricating stuff with a feature size of 7nm, they physically can't fabricate smaller interconnects than the 40-ish nm pitch?
It's not because they can't do it, if they wanted they could pack the traces four times tighter. They don't though because physics causes problems, things like electromigration, resistance, and other problems associated with trying to get electrons to 'go down' a really thin wire.
Aaaaaand hence the reason to move to Cobalt interconnects, to mitigate the issues keeping Cu traces more separated. Metal pitch = separation between trace centres, nothing to do with the nominal process scale (e.g. "7nm") which itself has had absolute bugger-all to do with actual feature size for years now.
You brought up the pitch.
And no, that's not the reason to move to cobalt, like i said at the start...
The study by (IIRC) Applied Materials claimed that shrinking Cu interconnects would be problematic much below 40-ish nm; it's since been proven to be less problematic than Applied Materials theorised.
If they wanted they could make Cu interconnects with 1nm cross-sections surrounded by a 1-2nm diffusion barrier instead of the current 30-40-ish nm. Being able to fabricate it isn't the problem; the problem is that theoretical physics says strange things are meant to happen when trying to 'send' electrons down a really thin wire. However, the theoretical physics were not what was seen in the real world.
Cu trace width isn't the problem, it's the width of the area around the traces that isn't getting any smaller.
The diffusion barrier is the problem. It's not doing any more shrinking regardless of the Cu trace width, because if it were any thinner it would no longer do its job of preventing diffusion.
The entire point of an Integrated Circuit is to pack as much circuitry into as small an area as possible. You need to consider both the insulation (and other barriers) and conductor width to determine actual packing density. To use your TV cable example, you could shrink the conductor width of your HDMI cable from 10mm to 1mm across, but if the insulator is stuck at 30cm wide for both it still won't fit behind the back of your TV. The 'unsuitability of using a cable that's 3 feet in diameter' is the exact reason for switching to linerless Co.
The diffusion barriers are 1-2nm whereas the Cu traces are 30-40nm, even if we were talking about total cross section of the DB it still only accounts for 4-8nm.
Not yet they aren't. Maybe if Cu traces were sub-10nm then an argument could be made for the DB taking up too much space; they're not, and so it's not.
No really? I thought the idea was to pack in as little as possible.
IDK where you got this idea that the problem is the DB. It's not; as I've been saying, it's the (theorised) increased resistance and electromigration, something that in practice has not caused the major issues they were expecting (smaller transistors need less current, who knew), and the movement of the ions within the interconnects hasn't happened as they were expecting (for reasons they still don't quite understand).
The Corky vs edzieba battle is quite fascinating. I wish I understood what you were going on about.
I'm glad Intel have to eat some humble pie tbh.