Discussion in 'Article Discussion' started by bit-tech, 27 Jul 2018.
For a tech company as big and rich as Intel, this is getting embarrassing.
Um, Gareth - the server parts in 2020, I suspect, not 2010...
/ ... walks away whistling
BREAKING: TIME TRAVEL DISCOVERED, INTEL AT THE FOREFRONT... ERR, BACKFRONT? FRONTBACK? BACKBACK?
FS, I'm starting to get the upgrade itch from my 2500k and thought I'd wait around for this next year. Not sure I can wait 12+ months though.
At the current rate that is about the only thing that can help them against Zen 2 in 2019 and Zen 3 in 2020; after all, wringing out every last drop from the old stuff is what they are going to do later this year anyway...
I guess Mr Norrod (AMD’s SVP & GM Datacenter and Embedded Solutions) is even happier now:
I think 2019 will be my upgrade time, Zen 2 7nm with a reported 10-15% IPC improvement over Zen+ may just be enough to get me off my 3770.
Mind you that means having to move to W10. I think the 3770/W7 may be saved as a backup...
Intel just seem to continually annoy me now.
In what world did they deliver in excess of 70 percent product performance improvement? Also, if you have to tell people that you're winning, you're probably doing anything but.
Meanwhile... AMD actually have 7nm due next year.
They have delivered over 70% improvement... in XTU, which is a completely BS benchmark that is designed and made by, you guessed it, Intel!
Depends on how selective you are with the numbers, compare the i7-5557U and the i7-8750H for example:
Both are 14nm i7 branded mobile CPUs, both cost around $400 and came out just three years apart.
The newer one is over 150% faster than the old one.
(just don't show the tripled core count and higher TDP on the 'it's faster' marketing charts)
Of course if you don't cherry pick comparisons among mobile CPUs the reality is much closer to 10% than 70%.
Bring on the 7nm Ryzen 2's.
In practical terms, rather than "our paper number is bigger/smaller", Intel have remained competitive even while 'stuck' on 14nm. Do you buy your CPUs based on benchmarked performance and price/perf in the workloads you run, or on the theoretical length of some now arbitrarily chosen litho mask feature?
I think a lot of the complaining about the death of Tick-Tock can be explained by how convenient the Tick-Tock model was for planning upgrades: year 1, avoid the potential early-adopter issues of a new arch; year 2, buy the perfected version; rinse, repeat ad infinitum.
For pretty much the last decade, Intel have stuck to the same release cycle for the consumer sockets: release a new socket, CPU and PCH generation, release second CPU and PCH generation for that socket (with backward and forward compatibility between them), then move to a new socket. This includes the current generation too. The time in which 'Tick Tock' was in effect (last process shrink 2014) has been half the time this release cadence has been in effect.
Not that I really care about the difference in speed between these i7s, but do you mean the faster chip is 150% of the speed of the other, or that it is faster by 150%?
There have been many, many examples of this from many, many people. They say it's 120% faster when they mean it's 20% faster. Something that is 100% faster than a different thing is twice as fast.
The old CPU is 100% to provide a baseline, the new one is 250%, so the difference is 150%.
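The baseline arithmetic above can be sketched in a couple of lines (the scores here are hypothetical, purely to illustrate the "% of" vs "% faster" distinction):

```python
# Hypothetical benchmark scores, just to illustrate the percentage maths.
old_score = 100.0  # baseline CPU, defined as 100%
new_score = 250.0  # newer CPU scores 2.5x the baseline

# "250% of the speed" of the old chip
percent_of_baseline = new_score / old_score * 100

# "150% faster" than the old chip (the difference relative to baseline)
percent_faster = (new_score - old_score) / old_score * 100

print(percent_of_baseline)  # 250.0
print(percent_faster)       # 150.0
```

So "100% faster" and "200% of the speed" describe the same chip; mixing the two phrasings up is where the confusion comes from.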
But as I explained in the post, it is a cherry-picked example. Obviously I designed the criteria (14nm, i7-branded, mobile, approx. $400) to deliver the biggest result, which includes taking advantage of Intel writing 'i7' on just about anything that can go in a laptop; the old one is actually a paltry dual-core while the new one is a six-core.
While that example obviously won't repeat across Intel's entire lineup, it served its purpose: to demonstrate that Intel's claim of a 70% improvement (with no criteria mentioned other than 14nm) is not as impossible as it appears at first glance.
They must also be redesigning everything to be Spectre/Meltdown-safe, which is going to take time. It wouldn't surprise me if that is another reason for the delay.
Given that we are not even sure how many more variants of Spectre/Meltdown will be identified in the coming weeks, months, or maybe years, I don't believe this.
It is impossible to have 100% hardware protection against Spectre-class attacks, because the only way to do that would be to remove speculative execution entirely, and that would be like going back to before the steam engine.
Partial mitigation of the risk may be possible in hardware, but for the rest it will come down to software, since we can't just send the world back to the stone age.