News Intel leak points to eight-core consumer Cannonlake parts

Discussion in 'Article Discussion' started by Gareth Halfacree, 2 Oct 2015.

  1. Gareth Halfacree

    Gareth Halfacree WIIGII! Staff Administrator Super Moderator Moderator

    Joined:
    4 Dec 2007
    Posts:
    9,419
    Likes Received:
    307
  2. ZeDestructor

    ZeDestructor Member

    Joined:
    24 Feb 2010
    Posts:
    223
    Likes Received:
    3
    Cache coherency, no mention of an iGPU... sounds more like Xeon-D or Atom/Avoton/Rangeley (server chips) than anything else, and those are already 4-8 cores with an integrated memory controller, PCIe (the northbridge, in ye olden days) and PCH (the southbridge, in ye olden days).

    Much excite over nothing I say.

    Next rumour pls.
     
  3. Bindibadgi

    Bindibadgi Tired. Forever tired.

    Joined:
    12 Mar 2001
    Posts:
    36,319
    Likes Received:
    419
    Cache Coherency and SoC = Xeon D, unless consumer chips are going SoC too. Shame Intel are already considering sandbagging this early.
     
  4. Hakuren

    Hakuren New Member

    Joined:
    17 Aug 2010
    Posts:
    156
    Likes Received:
    0
    It sounds interesting, but it's about time to change PC architecture as we know it. Increasing the number of cores is pointless if software can't utilize them. The best option would be to let the GPU take over the role of the CPU entirely. GPUs offer many times more performance, and you could simply increase that performance by adding more cards to the mix, not for SLI/CF and 3 FPS more in game xyz, but just for raw compute power.

    Of course Intel don't want that, but motherboards without a CPU socket, just with PCI-E slots for more cards? Why not? It could be done easily. The problem is with software, which still requires a CPU to organize things.
     
  5. Corky42

    Corky42 What did walle eat for breakfast?

    Joined:
    30 Oct 2012
    Posts:
    7,611
    Likes Received:
    97
    So your solution to software not utilizing more cores is to replace a 4-8 core CPU with a GPU that has something like 500-4000 cores?
     
  6. desertstalker

    desertstalker Member

    Joined:
    7 Feb 2004
    Posts:
    73
    Likes Received:
    0
    The reason GPUs are so fast is that they can only handle certain specific functions (usually certain classes of floating-point arithmetic) which are extremely parallel.

    You cannot run most software on a GPU as they lack the necessary instruction set.
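
    A rough sketch of that distinction in Python (numpy standing in for the GPU-style work; this is not actual GPU code, just an illustration): the first operation applies the same arithmetic to every element independently, which is what thousands of GPU cores are built for, while the second is branchy and stateful, where each step depends on the last, so only a big general-purpose core helps.

```python
# Illustrative only: numpy's vectorised arithmetic stands in for the kind of
# uniform, data-parallel floating-point work GPUs are designed around.
import numpy as np

a = np.random.rand(1_000_000).astype(np.float32)
b = np.random.rand(1_000_000).astype(np.float32)

# Data-parallel: the same multiply-add on every element, no dependencies.
parallel_result = a * b + 0.5

# Control-flow heavy: each iteration branches on the previous result, so it
# cannot be spread across thousands of cores and wants a fast serial core.
state = 0.0
for x in a:
    if state > 1.0:
        state = state * 0.5 - x
    else:
        state = state + x
```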
     
  7. jrs77

    jrs77 theorycrafting

    Joined:
    17 Feb 2006
    Posts:
    5,257
    Likes Received:
    121
    I've said it before, I'll say it again: most tasks can't be spread over multiple threads, so adding more cores/threads to a CPU is a step in the wrong direction.
    The tasks that can be spread over multiple cores/threads are usually run on server CPUs like the Xeon, or they're coded to run on GPUs to begin with.

    The only things I'm interested in for CPUs are low power consumption and as high an IPC as possible. Coupled with the current 4C/8T, that's totally fine for 95+% of software.
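
    For the maths behind that, Amdahl's law puts a hard cap on what extra cores can do once part of a task is serial. A minimal sketch (the 50% parallel fraction is just an illustrative assumption, not a measurement):

```python
# Amdahl's law: if only a fraction p of a task can run in parallel,
# n cores can never speed it up beyond 1 / ((1 - p) + p / n).
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

# With half the work serial, even 16 cores barely manage 1.9x.
for cores in (2, 4, 8, 16):
    print(cores, round(amdahl_speedup(0.5, cores), 2))
# 2 -> 1.33, 4 -> 1.6, 8 -> 1.78, 16 -> 1.88
```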
     
  8. ZeDestructor

    ZeDestructor Member

    Joined:
    24 Feb 2010
    Posts:
    223
    Likes Received:
    3
    All the small chips (LGA115x and lower) are gonna end up as SoCs. The high-end LGA2011 chips (some rather believable rumours of an even larger socket are floating around for Skylake-E/EP/EX), on the other hand, will likely never be, because you can put more CPU in the space you'd otherwise spend putting the PCH on the same die, and get extra performance from it.

    As others have pointed out, the only reason GPUs (and Xeon Phi) are faster is that the applications running on them are insanely parallelizable. Most software (especially user-facing and realtime stuff), in contrast, is quite serialized, so making bigger, faster cores delivers a much greater benefit for those types of tasks than making many-core CPUs. This is why something like an 8-core Atom (C2750 Avoton) can match a full Haswell CPU in some tests, but slows down a fair chunk in games.
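
    A toy way to see that split on any multi-core CPU (the workload names and sizes below are made up purely for illustration): the independent items scale with a process pool, while the dependent chain gains nothing from extra cores.

```python
# Illustrative sketch: an embarrassingly parallel workload vs. an inherently
# serial one. Only the first benefits from more cores.
from concurrent.futures import ProcessPoolExecutor

def independent_item(n: int) -> int:
    # Each item is self-contained, so the pool can spread them across cores.
    return sum(i * i for i in range(n))

def dependent_chain(values: list) -> int:
    # Every step needs the previous result, so this stays on one core.
    total = 0
    for v in values:
        total = (total + v) * 3 % 1_000_003
    return total

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        parallel_results = list(pool.map(independent_item, [200_000] * 8))
    serial_result = dependent_chain(list(range(1_000_000)))
```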

    If you're only interested in getting higher performance for compute, then that is indeed precisely where Intel has gone with Xeon Phi, and AMD and nVidia with GPUs.

    Intel is going even further, incidentally, with Knights Landing (the next-gen Xeon Phi) being able to self-boot and come as a socketed chip, likely sharing a socket with Skylake-EP/EX (the Purley platform).

    CPUs on daughter-boards are already done for 4+ socket x86 servers (IBM X3850/3950, HP DL980, Fujitsu something, Oracle something, Supermicro something) and most big non-x86 (POWER, z/Architecture) IBM machines. One of the interesting benefits you get from such an arrangement is feasible hot-swappable CPUs, which are demanded in certain use cases. The problem with such systems is the insane engineering and validation cost, which nobody wants to pay for on lower-end systems.

    However, the industry actually seems to be moving to socketed everything, since for high-bandwidth applications you need lots of pins: NVLink is expected to use low-insertion-force high-density connectors (like those used in the Mac Pro, since Pascal is now a very small card thanks to HBM2), and Knights Landing will be a straight-up socketed chip. The benefits lie in eliminating complicated PCIe risers and reducing height (allowing for more heatsink surface area), as well as opening up the ability to use much faster connections (NVLink is expected to be 80-200GByte/s, compared to PCIe 4.0 x16's piddly 31.5GByte/s).
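
    For anyone wondering where the 31.5GByte/s comes from, it falls straight out of the per-lane rate and line encoding (the NVLink range above is the rumoured figure, not something you can derive):

```python
# Back-of-the-envelope PCIe 4.0 x16 bandwidth: 16 GT/s per lane with
# 128b/130b encoding, one bit per transfer, 16 lanes, 8 bits per byte.
gt_per_lane = 16
encoding_efficiency = 128 / 130
lanes = 16

gbytes_per_second = gt_per_lane * encoding_efficiency * lanes / 8
print(round(gbytes_per_second, 1))  # ~31.5 GB/s, the figure quoted above
```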
     
  9. schmidtbag

    schmidtbag New Member

    Joined:
    30 Jul 2010
    Posts:
    1,082
    Likes Received:
    10
    I don't necessarily see this as a sudden edge over AMD. Intel has, to my knowledge, 18-core LGA Xeons right now. If AMD were to release a product threatening to Intel, Intel could effortlessly add a pair of CPU cores to its i7 lineup and relax. I figure the only reason Intel hasn't released 6- and 8-core i7s in the mainstream is that it doesn't need to.
     
  10. ZeDestructor

    ZeDestructor Member

    Joined:
    24 Feb 2010
    Posts:
    223
    Likes Received:
    3
    Partly that, but mostly because most consumer workloads genuinely don't need more cores. If they did, we'd be seeing much larger boosts from going up to a quad-core i7 than the usual 10% overall and near 0% for games. AnandTech's bench using 3D Particle Movement: MultiThreaded shows just above 75% improvement between the 6700K and the 6600K after accounting for clockspeed differences (by comparing single-threaded results), while the rest of the suite shows under 30% improvement.
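
    The normalization itself is simple enough; a sketch with placeholder scores (not AnandTech's actual numbers), just to show how the clockspeed difference gets factored out:

```python
# Placeholder benchmark scores (higher is better), purely to illustrate the
# arithmetic: single-threaded results capture the clockspeed/IPC difference,
# so dividing it out isolates the multi-threaded scaling.
st_6600k, st_6700k = 100.0, 105.0   # hypothetical single-threaded scores
mt_6600k, mt_6700k = 300.0, 560.0   # hypothetical multi-threaded scores

clock_factor = st_6700k / st_6600k           # how much of the gain is just clocks
raw_mt_gain = mt_6700k / mt_6600k
normalized_gain = raw_mt_gain / clock_factor - 1
print(f"{normalized_gain:.0%}")              # MT improvement beyond clockspeed
```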

    If AMD puts on serious pressure with Zen, expect Intel to just launch a 6-core in the mainstream segment to compete - easy to design, fast time to market, easy to advertise. If not, expect more 4-core parts. On the workstation side, they've gone from 6-core Westmere, to 8-core Sandy/Ivy Bridge, to 10-core Haswell, with dual-CPU as an option, which is very much the progression you'd expect, because most workstations are used to design and test software that will end up on machines with even more cores (at which point workstation CPUs become a balancing act between cores and clockspeed).
     
  11. Phil Rhodes

    Phil Rhodes Hypernobber

    Joined:
    27 Jul 2006
    Posts:
    1,415
    Likes Received:
    10
    Yes, yes, more cores, my precioussss.
     
  12. Stanley Tweedle

    Stanley Tweedle NO VR NO PLAY

    Joined:
    3 Apr 2013
    Posts:
    1,602
    Likes Received:
    22
    Yes, get rid of the antiquated lowly quad-core and replace it with 2,000 Nvidia cores. I do a lot of 3D rendering, and without third-party plugins the rendering is done on my Intel quad-core at 4.8GHz. Very, very slow. A third-party GPU renderer plugin is able to render the scene many times faster on my Nvidia 680.

    And someone said no software makes use of more than 4 cores? Totally wrong.
     
  13. jrs77

    jrs77 theorycrafting

    Joined:
    17 Feb 2006
    Posts:
    5,257
    Likes Received:
    121
    I do rendering myself, but most render engines don't make use of the GPU and only use the CPU, V-Ray being the prime example here.
    There are three render engines to my current knowledge that use the GPU: Cycles, LuxRender and the new nVidia thingy built into DazStudio.

    Still, that software isn't used by the vast majority of PC users. Office software, DTP software etc. are all single-threaded to this date, because these tasks can't be parallelised by default.

    So yeah, for 95+% of all software, more than 4C/8T is totally useless.
     
  14. Corky42

    Corky42 What did walle eat for breakfast?

    Joined:
    30 Oct 2012
    Posts:
    7,611
    Likes Received:
    97
    Is the majority of software really not being parallelised because it can't be (technical reasons)?
    Or is it not being done because the costs (dev time, complexity) outweigh the benefits?
     
  15. pbryanw

    pbryanw Member

    Joined:
    22 Jul 2009
    Posts:
    163
    Likes Received:
    0
    Eight Cores to rule them all, Eight Cores to find them,
    Eight Cores to bring them all and in Intel's SOC bind them :)
     
  16. ZeDestructor

    ZeDestructor Member

    Joined:
    24 Feb 2010
    Posts:
    223
    Likes Received:
    3
    The only reason they work at all on your GPU is that they've been ported, and only part of the software at that (in your case, it seems, the biggest part of the program). GPUs are incredibly limited in the range of instructions they support: they're basically giant vector-processing arrays, and can do basically nothing else.

    For just about everything else, the performance is completely hopeless (either because the type of code doesn't suit a massive-core-count system, or because the program in question is inherently single-threaded). Your web browser, general OS services, networking and (G)UI, for example, would either not be runnable at all, run no faster than on a more traditional CPU, or, more often, run much, much slower, or run on a very small number of GPU cores, wasting most of them while doing so.

    I never said no software makes use of more cores, just that most software does not.

    Precisely. Most software is inherently lightly-threaded, because the blocker more often than not (in modern times) is interacting with the user, not actually running the software's computation: the average schmuck basically does basic office/DTP stuff and web browsing. That's it.

    The technical limitations are a large factor in the effort/benefit ratio, in some cases making parallelization outright pointless.

    A good example of parallelization being completely pointless would be word processing: it doesn't matter how fast your CPU is (within limits... something like a C2Duo at least, these days...), the limit in word processing is the soggy human in control being slow at typing in, editing and laying out the document, with the processor basically sitting idle.

    For games (which I think is what most people here care about), a somewhat different limitation comes in from the very tight development schedule: it would be possible to have games use 10+ threads well, but they end up limited because you need to make the launch window before you run out of money. Obviously you could add it after launch, but why would you invest thousands of pounds in an improvement that would bring you essentially no extra sales, while you have other projects lined up?

    Another limitation that sometimes crops up is the idea that you want your program to run on everything (LoL or DOTA2 or PoE or anything Blizzard, for example), including potatoes. Usually (!), making a big, heavy game that uses lots of resources well means that the minimum requirements are pretty high, like FarCry or Crysis back in their day, or even Windows Vista (and its 7/8/8.1/10 descendants), which runs way better on modern hardware than XP, but came at the cost of some pretty high minimum requirements for decent performance.

    For server programs on the other hand (like say, databases, or high-performance computing (Folding@Home and the like)), development is iterative - so once the base feature set is built, features are picked for the next release, with threading, porting to GPU/Phi and other performance-enhancing measures added and/or improved upon.
     
    Cthippo likes this.
  17. rollo

    rollo Well-Known Member

    Joined:
    16 May 2008
    Posts:
    7,631
    Likes Received:
    89
    Developers are in business to make money. Four threads is a cost-effective middle ground, and you know most of your audience has that many threads.

    The list of games that scale past four threads can be counted on your fingers.

    When the two most-played games in the world only require a single-core CPU and onboard graphics to play, demanding an 8-core CPU, 8GB of RAM and the best GPU on the market can really limit your audience.

    It's an issue Crysis had back in the day, and it started a lot of jokes about whether your system could handle Crysis. To this day, maxed out with the extra mods that have since been released, it can still bring a decent-spec system to its knees. Crysis still sold well despite this, but it's one of the few games with very high system requirements to see massive sales.

    GTA 5 has only seen 2 million lifetime PC sales, against sales of around 43 million on consoles.

    The Witcher series has struggled to reach high sales figures despite its critical success.

    If you can't make cash, you don't code for even higher specs. If GTA 5 had been PC-only it might have seen more sales, but the latest Steam survey would argue otherwise, with a lot of those PCs under the minimum spec to play the game at 1080p.
     
  18. theshadow2001

    theshadow2001 [DELETE] means [DELETE]

    Joined:
    3 May 2012
    Posts:
    5,091
    Likes Received:
    125
    When I go to Resource Monitor and open the CPU section, I see more software running with more than four threads than with four threads or fewer. Firefox is currently running with 95 threads, for some reason.
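
    If anyone wants to reproduce that without the Resource Monitor GUI, here's a quick sketch using the third-party psutil package (pip install psutil; the process names and counts will obviously differ per machine):

```python
# List every process currently using more than four threads.
import psutil

for proc in psutil.process_iter(['name', 'num_threads']):
    threads = proc.info.get('num_threads')
    if threads and threads > 4:
        print(f"{proc.info['name']}: {threads} threads")
```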
     
  19. rollo

    rollo Well-Known Member

    Joined:
    16 May 2008
    Posts:
    7,631
    Likes Received:
    89
    Firefox is super buggy under Windows 10, eating resources like it's going out of fashion.
     
  20. theshadow2001

    theshadow2001 [DELETE] means [DELETE]

    Joined:
    3 May 2012
    Posts:
    5,091
    Likes Received:
    125
    I'm using Windows 8, and Firefox isn't the only process with a high thread count: AVG has nearly 200, Dropbox has 64, and some Nvidia streaming thing has 20.
     
