
News Intel leak points to eight-core consumer Cannonlake parts

Discussion in 'Article Discussion' started by Gareth Halfacree, 2 Oct 2015.

  1. Gareth Halfacree

    Gareth Halfacree WIIGII! Staff Administrator Super Moderator Moderator

    Joined:
    4 Dec 2007
    Posts:
    10,127
    Likes Received:
    615
  2. ZeDestructor

    ZeDestructor Member

    Joined:
    24 Feb 2010
    Posts:
    226
    Likes Received:
    4
    Cache coherency and no mention of an iGPU... sounds more like Xeon-D or Atom/Avoton/Rangeley (server chips) than anything else. Those are already 4-8 core, with an integrated memory controller and PCIe (the northbridge, in ye olden days) and PCH (the southbridge, in ye olden days).

    Much excite over nothing I say.

    Next rumour pls.
     
  3. Guest-16

    Guest-16 Guest

    Cache Coherency and SoC = Xeon D, unless consumer chips are going SoC too. Shame Intel are already considering sandbagging this early.
     
  4. Hakuren

    Hakuren New Member

    Joined:
    17 Aug 2010
    Posts:
    156
    Likes Received:
    0
    It sounds interesting, but it's about time to change PC architecture as we know it. Increasing the number of cores is pointless if software can't utilize them. The best option is to make the GPU take over the role of the CPU as a whole. GPUs offer many times more performance, and you could simply increase that performance by adding more cards to the mix - not for SLI/CF and 3 FPS more in game xyz, but just for raw compute power.

    Of course Intel doesn't want that, but motherboards without a CPU socket, just PCI-E slots for more cards - why not? It could be done easily. The problem is the software, which still requires a CPU to organise things.
     
  5. Corky42

    Corky42 What did walle eat for breakfast?

    Joined:
    30 Oct 2012
    Posts:
    8,232
    Likes Received:
    166
    So your solution to software not utilizing more cores is to replace a 4-8 core CPU with a GPU that has something like 500-4000 cores?
     
  6. desertstalker

    desertstalker Member

    Joined:
    7 Feb 2004
    Posts:
    73
    Likes Received:
    0
    The reason GPUs are so fast is that they only handle certain specific operations (usually certain classes of floating-point arithmetic) that are extremely parallel.

    You cannot run most software on a GPU, as they lack the necessary instruction set.
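    To illustrate desertstalker's point, here's a minimal Python sketch (illustrative only, not actual GPU code) of the difference between a data-parallel loop, where every element is independent and thousands of GPU cores could each take one, and an inherently serial loop, where each iteration depends on the previous one, so extra cores can't help:

    ```python
    # Data-parallel: every x * k is independent of every other,
    # so the work could be split across thousands of cores.
    def scale_all(xs, k):
        return [x * k for x in xs]

    # Inherently serial: iteration i needs the result of iteration i-1,
    # so no number of extra cores speeds this up.
    def running_total(xs):
        out, acc = [], 0
        for x in xs:
            acc += x          # depends on the previous iteration
            out.append(acc)
        return out

    print(scale_all([1, 2, 3], 10))   # [10, 20, 30]
    print(running_total([1, 2, 3]))   # [1, 3, 6]
    ```
    
    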
     
  7. ZeDestructor

    ZeDestructor Member

    Joined:
    24 Feb 2010
    Posts:
    226
    Likes Received:
    4
    All the small chips (LGA115x and lower) are gonna end up as SoCs. The high-end LGA2011 chips (some rather believable rumours of an even larger socket are floating around for Skylake-E/EP/EX), on the other hand, will likely never be, because you can put more CPU in the space you'd spend putting the PCH on the same die, and get extra performance.

    As others have pointed out, the only reason GPUs (and Xeon Phi) are faster is that their applications are insanely parallelizable. Most software (especially user-facing and realtime stuff), in contrast, is quite serialized, so making bigger, faster cores delivers a much greater benefit for those types of tasks than making many-core CPUs. This is why something like an 8-core Atom (C2750 Avoton) can match a full Haswell CPU in some tests, but slow down a fair chunk in games.

    If you're only interested in getting higher performance for compute, then that is indeed precisely where Intel went with Xeon Phi, and AMD and nVidia through GPUs.

    Intel is going even further, incidentally, with Knights Landing (next-gen Xeon Phi) being able to self-boot and come as a socketed chip, likely shared with Skylake-EP/EX (Purley platform).

    CPUs on daughter-boards are already done for 4+ socket x86 servers (IBM X3850/3950, HP DL980, Fujitsu something, Oracle something, Supermicro something) and most big non-x86 (POWER, z/Architecture) IBM machines. One of the interesting benefits you get from such an arrangement is feasible hot-swappable CPUs, which are demanded in certain use cases. The problem with such systems is the insane engineering and validation cost, which nobody wants to pay for on lower-end systems.

    However, the industry actually seems to be moving to socketed everything, since for high-bandwidth applications you need lots of pins: NVLink is expected to use low-insertion-force high-density connectors (like those used in the Mac Pro, since Pascal is now a very small card thanks to HBM2), and Knights Landing will be straight-up socketed chips. The benefits lie in eliminating complicated PCIe risers and reducing height (allowing for more heatsink surface area), as well as opening up the ability to use much faster connections (NVLink is expected to be 80-200GByte/s, compared to PCIe 4.0 x16's piddly 31.5GByte/s).
     
  8. schmidtbag

    schmidtbag New Member

    Joined:
    30 Jul 2010
    Posts:
    1,082
    Likes Received:
    10
    I don't necessarily see this as a sudden edge over AMD. Intel has, to my knowledge, 18-core LGA Xeons right now. As soon as AMD released a product that threatened Intel, Intel could effortlessly add a pair of CPU cores to its i7 lineup and relax. I figure the only reason Intel hasn't released 6- and 8-core i7s is because it doesn't need to.
     
  9. ZeDestructor

    ZeDestructor Member

    Joined:
    24 Feb 2010
    Posts:
    226
    Likes Received:
    4
    Partly that, but mostly because most consumer workloads genuinely don't need more cores. If they did, we'd be seeing much larger boosts from going up to a quad-core i7 than the usual 10% overall and near 0% for games (AnandTech's bench using 3D Particle Movement: MultiThreaded shows just over 75% improvement between the 6700K and the 6600K after accounting for clockspeed differences by comparing single-threaded results, while the rest is under 30% improvement).
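    As a sketch of the arithmetic behind that kind of comparison, with made-up scores (not AnandTech's actual numbers), dividing out the single-threaded ratio isolates what the extra threads contribute over the clockspeed bump:

    ```python
    # Hypothetical benchmark scores (NOT real data), comparing a
    # 4C/8T chip against a 4C/4T chip at a slightly lower clock.
    st_6600k, st_6700k = 100.0, 110.0   # single-threaded scores
    mt_6600k, mt_6700k = 380.0, 760.0   # multi-threaded scores

    clock_factor = st_6700k / st_6600k            # gain from clocks alone
    raw_mt_gain = mt_6700k / mt_6600k             # total multi-threaded gain
    threading_gain = raw_mt_gain / clock_factor   # gain from extra threads only

    print(f"clock gain: {clock_factor:.2f}x, threading gain: {threading_gain:.2f}x")
    # clock gain: 1.10x, threading gain: 1.82x
    ```
    
    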

    If AMD puts on serious pressure with Zen, expect Intel to just launch a 6-core in the mainstream segment to compete - easy to design, fast time to market, easy to advertise. If not, expect more 4-core. On the workstation side, they've gone from 6-core Westmere, to 8-core Sandy/Ivy Bridge, to 10-core Haswell, with dual-CPU as an option, which is very much the progression you'd expect, because most workstations are used to design and test software that will end up on machines with even more cores (at which point workstation CPUs become a balancing act between cores and clockspeed).
     
  10. Phil Rhodes

    Phil Rhodes Hypernobber

    Joined:
    27 Jul 2006
    Posts:
    1,415
    Likes Received:
    10
    Yes, yes, more cores, my precioussss.
     
  11. Stanley Tweedle

    Stanley Tweedle NO VR NO PLAY

    Joined:
    3 Apr 2013
    Posts:
    1,612
    Likes Received:
    25
    Yes, get rid of the antiquated lowly quad core and replace it with 2000 Nvidia cores. I do a lot of 3D rendering, and without 3rd-party plugins the rendering is done on my Intel quad core at 4.8GHz. Very, very slow. A 3rd-party GPU renderer plugin is able to render the scene many times faster on my Nvidia 680.

    And someone said no software makes use of more than 4 cores? Totally wrong.
     
  12. Corky42

    Corky42 What did walle eat for breakfast?

    Joined:
    30 Oct 2012
    Posts:
    8,232
    Likes Received:
    166
    Is it really that the majority of software isn't parallelised because of a can't (technical reasons)?
    Or is it not being done because the costs (dev time, complexity) outweigh the benefits?
     
  13. pbryanw

    pbryanw Member

    Joined:
    22 Jul 2009
    Posts:
    171
    Likes Received:
    1
    Eight Cores to rule them all, Eight Cores to find them,
    Eight Cores to bring them all and in Intel's SOC bind them :)
     
  14. ZeDestructor

    ZeDestructor Member

    Joined:
    24 Feb 2010
    Posts:
    226
    Likes Received:
    4
    They only work on your GPU at all because they've been ported, and only part of the software at that (though in your case it seems to be the greatest part of the program). GPUs are incredibly limited in their range of instructions: they're basically giant vector-processing arrays, and can do basically nothing else.

    For just about everything else, the performance is completely hopeless (either because the type of code doesn't suit a massive-core-count system, or because the program in question is inherently single-threaded). Your web browser, general OS services, networking and (G)UI, for example, would either not run at all, run no faster than on a more traditional CPU, often run much, much slower, or run on a very small number of GPU cores, wasting most of the cores while doing so.

    I never said no software makes use of more cores, just that most software does not.

    Precisely. Most software is inherently lightly-threaded, because the blocker more often than not (in modern times) is interacting with the user, not actually running the software's computation: the average schmuck basically does basic office/DTP stuff and web browsing. That's it.
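    The underlying maths here is Amdahl's law: if only a fraction p of a program's work can run in parallel, n cores can speed it up by at most 1 / ((1 - p) + p/n). A quick Python sketch (with illustrative fractions, not measured ones) shows why lightly-threaded software barely benefits from more cores:

    ```python
    # Amdahl's law: best-case speedup on n cores when only a
    # fraction p of the work is parallelizable.
    def amdahl(p, n):
        return 1.0 / ((1.0 - p) + p / n)

    # A lightly-threaded desktop app where only 30% of the work parallelizes:
    print(round(amdahl(0.30, 4), 2))     # 1.29 -> ~1.29x on 4 cores
    print(round(amdahl(0.30, 4000), 2))  # 1.43 -> ~1.43x even on 4000 GPU-style cores

    # A highly parallel compute kernel (95% parallel) is a different story:
    print(round(amdahl(0.95, 4000), 1))  # 19.9 -> ~19.9x
    ```

    The serial 70% dominates no matter how many cores you throw at it, which is exactly why bigger, faster cores win for this kind of workload.
    
    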

    The technical limitations play a large factor in the effort/benefit ratio, in some cases outright making it pointless.

    A good example of parallelization being completely pointless would be word processing: it doesn't matter how fast your CPU is (within limits... something like a Core 2 Duo at least, these days), the limit in word processing is the soggy human in control being slow at typing, editing and laying out the document, with the processor basically sitting idle.

    For games (which I think is what most people here care about), a somewhat different limitation comes in from the very tight development schedule, where it would be possible to have games using 10+ threads well, but end up limited because you need to make the launch window before you run out of money. Obviously you could add it after launch, but why would you invest thousands of pounds on an improvement that would bring you essentially no extra sales, while you have other projects lined up?

    Another limitation that sometimes crops up is the idea that you want your program to run on everything (LoL or DOTA2 or PoE or anything Blizzard, for example), including potatoes. Usually (!), making a big, heavy game that uses lots of resources well means that the minimum requirements are pretty high, like Far Cry or Crysis back in their day, or even Windows Vista (and its 7/8/8.1/10 descendants), which runs way better on modern hardware than XP, but came at the cost of some pretty high minimum requirements for decent performance.

    For server programs on the other hand (like say, databases, or high-performance computing (Folding@Home and the like)), development is iterative - so once the base feature set is built, features are picked for the next release, with threading, porting to GPU/Phi and other performance-enhancing measures added and/or improved upon.
     
    Cthippo likes this.
  15. rollo

    rollo Well-Known Member

    Joined:
    16 May 2008
    Posts:
    7,663
    Likes Received:
    93
    Developers are in business to make money. 4 threads is a cost-effective middle ground, and you know most of your audience has this many threads.

    The list of games scaling past 4 can be counted on your fingers.

    When the 2 biggest played games in the world only require a single-core CPU and onboard graphics to play, demanding an 8-core CPU, 8GB of RAM and the best GPU on the market can really limit your audience.

    It's an issue Crysis had back in the day, and it started a lot of 'can your system handle Crysis?' jokes. To this day, maxed out with the extra mods that have been launched, it can still bring a decent-spec system to its knees. Crysis still sold well despite this, but it's one of the few games with very high system specs to see massive sales.

    GTA 5 has only seen 2 million lifetime PC sales, versus around 43 million on consoles.

    The Witcher series has struggled to get high sales despite its success.

    If you cannot make cash, you do not code for even higher specs. GTA 5, had it been PC-only, might have seen more sales, but the latest Steam survey would argue otherwise, with a lot of those PCs under the minimum specs to play the game at 1080p.
     
  16. theshadow2001

    theshadow2001 [DELETE] means [DELETE]

    Joined:
    3 May 2012
    Posts:
    5,112
    Likes Received:
    131
    When I go to Resource Monitor and open the CPU section, I see more software running with greater than 4 threads than with 4 threads or fewer. Firefox is currently running with 95 threads for some reason.
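    Worth noting that a high thread count doesn't imply parallel CPU work - most of those threads are just parked waiting on I/O, timers or events. A quick Python sketch (stdlib only) of a process holding many idle threads, similar to what Resource Monitor counts:

    ```python
    # A process can hold far more threads than it has cores doing work,
    # because most threads simply block waiting for something to happen.
    import threading

    stop = threading.Event()

    def idle_worker():
        stop.wait()  # blocks doing nothing, like most app threads

    workers = [threading.Thread(target=idle_worker) for _ in range(16)]
    for t in workers:
        t.start()

    # 16 idle workers plus the main thread: at least 17 "threads" show up,
    # while CPU usage stays near zero.
    print(threading.active_count() >= 17)  # True

    stop.set()  # wake the idle threads so they can exit
    for t in workers:
        t.join()
    ```
    
    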
     
  17. rollo

    rollo Well-Known Member

    Joined:
    16 May 2008
    Posts:
    7,663
    Likes Received:
    93
    Firefox is super buggy under Windows 10 eating resources like it's going out of fashion.
     
  18. theshadow2001

    theshadow2001 [DELETE] means [DELETE]

    Joined:
    3 May 2012
    Posts:
    5,112
    Likes Received:
    131
    I'm using Windows 8. Firefox isn't the only process with a high thread count: AVG has nearly 200, Dropbox has 64, and some Nvidia streaming thing has 20.
     
  19. ZeDestructor

    ZeDestructor Member

    Joined:
    24 Feb 2010
    Posts:
    226
    Likes Received:
    4
    Firefox has been multithreaded for a long time, and multi-process is on its way.

    Citation required - from a completely clean profile with no addons.

    The problem is that while you can increase thread count quite easily, you often hit concurrency-related blocks: one thread has exclusive access to some resource, meaning all other threads that need that same resource are blocked and must wait until the owning thread finishes. Basically, this results in your software being multi-threaded but getting no performance benefit from it.
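    A minimal Python sketch of that failure mode - eight threads, one lock, so the "parallel" work actually runs one increment at a time:

    ```python
    # Many threads, one exclusive lock: correct result, no parallelism.
    import threading

    counter = 0
    lock = threading.Lock()

    def worker(iterations):
        global counter
        for _ in range(iterations):
            with lock:        # exclusive access: all other threads wait here
                counter += 1  # the only real work, fully serialized

    threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(8)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # The total is correct (80,000), but the shared lock meant the increments
    # ran back-to-back rather than concurrently.
    print(counter)  # 80000
    ```
    
    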
     
  20. Guest-16

    Guest-16 Guest

    Yes in theory, but probably not.

    Intel will want to maintain its ASP, which means silicon size needs to be as small as possible, which means fewer cores.
    AMD could/should throw 6/8/10+ cores in a lower-cost unit, and it would win BIG in APAC, especially China. The China market loves core counts, because the focus on real-world performance and IPC is lower; a lot of the market isn't technical, and CPU numbers are easy to digest. AMD can go lower on ASP with little effect on its stock price (since it's already in the *******). Intel can't, but what it could do is cut the iGPU from its products, since the China market won't ever buy an Intel GPU, and whack on a load of CPU cores instead. It would still have a huge price issue, since virtually all its perceived processor value comes from CPU core features: AMD could still compete on price. And if it was exclusive to China, it would fuel a massive grey market. It could rebrand its 8-core Atom Xeons into desktop CPUs with lopped-off features, but that would cannibalize its main CPU line, so it'll never happen.

    AMD should have put the Xbone/PS4 CPU (8 Jaguar cores) on the market - despite the lower-IPC cores and the cache coherency issue between the two clusters (which is also present in the consoles, btw), the numbers are what matter.

    Unfortunately AMD is weak in APAC, and it cut a load of people from its Taiwan office, so more blunders to add to the long list.
     
    Last edited by a moderator: 4 Oct 2015
