Discussion in 'Article Discussion' started by Tim S, 18 Jan 2007.
40 cores dedicated to Windows Vista and 40 cores for other apps.
You really don't like MS do you?
I think that this may be a step in the right direction, but it could just turn out to be the next GHz race.
How measly will 2 cores appear in a few years' time? The race will be on for the first kilo-core processor.
Maybe my future CPU... once the technology matures enough.
Love Microsoft personally, since I use their stuff every day in my job and at home. I'm just a bit miffed at my Vista installation last night: blue-screening, then hogging all my resources once it was finally installed.
Though on another note, yes, you're correct about the speed race. I just hope they keep the number of products down, as I can imagine them doing 40-core versions @ 8GHz, 40-core versions @ 8.5GHz, then 60-core versions at the same speeds, etc. It could become a bit messy!
It seems to me as though this multicore idea is taking a step in the wrong direction in terms of computer flexibility.
If you specifically dedicate a core to HD video and suddenly nobody wants to do HD video, does that core become redundant? If so, an out-of-date CPU will have lots of useless cores doing nothing.
Am I wrong?
Oh, and does the software exist at the moment to dedicate one or more cores to a single application? With 80, it would be useful to assign a certain amount to, say, your favourite image-editing package so it has dedicated processing power.
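As an aside, operating systems already expose this idea as "processor affinity". A minimal sketch of pinning the current process to two cores, assuming a Linux box (where Python's `os` module exposes the syscall directly; on Windows the rough equivalent is the Win32 `SetProcessAffinityMask` call):

```python
import os

# Pin this process to cores 0 and 1, then read the affinity mask back.
# sched_setaffinity/sched_getaffinity are Linux-only, hence the guard.
if hasattr(os, "sched_setaffinity"):
    available = os.sched_getaffinity(0)   # cores we may currently run on
    wanted = {0, 1} & available           # don't request cores we don't have
    if wanted:
        os.sched_setaffinity(0, wanted)   # restrict this process to them
        print(os.sched_getaffinity(0))    # e.g. {0, 1}
```

With 80 cores, the same mechanism would let you hand a fixed slice of the chip to your image editor.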
You haven't tried running Vista on a single-core machine, have you? Mind you, it's not too bad on a dual-core rig, but it still feels a bit slow. OS X has spoiled me, I suppose. I honestly haven't paid a ton of attention to hardware since switching, but I don't think FrozenFire is lacking desperately by any standards (one gen behind on video cards, I suppose, but that's about it), and Vista doesn't feel nearly as smooth as it should on that rig, even with a clean install. Although in hindsight, I think I've only tried RC copies. Umm... not that I've obtained one of the leaked final versions to compare it to, or anything.
But that aside, I'd love this. I really hate interacting with my fileserver, and that's solely because it's the only single-core machine I have left. Since getting a dual-core system, I've vowed to never use less again. I can only imagine how much better a quad would be not having used one, so this is just a mindgasm.
Jamie - I don't think that the cores are permanently dedicated to a specific task, I think that pic was just demonstrating how you could assign tasks and seemed to imply it would have a few reserved for such (not entirely unlike how Via CPUs have that hardware encryption that kicks the hell out of any madly clocked AMD or Intel chip by about an order of magnitude, with a quarter the clock speed). Of course, if you're talking 80 cores, I don't think having a few specialized would be a bad thing, especially since they tend to be massively faster at task X if it's designed solely for that purpose. You're talking no more than 5% wasted cores IF (!) that task was to go away forever, which seems pretty unlikely in the case of HD and graphics at least.
No, you're not, and I agree: multipurpose CPUs can adapt to suit any task. Think of unified shaders.
Sure, they can perform all of these tasks (possibly simultaneously, assuming sufficient bandwidth), but how much of a performance gain would you really get? They are dedicated but simpler; as far as I can tell they would boast no performance gain over traditional processors.
Jamie, I think the labels down the right-hand side are examples of what each of the cores could be doing at once... HD video, crypto and graphics were just the buzzwords the art guy was given to play with. Each of the cores is the same; it's just that with 80 of them, you can assign loads of different tasks. It's making the point that you can assign many cores to a particular task, as with all those given to GFX.
So yeah, what Firehed said...
80 cores? How big is that CPU going to be? The size of a motherboard?
It would work if they are basic CPU cores that can redistribute tasks on the fly, and configure the CPU array flexibly to suit the application. Ideally, to the software, the array would appear like a single CPU --just a really powerful and ideally suited one.
Nexxo, judging by the fact they have it running at 8GHz on current processes, at under 100W, it's not very big at all!
In order-of-magnitude terms: let's say current quad-core CPUs have a billion-ish transistors, and this chip has comparable thermal/electrical properties with 100-ish cores. That means about 10 million-ish transistors per core... so Pentium II/III sort of level!
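The arithmetic above, spelled out (the billion-transistor figure is just the round number used in the estimate, not a measured spec):

```python
# Rough order-of-magnitude sketch: spread a quad-core-sized transistor
# budget over ~100 cores and see what each core gets.
transistors_total = 1_000_000_000   # billion-ish, per the estimate above
cores = 100                         # round figure for the 80-core part
per_core = transistors_total // cores
print(per_core)  # 10000000 -- roughly Pentium II/III territory
```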
As for redistributing tasks on the fly, that's presumably what these engineers are investigating.
There are other advantages to having 80 symmetric cores, of course: you can put the manufacturing yields way up by disabling the one or two cores that have defects, like they do with graphics chips (the X800GTO2 cards, for example).
So the next Celeron/Semprons will have cut cores instead of cache...that would be interesting...
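A quick numerical illustration of why disabling bad cores boosts yield, using a made-up 2% per-core defect rate (the real figure isn't public, so treat both numbers as assumptions):

```python
from math import comb

# Hypothetical: each of 80 cores independently has a 2% chance of a
# fatal defect. Compare "every core must work" against "we can fuse
# off up to two bad cores and sell the part anyway".
n, p = 80, 0.02

def prob_at_most(k: int) -> float:
    """P(at most k of the n cores are defective) -- binomial sum."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

print(f"all 80 cores perfect:       {prob_at_most(0):.1%}")
print(f"sellable with 2 disabled:   {prob_at_most(2):.1%}")
```

Even with these toy numbers, tolerating two dead cores turns a ~20% yield into something closer to 80%, which is exactly the trick the graphics vendors pull.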
I think this is a step in the right direction, but I'm not sure how many cores you'll need; 80 just seems like overkill. I'll have to see how things are five years from now.
That's the stuff. Now if they can make the CPU array appear to software like a single really powerful CPU, we're in business. Don't program multithreaded apps; let the CPU array do it on the fly while the software is running.
I think that's the idea they should run with... I can see PCs looking very different in years to come. I envision being able to emulate operations and functions that are currently hardwired into the mobo with these processors, reducing the mobo to a simple socket and a few connections for hardware. It probably wouldn't be quite that simple, but I can see the potential to do so much with very little.
That is a very sexy idea... Perhaps they could even have a modular motherboard. A socket for the CPU, a few sockets for peripheral interfaces, NIC, perhaps a flash drive for the OS. All very compact. Nice.
What, like USB and PCI?
Great. Except, of course, that these are "simple" cores, which means RISC. Like the PowerPC chips, not like the x86 chips (which are very CISC chips). It also means that the performance benefit from adding more cores would be a lot less. See, having a RISC processor would be like going back to a 286 (or earlier): no FP, no MMU, no SSE, no 3DNow!, no MMX, but clocked at modern (3GHz) speeds. More work would have to be done in software to re-create the functions missing from the chip (functions which software, especially multimedia software, demands these days).
As a (totally inaccurate, but representative) example, it'd be the difference between doing standard multiplication (9*9) and repeating the values and adding them up (9+9+9+9+9+9+9+9+9). In essence, they're the same action, but because the RISC processor is missing the functionality to do the * bit, it has to go the long way round, slowing everything down, especially as instructions emulated in software are an order of magnitude slower than those hard-coded into a chip (1 or 2 cycles per instruction for hard-coded, compared to 10-20 for software-driven, what with all the memory reading and writing required).
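The analogy, taken literally in code (`multiply_by_addition` is a hypothetical helper showing the "long way round"):

```python
# A chip with a multiply instruction does 9 * 9 in one step.
# A chip without one has to synthesize it from repeated additions:

def multiply_by_addition(a: int, b: int) -> int:
    """Multiplication the long way round: b additions of a."""
    total = 0
    for _ in range(b):
        total += a
    return total

print(multiply_by_addition(9, 9))  # 81 -- same answer, many more steps
```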
And consider: Apple moved away from RISC/PowerPC, as the performance was nowhere near as good as its CISC/x86 equivalent.
Well, TFA states that we'll be seeing these in 'five to eight' years. If we take that as being about 6 years (it's an easier number too), and look at how Moore's Law predicts the CPU manufacturing process (feature size halves every 18-24 months):
72 months, divided by 24 = 3 halvings under Moore's Law.
So if we're at 65nm gate lengths now, we could be at 65 / 2^3 = 8.125nm in 6 years.
It's horribly inaccurate but the point is they'll be tiny compared with what we have now. Supposedly a decade ago, everything was built on 500nm processes!
There would be other huge factors involved in actually getting to that scale of manufacturing in reality, too.
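The back-of-envelope above in code, taking the 24-month end of the halving range from the thread (so every number here is the thread's own guess, not a roadmap):

```python
# 6-year window, one halving of feature size per assumed 24-month cycle.
start_nm = 65.0
halvings = 72 // 24            # 3 cycles in 72 months
future_nm = start_nm / 2 ** halvings
print(future_nm)  # 8.125
```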
Warning: You could get lost in this link for days