Discussion in 'Article Discussion' started by CardJoe, 25 Jun 2010.
What happens if you use a GTX480?
The atmosphere catches on fire and the world ends
Not particularly surprising tbh. Didn't expect the 14x boost over a CPU but it was always going to be a significant margin.
More seriously, there doesn't appear to be any explanation of what specific tasks, applications, drivers and libraries were used to get these results - all of which will massively affect the result.
For example, pre-mid-2009 a GTX 285 would easily outperform a Core i7 in Folding@home, but after that (with the release of new clients) the CPU would be faster.
As always, it depends on what you're doing with your hardware.
The results seem to tie in with the performance gains I have seen doing GPGPU at uni: yes, the GPU is a lot faster at certain tasks, but there are others, such as branch-heavy code, where the performance gain is significantly smaller (rough sketch below), plus existing software would have to be recoded from x86 to CUDA/OpenCL.
I'd like to see a comparison of the performance per watt of the CPU and GPU running these tasks.
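For anyone who hasn't touched CUDA, here's a rough sketch of the branching problem (made-up kernels, nothing to do with the Intel paper): threads run in warps, and when threads in one warp take different sides of an if, the hardware runs both paths one after the other, so branch-heavy kernels throw away a lot of the GPU's throughput.

#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Divergent version: the branch depends on the data, so threads within a warp
// disagree and the two paths get serialised.
__global__ void divergent(const float *in, float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        if (((int)in[i]) & 1)
            out[i] = sinf(in[i]) * 2.0f;
        else
            out[i] = cosf(in[i]) * 0.5f;
    }
}

// Branch-free version: compute both results and select, so every thread in the
// warp does identical work and nothing waits for the other path.
__global__ void branchFree(const float *in, float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float odd  = sinf(in[i]) * 2.0f;
        float even = cosf(in[i]) * 0.5f;
        out[i] = (((int)in[i]) & 1) ? odd : even;
    }
}

int main()
{
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);
    float *hIn = (float *)malloc(bytes), *hOut = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) hIn[i] = (float)i;   // alternating odd/even, so every warp diverges

    float *dIn, *dOut;
    cudaMalloc(&dIn, bytes);
    cudaMalloc(&dOut, bytes);
    cudaMemcpy(dIn, hIn, bytes, cudaMemcpyHostToDevice);

    divergent<<<(n + 255) / 256, 256>>>(dIn, dOut, n);
    branchFree<<<(n + 255) / 256, 256>>>(dIn, dOut, n);
    cudaDeviceSynchronize();

    cudaMemcpy(hOut, dOut, bytes, cudaMemcpyDeviceToHost);
    printf("out[1] = %f\n", hOut[1]);

    cudaFree(dIn); cudaFree(dOut); free(hIn); free(hOut);
    return 0;
}

Wrap the two kernel launches in cudaEvent_t timers on real hardware and you can measure the gap for yourself; how big it is depends entirely on how expensive the two branches are.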
Does that mean that Intel just owned themselves?
Does this imply that there are lessons to be learned from GPGPUs for CPU manufacturers? Is it possible to apply knowledge from one field to the other and narrow the gap? AMD would be in a great position to do this.
@Lizard - I thought the new clients were a little bit dishonest regarding CPU/GPU performance.
@mjb501 - performance per watt would be interesting, but I think we can guess, given that you'd only need a <5x performance improvement to make it worthwhile (rough numbers below).
@rickysio - you'd have to hope so, given that's what the 480 was designed for... heck, it's faster than a 5870, which was 'just' designed for graphics.
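On the performance-per-watt guess above, some very rough numbers going off quoted TDPs rather than measured draw (so treat this as a sketch, not data from the paper): a GTX 280 is rated at roughly 236W and a 3.2GHz Core i7 at 130W.

  power ratio:            236 / 130 ≈ 1.8x
  break-even speedup:     the GPU only needs to be ~1.8x faster to match the CPU per watt
  at Intel's 2.5x figure: 2.5 / 1.8 ≈ 1.4x better performance per watt
  at the 14x figure:      14 / 1.8 ≈ 7.7x better performance per watt

So even on Intel's own worst-case number the GPU comes out ahead per watt, ignoring the rest of the system's power draw.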
In what way? Do you mean how the apparent performance (if you're measuring in ppd) of the clients has varied over time as Stanford adjusts the points system?
I'm amazed Intel fessed up to it in the first place.
Marketing-wise, they've shot themselves in the foot with both barrels.
I have absolutely no doubt that they loaded each test with as much bias as possible.
The fact that they used a GPU that was a year older than their CPU is a case in point.
To then come out and say the GPU was at least 2.5 times as fast as their CPU (at parallel processing tasks) is amazing.
One thing is for sure: no way in Hell would Apple's Marketing Dept allow such a test result to be released.
Quite so. Actually, I'm surprised it's only 14 times; for graphics I suspect it would really be quite a lot more than that, frame rate for frame rate (though a CPU-based graphics engine would likely give more accurate results, if you care).
It's not really marketing - it's a paper for discussion by experts.
Yep, that's what I meant - though I only picked up the info from the forum, so I don't honestly know if it's true.
Why do I get an image of Charlton Heston screaming on a beach, looking up at a huge 480?
Employee: Doc, I've just pwned myself
Prof: so what, you're a nerd anyway
Not surprised really. Only surprised Intel has let this information out.
Credit to them for not trying to hide it, to be honest.
They can learn from this, as can the industry in general. It also shows the difference isn't as large as a lot of people thought, which in Intel's eyes is of course a positive (and so it should be).
@shagbag: the fact that Apple wouldn't release this says more about Apple than it does about Intel.
Expected result for a 3.2GHz i7 vs a GTX 280; in fact, 2.5x is about the speed difference I get when encoding a video on an i7 860 versus a GTX 260.
What Intel should have shown is single-threaded performance, or heavily branch-dependent performance.
I tried putting a Nvidia GeForce GTX 280 in my CPU socket but it didn't work. Can anyone help me?