
News GPUs 'only' 14 times faster than CPUs

Discussion in 'Article Discussion' started by CardJoe, 25 Jun 2010.

  1. CardJoe

    CardJoe Freelance Journalist

    Joined:
    3 Apr 2007
    Posts:
    11,343
    Likes Received:
    292
  2. rickysio

    rickysio N900 | HJE900

    Joined:
    6 Jun 2009
    Posts:
    964
    Likes Received:
    5
    What happens if you use a GTX480?

    Utter annihilation?
     
  3. Lizard

    Lizard @ Scan R&D

    Joined:
    17 Feb 2007
    Posts:
    2,890
    Likes Received:
    34
    The atmosphere catches on fire and the world ends :)
     
  4. EvilMerc

    EvilMerc Well-Known Member

    Joined:
    1 Feb 2010
    Posts:
    2,328
    Likes Received:
    80
    Not particularly surprising tbh. Didn't expect the 14x boost over a CPU, but it was always going to be a significant margin.
     
  5. Lizard

    Lizard @ Scan R&D

    Joined:
    17 Feb 2007
    Posts:
    2,890
    Likes Received:
    34
    More seriously, there doesn't appear to be any explanation of which specific tasks, applications, drivers and libraries were used to get these results - all of which will massively affect the outcome.

    For example, pre-mid-2009 a GTX 285 would easily outperform a Core i7 in Folding@home, but after that (with the release of new clients) the CPU would be faster.

    As always, it depends on what you're doing with your hardware.
     
  6. mjb501

    mjb501 New Member

    Joined:
    20 Jun 2010
    Posts:
    37
    Likes Received:
    7
    The results seem to tie in with the performance gains I have seen doing GPGPU work at uni: yes, the GPU is a lot faster at certain tasks, but there are others, such as branch-heavy code, where the performance gain is significantly smaller - plus the fact that existing software would have to be recoded from x86 to CUDA/OpenCL.

    I'd like to see a comparison of performance per watt for the CPU and GPU running these tasks.
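
    To illustrate the branching point with a rough sketch (my own toy model, nothing from Intel's paper): GPUs run threads in lockstep groups (NVIDIA calls them warps, 32 threads wide), so when threads in a group disagree on a branch, the hardware has to execute both sides one after the other.

    ```python
    # Toy model of GPU branch divergence: a 32-wide warp executes in lockstep,
    # so a divergent branch forces the warp to run BOTH paths serially,
    # masking off the threads that didn't take each one.

    WARP_SIZE = 32

    def warp_passes(branch_taken):
        """Number of serialized passes a warp needs for one if/else.

        branch_taken: list of booleans, one per thread in the warp.
        If all threads agree, one pass suffices; if they diverge,
        the 'taken' and 'not taken' paths each cost a pass.
        """
        taken = any(branch_taken)          # at least one thread enters the if
        not_taken = not all(branch_taken)  # at least one thread enters the else
        return int(taken) + int(not_taken)

    uniform = [True] * WARP_SIZE                        # all threads agree
    divergent = [i % 2 == 0 for i in range(WARP_SIZE)]  # odd/even threads split

    print(warp_passes(uniform))    # no divergence: single pass
    print(warp_passes(divergent))  # divergence: both paths executed
    ```

    On a CPU, each of those 32 "threads" would just take its own branch with no such penalty, which is one reason branch-heavy workloads close the gap.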
     
  7. BlackMage23

    BlackMage23 RPG Loving Freak

    Joined:
    4 Aug 2006
    Posts:
    259
    Likes Received:
    1
    Does that mean that Intel just owned themselves?
     
  8. eddtox

    eddtox Homo Interneticus

    Joined:
    7 Jan 2006
    Posts:
    1,296
    Likes Received:
    15
    Does this imply that there are lessons to be learned from GPGPUs for CPU manufacturers? Is it possible to apply knowledge from one field to the other and narrow the gap? AMD would be in a great position to do this.
     
  9. memeroot

    memeroot aged and experianced

    Joined:
    31 Oct 2009
    Posts:
    1,215
    Likes Received:
    19
    @Lizard - I thought the new clients were a little bit dishonest regarding CPU/GPU performance.
    @mjb501 - performance per watt would be interesting, but I think we can guess, given that you'd need a <5* performance improvement to make it worthwhile.
    @rickysio - you'd have to hope so, given that's what the 480 was designed for... heck, it's faster than a 5870, which was 'just' designed for graphics.
     
  10. Lizard

    Lizard @ Scan R&D

    Joined:
    17 Feb 2007
    Posts:
    2,890
    Likes Received:
    34
    In what way? Do you mean how the apparent performance (if you're measuring in ppd) of the clients has varied over time as Stanford adjusts the points system?
     
  11. Shagbag

    Shagbag All glory to the Hypnotoad!

    Joined:
    9 Nov 2006
    Posts:
    320
    Likes Received:
    4
    I'm amazed Intel fessed up to it in the first place.
    Marketing-wise, they've shot themselves in the foot with both barrels.
    I have absolutely no doubt that they loaded each test with as much bias as possible.
    The fact that they used a GPU that was a year older than their CPU is a case in point.
    To then come out and say the GPU was at least 2.5 times as fast as their CPU (at parallel processing tasks) is amazing.

    One thing is for sure: no way in Hell would Apple's Marketing Dept allow such a test result to be released.
     
  12. Phil Rhodes

    Phil Rhodes Hypernobber

    Joined:
    27 Jul 2006
    Posts:
    1,415
    Likes Received:
    10
    Quite so. Actually, I'm surprised it's only 14 times; for graphics I suspect it would really be quite a lot more than that, frame rate for frame rate (though a CPU-based graphics engine would likely give more accurate results, if you care).

    P
     
  13. rickysio

    rickysio N900 | HJE900

    Joined:
    6 Jun 2009
    Posts:
    964
    Likes Received:
    5
    It's not really marketing - it's a paper for discussion by experts.
     
  14. memeroot

    memeroot aged and experianced

    Joined:
    31 Oct 2009
    Posts:
    1,215
    Likes Received:
    19
    @Lizard

    Yep, that's what I meant - though I only picked up the info from the forum, so I don't honestly know if it's true.
     
  15. Centy-face

    Centy-face Caw?

    Joined:
    26 Apr 2009
    Posts:
    165
    Likes Received:
    2
    Why do I get an image of Charlton Heston screaming on a beach, looking up at a huge 480?
     
  16. cgthomas

    cgthomas Cpt. Handsome

    Joined:
    20 Oct 2009
    Posts:
    295
    Likes Received:
    2
    Employee: Doc, I've just pwned myself
    Prof: so what, you're a nerd anyway
     
  17. Fizzban

    Fizzban Man of Many Typos

    Joined:
    10 Mar 2010
    Posts:
    3,369
    Likes Received:
    134
    Not surprised really. Only surprised Intel has let this information out.
     
  18. delriogw

    delriogw New Member

    Joined:
    1 Aug 2007
    Posts:
    116
    Likes Received:
    0
    Credit to them for not trying to hide it, to be honest.

    They can learn from this, as can the industry in general. It also shows the difference isn't as large as a lot of people thought, which in Intel's eyes is of course a positive (and so it should be).

    @shagbag: the fact that Apple wouldn't release this says more about Apple than it does about Intel.
     
  19. wyx087

    wyx087 Homeworld 3 is happening!!

    Joined:
    15 Aug 2007
    Posts:
    10,687
    Likes Received:
    248
    Expected result for a 3.2GHz i7 vs a GTX 280 - in fact, 2.5x is about the speed difference I get when encoding a video on an i7 860 vs a GTX 260.

    What Intel should have shown is single-threaded performance, or heavily branch-dependent performance.
     
  20. bogie170

    bogie170 New Member

    Joined:
    11 Aug 2008
    Posts:
    340
    Likes Received:
    5
    I tried putting a Nvidia GeForce GTX 280 in my CPU socket but it didn't work. Can anyone help me?
     
    dark_avenger likes this.