
News: GPUs 'only' 14 times faster than CPUs

Discussion in 'Article Discussion' started by CardJoe, 25 Jun 2010.

  1. Faulk_Wulf

    Faulk_Wulf Internet Addict

    Joined:
    28 Mar 2006
    Posts:
    402
    Likes Received:
    6
    n00b question, but if GPUs are so much faster, why haven't we adapted them to replace CPUs as the main processor for tasks? Is it like a standard-vs-metric type of thing where it's just completely incompatible? It just seems like it'd be a massive upgrade.
     
  2. thehippoz

    thehippoz What's a Dremel?

    Joined:
    19 Dec 2008
    Posts:
    5,780
    Likes Received:
    174
    I like how they used the term 'crowing' when talking about nvidia.. that sounds about right :D
     
  3. Sloth

    Sloth #yolo #swag

    Joined:
    29 Nov 2006
    Posts:
    5,634
    Likes Received:
    208
    My own understanding is that
    a) People don't like change. It'd require a massive switch for the entire physical structure of the PC.
    b) CPUs are still better at a variety of tasks, and those tasks are still quite common today. Most applications currently available and in use can't exploit the highly parallel nature of a GPU (see the sketch after this list).
    c) Standards such as CUDA and OpenCL need to be more widely developed/adopted.
    d) Intel and AMD like money.
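
    A minimal sketch of what (b) and (c) mean in practice, assuming a CUDA-style kernel (the names and sizes below are invented for illustration): on the CPU one thread walks the whole array, while on the GPU the same job has to be re-expressed as a kernel run by one thread per element, and that restructuring is exactly what most existing applications haven't had.

    Code:
    // Hypothetical example: the same job written the "CPU way" and the
    // "GPU way". Sizes and names are invented for this sketch.
    #include <cuda_runtime.h>
    #include <cstdio>

    // CPU version: a single thread visits every element in turn.
    void scale_cpu(float* data, int n, float factor) {
        for (int i = 0; i < n; ++i)
            data[i] *= factor;
    }

    // GPU version: each thread handles exactly one element, so the loop
    // disappears and thousands of threads run at once.
    __global__ void scale_gpu(float* data, int n, float factor) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)                      // the grid may be larger than n
            data[i] *= factor;
    }

    int main() {
        const int n = 1 << 20;
        float* d_data;
        cudaMalloc(&d_data, n * sizeof(float));
        cudaMemset(d_data, 0, n * sizeof(float));

        // Launch enough 256-thread blocks to cover all n elements.
        int threads = 256;
        int blocks = (n + threads - 1) / threads;
        scale_gpu<<<blocks, threads>>>(d_data, n, 2.0f);
        cudaDeviceSynchronize();

        cudaFree(d_data);
        std::printf("kernel finished\n");
        return 0;
    }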
     
  4. HourBeforeDawn

    HourBeforeDawn a.k.a KazeModz

    Joined:
    26 Oct 2006
    Posts:
    2,637
    Likes Received:
    6
    Well, this makes sense: all the apps I saw that took advantage of a GPU processed their task about 14-20 times faster than on the CPU, so the numbers seem spot on. I don't really see the CPU ever going away; it will probably end up being the primary chip on motherboards, with the CPU, northbridge and southbridge all combined into one modular chip, which for the most part is where we are heading anyway.
     
  5. yougotkicked

    yougotkicked A.K.A. YGKtech

    Joined:
    3 Jan 2010
    Posts:
    251
    Likes Received:
    9
    For a while now I have had a 'vision' of the future of computing. The way I see it, GPUs are a lot more powerful but less versatile, while CPUs, though slower, are extremely adaptable. I honestly think that in 10-15 years we will stop thinking of GPUs as graphics processors and start seeing them as specialized processors responsible for all kinds of digital heavy lifting, while the standard CPU will have many more cores than today's and will be responsible for lower-level computations and for managing the GPU's workload.
     
  6. Elton

    Elton Officially a Whisky Nerd

    Joined:
    23 Jan 2009
    Posts:
    8,577
    Likes Received:
    196
    Parallelism. Imagine programming something that uses 320 threads.

    Hell, imagine the time it would take to program something with 64 threads.
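
    For what Elton is describing, here is a hypothetical host-side sketch (plain C++11, which also compiles as CUDA host code) of just the bookkeeping involved in using 64 CPU threads by hand: splitting the work, launching the threads and joining them. It leaves out the genuinely hard parts (shared state, synchronization, load balancing), which is where the programming time actually goes.

    Code:
    // Hypothetical sketch: dividing one array across 64 worker threads.
    #include <cstdio>
    #include <thread>
    #include <vector>

    int main() {
        const int kThreads = 64;
        const int n = 1 << 22;
        std::vector<float> data(n, 1.0f);

        std::vector<std::thread> workers;
        for (int t = 0; t < kThreads; ++t) {
            // Each thread gets its own contiguous slice of the array.
            int begin = t * n / kThreads;
            int end   = (t + 1) * n / kThreads;
            workers.emplace_back([&data, begin, end] {
                for (int i = begin; i < end; ++i)
                    data[i] *= 2.0f;
            });
        }
        for (auto& w : workers)   // wait for every slice to finish
            w.join();

        std::printf("first element: %f\n", data[0]);
        return 0;
    }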
     
  7. Mraedis

    Mraedis Minimodder

    Joined:
    5 Sep 2009
    Posts:
    153
    Likes Received:
    0
    Of course, you could be smart and have it assign an 'empty' thread automatically, instead of hard-coding thread(1) or whatever :D
     
  8. leexgx

    leexgx CPC hang out zone (i Fix pcs i do )

    Joined:
    28 Jun 2006
    Posts:
    1,356
    Likes Received:
    8
    The GTX480 does about 15-16k PPD when it's working correctly (sometimes it drops to 10k when VMware folding is running; restarting the GPU client brings it back up to full speed again).
    With the nerfed A3 bigadv work units, an i7 clocked at about 3.8-4GHz does around 20k PPD (26k before, with the A2 work units).

    The good thing, though, is that the nerf to points seems global: as GPU3 comes into play that's 20% slower, normal A3 work units are about 20-40% slower, and bigadv A3 is 20-30% slower.
     
  9. Glix

    Glix Left Thumb Stick in the mud.

    Joined:
    11 May 2010
    Posts:
    318
    Likes Received:
    1
    Remember the days when the CPU did all the work on its own? I do, and the nightmare that followed. xD
     
  10. Star*Dagger

    Star*Dagger What's a Dremel?

    Joined:
    30 Nov 2007
    Posts:
    882
    Likes Received:
    11
    WOW, something that nVidia can do that is FAST. Maybe they can sell millions of their cards to researchers, because Gamers™ know enough to buy ATI.

    .-
     
  11. VicDiesel

    VicDiesel What's a Dremel?

    Joined:
    26 Jun 2010
    Posts:
    2
    Likes Received:
    0
    Up to 14 times. Up to. The average was 2.5.

    And they didn't report how much effort it took to recode those benchmarks. A simple port will probably run abysmally. You really have to work hard to get those hundreds of threads running that the GPU requires.

    So if you have that one kernel (probably graphics) that speeds up big time, go for it. If you have something else, think very hard about whether you want to invest a couple of weeks or months in rewriting your code for a small gain.
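
    The 'small gain' trade-off can be put into rough numbers with Amdahl's law. A hypothetical back-of-the-envelope sketch (the 14x figure and the runtime fractions below are invented inputs, not measurements): if the part you can move to the GPU is only a modest slice of the total runtime, even a 14x kernel buys very little overall.

    Code:
    // Amdahl's law sketch with made-up inputs.
    #include <cstdio>

    // Overall speed-up when only 'fraction' of the runtime is accelerated
    // by 'speedup' and the rest stays serial.
    double amdahl(double fraction, double speedup) {
        return 1.0 / ((1.0 - fraction) + fraction / speedup);
    }

    int main() {
        // One hot kernel that is 90% of the runtime and runs 14x faster:
        std::printf("90%% of runtime, 14x kernel: %.2fx overall\n", amdahl(0.90, 14.0));
        // The same kernel when it is only 30% of the runtime:
        std::printf("30%% of runtime, 14x kernel: %.2fx overall\n", amdahl(0.30, 14.0));
        return 0;
    }

    With those made-up numbers the first case comes out around 6x overall and the second around 1.4x, which is the kind of gain that is hard to justify weeks or months of rewriting for.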

    V.
     
  12. VicDiesel

    VicDiesel What's a Dremel?

    Joined:
    26 Jun 2010
    Posts:
    2
    Likes Received:
    0
    That is because software has to be massively rewritten to work efficiently on a GPU. With regular CPUs, if the clock got faster, the application got faster; no work required. If the core count goes up, you have to do some work with threading before you see a gain, but that's manageable. With GPUs you basically have to recode your application.

    If you're a relatively small application, and one that happens to get a good speed-up (read: you're a game and graphics determines your speed), then you'll invest the effort. If you're something like MS Windows, you'll never run on a GPU: too much work and no gain.

    V.
     
  13. MajestiX

    MajestiX What's a Dremel?

    Joined:
    14 Apr 2002
    Posts:
    152
    Likes Received:
    0
    Isn't this why Intel is playing with multi-core Atom CPUs?

    Keep the clock rate the same, pump the number of cores up into the hundreds, or shrink the chip.

    They are doing a lot of R&D to test new markets; a company that size isn't going to roll over after a few bad hands.
     
  14. Bakes

    Bakes What's a Dremel?

    Joined:
    4 Jun 2010
    Posts:
    886
    Likes Received:
    17
    Hence we use CPUs for general processing and GPUs for graphics processing.

    Whilst the processing units are getting more powerful, there are still loads of CPU functions and capabilities that GPUs simply cannot handle.

    This paper was designed to try and persuade people that Intel chips were faster in large supercomputers and clusters, where you might see over 20,000 cores.

    Single threaded performance is pointless, simply because it's not representative of real world use. This is not a benchmark designed to interest geeks at their computers comparing Apples to Oranges, it's a scientific paper to convince people that when they're making their large processing clusters, they should use lots of Intel CPUs rather than nVidia GPUs.

    With regard to video encoding, I assume you use Badaboom. The reason why it's much faster is that they compromise massively on quality. If you drop the settings in (say) Handbrake to a comparable level, you're going to have roughly the same speed in either, but as soon as you try to crank up the image quality to HD level, the GPU will be a long way behind.

    To Gareth: I'm disappointed in you. The headline borders on the sensationalist, when even in your own article you write that the 14x figure comes from only one of the tests. It's a bit like saying 'i7 3000 times slower than GTX480' when the test you ran was rendering Crysis. 2.5x faster is the actual figure according to the study, so for you to say that it's 14x faster is basically taken straight from the Daily Mail handbook of sensationalist headlines!
     
  15. knutjb

    knutjb What's a Dremel?

    Joined:
    9 Mar 2009
    Posts:
    62
    Likes Received:
    0
    I agree with Bakes that this is a scientific exercise that will guide Intel on both marketing and future product development. Intel and AMD have a much greater ability to impact the market than Nvidia does. GPU-only computing can stand on its own in only a limited way. Nvidia has made progress, but it's not a broad-spectrum tech at this time and I don't think they will make it one.

    Don't think that Intel will take this lying down; they have the resources to adapt. AMD was thought to be crazy for buying ATI. Nvidia might want to push CUDA over competing approaches, but I don't think they can, because they aren't in a position to overthrow Intel and its presence on the software side. I think AMD/ATI might be in the best position to integrate the CPU and GPU: they have a CPU that Nvidia doesn't have, and far more experience than Intel in GPUs.

    In the real world these ideas and exercises don't always make it to market, but they usually do have a significant impact on future hardware and software architecture. I do think CPU-GPU integration will happen, but the GPUs will be used to increase parallel data-processing performance rather than specifically for graphics output. Think of the money ATI and Nvidia make on graphics cards; they won't want to give up those highly profitable margins. Plus, high-performance GPUs generate a lot of heat, and how much thermal density can an integrated CPU-GPU package handle? If the GPU takes up too much real estate, integration is unlikely to push GPU cards off the board. In the mass corporate world, with its low demands on GPUs, it will make a lot of sense to integrate more, if not all, functions onto a single chip, where thermal limits aren't pushed and power consumption is a major consideration.
     
  16. Gradius

    Gradius IT Consultant

    Joined:
    3 Feb 2009
    Posts:
    288
    Likes Received:
    1
    14x is only for some cases.

    In reality we're 2.5 times slower than we should be.

    Way to go Intel. :/
     
  17. metarinka

    metarinka What's a Dremel?

    Joined:
    9 Feb 2003
    Posts:
    1,844
    Likes Received:
    3
    I think people also forget that mass parallelization isn't all that practical for a lot of everyday applications. Some tasks simply cannot be parallelized, because you're waiting on data from a previous operation.
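
    A hypothetical sketch of the difference (the loops and numbers are invented for illustration): in the first loop every iteration needs the result of the one before it, so you cannot simply hand the iterations to separate threads, while the second loop is independent per element and maps directly onto a GPU-style one-thread-per-element kernel.

    Code:
    // Loop-carried dependence vs. independent per-element work.
    #include <cstdio>

    int main() {
        const int n = 8;
        float x[n] = {1, 2, 3, 4, 5, 6, 7, 8};

        // Serial by nature (at least naively): step i waits on step i-1.
        float running = 0.0f;
        for (int i = 0; i < n; ++i) {
            running = running * 0.5f + x[i];   // needs the previous value
            x[i] = running;
        }

        // Embarrassingly parallel: every element could be its own thread,
        // exactly the shape of work a GPU kernel is good at.
        float y[n];
        for (int i = 0; i < n; ++i)
            y[i] = x[i] * x[i];

        std::printf("%f %f\n", x[n-1], y[n-1]);
        return 0;
    }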

    Now, when we are talking about computer clusters that handle large data sets, then threading becomes relevant. SETI@home and Folding@home are evidence of the types of tasks that can be parallelized to the nth degree and still see performance gains. Some tasks can only be broken down so much before you don't gain anything.


    I actually foresee the whole industry moving towards threading, and it will take the hardware, the software and the people (coders etc.) a good 10-15 years for good threading practices and standards to fall into place. If you look at all the CPU roadmaps, they are moving to 6-8+ cores. I think we'll see some type of hybrid hardware that can scale down to a few very fast cores for serial computing and scale up to N cores for massively parallel applications, a la joining a CPU and GPU into one die.
     
  18. TheMusician

    TheMusician Audio/Tech Enthusiast/Historian

    Joined:
    13 Jul 2009
    Posts:
    573
    Likes Received:
    32
    We're getting there. Flash 10.1 is just the start.
     
  19. Splynncryth

    Splynncryth 0x665E3FF6,0x46CC,...

    Joined:
    31 Dec 2002
    Posts:
    1,510
    Likes Received:
    18
    This just seems to be the next processing architecture argument. There may be some valid parallels from the discussion of x86 vs EPIC years ago.

    But computers are more than the underlying hardware; they are about software too. You need a solid community of systems programmers and tool authors to make the platform work. To me, this is why AMD's x64 won out over Intel's IA64.
    It's the same story with the GPGPU idea.
     
  20. Bakes

    Bakes What's a Dremel?

    Joined:
    4 Jun 2010
    Posts:
    886
    Likes Received:
    17
    AMD64 won because it was backwards compatible with x86.
     