News Intel unveils 32nm process technology

Discussion in 'Article Discussion' started by Tim S, 10 Feb 2009.

  1. Tim S

    Tim S OG

  2. n3mo

    n3mo What's a Dremel?

    Since the most intensive tasks are slowly moving towards GPGPU and CPUs will never even get close to GPUs in efficiency and raw power, designing faster CPUs is just a waste of time. Even if they get to 11nm, the CPUs will still be too slow for anything. The new process will be useful for GPUs, though.
     
  3. perplekks45

    perplekks45 LIKE AN ANIMAL!

    For highly threadable tasks you're right. But if a workload can't be broken into many threads, CPUs still have a purpose.
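
    Roughly the difference, in a quick CUDA-style sketch (the function names are made up, purely for illustration): the first job splits cleanly across thousands of GPU threads, while in the second each step depends on the one before it, so piling on threads doesn't help and a fast CPU core still matters.

    // Embarrassingly parallel: every element is independent, so one GPU
    // thread per element works and more cores means more speed.
    __global__ void scale_all(float *data, float factor, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            data[i] *= factor;
    }

    // Loop-carried dependency: element i needs element i-1 first, so the
    // work can't just be chopped into independent threads; a fast serial
    // core (or a much cleverer parallel scan algorithm) is needed.
    void running_total(float *data, int n)
    {
        for (int i = 1; i < n; i++)
            data[i] += data[i - 1];
    }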
     
  4. Joeymac

    Joeymac What's a Dremel?

    Until, of course, they integrate the GPU into the CPU... oh wait.
    Still, maybe Intel should work on a better GPU to put in with the CPU... oh wait, they're doing that as well, aren't they?
     
  5. Agamer

    Agamer Minimodder

    Except the CPUs will still be needed for the tasks that can't be put onto the GPU. After all, not all types of processing can be pushed to GPUs.

    Hence any speed advantage in CPUs will still benefit whatever work remains on them.
     
  6. dogdude16

    dogdude16 What's a Dremel?

    I'm just excited that they are putting $7 billion into the U.S. economy.
     
  7. n3mo

    n3mo What's a Dremel?

    Not always. As soon as the Linux kernel can be compiled for a GPU architecture, the CPU will be useless. GPUs are, in the worst case, faster than a CPU by a factor of ~15 and can easily do anything a CPU can - faster. Well, they will be eventually. CUDA is, for now, only a frontend and the GPU architecture needs some changes, not to mention Micro$oft's inability to adapt to anything new. So for now we will use slow, outdated x86 while wasting all the real power on stupid, repetitive games. But some day they will reach 11nm, the core clock won't go any further and no more cores will fit on the die. And then maybe someone will say "x86 was outdated and underperforming anyway, let's do something new". It's like with oil - the less there is left, the more money and interest goes into alternatives.
     
  8. JumpingJack

    JumpingJack What's a Dremel?

    As mentioned above, for highly parallel workloads and regular data structures your statement makes sense, but for irregular data structures, branched logic and integer performance the GPU is a waste of time and power. The GPU also has to provide full double precision; I believe they have made massive progress there, but it's still not fully IEEE-754 compliant -- I need to check that, not sure if it's true anymore.
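
    The branched-logic point in a quick CUDA sketch (illustrative only, names made up): threads in a warp execute in lockstep, so when neighbouring threads take different sides of a data-dependent branch the hardware runs both paths one after the other and half the lanes sit idle each time.

    __device__ int even_work(int x) { int s = 0; for (int k = 0; k < 64; k++) s += x * k; return s; }
    __device__ int odd_work(int x)  { int s = 1; for (int k = 0; k < 64; k++) s ^= x + k; return s; }

    // Adjacent threads land on opposite sides of the branch, so every warp
    // pays for both even_work() and odd_work() instead of running one in parallel.
    __global__ void divergent_kernel(const int *in, int *out, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n)
            return;
        if (in[i] % 2 == 0)
            out[i] = even_work(in[i]);
        else
            out[i] = odd_work(in[i]);
    }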

    Anyway, each has strengths and weaknesses. As we are witnessing, computing is evolving to bring the two together. In the near term, nVidia has a good argument with CUDA and some real killer applications/results, but they have no CPU to pair with it to round it out -- e.g. you cannot currently run an OS on a GPU. Intel, on the other hand, as well as AMD with the ATI acquisition, have the two major pieces to bring the best of both together.

    In the long term nVidia is gonna get squeezed (my prediction)...
     
  9. devdevil85

    devdevil85 What's a Dremel?

    Amen to that, brother.
     