Discussion in 'Article Discussion' started by Tim S, 30 Apr 2007.
Why has graphics power advanced so much more than the CPU?
Maybe because most customers have been more graphics-focused than CPU-focused?
That was one intense article which managed to lose me a bit towards the end due to its complexities, but it was still a good read nonetheless.
So what you are trying to say is that this new 80-core array is Intel's new graphics card?
I think this is the reason we saw talk about having processors with an integrated GPU in the Nehalem lineup. Also, it's interesting to speculate on what could be done with a stacked CPU-GPU combination (CPU as thread routing; logic and math ops at amazing speed?).
Even if the on-die 'GPU' parts are not used for graphics, they would be welcomed with open arms by those of us doing analysis work.
I didn't read through the article, but I was surprised to see the picture of a USB DAQ card from National Instruments. It was kind of funny because I was just looking up the pinout for a PXI-6508 about 15 minutes ago here at work, lol, and then I come here and see them using one for this.
Good read, I enjoy articles with a more technical focus and this certainly provided such. It's interesting how the computing world is moving back towards specialization for some tasks rather than integration.
A pretty in-depth read, but a superb article.
With reference to the previous poster's comments about why graphics tech moves on faster than CPUs, I agree... it's consumer-led. Graphics cards are the must-have thing for gamers, whereas you can get away with a CPU with less grunt.
Great article, I've been hoping for some more GPGPU stuff to come through (it'd be great if my graphics card auto-encoded all my TV to DivX or something).
The reason graphics cards are so far ahead was covered in the article somewhat. They're specialised; the 8800 only has to do pixel, vertex, geometry and physics processing. A processor has to do far more than that.
Also, we've been working around the limitations of x86 for a while now (x64 has fixed that somewhat), whereas a graphics card can change its architecture massively every revision. CPUs are also slightly more memory-starved than their GPU counterparts.
And now GPUs use more power (electricity-wise) than CPUs, so they have to justify the increased power consumption.
I like the design of the motherboard and terascale processor, with wires everywhere and a lot of machines; it reminds me of Pi (the Darren Aronofsky film).
The absolute difference between GPUs and CPUs is that the latter are primarily logic-operation oriented and linear (hence why we call processes 'threads').
Hmmm, I wonder how fast a graphics card one would need for physics? I'd love to throw in a 6600 GT next to my 8800 GTX SLI mad powerhouse dream system.
Any DX9c graphics card should be better than your processor, so a 6600 GT would probably be pretty good.
It hasn't. The CPU is a general-purpose processor which can do everything (including graphics, very slowly), while the GPU is very fast at what it does but can only do very few tasks in comparison to a CPU. GPU computing isn't the be-all and end-all; a lot of tasks run very much slower on a GPU than on a CPU.
That's exactly right -- the GPU is a very powerful processor when code is written to take advantage of its massive parallelism. With very narrow code, or more generalised code, the CPU will run rings around the GPU.
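To make that concrete, here's a minimal sketch (my own illustration, not from the article) of the same trivial job written both ways, assuming CUDA; the function names and launch sizes are made up. The GPU version spawns one lightweight thread per array element, which is exactly the sort of massive parallelism being described, while the CPU version has to walk the array serially:

#include <cuda_runtime.h>

// GPU version: one lightweight thread per array element.
// Thousands of these execute concurrently across the GPU's cores.
__global__ void scale_gpu(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // this thread's element
    if (i < n)
        data[i] *= factor;
}

// CPU version: the same work, but one element after another.
void scale_cpu(float *data, float factor, int n)
{
    for (int i = 0; i < n; ++i)
        data[i] *= factor;
}

// Hypothetical launch: 256 threads per block, enough blocks to cover n elements.
// scale_gpu<<<(n + 255) / 256, 256>>>(d_data, 2.0f, n);

The catch, as the posts above say, is that the moment the work stops being uniformly data-parallel (branches, dependencies between elements), the GPU's advantage evaporates and the CPU wins.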
Half Life 2: Episode 2!
ATi (AMD) have always been close to Valve. I'm sure the next time we'll see the current state of Episode 2 will be in 3 months' time. If you think about it, that leaves 3-4 months for the hype to brim up in the run-up to the "end of the year", when they plan to release, and then it'll be 'LOL physics kthnx$$$bai'.
They were pretty good at getting Source updated and out with new tech too, e.g. HDR, and indeed the physics in the original title were ground-breaking at the time.
It would also tie in pretty well with the timing of their original delay announcement, and give a pretty good financial justification for it - look at the money that could be made by both sides (sale of physics cards, and sale of the game to use them with).
I think if you're going to push the bar on physics with games like Portal and whatever they might have planned for Episode 2, you may just need some extra physics acceleration to get that extra 'wow'. Or not, but I'm sure Valve know good money when they see it!
I may be wrong, but it fits quite well.
(I guess it could be Far Cry 2?)
Is anyone forgetting that Nvidia are a major backer of the Havok engine as well? (Last I checked, the G80 was purportedly designed to work with the engine as well as DX10.)
It is definitely consumer-led, but the rapid pace of advancement follows and supports the evolution in game design. Gamers want their games to improve graphically; to some extent it justifies the purchase of a new game, and as a result graphics tech can count on continued advancement at the software end, ensuring the market for new cards.
Why the insistence that "we will never, ever, ever see an 80-core CPU"? Yes, yes, I get it, this is just a research project to see what they can do, but who's to say that in 5 years or so they won't come out with a 40-core or 100-core? If the benefits are there and it looks to be a good step forward for computing, then I'll be first in line to pick up my 60 cores. If the software can be optimized and there is a need for the computational ability, then I have no doubt we will see such processors.
Awesome article, dudes!
PS: "We all know the game, it is just a lame excuse to get postcards from all over the world..."