Discussion in 'Article Discussion' started by arcticstoat, 21 Jun 2011.
F@H might be a good use for this.
Interesting idea, but perhaps a couple of years too late. This would have been better released in the earlier days of GPGPU I think; something to stimulate the market and development in the area. Aren't serious computing programs written with CUDA/Stream/OpenCL in mind now?
This could be perfect for my CAD rendering rig
Heh, I knew this would be recycled Larrabee tech the moment I read the headline.
How does one take advantage of this? Do I just write my code as usual, using my usual compiler? Or use a different compiler targeting Knights Corner? Any benefit for me as a developer? Could it be used to speed up compiling/testing in my IDE, making for a super-nice developer machine?
it's interesting isn't it? =]
I'd much rather see a PCI-E card dedicated to OpenCL. Basically it'd be a GPU, but it wouldn't have video ports and it would be designed to ignore any instruction sets intended for video use only. Think of it like Ageia's original PPU, except with OpenCL instead of PhysX.
"Knights Corner"? Who the hell thought that one up?
That's the nVidia Tesla line.
Surely what this is about is not beating GPGPU, but providing a better solution for certain problems that benefit from out-of-order execution and the other advantages of an x86 architecture. Sure, there are much faster, many-more-core products that can do my simple RISC-style maths MUCH faster. But if I were designing a large server-farm-based supercomputer for general problem solving (i.e. a university) then I would want flexibility.
Fill up a board with two GPGPU cards and two of these, and you have a machine that is capable of doing a lot pretty fast, rather than a few things a bit faster. Sure, there are specific companies that want to model proteins and the like that will likely say no thanks.
It's a good product; it has its place.
Except that Tesla is CUDA-only, NOT OpenCL.
I quoted my own post
Add one of these and two GTX 590s (with waterblocks and single-slot brackets) and you have 146 CPU cores and 2,048 cores for GPGPU. A pretty much do-everything rig.
I still think it will suck.
Even in its 'natural home' running x86 code, it will be up against both Nvidia and ATI.
I see it like this: Intel releases a 50-core card, and Tesla and AMD cards receive a 50% price cut. They're only common GPUs that haven't been intentionally gimped, sold with commercial-use prices and software and support packages.
CUDA is moving at break-neck speed towards ever better flexibility and performance. By 2012 Kepler will be out and about (focused on flops/watt), CUDA will be improved, and I do not see x86 being 'all that' in a field (HPC) where C++ is the requirement and nothing more. Plus they've just shown their hand a year in advance. That's a lot of pricing/marketing/CUDA-pushing wiggle room for Nvidia.
Disclaimer: I do not like Nvidia much at all.
If this supports OpenCL then I hope it'll mine well.....
Intel promised OpenCL within six months of launching Clarksdale. We still don't have it with Sandy Bridge. Yet Intel had compilers ready at launch for its Quick Sync Video.
So would this be ideal for people on the rendering side of the field, like video editing, music mixing and whatnot?
Including superheating your abode and creating a stupendous electric bill.
I know, sounds like a medieval yoghurt.
Seems like they don't want to do it like Atari with their E.T. cartridges.
Intel put their stockpile of useless 'wannabe' GPU processors on a PCI-E card, sell them as maths co-processors and still make a profit off it. Typical Intel...