Discussion in 'Article Discussion' started by CardJoe, 4 Jun 2008.
Don't be so sure. My uni just changed its first-year CS courses to Python...
It's going to take nVidia about a week to come up with a CUDA video encoder which will make Intel look pretty stupid. The bit about quality is pure FUD because that's going to come down to how the software is written.
Come on! C is the most basic language. How can any computer/embedded-systems/hardware engineer not know C? How do they survive?
Although the accuracy point is taken. The reason BOINC distributed computing doesn't use the GPU is that they say it's not accurate enough to produce reliable results.
Not knowing C? We get to learn C in the 5th semester at the earliest over here. First is Scheme, then Java. If you're interested you can learn all that web-based stuff as well, but the main language we're taught is Java.
I think this whole "battle of encoding" is pure marketing. We'll see who'll come out as a winner in this one but I have a feeling that both are right... somewhere... a bit... maybe.
Oh, and badders +4.
AFAIC, CUDA and video encoding via CUDA are in the early stages of development compared to the seasoned encoders we have for the CPU. Heck, the DivX codec is ten years old, and people have been writing encoders for it all that time.
As HDD sizes increase, people will be asking for greater quality (and therefore less compression). Could this be a realm for the GPU or the CPU? Who knows. I personally believe the GPU is the next major leap, and as soon as a few freeware encoding apps (along the lines of StaxRip or Handbrake) are released that utilise CUDA, Nvidia will start to win out.
If Nvidia's GPUs had all that branching power, they'd be wasting silicon that would have been better utilised for pushing pixels or massively parallel calculations. Intel is right that a general-purpose processor is much better suited to branching. How much branching is required for video encoding is beyond me, because I've not written any code for that type of application. I do understand Intel's point about the IQ of the video, as I have some understanding of what goes on during video encoding/compression. That said, any programmer worth his salt knows ANSI C.
Anyway, a hybrid approach to video encoding utilising both types of processors is probably the best approach: the CPU for the branching and the GPU for the parallel calculations. Someone asked why not have a hybrid processor. Well, you have to ask how much silicon would have to be partitioned for matrix-type calculations and how much for branching, and for every type of task the answer would be different. As is, CPUs have some degree of Single Instruction Multiple Data (SIMD). Intel calls it SSE/2/3/4/..., and AMD called it 3DNow!; I think AMD adopted SSE a while ago. My point here is that while Intel might be full of marketing, what they're saying isn't necessarily BS: their processors have been capable of doing similar types of calculations to a GPU for quite some time (even before GPUs were marketed, although nowhere near as fast).