
News Nvidia dismisses AMD's Batman accusations

Discussion in 'Article Discussion' started by Tim S, 3 Oct 2009.

  1. LordPyrinc

    LordPyrinc Legomaniac

    Joined:
    7 Mar 2008
    Posts:
    598
    Likes Received:
    5
I just picked up the game yesterday and am installing it now. I do have an NVidia card, but I would be a bit miffed if I had an ATI card and found out that not all the features are supported. I guess I made the right choice of manufacturer when I went with NVidia.
     
  2. Elton

    Elton Officially a Whisky Nerd

    Joined:
    23 Jan 2009
    Posts:
    8,577
    Likes Received:
    196
AA at any rate isn't too big of an issue, really, seeing as ATI Tray Tools can force it easily. I do that for almost all games: Oblivion, Mass Effect, BioShock. It's not too surprising.
     
  3. crazyceo

    crazyceo What's a Dremel?

    Joined:
    24 Apr 2009
    Posts:
    563
    Likes Received:
    8
    AMD say "WHAAAA! WHAAAA! WHAAAA! I'm telling my mommy!"

    Nvidia say "Go tell ya mommy and ask her how my kids are doing!"

Batman: AA is a very good game and was sure to be a big hit on the back of The Dark Knight movie. Don't you think it would have been in AMD's best interest to get onboard at the very beginning and help their hardware customers?

AMD are to blame here, not Nvidia. AMD could have said "Here, have all this equipment and these development tools, and let's see if we can get it working on our hardware", but no, they sat on their hands and instead got the marketing department to plan a strategy to fill the media with more whining.

Why not put that energy into helping their customers instead of just pissing them off?
     
  4. ElMoIsEviL

    ElMoIsEviL What's a Dremel?

    Joined:
    6 Oct 2009
    Posts:
    3
    Likes Received:
    1
    You're full of it.

    I am the author of that video and you must be that retarded fanboi from HardOCP (Atech I believe is your name there).

    The problem you have here, my son, is that you have challenged individuals who are your intellectual superiors. Here is why your argument doesn't hold water.

A. PhysX is written in CUDA (which is essentially C/C++ with special nVIDIA extensions).

    B. The framerate is actually in the 40-50s. The video is locked to 29FPS (FRAPS video recording). So not only is the CPU handling the PhysX in that clip, it's also recording and encoding the clip into an AVI file.

So let's see how these two pieces of evidence relate to your argument. You claim that because CPUs can't execute special CUDA code as well as CUDA-architecture GPUs can, GPU > CPU in terms of physics. Correct?

You are correct that the level of physical interaction (the precision, so to speak) is lowered. But what this does show is that you can get the same effects (visually speaking), contrary to those nVIDIA comparisons.

    Let's have a look here:
    http://www.realworldtech.com/page.cfm?ArticleID=RWT090909050230

    Do you notice something? Oh yes, a Nehalem based Core i7 is pretty much on par with a GT200 based nVIDIA GPU when it comes to horsepower. This also explains why the PhysX engine used in Ghostbusters (Infernal Engine) produces far more collisions than we see in Batman yet remains playable:
    http://www.youtube.com/watch?v=KGQue3ruGVw

    PhysX CUDA Libraries are coded to run like crap on CPUs and deliberately limited to show CPUs in a bad light (thus selling more nVIDIA cards). When you code a Physics engine properly (for a CPU as the Infernal Engine demonstrates) you can get damn close to GPU Physics without the hassle.
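The "code it properly for a CPU" claim boils down to laying the data out so the CPU's SIMD units can chew through many objects per instruction. A minimal sketch of that idea, using a hypothetical NumPy particle integrator (not the Infernal Engine's actual code, and `step_particles` is an invented name):

```python
import numpy as np

def step_particles(pos, vel, dt=1.0 / 60.0, gravity=-9.81):
    """One vectorized integration step for n particles.

    Operating on whole arrays at once lets the CPU's SIMD units process
    many particles per instruction -- the data-parallel layout a
    CPU-friendly physics engine would use instead of per-object scalar code.
    """
    vel = vel.copy()
    vel[:, 1] += gravity * dt          # apply gravity to the y component
    pos = pos + vel * dt               # advance all particles at once
    return pos, vel

# 10,000 particles, all starting at rest at the origin;
# single precision, same as G80-era GPU physics
pos = np.zeros((10_000, 3), dtype=np.float32)
vel = np.zeros((10_000, 3), dtype=np.float32)
pos, vel = step_particles(pos, vel)
```

Per-object scalar loops over the same data would do identical math but defeat vectorization, which is the gap being argued about.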

    Know your place :)
     
    Last edited: 6 Oct 2009
    impar likes this.
  5. Tim S

    Tim S OG

    Joined:
    8 Nov 2001
    Posts:
    18,881
    Likes Received:
    78
I don't disagree that PhysX is poorly written for CPUs; what I do disagree with is your use of theoretical double-precision throughput as a measure of efficiency for PhysX calculations. Because PhysX is supported by every Nvidia GPU since G80, it doesn't make use of double-precision FP ops: G80 doesn't support double precision at all.

RV770 is still a more efficient GPU in terms of overall peak theoretical single-precision throughput, but its VLIW architecture makes things a little more interesting for developers who want to achieve maximum throughput.
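The single-precision peaks being compared here come from a simple formula: units × clock × FLOPs per unit per cycle. A rough sketch with the era's commonly cited spec-sheet figures (clocks and FLOP counts are from memory and only illustrative):

```python
def peak_gflops(alus, clock_ghz, flops_per_alu_per_cycle):
    """Theoretical single-precision peak = units x clock x ops per cycle."""
    return alus * clock_ghz * flops_per_alu_per_cycle

# GT200 (GTX 280): 240 shader cores at ~1.296 GHz,
# counting the dual-issue MAD+MUL as 3 FLOPs per cycle
gt200 = peak_gflops(240, 1.296, 3)    # ~933 GFLOPS

# RV770 (HD 4870): 800 VLIW lanes at ~0.75 GHz,
# counting a MAD as 2 FLOPs per cycle
rv770 = peak_gflops(800, 0.75, 2)     # ~1200 GFLOPS
```

The RV770 peak is higher on paper, but it assumes the compiler keeps all five lanes of every Vec5 unit busy, which is exactly the caveat above.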
     
  6. chizow

    chizow What's a Dremel?

    Joined:
    12 Dec 2008
    Posts:
    24
    Likes Received:
    1
Full of it? Only a "retarded fanboi" would claim "hardware PhysX runs just fine on the CPU" with that horrid demonstration of a Batman covered in crap. Not only that, but in other areas the effects are even worse, like steam behaving like liquid in a zero-G environment. I guess it's simulating PhysX in space on the CPU? What a joke!

    Make sure to let me know when they arrive, my child. /endthulsadoomvoice.

Wrong, my child: PhysX is written in C and compiled for whatever backend API is required. Honestly, do you think PhysX was really written in CUDA when just about every platform it supports pre-dates CUDA and GeForce acceleration? DO SOME RESEARCH before commenting ignorantly, especially if you're going to take such a comically patronizing tone.

It might be 40-50 FPS at the start, but at the point I highlighted (@1m10s) it clearly drops below 20 FPS. The transition is obvious once there is actually a sufficient CPU load, once again showing the CPU is inadequate for accelerating PhysX effects. And yes, the effects look like utter rubbish, so again, you'd have to be a "retarded fanboi" to think this is a valid workaround or substitute for actual GPU PhysX acceleration.

PhysX (like Havok) has run on x86 platforms far longer than CUDA has existed, so once again, any assumption you're making about CUDA being unoptimized is clearly inaccurate. The PhysX runtime libraries are the same for both the CPU and GPU in PC releases.

No you don't. Your video makes PhysX look like garbage; a proper GPU-accelerated implementation looks great. It's not even close, really, and you don't need to be a "retarded fanboi" to see the difference. Compare these paper effects to those in your video. Not only are per-object nodes and fidelity clearly increased, it also runs at a constant frame rate throughout:

    http://www.youtube.com/watch?v=6GyKCM-Bpuw#t=5m25s

ROFL, that bit of irrelevant info might actually hold some significance if double-precision floating point were actually used in current physics engines, but it's not. The relevant point in your link confirms the GPU handles single-precision floating-point math roughly ten times better than the CPU, with a GT200 clocking in at ~1 TeraFLOP and the CPU at only ~100 GigaFLOPs. In practice the gap is even greater, because the lower throughput quickly becomes the bottleneck for any game engine: it has to wait for all physics calculations to complete before rendering the frame. In your example video the results are obvious. Not only is PhysX running at reduced precision, leading to the artifacting and collision-detection problems, it STILL runs slower than GPU acceleration.
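The frame-budget argument can be put in numbers. A quick back-of-the-envelope sketch (the 2 GFLOP per-frame physics workload is a made-up illustrative figure; the ~1 TFLOP vs ~100 GFLOP throughputs are the rough peaks quoted above):

```python
def physics_time_ms(flop_per_frame, throughput_gflops):
    """Milliseconds spent on physics each frame at a given sustained rate."""
    return flop_per_frame / (throughput_gflops * 1e9) * 1000

work = 2e9                              # hypothetical 2 GFLOP of physics per frame
gpu_ms = physics_time_ms(work, 1000)    # ~1 TFLOP GPU  -> 2 ms
cpu_ms = physics_time_ms(work, 100)     # ~100 GFLOP CPU -> 20 ms

# A 60 FPS frame budget is ~16.7 ms, so on these numbers the CPU's
# physics pass alone already blows the budget before rendering starts
budget_ms = 1000 / 60
```

Since rendering waits on physics, anything over the budget drags the whole frame rate down, which is the claimed bottleneck effect.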

Once again, PhysX pre-dates GPU acceleration; if you look in your games folders, the PhysX libraries use the same compiled binaries for both the CPU and GPU. It's not coded "like crap" any more than any other software/CPU solution like Havok, Velocity, or CryEngine; it's just limited by clearly inferior floating-point throughput.

LMAO, the irony coming from someone who thinks that workaround video is proof of anything other than the limitations of the CPU when it comes to physics acceleration. Maybe one day, when you can buy a CPU capable of 10x the floating-point ops, you'll be able to play Batman with PhysX at full fidelity, but by that point the physics load will be such an insignificant portion of GPU load that no one will even notice.
     
    M7ck likes this.
  7. chizow

    chizow What's a Dremel?

    Joined:
    12 Dec 2008
    Posts:
    24
    Likes Received:
    1
As noted above, PhysX pre-dates GPU acceleration by a long shot; it's not even close. It ran ONLY on CPUs for years, so if there are any inefficiencies, it'd be in how it was written for GPUs. From a uarch standpoint, it should be obvious the CPU isn't as well equipped as a GPU to handle physics. Physics benefits from math-intensive, parallel, in-order instruction streams, which the GPU excels at, whereas the CPU excels at out-of-order execution and has difficulty extracting enough parallelism even to keep multiple cores occupied.

Not sure how the Vec5 shaders can be seen as more efficient when that rarely bears out in practice. As you mentioned, the VLIW Vec5 architecture relies heavily on optimization by the developer and the compiler; unfortunately that rarely happens, so AMD's solution has been a brute-force approach of throwing more mostly brain-dead shaders at the problem.

With Cypress we're starting to see diminishing returns, and as a result the 5870 scales much worse than expected. Nvidia, on the other hand, has made numerous changes to its core architecture: doubling the SFUs and dispatch, decoupling those units from the SPs within an SM, and allowing concurrent kernels to run across each SM in the GPU. And on top of all that, they doubled the SPs just to be safe. I guess we'll see which turns out to be the better design decision, but I'd say Nvidia's design has clearly been more efficient, given that it typically outperforms ATI in just about every GPGPU application to date.
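The VLIW-packing point is easy to quantify: paper throughput only counts if the compiler fills the slots. A sketch with illustrative numbers (the 0.4 packing ratio is invented for the example; the RV770 lane/clock figures are the rough spec-sheet values):

```python
def effective_gflops(lanes, clock_ghz, flops_per_lane, packing):
    """VLIW throughput only counts the lanes the compiler actually fills."""
    return lanes * clock_ghz * flops_per_lane * packing

# RV770-style Vec5: 800 lanes at ~0.75 GHz, a MAD counted as 2 FLOPs.
# Scalar-heavy shader code might fill only 2 of every 5 slots (packing = 0.4).
poorly_packed = effective_gflops(800, 0.75, 2, 0.4)   # 480 GFLOPS
fully_packed = effective_gflops(800, 0.75, 2, 1.0)    # 1200 GFLOPS
```

On those assumptions, a 1.2 TFLOP part delivering under 500 GFLOPS of useful work is how a higher paper peak can still lose to a scalar architecture that sustains most of its peak.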
     
  8. thehippoz

    thehippoz What's a Dremel?

    Joined:
    19 Dec 2008
    Posts:
    5,780
    Likes Received:
    174
well I remember when the first PhysX demo game came out- CellFactor.. me and my buddy were pretty excited about Ageia's physics card after watching some video on the cloth effects.. anyway that first demo game was a total flop, we modified the ini file to run the demo off the cpu instead of the ppu.. it ran fine rofl

    I mean I remember looking at my buddy and going wth is this.. we were getting full frames too

when nvidia bought them, it was pretty common knowledge physx was a gimmick- it was a great idea.. but I have to agree with elmo- if coded correctly physics works fine off the cpu.. heck look at crysis

if you look at how much cpu is used gaming on the core 2's and up- throwing physics at current cpus is perfect.. I see it as nothing but marketing to tell you the truth.. seen nothing in any physx game that made me say- wow they couldn't do that with Havok.. it just sells more nvidia cards
     
  9. ZERO <ibis>

    ZERO <ibis> Minimodder

    Joined:
    22 Feb 2005
    Posts:
    454
    Likes Received:
    8
Maybe I should find a game with PhysX and do my own benchmarks to see the difference. If I ever do this test I will post here so that we can see the FPS difference between using the GPU vs the CPU.
     
  10. thehippoz

    thehippoz What's a Dremel?

    Joined:
    19 Dec 2008
    Posts:
    5,780
    Likes Received:
    174
    yeah I believe elmo is right zero.. that video he posted- batman is in a dream (drugged by scarecrow that part) and you can't walk faster than that.. he's getting good frames considering it's using physx on the cpu- we experienced the same thing on cellfactor
     
  11. Elton

    Elton Officially a Whisky Nerd

    Joined:
    23 Jan 2009
    Posts:
    8,577
    Likes Received:
    196
    Thing about it is, you're going to have to re-write the ini a bit...

    Still, PhysX was always a gimmick.
     
  12. D3lta

    D3lta What's a Dremel?

    Joined:
    22 Sep 2005
    Posts:
    4
    Likes Received:
    0
    Hi guys,

tutorial: how to activate AA in demo release 1.0 on HD4000-series cards without CCC

link <-- Google translation, sorry...
     
  13. gavomatic57

    gavomatic57 Minimodder

    Joined:
    23 Apr 2009
    Posts:
    5,091
    Likes Received:
    10
To an ATI user, yes; to an Nvidia user it is added value/polish.
     
  14. Elton

    Elton Officially a Whisky Nerd

    Joined:
    23 Jan 2009
    Posts:
    8,577
    Likes Received:
    196
I can't really agree, if only because PhysX hasn't changed gameplay much. So far it's pretty cloth movements and corpses, perhaps a window here and there and a few bricks, but there's no real gameplay change with PhysX to warrant the purchase of a second card.
     
  15. impar

    impar Minimodder

    Joined:
    24 Nov 2006
    Posts:
    3,108
    Likes Received:
    42
    Greetings!
GTX 260 user here; I had to disable PhysX to play Mirror's Edge without choppiness.
I get the same 60 FPS (vsync on) with it disabled or enabled, but the experience is much better with it disabled.
     
  16. gavomatic57

    gavomatic57 Minimodder

    Joined:
    23 Apr 2009
    Posts:
    5,091
    Likes Received:
    10
How odd, I had the 192-core 260 and it ran smoothly with PhysX on.
     
  17. gavomatic57

    gavomatic57 Minimodder

    Joined:
    23 Apr 2009
    Posts:
    5,091
    Likes Received:
    10
It isn't supposed to change gameplay - and there are always going to be people who bought ATI cards who can't use it. AA and AF don't change gameplay either, yet we still crank them up.
     