Charlie gives his usual Nvidia-friendly summation here: http://www.semiaccurate.com/2010/07/07/nvidia-purposefully-hobbles-physx-cpu/?c=684 while the original article, which is not entirely a morass of techno treacle, is here: http://www.realworldtech.com/page.cfm?ArticleID=RWT070510142143 Personally I'd like the Bit-tech staff to do an analysis of both articles and give a pronouncement on what Nvidia have been/are up to. Why? Because in my opinion, such hobbling of code and hardware is against the spirit of a site such as Bit-tech. Bit-tech may reach a different conclusion. Either way, I for one would like to know what Bit-tech think.
I'd like to see Bit-tech take a stab at that as well. There appeared to be some bias in the SemiAccurate article.
Thank god the whole PhysX malarkey is just a marketing trick to begin with. Otherwise I might actually be upset by just how low Nvidia can go with their bs.
They couldn't market the product if they made PhysX work better on a CPU. A quad core has spare cores sitting there while the GPU is usually busy crunching graphics numbers; if the CPU path were optimised to run on a better instruction set, it would destroy the Nvidia GPUs. Not that GPU-based PhysX is particularly fast anyway; in fact it already takes a lot out of frame rates.
PhysX is just a pain. The number of games I've installed where it's thrown a paddy and I've had to hunt for a fix... I find Havok works a lot better with the same results...
I will read! Real World Tech is usually a fantastic resource for the nitty gritty. Just a shame they don't do more articles.
An interesting article, and they do have a semi-legitimate reason, though it still looks terrible in the public eye regardless.
Charlie... I'm by no means an nVidia fanboy, but even I fail to see why nVidia should devote time and effort to making software that they wholly own (and use as a selling argument for their hardware) run better on a competitor's product. This is like getting mad at that "will it blend" douchebag: it's his own iPhone, and if he wants to blend it up, that's his own choice to make, even if other people could have put it to better use. From a technical standpoint it's interesting to see that PhysX could run much better on the CPU, but nVidia doesn't make CPUs; they sell toasters GPUs, so they make it work better on waffle irons GPUs.
While I see your point, there are levels of "being a dick" about it, just like Intel's "Genuine Intel" trick in its own compiler. For starters: if you want to make the best product for your customer base, and you promote the fact that your product is cross-platform, it's just dick-ish not to optimise your product for every platform; after all, it has to compete with other products that ARE optimised. They put all that effort into their drivers making games fast, so why not the PhysX DLLs? Secondly: the tools to use SSE etc. are out there already, and so is CPU support. You'd have to purposely avoid them not to compile using them. That's being a dick. Would you now buy a game with PhysX, or DEVELOP a game using PhysX, knowing you'll get purposely shoddy performance on a CPU (no matter what CPU, since only two x87 instructions can be handled at once), or pay for an expensive extra GPU that barely any of the market has?
Nvidia locking the tech to their own GPUs I can understand, because other GPUs are direct competitors. Nvidia gimping performance on CPUs to make their GPUs look better is a dick move that lets them spread FUD about CPUs in general.
I will cite one of the comments from the SemiAccurate article: "Ageia, the father of PhysX, was started up in 2002... it's a safe bet that the originating partners were working on PhysX quite some time prior to filing their articles of organization for the corporation. This would put the origination time closer to the 1999 x87-to-SSE changeover, giving a possible explanation as to the use of the older floating-point instructions. (Especially since, as you remember, SSE wasn't exactly welcomed with open arms upon introduction. It took a couple of years for enough SSE-supporting chips to flood the market that it was worth it for developers to start using the new instruction set without fear of alienating a large portion of the non-SSE-supporting user base.) I find the use of x87 to be completely plausible from Ageia's standpoint... back then it was much more familiar and less time consuming to code for than SSE (and time was something Ageia was racing against to get its first add-on cards out the door... I seriously doubt they had the financial or development resources needed to retool their design for SSE even if they wanted to...). Now, since nVidia purchased them 2.5 years ago, and allegedly ported their technology over to CUDA, I would go as far as to call them lazy for not updating the instructions used for the task, but then again, I am not an electrical engineer, and have no idea which types of instructions 'flow' better through the GPU/CPU pipelines... Maybe nVidia's engineers were being rushed by JHH to get PhysX working on the GPU ASAP, so they cut corners and stuck with x87 to simplify things and keep dear leader happy." This does seem to raise a good point, but as Bindi said, Nvidia is being dick-ish: morally wrong, even if it makes sense business-wise. Is it possible that an anti-competitive lawsuit could be raised against Nvidia for this move?
I don't see how; ATI/AMD, Intel and developers are all free to make their own drivers and their own systems that do the same. At the very least it doesn't look like any other anti-competitive case I have heard about.
They do do some fantastic articles. But "they" are just one bloke who does this in his spare time, I believe?
I would sooner buy a blended iPhone than anything with PhysX stamped on it. While the feature does have some genuine appeal, it's been run into the ground in true nVidia fashion. They try to make everything a closed platform and erect tollbooths for every little bit of nVidia tech you encounter. This is just not how the PC platform works, and I vote with my wallet by not letting CUDA, SLI compatibility (motherboards), PhysX, or any of the other nVidia-only tech be a factor at all in my purchasing decisions. nVidia marketing, if you are reading this: how about actually advertising your strengths? You have F@H going for you in a big way.
I personally don't care, because PhysX isn't actually very good... at all. I mean, maybe one good game has it, and that's not even multiplayer. In fact, no multiplayer game will ever sport it, since half the customer base won't be able to run it, and that really helps game developers make money. On the other hand, Havok physics is just as good if not better, runs smoother, and is available in all sorts of online games, since you can be confident everyone has a CPU.
Think I posted this before, but back when Ageia owned PhysX we were really excited. I remember downloading CellFactor, and it ran great on the CPU despite all the rave about the PPU being so much faster. That's when I knew it was a gimmick, and it did flop for Ageia before the marketing team at Nvidia got hold of it; all of a sudden it was the holy grail of gaming. If you spend enough money on spin doctors, you can make eskimos buy ice (or fanboys part with their money). Forget that Havok had been doing it on the CPU for ages already. I mean Batman: I played it with PhysX, with the paper flying around, but the stacks on the ground were static, which kind of ruined the effect (in case you never tried it). It wasn't anything you had to have, or that couldn't be done with Havok for that matter. That said, Batman was an awesome game, and I give props to the devs there, but that's really the only game with PhysX worth talking about. Mirror's Edge wasn't anything special either; it just reminded me how far in the back pocket of Nvidia these devs were. I'm glad some devs are still using Havok and haven't been pulled in. Hopefully OpenCL is the future and we don't have proprietary code used like this anymore.