Discussion in 'Article Discussion' started by arcticstoat, 8 Mar 2011.
The table on page 3 is comparing the ATI cards against a 1.5GB GTX 570 and a 1.3GB GTX 570...
...shouldn't that be a GTX 580 in the second column in from the right?
I agree that 3DMark etc isn't required, but testing with games that bring your system to a standstill is the better way to go. You never know: the same engines that cripple systems today (even though the games are awful) could produce the best game the world has ever seen, and what good is it if you've bought a GPU on the basis that it can play less demanding games such as CoD, only for it to fall over on more advanced titles?
That's like saying "my car is faster than yours, 0-60 = 3s whereas yours is 3.4s" - what good is it if after 60mph the first car runs out of puff and takes a further 5s to hit 100, yet the other is steaming along at 120 by this point? It makes the 0-60 irrelevant. The same can be said of using games that play perfectly well on something such as a 6950 and then saying the 6990 is epic because it's twice the speed... throw in massive textures and a game engine that makes your system bleed, and that 6990 may not even see the boost you'd hope for after shelling out the best part of £600.
I speak from experience: £568 for a 5970... I was disappoint
Swapped to a 480 and saw my average frames drop but the minimum frames increase dramatically... slow and steady wins the race (read: consistent)
So... if I buy a used HD 5870 I'll get HD 6990 performance or even more... it seems a great deal to me
Just saying that the "old" tech can still compete with the "new" GPUs... I already have one HD 5870, so for a small cost I can have a top PC again...
I know what your answer will be, but I insist: for a beast like this you need to review the performance with CRYSIS, it's the most demanding game even to this day (and I seriously mean it)... the list of games you have here... none of them is that demanding!!!
Now, on the card: seeing that they've reached 4GB of memory, does this mean using an x64 OS is a must?
One thing that I find funny is that AMD always has power saving in mind when designing its CPUs - if I'm not wrong, I'd say that's the main priority for them, power saving before brute performance. But with GPUs they said "wth, let's just go crazy on this one and make it a power beast so hungry it needs a separate PSU for it"
I think he means HD5870's in CrossFire
After all the comments since nVidia's 4xx series about using too much power and running too hot and noisy, AMD does a copy job. Me thinks AMD Fanbois should currently have FOOT-IN-MOUTH syndrome... although FOOT-IN-MOUTH is no position for a Fanboi now is it?...
I think you need to grow up a bit. Everyone knows the top dual-GPU cards are hot, noisy and have high power consumption. The 9800 GX2 did, the 4870 X2 did, the GTX 295 did, the 5970 did... get my drift? The 465, 470 and 480 had far too high power draw, temps and noise, and the situation was righted with the 5xx series. See how easy it is to be a mature adult?
2GB chokes at Surround/EyeFinity resolutions of 5760x1200 in several games if you turn settings up too far. Go 5x1 portrait (5400x1920) and 2GB isn't enough at all. Go 7680x1600 and 2GB, again, isn't enough. There are a lot of games that run great at 1920x1200 and stay just below 1GB of VRAM usage as long as AA isn't applied too much... but ask them to do triplewide, and they get very, very VRAM hungry.
Even Windows 7 Aero is quite VRAM hungry in Surround/EyeFinity - VRAM usage goes up from 50-60MB on a single 1080p screen to 160-220MB.
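To put some rough numbers behind those resolutions, here's a back-of-envelope sketch (my own illustration, not from the thread): a 32-bit colour buffer needs width x height x 4 bytes, and multisampling multiplies that storage by the sample count. Real drivers add depth buffers, compression and overhead, so treat these as lower bounds.

```python
# Rough lower bound on colour-buffer VRAM at the resolutions mentioned above.
# These are illustrative figures only; actual usage is game- and driver-dependent.

def framebuffer_mb(width, height, msaa=1, buffers=2):
    """Approximate VRAM for the colour buffers alone, in MB.

    buffers=2 assumes simple double buffering; msaa multiplies
    per-pixel storage by the sample count.
    """
    bytes_per_pixel = 4  # RGBA8
    return width * height * bytes_per_pixel * msaa * buffers / (1024 ** 2)

for name, (w, h) in {
    "1080p single screen":    (1920, 1080),
    "Surround 5760x1200":     (5760, 1200),
    "5x1 portrait 5400x1920": (5400, 1920),
}.items():
    print(f"{name}: {framebuffer_mb(w, h):.0f}MB plain, "
          f"{framebuffer_mb(w, h, msaa=4):.0f}MB with 4xMSAA")
```

The point the numbers make: triplewide roughly triples the buffer cost before a single texture is loaded, which is why AA that's fine on one screen tips a 1GB card over the edge in Surround.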
Looks like an awesome card, but ATI, what a fricking price. I think they may have to reduce it when nVidia comes up with their dual-GPU card
At that resolution, how on earth do you know it's the VRAM and not just the card?
It's fairly easy. A Logitech G15 and MSI Afterburner were my essential tools for that.
Find settings which utilise just under the VRAM of the card and test. If the card is underpowered, sure, you'll see framerates in the 15-20 region. That's to be expected. But if you get a nice, solid 40+fps when you're utilising, say, 960MB of a 1024MB card (some of the VRAM is used as framebuffer) then you change a couple of settings and see framerates drop to <5fps, it's a VRAM limitation.
This is easy to test by having a similar card with double the VRAM. eg: 1GB and 2GB GTX560s.
It is impossible to see VRAM usage exceeding that of the VRAM on the card, so what normally happens in a 'choke' situation is that VRAM hits something like 990MB on a 1GB card and won't go any higher, or depending on the game, you might see VRAM usage drop like a rock - <400MB usage - this appears to be an artefact of the way modern GPUs can offload texture data via the PCI-E bus to the system memory.
Anyway, the only way to then check whether the framerate nosedive is the card running out of 'raw grunt' is then take those same settings on the 2GB card and watch VRAM usage and framerate. If VRAM usage exceeds that of maximum of the 1GB card, say 1100MB, and the framerate returns to a more comfortable 30+fps, it's a VRAM limitation, and not a 'lack of grunt' one.
Basically, a gradual decrease in performance indicates that the GPU isn't powerful enough for what you're asking it to do. A sudden drop indicates a VRAM choke.
Surround/EyeFinity makes this far more obvious than a single screen because of the extremely high resolutions you're asking the card(s) to cope with.
I can play games in Surround on 2GB GTX460s with settings that GTX470s can't cope with because they run out of VRAM and give me single-digit fps. Even GTX480s/580s choke if you ask them to run something like Crysis @ 6060x1200@Very High.
Another interesting example is Burnout Paradise. It's not a terribly demanding game to play, but in Surround at maximum details on 1GB cards it's just not playable - you're looking at sub-10fps framerates. Switch to 2GB cards and you're back at 60fps (it's capped), with minimum framerates still in the high 50s when a major smash happens.
Hope this helps. I wrote a review on it looking at lots of games, but I won't link to it in case it gets me in trouble.
They probably should have used dual-fans set at a lower speed rather than a single fan being extremely noisy at max. There's a lot of room on the casing to support dual fans.
Not that easy, and not that clear cut.
On a GTX 580 1.5GB:
-Crysis runs at a steady 20 to 25fps with 4xAA at 2560x1440 - out of VRAM or not enough GPU power?
-GTA4 runs at a steady 40fps or so at 2560x1440 - out of VRAM or not enough GPU power?
Both cases are out of VRAM going by your method of testing: in both, the VRAM usage recorded by Afterburner is above 1450MB, yet the FPS never takes a dive into single figures (FRAPS on the G15) and GTA4 is completely playable, with FPS going up to 60 depending on the number of cars. Crysis admittedly is unplayable, but is that a lack of GPU power, of VRAM, or both?
Re the 6990 price:
It isn't that bad - the 5970 launched at a similar price.
Thanks for those explanations and those real-life examples : ). This VRAM issue is mainly down to the way the filters work, and unless a new algorithm is found they will stay VRAM hungry. The VRAM requirement doesn't depend on scene complexity/polycount; it mainly depends on the screen resolution.
If your resolution isn't high enough you won't notice the issue - or just increase the AA setting... that will change everything. I was playing Unreal Tournament (the first one) at 2048x1536 on my Iiyama Vision Master Pro 510 (I miss that 19" CRT monitor) with a 128MB Radeon 10 years ago... and it was very playable... but no AA back in those days.
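A quick worked example of why AA, not polycount, is the multiplier (my own arithmetic, assuming plain supersampling and a 32-bit colour buffer):

```python
# The buffers that AA inflates scale with resolution, not scene complexity.
# A 2048x1536 32-bit colour buffer easily fits a 128MB card with no AA,
# but 2x2 supersampling renders at double the resolution on each axis,
# quadrupling the buffer cost. Illustrative figures only.

def colour_buffer_mb(width, height, ss_factor=1):
    """Size of one RGBA8 colour buffer in MB; ss_factor multiplies both axes."""
    return (width * ss_factor) * (height * ss_factor) * 4 / (1024 ** 2)

print(f"No AA:  {colour_buffer_mb(2048, 1536):.1f}MB")
print(f"2x2 SS: {colour_buffer_mb(2048, 1536, ss_factor=2):.1f}MB")
```

Same scene, same polycount - the only thing that changed is the number of pixels being shaded and stored, which is exactly why an old 128MB card could manage 2048x1536 without AA.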
Here is a simple example of VRAM usage in Oblivion to illustrate that:
Now for the GTX590.
Huh... why is Call of Duty: Black Ops listed as an OpenGL game when it runs on DirectX 9? True, the original Call of Duty used OpenGL, but all the sequels since Call of Duty 2 have run on DirectX 9 - at least according to MSI Afterburner's OSD they do!