
Graphics Nvidia’s GTX970 has a rather serious memory allocation bug

Discussion in 'Hardware' started by lancer778544, 23 Jan 2015.

  1. Nexxo

    Nexxo * Prefab Sprout – The King of Rock 'n' Roll

    Joined:
    23 Oct 2001
    Posts:
    34,731
    Likes Received:
    2,210
    Again: nobody says it's wrong to complain when manufacturers get their facts wrong; they're just saying that:
    - nobody was lying or trying to be deceitful;
    - this is not as big a deal as you feel it is. You still have an awesome card for a decent price.

    "Transparency of design" went out with the birth of industrial secrets. In any case we wouldn't understand most of the specs and terms that nVidia engineers would throw our way if they really tried to explain how a GPU is put together. We just like to think we do. In the end all we need to know is:
    - Does it run Crysis (i.e. does it run the games I'm interested in at an acceptable framerate, at an acceptable resolution, with the pretty settings on)?
    - How big does my PSU need to be to feed that beast?
     
    Last edited: 28 Jan 2015
  2. The_Crapman

    The_Crapman World's worst stuntman. Lover of bit-tech

    Joined:
    5 Dec 2011
    Posts:
    7,705
    Likes Received:
    3,969
    About 80fps avg on Crysis 2 with ultra settings and hd textures.

    EDIT: More like high 90s after a bit more playing.
     
    Last edited: 28 Jan 2015
  3. Shirty

    Shirty W*nker! Super Moderator

    Joined:
    18 Apr 1982
    Posts:
    12,937
    Likes Received:
    2,058
    Oh hai. I was just sitting here enjoying a very small portion of my fast 4GB VRAM.

    Who said a 980 for a 1080p panel was overkill? :worried:
     
  4. Pookeyhead

    Pookeyhead It's big, and it's clever.

    Joined:
    30 Jan 2004
    Posts:
    10,961
    Likes Received:
    561
    As a final note from me on this: I'm not saying companies need to divulge all their design secrets. I'm saying that if they sell a card where only 3.5GB of the advertised 4GB runs at anything like full speed, customers need to know, because they're under the impression they're buying a 4GB card.
     
  5. law99

    law99 Custom User Title

    Joined:
    24 Sep 2009
    Posts:
    2,390
    Likes Received:
    63
    Like when you buy a **** technology-based product from a company that knows it is ****: they list every supported spec, compliance badge, etc. under the sun to make it seem better than it really is.

    A good example is car stereos.

    I bet if you'd known you could have had a 3.5GB card with 512MB of cache for just a massive fraction of the price, you would have bought differently. /joke

    I'm with Pookeyhead on this one. It's the sort of annoying thing that could have made a difference to the purchase at the time. Given that some people take ROP count as an indication of a card's target resolution, it's worth bearing in mind.
     
  6. Parge

    Parge the worst Super Moderator

    Joined:
    16 Jul 2010
    Posts:
    13,022
    Likes Received:
    618
    I think you've just caused a few people in here to top themselves. :D
     
  7. loftie

    loftie Multimodder

    Joined:
    14 Feb 2009
    Posts:
    3,173
    Likes Received:
    262
    You're a bad person :hehe:
     
  8. Nexxo

    Nexxo * Prefab Sprout – The King of Rock 'n' Roll

    Joined:
    23 Oct 2001
    Posts:
    34,731
    Likes Received:
    2,210
    I totally agree that they should tell you what's in the box, but mistakes happen and in the end you judge a card on the real-life facts that matter, such as actual performance, not on numbers in a table that are highly abstract and meaningless to us anyway.

    Personally I don't know how to relate a certain number of ROPs to performance in Elite Dangerous at 2560x1600 with all its pretty dialled up to 'high'. But FPS I get.
     
  9. law99

    law99 Custom User Title

    Joined:
    24 Sep 2009
    Posts:
    2,390
    Likes Received:
    63
    Very true. Performance in reviews does tip the scales mightily for me. Techreport, though, who I trusted most after their insightful inside-the-frame articles, chose to emulate the card based on the published specs, which were inaccurate (which is a minus point to them now... **** who do I trust?).

    Looking elsewhere, particularly at that Tom's article on Shadow of Mordor recently, I don't think there is that much to worry about, other than whether that 10% gap stretches a bit wider over time, coupled with these RAM shenanigans.
     
  10. Corky42

    Corky42 Where's walle?

    Joined:
    30 Oct 2012
    Posts:
    9,648
    Likes Received:
    388
    So theoretically (out of interest), would doubling the bandwidth make it less of an issue, or more?

    It's not that people think it's wrong to complain when manufacturers aren't transparent; what Nvidia did was wrong. I don't think they wanted people to know that they had that level of granularity in disabling parts of the GPU. In the past they had to disable a lot more of the chip during the binning process, and I don't think they wanted people (read: AMD) to know they can now get higher yields.

    Some things you try to keep a trade secret. In this case it worked out to their disfavour, but how many other tricks and tweaks go on behind the scenes that we know nothing about?
     
  11. Pookeyhead

    Pookeyhead It's big, and it's clever.

    Joined:
    30 Jan 2004
    Posts:
    10,961
    Likes Received:
    561
    Very good question. I can only imagine it would help matters, though the disparity between the two chunks may be wider as a result. This is the sort of thing there's no definitive data on yet.
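    To put very rough numbers on it, here's a back-of-envelope sketch (the 196 GB/s and 28 GB/s figures are the widely reported speeds of the 970's two segments, and a real workload's access pattern is far messier than streaming everything evenly):

```python
# Toy model: effective bandwidth when accesses are spread proportionally
# across the 970's fast 3.5GB segment and slow 0.5GB segment.
# Segment speeds are the widely reported figures, not official specs.

def effective_bandwidth(fast_gb, fast_gbps, slow_gb, slow_gbps):
    total = fast_gb + slow_gb
    # time to stream the whole working set once, in seconds
    time = fast_gb / fast_gbps + slow_gb / slow_gbps
    return total / time  # GB/s

base = effective_bandwidth(3.5, 196.0, 0.5, 28.0)
doubled = effective_bandwidth(3.5, 392.0, 0.5, 56.0)

print(round(base))     # 112 GB/s when all 4GB is touched evenly
print(round(doubled))  # 224 GB/s: faster in absolute terms, but the
                       # penalty relative to a uniform-memory card is unchanged
```

    So in this naive model, doubling both buses helps in absolute terms, but the slow segment still eats the same share of the streaming time, so the *relative* issue stays exactly as big.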
     
  12. heir flick

    heir flick Minimodder

    Joined:
    2 Feb 2007
    Posts:
    1,049
    Likes Received:
    14
    Had a play on Tomb Raider today with Afterburner running.
    970 SLI, maxed out at 2560x1440. When I started, the game was flying along at 120fps with memory use at 3.5GB, then after about 30 minutes it started dropping to 50fps.
    Don't know if this is because of the RAM issue or not.
     
  13. Kronos

    Kronos Multimodder

    Joined:
    6 Nov 2009
    Posts:
    13,495
    Likes Received:
    618
  14. Pookeyhead

    Pookeyhead It's big, and it's clever.

    Joined:
    30 Jan 2004
    Posts:
    10,961
    Likes Received:
    561
    NVidia have gone on record saying they'll help people get refunds? Christ... this is going to hurt financially if they follow through. Then again, that's easy for NVidia to say; it's ultimately down to MSI, EVGA, Asus etc., not NVidia.
     
  15. Kronos

    Kronos Multimodder

    Joined:
    6 Nov 2009
    Posts:
    13,495
    Likes Received:
    618
  16. heir flick

    heir flick Minimodder

    Joined:
    2 Feb 2007
    Posts:
    1,049
    Likes Received:
    14
    Yep, from what I've read it has a much bigger impact in SLI.
     
  17. wyx087

    wyx087 Homeworld 3 is happening!!

    Joined:
    15 Aug 2007
    Posts:
    11,998
    Likes Received:
    716
    Simple: card B is a cut-down version of card A, so B should always perform relative to A. 5850 to 5870, 570 to 580, 8800GTS to 8800GTX; I could go on. As the years passed, those cards always stayed relative.

    E.g., initially the 8800GTS 640MB vs 8800GTX 768MB averaged 75 vs 90 FPS. After a few years, the GTX was getting 40 FPS and the GTS just over 30. This is my personal experience with those two cards. (Then the GT slotted in the middle, making the GTS worthless :( )

    Also, remember the E8400 vs the Q6600? The Q6600 was basically two dual-cores, and as more programs became multi-threaded, it came to perform consistently faster than the E8400. GPU calculations are pretty much all parallel, so with a cut-down number of pipelines, performance can be easily correlated, providing there are no other resource limitations (e.g. ROPs, memory), of course.


    What you've pointed out here is completely true: in theory, the left of your = should be faster than the right.

    That holds if the caching algorithm is doing its job, but who knows how well nVidia have implemented it. It comes down to this: do we trust nVidia's cache algorithm? I personally would much prefer not to rely on patchwork, and to know for sure that the whole of my VRAM is fast for the game.
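    nVidia haven't published how their placement heuristic works, so this is purely a toy sketch of the general idea being debated: keep recently used buffers in the fast segment and demote the least-recently-used ones to the slow 0.5GB segment when it fills. All the names and sizes here are made up for illustration.

```python
from collections import OrderedDict

FAST_CAPACITY_MB = 3584  # the fast 3.5GB segment

class ToyPlacer:
    """Toy LRU placement: hot allocations live in the fast segment,
    cold ones get demoted to the slow segment when space runs out."""
    def __init__(self):
        self.fast = OrderedDict()  # name -> size in MB, in LRU order
        self.slow = {}

    def touch(self, name, size_mb):
        if name in self.fast:
            self.fast.move_to_end(name)       # mark as recently used
            return
        if name in self.slow:                 # promote a buffer that got hot again
            size_mb = self.slow.pop(name)
        while sum(self.fast.values()) + size_mb > FAST_CAPACITY_MB:
            cold_name, cold_size = self.fast.popitem(last=False)  # evict LRU
            self.slow[cold_name] = cold_size
        self.fast[name] = size_mb

p = ToyPlacer()
p.touch("framebuffer", 1000)
p.touch("textures_a", 2000)
p.touch("textures_b", 1000)  # no room left: least-recently-used buffer demoted
print(sorted(p.slow))        # ['framebuffer']
```

    Whether the real driver does anything remotely like this is exactly the point of the argument: we don't know, and once driver tuning stops, whatever heuristic exists is frozen.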

    Looking back, the 6000 series had a "TurboCache" feature that used system memory on lower-end cards. The results were not great, but that does make the 970's implementation look more promising by comparison, like Apple's iPhone next to the Newton.


    Please correct me if I'm wrong: the Hardware Abstraction Layer in this case is the driver?

    I've always said (including the bit you quoted) games query the driver.




    As I've posted numerous times, I'm happy with my card currently. But I would much prefer to have the OPTION of not relying on optimisations to ensure future games work correctly. My concern is different to Pooky's. My concern can't be seen or tested now, because nVidia is working their magic to make the card work well. My concern is for when they stop their driver support. I paid for 80% of a 980; in 5 years' time, is it still going to be 80%?

    Of course, I hope, as much as everyone else here, that I am wrong, so that the card's resale value doesn't get destroyed and my investment continues to be awesome like it is now :D
     
  18. Corky42

    Corky42 Where's walle?

    Joined:
    30 Oct 2012
    Posts:
    9,648
    Likes Received:
    388
  19. wyx087

    wyx087 Homeworld 3 is happening!!

    Joined:
    15 Aug 2007
    Posts:
    11,998
    Likes Received:
    716
    Not according to the wikipedia article you posted: :p

    Of course, there may be many layers between hardware and software, and the wordpress post goes into more detail.
     
  20. bawjaws

    bawjaws Multimodder

    Joined:
    5 Dec 2010
    Posts:
    4,284
    Likes Received:
    891
    OK, I'm not trying to be a dick here, but you've just contradicted your own argument with those figures :)

    In your example above, the 8800GTS initially provided 75/90 = 83.3% of the performance of the 8800GTX. A while later, it was providing 30/40 = 75% of the performance of the 8800GTX. So the 8800GTS failed to always perform at the same fixed percentage of its bigger sibling; in fact, the relative difference between the two cards actually increased!

    It makes sense that a cut-down version of a higher-spec card will perform worse than its bigger relative, but as games get more demanding over time, the performance differential will only ever increase, as the cut-down card will hit some sort of limit before the larger card does. That's why it's not realistic to assume that one card will always perform at a fixed percentage of the performance of another.
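    For anyone who wants to check the arithmetic, here it is in a couple of lines (the FPS figures are the ones quoted above, nothing else assumed):

```python
# GTS as a fraction of GTX, using the figures quoted above
early = 75 / 90   # launch-era averages
later = 30 / 40   # the same two cards a few years on

print(f"{early:.1%}")  # 83.3%
print(f"{later:.1%}")  # 75.0%
# the relative gap widened over time; it did not stay fixed
```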

    Anyway, as I say I'm not trying to pick a fight here so it's probably best to leave it there.
     
