
Graphics: Do you feel the Nvidia 9xx series in general has thus far been a bit of a letdown?

Discussion in 'Hardware' started by Madness_3d, 11 Feb 2015.


"The 9xx series cards are a letdown"

  1. Agree (Letdown)

    12 vote(s)
    26.1%
  2. Disagree (Happy with products on market)

    34 vote(s)
    73.9%
  1. Madness_3d

    Madness_3d Bit-Tech/Asus OC Winner

    Joined:
    26 Apr 2009
    Posts:
    1,040
    Likes Received:
    36
    That's not what I said. 980 performance at 165W is great; what I'm saying is that we should have something with more performance at 250W, for it to be called the 980, and for it to be priced the same as a 980, as we have seen in the past when moving to a new GPU architecture.
     
  2. Shirty

    Shirty W*nker! Super Moderator

    Joined:
    18 Apr 1982
    Posts:
    12,936
    Likes Received:
    2,058
    To quote myself from here:

     
  3. Nexxo

    Nexxo * Prefab Sprout – The King of Rock 'n' Roll

    Joined:
    23 Oct 2001
    Posts:
    34,731
    Likes Received:
    2,210
    Right, I can picture the conversation at nVIDIA now:

    "Hey, we've got this new chip architecture with which we can make a GPU that absolutely blows the competition off this planet. We're talking epic mental performance here. A ruthless AMD killing literal game changer. This GPU will leave widows and orphans in its wake crying over the desiccated corpses of games who played themselves to death in an ecstatic orgasm of sheer pixel pushing power."

    "Cool. What's the wattage?"

    "Best bit: exactly the same as our previous model, 250 Watts"

    "Naah, sod that. Let's gimp the performance so we can bring the wattage down to, say, 165 Watts. Because we all know that what gamers look for in a top-end graphic card is frugal power consumption, not raw graphic performance, right? Right?"


    Possibly the reason we got a great card at 165W instead of a mind-blowing card at 250W is that this is the best nVIDIA could produce; that's how far the current technology goes. It's not simply a matter of clocking the GPU higher and pouring in more juice to get faster performance, and heat output isn't the only limit holding a GPU back either.
     
  4. Madness_3d

    Madness_3d Bit-Tech/Asus OC Winner

    Joined:
    26 Apr 2009
    Posts:
    1,040
    Likes Received:
    36
    You've somewhat missed the point there: they've already got a chip, GM200, which is a 6-cluster version of Maxwell, where GM204 is a 4-cluster version. Rather than releasing the larger GM200 first, as they used to with previous generations, they chose to release the 4-cluster version as the top-end card for now and not produce GM200 en masse until the competition demands it. The 4-cluster version is cheaper to manufacture, so margins will be higher. So yes, there was a meeting at Nvidia where they decided to go with the 4-cluster version instead of the 6. They could have made the 6-cluster version; it may have been hotter, noisier and slower than it will be when it arrives later this year, but by not releasing it yet they can charge more money for the 4-cluster versions in the meantime. Remember that the GPU market is artificial: normally only the top-end card of an architecture is actually dictated by what is physically possible, and all the others are artificially cut-down versions of that card, which companies like Nvidia dictate. By deciding to use a smaller design for that top-end card they are artificially limiting the progress of the market while they have the advantage.

    It's just plain anti-consumer, and they only get away with it because the competition isn't holding them to account.

    For clarification: a 4-cluster design is 4 GPCs, each with 4 SMMs, each of those with 128 ALUs (2,048 in total), and with a 64-bit memory interface per cluster, so 256 bits in all.

    A 6-cluster design has 6 GPCs, each with 4 SMMs, each with 128 ALUs (3,072 in total), and with a 64-bit memory interface per cluster, so 384 bits wide in total.
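
    As a quick sanity check on that arithmetic, here's a minimal Python sketch; the helper name and defaults are just mine, built from the per-cluster figures above rather than any official spec sheet:

        # Cluster maths from the figures quoted above (not an official spec).
        def maxwell_config(gpcs, smms_per_gpc=4, alus_per_smm=128, mem_bits_per_gpc=64):
            alus = gpcs * smms_per_gpc * alus_per_smm
            bus_width = gpcs * mem_bits_per_gpc
            return alus, bus_width

        print(maxwell_config(4))  # (2048, 256) -> 4-cluster part, GM204-style
        print(maxwell_config(6))  # (3072, 384) -> 6-cluster part, GM200-style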


    Also 1000 posts :thumb:
     
  5. rollo

    rollo Modder

    Joined:
    16 May 2008
    Posts:
    7,887
    Likes Received:
    131
    980: that power-consumption-to-performance ratio is sick. If AMD hope to get back in the race they have to at least match its performance and beat its power consumption; if their card consumes double the power to be 10% faster, nobody will buy it.

    970 is the same card it was at launch; people just need to review their info on it. For under £300 it's one of the fastest cards ever released at that price point. On launch it was faster than anything outside of dual-GPU cards on either side (ignoring the 980).

    960 is a 760 called a 960. Personally I don't care; I don't buy at this end of the market. We are on a tech site, and I'd be shocked if anyone into gaming really cares about the 960 or lower GPUs.

    I also think that once DX12 launches you may see those three Nvidia cards pick up masses of fps in games that are coded for DX12. The Star Swarm demo showed there's performance to be had.

    Battlefield 4 with 64 players is still one of the most intense games you can run today. The fact it's coded properly, unlike the other two suggestions, is what makes it sell well.

    AC Unity does not even work on most PCs, it's such a badly coded piece of crap. Watch Dogs is the same engine and has similar issues. Both games are made by one bad company which has had a lot of issues of late, from AC Unity to the key issues in FC 4.

    The question really should be: does Nvidia need a 250-watt card on the market at this very minute? The answer is a short no. They have no competition and have had none for a good year. If AMD do not release a card before June, are they even fighting the same race anymore?

    I'm guessing Nvidia has the 980 Ti ready to launch as soon as whatever AMD releases hits, and can then price-drop the 970 and 980 to crush whatever they launch. They have such a lead at this point. AMD needs a big success story to remain in existence as a player.

    As for competition, Nvidia has none.
     
  6. loftie

    loftie Multimodder

    Joined:
    14 Feb 2009
    Posts:
    3,173
    Likes Received:
    262
    It should be faster, it's a new card :confused:

    [image: relative performance chart]

    Not sure what the prices are like where you are, rollo, but a 970 and a 290X go for about the same money, and they have about the same relative performance at 1080p according to TPU. I wouldn't call that two different races.

    As for not caring about the 960 or below, why? Surely all tech has something of interest? A lot of people, myself included, were interested in the 750ti when it launched.
     
  7. LennyRhys

    LennyRhys Fan Fan

    Joined:
    16 May 2011
    Posts:
    6,410
    Likes Received:
    918
    Yeah, because everybody who wants to play Crysis 3 at 1440p runs a 580... :wallbash:

    As guys before have been saying, you need to compare like-for-like, in which case the 9xx series performs favourably against its predecessors.

    Crysis 3 (again)
    GTX770 33fps.... GTX970 40fps
    GTX780 40fps.... GTX980 48fps.

    I don't see any problem here.
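
    To put numbers on the like-for-like point, here's a quick Python sketch of the uplift implied by those fps figures (the fps numbers are the ones quoted above; the script itself is just for illustration):

        # Generational uplift from the Crysis 3 numbers above.
        pairs = {"GTX 770 -> GTX 970": (33, 40), "GTX 780 -> GTX 980": (40, 48)}
        for label, (old, new) in pairs.items():
            print(f"{label}: {100 * (new - old) / old:.0f}% faster")
        # Both jumps come out at roughly 20%.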
     
  8. Madness_3d

    Madness_3d Bit-Tech/Asus OC Winner

    Joined:
    26 Apr 2009
    Posts:
    1,040
    Likes Received:
    36
    If you have any other examples of scaling between various generations, please post them. I accept that obviously no one would use that setup, but it does tell us something about the architectural improvement and the performance scaling between the two.

    I also disagree that you'd compare a 780 to a 980. The 980 has all of its SMMs enabled, so it represents the maximum performance Nvidia will get from that GPU layout without a clock speed bump. If they want to make a 980 Ti it will have to be some variation on a 6-cluster design, which is unlikely unless the competition really outstrips the 980, and we cannot be sure of that at the moment. The 780 has 3 of its 15 SMXs disabled, so it is not a fair comparison: it doesn't represent Kepler at its best, or the highest-spec single GPU on the market at the time the 980 replaced it. Again, it is unlikely there will be a 980 Ti; it's more likely they'd wait for the next series and release a new Titan in the meantime.
     
  9. Nexxo

    Nexxo * Prefab Sprout – The King of Rock 'n' Roll

    Joined:
    23 Oct 2001
    Posts:
    34,731
    Likes Received:
    2,210
    As you say, the GM200 is more expensive to manufacture, which means its price would go up accordingly. The GM204 comes in at about £450; how much would the GM200 cost? And would people pay it when a GM204 is giving them ample gaming power? Just how well have those esoteric Asus Mars edition cards been selling again? At the same time, people being people, they would resent paying £450 for what is the second-best card. Just see what a drama they created around the 970 over a memory issue that most people don't understand and that was never even an issue until they found out about it.

    On the 780 vs 980 issue: I prefer a 980 working at its top spec and no 980 Ti over a gimped 780 released so that nVIDIA can later put out a 780 Ti just to shake some more money out of our pockets.
     
  10. IanW

    IanW Grumpy Old Git

    Joined:
    2 Aug 2003
    Posts:
    9,189
    Likes Received:
    2,693
    Yes, but only because the 980 launched just a couple of weeks after I bought my SECOND 780 Ti :waah:
     
  11. Corky42

    Corky42 Where's walle?

    Joined:
    30 Oct 2012
    Posts:
    9,648
    Likes Received:
    388
    I don't think that's how the binning process works. We don't have 6-cluster cards yet probably because not much of the silicon comes out with 6 faultless clusters, and you can't sell something you don't have, or only have in such low numbers that you would run out of stock.

    It's also not a matter of 4 clusters costing less than 6 clusters, as fabrication costs per wafer don't change; the only thing that does change is the percentage of usable silicon. You may get 5% with 6 working clusters, 25% with 4, 55% with 3, and 15% with less, and if your yields don't match expected demand you have to build up stocks.
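
    As a rough illustration of why those percentages matter (the wafer die count here is a made-up number purely for illustration; only the bin percentages come from above):

        # Hypothetical wafer: 200 candidate dies, binned by how many clusters work.
        dies_per_wafer = 200
        bins = {"6 clusters": 0.05, "4 clusters": 0.25, "3 clusters": 0.55, "fewer": 0.15}
        for name, share in bins.items():
            print(f"{name}: ~{int(dies_per_wafer * share)} dies per wafer")
        # Only ~10 fully working 6-cluster dies per wafer, so a top-end part
        # would be supply-constrained long before a 4-cluster part would be.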
     
    Last edited: 12 Feb 2015
  12. LennyRhys

    LennyRhys Fan Fan

    Joined:
    16 May 2011
    Posts:
    6,410
    Likes Received:
    918
    Of course scaling is going to change over time. Just look at how the 8800GTX wiped the floor with previous GPUs... it was carnage. That same level of improvement has never since been repeated, and I don't think it ever will be. In some cases there was almost 100% scaling... that's just crazy. I'm really not sure what your point is with scaling - do you expect that same level of scaling with every new architecture/generation?

    Games are more demanding now than ever before, and hardware has changed drastically in terms of processing power and energy efficiency, so I don't see why we should expect the level of scaling that you seem to think is reasonable. Diminishing returns, dude.
     
  13. Corky42

    Corky42 Where's walle?

    Joined:
    30 Oct 2012
    Posts:
    9,648
    Likes Received:
    388
    How about when techspot.com did a feature on five generations of GeForce?
    http://www.techspot.com/article/928-five-generations-nvidia-geforce-graphics-compared/
    [image: benchmark chart from the TechSpot article]
     
  14. Chris_Waddle

    Chris_Waddle Loving my new digital pinball machine

    Joined:
    26 Mar 2009
    Posts:
    860
    Likes Received:
    61
    A very enjoyable read so far with some well made arguments.

    I'm a habitual upgrader and generally change my graphics card or cards once a year. In the last year or so, in my gaming rig I've had two Titans in SLI, which I loved, and then replaced them with three 290Xs in CrossFire, which I also loved.

    I was all ready to upgrade to a new SLI setup from Nvidia at the start of the year, read the reviews on the 980 and put my wallet away.

    As I already had a 780ti SC in my backup machine I really couldn't see any point in going for the 980, so I decided to try to pick up a cheap second 780ti SC to run in SLI and wait for the next release. I ended up buying two 780ti Classified Hydro Coppers for not an awful lot more than the price of one new 980.

    This is the first time that I can remember skipping a release in something over 10 years. I don't give a crap that the 980 performs on 165W.

    I want a card to scream 'look at the performance gain I'm offering'. The 980 just doesn't do that in any way, shape, or form.

    I'm not saying that the 980 is a disappointing card; what it offers is good value for money compared to previous launch prices. But it is a very disappointing option as an upgrade from the previous generation.

    I'm in the 'who cares' camp on the 970 issue. IMO, for what it offers it's damn good value. I honestly don't care what is going on under the hood; so long as it performs as they claim it will (and 99.9999% of the time it will), I'm happy.

    As for the 960. Have to agree with most on here that it's poor value. I have been considering selling the 780ti SC in my backup machine as it's mainly used as a media machine now, but there's no real point as I wouldn't get much more (if anything more) for the 780ti than the 960 costs. What I would get is a massive drop in performance (even though I don't need it) - so on principle I won't change it.

    I voted no, it's not a let down. The 980 and 970 are excellent cards and are well priced.

    If you'd asked if they are a disappointing upgrade option over the 7 series, then I would have voted yes.
     
  15. Madness_3d

    Madness_3d Bit-Tech/Asus OC Winner

    Joined:
    26 Apr 2009
    Posts:
    1,040
    Likes Received:
    36
    @Nexxo, the thing you're missing there is that GPU prices are fairly arbitrary; they are set according to what Nvidia and AMD think they can get away with charging. Here's a table from a few generations back showing what sort of markup they had on manufacturing costs: MP (median sale price) on the left, cost of manufacture on the right.

    [image: table of median sale prices vs manufacturing costs]

    They could make a 6-cluster card and still make money. It might be hot, a little noisy, and not clocked as high as the version that eventually releases, but they could do it; they always used to. They now don't, because they can charge more money for less this way.

    You say that the 980 is providing "ample power", and yet 30% of the enthusiasts on this site consider these cards a letdown (so far). I'm also not sure I follow your point about the Asus Mars cards; they're very different propositions, priced relative to the rest of the market Nvidia creates. It reads like you're saying: let's not bother making any faster cards, the 980s are all the performance we'll ever need?

    My whole point is that the 980 shouldn't cost £450. It's a mid-range design, so it should cost less, and a 6-cluster version should be in there at the top price point, as was the way with the 2x0 series, 4x0 series, etc.

    And on the 980 vs 780 issue, don't you see that the 980 is itself a hobbled card? It could have had a 6-cluster core and it hasn't; it's missing 2 GPCs to the 780's 3 SMXs!
     
  16. Corky42

    Corky42 Where's walle?

    Joined:
    30 Oct 2012
    Posts:
    9,648
    Likes Received:
    388
    There will be a 6-cluster version of second-generation Maxwell when they release the next Titan card based on the GM200. It will come with a suitably high price to reflect the fact that out of a single wafer they may only get 1-5% that is faultless and can have every part enabled.

    It seems you're looking at price/performance strictly from a consumer's perspective, when that's not how GPU/CPU manufacturers price their hardware; the primary concern is profit per wafer.
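
    A toy example of what "profit per wafer" means in practice; every number here is invented purely to illustrate the point, not taken from any real costing:

        # Two ways of selling the same wafer: a big die with poor yield at a
        # premium price vs a smaller die with good yield at a lower price.
        wafer_cost = 5000                                       # hypothetical
        parts = {"big die": (10, 900), "small die": (60, 450)}  # (good dies, price each)
        for name, (good_dies, price) in parts.items():
            print(f"{name}: profit per wafer = {good_dies * price - wafer_cost}")
        # 10*900 - 5000 = 4000 vs 60*450 - 5000 = 22000: the smaller die wins.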
     
  17. Parge

    Parge the worst Super Moderator

    Joined:
    16 Jul 2010
    Posts:
    13,022
    Likes Received:
    618
    I went from a 770 to a 970. Absolutely incredible performance increase!

    Probably the most noticeable out of any of my GPU upgrades, and I've quite often skipped a generation.
     
  18. Nexxo

    Nexxo * Prefab Sprout – The King of Rock 'n' Roll

    Joined:
    23 Oct 2001
    Posts:
    34,731
    Likes Received:
    2,210
    The profit margin that they can get away with is what customers are prepared to pay minus the cost of production. "They always used to do it" is not a valid argument; things change. As GPUs get more powerful, the envelope of what is technologically possible is getting pushed to the limit.

    30% of the enthusiasts have a rather fluid and arbitrary grasp of GPU technology, and their expectations are set accordingly. The Asus Mars cards offer the highest performance possible --for a price. It turns out the 30% are not prepared to pay that, so why would they for a GM200?

    What you are basically arguing is that you want a GM200 for a 980 price, and that is just not reasonable. You're talking about bigger chips with more circuitry and a correspondingly higher failure rate, resulting in substantially fewer good chips per wafer and driving up the production cost. You are arguing that nVIDIA deliberately chooses not to make a faster card for that price when it could easily have done so. That's a conspiracy theory that sounds suspiciously uninformed about the chip manufacturing process.
     
  19. Madness_3d

    Madness_3d Bit-Tech/Asus OC Winner

    Joined:
    26 Apr 2009
    Posts:
    1,040
    Likes Received:
    36
    Record profits for Nvidia this year. I agree that as we approach the limits of what silicon can do, it will get more and more difficult to get the performance increases and yields we've become accustomed to, but we're not there yet. There are more architectures to come; it takes around 5-7 years to design, verify and bring these architectures to market, and Pascal is next. We're not at the limits of silicon now, so why shouldn't we expect the same increases as before? Moore's law says we should expect it, and they're beating it in mobile, so why not in desktop graphics?

    I mean, you must be right: Nvidia are clearly terrible at manufacturing Maxwell cores. It's not like they got to practise with the 750 Ti, and it's not like they've got the stock clock for a 4-cluster core at over 1200MHz on the same process node as the 770 and 780. Yeah, I reckon if they made a 6-cluster part it'd have to run at 20MHz and would still have a 1000W+ TDP :)

    I'm not asking for a perfect, high-clocked example of a 6-cluster GPU. Contrary to your comments, I do understand a fair bit about the chip manufacturing process. But even a 6-cluster design with an entire cluster disabled would be better: at least then we'd get 2,560 ALUs, a 320-bit memory interface (or 384-bit with non-symmetric memory) and the additional resources that come with it. We'd have a meaningful performance update over the 780 Ti, probably with a 250W TDP and a sub-1000MHz clock speed, sure, but they could still be making 4-cluster versions for those who want them. Then next year they could release the full-fat 6-cluster version when they've got the yields up where they want them. At least that way the consumer would have the *choice* this year. And from a yield and manufacturing cost perspective, if they can clock 980s as high as they have with voltages as low as they use, then there's no reason why the above isn't possible.
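
    Making the salvage-part sums explicit with a tiny Python sketch (my own working, using the per-cluster figures from earlier in the thread):

        # 6-cluster die with one GPC disabled leaves 5 working GPCs.
        gpcs, smms_per_gpc, alus_per_smm, mem_bits_per_gpc = 5, 4, 128, 64
        print(gpcs * smms_per_gpc * alus_per_smm)  # 2560 ALUs
        print(gpcs * mem_bits_per_gpc)             # 320-bit memory interface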

    I'm sure you'll say "well, if they could make them, they would", but I see it the other way: why bother, from their perspective, releasing a card that's more powerful than it needs to be? If they can launch this and people buy it, then they can hold that extra performance in the bank, draw out the lifespan of this architecture, and release more only when the competition requires it. It makes perfect sense from a business perspective; it's not a conspiracy, but it is pretty anti-consumer.

    I'm glad to have found that it's not just me who holds this opinion. Please feel free to buy 980s/970s if you feel they offer a worthwhile upgrade and a solid investment, but I personally shan't be. I will vote with my wallet, as that's arguably all I can do.

    Also, I think you may be a bit uninformed on the Asus Mars front. You're right that in past generations the Mars cards were basically the top-end dual-GPU card overclocked a bit (and the fact that they're dual GPU kind of nullifies the point in this argument, but whatever), but the latest Mars isn't even that. It's a pair of 760s: yes, 760s, not 780 Tis, not 780s, not even 770s (which are pretty much overclocked 680s), but 760s. They by no means represent the fastest GPU you can get; there are several that are much faster, which is probably why people haven't bought the latest one in droves.

    You ended your argument by saying that people wouldn't be prepared to pay the premium for more performance. Surely the introduction of the Titan range has shown that, double-precision floating point aside, many people are willing to pay way over the odds to get their hands on a higher-performance, fully fleshed-out GPU at the time a new architecture is introduced. All I'm saying is that they never used to need to, and they shouldn't have to now.
     
    Last edited: 13 Feb 2015
  20. Nexxo

    Nexxo * Prefab Sprout – The King of Rock 'n' Roll

    Joined:
    23 Oct 2001
    Posts:
    34,731
    Likes Received:
    2,210
    Moore's law is not a law of physics. It doesn't have to be true. And it is starting to run into the wall of what is physically possible with silicon. As for the rest: if it was commercially viable, it would have been done. For the rest of the power junkies there is quad SLI --the low wattage of the 980 actually makes that a reasonable proposition now. Knock yourself out.
     
