The initial plan

With the upcoming release of Titan, I made serious plans to treat myself to a new X79 rig with all the bells and whistles attached. Luckily, sanity prevailed and I realised that actually having savings was a good thing. However, I still had a yearning to upgrade my PC with some shiny bits. So, the two requirements for the replacement GPU solution were:

1) Have at least 3GB VRAM.
2) Be at least as fast as three 580s.

Looking around, the EVGA 3GB 660 Ti Superclocked+ caught my eye. I still liked the idea of having three cards though (no particular reason - I just like keeping my mobo covered, I suppose) and fortunately the 660 Ti supports tri SLI.

First impressions

Here they are:

I have to say that EVGA have released a very sexy card! The surfaces have a nice burnished effect and a quality feel to them - in fact, the card feels more high end than mid range. The cooling on the card is complete overkill, but in a good way: the cooler is the same one used for the 670 and 680, so keeping the 660 Ti cool is a trivial task. One card is inaudible - as are three in SLI! I never knew a tri SLI setup could be so quiet, let alone on air compared to water! Temperatures are kept at a steady 85C.

Overclocking

After doing a few tests I let loose and started overclocking the cards. To my surprise (and delight) I found that the cards actually core boosted to 1150MHz by default - a healthy +91MHz over what they were supposed to do! Unfortunately I couldn't push the core any further, even with the overvoltage set to 122% through Afterburner. Memory overclocking, now this is where it gets fun. The cards by default are set at 6008MHz. I managed to achieve - wait for it - 7400MHz! Unbelievable! Results of the OC are included in most of the graphs in the results section.

Results

Testing methodology

The tests were done on a fresh install of Windows 7 Ultimate x64 (I'm not going anywhere near Windows 8). The OS is on its own disk (250GB Samsung 840) and all benchmarking programmes were installed on a different SSD (once again a 250GB Samsung 840). All settings in the Nvidia control panel were left at their defaults, though the control panel was made aware of each benchmark's exe. All tests were run at maximum possible settings at 2560x1600, and each benchmark was run once before the actual benchmark run. I found that each benchmark had a deviation of ±1FPS. Additionally, I added a comparison of release drivers for the 580 versus current drivers, which resulted in some interesting figures. I apologise for the lack of depth compared to my previous reviews, but I'm not a student anymore and I value my free time! The previous review for my old 580s can be found here. Anyhoo, on to graphs!

Batman: Arkham Asylum

I love this game - what made it particularly sweet is that I got it free with a GTX 260 ages ago!

Just Cause 2

I still haven't played through this game! It looks amazing!

Heaven 2.0

As far as I'm concerned, Heaven 2.0 is the best version of the Heaven benchmark. Heaven 4.0 is ruined by the overzealous use of depth of field.

Crysis (GPU benchmark)

I don't care what review sites think now, Crysis is still the be-all and end-all of benchmarks in my opinion. Until the day a single mid range GPU can run Crysis at full settings, 2560x1600 and minimums of 60FPS, it's still valid. And there is STILL no better looking game despite it being released over five years ago. I tried to take some screenshots, but unfortunately everything just results in a black screengrab.
Look at how well two 660s compare to three 580s! I must be hitting a CPU bottleneck - a bit worrying when you consider it's an i7 920 running at 4GHz. Don't give me any crap about how a 5GHz 3970K would make a difference, because I reckon it really wouldn't. Also, as a bonus, here are some additional antialiasing comparisons. Did any of us think we'd see the day Crysis could be run with 16xAA?!

Memory use

In Crysis, my old 580s couldn't handle 8xAA - a slide show ensued. Memory use went to ~1.6GB for 8QxAA on the 660s, with 16QxAA using up 2GB. With the high res texture pack, custom settings and 8QxAA I use for Crysis, memory use goes up to about 2.5GB. Warhead, with high res textures, 4xAA and custom settings, goes up to 3GB - 8xAA is fine for the most part, but the last couple of levels end up needing more than 3GB! The other benchmarks seemed content with around 1.6GB of usage. It seems that in the past my 580s were right on the borderline memory wise. All in all, I think everything feels a little smoother now with the extra VRAM helping out.

Driver, SLI scaling and overclocking analysis

Driver percentage increase:

SLI scaling:

Overclocking percentage increase:

Conclusions

First, it is important to note the huge improvement in Nvidia's drivers. In particular, minimum values seem to be vastly improved, with Just Cause 2 and Crysis showing 50+% increases in performance! SLI scaling is brilliant for two cards, with tri SLI still a bit game dependent. Overclocking results are a bit hit and miss but can really make a difference in some games.

I am genuinely impressed by the 660 Ti. Although they may not appear to be much of an improvement over the 580s, games actually feel a bit smoother and they are so quiet! I think that for sub-1920x1200 resolutions, one card is a good trade-off between performance and cost. For 1920x1200 to 2560x1600 resolutions, two cards in SLI represent the cost/performance sweet spot.

Finally, I'd like to leave you with something that made me smile:
That XFi card looks a bit squished there! Good results, beating the 580 setup with less noise, heat and power usage. That said, this little lot would have cost somewhere near £600-700 - would dual 670/680s have been better?
Hmm, difficult question. I'm inclined to think the three 660s would happily outperform two 680s, and they'd be laughing at 670s, since one 660 Ti is supposed to perform at roughly the same level as a 670 (give or take).
Also, time to change your avatar! I always find SLI setups intriguing, especially the question of whether a trio of mid-range cards can overtake a duo of higher end ones. I guess it can be game specific - your Crysis results show little improvement in scaling from two to three cards, so dual 680s would probably be better in that situation.
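To put rough numbers on that scaling question, here's a minimal Python sketch of the maths - the FPS figures in it are made-up placeholders for illustration, not numbers taken from the review:

Code:
# Rough SLI scaling calculator. The FPS numbers below are hypothetical
# placeholders for illustration - substitute your own benchmark results.

def scaling(fps_single, fps_multi, n_cards):
    """Return % gain over one card and % of perfect n-card scaling."""
    gain = (fps_multi / fps_single - 1.0) * 100.0
    efficiency = fps_multi / (n_cards * fps_single) * 100.0
    return gain, efficiency

fps_one, fps_two, fps_three = 30.0, 57.0, 62.0  # placeholder FPS values

for n, fps in ((2, fps_two), (3, fps_three)):
    gain, eff = scaling(fps_one, fps, n)
    print(f"{n} cards: +{gain:.0f}% over one card ({eff:.0f}% of perfect scaling)")

With placeholder numbers like those, the step from two to three cards adds almost nothing, which is the pattern the Crysis graphs seem to show.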
Fantastic effort on the whole review! Puts my lame reviews to shame... (Well, more to do with not being arsed to do a full-detail review like yours.) Lol. Also a valid point that 2GB VRAM on today's GTX 670s/680s/690s is just not enough with all the bells and whistles, as I have been telling people - whether they choose to believe me or not! Some seem in denial and believe 2GB is enough at 1440p+ resolutions. You have quite clearly shown that even 3GB VRAM can be maxed out in older titles like Warhead. The game looks absolutely stunning with my custom config, the added high res textures and the other mods. BTW - GTA IV has had that issue with cards with more than 2GB VRAM, and now the only way to max the game out is to add commands to the target shortcut. +Rep for a great read! Cheers, Si.
I doubt that - the memory bus has been cut down to 192-bit on the 660s. So even if you clock their nuts off, you probably won't match a stock 670 (which can still be overclocked itself). Having said that, thanks for taking the time to put up your review. I was surprised at the scaling of tri SLI in Crysis though - I thought it liked multi GPU setups.
This could be down to the power saving stuff - I found when running 3DMark Vantage that the cards weren't using their full boost clocks once you got over 60FPS or so. The way around that is to disable all the power saving features in the Nvidia control panel and in the Windows power options for the PCI-E bus etc.
Personally I think you should treat yourself to a faster CPU and PCIe 3.0 when Haswell arrives. If you think you're CPU limited, every little helps. Also, to test your i7 9xx CPU limit theory, why not try against Tom's article here: http://www.tomshardware.co.uk/geforce-gtx-680-geforce-gtx-660-ti-sli,review-32632-2.html Then you don't need to break the bank yet... P.S. Also, cool! Good work!
Pete, I beg to differ on that comment though. Here are my findings, although I should have done some more tests... Whether the results show an improvement from the extra PCI-E bandwidth, from the CPU, or from one or the other, I believe it's both helping get better performance out of my GTX 580s. X79 has a hell of a lot more bandwidth to give with the extra PCIe lanes.

Here are my old results running three GTX 580 1.5GB cards on a Gigabyte UD7, which runs x16/x8/x8, against the Asus Rampage IV, which runs x16/x16/x8:

990X @ 5GHz, Gigabyte UD7:

3960X @ 5GHz, Asus Rampage IV:

990X @ 5GHz, Gigabyte UD7:

3960X @ 5GHz, Asus Rampage IV:
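For rough context on the bandwidth point, here's a back-of-the-envelope sketch. It assumes the usual ~500MB/s per lane per direction for PCIe 2.0 (which is what the 580s run at either way) - that figure is my assumption, not something from the results above:

Code:
# Back-of-the-envelope PCIe bandwidth comparison for the two boards.
# Assumes PCIe 2.0 at ~500 MB/s per lane per direction (5 GT/s, 8b/10b),
# since GTX 580s are PCIe 2.0 cards regardless of the slot.
MB_PER_LANE = 500

boards = {
    "Gigabyte UD7 (x16/x8/x8)":     (16, 8, 8),
    "Asus Rampage IV (x16/x16/x8)": (16, 16, 8),
}

for name, lanes in boards.items():
    per_slot = [n * MB_PER_LANE / 1000 for n in lanes]  # GB/s per slot
    total = sum(per_slot)
    print(f"{name}: {per_slot} GB/s per slot, {total:.0f} GB/s total")

So the second card's slot doubles up on the X79 board - whether any given game actually notices is another question.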
Been running MSI Afterburner rather than EVGA Precision, so I guess I missed out on that. (Does this mean I have to redo all my benchies with K-Boost enabled? ...Not today...)
Your wish is my command!

In regards to the 680s, maybe. Unless someone sends me two to have a play with, I don't know. Given the minimal effect upping the antialiasing has, I'm more inclined to think that I'm hitting another bottleneck somewhere.

Yup, completely agree. And your Crysis settings work nicely.

Mm, dunno. But yeah, I was a bit crestfallen when the tri SLI results for Crysis came through - given how amazingly two performed, I thought I would be hitting a 70-80FPS mean.

Yup, 580s are still good! Retrospectively I wish I had waited for the 3GB 580s to release before buying them, but being the idiot I am I rushed in.
I really like my X58 setup - it's like my Volvo estate in a way: sure, it may not be the fastest car around, but it's still got a surprising amount of grunt and is very sturdy! At some point I'll run the Metro 2033 benchmark to compare against that Tom's article though - good find! For the Heaven benchmark, you're only losing about 3FPS off a 60FPS minimum (5% difference) and it's a similar story for the mean value. I would like to see results for 4GHz versus 5GHz though! Also, I know you like your 3DMark values, but I've had a bit of a change of heart on them: in-game FPS mean more in my most humble opinion. Still, I do like watching that spaceship one!
Heaven will show the same results running stock CPU speeds or 5GHz, as the benchmark is GPU dependent. Also, the min FPS in Heaven 2.5 is taken from the first instance the benchmark runs. (If you watch your GPU usage, you will see it takes a second or two for the other two cards to change from 2D clocks to 3D clocks.) So min FPS is not valid in that benchmark. Heaven 4.0 is a bit better when starting off, as the min FPS is captured when going through the dark tunnel, and not at the very start of the benchmark.
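If you ever log frame times yourself, that clock-ramp problem is easy enough to strip out before taking a minimum. A minimal sketch, assuming a hypothetical log file with one frame time in milliseconds per line:

Code:
# Compute min FPS from a frame-time log while skipping the warm-up
# period, so the 2D->3D clock ramp at the start doesn't pollute the
# minimum. The log format is hypothetical: one frame time (ms) per line.

WARMUP_SECONDS = 3.0

def min_fps(frametimes_ms, warmup=WARMUP_SECONDS):
    elapsed = 0.0
    kept = []
    for ft in frametimes_ms:
        elapsed += ft / 1000.0
        if elapsed > warmup:
            kept.append(ft)
    return 1000.0 / max(kept)  # slowest kept frame -> minimum FPS

with open("frametimes.log") as f:  # hypothetical log file
    times = [float(line) for line in f if line.strip()]

print(f"Min FPS ignoring the first {WARMUP_SECONDS}s: {min_fps(times):.1f}")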
Yeah, I think it'd be a good comparison point. Certainly nice to see them muscling in on two 680s. And I hear 3-way SLI has even fewer issues with microstutter - not that Nvidia cards are as badly affected anyway...
Very few games have native tri SLI support - you may need to make your own profiles to gain the performance, or visit an overclockers' site, which will likely have the profiles.
Crysis benchmark with Crysis patched to 1.21, with all power saving settings turned off. Notice the differences between 1080p and 1600p in the GPU usage. Very interesting...

1080p, CPU @ 5GHz, GPUs @ stock over-boost (GPU 1 @ 1176MHz, GPU 2 @ 1163MHz, GPU 3 @ 1150MHz)

Results:

Code:
17/03/2013 14:38:04 - Vista 64
Beginning Run #1 on Map-island, Demo-benchmark_gpu
DX10 1920x1080, AA=16xQ, Vsync=Disabled, 64 bit test, FullScreen
Demo Loops=3, Time Of Day= 9
Global Game Quality: VeryHigh
==============================================================
TimeDemo Play Started, (Total Frames: 2000, Recorded Time: 111.86s)
!TimeDemo Run 0 Finished. Play Time: 29.31s, Average FPS: 68.23
Min FPS: 50.89 at frame 166, Max FPS: 110.14 at frame 998
Average Tri/Sec: -15430436, Tri/Frame: -226153
Recorded/Played Tris ratio: -4.05
!TimeDemo Run 1 Finished. Play Time: 23.67s, Average FPS: 84.48
Min FPS: 50.89 at frame 166, Max FPS: 111.39 at frame 1655
Average Tri/Sec: -17949498, Tri/Frame: -212471
Recorded/Played Tris ratio: -4.31
!TimeDemo Run 2 Finished. Play Time: 23.56s, Average FPS: 84.90
Min FPS: 50.89 at frame 166, Max FPS: 111.39 at frame 1655
Average Tri/Sec: -18116010, Tri/Frame: -213373
Recorded/Played Tris ratio: -4.30
TimeDemo Play Ended, (3 Runs Performed)
==============================================================
Completed All Tests

<><><><><><><><><><><><><>>--SUMMARY--<<><><><><><><><><><><><><>
17/03/2013 14:38:04 - Vista 64
Run #1- DX10 1920x1080 AA=16xQ, 64 bit test, Quality: VeryHigh ~~ Overall Average FPS: 84.69

1600p, CPU @ 5GHz, GPUs @ stock over-boost (GPU 1 @ 1176MHz, GPU 2 @ 1163MHz, GPU 3 @ 1150MHz). 1440p was not selectable in the available resolutions, so it still ran at 1600p.

Results:

Code:
17/03/2013 14:46:06 - Vista 64
Beginning Run #1 on Map-island, Demo-benchmark_gpu
DX10 2560x1600, AA=16xQ, Vsync=Disabled, 64 bit test, FullScreen
Demo Loops=3, Time Of Day= 9
Global Game Quality: VeryHigh
==============================================================
TimeDemo Play Started, (Total Frames: 2000, Recorded Time: 111.86s)
!TimeDemo Run 0 Finished. Play Time: 34.36s, Average FPS: 58.21
Min FPS: 44.29 at frame 178, Max FPS: 83.89 at frame 1012
Average Tri/Sec: -8419574, Tri/Frame: -144652
Recorded/Played Tris ratio: -6.34
!TimeDemo Run 1 Finished. Play Time: 28.77s, Average FPS: 69.51
Min FPS: 44.29 at frame 178, Max FPS: 83.89 at frame 1012
Average Tri/Sec: -9129396, Tri/Frame: -131330
Recorded/Played Tris ratio: -6.98
!TimeDemo Run 2 Finished. Play Time: 28.40s, Average FPS: 70.41
Min FPS: 44.29 at frame 178, Max FPS: 83.89 at frame 1012
Average Tri/Sec: -9383529, Tri/Frame: -133263
Recorded/Played Tris ratio: -6.88
TimeDemo Play Ended, (3 Runs Performed)
==============================================================
Completed All Tests

<><><><><><><><><><><><><>>--SUMMARY--<<><><><><><><><><><><><><>
17/03/2013 14:46:06 - Vista 64
Run #1- DX10 2560x1600 AA=16xQ, 64 bit test, Quality: VeryHigh ~~ Overall Average FPS: 69.96

I will now clock my CPU to 4GHz and report back.

Where are you getting this from? That statement is so not true... If a game supports SLI, then it will support up to 4 GPUs, more so on games from the past couple of years or more.
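For anyone wanting to tabulate runs like the logs above, here's a minimal Python sketch that pulls the per-run averages and the minimum FPS out of Crysis TimeDemo output. The filename is hypothetical - paste the benchmark text into a file of your own - and it simply pattern-matches the "!TimeDemo Run ... Finished" and "Min FPS" lines, so treat it as a starting point rather than anything official:

Code:
import re

# Pull the per-run average and the minimum FPS out of a Crysis
# TimeDemo log like the ones above. "timedemo.log" is a hypothetical
# filename - paste the benchmark output into a file of your own.
RUN_RE = re.compile(r"!TimeDemo Run \d+ Finished.*Average FPS:\s*([\d.]+)")
MIN_RE = re.compile(r"Min FPS:\s*([\d.]+)")

def parse_timedemo(path):
    avgs, mins = [], []
    with open(path) as f:
        for line in f:
            run = RUN_RE.search(line)
            if run:
                avgs.append(float(run.group(1)))
            low = MIN_RE.search(line)
            if low:
                mins.append(float(low.group(1)))
    return avgs, mins

avgs, mins = parse_timedemo("timedemo.log")
# Run 0 is effectively a warm-up pass, so drop it from the average.
if len(avgs) > 1:
    avgs = avgs[1:]
print(f"Mean of average FPS over {len(avgs)} run(s): {sum(avgs)/len(avgs):.2f}")
print(f"Lowest min FPS seen: {min(mins):.2f}")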