Nice to know. I keep getting Windows popups mid-game in BF3 asking me to turn off Aero, which I think indicates I'm running out of VRAM on the 570. This is at 1200p.
Just peeped this "leaked at CES" spec. Sauce: http://www.obr-hardware.com/2012/01/exclusive-28nm-geforce-specs.html
GPGPU performance will increase by a good bit if that TFLOPS rating is correct. That memory bandwidth is insane for a 384-bit memory bus, too; the memory must be incredibly fast to achieve it at 4.8GHz effective.
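For a sanity check, peak memory bandwidth is just bus width times effective data rate. A quick back-of-envelope in Python, using the rumoured figures (so take the result as seriously as the leak itself):

```python
# Peak theoretical bandwidth (GB/s) = bus width in bytes * effective data rate (GT/s)
def mem_bandwidth_gbs(bus_width_bits: int, data_rate_gtps: float) -> float:
    return bus_width_bits / 8 * data_rate_gtps

# Rumoured GK104: 384-bit bus at 4.8GT/s effective
print(mem_bandwidth_gbs(384, 4.8))  # 230.4 GB/s, vs the GTX 580's 192.4 GB/s
```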
Judging by the fact it's "GK104", it may well be the midrange part; I'd expect the higher end to be "GK110", much like the current 500 series is either GF114 or GF110, just actually done the right way around this time. Holy f'cking sh!t, I'm going to have to try and get hold of one of these! Looks like it could be near or above 570 performance (accounting for the "N/A" here meaning slower shader clocks)!
HD7970 stock memory clock is 5500MT/s, giving memory bandwidth of 264GB/s. The max CCC overdrive limit puts it up to 6300MT/s for an effective bandwidth of 302.4GB/s. Performance of this GK104 part could be interesting, especially if they no longer have separate shader clock domains (going by the N/A in the table above). 576 CUDA cores sounds great, but clocked at only 905MHz I wonder how fast it will actually be. Anyone with a GTX580 want to test by adjusting their shader clock down to 905MHz whilst leaving the uncore alone? Heaven would be a good and relatively quick test to run.
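Same back-of-envelope as above; a 384-bit bus moves 48 bytes per transfer:

```python
# HD7970: 48 bytes/transfer * effective data rate (GT/s)
print(48 * 5.5)  # 264.0 GB/s at the stock 5500MT/s
print(48 * 6.3)  # 302.4 GB/s at the 6300MT/s CCC overdrive cap
```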
The 560 Ti 448 already matches and sometimes beats the GTX 570 when OC'd, especially the MSI Twin Frozr III one: http://www.overclockersclub.com/reviews/msi_n560gtx448_pe/5.htm And now we are expecting 576 cores? This could be very interesting indeed.
Shader clock is linked at 2x the uncore speed; you can't change one without the other, so I doubt that "leak". Either they now have super CUDA cores, or the speed figure is wrong. Either way, under 600 cores at only 900MHz will not be as fast as a 580 (but then, this is also rumoured).
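Rough numbers on that, assuming Fermi-style FMA throughput (2 FLOPs per core per clock) and assuming the N/A means the shaders run 1:1 with the 905MHz core clock — both guesses, given how thin this leak is:

```python
# Peak single-precision rate (GFLOPS) = cores * clock (MHz) * 2 FLOPs / 1000
def peak_gflops(cores: int, shader_clock_mhz: float) -> float:
    return cores * shader_clock_mhz * 2 / 1000

gtx580 = peak_gflops(512, 1544)  # 1581.1 GFLOPS (hot-clocked shaders)
gk104 = peak_gflops(576, 905)    # 1042.6 GFLOPS if shaders run at core clock
print(f"{gk104 / gtx580:.0%}")   # ~66% of the 580's raw shader rate
```

So either the cores do more per clock, or that TFLOPS figure and this clock speed can't both be right.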
Dammit, I thought you could unlink them in Afterburner? Indeed it does seem odd to drop the "super cores", but I suppose we will just have to wait and see.
Only with older GPUs, because they had a clock translator chip for every line. This has been removed to reduce heat and, most importantly, to allow a smaller GPU die. Afterburner just communicates with the Nvidia drivers using NvAPI (the Nvidia SDK). The speed you set may not be the exact clock the GPU actually runs at; it will snap to the next supported value at or below it. Some software can detect the actual clock (AIDA64); some, like GPU-Z, can't, because it uses NvAPI to read back the speed that was set rather than the real clock.
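Conceptually it behaves something like this — the step values below are made up purely for illustration, not a real clock table:

```python
# Hypothetical table of clock steps the GPU actually supports (MHz)
SUPPORTED_STEPS_MHZ = [405, 608, 738, 772, 797, 822, 847]  # illustrative only

def applied_clock(requested_mhz: int) -> int:
    """The driver snaps the request to the nearest supported step at or
    below it, so the clock you set isn't always the clock you get."""
    eligible = [s for s in SUPPORTED_STEPS_MHZ if s <= requested_mhz]
    return max(eligible) if eligible else min(SUPPORTED_STEPS_MHZ)

print(applied_clock(800))  # -> 797: AIDA64 would show 797, GPU-Z the 800 you set
```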
Thanks for the explanation, Goodbytes. It has been a while since I ran a recent Nvidia card (GTX460 SLI), so I couldn't remember what was what. I know my GT 335M can have its clocks unlinked, so I just assumed you could on the newer cards. I remember your second point occurring back when Nvidia first split the shader domain from the uncore; a lot of people were confused when the actual registered speed differed from what they had set.
SAUSAGES... EVGA's Classified GTX 580 isn't even out yet, is it? 2x 8-pin and 1x 6-pin power connectors... eek!