News ATI to Bring Shader Model 3.0 & Multi-GPU Technology to the market

Discussion in 'Article Discussion' started by Tim S, 31 Oct 2004.

  1. Tim S

    Tim S OG

    Joined:
    8 Nov 2001
    Posts:
    18,882
    Likes Received:
    89
    Today, we learnt that ATI will be adopting technologies that its arch-rival, NVIDIA, has been promoting for a while. While ATI did not provide any concrete plans for the introduction of either Shader Model 3.0 or Multiple GPU technologies, they did discuss details of possible architectural features for their next-generation products.

    John Carvill, ATI's public relations manager for Integrated and Mobile Products, stated that Shader Model 3.0 would be supported "when it becomes readily available in games and applications". He then went on to say "... this feature is not readily used by the developer community and today's top titles still largely rely on Shader Model 2.0". We find this interesting, as there is an ever-increasing list of Shader Model 3.0-enabled titles, which includes top titles such as Far Cry and Half-Life 2 - the latter is rumoured to support the new SM3.0 shader paths. We have also learnt that many leading developers are moving towards Shader Model 3.0, simply because it requires less code and makes it easier to achieve the same effects that are possible with Shader Model 2.0.
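    To make the "less code" point concrete, here is a rough sketch (plain Python standing in for shader logic, so names and numbers are purely illustrative): without true dynamic flow control, an SM2.0-era engine typically pre-compiles one shader variant per light count, whereas an SM3.0 shader can simply loop and branch at run time.

```python
# Hypothetical sketch - NOT real shader code. It only illustrates why
# dynamic flow control (SM3.0) can mean less code than SM2.0.

def sm2_shader_variants(max_lights):
    """Without dynamic branching, an engine tends to pre-compile one
    shader variant per light count it might encounter."""
    return [f"shade_{n}_lights" for n in range(1, max_lights + 1)]

def sm3_shade(lights):
    """With dynamic flow control, a single shader loops over however
    many lights there are, skipping the ones that don't contribute."""
    total = 0.0
    for light in lights:              # one loop replaces N variants
        if light["intensity"] <= 0.0:
            continue                  # early-out on dark lights
        total += light["intensity"]
    return total

variants = sm2_shader_variants(8)
print(len(variants))                  # 8 separate shaders to maintain
print(sm3_shade([{"intensity": 0.5},
                 {"intensity": 0.0},
                 {"intensity": 0.25}]))   # one shader handles them all: 0.75
```

    The maintenance burden of the variant list is what developers are said to be escaping; the loop version is also roughly what the leaked "flow control in ps3.0" comment below is worrying about, performance-wise.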

    ATI have consistently downplayed Shader Model 3.0 since the release of the GeForce 6800 on April 14th. This downplaying became even more apparent when a presentation titled "Save the Nanosecond!" was released to the general public, complete with the speaker's notes. The notes, which were not intended for public viewing, stated: "Steer people away from flow control in ps3.0 because we expect it to hurt badly. [Also it’s the main extra feature on NV40 vs. R420 so let’s discourage people from using it until R5xx shows up with decent performance...]".

    Typically, IHVs (Independent Hardware Vendors) release new graphics architectures every 12 to 18 months, and we expect ATI's next-generation R520 graphics chip to emerge in the first half of 2005, based on a 90-nanometre fabrication process. Before that, ATI will deliver its code-named R480 GPU, a remake of the current R42x GPU on a 110-nanometre process.

    ATI's representative also made some passing comments in regard to Multiple GPU technologies, a capability that allows two identical graphics cards to work together in parallel to achieve higher performance than a single graphics card in graphically-intense 3D applications. Currently, this capability is available on specially designed ALX systems by Alienware, and also from NVIDIA with their own SLI technology.

    John Carvill indicated that, "Our [graphics] cards can already support dual-GPU configurations in such platforms as Alienware's ALX. We have a strategy in place for dual-GPU on the chipset side but it would be premature to discuss at this time".

    NVIDIA's SLI was estimated to bring a 70-80% performance increase with two GeForce 6800 Ultras when running on an NVIDIA nForce 4 SLI chipset that boasts special enhancements for Multiple GPU technologies. Anandtech released numbers on MSI's nForce 4 SLI motherboard which indicated that the performance increase could be over 100% in some cases, where the combined 512MB frame buffer allows the application to reside entirely in local graphics memory. NVIDIA's approach requires special circuitry to be incorporated into GPUs and, for extra speed gain, into the core logic. Alienware's Video Array technology does not require any special logic to be incorporated into graphics or system chips.
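    As a quick sanity check on those scaling figures, here is the arithmetic spelled out. The single-card frame rate is an assumption picked for round numbers, not a benchmark result:

```python
# Back-of-the-envelope check on the SLI scaling figures quoted above.
# The base frame rate is an illustrative assumption, not measured data.

def sli_fps(single_card_fps, scaling):
    """Estimated two-card frame rate, where scaling=0.7 means a 70%
    improvement over a single card."""
    return single_card_fps * (1.0 + scaling)

base = 50.0                        # assumed single-card frame rate
print(sli_fps(base, 0.70))         # 85.0 fps - low end of the 70-80% claim
print(sli_fps(base, 0.80))         # 90.0 fps - high end
# Over-100% ("super-linear") scaling is possible when the combined 512MB
# of video memory avoids the texture swapping a single card would suffer:
print(sli_fps(base, 1.10))         # 105.0 fps
```

    The super-linear case is the interesting one: the second card does not just add shading power, it also doubles the pool of local graphics memory.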

    At present, ATI's potential desktop multiple graphics card direction is unclear. However, the company has some logic in its GPUs to allow them to work in parallel, as the firm supplies its graphics processing units to Evans & Sutherland for high-end graphics systems with up to 256 chips.

    ATI has the ability to make chipsets for both AMD and Intel processors, while NVIDIA does not hold a licence to make core-logic products for Intel's chips.
     
  2. Fod

    Fod what is the cheesecake?

    Joined:
    26 Aug 2004
    Posts:
    5,802
    Likes Received:
    133
    didn't they have PS3.0 support in the r42x chip, but decided to axe it based on poor performance and the desire to develop other current features further?

    and, well, it's obvious they're gonna start supporting sm3.0. i mean, they're kinda shooting themselves in the foot if not.
     
  3. DeathAwaitsU

    DeathAwaitsU I'm Back :D

    Joined:
    27 Feb 2004
    Posts:
    2,104
    Likes Received:
    19
    Dual core cpu's, now dual core gpus, my rooms gonna turn into a ******* furnace :hehe: .

    Death
     
    Last edited: 1 Nov 2004
  4. Tim S

    Tim S OG

    Joined:
    8 Nov 2001
    Posts:
    18,882
    Likes Received:
    89
    It would be silly if they didn't introduce it, yes...

    The thing is that the game that they just spent multiple millions of dollars on is going to include Shader Model 3.0 at some point, whether it be from the start, or via a patch at a later date.

    They also played SM3.0 down to the ground, saying that it was useless. When clearly, it isn't as things currently stand. There are games arriving, and the list is ever-increasing. NVIDIA are going to have to fight the SM3.0 battle twice, but I think the fact that they already have experience with SM3.0 hardware in the mainstream bodes very well for their chances of retaining the high-end crown (according to the Mercury research figures) when next year's architectural refreshes arrive. :)

    We shall see.
     
  5. Firehed

    Firehed Why not? I own a domain to match.

    Joined:
    15 Feb 2004
    Posts:
    12,574
    Likes Received:
    16
    I think they mean SLI-style, not two on one board. And the DC CPU's don't run that much hotter IIRC.

    I was wondering when this would happen... I knew it wouldn't take that long.
     
    Last edited by a moderator: 1 Nov 2004
  6. Tulatin

    Tulatin The Froggy Poster

    Joined:
    16 Oct 2003
    Posts:
    3,161
    Likes Received:
    7
    hmm. do dual core cpus show as 2 cpus? If so, i can just see intel getting their mitts on this and releasing quadrathreading. Four virtual CPUS out of two cores on one chip :D
     
  7. play_boy_2000

    play_boy_2000 ^It was funny when I was 12

    Joined:
    25 Mar 2004
    Posts:
    1,543
    Likes Received:
    91
    intel is way ahead of you; might i introduce you to www.theregister.co.uk and www.theinquirer.net . I suggest incorporating those two sites into your daily reading.
     
  8. 731|\|37

    731|\|37 ESD Engineer in Training

    Joined:
    5 Sep 2004
    Posts:
    1,047
    Likes Received:
    0
    ok i started reading MR.GOO's botched thread and followed the link here so i didn't read the thread. but if ATi isn't going to use a "daughter board" or an external connector, are they running their solution through the PCIe bus? if so, wouldn't that chew up bandwidth and be much slower, and have other side effects, compared to NVidia's proprietary connection?
     
  9. Firehed

    Firehed Why not? I own a domain to match.

    Joined:
    15 Feb 2004
    Posts:
    12,574
    Likes Received:
    16
    Well they may end up having a daughterboard of some sort, or else they would need to design a proprietary northbridge chipset. PCIe has a ton of bandwidth, but having to go through the bus instead of card-to-card just makes more work for the chipset.
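    Some rough numbers on that point. Assuming a PCIe 1.0 x16 slot (~250 MB/s per lane, per direction) and a 32-bit 1600x1200 frame where each card renders half - these figures are assumptions for illustration, not anything ATI has confirmed:

```python
# Rough feasibility check: moving half a rendered frame over the PCIe
# bus for compositing. All figures are illustrative assumptions
# (PCIe 1.0 x16, 1600x1200, 32-bit colour, 60 fps target).

LANE_MBPS = 250                       # PCIe 1.0: ~250 MB/s per lane, per direction
LANES = 16
bus_mbps = LANE_MBPS * LANES          # ~4000 MB/s one way across an x16 slot

width, height, bytes_per_pixel = 1600, 1200, 4
half_frame_mb = width * height * bytes_per_pixel / 2 / 1e6   # one card's half

transfer_ms = half_frame_mb / bus_mbps * 1000
frame_budget_ms = 1000 / 60           # time available per frame at 60 fps

print(f"half frame: {half_frame_mb:.2f} MB")
print(f"bus transfer: {transfer_ms:.2f} ms of a {frame_budget_ms:.1f} ms budget")
```

    So the raw bandwidth is there (under 1 ms out of a ~16.7 ms frame budget), which supports the point: the cost is chipset load and latency rather than bandwidth itself.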

    Of course if they do a daughterboard solution they either need to make a revision 2 of all their current pcie cards or have them totally excluded, which I'm sure wouldn't thrill current owners.
     
  10. Tim S

    Tim S OG

    Joined:
    8 Nov 2001
    Posts:
    18,882
    Likes Received:
    89
    ATI have supported multi-GPU since R300... it's just a case of how they implement it in a commercially viable product. It won't arrive until R5xx at least though, which will definitely include SM3.0 support.

    They chose not to implement SM3.0 in R4xx because their methods are slightly different to NVIDIA's, and implementing SM3.0 throughout both the Pixel Shader and the Vertex Shader would likely have cost them too many transistors, taking the die to an expensive and unmanageable size at 130nm. A 90nm fabrication process should reduce die size by almost 50%, maybe even more, for the same transistor count. This will allow them to increase the transistor count and include SM3.0 support along with FP32 precision, which is another thing they will be considering for R5xx, IMHO.
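    The "almost 50%" figure checks out, since die area scales with the square of the feature size (first-order scaling only - real shrinks deviate from this):

```python
# First-order check on the die-shrink arithmetic above: area scales
# roughly with the square of the feature size, so 130nm -> 90nm
# should nearly halve the die for the same transistor count.

old_nm, new_nm = 130, 90
area_ratio = (new_nm / old_nm) ** 2
reduction_pct = (1 - area_ratio) * 100

print(f"same design at 90nm occupies {area_ratio:.0%} of its 130nm area")
print(f"i.e. roughly a {reduction_pct:.0f}% reduction in die size")
```

    That works out to about a 52% reduction, so "almost 50%, maybe even more" is exactly what the square-law estimate predicts.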
     
  11. Zephyr

    Zephyr Go V-Boy, Go!

    Joined:
    1 Oct 2004
    Posts:
    2,024
    Likes Received:
    1
    this kinda relates, but my question is how long is it until we can no longer produce the needed performance on one gpu/one cpu for what we need, and dual cpu/dual gpu becomes standard. that AMD/Intel and ATi/nVidia start making products that only work in conjunction with another identical model. that mobo manufacturers make their products solely for the idea of 2 processors and 2 graphics cards. it would be interesting if the first distinction between an economy PC for word processing and internet browsing and a kick-ass gaming PC or digital rendering PC were the number of gpus/cpus. that would be kinda cool :D :rock:
     