
News AMD offers to settle Bulldozer core-count suit

Discussion in 'Article Discussion' started by bit-tech, 28 Aug 2019.

  1. bit-tech

    bit-tech Supreme Overlord Lover of bit-tech Administrator

    Joined:
    12 Mar 2001
    Posts:
    3,676
    Likes Received:
    138
     
  2. Spraduke

    Spraduke Lurker

    Joined:
    23 Sep 2009
    Posts:
    1,151
    Likes Received:
    464
    Hell, with those restrictions on claimants, I'm amazed they didn't just settle the first time round. The number of people who would claim must be about 100 (on a good day).
     
  3. Gareth Halfacree

    Gareth Halfacree WIIGII! Lover of bit-tech Administrator Super Moderator Moderator

    Joined:
    4 Dec 2007
    Posts:
    17,133
    Likes Received:
    6,728
    Doesn't matter: the way a class action suit works, the settlement is fixed whether one class member bothers to claim or a million do. The vast majority - often more than half - of the settlement goes to the lawyers; the remainder is divvied up between the class members who apply, usually with a per-claimant maximum but no minimum. In other words: if a million people claim, they'll get a couple of cents each. Whether a million people claim or nobody claims, AMD still has to pay the same amount.
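    A back-of-the-envelope sketch of that payout structure, with invented numbers (the settlement total, lawyer cut and per-claimant cap below are assumptions for illustration, not figures from the case):

    [code]
    # Hypothetical class-action payout math -- every figure is invented.
    settlement = 12_100_000   # total the defendant pays, fixed up front
    lawyer_cut = 0.55         # assumed: lawyers take more than half
    cap = 35.00               # assumed per-claimant maximum, no minimum

    pool = settlement * (1 - lawyer_cut)
    for claimants in (100, 10_000, 1_000_000):
        per_claimant = min(cap, pool / claimants)
        print(f"{claimants:>9,} claimants -> ${per_claimant:,.2f} each")
    [/code]

    However many claims come in, the settlement figure is unchanged; only the split varies.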
     
  4. wyx087

    wyx087 Homeworld 3 is happening!!

    Joined:
    15 Aug 2007
    Posts:
    11,996
    Likes Received:
    714
    What I said when I saw Bulldozer:
    (took some time finding this)
     
    Gareth Halfacree and damien c like this.
  5. Gareth Halfacree

    Gareth Halfacree WIIGII! Lover of bit-tech Administrator Super Moderator Moderator

    Joined:
    4 Dec 2007
    Posts:
    17,133
    Likes Received:
    6,728
    When you're right, you're right!

    What's really interesting about Bulldozer is that it exactly mirrors Intel's NetBurst misstep. As I said back when Bulldozer was taken out back and shot:
     
    wyx087 likes this.
  6. Wakka

    Wakka Yo, eat this, ya?

    Joined:
    23 Feb 2017
    Posts:
    2,117
    Likes Received:
    673
    To be fair, it's a misstep that made Intel a lot of money... AMD just didn't have the "influence" to make up for the shortcomings in performance.

    Still, at least it forced AMD to become a leaner, more aggressive and more focused company.
     
  7. jb0

    jb0 Minimodder

    Joined:
    8 Apr 2012
    Posts:
    555
    Likes Received:
    93
    I still hold the opposite stance, that it was eight processors coupled to four math coprocessors.
    Which does not at all mean I don't think mistakes were made. It was an interesting technical approach, but one which proved to be a costly mistake. And an interesting approach that doesn't work out isn't worth a whole lot.
     
  8. wyx087

    wyx087 Homeworld 3 is happening!!

    Joined:
    15 Aug 2007
    Posts:
    11,996
    Likes Received:
    714
    We may have different definitions of a core or processor.

    My definition comes from the age-old "fetch, decode, execute" cycle: anything that can perform this cycle without shared resources counts as a processor core for me.

    The bit-tech review shows each Bulldozer "module" (which AMD says contains two cores) has one fetch block and one decode block. So to me, each module looks like it should count as one core with more than one execution ALU.
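    A toy sketch of that counting rule (the structure is simplified from the review's block diagram, and the names and unit counts here are mine):

    [code]
    # Toy model of the counting rule above: a "core" must own its whole
    # fetch -> decode -> execute path. Unit counts are simplified.
    from dataclasses import dataclass

    @dataclass
    class Module:
        fetch_units: int = 1    # one shared front end per Bulldozer module
        decode_units: int = 1
        int_clusters: int = 2   # two integer execution clusters per module
        fpus: int = 1           # one shared FPU per module

    def cores_by_front_end(m: Module) -> int:
        # Unshared-front-end rule: the shared stage caps the count.
        return min(m.fetch_units, m.decode_units, m.int_clusters)

    def cores_by_execution(m: Module) -> int:
        # Count one core per integer cluster, as AMD's marketing did.
        return m.int_clusters

    m = Module()
    print(cores_by_front_end(m))   # 1
    print(cores_by_execution(m))   # 2
    [/code]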
     
  9. jb0

    jb0 Minimodder

    Joined:
    8 Apr 2012
    Posts:
    555
    Likes Received:
    93
    I think there's a fair argument to be made that # of ALUs = number of cores. I wouldn't score a device that runs multiple instructions concurrently on different pieces of hardware as a single core. I would definitely agree each module is not two PROCESSORS, but I think saying one module has two cores is fair.

    If the fetch and decode blocks aren't a bottleneck, there is no meaningful difference between one block serving two ALUs and one block per ALU. My admittedly imperfect understanding of modern processor internals is that the execution phase takes long enough on most instructions that one instruction decoder can service two ALUs at no performance cost*. It seems like if that weren't the case, hyperthreading wouldn't be viable either (hyperthreading is bottlenecked by the ALU, not the instruction decoder).
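    A rough throughput check of that claim (the widths and latencies are assumed round numbers, not Bulldozer's real ones):

    [code]
    # Which stage limits a module's throughput? Assumed round numbers only.
    def module_ipc(decode_width, exec_cycles, alus=2):
        decode_limit = decode_width        # instructions/cycle the front end feeds
        exec_limit = alus / exec_cycles    # instructions/cycle the ALUs retire
        return min(decode_limit, exec_limit)

    print(module_ipc(decode_width=4, exec_cycles=3))  # ~0.67: ALUs are the bottleneck
    print(module_ipc(decode_width=1, exec_cycles=3))  # ~0.67: sharing the decoder costs nothing
    print(module_ipc(decode_width=1, exec_cycles=1))  # 1.0, vs 2.0 unshared: now it starves the ALUs
    [/code]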


    This discussion, of course, highlights a big problem with the lawsuit: there's no technical definition of "core" in this context, and there exist sound arguments for multiple different definitions. Here, "core" is a marketing term.

    *Yes, Bulldozer and its descendants had performance issues, but my understanding is that those were down to other decisions, not the shared instruction decoder.
     
    wyx087 likes this.
  10. edzieba

    edzieba Virtual Realist

    Joined:
    14 Jan 2009
    Posts:
    3,909
    Likes Received:
    591
    Unless you pretend that CPUs only ever deal with integers and floats are some mysterious creature that is rarely encountered. I can't find a good source of what proportion of int to float operations you might see in a modern workload, but the "Gibson mix" (see Table 1) gives a total of ~12.2% of instructions being float operations and ~11.3% being int operations (with the rest being loads, stores, branching, etc). With Bulldozer's shared FPUs, that makes a good argument that a module with a single FPU is a single core.
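    Running those figures through Bulldozer's layout makes the point concrete (this assumes the Gibson mix applies at all, which the next post disputes):

    [code]
    # Per-execution-unit demand under the quoted "Gibson mix" figures,
    # assuming they apply to a Bulldozer module (2 int clusters, 1 FPU).
    float_share, int_share = 0.122, 0.113   # fractions of all instructions

    int_per_alu = int_share / 2    # two integer clusters split the int work
    fp_per_fpu = float_share / 1   # one FPU carries all the FP work

    print(f"int work per ALU: {int_per_alu:.1%}")   # ~5.7%
    print(f"FP work per FPU:  {fp_per_fpu:.1%}")    # ~12.2%, roughly double
    [/code]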
     
  11. jb0

    jb0 Minimodder

    Joined:
    8 Apr 2012
    Posts:
    555
    Likes Received:
    93
    There are a LOT of processors that don't have any floating-point power at all. Mostly older or embedded stuff, but if ye olde 6502, 386, and 68000 count as single-core processors, then a floating-point handler is not a core part of the, err, core.
    (I actually think AMD is settling here to avoid the chance that the court might create a legal definition of core broad enough to require even the lowest of microcontrollers to carry an FPU.)

    Also, the Gibson mix dates to 1959. I am reasonably sure that late-50s computing environments ran very different workloads from modern ones.
    I'm equally sure that mix has varied a lot over time, probably hitting a minimum in the late 80s and early 90s, when there was a huge market of systems that had only integer math hardware. As I understand things, even today you favour integer math if your code is performance-sensitive.
    AMD's claim at Bulldozer's launch was that most x86 workloads were 90% integer. Obviously they are a biased source, but they almost certainly had SOME information about workloads guiding their designers. (The same page adds more nuance about HOW the floating-point module is shared: it apparently functioned as two independent FPUs outside of the highest-precision mode.)
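    A sketch of that sharing nuance (the linear blend is my simplification, and the "wide_fraction" knob is made up for illustration, not a real measurement):

    [code]
    # The module's FP hardware reportedly acts as two independent FPUs
    # except in its highest-precision mode, where the halves fuse into one.
    # Treating the blend as linear is purely an illustrative assumption.
    def effective_fpus(wide_fraction):
        # wide_fraction: share of FP ops using the highest-precision mode
        return 2.0 * (1.0 - wide_fraction) + 1.0 * wide_fraction

    for frac in (0.0, 0.5, 1.0):
        print(f"{frac:.0%} wide ops -> {effective_fpus(frac):.1f} effective FPUs per module")
    [/code]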



    Your stance is, however, the stance the lawsuit presents: that # of floating-point units = number of cores.
    I think they would've had a better argument if they'd focused on the actual performance penalty of shared FPUs in a float-heavy workload instead of on the core count.

    But focusing on the performance penalty of shared FPUs would be harder to certify as a class action, since you'd need to show that people were genuinely losing performance by running float-heavy workloads, AND that AMD's insistence that most workloads of the time were integer-heavy was wrong, AND that AMD knew that claim was wrong.

    ...

    Or they could just focus on the abysmal single-threaded integer performance and claim that AMD misled people by implying that Bulldozer was any good at anything at all. Because I think we can ALL agree that was a case with merit. Bulldozer's problems had less to do with "not enough FPUs and instruction loaders" and more to do with "runs like dog poo under all conceivable loads". In that regard, it was unarguably more like a four-core processor than an eight-core processor.
     