
News AMD Carrizo next-generation APU details leak

Discussion in 'Article Discussion' started by Gareth Halfacree, 29 Jul 2013.

  1. schmidtbag

    schmidtbag New Member

    Joined:
    30 Jul 2010
    Posts:
    1,082
    Likes Received:
    10
    Actually, they did. AMD's Magny Cours Opterons were overall better than their Intel counterparts in nearly every way except single-threaded tasks. They were cheaper, faster clock-for-clock, more power efficient, and had QUAD CHANNEL memory. The current Bulldozer/Piledriver architecture was also designed around AMD's server chips. They're decent performers in servers, but for whatever reason companies would rather go for Intel - I guess because Intel is the easiest way out. Intel has proven multiple times throughout most of their history that they are not a cost-effective option in servers (when you include all CPU architectures). I feel x86 overall doesn't belong in most servers.

    Agreed, but with their current architecture, that isn't going to be a reality. This is probably why they're investing in ARM. I just hope they do the same thing with ARM as they did with x86-64.

    Better doesn't mean necessary, or cost effective. Intel's i3 and i5 series are pretty overpriced (except in laptops), and the average user hardly has a need for something much better than the best i3. The thing to consider too is most enthusiasts hardly *need* an i7. While there are undoubtedly good reasons to own one, I'm sure the majority of i7 users are just bad at managing their tasks, care about bragging rights, and don't have the patience to wait an extra few minutes to render or encode something.
     
  2. azazel1024

    azazel1024 New Member

    Joined:
    3 Jun 2010
    Posts:
    487
    Likes Received:
    10
    Okay... use Quick Sync. The HandBrake beta has support for it, and especially with Quick Sync enabled (for the bits it can handle), Intel SMOKES AMD, by a good 30-40%.

    If you looked at the latest Haswell benchmarks, Intel smokes ALL AMD APU options when it comes to GPGPU compute. It even beats the pants off some of the low/mid-range Nvidia discrete and mobile graphics cards in GPGPU compute. Take the GT 650M, which has a 45W TDP all on its own: the new Haswell Iris Pro 5200 GPU beats it in, I think, all or most GPGPU benchmarks, with a 47W TDP for the entire CPU/GPU. The GT 650M beats the 5200 in most games (with one or two tiny exceptions)...but often not by a huge amount (sometimes the 5200 gets spanked pretty badly)...then again, that is comparing a 45W discrete graphics option to a CPU/GPU combo with 47W between them.

    Physics only allows you to get away with so much.

    Compared to the 7000 series APUs, Intel's Haswell IGP is significantly better. Compared with the 8000 series APUs, the only place AMD can beat it is with DESKTOP APUs. The mobile APUs - where you are probably going to actually CARE about how powerful your IGP solution is - get spanked in most/all similar-TDP comparisons.

    As for Intel's TDP numbers...no, they are pretty darned accurate. The SDP stuff you can take with a massive shaker of salt, but the TDP is pretty much what it says. Intel tends to sit pretty far below it most of the time.

    Also, not sure if you checked lately, but most 1W ARM CPU figures...are for a SINGLE core, not the entire SoC; the whole SoC is generally a lot higher (even for phone SoCs). The SoC in the Nexus 10 can run as high as 8W for short periods of time, and averages 4W under load. The new Haswell 15W ULT chips peak at about 15W under heavy load, and that is generally with both the CPU and GPU loaded. Just loading the GPU or the CPU, with very light work on the other, isn't going to see 15W. Even with the CPU at full turbo, you are probably only going to see around 6-10W (Ivy Bridge ULV's 17W chips hit about 10W with both cores fully loaded at max turbo).

    If you bothered to read, a couple of places have done "mobile" workloads on the new Haswell 15W ULT ultrabooks and, normalized for battery capacity...the laptops BEAT things like the iPad 4 under some workloads and came up short by only a small amount in others. That means the 15W ULT part, and the platform as a whole, uses LESS power than something like the iPad 4. Sure, that isn't comparing heavy load to heavy load...but you know what, that ULT Haswell chip is also punching out probably 4-5x the compute power under heavy load, and even then probably only comes up short by maybe 50% of the battery life under heavy load, normalized for battery capacity.
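    The battery-capacity normalisation mentioned above is just simple division; here's a quick sketch of the idea (the runtime and watt-hour figures below are made-up illustrative numbers, not the actual data from those reviews):

    ```python
    # Normalise measured runtime by battery capacity so devices with
    # different battery sizes can be compared fairly. All numbers here
    # are illustrative, NOT the real figures from the reviews.
    def runtime_per_wh(runtime_hours: float, battery_wh: float) -> float:
        """Hours of runtime delivered per watt-hour of battery."""
        return runtime_hours / battery_wh

    ultrabook = runtime_per_wh(runtime_hours=9.0, battery_wh=50.0)   # 0.18 h/Wh
    tablet = runtime_per_wh(runtime_hours=10.0, battery_wh=42.5)     # ~0.235 h/Wh
    print(ultrabook, tablet)
    ```

    Whichever device delivers more hours per watt-hour is the one drawing less platform power for that workload, regardless of how big its battery happens to be.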

    I am very interested to see what the new Silvermont Atom chips are able to do.

    AMD has some interesting gear coming up in ultra-low-power mobile, but for standard mobile and regular low-power mobile, they do NOT have a compelling offering right now, nor in the near future. Their CPU performance just sucks, their power consumption sucks, and their GPU performance often is NOT as good as a similar-TDP Intel mobile chip's.

    On the low-end desktop, if you want a light-to-medium gaming rig and only need your machine to do light-to-moderate CPU tasks, AMD DOES have compelling offerings. They don't in mobile.
     
  3. azazel1024

    azazel1024 New Member

    Joined:
    3 Jun 2010
    Posts:
    487
    Likes Received:
    10
    I'd dispute the "majority" claim. I am sure some fall into all three; at best you might be looking at one of those. I don't have an i7 (though I do have an i5-3570), and I fall into the last category, though not the "don't have the patience" part. I do a lot of photo editing work, and having 3 young kids plus a full-time job means I don't often have a lot of spare time. So being able to save 30-50% of my time, compared to what a mid-grade AMD Trinity processor could manage in similar CPU-compute-dependent photo editing tasks, is pretty important. A baseline month's workload saves me roughly 45-60 minutes (based on roughly 3 photo editing "sessions", which is about what I average). My time hasn't stopped being money. Even if my spare time were only worth $10 an hour (and I make a crap load more than that)...10 months would pay back the difference between my i5-3570 and a mid-grade Trinity processor.
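    For anyone who wants to check that payback figure, the arithmetic looks like this (the ~$100 price gap between an i5-3570 and a mid-range Trinity part is my assumed round number for illustration, not a quoted price):

    ```python
    # Rough payback-period sketch for the faster CPU, using the figures above.
    # The $100 price difference is an assumed illustrative number.
    hours_saved_per_month = 1.0   # roughly 45-60 minutes saved per month
    value_per_hour = 10.0         # conservative $/hour valuation of spare time
    price_difference = 100.0      # assumed i5-3570 vs mid-range Trinity gap

    monthly_saving = hours_saved_per_month * value_per_hour
    payback_months = price_difference / monthly_saving
    print(f"Payback in about {payback_months:.0f} months")  # → about 10 months
    ```

    Value your time higher, or save more hours, and the payback period shrinks proportionally.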

    That doesn't include the time saved in a lot of other tasks, though I won't count encoding tasks, because I simply set that stuff up to run and walk away.

    For me and a lot of people - though certainly only a large minority - having a really fast CPU is still very, very important. Especially when it comes to mobile: I need a lightweight laptop, but I also need a fairly fast CPU in it for on-the-road photo editing, and the AMD option would likely mean either a good 40-60% hit in CPU performance, which would be unacceptably slow, or going up significantly in package size, which would also be unacceptable.
     
  4. Harlequin

    Harlequin Well-Known Member

    Joined:
    4 Jun 2004
    Posts:
    7,071
    Likes Received:
    179
