Discussion in 'Article Discussion' started by bit-tech, 22 Nov 2018.
I think this is why we have the saying "Many hands make light work".
I'll see your "many hands" and raise you a "too many cooks"
And how many ladders you have - if you have a few, you probably need someone whose job it is just to look after ladders and shout at people telling them to return them. After changing a bulb you have to test it; if all the bulbs are on the same light switch, you have to wait until no one is inserting a bulb before turning it on, or you'll blind/fry the fingers of the poor person who put their bulb in at the wrong moment. Until that's done, that person can't move on to the next bulb. So you'll need someone who likes health and safety to shout at everyone and organise a time to flip the switch.
If the lightbulbs are in one box, there is a *small* chance two people will put their hands in the box at the same time if they aren't thinking and grab the same bulb, causing a fight. Hence we need a ticketing system to control access to the box.
Spot the programmer who has spent far too long sorting out threaded code.
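For anyone who hasn't fought threaded code: the "ticketing system" for the box above is literally just a mutex. A minimal sketch in Python (the names `box`, `fitted` and `worker` are made up for illustration, not from any real codebase):

```python
import threading

box = list(range(10))        # ten bulbs waiting to be fitted
box_lock = threading.Lock()  # the "ticket" guarding the box
fitted = []
fitted_lock = threading.Lock()

def worker():
    while True:
        with box_lock:        # only one hand in the box at a time
            if not box:
                return        # box empty, this worker goes home
            bulb = box.pop()
        with fitted_lock:     # record the fitted bulb safely
            fitted.append(bulb)

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(fitted))  # → 10: every bulb fitted exactly once, no fights
```

Without `box_lock`, two workers really can grab the "same bulb" (a lost update) - which is exactly the overhead the analogy is poking at: more workers means more time queuing for the ticket.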
Seriously? You actually think more cores could hinder performance if used properly?
I didn't really want to go into full-blown waffle as it's housework day and I am really busy, but I can tell you absolutely and utterly first hand that both my Broadwell-E 14-core Xeon @ 3GHz on all cores and the 2.3GHz 10-core at my mother's absolutely slaughter the 4590S Haswell I am typing this from in something many people don't even consider, and there isn't really a benchmark for it (well, there is, but it's in big need of a modern update): multitasking. I.e., what happens to my rig if I have 4c/4t and start throwing everything at it - playing music, watching a muted video on YT, copying files, using Photoshop, etc. Even though my 10-core Xeon has an almost 100% deficit in clock speed, it absolutely kills the 4590S when it comes to doing lots of things at once.
It is also awesome for splitting up the CPU (literally) in VMware and running 5 operating systems or more at the same time - kinda like when Linus does those "4 gamers, one PC" videos. In the right hands more cores are simply better, end of argument. LOL, can't believe people are saying they would rather have one fast one. That's like having a house plastered, decorated, etc. - would you rather have one guy doing the job fast or 8 people all working at the same time?
Adding more cores at the expense of, say, moving from uniform to non-uniform memory access? Yes, that absolutely can and does impact performance.
Well I am not a rocket scientist, so excuse me if I am completely wrong but isn't that what Infinity Fabric is for?
Either way, in this instance I can genuinely say that if you are ragging on a machine hard these days, 4 cores (even for the most basic of tasks) is nowhere near as good as having bloody loads. I thought it was the memory at first (fresh install, I had 4GB) but nope - I stuck in 8GB of 2133MHz and it's still very easy to make it crawl.
"Me electrician. Me make light work"
You missed the "£65 per hour, plus VAT and I will arrive late and drag the job out as long as I can" part
Nah mate, I was just having a bit of proverb banter.
Edit: a bit of an IPC boost, an increase in clock speed, and the more cores the better.
As the 24- and 32-core Threadrippers demonstrate, Infinity Fabric is an abject failure for the purpose of providing memory access to cores that lack dedicated memory access - it doesn't come close to meeting the bandwidth and latency demands - so a hypothetical 16-core Ryzen 3xxx built on the same principles as the 24/32-core Threadripper would be a 99.99% pointless exercise.
However, AMD has reinvented the way glue works for Epyc 2 and we have no clue if that will filter down to 2019 Ryzen or not...
There are memory issues that need sorting out. More IPC, and (probably needed to achieve that) I'd say a faster and larger internal cache.
APUs: these are always 'just not enough GPU'. What I would like is an APU that offers a 1080p gaming experience at reduced settings (equivalent to a 1050 Ti?).
Right now we always seem to say 'this APU is great for light 720p gaming', which translates to "we're faster than the Intel iGPU, but in reality it's equally useless because the step change isn't sufficient to do anything better".
More cores don't scale well for gaming; as others have said, we seem to have been in a multi-core world for an age now and developers have yet to find a way to make multi-core work, so why bother? 6c/12t should be enough for the largest market outside of professional use.
Last point is features - the PCIe lanes and all the other jingles and jangles required today. They don't all necessarily have to be on the CPU or APU, but should be supported by the motherboards (the situation is not as bad as it once was, but it seems to take an age to get all the current features on the AMD platform).
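The "more cores don't scale for gaming" point can be put in numbers with Amdahl's law: if only part of a frame's work can run in parallel, extra cores hit diminishing returns fast. A quick sketch - the 60% parallel fraction here is an assumed figure for illustration, not a measurement from any real game:

```python
# Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n)
# where p is the fraction of the work that can run in parallel.
def speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

p = 0.6  # assumed parallel fraction, purely illustrative
for cores in (1, 2, 4, 6, 8, 16, 64):
    print(f"{cores:3d} cores -> {speedup(p, cores):.2f}x")
```

With 60% parallel work, even infinite cores cap out at 1/(1-0.6) = 2.5x, and the jump from 6 to 64 cores is tiny - which is roughly why 6c/12t covers most gaming workloads while heavily parallel professional work keeps scaling.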
It exists; the problem is you basically have to buy an expensive NUC to get it:
The main problem with traditional APUs, which prevents them from having any performance, is that the GPU has to sit around twiddling its thumbs while waiting to access the RAM - but the Intel/AMD collab chip actually comes with its own dedicated HBM2 for the GPU, which solves that problem, and it can easily compete with the 1050 Ti.
I think we need a 256-core Ryzen.
I don't think it does need more cores.
There's going to be an increase in IPC and, presumably with the move to 7nm, a bump in clock speeds.
For most people 4, 6 or 8 cores will be plenty. There's little need for more in the home environment.
Damn it, my first guess was that we'd discovered an expert light bulb changer. I always pick the wrong option.
I think that it's good to keep pushing the core count up a bit to give the platform legs in the long run. For example I don't buy a machine to last me a year until the next upgrade cycle comes around and these machines stick around for a decade or more so processor upgrade options which make a difference should be applauded.
My current machine which I built two years ago has 16 cores over two sockets and has 32GB of RAM. I've specifically picked out parts to last me 4 to 5 years as a main machine and a decade or so as server and they were already second hand when I picked them up by a number of years.
I also have a 5GHz i5-7600K as a backup gaming machine. It's okay if all I'm doing is gaming a bit or surfing, but if I have full AV installed, a browser open, a game running, Discord open and three or four other apps running in the background - which is more like my usual usage - then 4 cores, even at 5GHz, are not enough.
Hell, even just running Dota 2 works my cores pretty hard on the i5, but my twin Xeons have plenty of headroom.
The Intel/AMD APU is too expensive for what I feel a viable APU should be (i.e. a cost-effective 1080p gaming platform), so an expensive NUC is not a solution (as an example, and not a point of argument: consoles using x86 tech that's essentially Windows/DirectX/Vulkan compatible are half the price). But it's the start of the path to the right solution for PC users who want a decent casual gaming experience - it's just taken longer than I thought it ever would to get even to this point.
It's weird really; AMD would have the jump on Nvidia if it were able to bring a better APU to market and steal market share in a very competitive (but highest-volume) sector.
It's mostly a case of a "decent 1080p gaming platform" being composed of a certain minimum quality of CPU and GPU. If you combine the CPU and GPU dies onto a single package (e.g. the Intel Kaby Lake G series) then at best it will cost the same as using discrete components, but more likely will cost more (one die failing during packaging effectively takes out the other die at the same time, increasing binning costs, plus you lose the economy-of-scale advantage of discrete parts). If you combine the CPU and GPU onto the same die, then fabbing that larger die will cost more than the cost of fabbing the two dies individually added together. You also run into the more fundamental issue of either needing two different memory interfaces and two banks of memory (as with Kaby Lake G and its on-board HBM stack), or using a single memory interface that will either bottleneck the CPU (if you optimise for the bandwidth-sensitive but latency-tolerant GPU and use GDDR) or the GPU (if you optimise for the latency-sensitive CPU and use DDR).
It's possible to do this and come out ahead in terms of cost only if you can order a really, really large number of chips, and order everyone using them to code for the quirks of those specific chips. This is how consoles have managed to pull it off, but just taking a PS4 Pro or XB1 Scorpio APU and dumping it onto an ITX board will produce a really crappy gaming PC.
The Intel/AMD APU is really a discrete GPU and a separate CPU, just in one package - the GPU has its own memory, connects by the PCIe bus, etc.
A true APU has the GPU directly connected to the CPU and doesn't have separate memory. That obviously leads to the CPU's DDR4 being the bottleneck. This means you require a super memory-efficient GPU. That is where AMD's biggest problem lies - their GPU architecture isn't very memory efficient, hence having to use HBM for high-end cards, whereas their direct competitor can get away with slower GDDR memory.
I think this is a side effect of most of their APU dev work being for consoles, which are APUs connected to fast GDDR memory (not slow DDR4), so they can get away with being less memory efficient.
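A quick back-of-envelope calculation shows the size of the bandwidth gap being described (the 1050 Ti figure comes from its 128-bit, 7 Gbps GDDR5 spec; the DDR4 figures are the standard dual-channel numbers):

```python
# Peak memory bandwidth: channels x (bus width in bytes) x transfer rate.
# Illustrative comparison only - real sustained bandwidth is lower.
def ddr_bandwidth_gb_s(mt_per_s, channels=2, bus_bits=64):
    return channels * (bus_bits / 8) * mt_per_s * 1e6 / 1e9

ddr4_2133 = ddr_bandwidth_gb_s(2133)  # shared between CPU *and* iGPU
print(f"Dual-channel DDR4-2133: {ddr4_2133:.1f} GB/s")  # → ~34.1 GB/s
```

So an APU's GPU has to share roughly 34 GB/s with the CPU, versus the ~112 GB/s of dedicated GDDR5 a GTX 1050 Ti gets to itself - which is exactly why a DDR4-fed APU needs a far more bandwidth-efficient GPU architecture, or fast console-style GDDR, to compete.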
I kind of get what you're saying - the reasoning, at least.
I'm not worried about bottlenecking the memory, again because we're not playing to the high-performance market; 'just enough' CPU and GPU requires a balance on both sides.
I am also reminded how Iris Pro (Broadwell?) gave us the promise that APUs would soon be able to offer 1080p gaming (then Intel effectively scrapped it). I then thought the same of Ryzen G, only to find more disappointment (I really hope this is just because of a development budget cap and that Vega was a pretty poor architecture).
Regardless of where we are today, I think AMD and Intel both have the tech to do it (AMD could take inspiration from Threadripper to mount CPU and GPU onto a single package; Intel clearly has the tools/people should it wish to develop further).