Discussion in 'Article Discussion' started by Meanmotion, 12 Nov 2013.
Can Mantle do it? Or can even this do it on an A88X board?
What about onboard GDDR5?
Iris Pro isn't bigger than a GTX 660. The whole APU package is.
And yes, I do care about single-thread performance a lot actually, and you can also look at almost every rendering benchmark to get an idea why an i5-4670K is most likely always better than any quad-core from AMD.
I never spoke of the latest i7 there, but about the reasonable $200 parts from Intel.
I'm speaking generally, of course, and I always try to keep in mind what the absolute majority of desktop/notebook users are using their machines for. In that case we're talking almost exclusively about Windows-based PCs running nothing more than an office suite, a media player for HD content and maybe a few tools to edit a home video. PCs used for playing taxing games are already a very small minority of some 5-10%, and another 5-10% are actually running demanding software like 3D packages.
We people in these forums usually forget the fact that we're a negligible minority for the hardware vendors when it comes to volume sales.
Actually, it was the HD 7790 that's smaller than the GT3e:
160mm² for the 7790 core
174mm² for the GT3e (according to AnandTech)
Seems Intel have a long way to go to catch up with AMD in this department.
I wonder how much of a loss Intel are taking as a loss leader by actually selling these chips at all...
None at all, as they are only sold to Apple. Iris Pro was made for Apple, at Apple's request, after all. I'm not even certain the chip isn't Apple-exclusive.
The 4770K is faster than every AMD chip last I checked, even in multi-threaded workloads. If you actually need to make cash from your computer then it's an auto-buy, if not after the faster X79 chips.
If you're after an APU then AMD are a good buy.
It's nothing to do with the motherboard chipset. Onboard GDDR5 isn't going to happen outside special orders like the PS4. Maybe ultrabooks, because that's the only scenario where you'd have a fixed amount of memory, but GDDR is not exactly low-power, so that's a fail too.
And a 4770K is more expensive for the CPU alone than an entire AMD APU system.
And define `faster` - other than the odd exception, when you have a reasonable graphics card, the CPU stops being a limiting factor. Anand proved this. An 8350 is perfectly great for dual cards.
Eh? I scrape by quite nicely on my APU. How would an Intel chip - which would cost me several times as much to buy - earn me more money?
I think he meant: "Those working in 3D rendering". But in that case, I'd build a small render farm using inexpensive, small nodes rather than fat, expensive X79 or Core Extreme CPUs.
For anything else, an APU is pretty much all anyone needs; throw in RAM according to your needs and an SSD to make it snappier. I saw the birth of the personal computer (read: consumer computer) ... and I'll probably see its death (in its current form). I really think the era of fat CPUs, GPUs and big boxes will end in the short to mid term. SoC is the future; like it or not, that's where we're heading.
Demanding computing tasks will be offloaded to servers or farms. Home computing / entertainment will move to a single architecture. Console, tablet, smartphone, PC and even connected TV ... they all offer pretty much the same features (internet, gaming, YouTube, social networks and email). This is quite a big redundancy, and sooner or later this whole world will fuse into something unique. Maybe I'm mistaken, but this is how I see the upcoming form of "computing".
The only difference between console, PC, tablet, etc. ... lies in the maximum achievable performance. Once, computers all had a sound card, a network card, a video card, etc. ... now computers are heading towards motherboard (which houses network and sound) + CPU (with onboard GPU, memory controller, etc.) ... the step before embedding the whole chipset inside the CPU is pretty small, and consoles are getting closer and closer to computer architecture. Time will tell.
Those needing their computer for maths equations, 3D rendering, photo/video editing, code compiling. I don't know your own use of a computer, Gareth, to say whether an Intel chip would make you more money. If you're doing the news for bit-tech on a day-to-day basis then it would not.
What you said has already happened, guille, at least in the casual sector. The iPad has taken a lot of sales from the desktop PC sector. Even in my own household I'm the only one who still uses their PC on a regular basis; two laptops just collect dust. It's easier to just use an iPad for general browsing.
If AMD's APUs had launched before the whole tablet revolution in the casual sector, and had been sold to the two major brands in Lenovo and Dell, then you never know what would have happened.
Instead, the iPad feels quicker than most PCs without an SSD in the tasks it can do. I'll always say the biggest mistake companies made was not forcing SSDs into cheaper PCs, as they make the system feel so much quicker than any CPU upgrade would.
Correct, it wouldn't. That goes for a goodly chunk of the PCs around today, as well - and it's already been mentioned upthread how unlikely it is that markets where it makes a real and immediate difference to the bottom line are doing the rendering locally anyway. Just look at Nvidia's Grid: virtualised GPUs for offloading your rendering remotely, so you don't *need* a kick-ass workstation at your desk.
Video editing is a good example of the sort of workload where having a good wodge of local compute power is important, and where spending more now will save you money in the long run. Code compiling? Arguable. It's not like you can't be working on something else while your code compiles, and you spend far more time looking at the IDE with your CPU idling than actually burning code. Photo editing? That's RAM-dependent, not CPU - my APU copes quite admirably with me editing print-resolution images, and while certain intensive operations may complete marginally quicker on an Intel chip it would be many years before those savings add up to break-even on the cost difference.
You're dismissing the overwhelming majority of the PC market - those who *don't* do large amounts of video editing, local 3D rendering and the like. You're claiming that the edge-cases who do are the majority, which is so wrong-footed as to be ridiculous. It's not the case that "if you actually need to make cash from your computer then [Intel is] an auto buy," nor that only AMD fanboys buy AMD as you claimed. For the overwhelming majority of the market, an AMD chip will allow them to "make cash from [their] computer" exactly as quickly as an Intel chip - unless you're claiming an Intel chip will help me type faster - but, potentially, at a lower capital expenditure and total cost of ownership. For your edge cases, sure, but next time you fancy arguing the point have a quick look at the comparative sizes of the overall desktop market and the professional workstation market and you'll see just how small a percentage those edge cases make up.
TL;DR: Don't make sweeping generalisations that can be easily disproved.
This is a very, very marginal percentage of home computer usage. Even code compiling can be done on a "low-end" processor. Most compilation is incremental now; there's no need for a full rebuild each time you hit the F9 key (or whatever it is).
Maths equations and 3D rendering can happily be offloaded to a farm, and that's what I'd do if I were making a living from it. I wouldn't want to wait for the render to finish before I could continue using the computer.
Photo and video editing is more about RAM. A huge portion of filters are multi-threadable and thus offloadable. Video rendering/compression is also multi-threadable, but is best kept local due to the amount of data you'd have to transfer.
I'd love to see an FPGA working alongside an SoC. The FPGA could be reprogrammed on the fly and thus provide a "CPU" matching whatever you're processing. Need a video-compression CPU? Flash the FPGA. Need a specialised (de)cryption CPU? Flash the FPGA. And so on.
Edit: oh ... and you can remote-develop too. I know this is ultra-marginal and people will think I'm mentally ill ... but I've developed and compiled from my phone (Motorola Droid, with full landscape keyboard) using SSH to connect to my home desktop computer. vi was perfectly usable; only the keyboard prevented fast typing. Then compiling was done as usual with the make command.
Ha, remote compiling, why would you want to do that? My Nokia N900 can run make, gcc, g++, etc. on-device, so you can code and run your code right there; quite useful at times when you're away from a desktop. While I've never tried it myself, it can reportedly also cross-compile for x86 (although at that point I don't see why you'd compile on a desktop as well). Alternatively, just code in Python and be done with compiling.
Since most source code files are archived and versioned (SVN, Git, etc.) on a server somewhere nowadays, we could imagine remote-compiling them too. Pretty useful when all you have is a thin client, or when you're on the go or lack the processing power.
Executables are pretty lightweight most of the time; compared to the compile time, sending one back to you is nothing. And while your code is being remotely compiled, you can do something else with your computer.
Python is not what I need, sorry. Nice scripting language, but no use for what I do (C/C++ with no screen). Each object's compilation can be distributed between several cores / CPUs / machines; you then just gather all the .o files and link everything together (though maybe GCC already works this way).
If you're seriously into 3D rendering, then you go and buy a small server with as many cores as possible. Something like a G34 board with two octo-cores for just under €1000 (board + two CPUs only).
Anyway, for the professional at home who does a lot of video editing or DTP (primarily Adobe CS), a combination of an Intel CPU + HD 7750 is still the best option currently. Such a system isn't really expensive, and if you have a business you can write it off against taxes anyway.