Hi everyone, my SO is working on a PhD project that requires them to process vast amounts of data. This involves complex fractal calculations and needs some serious computing power, both CPU and GPU. What is a good setup that I can put together? I would like to put in best-of-breed processors (4x octa-core, or 16-core?) and perhaps 5-6 GPUs that the load would get distributed across. The assumption is that the software they use will tap into all available computing resources. Can someone suggest an ideal setup, and what specifications I could go for on CPUs and GPUs, if cost is not a constraint? Do people actually sell custom PCs like this online, or in London? Appreciate your help and expertise, folks. Thanks.
They have already launched and can be bought, I thought? The 24c/48t chip is $2,060 plus tax over here. I can't find the 32c/64t model anywhere, though.
Single or double precision for your GPU/GPGPU work? If the former, then a 1080 Ti - it's still better than the Vega 64, right? - or, if not, a Vega 64. If the latter, then a Tesla or FirePro, and bucketloads of ECC RAM too I'd wager... and if ECC RAM, then a Xeon or Epyc chip rather than a Core/Ryzen chip. Oh, and how vast (and secret) are these vast amounts of data? Because another option would be to farm the work out to someone else, like Google - other compute hosting services are available - rather than having the hassle of procuring, assembling, maintaining, and decommissioning the equipment yourself.
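To make the single-vs-double question concrete: deep fractal iteration is exactly the kind of workload where rounding error accumulates, so FP32 and FP64 can give different answers at the same point. Here's a rough pure-Python sketch - the point `c`, the iteration cap, and the `to_f32` trick are all my own illustration (it only simulates single precision by rounding after each step, not true FP32 arithmetic throughout):

```python
import struct

def to_f32(x):
    """Round a Python float (binary64) to binary32 and back."""
    return struct.unpack('f', struct.pack('f', x))[0]

def mandel_iters(cr, ci, single=False, max_iter=500):
    """Count iterations before z escapes |z| > 2, optionally
    rounding z to single precision after every step."""
    zr = zi = 0.0
    for i in range(max_iter):
        zr, zi = zr * zr - zi * zi + cr, 2 * zr * zi + ci
        if single:
            zr, zi = to_f32(zr), to_f32(zi)
        if zr * zr + zi * zi > 4.0:
            return i
    return max_iter

# An arbitrary point near the set's boundary; the two precisions
# may well disagree on the escape count here.
cr, ci = -0.7436438870371587, 0.1318259042053119
print("simulated FP32:", mandel_iters(cr, ci, single=True))
print("FP64:", mandel_iters(cr, ci, single=False))
```

If their software really does need FP64 everywhere, that's what rules out consumer GeForce/Radeon cards - they cripple double-precision throughput compared to Tesla/FirePro.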
I would say double precision. I have heard a lot about MSI GPUs - are those an option / better? DDR4 ECC RAM was in the works. The problem with cloud services is that most of them just offer CPU power, and there is a lack of GPU computing power proportional to the amount of CPU available. Any estimates on budget? Are we thinking under £5k for a rig of this nature?
If you're looking for something really custom and sold online, you'd be better off contacting someone like Scan or OverclockersUK. They'll be able to assist you without too much effort. A rig of this nature can easily run you into the 20 or 30 grand region. You need to have a budget in your mind and then you can work from there.
You're probably looking at stuff like this or this. [there are others that sell that kind of thing, but Boxx and Armari are the names that immediately come to mind] Tbh, as @TheMadDutchDude points out, you can easily spend that much just on GPUs or a CPU.
That's why I specifically mentioned Google - there are probably loads of others, but I'd read/watched something that mentioned them in particular. You basically rent one of these for 60p per hour. Have a workload that'll take 100 of those 100 hours to complete? That works out at £6,000, or about the same as a single K80 with a 10-core Xeon, a motherboard and 128GB of RAM - case, HDDs/SSDs and PSU not included - and that one system will take 10,000 hours to chew through the same workload... Hah! RedFlames, just noticed I'm one day older than you... and like 5k-odd posts younger.
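For what it's worth, the back-of-the-envelope sums above check out. Spelled out (assuming the quoted 60p/hour rate and that one rented instance matches the owned box's throughput):

```python
rate_per_hour = 0.60   # £ per rented GPU instance per hour (quoted figure)
instances = 100        # rent 100 instances in parallel
hours_each = 100       # each one works for 100 hours

cloud_cost = rate_per_hour * instances * hours_each
print(f"cloud: £{cloud_cost:,.0f}")          # prints cloud: £6,000

# The same total work on one owned machine runs serially:
serial_hours = instances * hours_each
print(f"owned box: {serial_hours:,} hours")  # 10,000 hours, roughly
                                             # 14 months of 24/7 running
```

The cash outlay is about the same either way; the difference is whether you wait four days or over a year for the answer.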
...not to mention it's someone else's leccy bill. On the cloud front, in addition to Google, MS's Azure offers VMs with up to 4x K80s or M60s; not sure what Amazon offer.
If his/her university is offering PhDs like this, can't you use their compute cluster? Alternatively, Amazon Web Services or something similar would surely turn out to be cheaper (and much more scalable!) than building your own monster machine.
I only just got here, but aren't maths programs and their workloads always optimised for specific architectures? I didn't think there was such a thing as a workload so generalised that the hardware decision comes down to "bigger is better". So is the workload here specifically optimised for GPUs? I can imagine fractal work going either way. And if it is loads of heavy, precise visual rendering, is it driver-optimised for specific families of cards more than others? (They usually are. Lots of CAD programs say "do not use AMD/Nvidia family X, we haven't bothered to make them work efficiently. Our program is designed for family Y".)

I'm not second-guessing you, just trying to ascertain what the architecture requirements are and how it's been determined that they're a mix of heavy CPU and heavy GPU requirements. This seems like the first and most important question to me.

edit - I know this kinda addresses the question but it also kinda doesn't answer it. Apologies if a more precise answer would involve too much confidential info and you can't give it - there are just such massive amounts of money to be saved (or wasted) by pursuing the wrong hardware emphasis.
Not only that, but if they say no, they should also be able to provide written evidence of why it's not possible and what exactly he/she would actually need.