
Build Advice Steam...Server?

Discussion in 'Hardware' started by Wicked_Sludge, 26 Feb 2019.

  1. Colonel Sanders

    Colonel Sanders Minimodder

    Joined:
    25 Jun 2002
    Posts:
    1,210
    Likes Received:
    4
    While this is possible and I think you have some good points, I think the advantages would ultimately be offset by the disadvantage of having to manage a host operating system as well as the virtual machines. As for power usage, in theory, yes, running 3 GPUs and one motherboard would use less power and thus be more efficient than 3 GPUs and 3 motherboards. Hypothetically, say your GPUs use 100W and your motherboard(s) use 5W: you would have either 3x100W + 3x5W or 3x100W + 1x5W, for a total saving of 10W. Sure, you're saving power, but not much, and you're adding more complexity to the system. In addition, you would be limited to one powerful CPU, which might be powerful enough to handle all 3 users, or it might prove to be a bottleneck. Hypothetically, if you found that the CPU was a bottleneck for one user, you would have to buy a more expensive CPU capable of supporting multiple users, since I assume such a system would use a server-class Xeon/Opteron/Threadripper CPU.
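
    A quick back-of-the-envelope sketch of that hypothetical, using the illustrative 100W/5W figures above (they are not measured numbers):

        # Hypothetical figures from the post: 100W per GPU, 5W per motherboard.
        GPU_W, BOARD_W = 100, 5

        three_boxes = 3 * GPU_W + 3 * BOARD_W   # three separate systems
        one_box     = 3 * GPU_W + 1 * BOARD_W   # one shared host, three GPUs

        print(three_boxes - one_box)            # 10 (watts saved on these assumptions)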

    I think this is a fun idea, and I think it is definitely very doable, but I do not believe it is the "make life easier" option. It would be a fun headache - I would love to try it with some of my old hardware - but I couldn't realistically recommend it as a smarter option than 3 separate systems.

    I think the other thing to pay attention to in Linus' crazy projects is that he uses actual dedicated GPUs hard-wired to each display (DVI, DisplayPort, etc.) instead of "streaming" over Ethernet or, even worse, WiFi. I think WiFi could probably handle a stream or two IF you have a fast enough WiFi router ($), good adapters ($$), and short enough range with no obstructions ($~priceless~$). If you already have Ethernet run, does it support gigabit or higher? Can your router actually serve a fast enough link to 3 PCs?
     
  2. Gareth Halfacree

    Gareth Halfacree WIIGII! Lover of bit-tech Administrator Super Moderator Moderator

    Joined:
    4 Dec 2007
    Posts:
    17,084
    Likes Received:
    6,635
    Those numbers are way off. Using this power supply calculator I found on DuckDuckGo 'cos it's easier than working it out by hand, a system with a Core i7-9700K, 2x8GB DDR4, GeForce GTX 1070 Ti GPU, a single SSD, and a pair of case fans would draw 395W - so if you need three of those, that's 1,185W.

    Using the same CPU but upping to three graphics cards and 64GB of RAM ('cos why not?) gets us a load wattage of 778W. That's a difference of 407W, not 10W. Assuming, I dunno, four hours a day usage across all 'three' systems and using the UK average electricity price of 14.4p per kilowatt-hour gives us an annual saving of £256.70. Assuming a five-year lifespan before wanting to upgrade, that's £1,284 in electricity savings alone - more, given that the cost of electricity will rise in the meantime.
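
    For anyone checking the arithmetic, the sketch below reproduces those figures, reading "four hours a day usage across all 'three' systems" as four hours per user (so twelve watt-saving hours a day) - that reading is an assumption on my part; the wattages come straight from the calculator result above:

        # Wattages and price from the post; the 4 h/user/day reading is an assumption.
        separate_w, combined_w = 1185, 778              # three boxes vs one shared host
        saving_kw = (separate_w - combined_w) / 1000    # 0.407 kW
        hours_day = 4 * 3                               # 4 h per user, three users
        price_kwh = 0.144                               # UK average, GBP per kWh

        annual = saving_kw * hours_day * 365 * price_kwh
        print(round(annual, 2), round(annual * 5))      # ~256.7 and ~1284 over five years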

    I'm not saying it's a good idea; I'm just saying it's not a bad idea for the reason you've put forward there. Energy saving by shoving compute power into a single central point is most definitely a thing: why do you think cloud computing took off?
     
  3. Colonel Sanders

    Colonel Sanders Minimodder

    Joined:
    25 Jun 2002
    Posts:
    1,210
    Likes Received:
    4
    I think you definitely did a more scientific test than me and you might very well be right, but I will still disagree with you for a couple of reasons. You are correct that data centers care a lot more about power savings - Facebook cares enough to go to extremes like removing USB controllers, since they are not used on their servers and might burn 5 cents' worth of electricity over the course of a year. Multiply that by a hundred or a thousand servers and you might save enough to buy a Big Mac or something, but that doesn't mean removing the USB controller is a good idea for a home user. My point is that being more power efficient is great for a server, but as a home user I am totally fine with wasting a little power if it makes my life easier. (I'm American, we love wasting electricity on frivolous things.)

    First and foremost, I'm stubborn. But setting that aside, I think a power supply calculator is likely to overestimate the actual power draw: when outervision.com wants to recommend a power supply, I think they will aire on the side of caution and overestimate.

    With that in mind, I think the two biggest power draws are going to be the GPUs and the CPUs. A Steam server will still have 3 GPUs. It will have 1 CPU instead of 3, but then it comes down to a question of the workload. I say this because a modern CPU (be it Intel or AMD) will dial the clock speed down to try to minimize power - even my Ryzen 2600 idles at a slow, low-wattage speed by default. So if a modern CPU dials the power up based on the workload, is, say, 1 CPU running Doom plus 2 CPUs running Candy Crush significantly different, power-wise, from 1 big CPU running 1 instance of Doom and 2 instances of Candy Crush? I can't say for sure, and I doubt it could be reliably tested. I am certain 1 CPU would be a little more efficient, but I doubt it would be a big difference.

    Of course, you would also have 1 motherboard vs 3, and 1 SSD vs 3 SSDs, but similarly I think the total power draw of the extra devices (yes, RAM, MB, and SSD) is likely to be pretty small. You found the magic number to be about 400W worth of difference, or I believe about 200W for each extra motherboard, RAM, and SSD? I think that number is overestimated. You may very well be right, and I would love to try to devise a test with a kill-a-watt meter to prove that I Am Right On The Internet - sorry, to test your theory - since I'm sorta curious about this too. I just don't know if I could test this well enough, even with a kill-a-watt meter, to get verifiable results, and I'm not certain my interest and stubbornness are enough to convince me to invest much more time into it.

    Simply put, I personally don't feel the power savings alone would justify the headache of trying to manage a host OS, the VMs, all the clients, and the network links. I think 3 separate PCs would be easier to maintain. In addition, another big advantage of 3 separate PCs is simply that they don't rely on each other, whereas with the server setup a single failure would bring down all 3 systems.
     
  4. Gareth Halfacree

    Gareth Halfacree WIIGII! Lover of bit-tech Administrator Super Moderator Moderator

    Joined:
    4 Dec 2007
    Posts:
    17,084
    Likes Received:
    6,635
    Err, not aire.
    You're forgetting a CPU with, what, a near-100W TDP? A CPU boosted to the max running a single- or dual-threaded task should, in an ideal world, consume the same or similar power to the same CPU running its maximum threads - that's the whole point of boost clocks in the first place. In other words, and assuming Windows is as aggressive as Linux in clocking up when there's even a light load on the CPU, you're saving 200W in CPU power draw alone.

    I'm happy with you being unconvinced, though - but it's easy enough to test the core figures yourself on a single system and extrapolate. Hell, suggest it to @Dogbert666 or Ben over at CPC and if either of 'em has the budget and enough spares in the bits box I'll build one up for real and do the testing myself - and I'll bet you'll find my figures a lot closer to reality then yours.
     
    Last edited: 9 Apr 2019
  5. meandmymouth

    meandmymouth Multimodder

    Joined:
    15 Sep 2009
    Posts:
    4,261
    Likes Received:
    311
    Do it.
     
  6. edzieba

    edzieba Virtual Realist

    Joined:
    14 Jan 2009
    Posts:
    3,909
    Likes Received:
    591
    A shared host will be more power efficient, but by how much is going to depend heavily on workload (and for non-enterprise use, idle time is going to be the biggest factor). Online power supply calculators (and the laughable 'required power' estimates from component makers who overspec to cater for the most bargain basement no-name PSU) are effectively garbage and have had no value since CPUs and GPUs stopped running 24/7 at a single clock speed. They will vastly overestimate actual power draw in every scenario.

    One foible of multi-user systems is that if you're swapping 3x systems to 1x with the same CPU, and have simultaneous usage, then tasks will (for all intents and purposes) take 3x as long to complete and therefore use 3x the power. On the flipside if there is little simultaneous use (i.e. everyone takes their turn) your power savings are that of 2x idling CPUs rather than that of 2x loaded CPUs, which for modern systems can be 10W or less for a CPU in a higher C-state + a motherboard + a sleeping GPU.
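
    A toy illustration of that trade-off, with made-up wattages (90W loaded, 10W idle per system) purely to show the shape of the argument:

        # Illustrative numbers only: 90 W under load, 10 W at idle, per system.
        LOAD_W, IDLE_W, HOURS = 90, 10, 1

        # Simultaneous use: the shared CPU takes ~3x as long on the combined work,
        # so the energy spent on the work itself is roughly a wash.
        three_boxes_sim = 3 * LOAD_W * HOURS            # 270 Wh
        one_box_sim     = 1 * LOAD_W * (3 * HOURS)      # 270 Wh

        # Turn-taking: the shared host only saves what two idling boxes would burn.
        three_boxes_turns = (LOAD_W + 2 * IDLE_W) * HOURS   # 110 Wh
        one_box_turns     = LOAD_W * HOURS                  # 90 Wh

        print(three_boxes_sim - one_box_sim,            # 0 Wh saved
              three_boxes_turns - one_box_turns)        # 20 Wh saved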
     
  7. GeorgeStorm

    GeorgeStorm Aggressive PC Builder

    Joined:
    16 Dec 2008
    Posts:
    7,000
    Likes Received:
    548
    A potential issue is whether the GPUs will clock up/down correctly when being used for less intensive tasks (I've no idea).
    With three PCs you can have two off and one on if only one is being used; with one central PC it's always on if anyone wants to use it, and even with the GPUs clocking down they'll be consuming more power than if they weren't on at all.

    Edit: timing hah
     
  8. Gareth Halfacree

    Gareth Halfacree WIIGII! Lover of bit-tech Administrator Super Moderator Moderator

    Joined:
    4 Dec 2007
    Posts:
    17,084
    Likes Received:
    6,635
    I can only speak for the GeForce RTX 2080, 'cos my last dGPU was a GeForce 9800, but it does indeed clock down when I'm sat doing Firefoxy things - draws about 4W at the desktop, according to GreenWithEnvy.

    [Screenshot: GreenWithEnvy showing the GPU's power draw at the desktop]
     
  9. GeorgeStorm

    GeorgeStorm Aggressive PC Builder

    Joined:
    16 Dec 2008
    Posts:
    7,000
    Likes Received:
    548
    I mean when using them via a VM etc. - I always thought you were limited there (but I haven't touched a VM in years now).
     
  10. Gareth Halfacree

    Gareth Halfacree WIIGII! Lover of bit-tech Administrator Super Moderator Moderator

    Joined:
    4 Dec 2007
    Posts:
    17,084
    Likes Received:
    6,635
    Shouldn't make a scrap of difference: when you enable PCIe passthrough for a device, you're effectively disconnecting it from the host and connecting it to the guest which treats it as if it were connected at a bare-metal level. If it clocks down on the host, it'll clock down on the guest (assuming here that the host and guest are operating systems with drivers offering the same functionality, of course.)
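
    For anyone wanting to check how practical passthrough would be on their own box, a minimal sketch (assuming a Linux host with the IOMMU enabled; the group layout is what decides which devices can be handed to a guest on their own):

        # List IOMMU groups on a Linux host - devices sharing a group generally
        # have to be passed through to a guest together.
        import os

        base = "/sys/kernel/iommu_groups"
        if not os.path.isdir(base):
            raise SystemExit("No IOMMU groups found - is the IOMMU enabled?")

        for group in sorted(os.listdir(base), key=int):
            devices = os.listdir(os.path.join(base, group, "devices"))
            print(f"group {group}: {', '.join(devices)}")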
     
  11. Colonel Sanders

    Colonel Sanders Minimodder

    Joined:
    25 Jun 2002
    Posts:
    1,210
    Likes Received:
    4
    LOL, thanks - I genuinely didn't know that, since I almost never type that word; I think that's the first time this year I've used that phrase. Also. . .
    Than. :p Grammar Nazi over now.
    I'm not trying to be dense, but I'm not sure I follow your argument. Say I have a dual-core 95W CPU: when it's idle it's going to clock back to about 800MHz or less, consuming minimal wattage. It will boost up to its advertised 3GHz speed as soon as I start a task, consuming 95W for a little while before going back to sipping only 5W or whatever its idle power is.

    Now if I start a single-threaded task, will a dual-core CPU still consume 95W while that task is running? If the CPU is designed well, I would assume that either the single-threaded power usage on a dual-core (and dual-thread) CPU would be about half of its max TDP, or it might boost beyond stock speed to complete the task faster, resulting in a higher power draw for a shorter amount of time.

    In my opinion, power draw on modern CPUs should be more a function of load than purely of the total number of physical CPUs. If I have a room with three 95W-TDP CPUs all sitting idle, they should just be sipping a minimal amount of power - or better, they should be suspended in an S3/S4 sleep state. As soon as I start playing games on each CPU, they should consume up to 95W per CPU until the game is over - probably less than 95W, since I think a lot of games are GPU- rather than CPU-bound. That raises the next question: can a single CPU handle 3 gamers at once? Hypothetically a multi-core CPU should be able to handle 3 gamers, but then will a multi-core CPU have a higher TDP? Or say you compare a 125W-TDP quad-core CPU in a server to two separate 65W or 95W dual-core CPUs. In my opinion, a 125W quad-core CPU should perform about as fast as two 65W dual-core CPUs and just a little slower than two 95W dual-core CPUs - assuming, of course, that all the CPUs are the same architecture, generation, manufacturing process, etc. Assuming two CPUs perform at the same efficiency, the same work per watt, I think TDP might be a good half-assed way to estimate a CPU's total speed.
    If Dogbert can weigh in, I would love to see his opinion on the situation.
     
  12. Gareth Halfacree

    Gareth Halfacree WIIGII! Lover of bit-tech Administrator Super Moderator Moderator

    Joined:
    4 Dec 2007
    Posts:
    17,084
    Likes Received:
    6,635
    Hah! That's what I get for writing it on my phone halfway through breakfast, that!
    You're closer to correct with the latter than the former. Basically, Turbo Boost exists 'cos a chip has a certain amount of headroom to run all its cores at X speed - but if you're only running 4/2/1 cores (delete as appropriate) it can use that headroom to run the cores you're using at Y speed, where Y is greater than X. Doing so, of course, consumes more power than running said cores at X speed. In an ideal world where a perfectly spherical processor is constrained only by its TDP, a processor which draws 95W running all eight of its cores at 3.2GHz would also draw 95W running one of its cores at 5GHz (numbers pulled from thin air for illustrative purposes.) Naturally, in the real world, that's not the case - but an eight-core processor running 100% load on one of its cores with Turbo Boost enabled will certainly draw more than 1/8th the power of the same processor running 100% load on all eight of its cores.
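
    A toy model of that 'perfectly spherical processor', using a cubic power-versus-frequency assumption that is only there to make the headroom idea concrete (none of these numbers come from a real part):

        # Toy model: per-core power scales roughly with frequency cubed, and the
        # package is capped at its TDP. All figures are illustrative only.
        TDP_W, ALL_CORE_GHZ, CORES = 95.0, 3.2, 8

        per_core_w = TDP_W / CORES                  # ~11.9 W per core at 3.2 GHz
        k = per_core_w / ALL_CORE_GHZ ** 3          # fit the cubic model

        def boost_ghz(active_cores):
            """Highest clock the active cores can hold while staying inside TDP."""
            budget_per_core = TDP_W / active_cores
            return (budget_per_core / k) ** (1 / 3)

        for n in (8, 4, 2, 1):
            print(n, round(boost_ghz(n), 2))        # 8 -> 3.2, 1 -> 6.4 in this model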

    The 'complete its task faster' bit, though, doesn't apply here: gaming is a sustained, not bursty, workload. A processor that completes, say, an image conversion task faster might very well draw less power total for the duration of said task than a processor which takes longer even if the slower processor draws less power at peak (which is why performance-per-watt is a metric), but that only counts for tasks which have a defined end point; when you're gaming, your processor is under load the entire time you're gaming - and even if your processor is twice as fast, you're not going to get to the end of E1M9 any quicker.
    Why assume? Plenty of benchmarks out there measuring power draw under various loads. The linked one shows an Intel Core i7-7700K and a Core i7-6700K - both with a 91W TDP - and measures CPU-only power consumption as 16.6/17.2W idle, 77.3/76.8W gaming, 98.2/96.5W FPU-heavy benchmark load, and 137.4/128.8W when running the Intel Power Thermal Utility to stress things out good and proper. This, then, tells us that a CPU running a game can be expected to draw around 85 percent of its stated TDP. For the 95W from my earlier example, that means it should be drawing around 81W. And, remember, that's sustained; the game doesn't finish earlier the faster your processor is.
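
    The arithmetic behind that 85 percent figure, using the benchmark numbers quoted above:

        # Gaming draw from the linked benchmark, as a fraction of the 91W TDP parts.
        gaming_w, tdp_w = (77.3 + 76.8) / 2, 91
        fraction = round(gaming_w / tdp_w, 2)       # ~0.85 of TDP while gaming
        print(fraction, round(fraction * 95))       # 0.85, ~81 W for a 95 W part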
    I'm not sure when you looked at the CPU market last, but 125W quad-cores and 65/95W dual-cores aren't really a thing outside very specialised markets: I've just upgraded my desktop, and I have a 95W Ryzen 7 which has eight cores and 16 simultaneous threads. You can get 4C8T chips in a 15W power envelope, and AMD's entry-level Ryzen 3 1200 PIB has four cores in a 65W TDP. The Core i7-9700K I used for my earlier example, meanwhile, is an eight-core 95W part. Give two CPU cores to each of the virtual machines and leave two for background tasks, job's a good 'un - at least until games get better at making full use of multiple CPU cores.
    Nah, it's a terrible way of estimating speed. Look at the Ryzen/Athlon Pros AMD just announced: they range from a two-core four-thread 2.4GHz/3.3GHz part to a four-core eight-thread 2.3GHz/4GHz part - but they're all a 15W TDP.
     
  13. edzieba

    edzieba Virtual Realist

    Joined:
    14 Jan 2009
    Posts:
    3,909
    Likes Received:
    591
    CPU power consumption will vary surprisingly dramatically during gaming. Once assets have been loaded, physics has been simulated, and draw calls called, the CPU will drop power state locally and precipitously. It's masked quite a bit by most practical means of measuring power draw being 2-3 regulator stages removed from the actual load (wall-wart meter -> AC-DC ATX PSU -> motherboard VRMs -> FIVR (if present) -> CPU power gating) each of which will act to 'average out' power draw over time. Gaming looks like a 'constant moderate' load compared to something like non-IO-bottlenecked 3D rendering, but at a millisecond/microsecond level it is a very bursty load (and in the case of Vsync, a very predictable bursty load). This is why FPS and time-per-frame are not strict inverses of each other, as the instantaneous and time-averaged performance can be quite different depending on how effective (and how fast) power gating can occur.
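
    A quick illustration of the FPS versus time-per-frame point, with invented frame times just to show that the two averages diverge once the load gets bursty:

        # Invented frame times (seconds): mostly quick frames plus a few stalls.
        frame_times = [0.010] * 90 + [0.050] * 10

        mean_frame_time = sum(frame_times) / len(frame_times)
        fps_from_mean_time = 1 / mean_frame_time                                  # ~71 fps
        mean_of_instant_fps = sum(1 / t for t in frame_times) / len(frame_times)  # ~92 fps

        print(round(fps_from_mean_time), round(mean_of_instant_fps))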
     
  14. Colonel Sanders

    Colonel Sanders Minimodder

    Joined:
    25 Jun 2002
    Posts:
    1,210
    Likes Received:
    4
    Thanks for your reply, Gareth. I haven't checked the CPU market much lately, and the last time I looked I didn't care about power usage, so no, I didn't know there were so many 4-core "95W TDP" CPUs. I bought a Ryzen 2600X a few weeks ago and didn't even think twice about the power consumption ('cuz America, we got cheap oil over here bro), and I also purchased an older AMD space-heater FX-9350 CPU at about the same time, mainly because it's unusual and interesting, but also because my basement goon cave gets cold sometimes.

    Is gaming a constant load? Sure, it's a lot more constant than, say, encoding audio/video, crunching a spreadsheet, or running a simulation. It is still a variable load, especially if your system is GPU-bound, but honestly that's kinda splitting hairs.

    I am surprised by your last point about wildly different-speed CPUs all carrying the same 15W TDP. I haven't yet read that article, since prior to yesterday it would have been boring to me, but I suddenly find myself with a strange interest in the subject.
     
  15. Gareth Halfacree

    Gareth Halfacree WIIGII! Lover of bit-tech Administrator Super Moderator Moderator

    Joined:
    4 Dec 2007
    Posts:
    17,084
    Likes Received:
    6,635
    I'm not saying it's not variable, in that when there's not much happening it'll be less demanding than when there's a bajillion enemies on screen; I'm saying that, unlike tasks that can finish quicker on a faster processor like image editing or video rendering or what have you, the game doesn't finish any quicker if you throw more compute at it. If you're playing for half an hour, you're playing for half an hour whether you've got a 6502 or a Threadripper.
    You'll see it across the market: TDP isn't really "power draw," though it serves as a handy equivalent for most tasks; it's Thermal Design Profile (or Power, depending on who you talk to) and says "design the cooling system to sink up to this many watts." As the earlier-linked benchmark showed, a 91W TDP chip can draw north of 130W if pushed with the right (thoroughly artificial) workload.

    Manufacturers don't work out the TDP per-chip, either, but per-family. Hence all the top-end Ryzens being 95W and the rest 65W, and hence all those new mobile parts being 15W. In reality, the bottom end Athlon will almost certainly draw less than the top end Ryzen - but they're both 15W TDP.

    And that's why TDP is entirely unhelpful as a performance metric (apart from "the 95W family is probably gruntier than the 15W family, unless something has gone very, very wrong.") It's also useless generation-to-generation: dropping down a process node will drop the power for the same performance, meaning an Amtel Corezn 3rd Gen will draw less power and likely have a lower TDP for the exact same performance as a 2nd Gen - or higher performance for the same TDP.
     
  16. edzieba

    edzieba Virtual Realist

    Joined:
    14 Jan 2009
    Posts:
    3,909
    Likes Received:
    591
    CPU-only (well, for practicality, CPU + motherboard, because who wants to try and make an LGA breakout shim?) fine-timescale power draw testing would be interesting, and pretty much unique. Of the various reviewers with the Powernetics kit (or full benchtable test rigs), almost all just post average power draw figures for a run. Tom's do charts of power over time for their GPU reviews (which highlighted the power spikes of the Vega Nano, the source of the issue people have with laptop bricks feeding DC-DC converters), and some PSU reviewers do the same to show ripple (e.g. Jonnyguru), but I have never seen the same done for CPU draw. It would need pretty high-frequency capture - north of 200kHz - to catch things like the infamous (and, many argue, mythical) '10 microsecond FIVR stall' of Haswell switching on the power to the upper SIMD pipeline bits (the reduced-throughput period supposedly being the smaller FIVR caps taking longer to stabilise vdroop after power-on than external VRMs would).
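
    On the capture rate, a rough sanity check of why you'd want something north of 200kHz to see a 10 microsecond event at all (the only assumption being that you want at least a couple of samples inside the event):

        # Samples landing inside a 10 microsecond event at various capture rates.
        event_s = 10e-6
        for rate_hz in (1_000, 50_000, 200_000, 1_000_000):
            print(f"{rate_hz:>9} Hz -> {event_s * rate_hz:.2f} samples per event")
        # At 200 kHz you get about two samples per event; a wall meter sampling at
        # a few hertz averages it away entirely.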
     
