
I believe it's time for a PC case revolution

Discussion in 'Article Discussion' started by Dogbert666, 12 Sep 2016.

  1. Dogbert666

    Dogbert666 *Fewer Staff Administrator

    Joined:
    17 Jan 2010
    Posts:
    1,655
    Likes Received:
    163
  2. edzieba

    edzieba Virtual Realist

    Joined:
    14 Jan 2009
    Posts:
    3,355
    Likes Received:
    324
    I don't agree with that: even with multiple GPUs and a large CPU cooler (and/or multiple radiators), using an SFX PSU can shave a decent chunk of wasted volume off a case. The Kimera Cerberus (née Nova) is a good demonstration of that.

    Hell, an ATX PSU is so oversized that you can build an entire discrete-GPU system in one!
     
  3. David

    David Take my advice — I’m not using it.

    Joined:
    7 Apr 2009
    Posts:
    13,395
    Likes Received:
    2,305
    Do you think the arrival of consumer 10GbE will help drive the popularity of ITX systems and smaller chassis too? There's a case for keeping the bulk of your storage outside of your PC, i.e. in a server or NAS, but over gigabit you're still limited to ~100MB/s transfer rates.

    I'm already running my main rig in a Hadron Air, which doesn't use a standard ATX or even SFX PSU, and it's still a bit porky in comparison to some: the Dan A4 shows what is possible.
     
  4. fix-the-spade

    fix-the-spade Well-Known Member

    Joined:
    4 Jul 2011
    Posts:
    3,694
    Likes Received:
    343
    I like silent PSUs. If SFX power supplies can run as quietly as my ATX RM750 (which is passive except under high load), then sure, why not? Fan noise is a far bigger no-no to me than case size, however; a lot of ITX stuff is LOUD thanks to the need for so much forced airflow.

    As for SSDs, why are SATA drives still using the 2.5" format? Surely they only need to be as wide as the data/power connectors. Stack those vertically and you could fit two or three into a single 3.5" drive bay. It would be a more flexible solution than M.2 drives pressed up against the board layout.
     
  5. megamale

    megamale Member

    Joined:
    8 Aug 2011
    Posts:
    252
    Likes Received:
    3
    Why stop at the ATX PSU? The whole ATX system should be overhauled. Why are motherboards still square rather than long and thin? Couldn't they have separate north and south bridges linked by some cable? I am a big fan of the Mac Pro dustbin form factor with the rads in the middle.

    Other crazy ideas:
    - Move all the power connectors to the "back" of the motherboard.
    - 90-degree ATX plugs.
    - Wide but thin power supplies.
    - "Arterial" watercooling printed into the PCB.
    - Flat-mounting stackable RAM sticks.
    - I/O ports detached from the motherboard.
     
  6. Jimbob

    Jimbob Member

    Joined:
    2 Jul 2009
    Posts:
    191
    Likes Received:
    1
    I finished an ITX build for a customer the other day with an SFF PSU, M.2 and so on (Gigabyte makes an ITX board with the M.2 slot on the underside, which is a neat idea), and it was pretty cool. Sadly, 99% of people who ask for a gaming PC want a full ATX build with flashy lights. My own system is in a Corsair Graphite case; however, if I were building again I would probably go down the ITX route... Or would I? Flashy lights are cool.
     
  7. Trance

    Trance Two steps forward, one step back

    Joined:
    6 May 2009
    Posts:
    610
    Likes Received:
    31
    Personally I think the biggest improvement would be to get rid of the outdated 24-pin connector. There are server-grade mITX motherboards that can do all the power regulation themselves off 12V; why can this not be taken into the mainstream? Then PSUs could become even smaller.
     
  8. Taua

    Taua Member

    Joined:
    20 Sep 2014
    Posts:
    87
    Likes Received:
    0
    I agree. I think PCs could be made much smaller and less 'boxy', as long as the standards are rigorously adhered to, whatever the new ones may be.

    After building a mini powerhouse PC into an EVGA Hadron Air case, it struck me just how overblown computers are.
     
  9. RedFlames

    RedFlames ...is not a Belgian football team

    Joined:
    23 Apr 2009
    Posts:
    11,705
    Likes Received:
    1,487
    The only way to make PCs appreciably smaller is to toss most of the ATX spec/standard in the bin.

    The main one is the power connectors... do we really need 32 different wires just to provide power to the CPU/motherboard? Especially as several of them are multiples of the same voltage?

    Most PCs currently have the following [based on 24-pin ATX + 8-pin EPS]:

    6x 12V wires + 3 [IIRC] per PCI-E connector
    5x 5V
    4x 3.3V
    12x ground + 2-3 per PCI-E

    Do we really need all of them? [Not really, as there are ITX boards that are powered by a laptop power brick.]
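    To put rough numbers on that, here's a quick tally sketch. The per-connector counts follow the list above (24-pin ATX + 8-pin EPS, plus one 6-pin PCI-E for illustration); treat them as approximations rather than an authoritative pinout reference.

```python
# Rough tally of conductors in a typical ATX power loadout.
# Counts are taken from the post above; "other" covers the 24-pin's
# signal/legacy pins (-12V, +5VSB, PS_ON, PWR_OK, NC).
connectors = {
    "24-pin ATX":  {"+12V": 2, "+5V": 5, "+3.3V": 4, "GND": 8, "other": 5},
    "8-pin EPS":   {"+12V": 4, "GND": 4},
    "6-pin PCI-E": {"+12V": 3, "GND": 3},
}

totals = {}
for pins in connectors.values():
    for rail, count in pins.items():
        totals[rail] = totals.get(rail, 0) + count

print(totals)                          # per-rail conductor counts
print(sum(totals.values()), "wires")   # grand total across connectors
```

    Nine separate +12V conductors and fifteen grounds for one machine makes the "toss most of the spec in the bin" argument fairly vivid.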
     
    Last edited: 12 Sep 2016
  10. schmidtbag

    schmidtbag New Member

    Joined:
    30 Jul 2010
    Posts:
    1,082
    Likes Received:
    10
    I don't agree with all of your "crazy ideas", but I do think that the PSU as a whole needs an overhaul. PSUs need to look into dropping legacy support. The only wires we need on a modern PSU are red, orange, yellow, green, and black (arguably, purple can be pretty nice too, but most people don't need it), yet most modern PSUs carry multitudes of each of the wires I mentioned. I don't see why they couldn't just drop the legacy wires, use fewer wires altogether, and use a thicker gauge for the red, orange, yellow, and black ones. Also, dropping the 4(+4)-pin CPU connector in favor of a bigger, newer "ATX" connector would be nice.

    As for your ideas, back-mounted 90-degree power connectors would only benefit certain cases, and for many people they could be a real inconvenience. I never looked into it, but I'm sure you can find L-brackets for the ATX connector anyway.
    The "arterial" water cooling sounds interesting, but I think its expense would heavily outweigh the performance gains over a standard water-block. What I think needs to happen is for motherboards and GPUs to come prepped for liquid cooling; as it stands, you waste a lot of money buying good parts whose heatsinks end up getting removed before they're ever used.
    Instead of stackable RAM, I think we should move strictly to SO-DIMMs. In my experience they're very capable, just simply smaller. You can lay them flat and "stack" them that way, much like in laptops, so they're great for ultra-thin PCs.


    Anyway, back to the article: I think the easiest solution would be for PSU manufacturers to focus on STX but supply mounting brackets for ATX cases. Think of it like the 2.5" SSDs that come with 3.5" brackets for desktop PCs. Companies wouldn't need to do this forever, but if something like STX is to become the norm, such PSUs need to be commodities, and right now they're not.
     
  11. bawjaws

    bawjaws Well-Known Member

    Joined:
    5 Dec 2010
    Posts:
    3,461
    Likes Received:
    363
    How are SFX PSUs these days for fan noise? One complaint regarding smaller PSUs is that they generally have 80mm fans which are not the best for noise (either loudness or noise profile). I know you can get SFX-L PSUs with 120mm fans, and that increasingly we're seeing semi-passive PSUs, but I'd be reluctant to shift to a smaller PSU if the noise is going to be unpleasant, especially if it doesn't actually save that much space in an ATX case (where there tends to be a lot of wasted space as it is).

    SFX PSUs in an mITX box? Yes, please. But I think ATX has bigger problems than the size of the PSU, to be honest.
     
  12. ZeDestructor

    ZeDestructor Member

    Joined:
    24 Feb 2010
    Posts:
    226
    Likes Received:
    4
    The problem there is that a lot of us who have decent use-cases for 10Gbit right now also need more than one local SSD, and often more add-in cards than just the GPU, so really, the mid-tower ATX case is still the best fit for the job in most cases.

    Heh... I have a sort-of similar complaint about the NCASE M1: if they'd made it about 20mm taller, it could have fitted an mATX board fairly easily, and that annoys me more than it has any right to!

    Some cases are already doing the (IMO) much more sensible solution of having 2.5" bays behind the mobo, or under the PSU cabling or similar.

    As for why the 2.5" form factor remains, it mostly has to do with legacy, packaging or density reasons.

    Legacy: lots of machines (laptops in particular) have custom-designed 2.5" bays, sometimes completely toolless (like in my Dell M4800). For those machines, having something different is a nightmare of fitting in an adapter, validating said adapter to work in all conditions, EMI, etc.

    Packaging: The bigger chassis means more surface area for heat dissipation, and the tougher chassis means it can be handled less carefully.

    Density: if you need more than two disks' worth of storage, or more than four NAND packages' worth of NAND, you can't use M.2 exclusively - you simply need some sort of cabling to a remote component installed somewhere else. Servers in particular are sensitive to this, with four disks per machine the typical minimum, and anything from 24 to 90 disks per chassis for storage boxes. (If you can show me how to fit 180 lanes of PCIe 3.0 onto a single ATX or E-ATX board, I'll be really, really impressed.)

    In short, for now, the 2.5" form factor is extremely practical for pretty much anything not an ultra-thin laptop, so don't expect it to go away any time soon.

    Realistically, we only need three voltages: +12V, +12VSB (or +5VSB; the choice is completely arbitrary) and GND (0V). Everything else can be derived from the +12V line by a small VRM - in fact, that's exactly what pretty much any decent PSU does already: one giant +12V rail (and a tiny independent +5VSB rail for standby power), with VRMs generating the +3.3V, +5V and -12V rails.

    If you want to see it in action, Dell and Lenovo have already transitioned to 12V-only desktops (+3.3V and +5V generated on the mobo for disks)... and people are super-unhappy that it's not standard ATX despite the obvious benefits of making PSUs simpler.

    As for slimming down PSUs, IMO we should just adopt a modified form of server PSU: thicken it to 60mm, put in two fans, and make it hot-swappable and redundant. That should be quiet with any amount of decent design, going by how quiet we can get some of the 150W+ CPU coolers already.

    EDIT: also, arterial watercooling is well on its way, in order to allow for stacked processing dies. There have been a few demos over the years, but real money is being funnelled into it now to get it mass-producible.
     
    Last edited: 12 Sep 2016
  13. schmidtbag

    schmidtbag New Member

    Joined:
    30 Jul 2010
    Posts:
    1,082
    Likes Received:
    10
    Though I'm not saying you're wrong, I find it very hard to believe that most [good] PSUs use VRMs for all of their lower voltages. Take a look at the average PSU sticker - even many cheap ones clearly do not work that way, because you may find that the 5V wires offer many more amps than the 12V ones. When you use a VRM to get a lower voltage, you lose a lot of amps and generate a lot of heat.


    To my knowledge, that's only for their all-in-one systems, which are effectively non-portable laptops. These usually come with a single power brick. I don't know if there's a newer model of power brick, but last time I checked, both companies used 19.5V models. Though perhaps this differs in countries with 220V outlets.
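    For what it's worth, the amps/heat point depends heavily on the regulator topology. A rough back-of-envelope sketch (hypothetical round numbers, including the assumed 90% buck efficiency) comparing a linear regulator with a switched-mode buck converter:

```python
# Dropping 12V to 5V at 20A: a linear regulator burns the voltage
# difference as heat, while a switched-mode buck converter trades
# voltage for current and wastes far less.
v_in, v_out, i_out = 12.0, 5.0, 20.0

# Linear regulator: input current equals output current,
# so the excess voltage becomes pure heat.
p_out = v_out * i_out                      # 100 W delivered to the load
p_loss_linear = (v_in - v_out) * i_out     # 140 W dissipated as heat
eff_linear = p_out / (p_out + p_loss_linear)

# Buck converter at an assumed 90% efficiency.
eff_buck = 0.90
p_loss_buck = p_out / eff_buck - p_out     # roughly 11 W of heat

print(f"linear: {eff_linear:.0%} efficient, {p_loss_linear:.0f} W of heat")
print(f"buck:   {eff_buck:.0%} efficient, {p_loss_buck:.1f} W of heat")
```

    So the "losing amps" objection holds for linear regulation, but a switching VRM sidesteps most of it, which is the crux of the disagreement above.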
     
  14. Anakha

    Anakha Member

    Joined:
    6 Sep 2002
    Posts:
    587
    Likes Received:
    7
    Personally, I'd like to see one simple thing:

    Standardized bus bars.

    Three pieces of copper (+12V, GND, +5V) running through the case, pre-run by the manufacturer. The PSU energizes the bars at its securing point (whatever that might be), and everything takes its power from the bars.

    The bars run down the spine of the case, at the edge of the motherboard tray, so the motherboard can clamp on for power.

    Drive cages are pre-run with power connectors (possibly in the form of bars on the sides of the cages that the drives will latch onto when installed).

    Also, reach-around connectors (or pass-through points on the motherboard) so that power-hungry expansion cards (like graphics cards) can reach around the edge of the motherboard to clamp on for extra juice.

    Get something like that sorted, and 99% of the cable management issues with cases will be solved. It would make power delivery easier, and case manufacturers would have more scope to play with layouts and design. Less space would be required for cable routing, and PSU basements could be properly isolated.
     
  15. Elton

    Elton Officially a Whisky Nerd

    Joined:
    23 Jan 2009
    Posts:
    8,575
    Likes Received:
    189
    Probably the first things that will have to change are the ATX 24-pin and 4-pin plugs.

    The problem, mind you, is that people aren't going to be quick to adopt it because of backwards-compatibility concerns.

    Which brings me to the suggestion of potentially doing it with modular power supplies first. The 6-pin PCI-E connector is also a strange thing, as its layout really makes no sense, but that's neither here nor there.
     
  16. schmidtbag

    schmidtbag New Member

    Joined:
    30 Jul 2010
    Posts:
    1,082
    Likes Received:
    10
    But that's the thing - it's not hard to get an adapter. Hell, there were adapters to make 20-pin connectors work as 24-pin connectors. All we're doing is passing electricity through or changing the shape of something; it shouldn't be all that difficult to do a brand-new, efficient design. Over time, the old components won't need to be converted any more. The only people left out would be the ones who depend on stuff like the -12V wire, which to my knowledge hasn't been used since ISA slots were around (so, 20+ years ago).
     
  17. tristanperry

    tristanperry Active Member

    Joined:
    22 May 2010
    Posts:
    907
    Likes Received:
    38
    Good article; I definitely agree. There are a lot of potentially awesome designs that could be built, but currently there just seem to be all too many standard ATX tower-style cases still being pumped out.

    Silverstone are doing some nice mini-ITX cases (which can still support full-length GPUs), but it'd be nice to see more innovation and competition in that market.
     
  18. ZeDestructor

    ZeDestructor Member

    Joined:
    24 Feb 2010
    Posts:
    226
    Likes Received:
    4
    Switched-mode VRMs are over 95% efficient (as opposed to the amazingly poor efficiency of linear regulators), and my definition of a decent PSU sets a pretty low bar: something like Corsair's much-maligned (very unfairly, at that) CX750M would be my minimum, and it does precisely that (you can pretty easily tell by the PSU being able to supply full or very near-full power on the +12V rail). My own AX850 from 2012 does that as well.

    At this point, the reality is that group-regulated PSUs are a liability: it's super-easy to get a very +12V-biased crossload with modern machines, and then all of a sudden your users are complaining that their machines shut down or that efficiency is awful. Switched-mode VRMs (as opposed to hot-running linear ones) sidestep that problem completely.

    The Haswell-and-up machines (at least the OptiPlex/ThinkCentre towers and SFFs; it should also extend to the Inspiron/IdeaCentre lines) are 12V-only designs - i.e. the PSU only supplies +12V and the VSB (which I think is still +5VSB, though the standby circuit has always been independent of the rest of the PSU anyway).

    On the AIOs, laptops and USFF machines, the 18-20V input (it depends on the vendor) is turned pretty much instantly into the final voltages - what's a change from +12V to +20V to the VRMs? It changes basically nothing, since the VRMs handle every input from ~10V (the battery in my M4800 is 11.1V) up to the charger's 19.5V.
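    That input-voltage flexibility falls out of basic buck-converter maths: the ideal duty cycle is simply Vout/Vin, so the controller just adjusts its switching on-time to the supply it's given. A minimal sketch (the 1.2V core voltage is an assumed illustrative figure):

```python
# Ideal (lossless) buck-converter duty cycle: D = V_out / V_in.
# Shows why a VRM barely cares whether it's fed 11.1V, 12V, or 19.5V.
def buck_duty_cycle(v_in: float, v_out: float) -> float:
    """Fraction of each switching period the high-side switch is on."""
    if not 0 < v_out <= v_in:
        raise ValueError("a buck converter can only step voltage down")
    return v_out / v_in

for v_in in (11.1, 12.0, 19.5):       # battery, ATX rail, laptop brick
    d = buck_duty_cycle(v_in, 1.2)    # assumed CPU core voltage
    print(f"{v_in:>5.1f} V in -> duty cycle {d:.1%}")
```

    The same VRM covers the whole input range just by shortening its on-time as the input rises, which is exactly why switching the PSU side from +12V to +20V changes nothing downstream.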

    VRMs generating the non-+12V rails on the mobo is how the big three OEMs have been doing it for years on servers and, very recently, on desktops as well.

    On the more system-buildy end of things, we already have picoPSUs, and on the more open side of server chassis builders, Intel and Supermicro have a dedicated power board in front of the basic +12V redundant PSU pair to generate the minor rails. I mean, what does it really change to have that set of VRMs on the mobo rather than on a dedicated board/module?

    Bus bars would be impractical (they make laying things out a royal pain). Really, all we need to do is plug PSUs directly into the mobo (like in my DL380 G6) and then just route the power around - I mean, we already chuck around hundreds of amps for VRMs on multi-CPU boards, so doing something similar for PCIe isn't that much of a stretch. Apparently that's not happening yet, though. More likely NVLink will be a runaway success and nV will force it into desktops somehow (I'm really hoping they do), essentially forcing the migration from PCIe slots to mezzanine connectors and standardized GPU coolers.
     
  19. VipersGratitude

    VipersGratitude Well-Known Member

    Joined:
    4 Mar 2008
    Posts:
    3,033
    Likes Received:
    410
    Blah blah blah...technical talk...blah blah blah

    None of it matters.

    What the article argues for is an Xbox: small form factor and "good enough" computing power under the hood. This at a time when DX12's GPU scaling could restore the PC's distinctiveness after years of being neglected by a console-centric industry. I say make cases as big as they used to be - I want to experience VR on a Cray, not a Casio.
     
  20. Guest-16

    Guest-16 Guest

    Remove the ATX and EPS plugs altogether and have a single high-voltage input. Cable reduction is a MUST. Let the components do the DC-DC conversion. Multiple connectors to a single device is ridiculous, and all the connector standards we currently have are shite: SATA - flimsy ****. Molex - shitshitshitshit, etc.
     