
Other Show me your... NAS/Server

Discussion in 'Hardware' started by Votick, 12 Aug 2011.

  1. crazyg1zm0

    crazyg1zm0 Minimodder

    Joined:
    20 Feb 2007
    Posts:
    2,334
    Likes Received:
    55
    Yes, I believe the main reason is that it's quite old, and we are moving a lot of stuff to a new data centre, so a lot of our kit is being replaced.

    Going to grab 3x3TB drives on my way home from work tonight and that will be the start of it all.
     
  2. Votick

    Votick My CPU's hot but my core runs cold.

    Joined:
    21 May 2009
    Posts:
    2,321
    Likes Received:
    109
    Once our stuff is out of its three-year warranty, we throw it away.
    R610s, R710s > BIN

    And we are taking 90+ at a time
     
  3. Icy EyeG

    Icy EyeG Controlled by Eyebrow Powers™

    Joined:
    23 Jul 2007
    Posts:
    517
    Likes Received:
    3
    I finally decided to upgrade my ghetto NAS build to something decent (albeit old).

    Here are the pictures (sorry for the potato quality):

    https://drive.google.com/file/d/0B-5SJga8y9EtWGxsQmVCRm5lWEk/view

    I first used this as a WHS machine (Point of View Ion ITX in a Tacens Ixion case, 500GB + 2x1TB), but then I upgraded it to OMV. I managed (somehow) to cram 4x3TB WD Reds in there and threw in a fan from an old Chieftec PSU for good measure. Also, the non-standard SFX PSU that came with the case died, and due to the lack of space I ended up adapting a Seasonic Flex-ATX PSU with a custom backplate...

    https://drive.google.com/file/d/0B-5SJga8y9EtX0ZBb2hEc3dHSlU/view

    This week I found an HP ProLiant G7 N54L for €130 and decided to buy it.

    https://drive.google.com/file/d/0B-5SJga8y9EtN2I2cUZrNTBmWG8/view

    The beauty of OMV is that I only had to swap over the HDDs and the pen drive: Debian booted just fine on the G7, and OMV kept all its configuration.
     
    Last edited: 18 Jul 2015
  4. faugusztin

    faugusztin I *am* the guy with two left hands

    Joined:
    11 Aug 2008
    Posts:
    6,953
    Likes Received:
    270
    Icy EyeG, we can't see your private images.

     
  5. Icy EyeG

    Icy EyeG Controlled by Eyebrow Powers™

    Joined:
    23 Jul 2007
    Posts:
    517
    Likes Received:
    3
    Sorry about that. Apparently I can't embed the images... :blush:
     
  6. bionicgeekgrrl

    bionicgeekgrrl Minimodder

    Joined:
    1 Oct 2009
    Posts:
    223
    Likes Received:
    7
    My server isn't really worth taking a photo of. It's currently:

    • A battered Antec 900 case
    • Core 2 Duo E4600 (going to put an E8600 in soon; the E4600 was only ever meant to be temporary!)
    • 4GB RAM; it doesn't like more memory for some reason
    • Asus P5Q-E
    • GeForce 8400 GS, just for diagnostic stuff
    • 4x2TB HDDs in RAID 5
    • Debian Linux
    • 2x 500GB drives, one for boot, another for additional storage
    • 2x 500GB drives - external
    • USB 3.0 PCIe card to speed up external data transfers

    Currently the RAID 5 array is 'recovering' after a power outage while adding the 4th drive today. I'm not sure if all the data on it is hosed, but I have backups of most data at least. When the recovery is finished I'll at least have a 6TB array available to use, I guess.
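
    (For anyone watching a rebuild like this: a minimal Python sketch that just pulls the array and progress lines out of /proc/mdstat, assuming nothing beyond a Linux box with md arrays.)

        # print each md array line plus any resync/recovery progress from /proc/mdstat
        with open("/proc/mdstat") as f:
            for line in f:
                line = line.rstrip()
                if line.startswith("md") or "recovery" in line or "resync" in line:
                    print(line)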

    It could do with being upgraded, but funds don't permit it presently.
     
  7. Icy EyeG

    Icy EyeG Controlled by Eyebrow Powers™

    Joined:
    23 Jul 2007
    Posts:
    517
    Likes Received:
    3
    I'm really afraid of that, so I invested in a UPS. I deem it essential if you don't have a dedicated hardware RAID controller with a battery.
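
    (If the UPS talks to apcupsd - just one option - a rough Python sketch of polling battery state so the box can shut down cleanly. It assumes the stock apcaccess tool is installed; STATUS and BCHARGE are apcupsd's usual output fields.)

        import subprocess

        # ask apcupsd for its status block and parse the "KEY : value" lines
        out = subprocess.run(["apcaccess", "status"], capture_output=True, text=True).stdout
        fields = dict(
            (k.strip(), v.strip())
            for k, v in (line.split(":", 1) for line in out.splitlines() if ":" in line)
        )
        # a STATUS of ONBATT would be the cue to start a clean shutdown
        print(fields.get("STATUS"), fields.get("BCHARGE"))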
     
  8. bionicgeekgrrl

    bionicgeekgrrl Minimodder

    Joined:
    1 Oct 2009
    Posts:
    223
    Likes Received:
    7
    I had a UPS until I moved, and it didn't survive the move. Thankfully most stuff is backed up or duplicated, as my other half has copies of some of the contents on an external drive he brings when he visits. The rest is backed up online or on other machines. Just annoying. First data loss in this fashion for a long while, really.

    The XFS filesystem usually handles most things, but until the recovery finishes I'm not sure if it has been damaged beyond xfs_repair's ability to fix (the superblock could have been damaged/wiped). I might get lucky and find the data is OK when the recovery finishes, but I'm rather doubtful, sadly. Live and learn!
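
    (Once the rebuild completes, xfs_repair's no-modify mode gives a read-only damage report before you commit to anything; a sketch, with the device name just an example. If the primary superblock is gone, a real repair run will scan for a secondary one.)

        import subprocess

        # -n = no-modify mode: report problems but change nothing on disk
        # /dev/md0 is a placeholder; the filesystem must be unmounted first
        subprocess.run(["xfs_repair", "-n", "/dev/md0"], check=False)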
     
  9. gagaga

    gagaga Minimodder

    Joined:
    14 Dec 2008
    Posts:
    193
    Likes Received:
    10
    I've had endless problems with RAID 5; it's just not robust enough for home use. The parity is there to allow service to continue if a drive fails - it was never intended to be a backup mechanism.

    I've now gone to FlexRAID for my media stuff - you've still got the ability to recover the data on a failed drive without going to backups, but even if the parity is corrupt, the data on the other drives is still safe.
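
    (The single-parity idea in a toy Python example: parity is just the XOR of the data blocks, so any one lost block can be rebuilt from the rest - which is exactly why it covers one drive failure and nothing more.)

        # toy RAID-5-style parity: three "drives" of data plus one parity block
        d0 = bytes([1, 2, 3, 4])
        d1 = bytes([9, 8, 7, 6])
        d2 = bytes([5, 5, 5, 5])
        parity = bytes(a ^ b ^ c for a, b, c in zip(d0, d1, d2))

        # "lose" d1, then rebuild it from the survivors plus the parity block
        rebuilt = bytes(a ^ c ^ p for a, c, p in zip(d0, d2, parity))
        assert rebuilt == d1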
     
  10. bionicgeekgrrl

    bionicgeekgrrl Minimodder

    Joined:
    1 Oct 2009
    Posts:
    223
    Likes Received:
    7
    First problem I've had with it in over 5 years, tbh. It worked flawlessly in that time; it just screwed up this one time.
     
  11. gagaga

    gagaga Minimodder

    Joined:
    14 Dec 2008
    Posts:
    193
    Likes Received:
    10
    Meant to say good luck with the rebuild.

    My worst was the controller overwriting the partition table on two drives. I had to manually calculate the parity to set up the recovery tools - it took two days...
     
  12. bionicgeekgrrl

    bionicgeekgrrl Minimodder

    Joined:
    1 Oct 2009
    Posts:
    223
    Likes Received:
    7
    This is Linux software RAID, so it is hardware agnostic. While it loses the battery option of hardware RAID, it doesn't bite me when the controller dies and I can't get hold of another one (I had this happen with an old HP server); software RAID at least lets me throw the drives into almost any Linux system with SATA to get at the data.
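
    (That portability really is as simple as it sounds; a Python sketch of re-assembling a moved md array on a fresh box, using stock mdadm commands run as root.)

        import subprocess

        # scan all block devices for md superblocks, then assemble what's found
        subprocess.run(["mdadm", "--examine", "--scan"], check=True)   # show detected arrays
        subprocess.run(["mdadm", "--assemble", "--scan"], check=True)  # bring them online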

    Unfortunately, whilst the array rebuilt OK in the end, the filesystem was toast :( So glad I keep backups of most stuff that can't be replaced.
     
  13. knuck

    knuck Hate your face

    Joined:
    25 Jan 2002
    Posts:
    7,671
    Likes Received:
    310
    Gigabyte GA-B75M-D3H
    i5-4690
    14GB of RAM (4+4+4+2)
    SYBA SI-PEX40064 (4 SATA ports)
    Corsair Force GT 120GB for the OS
    8x2TB for a usable 10TB raidz2 ZFS pool (creation sketch below, after the job list)
    Fractal Design Node 804
    Thermaltake Toughpower 750W
    CentOS 7

    Jobs:
    -File server
    -Web server
    -OpenVPN server
    -Minecraft server
    -UT4 alpha server (not configured yet)
    -Homemade TeamSpeak bot (AFK, sound effects, etc.)
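
    (As promised above: creating a pool like that is a one-liner. A sketch, with the pool name and device letters as placeholders - a real setup would use /dev/disk/by-id paths so the pool survives drives being shuffled.)

        import subprocess

        # raidz2: the pool survives any two of the eight members failing
        disks = [f"/dev/sd{letter}" for letter in "bcdefghi"]  # placeholders
        subprocess.run(["zpool", "create", "tank", "raidz2", *disks], check=True)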

    Excuse the very crappy photos. I have never figured out how to take decent photos indoors; they're always dark and full of noise.

    [image]

    (only half of the disks installed)
    [image]

    In all its glory
    [image]



    ZFS has worked great so far. I've had a few disks fail and my pool is fine. Very satisfied overall. The PC is also very quiet, which is a great plus.
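
    (Riding out disk failures like that is mostly about noticing them early; a minimal health-check sketch, with 'tank' again a placeholder pool name.)

        import subprocess

        # -x prints a single "all pools are healthy" line unless something is wrong
        subprocess.run(["zpool", "status", "-x"], check=False)
        # a scrub re-reads everything and verifies checksums in the background
        subprocess.run(["zpool", "scrub", "tank"], check=False)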
     
    Last edited: 20 Jul 2015
  14. gagaga

    gagaga Minimodder

    Joined:
    14 Dec 2008
    Posts:
    193
    Likes Received:
    10
    How's the power consumption? Is ZFS constantly spinning the disks / running the CPU, or can you tell it to spin down?

    Do you use an SSD cache drive?

    I'm using flexraid at the moment (dual parity) but would love something I can fit and forget like ZFS.
     
  15. knuck

    knuck Hate your face

    Joined:
    25 Jan 2002
    Posts:
    7,671
    Likes Received:
    310
    1) The power strip that I use says the server draws 76 watts. I doubt this is the case, but meh, whatever.

    2) I think the disks are constantly spinning. I haven't looked into this at all, sorry (a quick way to check is sketched below).

    3) CPU usage is very, very low.

    4) No ZFS cache on the SSD. The pool is basically just a backup and media server; I wouldn't need the extra performance.

    5) Give it a shot! It's very easy to set up.
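
    (The spindown check mentioned in 2): a Python sketch using hdparm, with the device name a placeholder. Whether ZFS housekeeping wakes the disks straight back up is a separate question.)

        import subprocess

        dev = "/dev/sdb"  # placeholder
        # -C reports the drive's current power state (active/idle vs standby)
        subprocess.run(["hdparm", "-C", dev], check=False)
        # -S 242 = (242 - 240) * 30 min: spin down after one hour of idle
        subprocess.run(["hdparm", "-S", "242", dev], check=False)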
     
  16. gagaga

    gagaga Minimodder

    Joined:
    14 Dec 2008
    Posts:
    193
    Likes Received:
    10
    Cool - my current server uses around 75W with the drives spun down (~18 drives + 2x IBM M1015s).

    Think I'll have a play. Would it be wrong to try this with the 5x 840 Pros I've got lying around at work? Guessing I'll get some decent throughput...
     
  17. play_boy_2000

    play_boy_2000 ^It was funny when I was 12

    Joined:
    25 Mar 2004
    Posts:
    1,618
    Likes Received:
    146
    Didn't realize we were doing Cisco stuff.

    Mine:
    [image]
    [image]

    The scary part is, I have more, but lack space to mount it :(

    Did my CCNA recert in May; now on to the CCNP!
     
  18. knuck

    knuck Hate your face

    Joined:
    25 Jan 2002
    Posts:
    7,671
    Likes Received:
    310
    I'm not sure that's a good idea. You should make sure TRIM is supported by ZFS before you create a pool of SSDs.
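
    (Worth noting: TRIM support only landed in OpenZFS 0.8, well after this thread. On releases that have it, it's a per-pool setting; 'tank' is a placeholder.)

        import subprocess

        # autotrim issues TRIMs continuously as blocks are freed...
        subprocess.run(["zpool", "set", "autotrim=on", "tank"], check=False)
        # ...while 'zpool trim' runs a one-off pass over the whole pool
        subprocess.run(["zpool", "trim", "tank"], check=False)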
     
  19. gagaga

    gagaga Minimodder

    Joined:
    14 Dec 2008
    Posts:
    193
    Likes Received:
    10
    Should be okay - it's purely for play rather than prod. If I go ZFS it'll be on Seagate 8TB shingled (SMR) drives, which are a whole other challenge entirely...
     
  20. knuck

    knuck Hate your face

    Joined:
    25 Jan 2002
    Posts:
    7,671
    Likes Received:
    310
    Damn, I didn't know about those drives. Mmmmm, this is tempting! haha
     
