Yes, I believe the main reason is that it's quite old, and we're moving a lot of stuff to a new data centre, so a lot of our kit is being replaced. Going to grab 3x3TB drives on my way home from work tonight, and that will be the start of it all.
Once our stuff is out of its 3-year warranty we throw it away. R610s, R710s > BIN. And we're taking 90+ at a time.
I finally decided to upgrade my ghetto NAS build to something decent (albeit old). Here are the pictures (sorry for the potato quality): https://drive.google.com/file/d/0B-5SJga8y9EtWGxsQmVCRm5lWEk/view

I first used this as a WHS machine (Point of View Ion ITX in a Tacens Ixion case, 500GB + 2x1TB), but then I upgraded it to OMV. I managed (somehow) to cram 4x3TB WD Reds in there, and threw in a fan from an old Chieftec PSU for good measure. Also, the non-standard SFX PSU that came with the case died, and due to the lack of space I ended up adapting a Seasonic Flex-ATX PSU with a custom backplate: https://drive.google.com/file/d/0B-5SJga8y9EtX0ZBb2hEc3dHSlU/view

This week I found an HP ProLiant G7 N54L for 130€ and decided to buy it. https://drive.google.com/file/d/0B-5SJga8y9EtN2I2cUZrNTBmWG8/view

The beauty of OMV is that I only had to swap the HDDs and the pen drive; Debian booted just fine on the G7, and OMV kept all the configuration.
My server isn't really worth taking a photo of. It's currently:

- A battered Antec 900 case
- Core2Duo E4600 (going to put an E8600 in soon; the E4600 was only ever meant to be temporary!)
- 4GB RAM (it doesn't like more memory for some reason)
- Asus P5Q-E
- GeForce 8400GS, just for diagnostic stuff
- 4x2TB HDDs in RAID 5
- Debian Linux
- 2x 500GB drives, one for boot, another for additional storage
- 2x 500GB drives, external
- USB 3.0 PCI-e card to speed up external data transfers

Currently the RAID 5 array is 'recovering' after a power outage while adding the 4th drive today. I'm not sure if all the data on it is hosed, but I have backups of most data at least. When the recovery is finished I'll at least have a 6TB array available to use, I guess. It could do with being upgraded, but funds don't permit it presently.
I'm really afraid of that, so I invested in a UPS. I consider it essential if you don't have a dedicated hardware RAID controller with a battery-backed cache.
I had a UPS until I moved, and it didn't survive the move. Thankfully most stuff is backed up or duplicated, as my other half has copies of some of the contents on an external drive he brings when he visits; the rest is backed up online or on other machines. Just annoying. It's the first data loss in this fashion for a long while, really. The XFS filesystem usually handles most things, but until the recovery finishes I'm not sure if the filesystem has been damaged beyond xfs_repair's ability to fix (the superblock could have been damaged/wiped). I might get lucky and find the data is OK when the recovery finishes, but I'm rather doubtful, sadly. Live and learn!
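For anyone following along, the post-rebuild filesystem check can be sketched roughly like this (the `/dev/md0` device name is a placeholder for whatever your array node is; the flags are standard xfs_repair options):

```shell
# Dry-run first: -n reports problems without writing anything to disk.
xfs_repair -n /dev/md0

# A real repair run; if the primary superblock is gone, xfs_repair
# scans the disk for a secondary superblock automatically:
xfs_repair /dev/md0

# Last resort when the log itself is corrupt: -L zeroes the log,
# at the cost of losing the most recent metadata updates.
# xfs_repair -L /dev/md0
```

The filesystem must be unmounted before running any of these, and it's worth waiting for the md recovery to finish first so xfs_repair isn't fighting a half-rebuilt array.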
I've had endless problems with RAID 5; it's just not robust enough for home use. The parity is there to allow service to continue if a drive fails - it was never intended to be a backup mechanism. I've now gone to FlexRAID for my media stuff: you've still got the ability to recover the data on a failed drive without going to backups, but even if the parity is corrupt, the data on the other drives is still safe.
First problem I've had with it in over 5 years, tbh. It worked flawlessly in that time; it just screwed up this one time.
Meant to say, good luck with the rebuild. My worst was the controller overwriting the partition table on 2 drives. I had to manually calculate the parity to set up the recovery tools; it took 2 days...
This is Linux software RAID, so it is hardware agnostic. While it loses the battery-backed cache option of hardware RAID, it doesn't bite me when the controller dies and I can't get hold of another one (had this happen in an old HP server I had); software RAID at least allows me to throw the disks into almost any Linux system with SATA to access the data. Unfortunately, whilst the array rebuilt OK in the end, the filesystem was toast, so I'm glad I keep backups of most stuff that can't be replaced.
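For what it's worth, moving an md array to another Linux box is usually just a reassemble; a sketch (device names here are examples, not the actual array):

```shell
# Scan all disks for md superblocks and assemble any arrays found:
mdadm --assemble --scan

# Or name the array and its member partitions explicitly:
mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

# Check array state before mounting anything on top of it:
cat /proc/mdstat
mdadm --detail /dev/md0
```

The md superblock lives on the member disks themselves, which is exactly why the array survives a controller or motherboard swap.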
Gigabyte GA-B75M-D3H
i5 4690
14GB of RAM (4+4+4+2)
SYBA SI-PEX40064 (4 SATA ports)
Corsair Force GT 120GB for the OS
8x2TB in a raidz2 ZFS pool, for a usable 10TB
Fractal Design Node 804
Thermaltake Toughpower 750W
CentOS 7

Jobs:
- File server
- Web server
- OpenVPN server
- Minecraft server
- UT4 alpha server (not configured yet)
- TeamSpeak home-made bot (AFK, sound effects, etc.)

Excuse the very crappy photos. I have never figured out how to take decent photos inside; they're always dark and full of noise. (Only half of the disks installed.) In all its glory. ZFS has worked great so far. I've had a few disks fail and my pool is fine. Very satisfied overall. The PC is also very quiet, which is a great plus.
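As a rough sanity check on that usable-space figure: an 8-disk raidz2 vdev gives six disks' worth of data space, and the gap between "12TB" and "10TB usable" is mostly the marketing-TB to TiB conversion plus ZFS overhead. A back-of-the-envelope calc (assuming 2 TB drives):

```shell
# Back-of-the-envelope raidz2 capacity check (assumes 8x 2 TB disks;
# "TB" here is the marketing unit, 10^12 bytes).
disks=8
parity=2   # raidz2 dedicates two disks' worth of space to parity
tb=2       # per-disk size in TB

raw=$(( (disks - parity) * tb ))   # data space in TB
# 1 TB = 10^12 / 2^40 TiB, roughly 0.909
tib=$(awk -v r="$raw" 'BEGIN { printf "%.1f", r * 0.909 }')
echo "${raw} TB of data space, about ${tib} TiB before ZFS overhead"
```

Knock off a bit more for ZFS metadata and padding and ~10 usable lines up with what `zfs list` would report.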
How's the power consumption? Does ZFS constantly spin the disks / run the CPU, or can you tell it to spin down? Do you use an SSD cache drive? I'm using FlexRAID at the moment (dual parity) but would love something I can fit and forget like ZFS.
1) The power strip that I use says the server draws 76W. I doubt this is the case, but meh, whatever.
2) I think the disks are constantly spinning. I haven't looked into this at all, sorry.
3) CPU usage is very, very low.
4) No ZFS cache on the SSD. The ZFS pool is basically just a backup and media server; I wouldn't need the extra performance.
5) Give it a shot! It's very easy to set up.
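If you ever do want them spinning down, hdparm can query and set a drive's standby timer; a sketch (the device name is a placeholder, and note that ZFS scrubs or any background access will wake the disks again):

```shell
# Is the drive spinning right now? Reports active/idle or standby.
hdparm -C /dev/sda

# Set the standby (spin-down) timer. In hdparm's encoding, values
# 241-251 mean units of 30 minutes, so 241 = spin down after 30 min.
hdparm -S 241 /dev/sda
```
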
Cool - my current server uses around 75W in spindown (~18 drives + 2x IBM M1015s). Think I'll have a play. Would it be wrong to try this with the 5x 840 Pros I've got lying around at work? Guessing I'll get some decent throughput...
Didn't realize we were doing Cisco stuff. Mine: The scary part is, I have more, but lack the space to mount it. Did my CCNA recert in May; now on to CCNP!
I'm not sure that's a good idea. You should make sure TRIM is supported by ZFS before you create a pool of SSDs.
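A couple of ways to check, as a sketch (the device name and the pool name `tank` are placeholders). The kernel reports discard support per device; on the ZFS side, only the newer OpenZFS releases (0.8+) have TRIM support at all:

```shell
# Non-zero DISC-GRAN / DISC-MAX columns mean the device supports TRIM:
lsblk --discard /dev/sda

# On OpenZFS 0.8 or later, TRIM can be run manually or enabled per pool:
zpool trim tank
zpool set autotrim=on tank
```

On older ZFS-on-Linux releases there is no TRIM at all, so an all-SSD pool would slowly lose write performance as the drives fill.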
Should be okay; it's purely for play rather than prod. If I go ZFS it'll be on Seagate 8TB shingled (SMR) drives, which are a whole other challenge entirely...