So I feel like I've exhausted all avenues of research and I'm at the point of pulling the trigger on a new NAS, but I just want one final sanity check before I do. I'm going for a Synology RS1219+ and looking to stock it with Seagate IronWolf Pro 6TB drives, in theory. In practice I may start by reusing the WD Reds in my microserver and grow/swap from there, but I want to minimise RAID migrations/expansions, at least somewhat.

I'm conscious of the fact that the RS1219+ is nothing more than a Linux box with a fancy interface and some mediocre hardware, and the fact that I could build a comparative monster of a storage server for the £1,000 or so the diskless Synology runs is what's giving me the most pause at the moment. As a middle ground, the QNAP 832XU offers better hardware, 10GbE baked in and a lower price point, at the seeming expense of software. The reason I know this and I'm still on the verge of making up my mind entirely is that I value a no-faff "Just Works"(TM) setup over something theoretically more capable, and frankly the RS1219+ is still overkill. But the techie in me is still irked by this comparison.

So given my needs and wants...

- 10GbE SFP+ capable
- 2-post rackmount
- Storage flexibility (e.g. gradual capacity expansion)
- Minimum 8 disks base, and expandable (though I don't think it will ever come to that)
- Decent enough performance for VMs on SSD
- 1GB/sec sequential read capability
- Cloud syncing for backup (Synology's own option looks to be a cheap way to get a flexible 1TB, and will back up in bulk to Glacier Deep Archive when that hits the street)
- Remotely accessible personal Dropbox-esque storage
- Client backups
- Automated backups to external USB/NAS devices

Is there any really compelling reason I may not have considered for going home-brew or another appliance?
Yep, that was one of the first considerations, but it breaks requirement number 3...

- Storage flexibility (e.g. gradual capacity expansion)

I didn't get past this hang-up to really check into how well it ticks the rest of the boxes, admittedly.
If it takes a 33-page unofficial technical document written by "some guy on the internet" to step through how it may be possible, I'd argue that it still breaks requirement number 3.
True that... but it basically does the same thing as the off-the-shelf solution's automated processes, so I figured it wouldn't be that difficult to write a script as you go, customised to your particular hardware, so it's a one-off pain.
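To be fair, the manual version of "gradual capacity expansion" on plain Linux mdadm isn't much of a script - something like the sketch below. All device and array names here are made up, it assumes ext4, and it defaults to a dry run that only prints the commands rather than touching anything:

```shell
#!/bin/sh
# Hypothetical sketch of gradual mdadm capacity expansion: swap each member
# for a bigger disk, let it resync, then grow the array and filesystem once
# every member is the larger size. DRY_RUN=1 (the default) only prints commands.
DRY_RUN=${DRY_RUN:-1}
run() { [ "$DRY_RUN" = 1 ] && echo "would run: $*" || "$@"; }

# Per-disk swap (repeat for each member, waiting for the resync in between):
run mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1   # retire one old disk
run mdadm /dev/md0 --add /dev/sdd1                       # add its larger replacement
# ...watch /proc/mdstat until the rebuild finishes before touching the next disk...

# Once every member has been replaced with a bigger one:
run mdadm --grow /dev/md0 --size=max   # let the array use the new capacity
run resize2fs /dev/md0                 # grow the filesystem (ext4 assumed)
```

The catch, of course, is that this is exactly the multi-day resync-per-disk dance the appliance button hides from you - the script is short, the waiting isn't.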
It's a fair point, and I wasn't aware this was even a theoretical possibility. But stuff like that is approximately a million miles away from what I want to be doing on my main storage device. Maybe it's doing exactly the same thing under the covers as Synology does, but the difference is that on Synology, or anything else with the capability baked in, tens of thousands of people before me have clicked that "add disk, yo" button.
Synology all the way. I have probably 30-40 of them out in the wild, with 2 being used for 10GbE storage in a hybrid flash setup.
Thanks for the input, your reference to hybrid flash brings me to something else I've been pondering. I was planning to throw a pair of SSDs in there for VM storage and not bothering with caching, though if I were to I could undoubtedly put in some smaller ones and save a few quid. Would you suggest the effect is transparent so long as the SSDs cover your working set?
I think they're now going to try an Asus. Thought I'd throw this in there - not helpful, I guess, but maybe something to be aware of.
They make a difference with VMs, but for typical file storage I remember seeing an article saying they don't make too much difference.
You have to take Steve with a pinch of salt - most of these guys know sweet FA about storage. He had a bad PSU - it happens. If you're buying a NAS thinking it's a backup then you're already set up for failure. He moans about the "migration" between the failed and replacement unit when in fact it takes 5 mins max and everything is back to how it was - that's pretty darn good compared to other NAS devices. Once you start buying fully redundant boxes to cover every angle, you then buy another in case that fails, and then cloud backup of the data off of that. So for the money he makes from YouTube, he should have been running 2 in an HA pair minimum - plus he's using a home unit for his "business critical" data, rather than something from the RackStation line with dual PSUs and ECC RAM. Same as Linus with his 45 Drives - it'll fall over one day and everyone in enterprise IT will just laugh...
If you like the idea of the Synology for the software, check out Xpenology. You can home-brew the hardware with that. There are a few LogicCase 2U and 3U cases that come with 8 or 12 bays - or, in the case of the one I got, 8 3.5” bays plus 3 5.25” bays which can be turned into 12 2.5” or 3-5 3.5” bays. Standard ATX or an SFX PSU and something like a low-end i3. Again, in the case of the 3U case I bought you can use a full ATX motherboard, although only five full-height slots are accessible. Still, you can put a hardware SAS array card in your x16 PCIe slots, and with two cards that's potentially sixteen drives anyway. Plus a ten-gig NIC in an x1 or x4 slot?
I hadn't seen this latest one, but I did see the last broken-Synology video they posted. I kind of dismissed it though, on account of the point they were trying to make - "if your hardware breaks you have to get another Synology just to access your data" - being entirely false: you can pop the disks into any Linux system (even just a live CD) and have access to the data in 5 minutes. This was one of the things I was worried about - what happens if, 5 years down the line, things go pop and I decide I don't want another Synology, yet I'm stuck with it unless I want to recover everything from backup (and if I'm using Glacier by that point as planned, that could be rather expensive).

I'd like to think that the RS is a step up reliability-wise from the DS units, but hardware will break, and a dead PSU in an RS1219+ compared to a dead PSU in a homebrew server would leave me in the same position. The only thing I was particularly concerned about was the C2xxx death bug, which has been resolved in 18+ and later units.

I did check this out, but I got the impression it was less than plain sailing - pot luck with regards to what hardware works and what does not, whether DSM will update properly, whether certain features and functions work at all. If I got the wrong impression here, perhaps I need to take another look.
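For anyone curious, the live-CD escape hatch works because Synology volumes are ordinary Linux md RAID (with LVM on top for SHR), so a rough sketch looks like the below. The volume group and logical volume names are guesses that vary by model and volume type, and it defaults to a dry run that only prints the commands:

```shell
#!/bin/sh
# Hypothetical sketch: reading Synology disks on a generic Linux live system
# with mdadm and lvm2 installed. DRY_RUN=1 (the default) only prints commands.
DRY_RUN=${DRY_RUN:-1}
run() { [ "$DRY_RUN" = 1 ] && echo "would run: $*" || "$@"; }

run mdadm --assemble --scan       # detect and assemble the md arrays on the disks
run vgchange -ay                  # activate any LVM volume groups (used by SHR)
run mount /dev/vg1000/lv /mnt     # mount the data volume (names are assumptions)
```

In practice you'd check `/proc/mdstat` and `lvs` after the first two steps to see what actually appeared before guessing at the mount.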
See, I learn more here than from videos on the interwebs. Seems like with pretty much anything I ever watch, I have to increase the boatload of salt I need before watching.
Actually I'm not sure, in fairness. I don't use Synology, as I honestly find that openmediavault is more than sufficient for my needs. Maybe I might be able to tempt you with the knowledge that I'm going to be selling my i3 Gen 8 Microserver this week - 10TB disks are getting more sensible in price!! Plus you could put a big 2.5" drive in the CD bay instead of using the optical drive. If you're happy with software RAID - I am, it works fine - just put a 10GbE NIC in the PCI-E x16 slot, of course.
How's sequential performance in your case? I.e. can it saturate 10Gbit? I'm using a microserver now and one of the downers is that it takes up so much vertical space, so definitely going for something rackmounted.
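For anyone wanting to put a rough number on "can it saturate 10Gbit" locally (before the network gets involved), something like the sketch below works. The file name and size are arbitrary, and for honest numbers the test file needs to be bigger than RAM, otherwise the page cache flatters the read speed - a tool like fio gives more reliable figures if it's available:

```shell
#!/bin/sh
# Crude sequential read check (hypothetical file name; size deliberately tiny
# here - use a file bigger than RAM on real hardware). Saturating 10GbE needs
# roughly 1.2 GB/s sustained from the array.
F=${F:-seqtest.bin}
dd if=/dev/zero of="$F" bs=1M count=64 conv=fsync 2>/dev/null  # write a sample file
dd if="$F" of=/dev/null bs=1M                                  # read back; dd reports MB/s on stderr
rm -f "$F"                                                     # clean up the test file
```

The write step also gives you a ballpark for sequential write, though `conv=fsync` only forces a flush at the end rather than per-block.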