Discussion in 'Hardware' started by Pookeyhead, 1 Jan 2010.
I went super cheap in my fileserver build, most of which came from my junkpile^H^H^H spares box.
All I had to actually purchase was the cheapest s775 chip I could find, and 3 of these.
This gives me a server expandable to approx 28TB!
I got one of those; it's dirt slow, and after 6 months it decided to corrupt a whole drive of data, so yeah, not so happy with them....
I'll be looking around then. Thanks for the heads-up.
Exactly. I started the thread in case anyone was doing the same thing, so they can learn from my mistakes, and benefit from the other posters who have given me valuable advice.
If I were to do it again, I'd probably spec better hard drives. Not that the Cav Greens are too slow or anything, but they are not server-grade drives, and there is a risk that if they take too long recovering from an error (desktop drives lack the time-limited error recovery of server drives), the controller can drop them and corrupt the RAID. I bought the drives when I was convinced I was going to use 2x NAS boxes, until I realised they'd be too slow. The Cav Greens were on the recommended list for the Thecus NAS.
As I have a dual RAID5 setup, one mirroring to the other, this doesn't bother me, but with a single server solution, I would definitely pay the extra for server grade drives.
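For anyone costing this out, the RAID5 capacity maths is simple: one drive's worth of space goes to parity, and every member is limited by the smallest drive. A minimal sketch (the 2TB drive sizes below are illustrative, not the poster's actual drives):

```python
def raid5_usable_tb(drive_sizes_tb):
    """Usable capacity of a RAID5 array in TB: one drive's worth of
    space is lost to parity, and every member is limited to the size
    of the smallest drive in the array."""
    n = len(drive_sizes_tb)
    if n < 3:
        raise ValueError("RAID5 needs at least 3 drives")
    return (n - 1) * min(drive_sizes_tb)

# 3 x 2TB drives: 4TB usable, one drive of fault tolerance.
print(raid5_usable_tb([2, 2, 2]))   # → 4
# Adding a 4th drive of the same size grows the array to 6TB usable.
print(raid5_usable_tb([2, 2, 2, 2]))  # → 6
```

Note that mirroring one RAID5 array to a second one, as above, halves the total usable space again but protects against a whole-array failure.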
Yes, the LSI card is hot swappable. To be honest though... does this matter? Shut down, swap, reboot. It's only a home server, not a big enterprise server where you'll be disconnecting 500 angry office workers while you swap drives. Plus, if you mirror the server to a RAID NAS like I do, you can just work off that while you do any work on the server.
I would also pay a little extra for an 8-port card to give a little headroom.
I would probably not use Vista as the OS either. I used it because I had it, and didn't have the time to explore a Linux distro solution or Windows Server 2008. I have had no problems whatsoever with Vista... I just realise it's not ideal. Now it's up and running, I'm reluctant to change anything. Some of the hardcore guys on here will scoff at using a regular OS, but it's a home server. You'll not be setting up a domain network, I imagine, and all it will be doing (in my case) is delivering files and writing backups. Pretty much any OS can do that.
Not that I did get a cheap RAID card, but I almost did... Ensure that your RAID card is a genuine hardware RAID card. Some of the cheaper ones offer no advantage over using the chipset, as they have no onboard processor. The card that was posted a couple of posts back is an example of a cheap card with no dedicated processor: it uses the CPU for parity calculations, just as your chipset does. Look for a Dell PERC controller on eBay if you want to save cash... but if you can't be arsed looking for a needle in a haystack (which is what it seems to be with the PERC cards, as they're so sought after), then I can recommend the LSI. It was a breeze to set up, it's fast (200MB/sec writes to a 3-disk RAID5 is good), and there's good online support for drivers and resources. I've certainly not heard anything bad about LSI.
Oh... if you DO get the same RAID card... make sure you place a fan to blow over the heatsink. The only bad thing about it is that the heatsink is feeble, and it gets VERY hot after a bit of hard work.
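To make the hardware-vs-fake-RAID point concrete: the work a proper card's onboard processor does is essentially XOR parity across the data blocks in each stripe, and a cheap card simply pushes that onto your CPU. A toy sketch of the calculation (the tiny byte strings are made up for illustration):

```python
from functools import reduce

def raid5_parity(blocks):
    """Byte-wise XOR across equal-length data blocks -- the parity
    calculation a hardware RAID card runs on its own processor, and
    a cheap 'fake RAID' card offloads to the host CPU."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

d0 = b"\x01\x02\x03"            # data block on drive 0
d1 = b"\x10\x20\x30"            # data block on drive 1
parity = raid5_parity([d0, d1])  # parity block on drive 2

# XOR is its own inverse: losing d1 is recoverable by XORing
# the surviving data block with the parity block.
recovered = raid5_parity([d0, parity])
assert recovered == d1
```

Every write to the array means recomputing parity like this, which is why a dedicated processor (and its cache) matters for sustained write speed.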
How would one tell if one was buying an actual RAID controller card as opposed to just a few extra SATA ports? There have to be more than 2 'genuine' cards on the market, if one isn't looking for quite the crazy-fast speeds that Pookey got.
Like a poster above, I would rather spend £200 than £300.
Has anyone checked out unRAID? Expandable arrays with parity - I love it.
We did a feature including it over Christmas and were going to look into it further sometime soon.
I wanted to do two server features: building my own WHS and building my own unRAID server, but I can't find the right SFF case to go with stuff.
unRAID looks great for people with a bunch of random hard drives of different speeds and sizes, but as it still writes parity data to a dedicated drive, surely there's still some CPU overhead to take into account, and therefore it's still not ideal for low-power CPUs? I notice the prebuilds on their site still use some pretty beefy controller cards.
Although am I correct in thinking parity can be written during idle times?
Am I also correct in thinking that performance doesn't scale with drive numbers?
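On the unRAID layout: each data drive holds its own independent filesystem, and a single dedicated parity drive holds the XOR of all of them, which is why a dead drive can be rebuilt from the survivors. A toy sketch, assuming equal-length blocks (the 4-byte "drives" below are obviously made up):

```python
def xor_blocks(blocks):
    """Byte-wise XOR of equal-length blocks."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

# unRAID-style layout: independent data drives plus one parity drive.
data_drives = [b"docs", b"pics", b"vids"]    # toy 4-byte "drives"
parity_drive = xor_blocks(data_drives)

# Drive 1 dies: rebuild its contents by XORing the surviving data
# drives together with the parity drive.
rebuilt = xor_blocks([data_drives[0], data_drives[2], parity_drive])
assert rebuilt == b"pics"
```

This also hints at the answers to the two questions: parity updates can indeed be deferred, since the data drives are readable on their own, and reads don't stripe across drives, so performance doesn't scale with drive count the way RAID0/5 does.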
For the record, those hot-swap cages rock; HOWEVER, they didn't fit quite right in my Thermaltake Armor case. Made me kinda sad, but considering I actually had a 1.5TB drive fail on me a couple of months ago (funny story to go along with that, were anybody to care), the swap and rebuild was beautiful.
Pretty nice builds. I have a couple of questions, or statements I guess.
1. I hope you marked which drives are plugged into which ports, or when a drive fails it might be a pain to figure out which drive it actually is.
2. For the love of all that is holy, never use software RAID. I don't care if you run Windows, WHS, Linux, Unix, whatever. Software RAID is no replacement for a GOOD RAID card, and on that note, stay away from companies like HighPoint.
3. The multi-drive-in-one cages are nice, but most are not hot-swappable. I had a bad experience where I tried to hot swap a drive in a 4-drive cage, and pulling the one drive shut down 4 drives of my 8-drive array.
I recently built a NAS, or SAN depending on how you look at it, using a more expensive Supermicro chassis, and I can say it works amazingly. I only have 1 data cable between my controller and the drives (up to 24 per case, since it's SAS). But it was also much more expensive.
Finally, can you try running IOMeter on your arrays as well?
Agreed. I did quite a bit of research since I'm a paranoid person, and when I built my server I ended up going with a 3ware 9650SE card. A bit more expensive, but god damn is it a nice card for RAID. It plus Lian Li's hot-swap cages (got 3 of the 4-drive ones) makes things a dream.
The card I used is NOT hot-swappable. It can have a hot spare installed... but it's NOT hot-swappable. If downtime is a no-no for you, then despite its awesomeness... I'd avoid my card.
Doesn't bother me, as I can work off my mirror if I need to rebuild.
That's not too bad. In some scenarios it's better to have a hot spare than hot swap, since hot swap is not going to automatically rebuild your array when a drive goes south.
Very true. I'm not bothered about downtime personally, but I know someone else asked, so I thought I'd share. I previously posted that it was hot-swappable, you see... I was just correcting that.
Didn't realise this thread existed.
I'll post mine shortly, but you can see the specs in the sig.
Needed slightly more than a simple file server, so I have a better CPU than most, but I'm very happy with the outcome and currently sit a fraction under 100W.
i7 server? Why such a beefy processor?
Probably the same reason I had a Q6600 as the CPU for my server - Web Services or transcoding
And here I am with an Athlon X2 3800+ in my fileserver -.-
Then again, I'm speccing up a new build with the new energy-efficient Athlon CPUs.
You could be right. It wasn't an "OMG you don't need that" type question. I'm well aware that we all build servers for different reasons. I'm just curious is all.