
Storage Uni's Server & Network Project! [Update 05-06-14 - Cisco Meraki Installed]

Discussion in 'Hardware' started by Unicorn, 5 Aug 2010.

  1. azazel1024

    azazel1024 What's a Dremel?

    Joined:
    3 Jun 2010
    Posts:
    487
    Likes Received:
    10
    If you are going to be putting in new wires, you might as well put in twice the wiring you think you need, since it helps you avoid having to rewire later. If you can afford the price and you think you are going to be staying in your house a long time, consider Cat6a; that is the minimum spec for 10GbE. Not that I think you need 1,000MB/sec transfers anytime soon, but if you think you are going to be there for, say, another 10 years, you might as well.

    Having just done a Cat5e backbone in my house for a GbE network, I priced out Cat6 and Cat6a. Cat6 seems to run around 50-70% more than Cat5e, and Cat6a carries roughly another 50% premium over Cat6. That said, unless you have a huge house, you could probably wire all the rooms you want for maybe $100-200 with Cat6a (figure 1,000ft; I ran three long lines, not counting the runs within my storage room, and used maybe 200ft of the 500ft of Cat5e bulk cable I bought).

    If you need VLANs, you can also set them up using most Intel NICs. It's nice to be able to do it through the switch, but you can do it on the adapter itself, so it's an option that avoids needing two adapters. You could set up a VLAN on the server to connect to customers' computers and screen it off from the regular LAN traffic. Just a thought (Intel NICs also support teaming/link aggregation).
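
    Just to illustrate the adapter-side idea (this is not the setup in this thread, and on Windows you would do it through the Intel driver instead): on a Linux host it boils down to creating a tagged sub-interface. The interface name and VLAN ID below are placeholders.

        # Rough sketch: create a VLAN-tagged sub-interface on a Linux host.
        # "eth0" and VLAN ID 10 are placeholders, not values from this build.
        import subprocess

        def create_vlan_interface(parent: str, vlan_id: int) -> None:
            name = f"{parent}.{vlan_id}"
            subprocess.run(["ip", "link", "add", "link", parent, "name", name,
                            "type", "vlan", "id", str(vlan_id)], check=True)
            subprocess.run(["ip", "link", "set", name, "up"], check=True)

        create_vlan_interface("eth0", 10)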

    When I wired, I didn't bother with multiple lines per computer, only because I have easy access to the locations where I'd consider needing link aggregation later. The one long line that is difficult to access just goes to my 10/100 router, so one run of Cat5e is overkill as it is. The other long runs I have complete access to, through my storage room and the under-stairs storage in the basement, all the way to the socket on the floor above. I ran two lines there, but my computer is hooked up to one, and once it is handed down to my wife, my future computer will also be hooked up there. If I needed more speed, another line or two for link aggregation might only take me an hour or two to run.

    Don't bother with a two-port NIC unless you absolutely have to save the space. In US dollars, Intel two-port GbE NICs seem to run around $150 or so and require four PCI-e lanes. A pair of single-port Intel GbE NICs needs two x1 PCI-e slots and runs about $60 for both cards. Power use for two single-port GbE NICs is within 1W of what a two-port GbE NIC uses (1.9W TDP for the single-port Intel GbE NICs, and I think the two-port is around 3.5W TDP).

    Also, don't get too caught up on motherboard features. Just plan for what you need over at least the next three years. Motherboards aren't that expensive. What's the worst that can happen? You plan poorly and in three years you have to buy a new $50-100 motherboard.

    I wouldn't bother with USB3 unless you already have external drives that use it; an eSATA enclosure is cheap enough, and a case with an eSATA port or an eSATA PCI-e card is cheap too. Go with integrated graphics for sure, to save PCI-e slots. SATA 6Gbps is not necessary unless you are going to be running the fastest SSDs out right now. The fastest mechanical drives aren't even pushing the SATA 3Gbps maximum (15,000rpm 2.5" drives can reach about 200MB/sec, and the fastest 7,200rpm 3.5" and 10,000rpm 2.5" drives only push about 150-160MB/sec sustained, barely beyond SATA 1.5Gbps). I doubt mechanical drives are going to need 6Gbps for at least another 5-6 years.

    Also, if you are only running GbE you won't saturate even SATA 1.5Gbps. Even with link aggregation you won't hit the maximum of SATA 3Gbps; it would take three aggregated links to match it (okay, exceed it: SATA 3Gbps is really only good for about 280MB/sec max with overhead, and three aggregated GbE links running perfectly might get you 320-350MB/sec after Ethernet overhead). And since mechanical drives aren't that fast right now, if you are going with spinning disks you will have to RAID them together to saturate more than one GbE link.
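
    Rough back-of-envelope working for those numbers (the ~90% efficiency figure for Ethernet/TCP overhead is an assumption, not a measurement):

        # Sketch of the arithmetic above: usable throughput per aggregated GbE
        # link versus the practical SATA 3Gbps ceiling quoted in the post.
        GBE_RAW_MB_S = 1000 / 8                  # 125 MB/s line rate per gigabit link
        GBE_USABLE_MB_S = GBE_RAW_MB_S * 0.90    # ~112 MB/s after assumed overhead
        SATA_3G_USABLE_MB_S = 280                # rough real-world ceiling

        for links in (1, 2, 3):
            total = links * GBE_USABLE_MB_S
            print(f"{links} aggregated GbE link(s): ~{total:.0f} MB/s "
                  f"(SATA 3Gbps ceiling ~{SATA_3G_USABLE_MB_S} MB/s)")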

    Anyway, I guess my point is that the latest, fanciest tech isn't really necessary outside an enterprise environment. At most it earns geek points in a home network, even a demanding one.

    From what you were talking about earlier, I'd spend most of your money on standardizing your disks and getting newer, higher-capacity ones. A small handful of 1, 1.5 or 2TB disks could consolidate all of your data, leave you room for expansion, and enormously cut down the number of spinning disks you have, reducing your power use, heat production and noise. It would also reduce the odds of a disk failing (fewer disks, and much newer ones). You'd also speed up network transfers, as I imagine some of those older disks can't come close to saturating a GbE link.
     
  2. Unicorn

    Unicorn Uniform November India

    Joined:
    25 Jul 2006
    Posts:
    12,726
    Likes Received:
    456
    First of all, thanks so much for reading the thread and replying with a very well thought out and useful post!

    A few things: yeah, I'm rewiring with Cat6a, that's already been decided. I did the whole garage and workshop with it towards the end of last year, and I'm putting two runs into every room but two, which will get three each. There are a total of nine rooms in the house, not including the bathroom, garage or workshop, which either don't need cabling or are already cabled. That makes a total of 20 lines that I need to run from the patch panel to the rest of the house, plus a second one to the workshop. I personally did all the old Cat5 cabling, so I know my way around the attics of the house pretty well and it shouldn't take me long to run it all.

    The only two machines on the network that are getting multiple Ethernet links are the server and my own desktop, which already has a dual port NIC in it.

    I do need VLANs, but I also need new switches. I figure if I'm buying a pair of GbE switches I may as well get managed ones and use them to set up the VLANs the "proper" way.

    You're right about the motherboard: I've decided that SATA 6Gb/s is not as important as I first thought, and at the moment I have settled on the ASUS board that has USB3 and SATA 3Gb/s. The USB3 is important for me because I already have several USB3 enclosures and a flash drive, although you're right in saying that I could just use eSATA as well.

    My disks are well on their way to being standardized - I've whittled them down to 5x 2TB drives and got rid of all those older mixed-capacity disks. The server is going to be running two RAID arrays for speed and redundancy.

    This is going to be both an enterprise and a personal environment - my workshop is on the same network as the house so it's used for both business and personal tasks.
     
  3. Unicorn

    Unicorn Uniform November India

    Joined:
    25 Jul 2006
    Posts:
    12,726
    Likes Received:
    456
    There's an awful lot of double posting going on in this thread, but I just thought of another thing that I'd like to include! I have a 30GB OCZ SSD here that needs to be put to good use, and I would like to use it as a cache like the guy who built the Black Dwarf NAS did, although to be honest I'm not sure how it's done. Is it a feature that the RAID card supports, or is it something the on-board storage controller handles?

    Another question: what do you think would be faster as the boot drive, a couple-of-years-old 74GB Raptor or a one-year-old 160GB Scorpio Black?
     
    Last edited: 8 Apr 2011
  4. azazel1024

    azazel1024 What's a Dremel?

    Joined:
    3 Jun 2010
    Posts:
    487
    Likes Received:
    10
    Raptor or VelociRaptor? If it's a VelociRaptor, it is probably faster than a 160GB WD Scorpio, in large part because of faster small-file transfers and probably medium-file transfers too; sequential speed I don't know about. You could also consider just using the 30GB SSD as the boot drive. I bought one for my file server, but then decided to use it as a second app disk in my main machine and just carve 30GB off the 1TB 7200rpm drive in my file server as a boot partition instead.
     
  5. Unicorn

    Unicorn Uniform November India

    Joined:
    25 Jul 2006
    Posts:
    12,726
    Likes Received:
    456
    Nope, just a standard 74GB Raptor - my pair of VelociRaptors live in my gaming PC as program and game storage ;) I want to use the SSD as cache, and besides, it's not really big enough for the boot disk. I'm leaning slightly more towards the Scorpio simply because it's quieter, smaller, easier to mount and has a higher capacity than the Raptor. Both the boot drive and the SSD are going to be mounted flat on the motherboard tray, under the (mATX in an ATX case) motherboard.
     
  6. saspro

    saspro IT monkey

    Joined:
    23 Apr 2009
    Posts:
    9,613
    Likes Received:
    404
    A hardware RAID card will have its own cache onboard (using RAM).
    You can only use a drive as a cache if you do software RAID using something like FreeNAS.
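
    For what it's worth, under FreeNAS the underlying ZFS pool can take an SSD as a read cache (L2ARC). A minimal sketch of the idea, with the pool name and device made up rather than taken from this build:

        # Sketch: attach an SSD as an L2ARC read cache to an existing ZFS pool.
        # Pool "tank" and device "ada3" are placeholders.
        import subprocess

        def add_ssd_read_cache(pool: str, device: str) -> None:
            # "zpool add <pool> cache <device>" adds a cache (L2ARC) device
            subprocess.run(["zpool", "add", pool, "cache", device], check=True)

        add_ssd_read_cache("tank", "ada3")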
     
  7. Unicorn

    Unicorn Uniform November India

    Joined:
    25 Jul 2006
    Posts:
    12,726
    Likes Received:
    456
    So did the guy who built Black Dwarf use software RAID? I thought he bought a RAID 5 capable Highpoint card?

    I've been pricing around for my Cat6a cable and have decided that because this is a really permanent installation, I'm going to put in high quality Excel S/FTP Cat6a (screened and fully shielded). It's working out at roughly £0.60 per metre.
     
    Last edited: 8 Apr 2011
  8. Unicorn

    Unicorn Uniform November India

    Joined:
    25 Jul 2006
    Posts:
    12,726
    Likes Received:
    456
    08-04-11 UPDATE - The Plan So Far

    Much of this is still subject to change, although hopefully none of it actually will, because it's fairly solid if you ask me ;) The server will be based around an S1156 Gigabyte H55M-USB3 motherboard with an Intel Core i3 540 CPU and 4GB of low-power OCZ Reaper 1066MHz memory. The CPU may be underclocked to increase energy efficiency, although it remains to be seen how the machine performs in general, and any such tweaks will be made after real-world testing.

    The storage will be handled by an enterprise-class Highpoint RocketRAID 3530LF controller, which can handle up to 12 drives of 2TB capacity and above. Attached to that controller will be six 2TB Western Digital Caviar Green drives, mounted in the six 3.5" bays of the Antec Three Hundred behind the front intake fans. These six drives will make up the primary array in a RAID 5 configuration, totalling just under 10TB of usable storage capacity. Also attached to the card will be another six 2TB drives, in this case Samsung SpinPoint F4EG drives: five will be mounted in an Icy Box IB-555SSK backplane/hot-swap enclosure installed in the Antec Three Hundred's three 5.25" bays, and one will be mounted in the remaining space on the motherboard tray. These six disks will be configured as a second single-partition RAID 5 array and will serve as a complete backup of the primary array. Backups will run incrementally on a daily basis, at an off-peak time when usage is at a minimum.
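
    A quick sanity check on the "just under 10TB" figure (simple arithmetic, nothing taken from the controller itself):

        # RAID 5 usable space is (n - 1) drives; a decimal "2TB" drive appears
        # to the OS as roughly 1.82 TiB, hence a shade over 9 TiB usable.
        def raid5_usable_tib(drives: int, drive_tb: float) -> float:
            tb_to_tib = 1e12 / 2**40      # ~0.909
            return (drives - 1) * drive_tb * tb_to_tib

        print(f"6 x 2TB in RAID 5: {raid5_usable_tib(6, 2.0):.2f} TiB usable")  # ~9.09 TiB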

    The system's boot disk will be a 160GB Western Digital Scorpio Black, with an additional 30GB of flash storage provided by an OCZ SSD, which may or may not be used as a fast-access data cache. The server will be connected to the new network using a dual-port HP NC360T NIC, giving two aggregated 1GbE links to the rest of the network for increased transfer speed and multitasking performance. The server will be powered by a 650W Corsair HX series PSU with custom wiring looms to power both six-drive arrays plus the boot drive and SSD.

    I am still working on the server design, and am attempting to implement a Matrix Orbital LCD and keypad to monitor network traffic, storage capacity, array health and other data, since this is a headless server.
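
    A very rough sketch of the monitoring side only, assuming the stats are gathered with something like psutil (driving the Matrix Orbital display itself would need its own serial driver and isn't shown; array health would come from the RAID card's own tools):

        # Gather the kind of figures mentioned above and print them periodically;
        # in the real build each line would be pushed to the LCD instead.
        import time
        import psutil

        def status_line() -> str:
            net = psutil.net_io_counters()
            disk = psutil.disk_usage("/")
            return (f"net {net.bytes_recv >> 20} MiB in / {net.bytes_sent >> 20} MiB out | "
                    f"storage {disk.percent:.0f}% used")

        while True:
            print(status_line())
            time.sleep(10)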

    The home network will be upgraded from the current Cat5e cabling to S/FTP Cat6a to match the data cabling recently installed in the workshop, and some rooms which were not previously networked will be added. Routing and wireless access will be handled by a custom Linksys WRT320N running the DD-WRT firmware. Two gigabit Dell PowerConnect 5324 (24-port) managed switches will handle all network connections, and security will be handled by a Cisco PIX 506E firewall. The exact network structure, including VLANs and aggregated interconnects, is to follow. All of this networking hardware will probably be provided by forum member Zoon, who has also been an invaluable help so far in the design and layout of both the network and server.
     
  9. ShakeyJake

    ShakeyJake My name is actually 'Jack'.

    Joined:
    5 May 2009
    Posts:
    921
    Likes Received:
    71
    Too much power? All you need is an Atom or an AMD Hudson if you're just serving files. The i3 will sit and drink electrickery for no good reason.
     
  10. Unicorn

    Unicorn Uniform November India

    Joined:
    25 Jul 2006
    Posts:
    12,726
    Likes Received:
    456
    Nope, it's going to serve workstations as well in the future, including server-hosted software. You can't get much lower than 73W for what I need, and if it needs more grunt (doubtful) in the future I can drop an i5 in. Think about it this way: the P4 Northwood that's in it at the moment has a TDP of approximately 80W, plus there's a Radeon 9800 Pro discrete graphics card in there which is bound to be drawing a few watts, along with an inefficient S478 motherboard and several IDE hard drives. I'm making a significant improvement in both processing power and energy efficiency. Also, did I mention that I stream lossless audio to a stereo, a media centre and the PCs in the house, along with full HD (sometimes 16GB) Blu-ray video files?

    Anyway, if I just wanted a network attached file serving device I'd be building a NAS ;)

    Oh and don't think I didn't notice that electrickery comment... someone been watching The Boat That Guy Built? ;)
     
  11. ShakeyJake

    ShakeyJake My name is actually 'Jack'.

    Joined:
    5 May 2009
    Posts:
    921
    Likes Received:
    71
    Cool, I didn't realise you had big plans for it, though I bet you're still overspecced (which is definitely better than underspecced!)

    I'm afraid I don't know what that is? TV show? Electricity is magic = electrickery!
     
  12. Unicorn

    Unicorn Uniform November India

    Joined:
    25 Jul 2006
    Posts:
    12,726
    Likes Received:
    456
    Yeah it's not actually mentioned anywhere in the thread but it runs WS 2008 at the moment and will be used for proper server tasks in the very near future :)

    Possibly could be overspecced, but if you knew me you'd know that's nothing unusual ;)

    Yeah, it's a BBC TV show about Guy Martin; he uses the word "electrickery" quite a lot on it :hehe:
     
  13. Ross1

    Ross1 What's a Dremel?

    Joined:
    15 Feb 2009
    Posts:
    194
    Likes Received:
    5
    Since my hardware RAID card broke, I'm kind of wary of recommending hardware RAID, especially on a non-server, non-ECC mobo.

    I tried to have it all with my PC - storage server, music workstation, gaming PC, silent, overclocked, you name it - and for the most part it's been great, but hardware RAID is something I will look to get away from in the future.
     
  14. Unicorn

    Unicorn Uniform November India

    Joined:
    25 Jul 2006
    Posts:
    12,726
    Likes Received:
    456
    What card did you use? It mustn't have been great if it just "broke". You do realize there is absolutely no way for me to achieve the storage capacity or performance that I need without using hardware RAID? The card I have chosen was designed for an enterprise environment and frankly should not fail, no matter what board it is in. Did you mean ECC memory? I can't see why it would make a difference whether I put the controller in a server motherboard or a standard desktop one - it's a motherboard with a PCI-E slot that will accommodate and power the storage controller just the same as a server-grade board would.

    I would love to use a server board for this build, with ECC memory too, to maximise both performance and reliability, but the simple fact is that I can't afford to buy a server motherboard, CPU and ECC memory for it, and I think it would be unwise to, because for a lot less I can build a great server that should serve its purpose very well.
     
  15. azazel1024

    azazel1024 What's a Dremel?

    Joined:
    3 Jun 2010
    Posts:
    487
    Likes Received:
    10
    Unless you need it pronto, I'd suggest holding off a few weeks. It looks like Intel might be introducing some of their low-power dual-core Sandy Bridge chips. A 35W Intel Pentium 620T would be your best bet: low power, and its expected price is somewhere around $70 in 1,000-unit lots. You might pay slightly more for the board, but maybe not; LGA1155 boards don't seem to be all that much more than 1156 boards.

    Or if you want something now, look at AMD's e-series, maybe a 245e or a 250e. You can find them on eBay or elsewhere in the range of $60-100. They are 45W parts and, while maybe not as fast as the i3, they have plenty of processing power. My Sempron machine runs at 38W idle with a 45W part (that said, it only ramps to about 50-55W flat out, so in a normal worst case it seems the CPU isn't drawing more than about 25-30W, despite the 45W TDP spec).

    You are definitely going to be doing more with the system than I am, but I have a Sempron 140 in my file server and it has no issues handling transfers at line speed, plus concurrent RDC and web surfing, all without CPU load spiking past about 40% (mostly from loading new pages, etc.). Typical load is in the 20-30% range, and that is with a lot of the work falling on the CPU rather than the network card (large segment offloading isn't supported in XP). It also has just 2GB of memory, with about 1,100MB free under worst-case scenarios.
     
  16. Unicorn

    Unicorn Uniform November India

    Joined:
    25 Jul 2006
    Posts:
    12,726
    Likes Received:
    456
    I'm not going to be doing anything with the server for at least a few weeks, because I won't have time to and I need it online for various things anyway. Plus I have the new network cabling to install, so I may hold off and see what Intel do with SB in the next while. I haven't seen (nor can I find) any stats on the Pentium 620T yet, so unless they release it and it turns out to be stunning, I will stick with the i3 game plan for the moment, but see what 1155 SB chips surface in the next few weeks. It would have to be a sub-£100 board and a sub-£100 chip though, which may not be the case. If it isn't, the i3 is going straight in there.

    There's no point in faffing about with decisions that may hold my progress up and not actually make any real difference in the long term, when I have already built, overclocked and sold countless energy-efficient machines based on the i3 CPU and H55 chipset. I've seen what it can do and what it costs to run, and it's more than acceptable compared to what is sitting there at the moment. Come the 30th of this month, if Intel haven't blown me away with some really affordable dual-core SB CPUs, I'm pulling the trigger on the i3.
     
    Last edited: 12 Apr 2011
  17. Zoon

    Zoon Hunting Wabbits since the 80s

    Joined:
    12 Mar 2001
    Posts:
    5,888
    Likes Received:
    824
    I'm unsure what bearing you think your computer's memory or motherboard has on a hardware RAID card - a hardware RAID card has its own processor and memory for parity calculations, which is why it's called hardware RAID.

    You could in fact take a hardware RAID card and all the disks, plug them into a new server, and off you go. In terms of operation, the only thing it needs the motherboard for is to pass data back and forth.

    I'm sorry if you knew this already - I don't want to sound patronising or anything - but what you're saying doesn't seem right?
     
  18. azazel1024

    azazel1024 What's a Dremel?

    Joined:
    3 Jun 2010
    Posts:
    487
    Likes Received:
    10
    I think his point was about the RAID card itself dying. Since not all RAID parity calculations, stripe layouts, etc. are the same, the only way you can recover data from a RAID array if the hardware RAID card dies is probably to buy the exact same RAID card (or one with the same controller, anyway) as the one that died.

    Software RAID depends only on the software: if the machine dies, any other machine running the same OS can recover the array.
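
    For instance, with Linux's mdadm the array metadata lives on the member disks themselves, so a replacement box can simply rescan and reassemble; a minimal sketch of that idea (assumes mdadm is installed, and isn't specific to this build):

        # Scan all block devices for md superblocks and assemble any arrays found.
        import subprocess

        def reassemble_software_raid() -> None:
            subprocess.run(["mdadm", "--assemble", "--scan"], check=True)

        reassemble_software_raid()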
     
  19. Unicorn

    Unicorn Uniform November India

    Joined:
    25 Jul 2006
    Posts:
    12,726
    Likes Received:
    456
    I've already given my reasons for using hardware RAID, and I think most will agree it's the right way to go. Sorry to hear your card broke, but I'm honestly sat here thinking it can't have been much of a card if it simply died one day. Storage controllers get just as much attention paid to quality, performance and reliability as the storage media themselves, like hard drives and SSDs. If you buy a good enough one, it shouldn't fail before its MTBF and won't give you performance problems or compatibility headaches.
     
  20. Da_Rude_Baboon

    Da_Rude_Baboon What the?

    Joined:
    28 Mar 2002
    Posts:
    4,082
    Likes Received:
    135
    I have also had horrific experiences with RAID cards, and I will never buy an Adaptec product again. The amount of lost time, lost data and additional expense that company has caused me is shocking. It was my employer's money, so I personally didn't lose out, but I still hate them and their sorry excuse for support.

    Uni, good luck with your project - I am very interested in watching it progress. :thumb:
     
