Build Advice 3 questions.....

Discussion in 'Hardware' started by winkyt, 26 Jan 2011.

  1. winkyt

    winkyt What's a Dremel?

    Joined:
    11 Nov 2010
    Posts:
    12
    Likes Received:
    0
    Howdee,

    I'm looking at building a new system, but since the 2nd-gen Intel chips came out my choices have changed.

    I will obviously be using the amazing i5-2500K processor and I want to OC it.

    i5-2500K
    *motherboard (see below)
    Gelid Tranquillo or Thermaltake Frio
    6-8GB Corsair XMS3 PC3-12800 or Dominator TR3X6G1600C8D
    2x SSDs in RAID 0

    I have a few questions I would love some feedback on:

    1) I want to build it in a small Lian Li case that will only take micro-ATX mobos, so I was looking at using the Asus P8P67-M mobo. Does anyone know if this is the same as the Asus P8P67 mobo tested in CPC's last issue, minus a few extra PCI slots etc., but in a micro-ATX form? Is the BIOS the same?

    2) I'm a bit confused about duel and triple channel RAM. The Corsair XMS3 PC3-12800 is duel channel, but would it be beneficial to go for the Corsair Dominator TR3X6G1600C8D triple-channel RAM instead? :thumb: Sorted thx to Fingers66!!

    3) The reason I waited to build this was for 6Gb/s SATA, so I could run two SSDs in RAID 0 and get some fast read and write speeds going. I'm looking at getting two 60GB drives and can't decide between 2 Corsair 60GB Force SSDs (read 285MB/s, write 275MB/s) or 2 Crucial 64GB RealSSD C300s.
    The C300 has a higher read speed but a poor write speed, though it's also SATA 3 (6Gb/s). Even though the Corsair Force is only SATA 2, running them in RAID 0 should mean I get close to double the speed anyway. Any advice would help, as this is only from what I have read.
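    As a rough sketch of the arithmetic behind the "close to double the speed" expectation (the C300 figures and the 0.9 scaling factor here are assumptions; real RAID 0 arrays tend to land a bit under 2x):

```python
# Rough sketch of expected RAID 0 sequential speeds. The Force figures are
# the specs quoted above; the C300 figures and the 0.9 scaling factor are
# assumptions, not measurements.

force_60gb = {"read": 285, "write": 275}  # MB/s, Corsair 60GB Force (quoted)
c300_64gb = {"read": 355, "write": 75}    # MB/s, Crucial 64GB C300 (approximate)

def raid0_estimate(drive, n=2, scaling=0.9):
    """Ideal RAID 0 is ~n * single-drive speed; `scaling` hedges for overhead."""
    return {k: round(n * v * scaling) for k, v in drive.items()}

print(raid0_estimate(force_60gb))  # -> {'read': 513, 'write': 495}
print(raid0_estimate(c300_64gb))   # -> {'read': 639, 'write': 135}
```

    Even with imperfect scaling, the Force pair's balanced read/write numbers look stronger overall than the C300 pair's write-limited ones.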

    Any help, advice or info on different/better components is massively appreciated!!

    Thx

    /wink
     
    Last edited: 26 Jan 2011
  2. Publ!c Enemy

    Publ!c Enemy or Richard for short

    Joined:
    4 Jul 2010
    Posts:
    176
    Likes Received:
    5
    For number 3, I wouldn't go with RAID; the benefits are good, but because they're SSDs they kinda need TRIM, and when the drives are in a RAID array it blocks the TRIM command. If it was my money I would just get a 128GB C300, which improves on the write speeds of the 64GB model and is SATA 6Gb/s.
    Hope this helps :)
     
  3. Fingers66

    Fingers66 Kiwi in London

    Joined:
    30 Apr 2010
    Posts:
    8,699
    Likes Received:
    925
    For number 2, Sandy Bridge CPUs & motherboards use dual-channel RAM, so you need to buy in pairs. Buy either 4GB (2 x 2GB sticks) or 8GB (2 x 4GB). To get 6GB you would need to buy 4GB (2 x 2GB) and 2GB (2 x 1GB).

    RAM prices are very cheap at the moment and are allegedly going to rise. I would buy 8GB (2 x 4GB sticks) if buying for a Sandy Bridge build.
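    The matched-pair rule above can be sketched as a quick enumeration (the available stick sizes are an assumption based on what was commonly sold):

```python
# Sketch: ways to reach a capacity target with matched pairs on a
# dual-channel board with 4 slots. Stick sizes (GB) are an assumption.
from itertools import combinations_with_replacement

def matched_pair_configs(target_gb, sizes=(1, 2, 4)):
    """List ways to reach target_gb using one or two matched pairs."""
    configs = []
    for s in sizes:  # one pair (2 sticks)
        if 2 * s == target_gb:
            configs.append(f"2 x {s}GB")
    for a, b in combinations_with_replacement(sizes, 2):  # two pairs (4 sticks)
        if 2 * a + 2 * b == target_gb:
            configs.append(f"2 x {a}GB + 2 x {b}GB")
    return configs

print(matched_pair_configs(6))  # -> ['2 x 1GB + 2 x 2GB']
print(matched_pair_configs(8))  # -> ['2 x 4GB', '2 x 2GB + 2 x 2GB']
```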
     
  4. GoodBytes

    GoodBytes How many wifi's does it have?

    Joined:
    20 Jan 2007
    Posts:
    12,300
    Likes Received:
    710
    Duel?! Having 2 sticks of RAM doesn't make the RAM fight for their lives, nor have fun throwing challenges at each other, nor play Street Fighter 4 while you use your computer. :)
     
  5. Fingers66

    Fingers66 Kiwi in London

    Joined:
    30 Apr 2010
    Posts:
    8,699
    Likes Received:
    925
    Oh, I forgot to answer question number 1. The Asus P8P67-M is indeed micro ATX and should have virtually the same EFI BIOS as the P8P67. There is also a P8P67-M Pro model that is micro ATX and has a second PCI-E x8 slot for Crossfire/SLI should you want the option in future - it is only £12 more.
     
  6. winkyt

    winkyt What's a Dremel?

    Joined:
    11 Nov 2010
    Posts:
    12
    Likes Received:
    0
    That's the kind of help I was after, Fingers66, so thanks again!!

    I've been looking at some RAM on Ebuyer and saw this http://www.ebuyer.com/product/173122
    It gets very good write-ups and is nice and cheap. I want to get 8GB in 2x 4GB sticks, so I was then looking at this http://www.ebuyer.com/product/247675 - the only difference I can see is the lower timings on it, 7-8-7-20 as opposed to the 2GB sticks that have 9-9-9-24 (and the price, of course).
    I then also spotted this http://www.ebuyer.com/product/247676 which seems identical to the 2GB sticks with the good write-up apart from the voltage: 2GB sticks = 1.65V, 4GB stick = 1.5V. Is this something I need to worry about, or is it normal to have lower voltage on bigger sticks of RAM? :confused:

    thx
     
  7. adam_bagpuss

    adam_bagpuss Have you tried turning it off/on ?

    Joined:
    24 Apr 2009
    Posts:
    4,239
    Likes Received:
    152
    Stick with 1.5V for RAM, as 1.65V is at the very upper limit of safe; keep the Vcore and RAM voltage margin <0.5V apart.
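    A minimal sketch of that rule of thumb, assuming the 0.5V margin applies between the DIMM voltage and Vcore as stated (the example Vcore figures are illustrative, not from the thread):

```python
# Sketch of the voltage-margin rule of thumb: DIMM voltage should stay
# within ~0.5V of the CPU voltage. The Vcore values below are illustrative.
def ram_voltage_safe(vdimm, vcore, margin=0.5):
    """True if the DIMM voltage is within `margin` volts of the CPU voltage."""
    return abs(vdimm - vcore) < margin

print(ram_voltage_safe(1.5, 1.25))   # 1.5V sticks, ~1.25V Vcore -> True
print(ram_voltage_safe(1.65, 1.1))   # wider gap -> False
```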
     
  8. GoodBytes

    GoodBytes How many wifi's does it have?

    Joined:
    20 Jan 2007
    Posts:
    12,300
    Likes Received:
    710
    G.Skill makes excellent 1.5V RAM.
    I highly recommend them. I find them just as good as Corsair.

    I have the G.Skill Pi series 1.5V 7-8-7-24 1600MHz DDR3 RAM. Runs cool.
     
  9. PocketDemon

    PocketDemon What's a Dremel?

    Joined:
    3 Jul 2010
    Posts:
    2,107
    Likes Received:
    139
    Since the OP's looking at SandForce (SF) drives in RAID 0, there's no issue at all...

    Whilst TRIM can help to reduce the read-erase-write cycles on them even further, all of the testing shows that they're robust enough solely with their garbage collection (GC), & so 2 of them in R0 will give much better performance than a 128GB C300 & not become shonky...

    [NB this isn't saying that TRIM has no +ve effect btw (re the recent thread) - simply that the SFs are a very good choice in non-TRIM environments.]


    The one thing you should bear in mind is that the C400, Corsair's seemingly quicker version of the C400 & the V3 (which specs at quicker than both of them) aren't far from being released.

    Obviously they're going to be more expensive than the current drives, but it's worth considering waiting; the C400 is some time in Feb & Bit-tech posted that the V3 would be March (I'd previously understood that it was end of March or April, so this fits in).
     
  10. Publ!c Enemy

    Publ!c Enemy or Richard for short

    Joined:
    4 Jul 2010
    Posts:
    176
    Likes Received:
    5
    Sorry PocketDemon, I didn't realise about SF drives; I thought that all SSDs should use TRIM.
     
  11. winkyt

    winkyt What's a Dremel?

    Joined:
    11 Nov 2010
    Posts:
    12
    Likes Received:
    0
    I read your comments in 'SSD setup for boot drive' but tbh I got a wee bit confused.
    But I really like the idea of running two SSDs in a RAID with a 6Gb/s SATA mobo to see what type of speeds I get, so your above comment about SF drives not needing TRIM makes me very happy :D
    You're right about waiting to see what the next lot of SSDs are like, but if I do I'll never buy a new PC, as I'll always be waiting for the next thing round the corner. I have already waited till 6Gb/s SATA came out and I don't think I can wait any more :wallbash:
     
  12. PocketDemon

    PocketDemon What's a Dremel?

    Joined:
    3 Jul 2010
    Posts:
    2,107
    Likes Received:
    139
    Right, the C300 limps over the 3Gb/s mark into 6Gb/s territory...

    Whereas the other drives mentioned are significantly faster 6Gb/s SATA drives.


    Now, due to the current bandwidth limitations of the 6Gb/s onboard controllers, you'd only be able to use a single one of the (soon to be released) 'proper' 6Gb/s SSDs before becoming bandwidth limited on the larger sequential speeds...

    ...& if you wanted to run a SF (which the 60GB Corsair Force is), then this will run faster on the 3Gb/s controller than the 6Gb/s controller.


    So, 'if' you want a single SSD then you'd be much better off waiting for the new drives to be released, & 'if' you're happy with R0 then stick a pair of SFs on the 3Gb/s controller.

    [NB the difference in performance between a pair of 3Gb/s SFs in R0 & a single 6Gb/s SF 'should' be pretty minimal - & you may well find that the former is faster.]


    No worries...

    It's also not the case that any current SSD 'should'/has to use it...

    Whilst there will always be some advantage with read-erase-write cycles by using TRIM, it really comes down to some SSD controllers (esp. the SFs) coping better without it performance-wise than others (esp. the C300).


    For a long time there have been people successfully using multiple C300s in R0 on 3rd-party PCIe RAID cards (which will give faster reads but slower writes on the <=128GB models than the same number of SFs), but they aren't as robust/are more prone to performance degradation due to the poorer GC implementation than the SFs.

    &, separately, the read-erase-write cycle increase in a non-TRIM setup is far less pronounced with the SFs.


    Anyway, because the OP was suggesting something sensible with the pair of SFs in R0, which doesn't have any major downside (well, important data should be backed up anyway so...), this is simply extra information.
     
    Publ!c Enemy likes this.
  13. winkyt

    winkyt What's a Dremel?

    Joined:
    11 Nov 2010
    Posts:
    12
    Likes Received:
    0
    Ahhh, for some reason I thought that the SATA ports on a motherboard shared the same controller, so running 2 SSDs in RAID 0 on a 3Gb/s SATA board would lead to it being throttled to the 275MB/s limit (the same speed I would get using just 1 SSD).

    Maybe I should just buy the bits for my upgrade and use my two Raptor drives till the new C400 drives are out :confused:
     
  14. PocketDemon

    PocketDemon What's a Dremel?

    Joined:
    3 Jul 2010
    Posts:
    2,107
    Likes Received:
    139
    Well, yes & no...

    Having looked at a number of the current onboard 6Gb/s controllers, they are linked, in pairs of ports, to a single PCIe 2.0 lane, giving a max bandwidth of 500MB/s (minus overheads) between them -> ~480-485MB/s in real life.

    [Edit - corrected values] Whereas something like the 3Gb/s ICH9R/10R controllers caps out at ~615-740MB/s (depending upon the number of drives in the array -> once the bandwidth is saturated, the more drives then the lower the max sequential) - which works out to roughly 2.5 (so 3 being the optimum for sequential speeds in R0) 3Gb/s SSDs (dependent upon the model) in R0 in real-life usage.
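    A sketch of where the ~480MB/s figure comes from: a PCIe 2.0 lane runs at 5GT/s with 8b/10b encoding, i.e. 500MB/s of raw payload bandwidth, with a few percent lost to protocol overhead (the exact overhead fraction here is an assumption):

```python
# Sketch of the quoted bandwidth figures. A PCIe 2.0 lane is 5 GT/s with
# 8b/10b encoding -> 500 MB/s raw; the ~3.5% protocol overhead is an assumption.
def pcie2_lane_bandwidth(lanes=1, overhead=0.035):
    """Approximate usable bandwidth (MB/s) over `lanes` PCIe 2.0 lanes."""
    raw = lanes * 5e9 * (8 / 10) / 8 / 1e6  # transfers/s -> payload MB/s
    return raw * (1 - overhead)

print(round(pcie2_lane_bandwidth()))  # ~482 MB/s shared by the two 6Gb/s ports
```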


    Now, as these are obviously the max bandwidths, if you attempt to connect more drives in an array than this total bandwidth supports, the immediate impact will be on the large sequential reads...

    ...however, firstly, sequential writes can still scale slightly more &, secondly, smaller r/ws (esp. with the higher-QD testing that's vastly more representative of actual OS activity than single random small r/w testing) can still continue to scale beyond that.


    So the better choices would be either to buy 2x 3Gb/s SFs now (& put them on the 3Gb/s ICH10R controller)...

    ...or to wait for comparative reviews between the V3, Corsair's faster version of the C400 (as the C400 is provisionally the worst of all of them - though better than the C300 naturally) &, possibly (as it's going to be later), whatever Intel comes up with.
     
    Last edited: 28 Jan 2011
  15. winkyt

    winkyt What's a Dremel?

    Joined:
    11 Nov 2010
    Posts:
    12
    Likes Received:
    0
    /confused again :sigh:

    If I understand you correctly: the 3Gb/s SATA controllers on a motherboard max out at about 480MB/s shared over all the SATA ports on the board, but are limited to 275MB/s for a single port. So putting a C300 SSD on a 3Gb/s SATA port would limit it to 275MB/s, but if I were to set up 2 SF drives in a RAID 0 on the same 3Gb/s controller then I would be likely to see the 480MB/s limit reached?

    Also... if I buy 2x SF SSDs to run in a RAID, I should connect them to the motherboard via the 3Gb/s ports and not use the newer 6Gb/s ports, as using the 6Gb/s ports would have a negative effect? Seems strange, but that is what I have understood from the above post.
     
  16. PocketDemon

    PocketDemon What's a Dremel?

    Joined:
    3 Jul 2010
    Posts:
    2,107
    Likes Received:
    139
    No, the current 6Gb/s onboard controllers max out at ~480MB/s (as each controller is linked to one PCIe 2.0 lane)...

    Whereas the 3Gb/s onboard controllers max out at ~615-740MB/s (depending upon the number of drives in the array).


    So, a single 6Gb/s SSD on a current 6Gb/s onboard controller will only bandwidth limit the V3s (& possibly the 6Gb/s Intels) of the soon to be released 'proper' 6Gb/s SSDs...

    [NB this means that, for example, the Marvell onboard controller on the mobo you were looking at, with 2 6Gb/s ports, has ~480MB/s in total for both ports]

    ...but on a 3Gb/s ICH9R/10R controller (which is what you'd have) the ideal number for sequential speeds is 3 SSDs in R0 -> 2 or >3 will be slower for sequentials, but >3 will be faster for small transfers.

    [NB this isn't saying not to use 2... Simply that 3 gives the highest sequentials in testing on the ICH9R/10R controllers.]


    The onboard 6Gb/s controller limitation will change once PCIe 3.0 comes out, as that will allow more lanes per controller &/or higher bandwidth per lane.
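    A toy model of the scaling claim, assuming a hard controller cap and linear per-drive scaling below it (both figures are illustrative, taken loosely from the thread, not measurements):

```python
# Toy model of RAID 0 sequential scaling on a bandwidth-capped controller.
# The 700 MB/s cap and 285 MB/s per-drive read are illustrative figures.
def r0_sequential(n_drives, per_drive=285, cap=700):
    """Sequential throughput (MB/s): linear scaling until the controller cap."""
    return min(n_drives * per_drive, cap)

for n in range(1, 5):
    print(n, r0_sequential(n))  # scaling flattens once the cap is hit
```

    Under these numbers the third drive is the last one to add sequential throughput, which matches the "3 is the optimum" observation above.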
     
    Last edited: 28 Jan 2011
  17. winkyt

    winkyt What's a Dremel?

    Joined:
    11 Nov 2010
    Posts:
    12
    Likes Received:
    0
    OK, well here is my plan:

    i5-2500K (OC'ed)
    Asus P8P67-M (I want to build a small PC)
    8GB of Corsair XMS3 (2x 4GB)
    Gelid Tranquillo or Thermaltake Frio
    2x SF SSDs running in RAID 0 using the SATA II ports on the mobo
    1x 1TB hard drive
    Antec TP-650 PSU

    Is that the go you reckon?
     
  18. Fingers66

    Fingers66 Kiwi in London

    Joined:
    30 Apr 2010
    Posts:
    8,699
    Likes Received:
    925
    What graphics card are you going to get?
     
  19. PocketDemon

    PocketDemon What's a Dremel?

    Joined:
    3 Jul 2010
    Posts:
    2,107
    Likes Received:
    139
    Well, the SFs in R0 on the ICH10R controller bit is perfect. :)

    Just remember that, assuming you're still going for the equivalent of the V2 E (which the Corsair Force series is), ideally you want to increase the over-provisioning by at least ~15% of the free space to maintain speeds & increase NAND lifespan - as stated in numerous threads this is the best practice for all SSDs (other than the non-E V2s - though even these can gain from further increasing of the OP).

    The optimum way to do this with RAID arrays is to reduce the actual size of the array by ~15% when creating it, rather than creating an array using the full space & then under-partitioning.
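    A quick sketch of the suggested array sizing, using the ~15% figure from the post and the drive sizes from the build above:

```python
# Sketch of the over-provisioning suggestion: create the RAID 0 array
# ~15% under its full capacity. Drive sizes are from the build above.
def op_array_size(drive_gb, n_drives, op_fraction=0.15):
    """Usable array size (GB) after reserving op_fraction as extra OP."""
    total = drive_gb * n_drives
    return round(total * (1 - op_fraction), 1)

print(op_array_size(60, 2))  # 2 x 60GB Force drives -> 102.0 GB array
```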


    Otherwise, nothing seems foolish in there.
     
    Last edited: 29 Jan 2011
  20. winkyt

    winkyt What's a Dremel?

    Joined:
    11 Nov 2010
    Posts:
    12
    Likes Received:
    0
    Not too sure yet. I don't really want to spend above £150, as I only have a 22" 1680x1050 monitor and I only play older games and some WoW, which I would like to have on max settings, but I don't think that is too much for a mid-range card. The other stuff I do is not too video intensive.
     
    Last edited: 29 Jan 2011
