
Storage SSD size for dual boot

Discussion in 'Hardware' started by monkiboi, 13 Nov 2013.

  1. monkiboi

    monkiboi Minimodder

    Joined:
    5 Feb 2012
    Posts:
    106
    Likes Received:
    2
    I'm currently dual booting from a 750GB HDD, which will be used as the bulk storage drive when I get an SSD.

    Given that, how small an SSD can I get away with for both systems? I'm assuming that for Windows the main install will go on one partition and the Linux root partition will be on another on the SSD. I'm also assuming the home partition can go on the HDD - or will it be better off on the SSD?

    I'm thinking a 256GB drive would give me some future expansion, but it would be handy to know if I could get away with a 128GB drive.
     
  2. PocketDemon

    PocketDemon What's a Dremel?

    Joined:
    3 Jul 2010
    Posts:
    2,107
    Likes Received:
    139
    in theory you 'could' get both onto a 60/64GB SSD (~56-59.6GB of formatted space) - albeit you'd need to weight the split to something like a 3:2 ratio, with Windows getting the lion's share of the space.

    Naturally though, you could then install next to nothing alongside them: even after disabling the hibernate function in Windows, once you've allowed space for the respective page/swap files, room to download & install updates for both OSes, & a home partition within Linux's share, you'd be really pushing the recommended minimum of 20% free space on each of the main OS partitions...

    ...& naturally there'd be no room for extra OP.


    Of course a 120/128GB SSD would give you double the formatted space - but again you're going to be severely limited on what you can install s/w wise...

    ...plus 240/250/256GB SSDs have both a better £/GB ratio & each performs better than its equivalent 120/128GB model.

    Naturally i'd also be looking at 25% OP (ie reducing the total partitions to 75% of the total nand capacity) whether it were a single boot or dual boot set up, in order to better maintain performance - which would again make a 240/250/256GB SSD (they all have 256GB of physical nand btw) the better bet.


    So, with a 240/250/256GB SSD, giving 25% OP reduces the total formatted capacity to 192GB...

    ...& i'd then look at giving Linux something like 60-80GB of it (making sure that there was at least 20% unused on both the OS & home partitions) & the rest to Windows (again with 20% free space) - though naturally you'd want to adjust this for your own s/w usage - since running s/w from the SSD will naturally be faster than the HDD.
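
    To put rough numbers on the breakdown above, here's a quick sketch (Python; the 256GB, 25% OP & 20% free-space figures are from this post, the 70GB Linux share is just the middle of the 60-80GB range, & GB-vs-GiB/formatting overhead are ignored):

```python
# The breakdown above: 25% of the 256GB of physical nand left
# unpartitioned as extra OP, then a Linux/Windows split with at
# least 20% free space kept on each OS's partitions.
NAND_GB = 256
OP_FRACTION = 0.25        # extra over-provisioning
FREE_FRACTION = 0.20      # min free space per partition

partitioned = NAND_GB * (1 - OP_FRACTION)    # 192GB of partitions
linux_share = 70                             # middle of the 60-80GB range
windows_share = partitioned - linux_share

for name, size in [("Linux (root + home)", linux_share),
                   ("Windows", windows_share)]:
    fill_limit = size * (1 - FREE_FRACTION)  # most you'd sensibly fill
    print(f"{name}: {size:.0f}GB, fill at most {fill_limit:.0f}GB")
```

    Adjust linux_share for your own s/w usage, as the post says - the only fixed parts are the OP and free-space fractions.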


    Then, to the best of my knowledge (as my primary knowledge of Linux is more ancient info from Unix), the home partition is much better off on the fastest drive you have - in this case whatever SSD you buy; particularly due to the faster access times.
     
    monkiboi likes this.
  3. monkiboi

    monkiboi Minimodder

    Joined:
    5 Feb 2012
    Posts:
    106
    Likes Received:
    2
    Thanks. So it's the home partition that should go on the SSD then. I figured root because of /usr, but home makes just as much sense.

    I was looking at getting a 512GB drive but the cost was putting me off a little, so I was wondering how small I could go.
     
  4. meandmymouth

    meandmymouth Multimodder

    Joined:
    15 Sep 2009
    Posts:
    4,082
    Likes Received:
    257
    PocketDemon pretty much nailed it.
     
  5. PocketDemon

    PocketDemon What's a Dremel?

    Joined:
    3 Jul 2010
    Posts:
    2,107
    Likes Received:
    139
    i'm not sure if you've misunderstood me, but i was meaning that you install all of whatever Linux distro you're using onto the SSD - just with 1 partition for the root & a 2nd for the home partition.

    So, in my example breakdown (though you know how large the specific partitions will need to be for your stuff, +20% free space per partition), the 60-80GB for Linux included both - much as the Windows area includes its own hidden partition & any other partitions you decided to make.

    Well, it's obviously not obligatory to have the 2nd - though naturally i understand that the reason for a separate home partition is so that most of the settings are retained when you upgrade to a later distro/if you ever need to wipe the Linux installation & reinstall...

    ...& as these settings are obviously going to be called by the OS, you want the access speed to be there, unencumbered by any r/ws that might also be occurring from/to the HDD.


    As to getting a 480/500/512GB drive, this will effectively (as there's negligible variance) be no faster or slower than its equivalent (ie same brand & model range) 240/250/256GB SSD.

    So, assuming you can at least afford the latter, it's whether something like the breakdown i proposed would work for your needs; at least in the short term.

    Well, it's perfectly feasible to add a 2nd SSD later - either (a) dedicating it to Linux, (b) Linux + a partition for extra Windows s/w or, assuming you buy an identical model, (c) sticking the two of them in R0...

    The latter, which is what i do with pairs of 256GB SSDs, is the fastest of the options (& faster than a single 480/500/512GB one); though unless you have a Sandy Bridge (with modded BIOS), Ivy Bridge or Haswell setup & use up-to-date MS drivers then there's no TRIM.
     
  6. monkiboi

    monkiboi Minimodder

    Joined:
    5 Feb 2012
    Posts:
    106
    Likes Received:
    2
    Yes I did misunderstand you. Thanks for clearing that up.
     
  7. PocketDemon

    PocketDemon What's a Dremel?

    Joined:
    3 Jul 2010
    Posts:
    2,107
    Likes Received:
    139
    i'm pleased that i checked. :)
     
  8. theshadow2001

    theshadow2001 [DELETE] means [DELETE]

    Joined:
    3 May 2012
    Posts:
    5,283
    Likes Received:
    183
    Normally for Linux I would make a boot partition, a home partition, a root partition and a swap partition.

    What I have done on my dual booting laptop was to put boot, root and swap on the SSD along with the windows install.

    The HDD is then partitioned into two sections, NTFS and EXT4. I install Windows on the SSD and then map the Documents, Pictures, Downloads folders et al onto the NTFS partition of the HDD, and have the home directory of the Linux install on the EXT4 partition of the HDD.

    This way you end up with most of your software on the SSD with most of the large files on the HDD.

    I've done all this on a 128GB SSD and a 320GB HDD. But I don't use the laptop too often as it's a bit of a micky burning POS, so space isn't an issue.

    If I was using it regularly the minimum size I would consider is a 256GB SSD, and obviously more would be preferable. That said, the minimum space requirement for an Ubuntu install is 6GB. It all depends on how you intend to use your setup.

    My desktop has hot swappable drive bays. So instead of fluting about with GRUB, I just shove in the hard drive of the OS I want to boot. Just something to consider.
     
    Last edited: 14 Nov 2013
  9. PocketDemon

    PocketDemon What's a Dremel?

    Joined:
    3 Jul 2010
    Posts:
    2,107
    Likes Received:
    139
    What's the rationale for having a separate swap partition on a SSD?

    Well, with HDDs, under either Windows or Linux, there certainly could be advantages - both because setting the position of the partition to be a relatively early one (esp if you've got a 2nd drive for it) maxes the r/w speed, & because it means that the swap file won't become fragmented (though naturally this could be worked around, in Windows at least, by disabling it & defragging straight after installing, before creating a fixed-size one)...

    ...however, with a SSD these aren't exactly relevant - the access time is the same no matter where the data's stored, & it's only block fragmentation that's a particular issue, since the data in any file (>1 block in size of course) will end up spread over the nand over time, if not from the get-go.

    So, on a SSD, there's more arsing about in setting everything up for, as far as i can see, no material gain whatsoever.


    Similarly, is there some advantage to having a separate boot partition? Well, unless there's some separate benefit, as there is with having a separate home partition, again it appears to be arsing about somewhat.


    in both cases, obviously it's personal choice to have as many or as few partitions as you happen to want - well, simply for organisation/ease of knowing what to reinstall, i hive most s/w & games off onto a 2nd partition on my SSDs - so i'm just asking you to clarify where the material gains are for the 2 extra ones that you're proposing.

    * * * * * *

    As to most of the rest of it, whilst they're sensible suggestions, naturally it depends upon how much SSD space you both have & need for your various stuff.

    if you're short of space & need to make decisions, it's also dependent upon whether any of the data will specifically gain from being on a SSD - well, your documents & iTunes library & whatnot won't particularly, but if you're looking, for example, to do bunches of batch processing (& can't dedicate one or more fast HDDs to it - though a decent SSD will still be faster) or for working & temp directories for things like huge PS files, then you can gain more by leaving space for them & sticking less used s/w onto a HDD.

    So it's down to individual usage rather than there necessarily being a specific formula.

    * * * * * * *

    Then, whilst a base Ubuntu install may well be ~6GB, i'm 99% sure that this is before installing any updates (& there obviously needs to be sufficient free space to download, extract & install them), & separately leaves no space for either a swapfile or for maintaining 20% free space on the partition(s).

    This is why i was suggesting that a reasonable min would be ~2/5ths of a 56-59.6GB SSD - ie something like 20-24GB - for a sensible Linux installation, whilst still leaving enough space for a workable Win7/8 one.
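
    As a sanity check of that arithmetic, a minimal sketch (Python; the drive sizes & 3:2 weighting are from the earlier post, everything else is illustrative):

```python
# The earlier 3:2 Windows:Linux weighting of a 60/64GB SSD
# (~56-59.6GB formatted); Linux's 2/5ths share lands at ~22-24GB.
for formatted_gb in (56.0, 59.6):
    linux_gb = formatted_gb * 2 / 5
    windows_gb = formatted_gb * 3 / 5
    print(f"{formatted_gb}GB formatted -> Linux ~{linux_gb:.1f}GB, "
          f"Windows ~{windows_gb:.1f}GB")
```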

    Clearly though, the OP was never talking about getting a 60/64GB SSD & i wasn't suggesting that they did - it was just about answering what the minimum needed is to have a sensible(ish) working dual boot system.
     
  10. theshadow2001

    theshadow2001 [DELETE] means [DELETE]

    Joined:
    3 May 2012
    Posts:
    5,283
    Likes Received:
    183
    There's very little arsing about setting up partitions. Boot GParted from a live CD and the whole thing is done in a few minutes regardless of how many you are using. Then just select them as needed during the install.

    Basically the more you partition your hard drive, the less likely it is that corruption in one area will affect the overall system. This can be taken to extremes, or you can plop the whole lot on one large partition. It doesn't really matter. I typically just go with boot, root, home and swap partitions; sometimes just root, home and swap. Or if I'm running VMs I typically whack the whole lot on one partition.

    The reason to put swap on the SSD is that anything using it will have faster access. That's about it really. The same way I would leave the page file on the SSD. Or you could just do away with swap altogether. It depends on the situation.

    Yes, fine, Ubuntu may or may not need more than 6GB - I've never done an install that small. But that's what it checks for when you are first installing. My point was you don't necessarily need massive space to run a Linux distro. Puppy, for example - you don't even need a hard drive to run that, just a USB stick and some RAM. It all depends on what you are using and how you use it.
     
  11. PocketDemon

    PocketDemon What's a Dremel?

    Joined:
    3 Jul 2010
    Posts:
    2,107
    Likes Received:
    139
    Firstly, you've misunderstood somewhat there.

    i wasn't saying that you shouldn't have a swapfile/partition on the SSD - simply that Linux gives the option of using either & they work in the self same way...

    ie it's equivalent to, in Windows, either having a swapfile on the main partition or making another one & moving it there.


    Then, the 'arsing about' comment relates to the fact that you have to do far more preplanning in deciding how to allocate space to the partitions to make best use of the space, whilst ensuring that there's 20% of free space on each partition.


    Otherwise, because partitions don't work in the same way on SSDs - ie a partition does not relate to a defined portion of nand that remains constant over time; it's only whatever happens to be stored in it at any one moment, plus the size limitation of course - partitioning will not help one iota if there were corruption to a portion of the nand itself.

    Whilst naturally if cells are found to be problematic in writing then they're retired (or, at EOL, made read only)...

    ...because of the parallel nature of how the nand dies are accessed, were an entire nand die to fail then it would wipe out far greater amounts of data - the sole exception being the SFs (SandForce drives), as they effectively work as a R50 array rather than a R0 array, so they can survive a die failure - & this is completely irrespective of whether you had all of the data in one partition or, somehow, millions of them.

    So it will have no impact whatsoever on the chance of the nand failing that contains any specific data - be it the swapfile/directory itself or anything else.

    There's also no advantage to attempting to protect the swapfile from corruption, as it's temporary data so, were the data that solely happened to be in it to become corrupted, you can simply reboot.

    [NB 1. i'm not saying that, on the consumer level at least, anyone should buy a SF for the above reason, as the risk of a nand die failing is incredibly slight... Almost all data loss from SSDs, beyond user error/malware, is either d.t. f/w problems or controller failure.

    2. & this bit of the discussion also wouldn't work for HDDs as, again, there's both no gain in trying to protect the data in the swapfile & it won't affect the chance of corruption occurring in any single place on the HDD - though naturally i accept that there can be other advantages.]​


    So i still really can't see where there'd be any material advantage to having more than 2 partitions for a Linux installation - other than it being your personal choice, which i can't fault as it's what you choose to do.

    Yeah, don't get me wrong, i'm actually trying to learn something here as it'd be handy to be able to better advise people if a similar query comes up as, whilst i know my SSD stuff, i'm far from being an expert on Linux - if there's something to learn of course.


    As to the size of installation, i'm not convinced we're actually disagreeing - simply that i was obviously working to create a min based upon a more usable system & one of the more common distros that could be upgraded & whatnot.
     
    Last edited: 15 Nov 2013
  12. theshadow2001

    theshadow2001 [DELETE] means [DELETE]

    Joined:
    3 May 2012
    Posts:
    5,283
    Likes Received:
    183
    I'm not talking about corruption of the NAND or the hard drive. Once your device is ****ed anything you can recover is just a bonus frankly. I'm talking about file system corruption.

    Partitioning can be taken beyond root and home - you can have partitions for /boot /var /tmp /usr and so on. You can break the whole system up into as many partitions as you want. That is what I was talking about. The more you break it into partitions, the less effect one corruption can have across the system. You can go nuts with your partitions or not.

    Swap obviously doesn't require protection from this type of corruption.:duh:

    Typically the recommended partition schemes are root home swap or boot root home swap. On an SSD, having a swap partition or just a swap file is probably equivalent. It probably takes up the same amount of room, although you can decide for yourself before install how much space you want to use. Never going to hibernate? Just throw in a small swap partition.

    The general rule for a swap partition is a minimum equal to RAM, or closer to double. It's not rocket surgery.

    As for pre-planning, it takes all of 5 seconds, or 5 mins googling if you're new.
    boot: a few hundred megabytes
    swap: equal to RAM, or up to double the RAM depending on how much space you have
    / (root): maybe 20 gigs or so should be fine (again it depends on what you're going to use your system for)
    /home: whatever's left.

    Can be done quite quickly, easily and visually in GParted. If you're doing two partitions, doing four or ten or however many isn't going to take that much longer frankly.
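
    That scheme can be sketched as a quick back-of-the-envelope calculation (Python; the sizes are the rules of thumb above, not requirements, and the function name is just illustrative):

```python
# Sketch of the scheme above: small /boot, swap sized from RAM,
# ~20GB for root, and /home takes whatever's left.
def plan_partitions(disk_gb, ram_gb, swap_factor=1.0):
    """Return an ordered list of (mount point, size in GB)."""
    boot = 0.5                       # a few hundred megabytes
    swap = ram_gb * swap_factor      # 1x RAM, up to 2x if space allows
    root = 20.0                      # OS + software
    home = disk_gb - boot - swap - root
    if home <= 0:
        raise ValueError("disk too small for this scheme")
    return [("/boot", boot), ("swap", swap), ("/", root), ("/home", home)]

for mount, size in plan_partitions(disk_gb=128, ram_gb=8):
    print(f"{mount:6} {size:6.1f} GB")
```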

    For over provisioning I just leave free space at the end of the drive.

    If you are absolutely new to Linux it's probably easier to just throw everything in one partition.

    In the OP's case I would put home on the hard drive and he can put the rest of the system on the SSD on a single partition or a hundred partitions.
     
    Last edited: 15 Nov 2013
  13. PocketDemon

    PocketDemon What's a Dremel?

    Joined:
    3 Jul 2010
    Posts:
    2,107
    Likes Received:
    139
    Okay, so you're agreeing that it doesn't matter whether it's a swapfile or a swap partition on a SSD, as there's no speed advantage to its location - thus there's no benefit whatsoever to having a separate partition for it: if the swapfile corrupted you'd reboot, & if the root partition (or wherever you actively choose to put it) failed then having a working swapfile would not help you at all.

    This therefore means that having a swap partition is a choice - nothing more, nothing less - which answers the question that i had about this completely.

    * * * * * *

    Then, we're in agreement that there should be a 2nd partition for home - as this has a material benefit if you need to reinstall.

    * * * * * *

    Now, accepting that you're not looking at nand failure (you weren't exactly clear as to what kind of corruption you were referring to), let's look at the alternative corruption that could reasonably take place.

    1. if a critically important file - cif.*** for short - gets corrupted, i really cannot see how splitting things up into multiple partitions will help in any way.

    Well, if cif.*** were to be a critical OS file, you'd need to look at either copying a replacement in or reinstalling the OS - neither of which will benefit from separate partitions.

    [NB since the stated reason for having home on a separate partition is so that settings are retained when you reinstall, this logically means that a reinstallation would overwrite the other basic folders, whether they're in separate partitions or not. if this wasn't the case then this overwhelmingly widely made recommendation wouldn't be solely for home.]​

    if cif.*** were to be a critical s/w file, you'd need to look at either copying a replacement in or reinstalling the s/w - neither of which will benefit from separate partitions.

    & if cif.*** were to be a critical document or something, you need to have a backup.

    So, in this corruption scenario, it's immaterial if you have 1 partition or millions of them.


    2. Now, if a partition failed entirely causing corruption, this would strongly suggest that there is something up with the storage device itself which would involve replacing the SSD/HDD & reinstalling from scratch.

    Assuming it somehow wasn't a drive failure, however, you're looking at either a reinstallation of the OS - in which case you're going to have to overwrite all of the partitions other than the home one anyway - or having to recover the home partition from a backup.

    So having any other folders split into various partitions on the same drive isn't going to give a benefit.


    3. & if the MBR/GPT failed entirely then, along with it again being far more likely to be symptomatic of a drive failure, you're looking at trying to recover what you can & starting from scratch.


    So, in all 3 situations, i can see no evidence to suggest that having any partitions beyond a separate one for home on a single SSD will benefit, though if you've got some other corruption scenario that i've not thought of then by all means share it.

    i've ignored disk controller failure since you simply plug a single drive into another controller, & either replace the controller or use something like R-Studio if it's a raid array & you've nothing compatible... & obviously these aren't corruption as such.


    Now the only things that could materially aid in any of these 3 corruption scenarios are -

    (a) using a raid array which has an element of duplication &/or parity - since it should largely reduce the risk of 1 & 2 having a detrimental effect & 3 from occurring (other than losing the redundancy until you replaced a drive)

    (b) & having a decent backup regime so that if there was corruption then you can recover from the backup.

    (or i guess having a separate drive for the home partition - since it would allow either the OS or the home drive to fail without having to recover both... though a backup's far better)


    Having said that, whilst it's always sensible to have a decent backup regime - it helps to prevent data loss from a far wider range of causes (ie should user error, some h/w failure, malware or whatever occur) - all 3 of the specific corruption scenarios above are incredibly unlikely to occur, with the exception of a drive failure causing them... & in that case you wouldn't want to be reinstalling to the same drive anyway; you'd either recover everything from a backup or reinstall from scratch & recover the needed data files across.

    Now, it 'may' of course be that you're plagued by corruption for some reason that i cannot fathom, but whilst i've naturally had drives turn up DOA or fail over time, i just can't see any way that random corruption should be treated as a significant risk, or that having large numbers of partitions on the same drive will provide any kind of protection from anything more serious.


    * * * * * *

    Otherwise, there is a difference between free space on a partition & increasing the OP for the whole SSD.

    The former means that the partition can have a reasonable amount of block fragmentation before it starts to affect performance - since you can have all of the OP in the world, but if too many of the blocks in the partitioned area are fragmented then it will impact significantly upon write performance on non-SFs & read performance on SFs until GC has had sufficient idle time to do its thing.

    (naturally trim is only a quick solution for blocks which have become completely unused - everything else has to be block combined before erasing can take place)

    Also, esp on a partition where there are a reasonable amount of temporary &, particularly, non-sequential writes, this can lead to excessive & unnecessary block combination as GC tries to cope with the limited space allocated to it - thus increasing write amplification.


    The latter, however, gives the SSD access to more pre-erased blocks ahead of time (since an area equal to the total non-partitioned area cannot contain any user data, other than on a temporary basis when, for example, block combining, prior to erasing in a read-modify-write-erase cycle or, very briefly, until the mapping table of the flash translation layer is updated following a write), which in turn -

    (a) improves the performance of GC's block consolidation, & hence the maintenance of speeds

    (b) improves the ability of both the translation layer to choose lower use nand when writing & GC to efficiently run its wear levelling algorithms

    (c) reduces write amplification - increasing lifespan

    (d) & means that as nand fails over time, you're protecting yourself from losing ground from the default OP for a-c.


    Anyway, simply put, doing the latter without the former isn't exactly sensible.

    So whilst i would actively recommend a min of 20% free space & 25% OP on a SSD that's used for other than (effectively) storing static data (generally including games only drives) or, with OP, a *very* basic usage with minimal writes, it's more important to look to first maintaining at least a reasonable amount of free space on the partitions (14-15% or so) before adding extra OP.
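
    As a rough sketch of that ordering - free space on the partitions first, extra OP on top if capacity allows (Python; the function name & the 256GB/150GB/170GB figures are illustrative, & GB-vs-GiB handling is simplified):

```python
# Priority from above: guarantee min free space on the partitioned
# area first, then leave any remaining nand unpartitioned as extra OP.
def sensible_layout(nand_gb, data_gb, min_free=0.15, target_op=0.25):
    """Return (partitioned_gb, extra_op_gb) keeping min_free headroom
    over data_gb, with leftover nand left as extra over-provisioning."""
    needed = data_gb / (1 - min_free)       # partition size giving min_free
    partitioned = max(needed, nand_gb * (1 - target_op))
    if partitioned > nand_gb:
        raise ValueError("not enough capacity for that much data")
    return partitioned, nand_gb - partitioned

part, op = sensible_layout(nand_gb=256, data_gb=150)   # light data load
print(f"partition {part:.1f}GB, extra OP {op:.1f}GB")  # full 25% OP fits
part, op = sensible_layout(nand_gb=256, data_gb=170)   # heavier load
print(f"partition {part:.1f}GB, extra OP {op:.1f}GB")  # free space wins, OP shrinks
```

    The second call shows the trade-off: when the data load grows, the free-space requirement takes priority & the extra OP shrinks below the 25% target.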


    [Edit]

    Just for clarity, blocks which are wholly unused & where the mapping table has been updated can be used for a-c above...

    ...however, starting from a clean SSD, unless your write load is very light (hence the earlier exemptions) this provides inconsistent performance as blocks up to the partitioned area become filled &/or fragmented - whereas the benefits are maintained with the blocks that happen to make up the OP d.t. them not being allocated to a LBA; allowing the controller's algorithms to use them differently.

    This is why neither simply having free space is a cure-all solution (excluding effectively static data or very low write level usage)...

    ...nor, to rephrase what was written earlier to fit: since (excluding effectively static data usage) block fragmentation increases the number of pages (& hence blocks) which are assigned LBAs, insufficient free space will be problematic.

    Hence the need to look at both.
     
    Last edited: 15 Nov 2013
