Discussion in 'Article Discussion' started by Gareth Halfacree, 11 Apr 2014.
Hints at 16GB capacity for future Surface.
That's pretty cool, essentially using a compressed recovery partition as a 'snapshot' to work from and only storing changes to the few OS files that are actually modified during normal usage.
This sounds like it could make enterprise deployments a lot easier too.
This could cause slowness over time as Windows updates are applied, much the way leaving a snapshot on a VM causes increased I/O as more changes accumulate. A way to commit changes and recompress the OS files was not detailed, which makes me skeptical of how well this would work long term. Also, it requires an SSD and UEFI, which means I cannot use it for VMs as my SAN is all spinning disk.
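To make the snapshot analogy concrete, here's a minimal Python sketch (my own illustration, not how Microsoft actually implements it) of copy-on-write over a read-only base image: reads fall through to the base unless a block has been rewritten, and only the delta grows.

```python
# Minimal copy-on-write sketch (illustrative, not Microsoft's code):
# reads fall through to a read-only base image unless a block has
# been rewritten; writes accumulate in a separate delta, WIMBoot-style.

class DeltaDisk:
    def __init__(self, base):
        self.base = base   # read-only "WIM" contents, never modified
        self.delta = {}    # only changed blocks are stored here

    def read(self, block):
        # Serve the modified copy if one exists, else the pristine base.
        return self.delta.get(block, self.base[block])

    def write(self, block, data):
        # Writes never touch the base image; they land in the delta.
        self.delta[block] = data

base = {0: b"boot", 1: b"kernel", 2: b"drivers"}
disk = DeltaDisk(base)
disk.write(2, b"patched-drivers")  # e.g. a Windows Update

assert disk.read(0) == b"boot"             # unchanged: read from base
assert disk.read(2) == b"patched-drivers"  # changed: read from delta
assert base[2] == b"drivers"               # base image stays pristine
```

The worry in the post is exactly the `delta` dictionary: without some way to fold the delta back into the base and recompress, it only ever gets bigger.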
It always amazes me how something that provides an interface between you, the hardware, and individual pieces of that hardware has bloated to such an immense size over the years. The underlying architecture it deals with is essentially the same (CPU, memory, disk, input/output devices), so if disk and memory cost and size hadn't changed to keep pace, I suppose we would still be running on DOS or Win3.1?
(I am, by the way, completely ignorant of how these things work. It just seems a bit strange to me).
Or we would have a Windows without all the fat that comes with it nowadays. Windows has put on so much weight from when it was a nipper that, if it were a person, we would have called in the big body squad years ago.
I remember getting Win95 down to around a 100MB install size; if you wanted to go crazy, you could get that down to less than a 50MB install.
Come over to the Penguin side: Damn Small Linux (DSL) is 50MB and includes a full graphical user interface, or there's TinyCore which does the same thing in 12MB. Aye, 12MB.
Alternatively, there's the old QNX Incredible 1.44MB Demo, which included (from memory, I haven't watched the linked video) GUI, networking stack, terminal, web browser, music player, at least two or three games including Towers of Hanoi, word processor, and some other stuff - on a bootable 3.5" floppy disk. That's the QNX that underpins BlackBerry's most recent operating system, incidentally. Amazing stuff.
The underlying architecture of a PC may be essentially the same, but its configuration is not. There are millions of different PCs out there, with components ranging from established brands to cheap Taiwanese knock-off clones, going back to 486 boxes from the 1990s right up to the latest CPUs and chipsets from VIA, AMD and Intel. Windows needs to play nice with all of them.
It needs to play nice on gaming rigs, and on office rigs. It needs to work with medical PCs hooked up to blood testing equipment, and lab PCs hooked up to physics instruments. It needs to work on computer cash registers, on ATMs, on PCs driving CNC machines, on image processing rigs.
At the same time it needs to cater to muggles who do not know how to install and tune an OS beyond "Insert disk and press any key" --and then will call the Tech Support helpline because they can't find the "any" key.
I remember when Windows was lean: it was a program that piggy-backed onto MS-DOS which required some CONFIG.SYS and AUTOEXEC.BAT magic to free up enough base memory to run it. It did not have universal graphics or sound drivers --each software application needed to support a specific graphic or sound card, Windows could not mediate. It did not have CD-ROM drivers. It did not recognise gaming devices. If you slotted a component into the motherboard, it did not know WTF it was. You had to install the driver yourself. In MS-DOS (yup, CONFIG.SYS and AUTOEXEC.BAT again).
Now? No need to configure memory. Slot it in, and Windows will find and use it. Any software that runs on Windows will generally be compatible with whatever graphic and sound cards that are in the case, and any gaming device that is hooked up (glory be DirectX). Slot in a card or hook up a device and Windows will generally recognise what it is and find the right driver for you. It will install it for you too. When a program crashes it does not take the whole computer with it. And Windows will run as many programs as you want, concurrently, until the memory or CPU power runs out. Windows will update itself, manage itself, repair itself, keep itself and your files reasonably clean from viruses.
Oh, and we expect Windows to be compatible with all Windows software ever produced, from Windows 3.11 onwards.
We've come a long way. Anybody who is not old enough to have used PCs in a meaningful way when all we had was MS-DOS and perhaps Windows 3.11, is not entitled to an opinion about Windows.
I remember when my 286 had 640K of conventional memory and 384K of extended. You needed a 386 to convert extended memory to expanded, so for playing games like X-Wing we had to create custom boot disks with butchered AUTOEXEC.BAT and CONFIG.SYS files to free up as much of the 640K as we could. Typically about 600K... those were the days!
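For anyone who never suffered through it, a boot-disk pair looked something like this (from memory, for a 386 with DOS 5 or later; exact device paths and drivers varied per machine):

```
REM CONFIG.SYS -- load DOS high and open up upper memory blocks
DEVICE=C:\DOS\HIMEM.SYS
DEVICE=C:\DOS\EMM386.EXE RAM
DOS=HIGH,UMB
FILES=30
BUFFERS=20

REM AUTOEXEC.BAT -- load TSRs high, out of the first 640K
LH C:\DOS\MOUSE.COM
SET BLASTER=A220 I5 D1
```

Every game seemed to want a slightly different combination, hence a drawer full of labelled floppies.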
For Windows, this sounds like a massive security or malware risk, in which case the linked files will likely become useless for recovery. I'd rather MS just clean up the bloat so this wasn't necessary to begin with.
TreeDude - a couple of good points to consider.
I have open questions about how major servicing of the OS (not just the WIM) happens when a system is WIMBooted. My hope is that the Windows 8.1 servicing model has been updated to apply the changes to the WIM, not the OS disk (otherwise, at the first major servicing point, it falls apart).
Because of the way the WIM format works, there shouldn't be significant slowness even if the WIM has been serviced. WIM files can be modified in a variety of ways - recaptured from disk and appended to an existing file, and there is a means to export individual volume images to reduce orphaned files from earlier images. Finally, WIM files can be mounted by Windows directly and modified, which is the way I assume the servicing stack will modify a WIMBooted system.
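For reference, offline servicing of a WIM with DISM looks roughly like this (run from an elevated prompt; the paths and package name are made up for the example):

```
REM Mount the image, apply an update package, then commit the changes back
Dism /Mount-Image /ImageFile:C:\images\install.wim /Index:1 /MountDir:C:\mount
Dism /Image:C:\mount /Add-Package /PackagePath:C:\updates\KB123456.msu
Dism /Unmount-Image /MountDir:C:\mount /Commit

REM Export a single volume image to a new file, dropping orphaned data
Dism /Export-Image /SourceImageFile:C:\images\install.wim /SourceIndex:1 /DestinationImageFile:C:\images\slim.wim
```

The export step is the one I'd expect to matter long term, since it's the documented way to shed data left behind by earlier images.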
shmidtbag - I'm curious why you think it's a massive risk? From the OS's perspective, it's running on a normal drive. From the recovery image's perspective, it's most likely read-only unless being serviced.
I can't see how you'd come to that conclusion, as it's the exact opposite: because you are storing deltas to the WIM, you can make that WIM inviolable, even at the Hypervisor level (if you're running win8 under the built-in HyperV).
Read again: the linked files are write-protected, so they can only be read.
That's what I thought too. Seems to me almost like a built-in sandbox.
Isn't this pretty much how Linux live disks work? A read-only image, with any changes dumped (in the case of CD/DVD) or saved to a file on the drive (if you so choose, on USB)...
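On a modern live USB it's typically an overlay mount: a read-only squashfs base plus a writable upper layer merged into one view. Something like this (illustrative only; needs root, and the exact options vary by distro):

```
# Read-only squashfs base + writable tmpfs upper, merged into one view
mount -t squashfs -o loop filesystem.squashfs /mnt/base
mount -t tmpfs tmpfs /mnt/rw
mkdir -p /mnt/rw/upper /mnt/rw/work
mount -t overlay overlay \
    -o lowerdir=/mnt/base,upperdir=/mnt/rw/upper,workdir=/mnt/rw/work \
    /mnt/merged
```

With a persistence file on USB, the upper layer is backed by that file instead of tmpfs, so changes survive a reboot.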
So drivers and some compatibility layers take up 3GB?
I can't see that stopping people from injecting unwanted stuff into the .WIM; DISM can be used to modify the Windows setup files, and they use the same WIM format.
Small increases in a piece of software's "smartness" can have a significant effect on its size and complexity.
That said I'm sure it hasn't become as streamlined as possible yet. I would imagine they will continue to shrink the operating system in future revs.
Lots of drivers; a GUI with a file browser that can access everything from SMB and FTP to snapshots of local disks; the whole storage subsystem with its software RAID-y goodness; a network stack that can speak all kinds of protocols, plus a firewall; an image viewer and web browser (I'm not sure if IE is included, but Windows Help can read HTML).
Basically think of all the different bits of Windows that come as standard, that's where your 'bloat' comes from, and sure, you probably won't need a lot of it, but it's all there because someone finds it useful.
Yeah. That's all it is. Some drivers and a compatibility layer.
Most of the "bloat" in Windows comes from the WinSxS subsystem. That's the price you pay for the possibility of being able to run any program since Windows 95 in a safe, secure manner. Businesses absolutely depend on this as their LOB apps may be a decade or two old but must run reliably on modern hardware.
Sure, Linux and OS X are leaner and have cleaner code bases. But they achieve this by cutting out legacy compatibility (Apple is particularly aggressive on this front) in the hope that programs are updated to support the new interfaces. Both approaches are correct and serve different needs.
Try telling that to Mass Effect 1 and Windows 8 :/