Good grief.... what is it about the Amiga community that causes everyone to slightly miss a point, and descend into flames!
The underlying point *is* valid, and one that has been oft asked on the 'net: why is it that, with machines getting faster, productivity seems to decrease (to cover most/all bases of "longer boot times", "longer load times", "harder to actually get anything done", etc.)?
I used the same "argument" on the occasional really obnoxious "my PC is faster than yours because" types - I'd challenge them to a simple test, pitting and betting my computer against theirs: from cold boot, write and print a one-line letter. I'd even state that mine would be done before theirs finished POST. I'd just omit that it was a C64 running EasyScript from a ROM cartridge.
Part of the answer is that the Amiga, as a hardware platform, is pretty non-complex; i.e. the hardware is all the same: a 680x0-class machine with the OCS or ECS chipset.
This affects us in two ways:
1. The hardware "POST" is simpler and quicker on an Amiga than on a PC, especially a PC with additional cards (stuff a few SCSI cards into a PC to see an extreme of this).
2. The number of possible base drivers necessary to ship - and potentially install - to get a given machine booting. In the case of the PC, it's a heck of a lot.
On the Amiga, additional drivers are supplied by the card's manufacturer, and really amount to a few libraries or device drivers.
Windows, otoh, has a vast potential range of hardware that might be found. MS ships many drivers, and manufacturers ship plenty more.
But that isn't the whole answer.
Windows has a greater amount of abstraction, i.e. the kernel, HAL, APIs, drivers, stacks, and so forth. There are a goodly number of services that are started, often by default, many of which are modular and have dependencies. All of which equals disk access.
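Purely as a sketch - the service names, the dependency chain, and the timings below are all invented, and this is not how Windows actually schedules anything - here's the shape of the problem: each service in a chain has to be read off the disk before it can start, and it can't start until whatever it depends on has started first, so the reads pile up one after another.

/* service_chain.c -- a made-up illustration, not Windows' real behaviour.
 * Each "service" must be paged in from disk before it can start, and it
 * cannot start until its prerequisite has started first.                 */
#include <stdio.h>

struct service {
    const char *name;
    int         depends_on;   /* index of prerequisite, -1 if none       */
    int         disk_ms;      /* pretend time to read the binary in      */
    int         started;
};

/* a fictional chain: RPC -> event log -> networking -> vendor updater */
static struct service svc[] = {
    { "RPC",            -1, 120, 0 },
    { "Event Log",       0,  80, 0 },
    { "Networking",      1, 200, 0 },
    { "Vendor Updater",  2, 350, 0 },
};

/* starting a service means starting its prerequisite first, then hitting
 * the disk for this one -- the loads are serialised, not overlapped      */
static int start(int i)
{
    int elapsed = 0;
    if (svc[i].started)
        return 0;
    if (svc[i].depends_on >= 0)
        elapsed += start(svc[i].depends_on);
    elapsed += svc[i].disk_ms;
    svc[i].started = 1;
    printf("started %-14s (+%d ms of disk)\n", svc[i].name, svc[i].disk_ms);
    return elapsed;
}

int main(void)
{
    /* start the last one; the dependency chain drags in all the rest */
    printf("total boot cost of this chain: %d ms\n", start(3));
    return 0;
}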
Not being a programmer, I can't comment on the following, but I understand that modern compilers do not, by default, compile for speed - compile-time switches are required - nor for compact code.
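For those who are programmers, a concrete illustration (GCC is simply the example I've had explained to me): optimisation is off by default, and "fast" and "small" are separate, explicit requests.

/* flags_demo.c -- how code is built matters as much as what it says.
 * GCC, for one, does no optimisation unless asked:
 *   gcc -O0 flags_demo.c   (the default: neither fast nor compact)
 *   gcc -O2 flags_demo.c   (optimise for speed)
 *   gcc -Os flags_demo.c   (optimise for size)                      */
#include <stdio.h>

/* a deliberately naive loop: -O2 will typically unroll or vectorise it,
 * -Os keeps the generated code small, -O0 emits it literally and slowly */
static long sum(const int *a, int n)
{
    long total = 0;
    for (int i = 0; i < n; i++)
        total += a[i];
    return total;
}

int main(void)
{
    int data[8] = { 1, 2, 3, 4, 5, 6, 7, 8 };
    printf("%ld\n", sum(data, 8));
    return 0;
}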
The next point I can believe: MS does not, as a design goal, require compact code, but code that is, for example, "compatible", "secure", and "bug-free". IGNORE the reality of what comes out (those listed are by example, so don't detract by banging on about security or the lack thereof); concentrate on the phrase design goal: if there is no requirement to produce compact code, then priority can be given to meeting the stated goals (i.e. those I listed, and whatever MS's design goals may actually be).
I can well believe that compact code is not a requirement; in a world where the average PC ships with 512MB, where even 3 years ago it was 256MB, where hard drives are now averaging 300GB, where is the driver for compact code?
So we have a number of factors that lead to increased load times, even though drives are getting faster due to increased data density and spindle speeds (and hence lower latency), bus speeds are faster, CPUs are faster, and so forth.
Oh, and add into the mix that, despite the increase in RAM, Windows' virtual memory system must be factored in - i.e. it pages out to disk on the slightest excuse. Now, a very very knowledgeable acquaintance explained this, and frankly I was wibbling, but the crux is that the basic philosophy of Windows memory management was sound for its day (it's safer to swap out at the start than to fail to load and have to swap later, which takes a heck of a lot longer), but is perhaps not so relevant today.
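To hang some rough numbers on why that philosophy made sense - and these are ballpark guesses, nothing more - a hard page fault that has to go back to a spinning disk costs orders of magnitude more than touching RAM, so paging out early, when nobody is waiting, looks cheap by comparison.

/* page_cost.c -- back-of-envelope only; the figures are ballpark guesses */
#include <stdio.h>

int main(void)
{
    double ram_access_ns = 100.0;   /* ~100 ns to touch main memory          */
    double hard_fault_ms = 10.0;    /* ~10 ms to pull a page off a spinning
                                       disk (seek + rotation + transfer)     */

    double ratio = (hard_fault_ms * 1.0e6) / ram_access_ns;
    printf("one hard page fault ~= %.0f RAM accesses\n", ratio);
    return 0;
}

Paging out at start-up is work done while the machine is half idle; a hard fault later stalls whatever you're actually trying to do at that moment.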
And it isn't just Windows this affects (bar the memory management). Look at Linux running on a modern PC. I remember installing RH and Mandrake, v7/8 circa 2001, on a dual PII and a Thinkpad T20. Nice. Booted quick, was slick to use. Just stuck SuSE 10 on my laptop and my desktop... feckin' hell! It's like running through treacle.
None of this, of course, answers the question: just why does Windows slow down over time, especially shutdown times!