If your text editor crashes, you lose your work anyway.
Whether this happens on AMIGA-OS or on UNIX does not matter.
And memory protection does not help here.
It does matter, indeed, if you were performing other tasks while using your text editor. For instance, if you were compiling your sources on CPU #2 and downloading files and running other tasks on processors #3 and #4, then having your stupid text editor crash on CPU #1 and drag all the other tasks down along with the rest of the OS actually MAKES the difference between amigoid and unixoid OSes. Are you really still performing a single operation at a time? I am curious to hear from the "we have had the first and most powerful mainstream multitasking machine in computer history" kind of people.
First of all - AMIGA OS supports threads.
And they would work far better if they could actually run in parallel on different CPUs (or CPU cores, which is much the same), as every other operating system has already proved beyond any dispute.
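For the sake of argument, here is a minimal sketch of that point (plain C++ threads, nothing Amiga-specific; the worker function, thread count and iteration count are all made up): the exact same program finishes roughly four times sooner on an OS that can schedule its four threads on four cores, while on a single-core scheduler the threads merely take turns.

#include <cstdio>
#include <thread>
#include <vector>

// Hypothetical CPU-bound worker: each thread crunches its own slice of work.
static void crunch(int id, long iterations) {
    volatile long sum = 0;                  // volatile so the loop is not optimised away
    for (long i = 0; i < iterations; ++i)
        sum = sum + (i % 7);
    std::printf("worker %d done\n", id);
}

int main() {
    const int workers = 4;                  // one thread per hoped-for core
    std::vector<std::thread> pool;
    for (int id = 0; id < workers; ++id)
        pool.emplace_back(crunch, id, 100000000L);
    for (auto& t : pool)                    // the OS decides whether these ran in
        t.join();                           // parallel on 4 cores or interleaved on 1
    return 0;
}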
Your argument is very "simple", but OK, let's follow it.
Just say stop when you think you have enough CPU power:
2 Cores have theoretically more power than 1
4 Cores have theoretically more power than 2
8 Cores have theoretically more power than 4
16 Cores have theoretically more power than 8
32 Cores have theoretically more power than 16
64 Cores have theoretically more power than 32
128 Cores have theoretically more power than 64
256 Cores have theoretically more power than 128
512 Cores have theoretically more power than 256
1024 Cores have theoretically more power than 512
2048 Cores have theoretically more power than 1024
Not happy yet?
Still need more?
What bloated software do you want to run?
It does not depend on software bloat, but on how many tasks you need to run in order to get your results as quickly as possible. This is the exact reason why we have been talking about heterogeneous computing for a good 5-10 years now, using GPUs instead of CPUs to perform parallel computation. And yes, 2048 cores are far better than 1024 for chemistry simulations and scientific computation, just as 4096 would be better than 2048, and so on. And believe me, the software running on Tesla-based servers is far from bloated, since the algorithm sent to every stream core must be as lean as possible.
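To illustrate the "lean per-stream-core" point, here is a minimal sketch of the usual GPU pattern (a generic SAXPY-style CUDA kernel, an assumed example rather than anything from an actual Tesla codebase): the code each stream core executes is a couple of lines, and the speed-up comes entirely from launching it over a million elements at once.

#include <cstdio>
#include <cuda_runtime.h>

// The whole "algorithm" each stream core runs: one multiply-add per element.
__global__ void saxpy(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;                        // ~1M elements, an arbitrary example size
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));     // unified memory keeps the sketch short
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // One thread per element: the parallelism lives in the launch, not in the kernel.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();

    std::printf("y[0] = %f\n", y[0]);             // expect 4.0
    cudaFree(x);
    cudaFree(y);
    return 0;
}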
There is also, indeed, a break-even point for "normal" CPU core parallelization on home computing tasks. But this heavily depends on user needs as well. The more tasks you open, the more CPU cores you need to keep the system responsive, although the rest of your hardware has to keep up too (a balanced configuration). Having 16 cores would be pointless without a large amount of RAM and a good disk subsystem, since some tasks would end up filling the available resources and pushing the others onto a waiting list until resources become available again.
For all I want to do with my computer - 4 GB is enough.
As I said, I love people who assume their own modest computing needs should be enough for everyone.