If you start looking for UNIX everywhere you'll soon realize that even Windows has some UNIX legacy.
Huh, since when? Windows had at best some DOS legacy, and DOS was a quick and dirty copy of CP/M (literally so: DOS was QDOS, the "Quick and Dirty Operating System", when MS bought it cheap and sold it dear to IBM), which in turn was only remotely inspired by Unix. But that DOS legacy went away with the arrival of Windows NT and Windows 2000. The Millennium Edition was the last in the line with DOS underneath.
UNIX was designed when I was growing up; its legacy is about 50 years old.
And it never was that efficient, nor is it anything like real time.
Unix was actually pretty efficient; even its name is a pun on Multics, a much heavier operating system that never really came to life. Unix was designed for stability and maintainability, for a system you would not have to reboot every hour. Quite unlike the toy system AmigaOs is.
I remember Xenix running on 68K computers and Amigas would run circles around those in terms of performance.
Performance has its price. The design goal of Unix was actually to have an Os for operating a telephone network (namely, AT&T's), not to create a toy for consumers (AmigaOs). That implies that the Os has to have a couple more features, and different features, and those cost performance.
To put it pointedly, I'm tired of all this monolithic buggy crap I'm being encouraged to use.
That's part of the kernel design. Actually, it is not that bad these days anymore, given that there are kernel modules, but what tires me most is the absence of any stable interface between kernel modules and the core kernel code in Linux - this interface changes almost daily, effectively preventing vendors from programming against a stable API. Either you, as a vendor, continuously clean up behind the kernel hackers and adapt to the latest API changes, or you release your source code and let somebody else do the dirty work for you.
One way or another, it is an approach that does not scale very well. It only works because a lot of people put a lot of time into keeping the kernel running.
What also bothers me is the pile of API junk needed to support removable media. Currently there is a hard-to-survey stack of layers, from udev through dbus up to the desktop, just to support something trivial: letting me insert a CD at any time and have it appear on the desktop. This stack only exists because *ix, by design, has no good interface for removable devices; the original design simply did not need one. After all, you don't insert disks into your telephone server every five minutes, but rather want that under the control of an operator.
Raw speed, however, does not really bother me a lot these days. Processors are fast enough, and I appreciate the additional stability due to the additional checking and protection mechanisms in the Os.
Even my Samsung phone periodically wigs out and has to be restarted.
And Windows and OSX, while not nearly as problematic as in the past, also still occasionally crash.
But how is that related to the kernel design? It actually is a matter of implementation quality, and AmigaOs is not exactly a shining example here either.
As to the 1%: as a member of the definitely sub-1% (as are all Amigans), I actually feel pretty comfortable in the belief that things could be SO much better.
What "better" is depends on your needs and goals. Yes, there are probably better Os designs than Linux, which is certainly a lot better than the design of AmigaOs (which, admittedly, has here and there some good ideas hidden under a pile of bad ones). But whatever you design, the market chances of any "just another Os" are so minimal that I don't even bother. As I said, if you want to waste your time, go ahead and be my guest. But I'm not placing a bet on such a project ever producing anything usable.