Amiga.org

Amiga computer related discussion => General chat about Amiga topics => Topic started by: ElPolloDiabl on June 20, 2015, 09:22:23 AM

Title: Reinvent the OS cont
Post by: ElPolloDiabl on June 20, 2015, 09:22:23 AM
This is continuing from the thread where one person wanted to break all legacy compatibility.
Firstly I thought: why not just run Linux if you want something more advanced? Still, if someone wants to do it, I would be happy if there were a more 'advanced' version of Amiga OS. It would be our own Unix variant for users who want stability etc.
We could still have single core classic OS for max compatibility.
Title: Re: Reinvent the OS cont
Post by: trekiej on June 20, 2015, 04:03:33 PM
Is Aros 68K tied to the chip-set?
edit:
I think it would be good to have a Linux Distribution that is Amiga oriented.
I have seen fdisk or cfdisk have amiga file system in a list of types.
Title: Re: Reinvent the OS cont
Post by: Iggy on June 20, 2015, 06:54:04 PM
Linux is not well suited to hardware like the 68K.
And UNIX doesn't do that well on low-end hardware either.
I can't remember which, but there is a FreeBSD or OpenBSD port for the Amiga.
But I would prefer a fresh approach.
I have used Microware's OS-9 on 68K based systems, and I have been pretty impressed with the open 8-bit versions of this OS.
We could start with a similar micro kernel and build out.
That, along with a lot of other advantages, would give us a much better priority based task switching system.
True real time performance could be realized.
The expansion bus is tricky.
In the 80's and 90's I used systems with ISA busses.
This is much easier to interface than PCI, but it is also a lot slower.
I have a few PCI bridge chips here that were designed to interface 68K and older PPC processors to a PCI bus, but they're complex devices with a lot of software requirements.
You'd probably want to build a custom bridge chip using an FPGA.
Since it's well known, an R200 video card would be a good first choice.
And sound from something simple like an SB Live audio card.

So, that would be the core I'd recommend.
'030 to '060 processor, FPGA PCI bridge chip, micro kernel based OS.
Equip that with some of the cards we are already using.
And there is your starting point.
Title: Re: Reinvent the OS cont
Post by: kolla on June 20, 2015, 07:09:53 PM
Hohum, you have much experience with Linux on m68k? It always worked well for me. There is both NetBSD and OpenBSD for Amiga too, at least old releases.
Title: Re: Reinvent the OS cont
Post by: NorthWay on June 20, 2015, 07:50:31 PM
http://www.osnews.com/story/28636/A_reimplementation_of_NetBSD_using_a_microkernel

That was interesting I thought.

However, for an Amiga experience I would never start with a Unix style kernel that insists on mapping each and every process in its own address space and using overlapping virtual address ranges. There are alternatives targeting 64-bit architectures, going under the label SASOS though many of those are not all that well grounded in the real world.

Personally, I think CPU evolution has not been a good match for the Amiga model: CPUs are going out of their way to separate and protect, and very few have mechanics for sharing. I have heard that the old PA-RISC did have the odd feature that was along that line.
The rise and power of FPGAs and what a single person can do on his own gives me hope that someone might force the transistors to yield to their will and make this something I'm looking for.
Title: Re: Reinvent the OS cont
Post by: Gulliver on June 20, 2015, 10:22:04 PM
I would choose a port of DragonFlyBSD with ZFS on AMD x64 hardware + FS-UAE for backwards compatibility.

I would customize its GUI interface to be more Amiga-like and would not support all hardware devices around, but just the three or four more widely available from different vendors (so no wasted resources in trying to support every piece of hardware out there).

BTW, DragonFlyBSD has some distant Amiga heritage (Matt Dillon of Dice C fame is its author). And many concepts of its design are taken from AmigaOS.
Title: Re: Reinvent the OS cont
Post by: wawrzon on June 20, 2015, 11:47:59 PM
Aren't there enough Linux distributions for everybody to choose from? Simply, everyone who wants an advanced AmigaOS may take the one he likes and call it Amiga. Case solved. Right?
Title: Re: Reinvent the OS cont
Post by: Iggy on June 21, 2015, 12:17:20 AM
Quote from: kolla;791399
Hohum, you have much experience with Linux on m68k? It always worked well for me. There is both NetBSD and OpenBSD for Amiga too, at least old releases.


Yes, probably more than you, as I have been using 68000 processors since they were introduced.
And while linux may work for you, I'm not into pain.
Further, I've seen older implementations of Minix for the 68K that worked better than most Linux distros (and Minix isn't even meant to be anything other than an educational OS).
Title: Re: Reinvent the OS cont
Post by: Iggy on June 21, 2015, 12:27:03 AM
Quote from: NorthWay;791402
http://www.osnews.com/story/28636/A_reimplementation_of_NetBSD_using_a_microkernel

That was interesting I thought.

However, for an Amiga experience I would never start with a Unix style kernel that insists on mapping each and every process in its own address space and using overlapping virtual address ranges. There are alternatives targeting 64-bit architectures, going under the label SASOS though many of those are not all that well grounded in the real world.

Personally, I think CPU evolution has not been a good match for the Amiga model: CPUs are going out of their way to separate and protect, and very few have mechanics for sharing. I have heard that the old PA-RISC did have the odd feature that was along that line.
The rise and power of FPGAs and what a single person can do on his own gives me hope that someone might force the transistors to yield to their will and make this something I'm looking for.


One of the features I liked about OS-9 68K was the use of reentrant code. Multiple processes could call executable modules with only one copy of the program in memory.
The 68K has limited memory addressing (in most versions). The more efficiently we use memory the better.
The techniques that will benefit our processor aren't common to other platforms.
That is why a generic OS like Linux or UNIX will never maximise performance on the 68K.
Title: Re: Reinvent the OS cont
Post by: guest11527 on June 21, 2015, 07:22:13 AM
Quote from: Iggy;791410
One of the features I liked about OS-9 68K was the use of reentrant code. Multiple processes could call executable modules with only one copy of the program in memory. That is why a generic OS like Linux or UNIX will never maximise performance on the 68K.
Huh? First, how is this related to performance at all? It is probably related to available memory, but whether code is shared or not does not matter for speed.

Second, Linux mmaps executables, which means that it only loads the parts that require execution, on demand, when needed. AmigaOS has to load the full thing into memory first. Worse, if you want to re-use executables between calls, they have to be written properly ("pure"), with no check by the OS whatsoever. Why that's an improvement I don't know.
Title: Re: Reinvent the OS cont
Post by: Iggy on June 21, 2015, 01:33:17 PM
Quote from: Thomas Richter;791415
Huh? First, in how far is this related to performance? It is probably related to available memory, but whether code is shared or not does not matter.

Second, Linux mmap's exectuables, which means that it only loads the parts that require execution, on demand, when needed. AmigaOs has to load the full thing to memory first. Worse, if you want to re-use executables between calls, they have to be written properly ("pure"), with no check of the Os whatsoever. Why that's an improvement I don't know.

Yes, the primary advantage IS the reduction in memory usage.
I started out using systems with limited memory resources, so I'm still focused on tight, efficient code.
"Pure", "No check of the OS at all.."?
No, the only constraint I know of is that they can't be self modifying (and any programmer that uses that trick deserves to be flogged).
And there is no constraint on OS interaction, I have no idea where you got that idea.
Ideally, there would always be a background process keeping count of the timeslices allotted to each process.
What I'm talking about does bring a few constraints to programming, but it's also a lot tighter OS.

BTW - Since I have a Coldfire development platform on hand, I'm thinking about working on a kernel with that.
Title: Re: Reinvent the OS cont
Post by: guest11527 on June 21, 2015, 03:08:01 PM
Quote from: Iggy;791417
"Pure", "No check of the OS at all.."?
No, the only constraint I know of is that they can't be self modifying (and any programmer that uses that trick deserves to be flogged).
No, not quite. Except you understand "self modifying" differently than I do. A program consists of three types of "memory regions": "code", "data" and "blank space" (bss). In principle, you can write to whatever region you like, but "code" usually contains the program binary, and modifying that from within the program is what I would call "self modifying".

Unfortunately, that is only necessary but not sufficient for a program to be pure. It also must not write into the data or bss section, which - apparently - does not hold for your average program. Of course your average program wants to write to its data region, which typically contains C variables with program linkage (aka "global variables"), and of course you want to write to the bss space (otherwise, it would just stay blank).

So "pure" programs have to be a lot more careful. You essentially cannot use variables with program linkage (including library bases!), everything must be on the stack, or you must allocate from the heap yourself.

Some compilers come with startup code that creates, on each program start, a copy of the data region on the heap, including manual relocation of that copy of the data space. Needless to say, while this makes programs pure (most of the time at least), it also causes additional overhead.
Quote from: Iggy;791417
Ideally, there would always be a background process keeping count of they alloted timeslices that each process was allocated.
Why do you need a process for that? And keeping track of each single piece of memory a program allocated is usually too much overhead in the first place.

The way Linux works is quite different. Every program gets its own heap, and can do with this memory region what it likes. The program usually includes compiler library routines that implement memory allocation from the heap. This memory allocation is not done by the OS. The only exception is when the heap gets too small and requires resizing. Linux (and *ix operating systems in general) usually have a brk() or sbrk() system call to do that. But that's not a "memory allocator". It is usually several layers below malloc(), which is a C standard library function for maintaining the heap. If the program ends, the entire heap is thrown away.

If you start a new process, e.g. by the (infamous) fork(), the heap, stack and code are duplicated, and the separate copy continues execution behind fork(). (Actually, this is usually not a true copy, but rather depends on the "mad cow" trick, i.e. "copy on write".)
Quote from: Iggy;791417
What I'm talking about does bring a few constraints to programming, but its also a lot tighter OS.
Which makes it rather hard to implement any C type language on top of it, i.e. the "pureness" you seem to require. Bad idea, if you ask me.    
Quote from: Iggy;791417
BTW - Since I have a Coldfire development platform on hand, I'm thinking about working on a kernel with that.

If I may, I would probably suggest a good book on operating system design.
Title: Re: Reinvent the OS cont
Post by: kolla on June 21, 2015, 08:05:18 PM
Quote from: Iggy;791409
Yes, probably more than you, as I have been using 68000 processors since they were introduced.
And while linux may work for you, I'm not into pain.
Further, I've seen older implementations of Minix for the 68K that worked better than most Linux distros (and Minix isn't even meant to be anything other than an educational OS).

Yeah well. I have used Linux on Amiga since its birth, so to speak; the first kernel was 0.87 iirc, around the same time NetBSD 1.0 came around too, in 1994-95. Anyway, for more than a decade I had a little fleet of 68k machines running Linux, which I used quite extensively for all kinds of purposes: web servers, my personal mail, file servers for my AmigaOS systems, IRC servers etc., and building packages for my private Gentoo/m68k effort (an effort I intend to reignite soon). Linux/m68k vs AmigaOS I do have quite a lot of experience with. In general, on Linux, all kinds of IO is faster. Filesystem access is faster (I use ext4, but it was always true), and due to the way Linux caches the filesystem, it acts much smoother than most filesystems under AmigaOS. Networking is not only faster, but way more reliable and feature-rich on Linux. And of course online security is way better too.

That you think Minix works better on Amiga tells me two things - you have not spent much time using either.
Title: Re: Reinvent the OS cont
Post by: Retrofan on June 21, 2015, 08:15:56 PM
There is something in the works: Delicate Linux (http://delicate-linux.net/), and on May 29 he started to port it to Amiga.
Title: Re: Reinvent the OS cont
Post by: kolla on June 21, 2015, 08:23:27 PM
And in general, about UNIX on 68k, a small list from the top of my head of projects whose people thought the 68k was well suited for un*x usage:
* SunOS
* Domain/OS
* NeXT
* A/UX
* Atari SVR4
* AMIX
* Linux
* NetBSD
* OpenBSD
Title: Re: Reinvent the OS cont
Post by: kolla on June 21, 2015, 08:28:19 PM
Quote from: Retrofan;791438
There is something on the works; Delicate Linux (http://delicate-linux.net/) and on May, 29 he started to port it to Amiga.


Nice. I have also used OpenWRT earlier to build small uClibc based boot images for Amiga and Macintosh. For general use though, glibc is more convenient, albeit way more bloated :)
Title: Re: Reinvent the OS cont
Post by: Iggy on June 21, 2015, 08:41:37 PM
Quote from: Thomas Richter;791420
No, not quite. Except you understand "self modifying" different from me. A program consists of three types of "memory regions", "code", "data" and "blank space" (bss). In principle, you can write to whatever region you like, but "code" usually contains the program binary, and modifying that by the program is what I would call "self modifying".

Unfortunately, that is only necessary but not sufficient for a program to be pure. It also must not write into the data or bss section, which - apparently - is not granted for your average program. Of course your average program wants to write to its data region, which typically contains C variables with program linkage (aka "global variables"), and of course, you want to write to the bss space (otherwise, it would just stay blank.).

So "pure" programs have to be a lot more careful. You essentially cannot use variables with program linkage (including library bases!), everything must be on the stack, or you must allocate from the heap yourself.

Some compilers come with startup code that create on each program start a copy of the data region on the heap, including manual relocation of the copy of the data space. Needless to say, while this makes programs pure (most of the time at least), it also causes additional overhead.   Why do you need a process for that? And keeping track of each single piece of memory a program allocated is usually too much overhead in first place. The way how Linux works is quite different. Every program gets its own heap, and can do with this memory region what it likes. The program usually includes compiler library routines that implement memory allocation from the heap. This memory allocation is not by the Os. The only exception is when the heap gets too small and requires resizing. Linux (and *ix operating systems in general) usually have a brk() or sbrk() system call to do that. But that's not a "memory allocator". It is usally several layers below malloc(), which is a C standard library function for maintaining the heap.  If the program ends, the entire heap is thrown away. If you start a new process, i.e. by the (infamous) fork(), the heap, stack and code are duplicated, and the separate copy continues execution behind fork(). (Actually, this is usually not a true copy, but rather depends on the "mad cow" trick, i.e. "copy on write").    Which makes it rather hard to implement any C type language on top of it, i.e. the "pureness" you seem to require. Bad idea, if you ask me.    

If I may, I would probably suggest a good book on operating system design.


You have an interesting fixation on your own definition of "pure". And my definition of self-modifying code is completely valid: code that directly modifies the module it runs in. I never brought in data modules, and have no idea why you would.
As to OS structure, having used operating systems before Linux even existed, I'm pretty comfortable with my knowledge level.
Further, nothing I have mentioned limits the use of variables.
One thing you seem to lack is a basic understanding of kernel calls and operations.

So far, the most intelligent comment I have seen in this thread is the reference to DragonFly BSD.
The hybrid kernel used in that comes close to what I have suggested.
The micro kernel in MorphOS comes closer.

All I have suggested beyond that is reentrancy.
And that is not nearly the burden you seem to think it is.

Maybe you should examine how code used to be written. Then you might have a better understanding of this concept.

Maybe not, after all most Linux and UNIX users tend to gloss over the flaws inherent in their OS' of choice (like the disadvantages in using a monolithic kernel).
Title: Re: Reinvent the OS cont
Post by: Iggy on June 21, 2015, 08:46:35 PM
And...to get past this point of contention, drop reentrancy as a goal.
The other points remain valid.
Like DragonFly, a nested kernel with a micro kernel core.
Basic rules that eliminate poor coding techniques that greatly complicate debugging (like self modifying code).
Title: Re: Reinvent the OS cont
Post by: kolla on June 22, 2015, 02:25:34 AM
Quote from: Iggy;791443
And...to get past this point of contention, drop reentrancy as a goal.
The other points remain valid.
Like DragonFly, a nested kernel with a micro kernal core.
Basic rules that eliminate poor coding techniques that greatly complicate debugging (like self modifying code).


Oh lordy, please go on fantasizing about the DragonFly "micro kernal", it sounds orgasmic!
Title: Re: Reinvent the OS cont
Post by: Iggy on June 22, 2015, 02:51:33 AM
Quote from: kolla;791461
Oh lordy, please go on fantasizing about the DragonFly "micro kernal", it sounds orgasmic!

Actually, micro kernel OS' are no fantasy. QNX, DragonFly, Microware OS9, MorphOS, and almost all process control OS' (as well as plenty of other examples).
And unlike OS' based on a monolithic kernel, many of these can operate in true real time.
Another feature, which Linux has just caught up to, is the ability to dynamically load and unload drivers (much easier when the drivers reside outside the core of the kernel).
And btw, I have plenty of experience with Linux, which is why I am not fond of it.
No great surprise that an Amiga user would be satisfied with Linux as the Amiga never had real time response or stable operation either.
And if we are to continue a legacy of tight, compact software then Linux is definitely to be avoided.
I grew up during the period Bell Labs and our nations universities were developing UNIX. Linux is little more than the illegitimate child of UNIX.

Don't you think we can do better than an OS with a 50 year history?
Linux reminds me of Java or Flash in that it isn't that great, but its momentum seems to keep it going.

And you guys can try to attack my credentials, but I was building SWTPC equipment before most of you were born.
Before we really had OS'.
I have worked for a company that built its own 68k based system.
And I had a hand in writing the video drivers for that system and porting a GUI to it.
So...diddle around with your Amigas as much as you like (I'm fond of the platform too), but I have already seen ways where 68k based systems could be improved.
Title: Re: Reinvent the OS cont
Post by: Iggy on June 22, 2015, 03:03:37 AM
Now...could we get ON with this discussion.
I am sure the majority of us would be unwilling to trade what we have for yet another Linux derivative.
So, if we assume a fresh start, with some coding done at levels as low as assembly, all in the name of efficiency - what features would that OS have?

I have already outlined my ideas.
How about a few from all of you that don't suggest a retread of something tired?
Title: Re: Reinvent the OS cont
Post by: Iggy on June 22, 2015, 03:22:23 AM
BTW - You all ARE aware that Amiga OS is considered a micro kernel OS, aren't you?
Title: Re: Reinvent the OS cont
Post by: ElPolloDiabl on June 22, 2015, 03:33:23 AM
There were a lot of advanced topics covered there. Good job. Could you just pick an OS that has the most software already available?
Title: Re: Reinvent the OS cont
Post by: Iggy on June 22, 2015, 03:41:43 AM
Quote from: ElPolloDiabl;791465
There was a lot of advanced topics you covered. Good job. Could you just pick an OS that has the most software already available?


Well heck, then we'd have to go with Windows, OSX, or Linux, and I don't want to do that.
Even Amiga OS has a better software base than what I'm suggesting.
This is where Thorham and I agree, the lack of software would be a major disadvantage.
Title: Re: Reinvent the OS cont
Post by: ElPolloDiabl on June 22, 2015, 04:31:29 AM
I like the OS-9 microkernel approach. Should you have software ready to run on it when it's done? E.g. goals:
Networking
File manager
are some suggestions.
Title: Re: Reinvent the OS cont
Post by: trekiej on June 22, 2015, 04:44:41 AM
If there was a community-built 68K computer, what bus do you think would be good to use?
VME, Zorro, S-100, etc.
Title: Re: Reinvent the OS cont
Post by: guest11527 on June 22, 2015, 08:31:38 AM
Quote from: Iggy;791444
You have an interesting fixation on your own definition of "pure".
This is not my definition of "pure". It is the AmigaDos definition of "pure". Sounds strange? Maybe. But I did not design this stuff.
Title: Re: Reinvent the OS cont
Post by: Iggy on June 22, 2015, 05:25:29 PM
Quote from: Thomas Richter;791477
This is not my definition of "pure". It is the AmigaDos definition of "pure". Sounds strange? Maybe. But I did not design this stuff.

My apologies. I am not here for contention.
Obviously much of what I am considering would definitely be "impure".
And after thinking about your posts, reentrancy isn't really a necessity with as much memory as we can use.
With a 1.5 GB memory range, and an efficient processor, the added complications may not be worth it.
Title: Re: Reinvent the OS cont
Post by: kolla on June 22, 2015, 07:13:17 PM
Iggy, I have issues with you calling the DragonFly kernel a micro kernel, it is not a micro kernel design, it is a hybrid at best, with some features most known from micro kernel environments. If DragonFly is a micro kernel, then FreeBSD is too, or any *ix that pushes features out in userland. Do you consider Darwin a micro kernel OS? It used to be.

I find it funny how DragonFly keeps coming up as a buzzword among Amiga users, but how many of those mentioning it have actually used it?! For the record, I have, at work even, evaluating HAMMER for possible use in production environments. I'm eagerly awaiting HAMMER2 to evaluate for a possible multi-master cluster FS solution.
Title: Re: Reinvent the OS cont
Post by: kolla on June 22, 2015, 07:14:45 PM
Quote from: trekiej;791471
if there was a community built 68k computer, what bus do you think would be good to use?
Vme, zorro, s100, etc.

usb3 ;)
Title: Re: Reinvent the OS cont
Post by: kolla on June 22, 2015, 07:18:24 PM
Quote from: Iggy;791464
BTW - You all ARE aware that Amiga OS is considered a micro kernel OS, aren't you?


No it is not, it is an exokernel, that much is obvious.
Title: Re: Reinvent the OS cont
Post by: Iggy on June 22, 2015, 09:33:06 PM
Quote from: kolla;791490
Iggy, I have issues with you calling the DragonFly kernel a micro kernel, it is not a micro kernel design, it is a hybrid at best, with some features most known from micro kernel environments. If DragonFly is a micro kernel, then FreeBSD is too, or any *ix that pushes features out in userland. Do you consider Darwin a micro kernel OS? It used to be.

I find it funny how DragonFly keeps coming up as a buzz word among amiga users, but how many of those mentioning it has actually used it?! For the record, I have, at work even, evaluating HAMMER for possible use in production environments. Eagerly awaiting HAMMER2 to evaluate for a possible multimaster cluster FS solution.


Yes, it's not a true micro kernel. It has layers. I'm used to that with other true micro kernel OS'. I've mentioned OS-9 before; its kernel grew into two distinct segments in its last 8-bit version and remains fairly large in the 68k version.
But you're right, DragonFly really stretches the definition, and it's more of a hybrid.
And it's not the first UNIX or Linux variant that has tried this.
Still, if we were to approach this it would have to be a clean start.
Really, all we are talking about is applying some of the improvements that have become common in process control OS' since Amiga OS was created.
We can definitely improve on real time response and reliable multi-tasking.
Having a GUI that was OpenGL ES compatible would be nice.
I don't know enough about audio standards to make any real suggestion, except maybe to have a coprocessor to help lighten that task.
Other issues, what processor ('030, '040, '060, or FPGA), what type of memory, and what kind of expansion bus?
So, again, any ideas?

All I'm sure about is the kernel, since a micro kernel is our legacy, our current reality (as in MorphOS, and probably OS4), and our likely future.

Oh, and one suggestion for video.
While it's old, we know enough about the R200 to make that an easy target.
Title: Re: Reinvent the OS cont
Post by: Fats on June 23, 2015, 02:21:02 PM
Quote from: kolla;791490
I find it funny how DragonFly keeps coming up as a buzz word among amiga users, but how many of those mentioning it has actually used it?!


I do have it installed in a VM but did not actually use it much. I'm more a Linux guy than a *BSD guy.
The reason DragonFlyBSD is mentioned is of course because of Matt Dillon who has a history in the Amiga world. (People who know what a Fish disk is should also know Matt and DICE).
He did base some of the internals of DF on the Amiga architecture. AFAICS they are not exported to user land, but I think that is the first thing to do for a rethought, incompatible Amiga-like OS.
Personally I think anyone who wants to come up with a totally new OS architecture is DOA (dead on arrival). The OS market is mature enough that I think it is impossible to gain traction with something totally different (see it like the flying cars etc.). GNU bootstrapped on commercial UNIX, Linux bootstrapped on GNU, Android bootstrapped on Linux, all the *BSDs bootstrapped on the original Berkeley UNIX, MINIX is now using NetBSD, Windows NT has links with VMS although there are debates on how strong they are, ...

Although my AROS dev work (any Amiga dev work, actually) is really low to non-existent at the moment, one of the goals for me was to see how far one could go with the Amiga architecture and API without breaking backwards compatibility.
Title: Re: Reinvent the OS cont
Post by: Iggy on June 24, 2015, 02:08:36 AM
Understandable Staf,
I'm still unsure that a break with compatibility is such a hot idea.
I'm a bit of a Neanderthal when it comes to MorphOS.
I'm in no hurry to see an ISA change, and I'd like to see Abox remain intact while we pursue a second concurrent environment that could make use of the features that would break compatibility (SMP, larger memory maps, etc).

Hey Kolla,
What is an exokernel? I don't think I have heard that term before. Educate me.

Oh...I looked that up. It's a very apt description for the Amiga kernel.
Title: Re: Reinvent the OS cont
Post by: Iggy on June 24, 2015, 02:30:39 PM
Quote from: ElPolloDiabl;791469
I like the OS-9 microkernel approach. Should you have software to run on it as it's ready? E.g. goals:
Networking
File manager
are some suggestions.

We had those running in the early 90's.
What we could have used was a better GUI.
We ported an X-windows based package called G-Windows that had been created by Steve Adams for a company that specialized in process control systems, Gespac.
It worked, but like a lot of Linux (and virtually all UNIX) derivatives you still had to use the CLI to get some things done.
Personally, while I can see the utility in a terminal window, I'd prefer that it be the instrument of last recourse (you should be able to do virtually everything you need through the GUI).

Oh, and with all deference to Staf, I'm still farting around with obscure microkernel OSes.
We have to remember that Linux only makes up about 1% of the marketplace.
BSD, considerably less.
Amiga like OS'? Probably an insignificant percentage.
So what?
Does that mean I have to use a more common solution?
No, not if what I am using provides me with the utility I need.

I'm sorry to see good, committed AROS developers dropping out just as the project is reaching completion.

Drawing users to a new 68K OS? Probably not too realistic, but I'd like to have it to play around with. And a Coldfire version would be an easy port.
With the Coldfire V1 cpu available as a royalty free FPGA core and cores for the 68K steadily improving, future hardware could be built with better performance than our legacy hardware.
Title: Re: Reinvent the OS cont
Post by: trekiej on June 24, 2015, 04:14:21 PM
I hope this question is not too outdated.
Video games aside, does the Amiga need a lot to do productivity software?
Would it need a special card for video editing and effects?
1983 posts
Title: Re: Reinvent the OS cont
Post by: Iggy on June 24, 2015, 05:06:20 PM
Quote from: trekiej;791545
I hope this question is not too outdated.
Video games aside, does the Amiga need a lot to do productivity software?
Would it need a special card for video editing and effects?
1983 posts

Yes, it requires additional hardware, but at one time the Amiga was the leader in low cost video manipulation.
Although, these days, with the advent of digital video, it's quite dated.
Title: Re: Reinvent the OS cont
Post by: kolla on June 24, 2015, 05:07:26 PM
This is depressing, nothing new here since the '90s
Title: Re: Reinvent the OS cont
Post by: kolla on June 24, 2015, 05:17:19 PM
Iggy, what market place? All Android devices are Linux, all ChromeOS devices are Linux, most SOHO networking gear, NAS etc are Linux... embedded Linux is taking over the world. And if it isn't Linux, it is a BSD, all iOS devices, OSX, JunOS, more SOHO equipment, storage solutions... pretty much the entire world is running on various types of *ix.

Your 1% is meaningless!
Title: Re: Reinvent the OS cont
Post by: Iggy on June 24, 2015, 06:07:05 PM
Quote from: kolla;791550
Iggy, what market place? All Android devices are Linux, all ChromeOS devices are Linux, most SOHO networking gear, NAS etc are Linux... embedded Linux is taking over the world. And if it isn't Linux, it is a BSD, all iOS devices, OSX, JunOS, more SOHO equipment, storage solutions... pretty much the entire world is running on various types of *ix.
Your 1% is meaningless!


I guess I meant traditional computers.
But these days other devices have blurred that line.
I probably browse the internet on my phone more often than on my computer (and yes, it's an Android phone).
And Android seems to have a lock on all tablets that aren't made by Apple.
Further, OSX is BSD, sort of anyway, although I'd tend to place it in its own category (as I would iOS).
If you start looking for UNIX everywhere you'll soon realize that even Windows has some UNIX legacy.

But does that make it good?
UNIX was designed when I was growing up; its legacy is about 50 years old.
And it never was that efficient, nor is it anything like real time.
I remember Xenix running on 68K computers; Amigas would run circles around those in terms of performance.
And the process control OS' I used to be focused on, while having some similarity to UNIX, were all designed for real time mission critical applications.

To be pointed, I'm tired of all this monolithic buggy crap I'm being encouraged to use.
Even my Samsung phone periodically wigs out and has to be restarted.
And Windows and OSX, while not nearly as problematic as in the past, also still occasionally crash.  

As to the 1%, as a member of the definitely sub 1% (as are all Amigans), I actually feel pretty comfortable in the belief that things could be SO much better.
Title: Re: Reinvent the OS cont
Post by: trekiej on June 24, 2015, 06:26:05 PM
I believe the Amiga is a great computer platform and do not want people to think that I am cutting down on it.
I believe the FPGA Amigas are a partial answer to the lack of hardware.
Sorry, I keep defaulting back to Aros Native as an answer to modern needs.
My Christmas wish is for Aros 68K or Amiga OS 3.X to have 3D acceleration in emulation.
Then what?
More Devs?
IDK.
edit:
This is what I like about Hollywood's latest release.
I like the idea of writing software for Amiga OS and have it easily ported to Aros, etc.
I am still leaning on the idea of Aros, Amiga OS 4.x and Morph OS as being good paths to take.
Aros first because of the availability of hardware.
If Amiga OS4.X is Amiga, then Amiga still exists, at least in some form.
Unfortunately I have not had the pleasure of using Morph OS.
It would be interesting to see an add-in card for Aros that could act as a Toaster-like card.
That is, it would be awesome if Aros could do 1080p and up to 4K for video editing.
Could a program made in Hollywood do this?
Title: Re: Reinvent the OS cont
Post by: Iggy on June 24, 2015, 09:08:08 PM
Quote from: trekiej;791556
I believe the Amiga is a great computer platform and do not want people to think that I am cutting down on it.
I believe the FPGA Amigas are a partial answer to the lack of hardware.
Sorry, I keep defaulting back to Aros Native as an answer to modern needs.
My Christmas wish is for Aros 68K or Amiga OS 3.X to have 3D acceleration in emulation.
Then what?
More Devs?
IDK.
edit:
This is what I like about Hollywood's latest release.
I like the idea of writing software for Amiga OS and have it easily ported to Aros, etc.
I am still leaning on the idea of Aros, Amiga OS 4.x and Morph OS as being good paths to take.
Aros first because of the availability of hardware.
If Amiga OS4.X is Amiga, then Amiga still exists, at least in some form.
Unfortunately I have not had the pleasure of using Morph OS.
It would be interesting to see an add-in card for Aros that could act as a Toaster-like card.
That is, it would be awesome if Aros could do 1080p and up to 4K for video editing.
Could a program made in Hollywood do this?

Well, I'm convinced that any OS following the 3.1 API is essentially Amiga.
This will offend the purists, but the hardware was getting outdated at the end of production anyway and we were moving to RTG and accelerator cards at that time.
The thing that baffles me is why more people aren't proud of the fact that, primarily through community support, we still continue to soldier on.
And thanks to Cloanto, we now have a valid source for OS3.1.
Given time, AROS 68K may become competitive with that.
And the NG machines continue to mature.
I expect to see 4K displays on all three platforms become relatively common.
Title: Re: Reinvent the OS cont
Post by: Fats on June 24, 2015, 09:13:31 PM
Quote from: Iggy;791541
I'm sorry to see good, committed AROS developers dropping out just as the project is reaching completion.


FYI, I did not drop out; I'm just hibernated ;)
Title: Re: Reinvent the OS cont
Post by: trekiej on June 24, 2015, 09:52:14 PM
1984 posts.
Amiga Spirit?
Hmmm, I am not sure what to say.
I believe people follow the money.
What do you think makes Aros better than Windows or Linux?
That is an old question.
Commercialization and popularity are two interesting factors that need to be dealt with.
Driving force is needed to make it move forward.
I know that there are some that are very dedicated.
I am sure they deserve a lot of credit.
Sorry to all, I do not read all the posts as close as I should.
(No excuse)
Have a great day.
Title: Re: Reinvent the OS cont
Post by: kolla on June 24, 2015, 10:27:55 PM
Quote from: Iggy;791554
I guess I meant traditional computers.

Those are all traditional computers in all respects. HPC, telecommunications, networking, servers, terminals - all levels of computing are dominated by unix derivatives.

Quote
But these days other devices have blurred that line.

No, they are the same type of devices, nothing fundamental has changed.

Quote
I probably browse the internet on my phone more often than on my computer (and yes, its an Android phone).
And Android seems to have a lock on all tablets that aren't made by Apple.
Further, OSX is BSD, sort of anyway, although I'd tend to place it in its own category (as I would iOS).

Why? There is nothing fundamentally different between OSX and any other commercial UNIX, there is nothing fundamentally different between an iPad with iOS and a Zaurus with Qtopia.

Quote
If you start looking for UNIX everywhere you'll soon realize that even Windows has some UNIX legacy.

Oh really - apart from the old TCP/IP stack that has long since been replaced, what would that be? Didn't we all agree 20 years ago that NT is VMS with buttons?

Quote
But does that make it good?

Yes it does.
Title: Re: Reinvent the OS cont
Post by: Iggy on June 24, 2015, 11:51:40 PM
Quote from: Fats;791565
FYI, I did not drop out; I'm just hibernated ;)

Hey Staf,
If it wasn't for you I never would have heard of Battle Beast, so you've got a pass.
Besides, you really put in some serious time working with others on AROS, so you ought to be allowed a break.
Title: Re: Reinvent the OS cont
Post by: Iggy on June 25, 2015, 12:03:14 AM
Yes Kolla,
Many, quite rightly, pointed out NT's VMS legacy.
But VMS has, over the years, become quite similar to UNIX.
And I'm not sure that EITHER is a particularly good product.

As to what you think is good...hey opinions...you know...

And to reiterate, I would prefer a smaller, more modular approach, once again based on a micro kernel.
Title: Re: Reinvent the OS cont
Post by: kolla on June 25, 2015, 01:02:14 AM
So, enjoy Hurd!
Title: Re: Reinvent the OS cont
Post by: kolla on June 25, 2015, 01:07:10 AM
Don't tell me you are an OpenVMS Galaxy operator as well - what on earth do you mean by "VMS has become similar to UNIX"?!
Title: Re: Reinvent the OS cont
Post by: kolla on June 25, 2015, 01:18:22 AM
And UNIX is not as much "a product" as it is a set of standards, APIs and even a philosophy for how operating system components should work. Interoperability is at its center, and sure, anyone can deviate in any direction they want, as long as they gather around those standards and APIs that pretty much define UNIX - because it is the best for everyone; being different for the sake of being different has no merit whatsoever.

I personally have experience with Linux on m68k outperforming AmigaOS on basic things like disk and filesystem access, and networking, and with video playback on a Mac Mini outperforming that of MorphOS - and all I did was optimize kernels and gcc parameters for the respective hardware, compiling everything myself instead of using generic Debian.
Title: Re: Reinvent the OS cont
Post by: Iggy on June 25, 2015, 02:10:40 AM
Quote from: kolla;791576
Don't tell me you are a OpenVMS Galaxy operator as well - what on earth do you mean by "VMS has become similar to UNIX"?!

Ah, you DO know at least one of my causes of disdain for VMS' creators.
That tendency to absorb what they didn't originate.
Come to think of it, that's a LOT like Windows.

But if that reflects badly on VMS, how do you excuse Linux?
Since it is a blatant copy of UNIX.

And damn man, UNIX just isn't that great.

All of this tends to point to one problem, a serious lack of real creativity.

Everyone copying from each other.

There are times when I have a hard time telling modern OS' from the Sun systems some of my supervisors were using 20 years ago.
Title: Re: Reinvent the OS cont
Post by: kolla on June 25, 2015, 05:20:32 AM
Yeah well, operating systems are not developed out of creativity, but out of necessity and productivity. Even AmigaOS - very little creative about it.
Title: Re: Reinvent the OS cont
Post by: guest11527 on June 25, 2015, 07:06:22 AM
Quote from: Iggy;791554
If you start looking for UNIX everywhere you'll soon realize that even Windows has some UNIX legacy.
Huh, since when? Windows had at best some DOS legacy, which was a quick and dirty copy of CP/M (by name: DOS was QDOS when MS bought it cheap and sold it expensive to IBM), which was remotely inspired by Unix. But that DOS legacy went away with the arrival of Windows NT and Windows 2000. The Millennium edition is the last in line with DOS underneath.
Quote from: Iggy;791554
UNIX was designed when I was growing up; its legacy is about 50 years old.
And it never was that efficient, nor is it anything like real time.
Unix was actually pretty efficient, being itself a pun on Multics, a much heavier operating system that never really came to life. Unix was designed for stability and maintainability, for a system you would not have to reboot every hour. Quite unlike the toy system AmigaOs is.
Quote from: Iggy;791554
I remember Xenix  running on 68K computers and Amigas would run circles around those in terms of performance.
Performance has its price. The design goals of Unix were actually to have an Os for operating a telephone network (namely, AT&T's), and not to create a toy for consumers (AmigaOs). That certainly implies that the Os has to have a couple more features, and different features, that cost performance.
Quote from: Iggy;791554
To be pointed, I'm tired of all this monolithic buggy crap I'm being encouraged to use.
That's part of the kernel design. Actually, it is not that bad these days anymore given that you have kernel modules, but what tires me the most is the absence of any stable interface between kernel modules and the core kernel code in Linux - this interface changes on a daily basis, essentially preventing vendors from programming against a stable API. Either you, as a vendor, have to continuously clean up behind the kernel hackers and adapt to the latest API changes, or you have to release your source code and have somebody else do the dirty work for you.

One way or another, it is an approach that does not scale very well. It only works because a lot of people put a lot of time into keeping the kernel running.

What also bothers me is a lot of API junk to support removable media. Currently, it is a hard-to-grasp stack of layers, from udev and dbus up to the desktop, to support the trivial, namely to allow me to insert a CD at any time and have it appear on the desktop. This stack is just there because *ix does not have, by design, a good interface for removable devices, simply because the original design did not need it. After all, you don't insert disks into your telephone server every five minutes, but rather want to have that under the control of an operator.

Raw speed, however, does not really bother me a lot these days. Processors are fast enough, and I appreciate the additional stability due to additional checking and protection mechanisms in the Os.
Quote from: Iggy;791554
Even my Samsung phone periodically wigs out and has to be restarted.
And Windows and OSX, while not nearly as problematic as in the past, also still occasionally crash.  
But how is that related to the kernel design? It actually is a matter of implementation quality, and AmigaOs is not exactly a shining example here either.  
Quote from: Iggy;791554
As to the 1%, as a member of the definitely sub 1% (as are all Amigans), I actually feel pretty comfortable in the belief that things could be SO much better.

What "better" is depends on your needs and goals. Yes, there might be better Os designs than Linux, which is certainly a lot better than the design of AmigaOs (which has, admittedly, here and there some good ideas hidden under a pile of bad ideas). But whatever you design, the market chances of any "just another Os" are so minimal that I don't even bother. As said, if you want to waste your time, go ahead and be my guest. But I'm not placing a bet on such a project ever producing anything usable.
Title: Re: Reinvent the OS cont
Post by: KimmoK on June 25, 2015, 07:44:12 AM
Elsewhere I posted about modern (64bit+more) Amiga(like)OS:

>If updating is too hard a task to do (as it seems to be, taking lifetimes)...
>Take the loved features and add new ones, compile...

I personally vote for totally new os branch where 3.1API or legacy compatibility is not the key feature. I can always reboot to legacy compatible AOS4 or MorphOS or use 68k via some runinUAE or JanusUAE kind of solution.

I want an OS that behaves like AOS (with an Exec-style task scheduler), but that can address more than 4GB of RAM, use multiple CPU cores and utilize memory protection.

What should be the starting point?
1) OPEN, to gain community support
2) NG API?
3) ?

What should be the priority-HIGH features...
-future proof design
-?

Priority-MED:
-Must it compile to 32bit CPU? IMO: no.
-Must it compile to several 64bit CPU ISA: yes
...

Priority-LOW:
-wrappers and tools to help recompiling PPC and 68k legacy sources
-legacy compatibility sandboxes (perhaps a UAE-like heavy one and some light version targeting recompiled non-memory-protected legacy)
-posix/linux sandbox
Title: Re: Reinvent the OS cont
Post by: guest11527 on June 25, 2015, 07:48:28 AM
Quote from: KimmoK;791584
I personally vote for totally new os branch where 3.1API or legacy compatibility is not the key feature. I can always reboot to legacy compatible AOS4 or MorphOS or use 68k via some runinUAE or JanusUAE kind of solution.

I then wonder why (or if?) that has to be 68K. I mean, take a PC (which is much more modern) and try to implement something there. How is this then related to Amiga?
Title: Re: Reinvent the OS cont
Post by: KimmoK on June 25, 2015, 08:34:42 AM
@Thomas Richter
>I then wonder why (or if?) that has to be 68K.

I thought the topic was about Amiga(like)OS in general.

IMO it's futile to "Reinvent the OS" for 68k.

>How is this then related to Amiga?

I have experienced the 32bit 68k to its limits.

I would like to see and use a next generation Amiga(like)OS, as current offerings either A) are not amigalike or B) are limited to 31bit + shared memory + one core.
Title: Re: Reinvent the OS cont
Post by: Iggy on June 25, 2015, 01:19:16 PM
@ Thomas

I appreciate the history lesson, but I lived through that period, so I've already got plenty of memories of it.
I don't really think that NT-derived OS' have a true DOS legacy; they are actually pretty bad at running DOS applications. Not that that is a bad thing. I had to work with early PCs, and frankly, DOS was dreadful.
One time, I remember typing in a long string of commands that should have worked, only to find that DOS (unlike any decent OS) couldn't handle complex command lines.
Come to think of it, I'm not that impressed with NT's use of flat databases/registries. DLLs are a total pain in the ass.

And you're right, the Amiga kernel leaves much to be desired (as does large parts of the OS), which is why I'm using MorphOS (not Amiga OS4).
Which could bring me back to raving about micro kernels...but even I'm bored with that.

Actually, I'm not THAT negative about UNIX/BSD/Linux. Compared to the alternatives, there are clear advantages.

It's just that I CAN see where a smaller, more tightly constructed, modular OS would have its advantages.
Title: Re: Reinvent the OS cont
Post by: guest11527 on June 25, 2015, 06:24:40 PM
Quote from: Iggy;791595
One time, I remember typing in a long string of commands that should have worked, only to find that DOS (unlike any decent OS) couldn't handle complex command lines.
The syntax of the DOS command line is totally screwed. Even more screwed than the Amiga shell (which is also weird). M$ would probably argue that you should use PowerShell (it has a sane syntax, but simple things can be surprisingly complicated).
Quote from: Iggy;791595
DLLs are a total pain in the ass.
As are Amiga libraries. Or Linux .so files. Same problem: they create dependencies between software packages that are hard to resolve. On Linux, you have a package manager. On AmigaOs, you can at least move the libraries into local places, but you really have to clean up your LIBS: directory. There are a couple of new ideas like "Docker" where all the libs plus the program are isolated in a container. Somehow, it makes libraries absurd (couldn't the author have linked the binary statically, avoiding the dependency in the first place?), so this is a bit strange.
Title: Re: Reinvent the OS cont
Post by: trekiej on June 25, 2015, 10:39:22 PM
The Amiga needs to be put back into the Professional Arena.
Title: Re: Reinvent the OS cont
Post by: matthey on June 25, 2015, 11:51:03 PM
Quote from: KimmoK;791586
I have experienced the 32bit 68k to its limits.

No. You can only have experienced the 32bit 68k to its 1994 limits, and probably not even that. Did you experience x86 32bit limits with a Pentium 1? I think not. The 32bit 68k could use extended memory (like XMS for x86) giving several GB of memory per task, it could have partial memory protection and isolation, it could have multiple cores and threads, it could have much larger caches, it could have a SIMD unit, it could have a more efficient CPU design, and it could have a more efficient ISA and ABI. Good Amiga compatibility is possible (although most new features would require AmigaOS changes), where the 64bit pointers of a little endian 64bit processor would give none except UAE. Even x86_64 isn't a particularly good 64bit CISC ISA. I could create a better 64bit 68k-like OpenCISC ISA to go with a reinvented Amiga-like OS if you want to discard all the baggage (and compatibility), but would it not be a waste of time? There are more risks than money for investment to challenge the oligopolies.
Title: Re: Reinvent the OS cont
Post by: KimmoK on June 26, 2015, 08:52:01 AM
@matthey

Yes, I meant the limits of what 68060 offered.

Possible 68k-NG can be better, yes.

(by using a busboard/backplane kind of design to hold the GPU+CPU etc., a 68k-NG could be inserted in place of the Motorola 68k / PPC when/if available one day)
Title: Re: Reinvent the OS cont
Post by: KimmoK on June 26, 2015, 08:54:53 AM
Quote from: trekiej;791615
The Amiga needs to be put back into the Professional Arena.


That would be a very, very nice thing.

Some Building blocks:
1) robust HW in production
2) robust, capable OS
3) mainstream level developer tools

1 is easy, 2 and 3 not so.
Title: Re: Reinvent the OS cont
Post by: guest11527 on June 26, 2015, 10:37:03 AM
Quote from: KimmoK;791636
1 is easy, 2 and 3 not so.

No, not at all. You seem to look at this problem from the perspective of a software engineer, but that's not quite appropriate. Building stable hardware requires stable financing, allowing vendors to verify their products sufficiently.

Remember the first PPC chipsets from Motorola? Pretty buggy stuff. It's not that you can buy M68K or PPC hardware next door. It's not exactly mass-produced.
Title: Re: Reinvent the OS cont
Post by: KimmoK on June 26, 2015, 11:35:21 AM
Quote from: Thomas Richter;791646
No, not at all. You seem to look at this problem from a perspective as a software engineer, but that's not quite appropriate. Building stable hardware requires stable financing, allowing vendors to verify their products sufficiently.  

Remember the first PPC chipsets from Motorola? Pretty buggy stuff. It's not that you can buy M68K or PPC hardware next door. It's not exactly mass-produced.

Well, I'm pretty sure that with today's SoC chips, the HW challenge is easier than the SW anyway.
(for example for a ARM, PPC or x86 product)

About three years ago I saw HW like this (e.g. the BeagleBone) being used professionally by a "multi-billion-dollar" company.

+++
thinking about a new slogan... if a design is not readable in a 120x120 pixel avatar area, is the design too complex?
No, actually it just means I have to use DPaint instead.
Title: Re: Reinvent the OS cont
Post by: Iggy on June 26, 2015, 05:28:14 PM
Nice to see you guys still knocking this one back and forth.

I like Kimmok's ideas.
Extending the 68K's addressing capability to 64 bits could get tricky, but legacy compatibility IS slowing us down.
I'd also really like to know what the max speed of a 68K core would be with a better grade FPGA or dedicated silicon.
Title: Re: Reinvent the OS cont
Post by: trekiej on June 26, 2015, 06:43:12 PM
Iggy did you get my pm?
Title: Re: Reinvent the OS cont
Post by: Fats on June 28, 2015, 12:15:51 PM
Quote from: Thomas Richter;791583
... but what tires me the most is the absence of any stable interface between kernel modules and the core kernel code in Linux - this interface changes on a daily basis, essentially disallowing vendors to program against a stable API. Either, you as a vendor have to continuously clean up behind the kernel hackers and adapt to the latest API changes, or you have to release your source code and have to have somebody else do the dirty work for you.

One way or another, it is an approach that does not scale very well. It only works because a lot of people put a lot of time into keeping the kernel running.


It is only a problem, as you say, if you don't want to release the source code of your driver, so most of the kernel devs see that as a feature and not a problem.
Once you have done the job of getting your code into the kernel, it will be maintained. In the meantime you have likely also improved the quality of your code considerably.
Title: Re: Reinvent the OS cont
Post by: guest11527 on June 28, 2015, 12:51:00 PM
Quote from: Fats;791751
It is only a problem as you say if you don't want to release the source code of your driver so most of the kernel devs see that as a feature and not a problem.
You forget one thing. Even if I release the source code, it means that the source requires continuous updating and maintenance to keep it working with the kernel. Now, which company wants to make this commitment, namely to not only release source, but also employ engineers to continuously update the source whenever the kernel interface changes, i.e. with every release?
Quote from: Fats;791751
Once you have done the job of getting your code in the kernel it will be maintained.  
Using the passive here means that you don't know who maintains it. It is like throwing source code over a wall and hoping somebody picks it up, and that this "somebody" has enough knowledge and experience to do a good job. That is a problem for professional software engineering.  

If you are lucky, you do not need further commitment. If you are not so lucky, your driver will stop working with the next kernel release and your investment is lost.
Title: Re: Reinvent the OS cont
Post by: Iggy on June 28, 2015, 01:03:14 PM
Quote from: trekiej;791672
Iggy did you get my pm?


Sorry for the delay. I just noticed that.

Oh, and these last few posts only reinforce my opinion that some things should not be in the kernel.
And Thomas is 100% correct: the interface should be stable and completely documented for programmers.
If the core is changed regularly, what does it point to other than that it had issues or simply wasn't what it could be?

Carve it down to its minimum, and move anything unnecessary to loadable modules.
It would have lots of advantages, but Torvalds doesn't understand this.
And THAT makes me distrust his skill as a software engineer.

Copying an OS is easier than creating one.
Even with all the constant change, I don't see that much improvement in Linux.
Title: Re: Reinvent the OS cont
Post by: kolla on June 28, 2015, 04:22:27 PM
Quote from: Thomas Richter;791754
You forget one thing. Even if I release the source code, it means that the source requires continuous updating and maintaining to keep it working with the kernel. Now, which company wants to make this commitment, namely not only release source, but also employ engineers to continuously update the source whenever the kernel interface changes - i.e. with every release.


I can think of quite a few; the most notable perhaps is Microsoft. But really, you are totally overstating this "problem": changes between releases are typically small, and big changes in subsystems are announced and discussed to death long before they are actually implemented in a release.

Quote

Using the passive here means that you don't know who maintains it. It is like throwing source code over a wall and hoping somebody picks it up, and that this "somebody" has enough knowledge and experience to do a good job. That is a problem for professional software engineering.  

If you are lucky, you do not need further commitment. If you are not so lucky, your driver will stop working with the next kernel release and your investment is lost.


So your company wants to have drivers in the kernel, but you are too cheap and/or lazy to actually maintain them? Well, too bad; maybe you should find a different solution than a custom kernel driver, there are plenty of generic drivers and interfaces to use instead. That unmaintained code stops working is a good thing; good riddance.
Title: Re: Reinvent the OS cont
Post by: kolla on June 28, 2015, 04:31:52 PM
Quote from: Iggy;791755
Sorry for the delay. I just noticed that.

Oh, and these last few posts only reinforce my  opinion that some things should not be in the kernel.
And Thomas is 100% correct, the interface should be stable and completely documented for programmers.
If the core is changed regularly, what does it point to other than that it had issues or simply wasn't what it could be?

Carve it down to its minimum, and move anything unnecessary to loadable modules.
It would have lots of advantages, but Torvalds doesn't understand this.
And THAT makes me distrust his skill as a software engineer.

Copying an OS is easier than creating one.
Even with all the constant change, I don't see that much improvement in Linux.

Linux, for some strange reason, runs on just about anything, despite these so-called flaws that you point out - flaws that, btw, you pretty much have made up in your fantasies.

Unless you show the world a better way yourself, you are in no position to question the software engineering skills of Torvalds, whose software projects reach worldwide, affecting almost everyone's lives at this point.
Title: Re: Reinvent the OS cont
Post by: eliyahu on June 28, 2015, 04:39:22 PM
@thread

personal insults against fellow members will not be tolerated. they are a violation of our posting guidelines and only serve to demonstrate the immaturity of the person making them. if you have an argument to make, then use reason and logic, not ad hominem attacks. anyone who disregards this warning will be given a brief vacation from the site. please, this thread has come along nicely so far -- let's keep it that way. :)

-- eliyahu
Title: Re: Reinvent the OS cont
Post by: kolla on June 28, 2015, 04:57:48 PM
So pointing out that someone is talking nonsense and displaying ignorance is considered a personal attack? Just... wow!
Title: Re: Reinvent the OS cont
Post by: guest11527 on June 28, 2015, 05:39:56 PM
Quote from: kolla;791761
I can think of quite a few, most notably perhaps is Microsoft. But really, you are totally overstating this "problem", changes between releases are typically small, big changes in sub systems are announced and discussed to death long before they are actually implemented in a release.
Probably you have never been in the position of maintaining a large-scale system. I'm not talking about hobbyist systems here. You cannot simply take "off the shelf" hardware for Linux and hope that just because it runs now it will continue to run with the next kernel.

Examples? Last week, kernel update, but vmware no longer works. Some obscure call in the network layer of Linux now takes one argument instead of two, where the second argument was always zero. The cost to include a backwards-compatible call that ignores the second argument (which is in this particular case the right approach) would have been close to zero. The cost of patching vmware is (for us) larger because it includes downtime and man-hours of work.

Another example? Apparently, some kernel call the NVIDIA driver needs is no longer in place, so if you update your kernel, graphics no longer works, and re-compiling the nvidia driver fails. There is a new driver, but this has other bugs that were not fixed. This is "annoying" for a personal user because the procedure of updating such drivers is more complex than on other systems, but it causes real costs on large scale systems.
Quote from: kolla;791761
So your company wants to have drivers in the kernel, but you are too cheap and/lazy to actually maintain them?
I don't have a company, but I'm working in a computing center. Even if I had a company, I would prefer to sell products that continue to work for ten years with drivers unchanged, and have the operating system maintainer take care of it, rather than continuously invest money in a product that no longer creates any profit because it is no longer sold. Somehow the engineers that do all the patching and cleanup behind the kernel hackers have to be paid, too. You seem to forget that.

I'm not a fan of Windows and I don't use it privately, but as of today, I can take a driver from Windows Vista (probably ten years old) and install it under Windows 8.1 and it works. You can think of this as you like, but if you ask me, that shows at least that MS cares about their products and keeps the interface stable. It creates an ecosystem which makes it worthwhile for hardware vendors to invest in, by selling products once and keeping consumers happy. Of course MS makes their customers pay for this, but one way or another, nothing is for free.

For Linux, you pay for service time because you need somebody to clean up behind the kernel hackers who constantly cause interface changes. For Windows, you pay for the product. For an enterprise, it is often simpler and easier to just pay for the product and get the warranty that it continues to run. For Linux, this model also exists in the form of support contracts (e.g. by SuSe, the one we have), but my overall impression is that even with that, you have to invest more time if you upgrade SuSe to the next release because there are so many dependencies between the installations that you cannot simply carry your software over. Basically, you have to replace the entire software layer with a new release.

Open source and closed source are just two different models. You have to pay for either, one way or another. Closed source is often easier for the user, and not necessarily more expensive, depending on how you value your time. If I, in my spare time, run my linux system, that's all fine for me. But if I had to write invoices to my boss for that, then that would be a quite expensive system.  
Quote from: kolla;791761
That unmaintained code stops working is a good thing, good riddance.

Actually, no. It rather means that the OS "vendor" does not have the goal of creating a commercial ecosystem that hardware vendors could invest in. Which is pretty much the reason why open source never reached the mainstream. Unixoid systems are successful where the vendor takes care of the system itself and uses a much stricter development model, and where hardware and OS are much more tightly coupled than in the PC market. For example, see Android. That's "Linux with a Windows business model". Because the OS vendor and hardware vendor are identical or much more dependent on each other, there is no problem.
Title: Re: Reinvent the OS cont
Post by: Iggy on June 28, 2015, 05:55:49 PM
Quote from: kolla;791762
Linux, for some strange reason, runs on just about anything, despite these so-called flaws that you point out, flaws that btw you pretty much have made up in your fantasies.

Unless you show the world a better way yourself, you are in no position to question the software engineering skills of Torvalds, whose software projects reach worldwide, affecting almost everyone's lives at this point.

Actually, I feel I have EVERY right to question Torvalds, since he's merely ripped off UNIX, a project that took Bell and Universities across the United States years to create.
And his position on micro kernel OS' is asinine.
I've noticed that as Linux has moved forward, there has been an attempt to absorb features from better-structured OS', like the ability to dynamically load and unload some drivers.
But, for the most part, it's still an oversized, poorly structured mess.
Oh, and those devices you think I'm benefiting from?
Experience with Android may be the one thing that could drive me to buy a new Apple product.
The only "fantasy" I have is that all the deluded fanboys of this crap wake up.
There is a reason it's free: most people would not pay for crap like that.

BTW - I've actually had a hand in creating real hardware and software, so attacking my ego won't work. My opinions come from real experience.
Title: Re: Reinvent the OS cont
Post by: Karlos on June 29, 2015, 04:10:32 AM
Kernel.

That is all. ;-)

Quote
Actually, I feel I have EVERY right to question Torvalds, since he's merely ripped off UNIX, a project that took Bell and Universities across the United States years to create

Sounds like revisionist history combined with kernel/entire OS conflation to me. Unix is a complete OS.

Torvalds wrote a kernel, initially as an experiment, which was subsequently used for GNU since the intended kernel for the latter was not (and in truth still is not) ready.

GNU is the "rip off" of Unix that you are thinking of, not Torvalds' work. And kudos to Stallman and co for doing it; without their work, computing would be far more closed, restrictive and expensive than it is.
Title: Re: Reinvent the OS cont
Post by: KimmoK on June 29, 2015, 06:54:30 AM
Quote from: Iggy;791755
...Carve it down to its minimum, and move anything unnecessary to loadable modules.
It would have lots of advantages, but Torvalds doesn't understand this.
And THAT makes me distrust his skill as a software engineer.

At least Torvalds is a higher-caliber OS talent than Gates. ;-)
(Gates is better at business/making money)

20 years ago I noticed that Torvalds did not think the Amiga had proper multitasking, since applications can disable task switching as they wish. Sure, it is not ideal, most likely because Carl S. did not have time to implement other means to handle some things.

Torvalds took advantage of the fast pace of CPU development rather than trying to implement a small, fast, responsive and efficient OS. He succeeded better than anyone else in competing with M$ in generic mainstream systems etc... He was far ahead in 64-bit + SMP ++++ vs M$.

If Torvalds had had a big box Amiga+RAM+HDD when he was a kid... who knows what would have happened...

If we one day have all modern key features in our OS... perhaps Linus might help us with the Linux/posix sandbox build.... :crazy:

+++
It seems Linus is pretty unhappy with the desktop, check "Desktop environment criticism".
https://en.wikipedia.org/wiki/Linus_Torvalds
(It seems I'm not the only one who is unhappy with Linux desktop evolution of the past few years...)
++++
I did not know Linus was/is a PowerPC fan!
"In 2008, Torvalds stated that he used the Fedora distribution of Linux because it had fairly good support for the PowerPC processor architecture, which he had favoured at the time."
(or perhaps he just happened to buy PowerPC Apple HW)
Title: Re: Reinvent the OS cont
Post by: agami on June 29, 2015, 09:15:34 AM
Any reinvention of the Amiga OS should not use/copy anything from Linux. You will never find a more wretched hive of clump and complexity.

I use it for servers. When I did use it for desktops I spent too much time in Linux forums instead of actually being productive. I look forward to trying SteamOS; perhaps the folks at Valve have abstracted away all the crap I don't want to deal with.
Title: Re: Reinvent the OS cont
Post by: Iggy on June 29, 2015, 12:53:32 PM
Quote from: agami;791797
Any reinvention of the Amiga OS should not use/copy anything from Linux. You will never find a more wretched hive of clump and complexity.

I use it for servers. When I did use it for desktops I spent too much time in Linux forums instead of actually being productive. I look forward to trying SteamOS; perhaps the folks at Valve have abstracted away all the crap I don't want to deal with.

"wretched hive of clump and complexity" - Nice turn of phrase, I like it.

OK, now to address the two prior posts. Kolla, Karlos, I know you both use Linux and it is not my intent to disturb your "universal scheme of things".
But Torvalds' kernel is intended to be the core of a UNIX clone.
This HAS resulted in something that at times definitely rises above what Microsoft has managed to do.
Could it be a lot better?
Considering its origin, probably not.
But if your argument is that its better than the alternatives, well that's a tough argument to counter.
Still, we have been retreading ideas with roots from over forty years ago.

With the hardware resources currently at hand, it's easier to generate oversized, inefficient code than it is to create the tighter work our much more limited hardware encouraged in the past.

I've mentioned Xenix in this thread once before, because Linux REALLY reminds me of it.
Large, cumbersome, and incapable of true real time response.

Finally, this was started as a discussion on the merits of recreating the OS.
I didn't intend for it to be subverted into a discussion on the merits of Linux or any other OS.
It has just been my contention that we could do better by avoiding a monolithic kernel.
Title: Re: Reinvent the OS cont
Post by: Fats on June 29, 2015, 01:42:10 PM
Quote from: Thomas Richter;791754
You forget one thing. Even if I release the source code, it means that the source requires continuous updating and maintaining to keep it working with the kernel. Now, which company wants to make this commitment, namely not only release source, but also employ engineers to continuously update the source whenever the kernel interface changes - i.e. with every release.   Using the passive here means that you don't know who maintains it. It is like throwing source code over a wall and hoping somebody picks it up, and that this "somebody" has enough knowledge and experience to do a good job. That is a problem for professional software engineering.  

If you are lucky, you do not need further commitment. If you are not so lucky, your driver will stop working with the next kernel release and your investment is lost.


True, just releasing the source code is not enough; you need to do the job of getting your code up to kernel standards and upstreaming it. But this is a one-time effort, and from then on your driver will be maintained by the kernel devs; if some internal ABI changes, your driver will be updated as needed before the ABI change is included in a release kernel.

Quote from: Thomas Richter;791770
Examples? Last week, kernel update, but vmware no longer works. Some obscure call in the network layer of Linux now takes one argument instead of two, where the second argument was always zero. The cost to include a backwards-compatible call that ignores the second argument (which is in this particular case the right approach) would have been close to zero. The cost of patching vmware is (for us) larger because it includes downtime and man-hours of work.

Another example? Apparently, some kernel call the NVIDIA driver needs is no longer in place, so if you update your kernel, graphics no longer works, and re-compiling the nvidia driver fails. There is a new driver, but this has other bugs that were not fixed. This is "annoying" for a personal user because the procedure of updating such drivers is more complex than on other systems, but it causes real costs on large scale systems. I don't have a company, but I'm working in a computing center. Even if I had a company, I would prefer to sell products that continue to work for ten years with drivers unchanged, and have the operating system maintainer take care of it, rather than continuously invest money in a product that no longer creates any profit because it is no longer sold. Somehow the engineers that do all the patching and cleanup behind the kernel hackers have to be paid, too. You seem to forget that.


Two examples of companies that have chosen not to do the right thing and upstream their drivers; they have chosen to use other people's open source code while not opening their own. Yes, in that case they are on their own. It means you should stick to the configs supported by those companies.

Quote from: Thomas Richter;791770
It rather means that the OS "vendor" does not have the goal of creating a commercial ecosystem that hardware vendors could invest in.


Sure, the core kernel devs are not marketing-driven; one can debate whether that is good or bad. On the other hand, Red Hat seems to be doing pretty fine in commercializing Linux. Recently, long-term-supported older kernels have also been made more official.

Quote from: Thomas Richter;791770
Which is pretty much the reason why open source never reached the mainstream. Unixoid systems are successful where the vendor takes care of the system itself and uses a much stricter development model, and where hardware and OS are much more tightly coupled than in the PC market.


As if PC drivers don't cause problems. The number of people doing PC support in our company is much bigger than for Linux; and we are doing EDA, so Linux is not a minority over here. Some of those PC support people do nothing other than test drivers, and even userland software, to see that they don't cause problems before rolling them out company-wide.
Title: Re: Reinvent the OS cont
Post by: guest11527 on June 29, 2015, 04:30:17 PM
Quote from: Fats;791808
True, just releasing the source code is not enough; you need to do the job of getting your code up to kernel standards and upstreaming it. But this is a one-time effort, and from then on your driver will be maintained by the kernel devs; if some internal ABI changes, your driver will be updated as needed before the ABI change is included in a release kernel.
That's wishful thinking. In reality, if there is nobody testing the kernel module and checking its correctness, including the API to your hardware, chances are better than even that it breaks. Even if it compiles. Examples? I'm currently trying to get a couple of ancient Intel 830M graphics based laptops working. They used to work with the i810 Intel X11 driver. This one was made obsolete and replaced by the intel driver, which requires a kernel module. This kernel module failed to support the i830 for a long time, so I could only run VESA on these laptops. The kernel devs, most notably the folks from Intel, made a huge effort to re-create a kernel driver from scratch, though it never really worked entirely due to some exotic DVO chips present in the mentioned laptops. I committed myself to fixing this up, and kernel 3.19.x included the necessary fixes for both the Intel DVO, a NatSemi DVO, and for working around some BIOS/ACPI bugs of the machines. Only to find that in 4.1.x, i830 support is again broken, simply because the kernel devs do not have such ancient machines themselves. So I have to test the code again and find the bugs, made by modifying code that, quite frankly, should have better been put into a separate kernel module and left alone. Yes, one unique driver for all Intel chipset graphics might be nice, clean and lean, but as this example proves, it is not a practical approach.

You have to continuously test the code, make bug reports and help the devs to keep it working. Unlike what you say, it is *not* a one-time commitment.
Title: Re: Reinvent the OS cont
Post by: psxphill on June 30, 2015, 05:09:20 PM
Quote from: Thomas Richter;791814
You have to continuously test the code, make bug reports and help the devs to keep it working. Unlike what you say, it is *not* a one-time commitment.

Exactly the problem we are seeing already with Android phones.
 
If you're lucky, then a teenager has figured out how to hack together all the different cherry-picks and has cobbled something together that barely works. Otherwise you're stuck with the OS that shipped on the phone and can't run new apps.
Title: Re: Reinvent the OS cont
Post by: Iggy on July 02, 2015, 12:55:48 AM
So, considering Thomas' issues, is anyone ready to admit that I might be right and that drivers don't belong inside the kernel?
I think these issues clearly illustrate one of the major weaknesses of a monolithic kernel.
Beyond the fact that the ever-changing kernel makes every new revision a chore to adapt to.
This is far different in a micro kernel OS where core processes are relatively stable and the external modules are well documented.
Title: Re: Reinvent the OS cont
Post by: KimmoK on July 02, 2015, 06:52:24 AM
Quote from: Iggy;791934
So, considering Thomas' issues, is anyone ready to admit that I might be right and that drivers don't belong inside the kernel....

It has been clear to me always. ;-)

(+ modern OSs come with 2GB of drivers, but not all of them run all the time even when the kernel is monolithic, so I personally need to learn how it's done on those monster OSs)

Is there a study anywhere on how monolithic vs. true microkernel designs perform on modern HW...?
During ancient times, monolithic designs performed better.


And what if the kernel is able to update itself on the fly like on QNX?
(I suppose it does not change anything vs driver usage.)

++
Just spotted from mzone... it seems the Linux kernel now has 20 million codelines.
Now considering that the original AmigaOS had only 1 million codelines... WOW.

As an oldschool programmer I tend to think that the more codelines you have, the more bugs you most likely have.
So, by that theory, the whole AmigaOS should have fewer bugs than the Linux kernel alone.
Then I imagine... on Linux those 20 million codelines run in kernel space (no MP) vs 1 million codelines for the entire AmigaOS...

(I know it's not that simple and perhaps there is 15million testing related codelines in linux kernel, ;) .)
Title: Re: Reinvent the OS cont
Post by: Iggy on July 02, 2015, 04:25:06 PM
Quote from: KimmoK

...Is there a study anywhere on how monolithic vs. true microkernel designs perform on modern HW?...

...During ancient times, monolithic designs performed better...

...Just spotted from mzone... it seems the Linux kernel now has 20 million codelines...

...As an oldschool programmer I tend to think that the more codelines you have, the more bugs you most likely have...


Actually, it has always been my experience that micro kernel OS' have significantly better real-time response. And it's an obvious fact that monolithic kernels are far too large.