@bloodline
Don't worry about answering my previous question, let's start the MP discussion now.
Okay, I'd like to start this post by making a couple of things clear. First of all, there are three separate questions we could discuss (as far as I can see) when it comes to the possibility of MP:
1. Can it be done?
2. Can it be done elegantly?
3. Should it be done?
I hope you will agree that there is little point answering questions 2 and 3 unless we agree that the answer to question 1 is yes, so that's what we should focus our discussion on IMO (at least at first). What this means is that even if the MP implementation is a complete bodge job, as long as it works it satisfies question 1, and we can move on to the 'should we/how best do we do it' part of the discussion.
Next, I'd like to outline what I'd class as successful MP. To me the job of MP isn't to make individual applications less prone to crashing, but rather to ensure that when applications crash they don't take down the operating system and other applications with them. That is the base functionality of MP IMO, do you agree? I would also say that close to that core MP functionality is the ability for the OS to attempt to recover the data the application was using before the crash, but that is not a function I will focus on for now.
==============================================================
Right, down to business. A program in AROS is loaded. The memory manager assigns a unique ID to that program. The program starts requesting RAM space from AROS to store its data. These calls will either be for:
A space in memory of a specific size but undefined location.
or...
A space in memory of a specific size and location.
The AROS memory manager I am proposing doesn't give programs a memory address based on the location requested, but rather provides a 'virtualization' layer. When the program then tries to access data at a memory address it thinks is set in RAM, the memory manager fetches the data from where it is really stored and presents it to the program. The program doesn't know the difference.
Let's take an example to illustrate this further. The program Lunapaint is loaded. AROS assigns Lunapaint the ID 0111. Lunapaint then requests a small block of memory starting at address 3FDF. That address is already taken in RAM by the program Wookiechat, but this doesn't matter to us: the block is placed at address A21C instead. When Lunapaint tries to read or write the memory it thought was at 3FDF, the memory manager knows that, for program ID 0111, the data 'at' 3FDF is actually at A21C and pulls this data instead.
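To make the translation idea concrete, here's a rough sketch in Python (purely illustrative — the class and method names are my own invention, not real AROS APIs, and a real implementation would of course live in the kernel):

```python
# Sketch of the proposed per-program address translation layer.
# Every name here (MemoryManager, alloc_at, resolve) is hypothetical.

class MemoryManager:
    def __init__(self):
        # (program_id, requested_address) -> real address
        self.translation = {}
        self.next_free = 0xA000  # arbitrary start of the real memory pool

    def alloc_at(self, program_id, requested_addr, size):
        """Program asks for a specific address; we map it to a real one."""
        real = self.next_free
        self.next_free += size
        self.translation[(program_id, requested_addr)] = real
        return requested_addr  # the program sees the address it asked for

    def resolve(self, program_id, addr):
        """Translate the address the program thinks it owns."""
        return self.translation[(program_id, addr)]

mm = MemoryManager()
# Lunapaint (ID 0111) asks for 3FDF, which Wookiechat already occupies.
mm.alloc_at(0x0111, 0x3FDF, 5)
print(hex(mm.resolve(0x0111, 0x3FDF)))  # 0xa000 -- not 0x3FDF
```

The key point is the lookup key: it is the *pair* of program ID and requested address, which is why two programs can both 'have' 3FDF without clashing.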
So what does this mean? It means the OS has taken control of memory allocation away from programs without letting the programs know there is anything different happening. Now let's look at what happens when programs try to pass data between each other through memory addresses.
What we want to be able to define here are the processes that should be allowed to share memory addresses with each other, and those that should not (to prevent problems like buffer overflows, for example). The approach I'm proposing here is what I'd call the 'AI' approach, even though the intelligence is not artificial... I propose we use our own brains!
When two programs try to write to the same address space, a dialog box pops up in AROS (much like an Internet firewall, such as ZoneAlarm) asking whether we would like these programs to be able to share memory address space. Taking the Lunapaint/Wookiechat example used above, the dialog box would ask us if Lunapaint and Wookiechat should share address spaces. Our options could be something like:
Share address 3FDF only, with no universal share.
Do not allow programs to share any memory address spaces.
Allow both programs full access to each others memory address spaces.
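Those three dialog answers boil down to a simple policy check when a shared write is attempted. Here's a sketch of how the memory manager might record and consult the user's answer (again, all names and the policy structure are my own invention for illustration):

```python
# Sketch of the three sharing policies offered by the dialog box.
# Policy names and the SharePolicy class are hypothetical.

SHARE_ONE = "share_one_address"   # share only the contested address
SHARE_NONE = "share_none"         # no sharing at all
SHARE_ALL = "share_all"           # full mutual access

class SharePolicy:
    def __init__(self):
        # unordered program pair -> (policy, optional shared address)
        self.rules = {}

    def set_rule(self, prog_a, prog_b, policy, addr=None):
        self.rules[frozenset((prog_a, prog_b))] = (policy, addr)

    def may_share(self, prog_a, prog_b, addr):
        policy, allowed = self.rules.get(
            frozenset((prog_a, prog_b)), (SHARE_NONE, None))
        if policy == SHARE_ALL:
            return True
        if policy == SHARE_ONE:
            return addr == allowed
        return False  # default: deny, and pop up the dialog

p = SharePolicy()
# User picked "Share 3FDF, no universal share" for Lunapaint/Wookiechat.
p.set_rule("Lunapaint", "Wookiechat", SHARE_ONE, addr=0x3FDF)
print(p.may_share("Lunapaint", "Wookiechat", 0x3FDF))  # True
print(p.may_share("Lunapaint", "Wookiechat", 0x4000))  # False
```

Note the deny-by-default: a pair with no rule yet is exactly the case where the dialog would appear.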
You might think this would be a pain to set up (a dialog box popping up every time you open two particular programs at the same time), but a way around this is to use patches. How would this work? Well, for patches to be applied, the memory structure needs to be consistent every time the program is run (so that if I apply a patch setting to 3FDF one day, the patch will work on the same data the next day). Because we have 'virtualized' memory allocation, this is possible: the memory manager is the OS layer doing all of the assigning of memory locations. When a program asks for memory at a specific location, we give it a memory location with that label. When a program asks for memory but doesn't specify where it should be, we make sure the 'virtualized' allocation follows rules, so that we will always be able to find that memory again.
For example, on program boot-up the requests for memory always arrive in the same order, so why not give them the labels 0001, 0002, 0003, 0004, etc.? From that point on the memory needs of the application are dynamic, so we assign memory labels not by sequence but by an algorithm based on how they are first assigned and the memory spaces already in use (there may be an easier way of doing this; basically I'm looking for ways to pinpoint the setting of a variable in a program).
Assuming the above variable pinpointing is possible, then we can apply our patches and they can work consistently every time. The memory labels assigned don't affect real memory usage in the sense that all programs can have the 0001, 0002, 0003, etc... labels and the real memory can still be fully utilised.
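Here's a toy sketch of that labelling idea. I've stood in for the 'mathematical algorithm' with a plain per-program counter — which only captures the simple boot-time case, not the dynamic one — but it shows the two properties that matter: the same run order gives the same labels, and labels are per-program, so every program can have an 0001 (all names here are made up):

```python
# Sketch of deterministic per-program memory labels. A patch written
# against label 0003 would mean the same variable on every run, because
# labels are handed out in a repeatable order. (The per-program counter
# is a stand-in for whatever algorithm the real manager would use.)

class LabelAllocator:
    def __init__(self):
        self.counters = {}  # program_id -> next label number

    def next_label(self, program_id):
        n = self.counters.get(program_id, 1)
        self.counters[program_id] = n + 1
        return f"{n:04d}"

la = LabelAllocator()
# Boot-time requests always arrive in the same order...
print(la.next_label("Lunapaint"))   # 0001
print(la.next_label("Lunapaint"))   # 0002
# ...and labels are per-program, so real memory is still fully usable:
print(la.next_label("Wookiechat"))  # 0001
```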
What would we use these patches for? Well, let's say we never want Lunapaint and Wookiechat to ever share memory addresses. We can create a patch for this. We can also create a patch so that Lunapaint will share certain memory addresses with Wookiechat but won't share those same memory addresses with UAE. The beauty of the patch system is you can make your system as secure (or as open) as you like. You could also distribute these patches between AROS users (like the way WHDLoad patches are created and distributed).
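For distribution, a patch could just be a small data file describing a sharing rule, which another AROS user imports into their own policy table. A sketch (the file format here is entirely made up, just to show how little a patch would need to contain):

```python
import json

# Sketch of a distributable sharing 'patch', in the spirit of WHDLoad
# patches. The format is hypothetical.

patches = [
    # Lunapaint may share only 3FDF with Wookiechat...
    {"programs": ["Lunapaint", "Wookiechat"],
     "policy": "share_one_address", "address": "0x3FDF"},
    # ...but must never share anything with UAE.
    {"programs": ["Lunapaint", "UAE"],
     "policy": "share_none", "address": None},
]

text = json.dumps(patches)   # the file you would pass to another user
loaded = json.loads(text)
print(loaded[1]["policy"])   # share_none
```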
Okay, so we've looked at taking control of memory allocation, and at how we can let programs interoperate in memory with our permission. The final piece of the puzzle is how we handle application crashes. Let's say I write a program for AROS, but I make a mistake so that in certain circumstances I try to write a long string into a space defined for an integer, which causes the program to freeze/crash due to a buffer overflow.
Where does this buffer overflow go? How do we know the program is misbehaving? We have options here. We could let the program write the extra bits into a new 'virtual' address and hope the program runs smoothly afterwards (with rules about how often this 'overflow' storage can happen before we flag an error to the user), or we could simply flag an error as soon as the buffer overflow happens. Link this with CPU throttling (so that programs caught in an infinite loop can have their CPU time reduced to avoid crashes) and you have a system that lets you stop any misbehaving program.
==============================================================
So we've looked at memory allocation control, memory permissions and crash control. Are there any other factors an OS needs for MP?
Note that the only part with a question mark over it in my mind is how to flag individual variables for 'patching'. I'll have to look at how other systems (like compilers) do this, but even if it turns out not to be possible, you could still have interoperability settings at the application level (i.e. Program 1 can share memory addresses with Program 2 but not with Program 3), which is still useful for system stability.
Please let me know what you think bloodline. Thanks.