
Author Topic: Layers.library V45 on the aminet  (Read 128830 times)


Offline Thorham

  • Hero Member
  • *****
  • Join Date: Oct 2009
  • Posts: 1150
Re: Layers.library V45 on the aminet
« Reply #239 from previous page: September 15, 2014, 02:19:27 PM »
Quote from: olsen;773051
What did bring improvements was to replace the sorting algorithm, so that doubling the number of directory entries only about doubled the amount of time needed to keep it sorted.
It would've been even better if they simply read the whole directory first, then sorted it with the right sorting algorithm, and finally displayed the results. DirectoryOpus 5.90 does that.

Quote from: olsen;773051
This is how you get to "super fast", and Thomas is your man. Cosmos, I'm not so sure about.
Perhaps, but when you work with some resourced binary, it can't hurt to clean up the compiler mess so that you get much more readable code. After that you can try to replace algorithms.
 

Offline olsen

Re: Layers.library V45 on the aminet
« Reply #240 on: September 15, 2014, 02:20:01 PM »
Quote from: OlafS3;773039
Thomas (like Olsen and others) "would" be perfect, but he has already explained that he is not allowed to do so because of contracts he signed in the past. It seems everyone who was involved in AmigaOS development in the past signed such contracts, almost like a weapon to hinder competition. So they cannot directly contribute, because it could be used against Aros then :(


Speaking for myself, I am not aware of any NDAs which still cover the field I'm working in that prevent me from contributing to a project such as AROS. With very few exceptions the NDAs I signed are no longer relevant because the companies with which I signed them went out of business a long time ago. Such is the nature of the Amiga business :(

I'm just a cautious fellow, and I don't want to be the guy who compromises a project such as AROS because somebody gets it into his head that a knowledge transfer must have taken place simply because one guy had access to some original source code. In my humble opinion AROS is better off if its designs are based strictly upon the available documentation (the "clean-room implementation"), so that there is no reason whatsoever to suspect that privileged information (if there is such a thing in the Amiga field) may have been used to help it along.
 

Offline OlafS3

Re: Layers.library V45 on the aminet
« Reply #241 on: September 15, 2014, 02:37:30 PM »
I understand :(

it is a pity that the few experienced devs left cannot contribute because of silly contracts from long ago
 

Offline olsen

Re: Layers.library V45 on the aminet
« Reply #242 on: September 15, 2014, 02:43:36 PM »
Quote from: Thorham;773054
It would've been even better if they simply read the whole directory first, then sorted it with the right sorting algorithm, and finally displayed the results. DirectoryOpus 5.90 does that.
The standard file requester (in asl.library) interleaves reading and sorting the list as new entries come in. This can be done very quickly, in parallel with the file system reading the directory. It's a reasonably clean design, whose "downfall" was the choice of sorting algorithm, which eventually took more time to run than reading the directory did.

If you read the entire directory before you start sorting, you're out of luck if reading the directory takes much longer than sorting it would. It happens. The standard file requester's approach arguably handles both the short and the long directory case more elegantly. And you can even change the window size, type in a new directory or file name, etc. while it's reading and sorting the directory contents.
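A minimal sketch of the interleaved approach, in 'C' (my own illustration, not the actual asl.library code; the fixed name buffer and the linear insertion are assumptions made for brevity):

#include <stdlib.h>
#include <string.h>

struct Entry {
    struct Entry *next;
    char          name[108];
};

/* Insert one newly arrived directory entry into a list that is kept
 * sorted at all times, so the requester always has something
 * presentable to show while the file system is still scanning. With a
 * linear search this costs O(n) per entry, i.e. O(n^2) overall: fine
 * for small directories, and exactly the kind of cost that eventually
 * dominates once a directory grows large, which is why the choice of
 * sorting strategy matters so much here. */
static void add_entry_sorted(struct Entry **head, const char *name)
{
    struct Entry *node = malloc(sizeof(*node));

    if (node == NULL)
        return;

    strncpy(node->name, name, sizeof(node->name) - 1);
    node->name[sizeof(node->name) - 1] = '\0';

    while (*head != NULL && strcmp((*head)->name, name) < 0)
        head = &(*head)->next;

    node->next = *head;
    *head      = node;
}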

Quote from: Thorham;773054
Perhaps, but when you work with some resourced binary, it can't hurt to clean up the compiler mess so that you get much more readable code. After that you can try to replace algorithms.
I don't think that works any more once a project reaches a certain size. You can't necessarily infer from a disassembly, even after cleanup and documentation work, why the original high-level language implementation (that would be 'C', or something more complex such as C++) does what it does, or whether the implementation is correct.

For example, at the heart of Intuition there is a state machine which receives, translates and distributes input events depending upon which events arrived earlier. If you move the mouse over the screen title bar and press the left mouse button, Intuition will change into a state in which every movement of the mouse will result in a screen drag operation. This is how it works. If you broke down the entire Intuition binary into plain 68k assembly language, I would venture that you would have a hard time identifying the individual event state handlers. For that you are best advised to stick to the original 'C' code, because there you can see plainly how the design fits together, and why it makes sense.
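A toy sketch of such an event state machine (purely illustrative; the names and helpers below are made up and have nothing to do with Intuition's actual code):

#include <stdbool.h>

enum State   { STATE_IDLE, STATE_DRAG_SCREEN };
enum EvClass { EV_MOUSEMOVE, EV_LBUTTON_DOWN, EV_LBUTTON_UP };

struct Ev { enum EvClass type; int x, y; };

/* Hypothetical helpers standing in for the real hit-testing and
 * screen-dragging machinery. */
static bool over_screen_titlebar(int x, int y) { (void)x; return y < 10; }
static void drag_screen_to(int x, int y)       { (void)x; (void)y; }

static enum State state = STATE_IDLE;

/* What a mouse movement "means" depends entirely on the state that
 * earlier events have put the machine into. */
static void handle_event(const struct Ev *ev)
{
    switch (state) {
    case STATE_IDLE:
        if (ev->type == EV_LBUTTON_DOWN &&
            over_screen_titlebar(ev->x, ev->y))
            state = STATE_DRAG_SCREEN;      /* arm the screen drag */
        break;

    case STATE_DRAG_SCREEN:
        if (ev->type == EV_MOUSEMOVE)
            drag_screen_to(ev->x, ev->y);   /* every move now drags */
        else if (ev->type == EV_LBUTTON_UP)
            state = STATE_IDLE;             /* the drag ends */
        break;
    }
}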

Another example: for OS4 the original timer.device was ported to 'C'. The original timer.device was written in 68k assembly language, and documented source code exists. As it turned out, there was a surprise waiting in that code after the initial 'C' language port was complete. Back in 1989/1990 Michael Sinz at Commodore modified the timer.device not to use two different time sources any more (UNIT_VBLANK and UNIT_MICROHZ used different CIA A and CIA B timers, which had different granularities), but to use a single CIA timer instead. That timer had much higher resolution and precision, which was a great improvement.

It turned out that when the 'C' port of timer.device was reviewed, all the old obsolete CIA A and CIA B timer code was still in there, and a good part of the 'C' port was effectively useless. Again, observations such as these, which lead to irrelevant code being discovered and removed, require a high level view of the code, which for assembly language (by its very nature) is difficult to find.
« Last Edit: September 15, 2014, 02:47:04 PM by olsen »
 

guest11527

  • Guest
Re: Layers.library V45 on the aminet
« Reply #243 on: September 15, 2014, 02:48:08 PM »
Quote from: wawrzon;773049
Just make PFS3 the official filing system; it includes all the functionality, is open, and is currently maintained by Toni Wilen. Include FFS as-is for legacy and backwards compatibility and you are done.

Are we? Ok, to be frank, I do not know Toni (nor have I personally met him), and I have not looked at PFS3. Let me tell you why I'm a bit critical about PFS3 (and actually about most alternative filing systems in general; this is not specific to PFS3, which, as said, I really haven't tried).

First of all, the whole dos (Tripos) is written around the FFS design, with all its weaknesses and benefits. For example, the transfer of locks between two drives holding the same medium (good), or the almost unimplementable ACTION_EX_NEXT (bad).

FFS may not be "smart", but it may be faster than you think. Or "fast" depending on which type of operation you want to perform. If I just want to open one (or multiple) files, FFS does not need to read an entire directory (unlike FAT or ext) but uses a pretty fast hash algorithm. All the information about a file is in a single block, and FFS can block-transfer data between the file on disk and the target buffer by going directly to the device (leaving MaxTransfer and Mask aside). That's not unique to FFS, of course, but what is probably unique (and what I haven't seen anywhere else) is that while FFS is "busy" with a long DMA transfer, additional incoming requests can be handled simultaneously. That is, the FFS is a threaded file system and handles each request in a separate thread (not task, not process). Thus, one can fire off a request, and a second task can do the same at the same time, and FFS will be able to get the second request done while the first is running, provided there are no conflicts. I'm not sure whether PFS3 can do this, but SFS did not (back then, when I checked), and no other FS on the Amiga could.
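For illustration, the hash-based lookup mentioned above works roughly like this for the standard 512-byte block size (a sketch of the well-documented OFS/FFS name hash; international upper-casing and other details are omitted). The point is that the file system can jump straight to one short hash chain instead of scanning the whole directory:

#include <ctype.h>
#include <string.h>

#define HT_SIZE 72  /* hash table slots in a 512-byte directory block */

static unsigned long ffs_name_hash(const char *name)
{
    unsigned long hash = strlen(name);
    size_t        i;

    for (i = 0; name[i] != '\0'; i++)
        hash = (hash * 13 + toupper((unsigned char)name[i])) & 0x7FF;

    return hash % HT_SIZE;  /* index into the directory's hash table */
}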

FFS is *not* slow. Ok, it is slow in *one particular* discipline, and that's listing of directories. This is because every file requires a block of its own as its file header. That is, when reading a directory, more I/O transfers have to be made (unfixable) and a lot of disk-gronking happens (avoidable by smarter allocation). This is, as always, a compromise that was made when FFS was designed, and the compromise was simply to make "opening and reading of files" as fast as possible, with the drawback of "listing directories" being slow. Which operation is used most? I don't know, and I haven't measured, but my gut feeling is that the FFS decision to optimize for fast data transfer (and not for fast directory transfer) was actually not such a bad one.

Other filing systems use more complex directory structures, with the benefit of making directory reading faster, but making file manipulation slower.

I haven't measured it, but I consider it hard to construct a filing system that requires fewer disk operations to actually find and open a file on disk than the FFS does. That seems to be a good choice if I/O is slow and the CPU is fast. Maybe that's the wrong assumption today (I don't know), but before I would pick another FS, I would prefer to see some hard facts about the performance of PFSn, and I do not mean speed alone.

Are all important packets implemented? Correctly? Does it support hard and soft links? File notes? File dates? Does it operate correctly if multiple tasks operate simultaneously on the disk? Does it perform well if multiple tasks operate on the disk? How many disk operations does it take to open a file? Create a file? Write a file? List a directory? How does it behave if we try to corrupt the disk, or turn the system off (FFS is not exactly a top performer here, don't tell me, I know)?

My personal feeling is that the FFS, having been around for such a long time, has reached a level of stability that is hard to replace with anything more stable. If anything, I would only make minor changes that are backwards compatible with the existing FS structure, such as improving the block-allocation policy (keep directory blocks close to each other, do not scatter them, avoiding the disk gronking) or improving the 64-bit support.

Oh, and last but not least: You surely want a file system that can read your existing disks. From what I read I believe PFS3 can handle this?
 

Offline modrobert

  • Newbie
  • *
  • Join Date: Nov 2008
  • Posts: 47
  • Country: th
Re: Layers.library V45 on the aminet
« Reply #244 on: September 15, 2014, 02:50:38 PM »
Quote from: OlafS3;773056
I understand :(

it is a pity that the few experienced devs left cannot contribute because of silly contracts from long ago


They could contribute to Aros if a bounty to make OS3.1 open source succeeded. This would effectively end the NDA, or did I miss something (again)?

Also, Thomas Richter and olsen have effectively convinced me that binary patching is bad in the current situation, only took like ten posts of explaining to do it (hehe). Still, I can't help liking Cosmos and respect what he does, it goes beyond logical reasoning, so no need to convince me further.
A1200: 68020 @ 14 MHz (stock), 2MB Chip + 8MB Fast RAM, RTC, 3.1 ROMs, IDE-CF+4GB, WiFi WPA2/AES, AmigaOS 3.1, LCD 23" via composhite - Thanks fitzsteve & PorkLip!
 

Offline olsen

Re: Layers.library V45 on the aminet
« Reply #245 on: September 15, 2014, 02:58:25 PM »
Quote from: wawrzon;773053
I trust that since Thor and Olsen seriously consider that there is a threat, there must be one. Beyond all else, they have personal experience with the commercial entities in question that they have been working for, and I'm sure they are basing their opinion on some experience, be it personal or general, which may not be available to others.

We may not agree with the situation, but the fact is that money changed hands to acquire the Amiga operating system, which as such represents a significant investment for the buyer.

The owner of the technology is naturally interested in preserving the value of the investment, which is why programmers who were involved in AmigaOS development work signed contracts governing what we may or may not do with the knowledge we gained. Unless these contracts are canceled, we are bound by them.

How much of a risk there would be in violating the terms of these contracts is difficult to say. Speaking for myself, I don't really want to find out because it is not something which I consider *that* important.
 

Offline OlafS3

Re: Layers.library V45 on the aminet
« Reply #246 on: September 15, 2014, 03:04:03 PM »
Quote from: modrobert;773059
They could contribute to Aros if a bounty to make OS3.1 open source succeeded. This would effectively end the NDA, or did I miss something (again)?

Also, Thomas Richter and olsen have effectively convinced me that binary patching is bad in the current situation, only took like ten posts of explaining to do it (hehe). Still, I can't help liking Cosmos and respect what he does, it goes beyond logical reasoning, so no need to convince me further.


To be honest, I do not believe that such a bounty has any chance, because of both the money it would need and the parties involved. You have read the discussions here: people will ask why they should donate to such a bounty when you can get everything in Amiga Forever or for free (illegally). Most NG supporters will ask what they need the old sources for (in the case of AmigaOS they already have access). It would be of big benefit for AROS and MorphOS to remove unknown compatibility issues and the like, but that brings me to the third party I do not need to name, which has no interest in supporting AROS or MorphOS. And a common spirit between the camps is not there (especially regarding parts of the core developers). AROS certainly is the exception because of its openness. In short: someone can try it out, like I did with Magellan. Ask the known parties, and if they really say yes, make a bounty.
 

Offline OlafS3

Re: Layers.library V45 on the aminet
« Reply #247 on: September 15, 2014, 03:06:36 PM »
Quote from: olsen;773060
We may not agree with the situation, but the fact is that money changed hands to acquire the Amiga operating system, which as such represents a significant investment for the buyer.

The owner of the technology is naturally interested in preserving the value of the investment, which is why programmers who were involved in AmigaOS development work signed contracts governing what we may or may not do with the knowledge we gained. Unless these contracts are canceled, we are bound by them.

How much of a risk there would be in violating the terms of these contracts is difficult to say. Speaking for myself, I don't really want to find out because it is not something which I consider *that* important.


We understand that; nobody can expect someone else to take legal risks, even if they were only theoretical (and I do not think they are entirely theoretical, even today). Nobody is forced to sign such a contract.
 

Offline warpdesign

  • Sr. Member
  • ****
  • Join Date: Feb 2008
  • Posts: 256
    • http://www.warpdesign.fr
Re: Layers.library V45 on the aminet
« Reply #248 on: September 15, 2014, 03:46:31 PM »
Seeing what has happened since 1999 with the owners of the OS, I don't see a bounty happening either; or it would be insanely expensive and would come with unacceptable conditions anyway.

A bounty to bring AROS on par with AOS would be possible though, and free of any evil conditions.
 

guest11527

  • Guest
Re: Layers.library V45 on the aminet
« Reply #249 on: September 15, 2014, 03:59:52 PM »
Quote from: OlafS3;773045
Take the software and resell it, yes, of course, but in this case you have said it forbids you any source contributions to Aros, even if not a single original line of code is included, just because you had access to the old sources. That is completely different...

No, look. It does not forbid me to contribute *anything* to AROS. But if parts of the contribution to 3.9 were created by taking the original AmigaOS source (say, for the sake of the argument, "C:more") and this source was improved by modifications paid for under contract, it is quite clear that the work created this way cannot be contributed. It is, essentially, still CBM code, licensed to (or owned by, I don't know) H&P or Hyperion. The same goes for layers - yes, a good deal of code has been replaced, but the majority is still CBM code. If I get code from a client (H&P, or Hyperion) and we have an agreement that this is proprietary code, I believe it is also a matter of trust that it remains proprietary, leaving all legal concerns aside.

Again, as Olsen said (and I said before), the best possible solution for AROS is really a clean-room development starting from the documented API, because then you're really free from any claims by third parties. I can provide answers to "dark spots" in the API (to the best of my knowledge, which may or may not be correct - and there are already very reliable sources like the Guru book), but code brings me (and AROS) into a situation which is quite delicate.
 

Offline psxphill

Re: Layers.library V45 on the aminet
« Reply #250 on: September 15, 2014, 04:22:49 PM »
Quote from: olsen;773057
Back in 1989/1990 Michael Sinz at Commodore modified the timer.device not to use two different time sources any more (UNIT_VBLANK and UNIT_MICROHZ used different CIA A and CIA B timers, which had different granularities), but to use a single CIA timer instead. That timer had much higher resolution and precision, which was a great improvement.

It turned out that when the 'C' port of timer.device was reviewed, all the old obsolete CIA A and CIA B timer code was still in there, and a good part of the 'C' port was effectively useless. Again, observations such as these, which lead to irrelevant code being discovered and removed, require a high level view of the code, which for assembly language (by its very nature) is difficult to find.


Sounds like "Jumpy the Magic Timer Device", are you sure the code was all unused?
 
 
 
Quote from: Thomas Richter;773066
but code brings me (and AROS) into a situation which is quite delicate.

You can't contribute any of the code you received or were paid to write (and that doesn't depend on whether you signed an NDA or not), but you were implying that under no circumstances could you ever contribute anything to AROS.

It would be pretty easy to prove in court where your contributions came from though, so as long as you are honest you're fine.

I'm pretty sure that quite a bit of AROS was written by people who had disassembled Commodore's code, and it already isn't a clean-room implementation.
« Last Edit: September 15, 2014, 04:33:48 PM by psxphill »
 

Offline wawrzon

Re: Layers.library V45 on the aminet
« Reply #251 on: September 15, 2014, 04:36:12 PM »
Quote from: Thomas Richter;773058
[...] what is probably unique (and what I haven't seen anywhere else) is that while FFS is "busy" with a long DMA transfer, additional incoming requests can be handled simultaneously. [...] I'm not sure whether PFS3 can do this, but SFS did not (back then, when I checked), and no other FS on the Amiga could. [...]

Oh, and last but not least: You surely want a file system that can read your existing disks. From what I read I believe PFS3 can handle this?


I cannot tell exactly whether PFS3 is threaded, but I would guess it actually is. Toni, who unfortunately is only a member on EAB, not here, could answer this in detail. The original coder, whose name I don't remember, was posting here too I think, but I'm not sure anymore; the other person who comes to mind is Piru, who did the initial port and the fixes for current GCC.

I'm absolutely not advocating SFS or whatever in this respect, since in my user experience it is rubbish, sorry to say.

Also, I did not say PFS is able to read FFS-formatted media. Sorry, I realize it sounds like I did. What I wanted to say was to keep FFS as-is for legacy.
 

Offline olsen

Re: Layers.library V45 on the aminet
« Reply #252 on: September 15, 2014, 05:13:46 PM »
Quote from: psxphill;773069
Sounds like "Jumpy the Magic Timer Device", are you sure the code was all unused?
 
Yes. I started rewriting the 'C' port so that I could understand its inner workings better. Also, it provided an opportunity to pull code from subroutines (which became short functions) which were used exactly once into the respective functions which called them.

In the end I found that some of the functions were not getting called or referenced from anywhere else, and sure enough, these were the parts of the old timer.device which used to deal with the UNIT_VBLANK and UNIT_MICROHZ CIA timers, separately.

As far as I recall this specific code was not part of the timer.device in ROM, it was not even linked against it. But this obsolete code was still part of the SVN repository contents (and the CVS repository before that, and the RCS files before that), so it wound up getting ported to 'C'.
 

Offline olsen

Re: Layers.library V45 on the aminet
« Reply #253 on: September 15, 2014, 05:34:50 PM »
Quote from: modrobert;773059
Also, Thomas Richter and olsen have effectively convinced me that binary patching is bad in the current situation, only took like ten posts of explaining to do it (hehe).
It's not necessarily a bad idea; you just have to know to which end the patches are created. Collapsing more complex assembly code into less complex code, saving space and reducing execution time, used to get a lot more respect when storage space was scarce and CPUs were far less powerful. Like, say, in the 1980s and 1990s.

Let's say you had to ship a hot fix for a critical firmware error to a few hundred thousand customers (or make that a few million), yet your operating system was firmly planted in ROM and the only way to make the fix work was to put it into a jump table in RAM, and that jump table was so small that you had to rewrite existing patch code to make room for your new patch. Then you'd call upon a specialist who would work on the task of letting the extra air out of the code and building the shortest possible patch that would fit.
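On the Amiga, the RAM side of this mechanism is the library jump tables that exec builds in RAM, which is what exec's SetFunction() redirects. A minimal sketch (the vector offset and the replacement routine below are placeholders, not a real fix, and the 68k register calling conventions are glossed over):

#include <exec/types.h>
#include <exec/libraries.h>
#include <proto/exec.h>

#define LVO_SomeFunction (-42)   /* hypothetical library vector offset */

static APTR OldSomeFunction;

/* The replacement routine; a real patch would fix the broken behaviour
 * here and/or fall through to the original code via OldSomeFunction. */
static ULONG PatchedSomeFunction(void)
{
    return 0;
}

void InstallPatch(struct Library *lib)
{
    /* Redirect one entry of the library's RAM jump table; the returned
     * pointer is the original vector, kept so the patch can chain to
     * it or be removed again later. */
    OldSomeFunction = SetFunction(lib, LVO_SomeFunction,
                                  (APTR)PatchedSomeFunction);
}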

This used to be such a highly specialized talent, and it solved such dire and unique problems that I have it on good authority that this kind of assembly language optimization was called a "spell", as in "magic spell".

Cosmos may not view his work this way, but I'd say that the changes he makes work better if considered as optimizations for space than as optimizations for performance. One question which this raises is what you do with the extra space, but let's not go there.

Optimizing assembly code can be a rewarding exercise, like solving a chess puzzle, or doing calculus (yes, some people do that as a hobby, like playing "Candy Crush"; I'm still holding out for "Calculus Crush" for the iPhone). It follows a certain set of rules, there are rigid constraints, and the number of possible solutions is small. Perfect entertainment!

Nothing makes this a bad idea, but what you can achieve is limited, especially when you are shooting for performance optimizations. You have to find code that both can be optimized and "wants" to be optimized, too.

Code that can be optimized but "doesn't want" to be optimized contributes very little to the running time of the software it is a part of: if you improve its running time by 200%, but it only accounts for some 0.2% of the total running time, then you may have spent an entertaining evening, but the effect of your change is negligible.

Code that can be optimized and "wants" to be optimized might have its running time improved by only 5%, but if it accounts for 60% of the total running time you'll have a noticeable improvement, and will have spent an entertaining evening, too ;)
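To put rough numbers on that (a standard Amdahl-style estimate, not figures from any actual measurement): if a routine accounts for a fraction p of the total running time T and you speed it up by a factor s, the new total is

    T_new = (1 - p) * T + (p / s) * T

With p = 0.002 and s = 3 (a "200%" improvement), T_new is about 0.9987 * T, i.e. barely 0.1% saved overall. With p = 0.6 and s = 1.05 (the code made 5% faster), T_new is about 0.97 * T, a noticeable 3% saved.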
« Last Edit: September 15, 2014, 05:37:12 PM by olsen »
 

guest11527

  • Guest
Re: Layers.library V45 on the aminet
« Reply #254 on: September 15, 2014, 06:33:15 PM »
Quote from: olsen;773078
It's not necessarily a bad idea; you just have to know to which end the patches are created. Collapsing more complex assembly code into less complex code, saving space and reducing execution time, used to get a lot more respect when storage space was scarce and CPUs were far less powerful. Like, say, in the 1980s and 1990s.

Yes, indeed, those were the "even older days" of computing. Back then, in the 6502 days, squeezing more program into less RAM was pretty much a necessity, given how little of it you had. I remember back then on the Atari (yes, the Atari 800XL; same chip designer, different company), the file management system (back then called "DOS") was bootstrapped from disk, took probably 5K of your precious RAM, and had pretty limited capabilities. Plus it took time to bootstrap that 5K (it wasn't a 1541, so it wasn't as bad as on the C64, after all).

Indeed, one could try to rewrite the whole thing, throw out the less-used parts of the ROM (for example, support for a parallel port interface that came so late to the market that no devices were ever made to fit into it), use the newly available 3K of ROM for a more powerful replacement for the 5K DOS, and clean up the math stuff on the way. For such extremely tiny systems, this type of hobby did make sense because it was a noticeable improvement (as in: 5K more for your programs out of the total of 40K available). Not that it was commercially viable - it wasn't.

Anyhow, byte counting stopped making sense already when 512K was the norm, and priorities changed. As soon as projects grow bigger, one starts to notice that there is no benefit in squeezing out every possible byte, or every possible optimization. There is too much code to look at, and the problems are typically about maintaining the whole construction rather than making it fast.

As Olsen already said, either execution time is not critical because I/O or human input limits the speed, or 80% of the program time is spent in less than 20% of the program. In that case, the 20% is then hand-tuned, probably written in assembly. For 68K, I did this myself; nowadays not even that anymore - we had a specialist for it in the company when I worked on problems that required this type of activity. Even then, it turns out that the really critical part is not even the algorithm itself, but keeping the data in the cache, i.e. constructing the algorithm around the "worker" such that data is ideally pipelined - and that, again, was done in a high-level language (C++).

To keep the story short, even today the use of assembly, even for optimization, is diminishing. There are hot spots where you have to use it, but if speed is essential, you typically want to be as flexible as possible in rearranging your data structures to allow for fast algorithms, and in organizing the data so that the access pattern fits the CPU organization - and you don't get this flexibility in assembler. It sounds weird, but a high-level language and more code can sometimes make an algorithm faster.
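A small illustration of that point (my example, not from the post above): the same summation over two memory layouts. The structure-of-arrays variant touches far fewer cache lines because the field being summed is stored contiguously, and this kind of reshuffling is easy to do in C or C++ but painful to retrofit into hand-written assembly:

#include <stddef.h>

#define N 100000

/* Array of structures: each value is separated by 60 bytes of
 * unrelated payload, so summing drags the payload through the cache
 * as well. */
struct RecordAoS { float value; char payload[60]; };

float sum_aos(const struct RecordAoS *r, size_t n)
{
    float  sum = 0.0f;
    size_t i;

    for (i = 0; i < n; i++)
        sum += r[i].value;
    return sum;
}

/* Structure of arrays: all values are contiguous, so every cache line
 * fetched is filled entirely with useful data. */
struct RecordsSoA { float value[N]; char payload[N][60]; };

float sum_soa(const struct RecordsSoA *r, size_t n)
{
    float  sum = 0.0f;
    size_t i;

    for (i = 0; i < n; i++)
        sum += r->value[i];
    return sum;
}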

But anyhow, I confess I did byte counting in the really old days, two computer generations back, and yes, it created a good deal of spaghetti code, though the requirements were quite a bit different. http://www.xl-project.com/download/os++.tar.gz

It's part of becoming a good engineer to learn which tools you need to reach your goal, which tools to pick for a specific use case, and foremost to understand what the problem actually is (this is more complicated than one may guess). Ill-defined problems create ill-designed program architectures. I'm not saying that I'm the perfect software engineer - I have no formal education in this field - but at least I learned a bit of it by failing often enough.