
Author Topic: Layers.library V45 on the aminet  (Read 128429 times)


Offline olsen

Re: Layers.library V45 on the aminet
« on: September 09, 2014, 04:47:24 PM »
Quote from: stefcep2;772648
Amiga software authors must be the most difficult to contact

I remember the owner of Miami years ago "disappearing", then the MUI guy, the IBrowse guy.

Everyone these days is contactable but no, not Amiga programmers.

And if you do manage to contact him it would not surprise they refuse, well because they can.

Unbelievable.
This may sound harsh, but people's lives change greatly between their twenties and forties, and the behaviour you criticize is most definitely a result of that.

Many of us Amiga developers started out when we were in our early twenties, and it has been almost 20 years since Commodore folded. That's a lot of time in which we went to university, moved, married, got a job, changed jobs, etc. As soon as you have a family to care for, your life is never going to be the same as it was before. You lose contact with your friends, your spare time shrinks, you have commitments to your job and your company which take priority over what used to be more fun.

Now, I'm still single and don't have a family of my own to care for, but many of my friends, friends I have known since school, friends I met at university, friends I met in the Amiga field, have seen their lives change so much in 20 years that it baffles me, and it humbles me, too.

So, please don't consider the lack of "love" for the old Amiga material to be arrogance or negligence. Things change, and they change at different speeds for different people.

There's that, and technology has changed, too. How do you contact somebody on the Internet whom you could reach 20 years ago via e-mail or IRC? This has become more difficult, not less. You don't even need to go back 20 years; it has been difficult for the past 10 as well.
 

Offline olsen

Re: Layers.library V45 on the aminet
« Reply #1 on: September 14, 2014, 05:10:29 PM »
Quote from: modrobert;772976
Isn't the original graphics.library written in C?

All versions (1.x through 3.x) are written in a mix of 'C' and 68k assembly language, strongly leaning towards 68k assembly code. Looking at the API, you'll find that a large part of the operations are about filling in data structures, hooking them up and then eventually telling the hardware what to do with them. Typically, the "clerical operations" are written in 'C', and the part which talks to the hardware is written in assembly language.

graphics.library, like intuition.library, is one of the more complex operating system components, both in how it uses data structures and how many distinct operations it can carry out. Hence, you'll find that 'C' plays a major part in its implementation.

That said, when you need to talk to the hardware in a very specific manner, you couldn't do that quite so elegantly back in 1986 if you didn't use 68k assembly language. There is complex assembly language code in graphics.library, it even uses its own preprocessor (!) in order to simplify writing loops (while .. do) and control structures (if .. then .. else).

Quote from: modrobert;772976
You can't expect a C compiler to match the optimizations made manually by an assembler programmer. I've heard so many stories about how well C compilers optimize these days and how assembler is made redundant, and then when you actually disassemble the code the compiler produced, it's bloated, filled with jump tables, and inefficient code.
If you have specific requirements for carrying out operations, you may get better control through the use of assembly language than any modern 'C' compiler could provide you with. C11 has just gained new control keywords in order to make interfacing to hardware more straightforward, but it will take a while for the language to evolve to give you the kind of control only assembly language can give you.

As for assembly language becoming largely redundant, it's probably unavoidable. 'C' in particular is a more expressive language which encodes in fewer lines what assembly language, by its very nature, requires much more effort to express. The thing is, if you can say the same thing with fewer words, the chances of mucking it up are somewhat reduced. If you have the right language, you can even verify the correctness of the instructions and data structures you used, which is something that eludes assembly language by its very design.

As for inefficient code, performance nowadays does not necessarily come out of implementing an algorithm through the optimum low-level language encoding you could choose (that being assembly language). You can't necessarily predict how your code will be executed, and if you can, you might have to run the gauntlet of a handful of operating conditions under which it executes, such as optimum pipeline use, prefetch, etc. This can get so ugly that it has to be automated (look up how one programmed the Motorola 56000 DSP in its day, just to get an idea of how weird this can get), just to let the programmer worry about implementing his design correctly.

This is what progress looks like: the gains made through building faster processors are used in such a way that writing more complex, more secure and more reliable software becomes an easier goal to achieve, through the use of tools and code generation which do not necessarily attempt to squeeze the last cycle out of the machine. This is, in the end, a trade-off, if not a sacrifice.
« Last Edit: September 14, 2014, 05:30:38 PM by olsen »
 

Offline olsen

Re: Layers.library V45 on the aminet
« Reply #2 on: September 14, 2014, 05:29:30 PM »
Quote from: Cosmos;772972
Have you watching my changes into my code ? Certainly no...

For example, the function R_AndRegionRegion in the graphics.library use two AllocMem (and of course two FreeMem at the end) for two tiny buffers (only $C) : I replaced by six clr.l -(sp) for these buffers on the stack now...

This function is now more than 1000 times faster, so the improvement is real...


You are a professionnal troller, you talk in the void, you have lost all credibility...



:(

Oh dear, the two of you do not even speak of the same thing when you discuss optimizations.

Replacing a memory allocation, which is guaranteed to be more expensive than stack operations, with stack-based storage certainly is an optimization. But the actual gains are likely to be entirely absorbed by the respective functions in graphics.library, intuition.library, etc. calling layers.library.
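
In plain 'C', the kind of change being discussed might look like the sketch below. Here malloc()/free() stand in for exec's AllocMem()/FreeMem(), and the function names are invented for illustration; this is not the actual R_AndRegionRegion code:

```c
#include <stdlib.h>
#include <string.h>

/* Original approach: two tiny ($C = 12 byte) scratch buffers from the
   heap. Every call pays for an allocator round trip. */
int and_region_heap(void)
{
    unsigned char *a = malloc(12);
    unsigned char *b = malloc(12);

    if (a == NULL || b == NULL) {
        free(a);
        free(b);
        return -1;
    }

    memset(a, 0, 12);
    memset(b, 0, 12);

    /* ... the region arithmetic would use a and b here ... */

    free(a);
    free(b);
    return 0;
}

/* Patched approach: the same scratch space lives on the stack, which
   costs only a stack pointer adjustment and zero-initialization. */
int and_region_stack(void)
{
    unsigned char a[12] = { 0 };
    unsigned char b[12] = { 0 };

    /* ... the region arithmetic would use a and b here ... */
    (void)a;
    (void)b;
    return 0;
}
```

The stack version does avoid the allocator round trip entirely, which is a genuine per-call saving; the point above is that this saving is small compared with everything else a call into layers.library does.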

From what I have learned, there is only so much you can achieve at this level of optimization. You are more likely to get exponentially higher improvements if you perform the optimization at the next higher level, through the use of different data structures and algorithms, which is exactly what Thomas did with his layers.library implementation.

For this reason, I remain doubtful that a 1000-fold improvement is possible at this level. As the saying goes, extraordinary claims require extraordinary explanations and proof to back them up.
 

Offline olsen

Re: Layers.library V45 on the aminet
« Reply #3 on: September 14, 2014, 06:18:26 PM »
Quote from: itix;772988
I dont think we are going to see AOS recompiled ever again. It is irreversibly sucked to a black hole.

Huh? Been there, done that (1999). OS4 is based upon that work.
 

Offline olsen

Re: Layers.library V45 on the aminet
« Reply #4 on: September 15, 2014, 08:44:34 AM »
Quote from: itix;772992
That is 15 years ago. I don't think we are going to see new 68k release. Only that matters to people here.

Never say never. Technically, building the operating system is easy enough, and it doesn't work any less well than it did back then.

The hard part is in figuring out if the result is sufficiently robust and does not cause important application software to fail. In short, while the code does work, it needs a QA team to make sure it's up to standards: beta testing, and fixing the bugs.

If you can find somebody to pay for that, then the current owner of the operating system may actually want to sell it, if asked. Mind you, that would only give you an updated OS 3.1, and OS 3.5 and beyond would still require more work for integration into the whole.
 

Offline olsen

Re: Layers.library V45 on the aminet
« Reply #5 on: September 15, 2014, 10:50:43 AM »
Quote from: Thomas Richter;773026
Well, actually, it would give you a bit more than just 3.1 plus an updated workbench because you can surely count me in. But we would be still short of Reaction, the updated FFS, the updated console, the updated RAM, the updated exec, the updated SCSI and the updated SetPatch. The only stuff from my side that depends on Reaction is BenchTrash, and I would hope that I'll find an older release without Reaction (just gadtools) somewhere.

Anyhow, whether that's financially viable is the next big question. I cannot answer that.


ReAction is one of H&P's contributions to the 3.5/3.9 product. As for FFS and other operating system modules which Heinz Wrobel worked on (FFS, SCSI, SetPatch), I suppose it would be his call whether or not these could be included. My own contributions to the product make up a sizable bunch, too, so there's a possibility to roll all of that into a product. Let's call it "AmigaOS 𝜋" ;)
 

Offline olsen

Re: Layers.library V45 on the aminet
« Reply #6 on: September 15, 2014, 10:55:09 AM »
Quote from: biggun;773027
Olsen , Thomas,

I have to ask a stupid question here.

When you would such a possible buyout of AMIGA OS - with the existing free AROS.

I do not understand the question, could you reword it?

Quote from: biggun;773027

Where are the advantages?

Is AROS so far from this?
Or would AROS have already some areas where its ahead - like a MUI clone etc?

What is AROS really missing today?

I have no idea, since I did my best not to get involved with AROS. Not out of spite or antagonism, it's just that because of my involvement with the Amiga operating system I did not want to give anybody enough rope to hang the AROS project by.
 

Offline olsen

Re: Layers.library V45 on the aminet
« Reply #7 on: September 15, 2014, 01:55:01 PM »
Quote from: biggun;773037
Hi Olaf,

From what you say AROS 68K is already or can already be a very good AMIGA OS replacement?

I'm sorry, this is not for me to say because I neither know the current state of affairs of the AROS 68k build, nor am I familiar with the AROS project as it is today. Before I could give you an assessment I ought to know what I'd be talking about, which I don't :(

Quote from: biggun;773037

If there are areas where performance could be improved like e.g EXEC, or LAYERS.
Would these be areas where some geeks ould help?

From the impression I got so far, it seems to me that the interest may be there, but the question is whether this is sufficient to polish the code as it is.

We are not talking about some old, small-sized operating system component for which tangible improvements are easily achieved (the "low-hanging fruit", if you want to call it that). Any measurable improvements would have to come through analyzing and re-engineering the code. This requires a bit of experience and knowledge of the technology. Back in the day you could learn all that, and apply it, in a few years.

I have no idea how the talent situation is today. Let's see some code; that is usually the best way to get an impression of how well-prepared a programmer is to tackle Amiga software development.

Quote from: biggun;773037

I would assume Cosmos would be talented for tuning Exec?

And I would think that Thomas would be the perfect guy to do layer super fast?

Frankly, I cannot judge how far Cosmos' talents extend beyond the 68k assembly language optimizations he showed. Exec is pretty well-designed and well-implemented (actually, the InitStruct() stuff was partly obsolete when it shipped, and the way Signal exceptions are handled makes you wonder why the API is incomplete), and the best thing you can do without making radical changes to the implementation seems to be to shave off the rough edges through small optimizations. The thing is, for optimizations to be made in this type of software, you both need to know the context in which your optimization would have to be effective, and you need to measure whether the optimization actually did make things better. So far, judging from Cosmos' own words, he does not seem to be into measuring the effects; he prefers to infer the effect from the changes he made.

As for Thomas, you may not be aware of it, but he is a physicist by training, which accounts for his background in mathematics and computer science. He has lectured, published papers, etc. He's an actual scientist. Why is this important? Physics is an empirical science, which builds models of the world through the use of mathematics. To make sure that your models are sufficiently accurate representations of reality, you need to test and verify them. Any claim you can make about the models must be backed up by evidence. See where I'm going?

Thomas built his layers.library by analyzing how the original worked, built a new one designed to solve the same problem better and verified that it does accomplish this goal. This approach represents best engineering practice. As far as I know the performance improvements are significant and can be measured. These improvements are on a scale which exceeds what could be achieved by fine-tuning the underlying assembly language code. No matter how much effort you put into shaving cycles off an inefficient 'C' compiler translation of the original code, if that code uses a technique (algorithm) that solves the wrong problem, or solves it in such a way that it wastes time, then you still have a poor solution. What's the alternative? Replace the algorithm with something that is more suited to the task. This is what Thomas did.

Replacing the algorithm produces significant leverage. To give you an example: if you have used the standard file requester in AmigaOS 3.1 and 3.5 you may have noticed that there is a performance difference between the two. The original 3.1 version became noticeably slower the more and more directory entries it read and displayed. The 3.5 version did not become noticeably slower. This was achieved by replacing the algorithm by which the file requester kept the directory list in sorted order. In the 3.1 version, doubling the number of directory entries read caused the file requester to spend four times as much effort to keep the list sorted, and no degree of low level assembly optimizations would have helped to improve this. What did bring improvements was to replace the sorting algorithm, so that doubling the number of directory entries only about doubled the amount of time needed to keep it sorted.
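
The growth rates described above can be sketched with simple comparison-cost models. The functions below are illustrative only and do not reproduce the actual asl.library code:

```c
/* Total comparisons when every new entry is inserted into a sorted list
   by a linear scan (roughly the behaviour of the 3.1 file requester):
   the i-th insertion scans about i/2 entries, so total work grows
   quadratically with the number of entries. */
long linear_insert_cost(long n)
{
    long total = 0;
    for (long i = 0; i < n; i++)
        total += i / 2;
    return total;
}

/* Total comparisons for an O(n log n) strategy, e.g. locating each
   insertion point with a binary search: the i-th insertion costs about
   log2(i) comparisons. */
long nlogn_cost(long n)
{
    long total = 0;
    for (long i = 1; i < n; i++) {
        long k = i, steps = 0;
        while (k > 0) {          /* count ~log2(i) halvings */
            k /= 2;
            steps++;
        }
        total += steps;
    }
    return total;
}
```

Doubling n roughly quadruples the first cost but only slightly more than doubles the second, which is exactly the difference users noticed between the 3.1 and 3.5 requesters.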

This is how you get to "super fast", and Thomas is your man. Cosmos, I'm not so sure about.
 

Offline olsen

Re: Layers.library V45 on the aminet
« Reply #8 on: September 15, 2014, 02:20:01 PM »
Quote from: OlafS3;773039
Thomas (as Olsen and others) "would" be perfect but he has already explained that he is not allowed to do so because of contracts he signed in the past. Everyone who was involved in AmigaOS development in the past has signed such contracts it seems, almost like a weapon to hinder competition. But nevertheless they cannot directly contribute because it could used against Aros then :(


Speaking for myself, I am not aware of any NDAs which still cover the field I'm working in that prevent me from contributing to a project such as AROS. With very few exceptions the NDAs I signed are no longer relevant because the companies with which I signed them went out of business a long time ago. Such is the nature of the Amiga business :(

I'm just a cautious fellow, and I don't want to be the guy who compromises a project such as AROS because somebody got it into his head that a knowledge transfer has taken place which must have happened because one guy had access to some original source code. In my humble opinion AROS is better off if its designs are based strictly upon available documentation only (the "clean-room implementation"), and there is no reason whatsoever to suspect that privileged information (if there is such a thing in the Amiga field) may have been used to help it along.
 

Offline olsen

Re: Layers.library V45 on the aminet
« Reply #9 on: September 15, 2014, 02:43:36 PM »
Quote from: Thorham;773054
It would've been even better if they simply read the whole directory first, then sorted it with the right sorting algorithm, and finally display the results. DirectoryOpus 5.90 does that.
The standard file requester (in asl.library) interleaves reading and sorting the list as new entries come in. This can be done very quickly, in parallel with the file system reading the directory. It's a reasonably clean design, whose "downfall" was the choice of sorting algorithm, which eventually took more time to run than reading the directory.

If you read the entire directory before you start sorting, you're out of luck if reading the directory takes much longer than sorting it would. It happens. The way the standard file requester solves the problem is arguably a solution which handles both the short and the long directory case more elegantly. And you can even change the window size, type in a new directory or file name, etc. while it's reading and sorting the directory contents.

Quote from: Thorham;773054
Perhaps, but when you work with some resourced binary, it can't hurt to clean up the compiler mess so that you get much more readable code. After that you can try to replace algorithms.
I don't think that works any more beyond a certain project size. You can't necessarily infer from a disassembly, even after cleanup and documentation work, why the original high-level language implementation (that would be 'C' or something more complex, such as C++) does what it does, or whether the implementation is correct.

For example, at the heart of Intuition there is a state machine which receives, translates and distributes input events depending upon which events arrived earlier. If you move the mouse over the screen title bar and press the left mouse button, Intuition will change into a state in which every movement of the mouse results in a screen drag operation. This is how it works. If you broke down the entire Intuition binary into plain 68k assembly language, I would venture that you would have a hard time identifying the individual event state handlers. For that you are best advised to stick to the original 'C' code, because there you can see plainly how the design fits together, and why it makes sense.
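
To make the point concrete, here is a minimal event state machine in the spirit of the screen-drag example. All names are invented for illustration; Intuition's real handlers are far more involved than this:

```c
/* A toy input-event state machine: pressing the left button over the
   title bar switches into a state where every mouse move drags the
   screen, until the button is released. */
enum state { IDLE, DRAGGING_SCREEN };
enum event { MOUSE_MOVE, LEFT_BUTTON_DOWN_ON_TITLE, LEFT_BUTTON_UP };

struct machine {
    enum state state;
    int screen_offset;  /* how far the screen has been dragged so far */
};

void handle_event(struct machine *m, enum event e, int delta)
{
    switch (m->state) {
    case IDLE:
        /* Pressing the button over the title bar arms the drag. */
        if (e == LEFT_BUTTON_DOWN_ON_TITLE)
            m->state = DRAGGING_SCREEN;
        break;

    case DRAGGING_SCREEN:
        /* Every mouse movement now drags the screen... */
        if (e == MOUSE_MOVE)
            m->screen_offset += delta;
        /* ...until the button is released. */
        else if (e == LEFT_BUTTON_UP)
            m->state = IDLE;
        break;
    }
}
```

In 'C' the states and transitions are visible at a glance; recovering the same structure from a flat 68k disassembly means chasing every branch by hand.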

Another example: for OS4 the original timer.device was ported to 'C'. The original timer.device was written in 68k assembly language, and documented source code exists. As it turned out, there was a surprise waiting in that code after the initial 'C' language port was complete. Back in 1989/1990, Michael Sinz at Commodore modified the timer.device not to use two different time sources any more (UNIT_VBLANK and UNIT_MICROHZ used different CIA A and CIA B timers, which had different granularities), but to use a single CIA timer instead. That timer had much higher resolution and precision, which was a great improvement.

It turned out that when the 'C' port of timer.device was reviewed, all the old obsolete CIA A and CIA B timer code was still in there, and a good part of the 'C' port was effectively useless. Again, observations such as these, which lead to irrelevant code being discovered and removed, require a high level view of the code, which for assembly language (by its very nature) is difficult to find.
« Last Edit: September 15, 2014, 02:47:04 PM by olsen »
 

Offline olsen

Re: Layers.library V45 on the aminet
« Reply #10 on: September 15, 2014, 02:58:25 PM »
Quote from: wawrzon;773053
i trust that since thor and olsen seriously consider that there is a threat then there must be one. beyond all else they have personal experience with the commercial entities in question they have been working for and im sure they are basing their opinion on some experience, be it personal or general, which may be not available to others.

We may not agree with the situation, but the fact is that money changed hands to acquire the Amiga operating system, and it therefore represents a significant investment for the buyer.

The owner of the technology is naturally interested in preserving the value of the investment, which is why programmers who were involved in AmigaOS development work signed contracts governing what we may or may not do with the knowledge we gained. Unless these contracts are canceled, we are bound by them.

How much of a risk there would be in violating the terms of these contracts is difficult to say. Speaking for myself, I don't really want to find out because it is not something which I consider *that* important.
 

Offline olsen

Re: Layers.library V45 on the aminet
« Reply #11 on: September 15, 2014, 05:13:46 PM »
Quote from: psxphill;773069
Sounds like "Jumpy the Magic Timer Device", are you sure the code was all unused?
 
Yes. I started rewriting the 'C' port so that I could understand its inner workings better. It also provided an opportunity to pull code from subroutines (which had become short functions) that were used exactly once into the respective functions which called them.

In the end I found that some of the functions were not getting called or referenced from anywhere else, and sure enough, these were the parts of the old timer.device which used to deal with the UNIT_VBLANK and UNIT_MICROHZ CIA timers, separately.

As far as I recall this specific code was not part of the timer.device in ROM, it was not even linked against it. But this obsolete code was still part of the SVN repository contents (and the CVS repository before that, and the RCS files before that), so it wound up getting ported to 'C'.
 

Offline olsen

Re: Layers.library V45 on the aminet
« Reply #12 on: September 15, 2014, 05:34:50 PM »
Quote from: modrobert;773059
Also, Thomas Richter and olsen have effectively convinced me that binary patching is bad in the current situation, only took like ten posts of explaining to do it (hehe).
It's not necessarily a bad idea; you just have to know to which end the patches are created. Collapsing more complex assembly code into less complex code, saving space and reducing execution time, used to get a lot more respect when storage space was scarce and CPUs were far less powerful. Like, say, in the 1980s and 1990s.

Let's say you had to ship a hot fix for a critical firmware error to a few hundred thousand customers (or make that a few million), yet your operating system was firmly planted in ROM and the only way to make the fix work was to put it into a jump table in RAM, and that jump table was so small that you had to rewrite existing patch code to make room for your new patch. Then you'd call upon a specialist who would work on the task of letting the extra air out of the code and building the shortest possible patch that would fit.
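
The mechanism can be sketched generically in 'C': the system reaches patchable routines through a jump table kept in RAM, so a fix is installed by swapping one table entry for replacement code. All names below are invented; this is not actual Amiga ROM code:

```c
/* A function in ROM cannot be changed, but the pointer through which
   it is called can be, if that pointer lives in RAM. */
typedef int (*os_call)(int);

int rom_divide(int x)             /* the buggy routine baked into ROM */
{
    return 100 / x;               /* oops: no check for x == 0 */
}

int fixed_divide(int x)           /* the hot fix, loaded into RAM */
{
    return (x != 0) ? 100 / x : 0;
}

/* The jump table lives in RAM, unlike the code it initially points to. */
os_call jump_table[1] = { rom_divide };

int call_divide(int x)            /* how the rest of the system calls in */
{
    return jump_table[0](x);
}

void install_patch(void)          /* swapping the entry installs the fix */
{
    jump_table[0] = fixed_divide;
}
```

The constraint described above is that the RAM set aside for patch code was tiny, so the replacement routine itself had to be squeezed as small as possible, which is where the "spell" casters came in.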

This used to be such a highly specialized talent, and it solved such dire and unique problems that I have it on good authority that this kind of assembly language optimization was called a "spell", as in "magic spell".

Cosmos may not view his work this way, but I'd say that the changes he makes work better if considered as optimizations for space than as optimizations for performance. One question which this raises is what you do with the extra space, but let's not go there.

Optimizing assembly code can be a rewarding exercise, like solving a chess puzzle, or doing calculus (yes, some people do that as a hobby, like playing "Candy Crush"; I'm still holding out for "Calculus Crush" for the iPhone). It follows a certain set of rules, there are rigid constraints and the number of possible solutions is small. Perfect entertainment!

Nothing makes this a bad idea, but what you can achieve is limited, especially when you are shooting for performance optimizations. You have to find code that both can be optimized and "wants" to be optimized, too.

Code that can be optimized but "doesn't want" to be optimized contributes very little to the running time of the software it is a part of: if you improve its running time by 200%, but it only accounts for some 0.2% of the total running time, then you may have spent an entertaining evening, but the effect of your change is negligible.

Code that can be optimized and "wants" to be optimized might have its running time improved by only 5%, but if it accounts for 60% of the total running time you'll have a noticeable improvement, and will have spent an entertaining evening, too ;)
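
The arithmetic behind these two cases is Amdahl's law; a quick sketch, reading "improved by 200%" as three times as fast:

```c
/* Amdahl's law: the overall speedup when a fraction f of the total
   running time is made s times faster. */
double overall_speedup(double f, double s)
{
    return 1.0 / ((1.0 - f) + f / s);
}

/* The two cases above:
   overall_speedup(0.002, 3.00) is about 1.001 (negligible), while
   overall_speedup(0.600, 1.05) is about 1.029 (a visible ~3% gain). */
```

The rarely executed code sped up threefold buys almost nothing overall, while the modest 5% improvement to the hot path is the one a user can actually notice.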
« Last Edit: September 15, 2014, 05:37:12 PM by olsen »
 

Offline olsen

Re: Layers.library V45 on the aminet
« Reply #13 on: September 16, 2014, 12:39:09 PM »
Quote from: Thomas Richter;773148
The problem goes deeper. The problem is that it's very likely that you do not even realize that you have an architecture problem somewhere in the code if you write everything in assembler. If you take enough time, you'll probably get working code if you work hard. But you're still lost irrelevant details. It's likely that you picked an algorithm because it looked on the microlevel pretty ideal. But whether that's relevant for the big picture is another question, and you'll easily loose the big picture in assembler - you're not forced to organize your code, and you don't have a compiler that helps you at the detail level.


If somebody is deeply committed to using the language of his choice (say, assembly language, Perl, Visual Basic, Delphi, you name it), and has a clear idea of the limitations of the language, then there is literally no problem he cannot solve using that language, even if that means having to put in extra work to solve it.

You can mention how much more leverage a different language provides, tell war stories to illustrate the time and effort you saved by switching tools, but it won't leave any impression whatsoever. If somebody grew up learning one programming language, found that it was fit to solve all the problems he ever encountered, and never saw the need to look beyond it, you won't be able to talk him out of it.

Which is fine, until you have to collaborate with such a savant and find common ground to work with him. I trust you've been in this kind of situation, and so have I.
 

Offline olsen

Re: Layers.library V45 on the aminet
« Reply #14 on: September 17, 2014, 08:10:40 AM »
Quote from: kolla;773207
Thoram: code an IPv6 capable TCP stack for AmigaOS in asm, it is very much needed, is perfectly doable and worth while on 68k, and complex enough to drive you insane.
Don't tempt me ;)

But seriously, implementing an IPv6 TCP/IP stack in plain 68k assembly language should be doable. You'd have to start off small with the 1983 TCP/IP implementation, as documented and used in 4.2BSD, upgrade it to the 1988 implementation (with congestion control support) and then you're basically in business: this can be upgraded for IPv6 support.

The hard part is in fitting this into the bsdsocket.library framework which already exists, so that it can be used for existing client software on AmigaOS (and not just exist as a proof that anything is possible in 68k assembly language, even if you have to forgo shaving, bathing and seeing your family for an extended period of time). There is a blueprint in AmiTCP, but it will only get you so far. The only IPv6 API support which I am aware of exists in Miami Deluxe, and it has never been replicated; the number of IPv6 clients on the Amiga always was very small, too, which would make testing difficult.

Anyway, before somebody even thinks about it: don't write IPsec code in plain 68k assembly language, because this is going to end in tears. As soon as you're having to do serious heavy lifting in cryptography you're best advised not to write the code in a language which is difficult to audit and review for errors.
« Last Edit: September 17, 2014, 08:37:00 AM by olsen »