
Author Topic: New improved intuition.library version from the Kickstart 3.1  (Read 73982 times)


Offline olsen

Quote from: Thorham;771862
Wow, that's some amazing crapz0r code right there :( Compilers that produce such crap should be burned :(

Excuse me, this is very little code to judge the compiler by. You're looking at one toenail of the left forepaw of the elephant, so to speak.

Intuition was built with the same 'C' compiler which was used for building the entire original Amiga operating system (Green Hills 'C'). That 'C' compiler was truly excellent, and just to put this into perspective, it took the Lattice/SAS 'C' compiler almost ten years to catch up to it in terms of code quality.

The one big difference between Green Hills 'C' and the Amiga-native Lattice and SAS/C compilers is in how parameters are passed to functions. Lattice and SAS/C could pass function parameters in registers (a0/a1/d0/d1, with other registers specified as needed), whereas Green Hills 'C' exclusively passed parameters on the stack.

Passing parameters on the stack was not an indication of poor code generation; it was merely how the ABI of the target platform wanted it to be done. Green Hills 'C' was a *Unix*-targeted optimizing compiler (back then the focus was on portable 'C' compilers, and optimizing 'C' compilers were rare) for the Sun 2 and Sun 3 workstations, which was cleverly reworked into a cross-compiler for the Amiga.
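To make the difference tangible at the source level, here is a minimal made-up example (not taken from the Intuition sources) using the register annotations which Lattice/SAS/C understand; the stack-based variant is what a Unix-style ABI of the kind Green Hills 'C' targeted prescribes:

Code:
#include <exec/types.h>

/* Hypothetical example function, not from Intuition. */

/* Stack-based convention: the caller pushes both arguments onto
 * the stack and pops them again after the call returns. */
LONG ComputeArea(LONG width, LONG height)
{
    return width * height;
}

/* Register-based convention (SAS/C "__asm" extension): the caller
 * loads the arguments directly into d0/d1, no stack cleanup needed. */
LONG __asm ComputeAreaReg(register __d0 LONG width,
                          register __d1 LONG height)
{
    return width * height;
}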

If you want to critique the quality of the Intuition code, please cast your eye on the meat, bones and nervous system of the beast.
« Last Edit: August 28, 2014, 09:16:09 AM by olsen »
 

Offline olsen

Re: New improved intuition.library version from the Kickstart 3.1
« Reply #1 on: August 28, 2014, 11:25:54 AM »
Quote from: Cosmos;771885
All my intuition.library source is full of examples I gave... This library is the slowest of the Kickstart 3.1 for me...

I'll show you another example for the beta 7 release if you want...

Note : passing args with registers is more 4 or 5 times faster than by the stack, specially on 000/010/020 who don't have datacache... And we don't need the addq/lea after the subroutine...

There are, give or take, some 800 functions which make up intuition.library V40, each of which passes parameters on the stack, and that's not counting the function calls made through amiga.lib stubs. This may be a ripe field for peephole optimizations, but my best guess is that the impact any such optimizations could have on overall system performance will be rather low.

Intuition is largely event-driven: its main purpose is capturing user input and routing the resulting events to the clients which consume and react to them. This type of operation is very slow to begin with and typically does not happen more than 60 times per second (e.g. moving the mouse pointer), and more likely happens fewer than 10 times per second (hitting a key, clicking a mouse button). These operations happen only as fast as the user can produce them.

Aside from the event processing and routing, Intuition also contains API wrapper code which makes interacting with screens and windows, and rendering into them, possible (and nicer, too, from a programmer's point of view). These wrappers connect directly to graphics.library and layers.library, respectively.

Then there's the rest of the code, which consists of utility functions. For example, if you click inside a window, Intuition needs to figure out whether the click hit a gadget or just the window. Utility functions such as the one which figures out these geometric relationships are written in pure 68k assembly in intuition.library V40.
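That kind of test boils down to simple rectangle arithmetic. A minimal sketch in 'C' (a hypothetical helper, not the actual Intuition routine, ignoring relative positioning flags such as GFLG_RELRIGHT):

Code:
#include <exec/types.h>
#include <intuition/intuition.h>

/* Hypothetical helper: does a window-relative point lie within the
 * bounding box of a gadget? Real code would also have to account
 * for GFLG_RELxxx positioning and the various gadget types. */
BOOL PointHitsGadget(const struct Gadget *gadget, WORD x, WORD y)
{
    return (BOOL)(x >= gadget->LeftEdge &&
                  x <  gadget->LeftEdge + gadget->Width &&
                  y >= gadget->TopEdge &&
                  y <  gadget->TopEdge + gadget->Height);
}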

The parts of Intuition which interface to graphics.library and layers.library are more likely to produce improvements if optimized than those parts which merely react to the user's input in his own time.

If the work you are doing in order to optimize code is fun for you, then that's OK, no harm done.

For the record, I would like to point out the scope your optimizations fit into, and where you might want to make specific choices about what to look into.

You could chip away at the code which reacts to user input in user time and you would see no benefit whatsoever (if a mouse click is processed a microsecond faster than without optimization, the user will most definitely not notice the difference), but changes in the interaction between intuition.library and graphics.library/layers.library might have an impact. That is assuming that you can measure it, and not just imply that the impact must be there because the number of cycles spent in the modified code is smaller than it used to be.
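If you want to measure rather than guess, the EClock is a convenient yardstick. A rough sketch (MeasuredOperation() is a placeholder for whatever code path you changed; error handling is kept minimal, and some compiler headers declare TimerBase as a struct Library pointer instead):

Code:
#include <exec/types.h>
#include <exec/io.h>
#include <devices/timer.h>

#include <proto/exec.h>
#include <proto/timer.h>

#include <stdio.h>

struct Device *TimerBase;            /* required by the timer.device protos */

extern void MeasuredOperation(void); /* placeholder: the code under test */

int main(void)
{
    struct MsgPort *port = CreateMsgPort();

    if (port != NULL)
    {
        struct timerequest *tr = (struct timerequest *)
            CreateIORequest(port, sizeof(struct timerequest));

        if (tr != NULL)
        {
            if (OpenDevice(TIMERNAME, UNIT_ECLOCK, (struct IORequest *)tr, 0) == 0)
            {
                struct EClockVal start, stop;
                ULONG freq;
                int i;

                TimerBase = tr->tr_node.io_Device;

                /* ReadEClock() returns the EClock frequency in ticks per second. */
                freq = ReadEClock(&start);

                for (i = 0; i < 1000; i++)
                    MeasuredOperation();

                ReadEClock(&stop);

                /* Ignoring the 32 bit wrap-around for brevity. */
                printf("%lu EClock ticks for 1000 calls (%lu ticks/second)\n",
                       stop.ev_lo - start.ev_lo, freq);

                CloseDevice((struct IORequest *)tr);
            }

            DeleteIORequest((struct IORequest *)tr);
        }

        DeleteMsgPort(port);
    }

    return 0;
}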

Quote from: Cosmos;771885
And making thousand supertinysubroutines (3 or 4 mnemonics) called by bsr/jsr is not a sign of a good compilator... Inlining is MUCH faster, really...


:(

The Green Hills 'C' compiler had, for its time, really great data flow analysis capabilities, which allowed it to optimize the operations carried out by a single function. The compiler knew well how the function-local operations were carried out, but it had no idea of the bigger picture of which function called which.

As far as I can tell Intuition was not written to benefit from function inlining, which would be constrained by the size of the respective function (back then they used preprocessor macros instead). You would have had to mark local functions as being 'static' and let the compiler decide whether inlining them would make sense.

Anyway, as great as the compiler was, it needed help from the programmer to tell it what to do, and this being absent, no function inlining happened. You are asking too much of an optimizing compiler which was a product of its time.
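For illustration only (a made-up helper, nothing from the Intuition sources), this is the kind of choice involved: the macro is always expanded in place by the preprocessor, while the 'static' function leaves the decision to a compiler which is capable of inlining.

Code:
#include <exec/types.h>

/* Preprocessor macro: expanded at every use, no call overhead,
 * but no type checking, and every use grows the code. */
#define CLAMP_TO_BYTE(x) ((x) < 0 ? 0 : ((x) > 255 ? 255 : (x)))

/* Local function: a compiler which performs inlining may expand
 * this in place or keep a single copy and emit a call to it. */
static LONG ClampToByte(LONG x)
{
    if (x < 0)
        return 0;

    if (x > 255)
        return 255;

    return x;
}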
« Last Edit: August 28, 2014, 11:30:57 AM by olsen »
 

Offline olsen

Re: New improved intuition.library version from the Kickstart 3.1
« Reply #2 on: August 28, 2014, 01:03:24 PM »
Quote from: Thorham;771882
Okay, perhaps it's crap because it was compiled from crap C code? This crap had to come from somewhere, and if it's the compiler's fault, then that's not good.

As far as I can tell, Intuition is not poorly written. Poorly written code would lack structure, function and variable names would be poorly chosen and fail to convey their respective purpose, the purpose and intent behind the code as written would be hard to fathom, and comments would either be absent, meaningless or wrong. I have seen code like that: I've written it myself.

How could one reasonably hope to measure code quality on such a small scale? The disassembly is of stub code which calls DrawImageState() with two additional parameters (state=IDS_NORMAL, drawinfo=NULL) which are absent in the basic DrawImage() function.

How could the compiler optimize this? Try squeezing blood from a turnip: there is not enough substance here. An optimizing compiler tries to remove redundancies from the flow of control through a function and from how it uses data; it tries to change the implementation of an algorithm without changing the result. There is no control flow to speak of here; the stub just reuses its function parameters and calls a different function with them. There is no algorithm to change here.
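In 'C' terms, the stub boils down to something like this (a sketch; the function name is changed so as not to clash with the real DrawImage() prototype):

Code:
#include <exec/types.h>
#include <graphics/rastport.h>
#include <intuition/intuition.h>
#include <intuition/imageclass.h>

#include <proto/intuition.h>

/* Roughly what the DrawImage() stub amounts to: pass the original
 * arguments through and supply defaults for the two parameters
 * which DrawImageState() has on top of DrawImage(). */
VOID MyDrawImage(struct RastPort *rp, struct Image *image,
                 LONG leftOffset, LONG topOffset)
{
    DrawImageState(rp, image, leftOffset, topOffset, IDS_NORMAL, NULL);
}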

Quote from: Thorham;771882
After all, if such simple code is already crap, then what on earth is it going to do with more complex code?

If it's the programmers fault, then they should've been fired :D

Throw more complex code at the optimizer, and it will get shorter and/or faster, assuming that it can be optimized.

The disassembly of the stub code is a poor example in this context because there is no algorithmic optimization possible (pulling parameters off the stack and pushing them back onto the stack is mandated by how the platform wants this to be done, so this does not count as a fault of the compiler).

Here's the thing: an optimizing compiler allows you to spend more time on getting your code to work correctly and still be readable.

Most of the time, and I'm not making this up, attempting to optimize code is a futile effort. For one thing, optimization requires effort to transform your code, and you need to verify that it still does the same job as the original code, just better. This transformation inevitably results in more complex code that is hard to verify as correct.

Put another way, in your attempt to make things better you might not just have to spend a lot of time in getting there, you will probably make things less robust than they were before.

Even if you did succeed, the chances that your change will have any positive impact are very low. There are rare exceptions to this, e.g. if the optimization is prompted by prior analysis which exposed the target of the optimization as a bottleneck. The rest of the time you will have difficulties both in figuring out which code to optimize and in justifying why you spent so much time trying to optimize it.

Quote from: Thorham;771882
Quote
If you want to critique the quality of the Intuition code, please cast your eye on the meat, bones and nervous system of the beast.
That's going to be hard without the source code, isn't it?

Sure, but so is trying to optimize the code at the instruction level, viewing a translated version of the original 'C' code. At the instruction level, the knowledge which informed the choices the original programmer made when implementing the algorithms is hard to recover.

In this thread the absence of this information was, so far, considered completely beside the point. This thread is more about the challenge and the sport of peephole optimization than about global optimization.
« Last Edit: August 28, 2014, 01:24:39 PM by olsen »
 

Offline olsen

Re: New improved intuition.library version from the Kickstart 3.1
« Reply #3 on: August 28, 2014, 01:10:53 PM »
Quote from: kamelito;771889
Green Hill compiler is still available for 68k.
http://www.ghs.com/products/68k_development.html

I read somewhere that Atari Mint uses a more recent GCC that is producing better code. Maybe we should port it to Amiga?
Kamelito

Green Hills charges big bucks for the use of the compiler.

Amiga, Inc. (the Los Gatos operation) bought the compiler with complete source code, so that it could be built on the respective workstation (Sun 2 program code does not necessarily run on a Sun 3 workstation, and the other way round).

Incidentally, the Green Hills 'C' compiler used for building the Amiga operating system was written in Pascal.
 

Offline olsen

Re: New improved intuition.library version from the Kickstart 3.1
« Reply #4 on: August 28, 2014, 08:25:57 PM »
Quote from: kamelito;771911
@Olsen
As they bought it is the owner now being Amiga Inc or Hyperion?
Kamelito

Probably neither... Amiga likely bought the compiler source code under license, and licenses such as these are not necessarily transferable. If the company ownership changes, or the company goes into liquidation, you might have to talk to the licensor about terms, and you may have to pay a fee in order to continue using the source code.

When ESCOM acquired certain Commodore assets, paperwork and documentation on software licenses, even contracts with publishing companies such as those who made the RKMs and the AmigaDOS manual, were lost. They stayed lost, or became even more lost, when Gateway 2000 acquired patents and stuff from ESCOM.
« Last Edit: August 28, 2014, 08:37:18 PM by olsen »
 

Offline olsen

Re: New improved intuition.library version from the Kickstart 3.1
« Reply #5 on: August 28, 2014, 08:31:24 PM »
Quote from: itix;771914
CopyMemQuick() is perfect example of redundant micro optimization. Assuming that routines are done right, CopyMemQuick() is never faster than CopyMem(). Saving six asm instructions on each CopyMemQuick() call is not worth it.

You're assuming that CopyMemQuick() was always supposed to leverage an unrolled movem.l loop.

There are notes in the old autodocs which hint that somebody was dreaming about having hardware-accelerated data copying operations available at some point. I take it that this type of hardware actually did exist for 68k Sun workstations, so this wasn't completely unrealistic.

Given how cheap Commodore was, the ambition never resulted in such hardware showing up, though.

I'm speculating: had this hardware existed for the Amiga, it would have hooked into CopyMemQuick().
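For reference, the two exec functions only differ in their preconditions: CopyMemQuick() requires longword-aligned source and destination addresses and a size which is a multiple of four bytes. A small sketch of a caller which picks between them (the helper name is made up):

Code:
#include <exec/types.h>

#include <proto/exec.h>

/* Made-up helper: use CopyMemQuick() only when its documented
 * preconditions hold (longword-aligned pointers, size a multiple
 * of four bytes), otherwise fall back to plain CopyMem(). */
VOID CopyMemAuto(APTR source, APTR destination, ULONG size)
{
    if ((((ULONG)source | (ULONG)destination | size) & 3) == 0)
        CopyMemQuick(source, destination, size);
    else
        CopyMem(source, destination, size);
}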
 

Offline olsen

Re: New improved intuition.library version from the Kickstart 3.1
« Reply #6 on: August 29, 2014, 08:41:05 AM »
Quote from: Thorham;771971
Optimizing that crap code example Cosmos showed is still useful, simply because it's easier to read :)

What would you rather deal with in your source code, this:

Code:
R_DrawImage
    move.l  d1,-(sp)
    move.l  d0,-(sp)
    move.l  a1,-(sp)
    move.l  a0,-(sp)
    jsr     _DrawImage
    lea     $10(sp),sp
    rts

_DrawImage
    move.l  d2,-(sp)
    move.l  8(sp),d0
    move.l  $C(sp),d1
    move.l  $10(sp),d2
    clr.l   -(sp)
    clr.l   -(sp)
    move.l  $1C(sp),-(sp)
    move.l  d2,-(sp)
    move.l  d1,-(sp)
    move.l  d0,-(sp)
    bsr.w   _DrawImageState
    lea     $18(sp),sp
    move.l  (sp)+,d2 ; make the wrong CCR for BenchTrash 1.73
    rts

Or this:

Code:

R_DrawImage
    clr.l   -(sp)
    clr.l   -(sp)
    move.l  d1,-(sp)
    move.l  d0,-(sp)
    move.l  a1,-(sp)
    move.l  a0,-(sp)
    bsr.w   _DrawImageState
    lea     $18(sp),sp
    rts


Actually, what the DrawImage() stub should do is the following:
Code:

    include "exec/macros.i"
    include "intuition/imageclass.i"

    section text,code

    xdef    _DrawImageStub

_DrawImageStub:

    movem.l a2/d2,-(sp)
    move.l  #IDS_NORMAL,d2
    move.l  #0,a2
    JSRLIB  DrawImageState
    movem.l (sp)+,a2/d2
    rts

    end

The DrawImage() stub in intuition.library calls the local DrawImageState() function and bypasses the LVO, which means that if you should patch DrawImageState() through SetFunction(), then DrawImage() will not invoke the patched function. I'd call this a bug in intuition.library V40.

Even that bit of code could be shortened by taking advantage of the fact that IDS_NORMAL=0: you could use moveq to fill in d2 and then set a2 to zero by copying d2 into it.
 

Offline olsen

Re: New improved intuition.library version from the Kickstart 3.1
« Reply #7 on: August 29, 2014, 03:33:46 PM »
Quote from: psxphill;771990
Commodore poured a load of money into the Amiga before the A1000 launch. They then invested in cost reduction for the A500.
 
 Up until 1987 what they were doing made sense. The problem came in 1988 when AAA was started. It wasn't that they didn't invest any money, what they invested didn't turn into real products.
 
 Finally sense prevailed and AGA was started. While AGA rollout was delayed due to management silliness, the money spent on AAA was wasted with mutually consent.

Product development in the Amiga line stagnated and Commodore subsequently began to put more resources into the PC market. At least, this is how it played out in the European market when I was a commercial Amiga developer in the late 1980s/early 1990s. We were basically ignored by Commodore management, and the scarcity of competitive products in Commodore's lineup made our jobs harder. Well, you know how that turned out: when the PC market contracted, the company was left with little room to make something out of the Amiga technology. In the meantime, the company did a really poor job of promoting its products and developing its market.
 

Offline olsen

Re: New improved intuition.library version from the Kickstart 3.1
« Reply #8 on: August 29, 2014, 03:38:45 PM »
Quote from: Thomas Richter;771989
I agree. That's at least sub-optimal. Then, however, one shouldn't really depend on patching the function in first place... except if there would be a bug.

Patches do come in handy in the most unexpected ways. For example, the Picasso II, EGS, Retina, CyberGraphX, Picasso96 software made possible what Commodore struggled with for years. SetFunction() can also be very handy for monitoring or performance analysis. However, if the operating system itself does not use the prescribed method for calling operating system functions, it gets harder to make the patches/probes stick...
 

Offline olsen

Re: New improved intuition.library version from the Kickstart 3.1
« Reply #9 on: August 31, 2014, 11:51:36 AM »
Quote from: matthey;772087

I believe the biggest use of exec/CopyMem() is the ram disk but I recall the graphics or maybe layers library using it as well. I'm out of town at the moment or I would run the SnoopDOS script and tell you.

"ram-handler" may be among the most effective users of CopyMem(), maybe next to "scsi.device", but if you count the number of calls made to CopyMem() in the source code, "intuition.library" comes out on top. In ROM CopyMem() counts as a space saving measure, whereas on disk the operating system components happily use memcpy(), etc. instead.
 

Offline olsen

Re: New improved intuition.library version from the Kickstart 3.1
« Reply #10 on: September 04, 2014, 07:14:53 AM »
Quote from: Minuous;772304
Why pick an arbitrary obsolete version like V3.1 and not, for example, V1.1? Seems pretty random. I don't see what's so special about OS3.1.


Trick question: what's the big difference between programming for Kickstart/Workbench 1.1 and 3.1, other than the number in front of the dot? The APIs evolved big time between 1.1 and 3.1, putting much more power into the hands of the programmer. What required a lot of inside knowledge and fiddling with barely documented data structures back in 1985 became easier to achieve and a lot less error-prone, too. Documentation and example code arguably became better as well, although much of the original documentation was quite good already (even though it was not exactly complete).

Also, the operating system became more robust, thanks to tons of bugs getting found and fixed when QA tools such as "The Enforcer" were widely used within Commodore engineering (which may not be relevant for AROS, though).
 

Offline olsen

Re: New improved intuition.library version from the Kickstart 3.1
« Reply #11 on: September 04, 2014, 12:23:28 PM »
Quote from: Kronos;772312
Well maybe you should take a step back to see what really has happened:

Hack&Patch were doing 3.5/9 on the cheap, scamming contributors left and right.
Doing those few actually new tools in 3.5/9 based on just plain GadTools+BOOPSI was no option.
Stuntz was to smart to give MUI away for free.

So they had to find a 2nd class GUI system that could be aquirred at next to 0 cost, and hell did ClassAct fill that bill.

I was there, on the ground, when the decision was made to go with BOOPSI classes instead of either sticking with GadTools or using MUI. Cost may have been a factor, but the plan clearly was to use what the operating system could support out of the box without resorting to middleware, if you want to call MUI that. The problem was, ClassAct was not as mature as we had hoped, and additional time was needed to integrate and test it. In retrospect, the usefulness of ClassAct was limited, and investing in upgrading it to become a fully featured user interface toolkit did not pay off. Moving every user interface to MUI would have taken a bigger effort, and user interface work certainly was challenging and arduous at the time.

As for contributors getting the short end of the stick, that is quite possible. This was a small project with limited capital investment. Let's not forget that back in 1998 there was not much faith in an Amiga operating system upgrade popping into existence, and just about everybody who might have contributed to working on the product had cold feet. You had to be somewhat crazy to jump in. I certainly was...
 

Offline olsen

Re: New improved intuition.library version from the Kickstart 3.1
« Reply #12 on: September 04, 2014, 12:36:04 PM »
Quote from: itix;772332

OS 3.5/3.9 had almost no improvements (scsi.device with large hd support was only major improvement there) but just 3rd party software you can download from Aminet for free.


Hey, no fair, that hurts :(

I'm one of the crazy people who worked their asses off for those "almost no improvements" in OS 3.5.

You casually dismiss the whole effort of rebuilding and fixing most of the disk-based operating system components, including the datatypes, the printing subsystem, preferences, Workbench and its icon system. We even managed to put in bug fixes for ROM-based components. OS 3.5 brought API changes and code cleanup, and the original plan was to build upon that in future development work. We even managed to beta test the whole product.

In retrospect, the OS 3.5 update was too little too late, but it certainly was no frivolous attempt at selling basically nothing to the gullible.

Now OS 3.9, that certainly deserves its share of criticism.
 

Offline olsen

Re: New improved intuition.library version from the Kickstart 3.1
« Reply #13 on: September 06, 2014, 02:17:40 PM »
Quote from: itix;772449
Not really hack and patch. NewIcons just replaces original icon image with image encoded in tooltypes. It is slower but works and is quite clean. Glowicons append IFF ICON chunk to end of icon file. Users dont really care how icon data is stored.

I doubt that. The size of icon files directly affects how quickly Workbench reads directories, and even under optimal conditions things aren't so nice to begin with. Due to how Workbench is designed, and how the icon file is laid out, there is no clean way to provide for caching, and even changing one small aspect of an icon (say, a tool type) requires that the entire file be written back to disk; prior to icon.library V44 this would amount to a series of individual Write() calls with no write buffering whatsoever.

I added an icon.library API for streamlining snapshot operations (which ends up modifying the icon file on disk) but for all other operations things are not really sunny.
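To give an idea of what that looks like: a snapshot through the V44 tag interface only has to update the icon position in place, instead of regenerating the whole file. Roughly (a sketch, error handling omitted):

Code:
#include <exec/types.h>
#include <utility/tagitem.h>
#include <workbench/workbench.h>
#include <workbench/icon.h>

#include <proto/icon.h>

/* Sketch of a snapshot-style update: only the icon position is
 * rewritten, rather than the entire icon file. */
VOID SnapshotIcon(STRPTR name, struct DiskObject *icon, LONG x, LONG y)
{
    icon->do_CurrentX = x;
    icon->do_CurrentY = y;

    PutIconTags(name, icon,
                ICONPUTA_OnlyUpdatePosition, TRUE,
                TAG_DONE);
}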

The NewIcons text encoding/decoding of the image data contributes significantly to the time it takes to read/store an icon on disk. Also, the NewIcons patches have to hide the encoded image tool types from display, which again burns CPU cycles. This stuff adds up. It works, but sacrifices are being made on your behalf.

With icon files you have a choice between a bad solution and a worse solution. The icon image chunk appended to the icon files by icon.library V44 and beyond is an optimization. It reads more quickly than the NewIcons encoding of the image data (remember, prior to icon.library V44 this data was read by a series of individual Read() calls), and it takes up less space in the file. In this context I consider the new icon file format a bad solution, and NewIcons a worse solution.

Bottom line is, icon I/O operations are expensive and how Workbench reads icons adds insult to injury. The modified icon file format attempts to remedy this.