
Author Topic: Layers.library V45 on the aminet  (Read 131153 times)


Offline itix

  • Hero Member
  • *****
  • Join Date: Oct 2002
  • Posts: 2380
Re: Layers.library V45 on the aminet
« Reply #224 from previous page: September 15, 2014, 12:23:28 PM »
Quote from: olsen;773022

If you can find somebody to pay for that, then the current owner of the operating system may actually want to sell it, if asked.


This is the black hole I mean. There is nobody willing to invest this money, and then there is this current owner, whatever company it is. Re-establishing OS3 development would help MorphOS and AROS, but the makers of that other OS which I won't mention here surely won't agree.
My Amigas: A500, Mac Mini and PowerBook
 

Offline OlafS3

Re: Layers.library V45 on the aminet
« Reply #225 on: September 15, 2014, 12:29:43 PM »
Quote from: biggun;773037
Hi Olaf,

From what you say AROS 68K is already or can already be a very good AMIGA OS replacement?


If there are areas where performance could be improved, like e.g. EXEC or LAYERS,
would these be areas where some geeks could help?

I would assume Cosmos would be talented for tuning Exec?

And I would think that Thomas would be the perfect guy to do layer super fast?


Thomas (like Olsen and others) "would" be perfect, but he has already explained that he is not allowed to do so because of contracts he signed in the past. It seems everyone who was involved in AmigaOS development in the past signed such contracts, almost like a weapon to hinder competition. So they cannot directly contribute, because it could be used against AROS then :(
 

Offline kolla

Re: Layers.library V45 on the aminet
« Reply #226 on: September 15, 2014, 12:41:51 PM »
Lesson to learn: never sign any NDA when it comes to software development, it can and will work against you.
B5D6A1D019D5D45BCC56F4782AC220D8B3E2A6CC
---
A3000/060CSPPC+CVPPC/128MB + 256MB BigRAM/Deneb USB
A4000/CS060/Mediator4000Di/Voodoo5/128MB
A1200/Blz1260/IndyAGA/192MB
A1200/Blz1260/64MB
A1200/Blz1230III/32MB
A1200/ACA1221
A600/V600v2/Subway USB
A600/Apollo630/32MB
A600/A6095
CD32/SX32/32MB/Plipbox
CD32/TF328
A500/V500v2
A500/MTec520
CDTV
MiSTer, MiST, FleaFPGAs and original Minimig
Peg1, SAM440 and Mac minis with MorphOS
 

Offline warpdesign

  • Sr. Member
  • ****
  • Join Date: Feb 2008
  • Posts: 256
    • http://www.warpdesign.fr
Re: Layers.library V45 on the aminet
« Reply #227 on: September 15, 2014, 12:57:02 PM »
Maybe these NDAs are time-limited (I wish they are) ?
 

Offline OlafS3

Re: Layers.library V45 on the aminet
« Reply #228 on: September 15, 2014, 01:09:57 PM »
Quote from: warpdesign;773041
Maybe these NDAs are time-limited (I wish they are) ?


it seems not :(
 

guest11527

  • Guest
Re: Layers.library V45 on the aminet
« Reply #229 on: September 15, 2014, 01:13:37 PM »
Quote from: olsen;773029
ReAction is one of H&P's contributions to the 3.5/3.9 product. As for FFS and other operating system modules which Heinz Wrobel worked on (FFS, SCSI, SetPatch), I suppose it would be his call whether or not these could be included. My own contributions to the product make up a sizable bunch, too, so there's a possibility to roll all of that into a product. Let's call it "AmigaOS 𝜋" ;)

Given that I'm a mathematician, I kinda like the name. I believe the fixes for console could be re-done (there isn't really much), there's not much to do for RAM at all, but FFS requires at least a couple of tweaks (ACTION_CLOSE returning the wrong value, ACTION_FLUSH not waiting, 4GB support, and I would strongly suggest including NSD *and* TD64 support, plus probably a couple of other issues I don't remember). SetPatch... well, ExAllEnd() was broken, but there are probably more issues that got fixed. scsi.device - I have no clue.
 

guest11527

  • Guest
Re: Layers.library V45 on the aminet
« Reply #230 on: September 15, 2014, 01:15:55 PM »
Quote from: kolla;773040
Lesson to learn: never sign any NDA when it comes to software development, it can and will work against you.

Then you will never be able to work in professional (as in: for money) software development. Such NDAs are quite common, and of course they forbid you to take the developed software and re-sell it to a competitor. Or would you pay for software just to see your competitor get access to it, too?

It's a bit different for software that was licensed.
 

Offline OlafS3

Re: Layers.library V45 on the aminet
« Reply #231 on: September 15, 2014, 01:18:59 PM »
Quote from: Thomas Richter;773044
Then you will never be able to work in professional (as in: for money) software development. Such NDAs are quite common, and of course they forbid you to take the developed software and re-sell it to a competitor. Or would you pay for software just to see your competitor get access to it, too?

It's a bit different for software that was licensed.

Take the software and resell it, yes, of course. But in this case, you have said it forbids you any source contributions to AROS even if not a single line of original code is included, just because you had access to the old sources. That is completely different...

A good way to take a lot of devs out of the game.
« Last Edit: September 15, 2014, 01:22:29 PM by OlafS3 »
 

guest11527

  • Guest
Re: Layers.library V45 on the aminet
« Reply #232 on: September 15, 2014, 01:32:58 PM »
Quote from: vxm;773028
And the problem is that each of you is right.
One says where to go while the other says how to go.
Then each of you takes one end of the same rope and pulls in exactly the opposite direction. So, inevitably, nothing is moving forward.

Remember, your discussion is about 7 MHz clocked hardware.
Synergy will always be more profitable than a true-false antagonism.

Well, look, first of all, I don't think we're "pulling in opposite directions", or that we aren't moving. This thread is about a move, after all, and I hope it's a move in the right direction.

Second, whether a processor is clocked at 7 MHz or not does not matter: if a given function (say AndRectRegion()) takes 1% of the overall time to move a window, it does not matter whether you speed it up by a factor of two (realistic, if AllocMem is bypassed) or a factor of 1000 (unrealistic, unless you replace it by a NOP): the net effect will be nil. It's really a very elementary truth that is independent of processor speed. To get an overall speedup of two, *every single* function in the call path would have to be sped up by the very same factor, and that's not going to happen.

The problem really is that Cosmos has apparently never worked on a larger software project with exploding complexity, and thus has no feeling for what type of modifications one would want to make, and at which stage of the project. Yes, of course it makes sense to optimize a bottleneck of a program, and to use processor-specific code there. But it does not make sense if you have additional constraints that are harder to characterize, such as maintainability or portability. If the code can work on an old machine, and the speed impact of not using the latest processor instructions is below measurable, it makes no sense to use such optimizations. Basically you compromise compatibility and get nothing in return. In the same way, it does not make sense to replace an AllocMem() by a copy on the stack (provided this is valid at all) if you don't get a measurable effect. Again, you would compromise maintainability (as in: there is a single constructor call for a certain object) and would not gain anything measurable in return.

As always, you have to find compromises in development, especially when it comes to larger and more complex problems, and "running time" is not the one and only goal. You not only want to deploy your software on a wide variety of hardware, you also want compatibility with existing software, and you want to be able to read your code ten years from now. You also want customers to be able to install and use the software easily, and not be confused by compatibility issues where version C of a library only works with programs P and Q, but program R requires version B, and program S may work with C, but only if specific settings are made...

The overall problem cannot be reduced to a simple count of cycles.
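The argument Thomas makes here is essentially Amdahl's law. A minimal sketch in Python, using the hypothetical figures from the post (the 1% share and the speedup factors are his illustrative numbers, not measurements of layers.library):

```python
def overall_speedup(fraction, local_speedup):
    """Amdahl's law: overall speedup of a program when `fraction`
    of its total runtime is accelerated by `local_speedup`."""
    return 1.0 / ((1.0 - fraction) + fraction / local_speedup)

# A function accounting for 1% of the time, made twice as fast:
print(overall_speedup(0.01, 2))     # ~1.005, i.e. a 0.5% gain
# The same function made 1000x faster:
print(overall_speedup(0.01, 1000))  # ~1.010, still only a ~1% gain
```

The only way to double overall speed is to push `fraction` towards 1.0, which is exactly the point that every single function in the call path would need the same speedup.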
 

Offline psxphill

Re: Layers.library V45 on the aminet
« Reply #233 on: September 15, 2014, 01:37:40 PM »
Quote from: Thomas Richter;773044
Such NDAs are quite common, and of course, the they forbid you to take the developed software and re-sell that to a competitor.

That isn't legal even if you didn't sign an NDA. As long as you don't disclose anything that could only have been learnt during your contract, you can contribute to AROS just fine.
 
 What you are describing sounds more like a non-compete clause, which isn't going to still be in force by now (if they tried to claim it is, a court would likely rule it an unfair clause).
 

Offline OlafS3

Re: Layers.library V45 on the aminet
« Reply #234 on: September 15, 2014, 01:40:43 PM »
Quote from: Thomas Richter;773046
Well, look, first of all, I don't think we're "pulling in opposite directions", or that we aren't moving. This thread is about a move, after all, and I hope it's a move in the right direction.

Second, whether a processor is clocked at 7 MHz or not does not matter: if a given function (say AndRectRegion()) takes 1% of the overall time to move a window, it does not matter whether you speed it up by a factor of two (realistic, if AllocMem is bypassed) or a factor of 1000 (unrealistic, unless you replace it by a NOP): the net effect will be nil. It's really a very elementary truth that is independent of processor speed. To get an overall speedup of two, *every single* function in the call path would have to be sped up by the very same factor, and that's not going to happen.

The problem really is that Cosmos has apparently never worked on a larger software project with exploding complexity, and thus has no feeling for what type of modifications one would want to make, and at which stage of the project. Yes, of course it makes sense to optimize a bottleneck of a program, and to use processor-specific code there. But it does not make sense if you have additional constraints that are harder to characterize, such as maintainability or portability. If the code can work on an old machine, and the speed impact of not using the latest processor instructions is below measurable, it makes no sense to use such optimizations. Basically you compromise compatibility and get nothing in return. In the same way, it does not make sense to replace an AllocMem() by a copy on the stack (provided this is valid at all) if you don't get a measurable effect. Again, you would compromise maintainability (as in: there is a single constructor call for a certain object) and would not gain anything measurable in return.

As always, you have to find compromises in development, especially when it comes to larger and more complex problems, and "running time" is not the one and only goal. You not only want to deploy your software on a wide variety of hardware, you also want compatibility with existing software, and you want to be able to read your code ten years from now. You also want customers to be able to install and use the software easily, and not be confused by compatibility issues where version C of a library only works with programs P and Q, but program R requires version B, and program S may work with C, but only if specific settings are made...

The overall problem cannot be reduced to a simple count of cycles.

One question... I understand you are not allowed to contribute any changes to AROS because of the contracts. Would you be allowed to look at the AROS sources and give tips on what to improve and how (in abstract form), or would that be problematic too? Someone else would then make the actual changes. I think even the AROS devs would be grateful for tips.
 

Offline wawrzon

Re: Layers.library V45 on the aminet
« Reply #235 on: September 15, 2014, 01:41:21 PM »
Quote from: Thomas Richter;773043
Given that I'm a mathematician, I kinda like the name. I believe the fixes for console could be re-done (there isn't really much), there's not much to do for RAM at all, but FFS requires at least a couple of tweaks (ACTION_CLOSE returning the wrong value, ACTION_FLUSH not waiting, 4GB support, and I would strongly suggest including NSD *and* TD64 support, plus probably a couple of other issues I don't remember). SetPatch... well, ExAllEnd() was broken, but there are probably more issues that got fixed. scsi.device - I have no clue.


Just make PFS3 the official file system: it includes all the functionality, is open, and is currently maintained by Toni Wilen. Include FFS as-is for legacy and backwards compatibility, and you are done.
 

Offline wawrzon

Re: Layers.library V45 on the aminet
« Reply #236 on: September 15, 2014, 01:53:32 PM »
Quote from: OlafS3;773045
Take the software and resell it, yes, of course. But in this case, you have said it forbids you any source contributions to AROS even if not a single line of original code is included, just because you had access to the old sources. That is completely different...

A good way to take a lot of devs out of the game.


This argument is void with respect to reimplementing AmigaOS functionality in a clean-room environment. If he had not signed the NDA and not gained insight into the Amiga system source code, he would have no knowledge of its internals to contribute to AROS. Now that he did, and has that knowledge, he could be extremely helpful, if only there were a counterpart to cooperate with and establish such a clean-room approach. The ability to read and understand the code as a whole and communicate things clearly is as important as the coding itself, I guess.
 

Offline olsen

Re: Layers.library V45 on the aminet
« Reply #237 on: September 15, 2014, 01:55:01 PM »
Quote from: biggun;773037
Hi Olaf,

From what you say AROS 68K is already or can already be a very good AMIGA OS replacement?

I'm sorry, this is not for me to say because I neither know the current state of affairs of the AROS 68k build, nor am I familiar with the AROS project as it is today. Before I could give you an assessment I ought to know what I'd be talking about, which I don't :(

Quote from: biggun;773037

If there are areas where performance could be improved, like e.g. EXEC or LAYERS,
would these be areas where some geeks could help?

From the impression I got so far, it seems to me that the interest may be there, but the question is whether this is sufficient to polish the code as it is.

We are not talking about some old, small-sized operating system component for which tangible improvements are easily achieved (the "low-hanging fruit", if you want to call it that). Any measurable improvements would have to come through analyzing and re-engineering the code. This requires a bit of experience and knowledge of the technology. Back in the day you could learn all that, and apply it, in a few years.

I have no idea how the talent situation is today. Let's see some code, that is usually the best way to get an impression of how well-prepared a programmer is to tackle Amiga software development.

Quote from: biggun;773037

I would assume Cosmos would be talented for tuning Exec?

And I would think that Thomas would be the perfect guy to do layer super fast?

Frankly, I cannot judge how far Cosmos' talents extend beyond the 68k assembly language optimizations he showed. Exec is pretty well-designed and well-implemented (actually, the InitStruct() stuff was partly obsolete when it shipped, and how Signal exceptions are handled makes you wonder why the API is incomplete), and the best thing you can do without making radical changes to the implementation seems to be to shave off the rough edges through small optimizations. The thing is, for optimizations to be made in this type of software, you both need to know the context in which your optimization would have to be effective, and you need to measure whether the optimization actually did make things better. So far, from Cosmos' own words, he does not seem to be into measuring the effects; he prefers to infer the effect from the changes he made.

As for Thomas, you may not be aware of it, but he is a physicist by training, which accounts for his background in mathematics and computer science. He has lectured, published papers, etc. He's an actual scientist. Why is this important? Physics is an empirical science, which builds models of the world through the use of mathematics. To make sure that your models are sufficiently accurate representations of reality, you need to test and verify them. Any claim you can make about the models must be backed up by evidence. See where I'm going?

Thomas built his layers.library by analyzing how the original worked, built a new one designed to solve the same problem better and verified that it does accomplish this goal. This approach represents best engineering practice. As far as I know the performance improvements are significant and can be measured. These improvements are on a scale which exceeds what could be achieved by fine-tuning the underlying assembly language code. No matter how much effort you put into shaving cycles off an inefficient 'C' compiler translation of the original code, if that code uses a technique (algorithm) that solves the wrong problem, or solves it in such a way that it wastes time, then you still have a poor solution. What's the alternative? Replace the algorithm with something that is more suited to the task. This is what Thomas did.

Replacing the algorithm produces significant leverage. To give you an example: if you have used the standard file requester in AmigaOS 3.1 and 3.5 you may have noticed that there is a performance difference between the two. The original 3.1 version became noticeably slower the more and more directory entries it read and displayed. The 3.5 version did not become noticeably slower. This was achieved by replacing the algorithm by which the file requester kept the directory list in sorted order. In the 3.1 version, doubling the number of directory entries read caused the file requester to spend four times as much effort to keep the list sorted, and no degree of low level assembly optimizations would have helped to improve this. What did bring improvements was to replace the sorting algorithm, so that doubling the number of directory entries only about doubled the amount of time needed to keep it sorted.

This is how you get to "super fast", and Thomas is your man. Cosmos, I'm not so sure about.
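The file requester difference olsen describes can be sketched generically. This is an illustration of the complexity argument, not the actual AmigaOS code; the function names and directory entries are made up for the example:

```python
import bisect

def keep_sorted_while_reading(entries):
    """3.1-style behaviour: re-establish sorted order after every
    entry read. Each insertion shifts O(n) elements, so reading n
    entries costs O(n^2) overall -- doubling n quadruples the work."""
    listing = []
    for name in entries:
        bisect.insort(listing, name)
    return listing

def read_then_sort(entries):
    """The algorithmic fix: collect everything first, then sort once
    in O(n log n), so doubling n only roughly doubles the work."""
    return sorted(entries)

names = ["S", "C", "Devs", "Libs", "Prefs", "Tools"]
assert keep_sorted_while_reading(names) == read_then_sort(names)
```

Both versions produce the same listing; only the growth rate differs, which is why no amount of assembly-level tuning of the first version could ever catch up with replacing the algorithm.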
 

Offline wawrzon

Re: Layers.library V45 on the aminet
« Reply #238 on: September 15, 2014, 02:10:54 PM »
Quote from: psxphill;773047
That isn't legal even if you didn't sign an NDA. As long as you don't disclose anything that could only be learnt during your contract then you can contribute to AROS just fine.
 
 What you are describing sounds more like a non-compete clause, which isn't going to be in force by now (if they try to say it is then the court would rule that it was an unfair clause).


I trust that since Thor and Olsen seriously consider that there is a threat, there must be one. Beyond all else, they have personal experience with the commercial entities in question, having worked for them, and I'm sure they base their opinion on some experience, personal or general, that may not be available to others.
 

Offline Thorham

  • Hero Member
  • *****
  • Join Date: Oct 2009
  • Posts: 1150
Re: Layers.library V45 on the aminet
« Reply #239 on: September 15, 2014, 02:19:27 PM »
Quote from: olsen;773051
What did bring improvements was to replace the sorting algorithm, so that doubling the number of directory entries only about doubled the amount of time needed to keep it sorted.
It would've been even better if they had simply read the whole directory first, then sorted it with the right sorting algorithm, and finally displayed the results. DirectoryOpus 5.90 does that.

Quote from: olsen;773051
This is how you get to "super fast", and Thomas is your man. Cosmos, I'm not so sure about.
Perhaps, but when you work with some resourced binary, it can't hurt to clean up the compiler mess so that you get much more readable code. After that you can try to replace algorithms.