
Author Topic: Faster directory listing possible?  (Read 13721 times)


Offline djkoelkastTopic starter

  • Full Member
  • ***
  • Join Date: Jul 2004
  • Posts: 200
    • http://www.retroforum.nl
Re: Faster directory listing possible?
« Reply #29 on: June 16, 2013, 12:48:50 PM »
Save images to memory was already enabled; I'll try the icon.library next, as far as I can work with a TFT monitor that tells me NOT OPTIMAL RESOLUTION every 2 minutes.
Amiga 4000/060 cybervision64, CyberSCSI MKII, AlfaData BSC MFC3 I/O, Ariadne II, OS 3.9(bb2), 2x IDE > CF 8GB Seagate Microdrive, 1x HD FDD, 1x SCSI ZIP 100

http://www.retroforum.nl
 

Offline matthey

  • Hero Member
  • *****
  • Join Date: Aug 2007
  • Posts: 1294
Re: Faster directory listing possible?
« Reply #30 on: June 16, 2013, 03:00:33 PM »
Quote from: olsen;738006
Um, my advice, if you want to hear it, would be not to bother with attacking the workbench.library performance problems in this manner. The problems which cause Workbench to perform so poorly are due to poor choice of algorithms, data structures and overall architecture.


No doubt. The original reason for a new workbench.library was to fix a few bugs. A simple cleanup and peephole optimizations that save 10k are just a bonus, not an attempt to fix the AmigaOS 3.9 bloat :/.

Quote from: olsen;738006

Workbench and its partner in crime icon.library are very closely-coupled and share dependencies which are spread through the entire Workbench code. These dependencies are what make it hard to change either part of the couple. Because Workbench and icon.library have to work under modest memory conditions, and with legacy software, there is little room for making larger changes. So this is why necessary architectural changes have rarely been implemented, if at all.

Looking at Workbench through the lens of 68k assembly language, and the optimizations that are possible at this level, will likely give you only a tiny picture of what makes Workbench tick, and what its chief problems look like (or even how these might be addressed). There is likely nothing you could do that will have a measurable effect on Workbench performance, no matter how much effort you put into it.


Have you tried PeterK's icon.library? It gives a remarkable speedup (including on low end systems) without changing the interface. There is a lot possible with assembler, but it is a lot of work; it shouldn't be necessary if compilers were doing a better job.
 

Offline freqmax

  • Hero Member
  • *****
  • Join Date: Mar 2006
  • Posts: 2179
Re: Faster directory listing possible?
« Reply #31 on: June 16, 2013, 04:44:34 PM »
Quote from: matthey;738011
not an attempt to fix the AmigaOS 3.9 bloat :/.


I thought AmigaOS was the pinnacle of tight OS design?

Or perhaps they got lost somewhere after KS 1.x .. ;)
 

Offline paul1981

Re: Faster directory listing possible?
« Reply #32 on: June 16, 2013, 06:07:59 PM »
Quote from: ChaosLord;737965
I don't entirely agree with this.

If he opens his window with thousands of icons then Workbench will completely thrash the memory list and there will be thousands of memory fragments and the Amiga will be permanently slower from that point on.  If he opens and closes that window multiple times and then runs random software he is in for a crash.  It's like Workbench doesn't use memory pools at all and just bashes the OS with zillionz of malloc()/free().

I very strongly suggest that djkoelkast download TLSFmem from Aminet and install it near the top of the startup-sequence.  This will dramatically reduce the memory fragmentation and dramatically increase the speed of the Amiga and dramatically lengthen the uptime.

TLSFmem is the best utility for Amiga ever.  It greatly improves the speed of any program that performs a lot of memory allocations.

I've always used Thomas Richter's PoolMem, and in fact the whole MMU package of his. Here's PoolMem as a separate download:

http://aminet.net/package/util/sys/PoolMem

I can honestly say I've never knowingly had a memory fragmentation problem, or any apparent slowdown due to memory fragmentation. Then again though, I've always used PoolMem!
There are two programs in the download called "MemoryMess" and "FragMeter". I've run MemoryMess for a while whilst doing other stuff and it doesn't seem to affect my system in any way... Do I have some sort of super Amiga? :roflmao:
Maybe the slowdown from memory fragmentation isn't noticeable on faster Amigas? I seem to recall 68000 Amigas feeling "snappier" after a fresh boot, but I can't say I've noticed it on my 030 or 060.
 

Offline nicholas

Re: Faster directory listing possible?
« Reply #33 on: June 16, 2013, 08:48:45 PM »
Quote from: paul1981;738021
I've always used Thomas Richter's PoolMem, and infact the whole MMU package of his.

+1 to that.  MMULib is a fantastic product.

I particularly have to recommend the MuRedox tool.  Works much better than CyberPatcher does.

Executive is another mainstay of all my 3.x machines too.

http://aminet.net/package/util/misc/Executive
http://aminet.net/util/misc/Executive_key.lha
« Last Edit: June 16, 2013, 10:04:30 PM by nicholas »
“Een rezhim-i eshghalgar-i Quds bayad az sahneh-i ruzgar mahv shaved.” - Imam Ayatollah Sayyed  Ruhollah Khomeini
 

Offline nicholas

Re: Faster directory listing possible?
« Reply #34 on: June 16, 2013, 10:02:29 PM »
Quote from: matthey;737986
I haven't. I meant "my" unmodified commodity.library on "my" Amiga. Sorry for the confusion.



PatchFor020 works well most of the time but is limited in what it patches: it only patches compiler math functions to faster ones. It will patch in 64-bit integer instructions that are trapped on the 68060, making the code slower for the 68060! I've seen a bug in one of the patches where a signed multiply was used when an unsigned one should have been. Most of the time this would not cause a problem but it could completely fail in rare cases. Make a backup before you apply the patch, then test the speed and the output, and check for MuForce/Enforcer hits. There are a lot more 68020 optimizations that can be done than the math routines, although these offer some of the biggest gains and are big enough that finding and patching them is reasonably safe. Hand optimizing is slow and error prone but has the potential for a much greater speedup and memory savings. Sometimes 2x the speed and 1/2 the size is possible where the compiler wasn't very good.


Well just for kicks I ran PatchFor020 over itself and amusingly it patched a few operations. :)

Perhaps there will be a PatchFor060 one day? :)
“Een rezhim-i eshghalgar-i Quds bayad az sahneh-i ruzgar mahv shaved.” - Imam Ayatollah Sayyed  Ruhollah Khomeini
 

Offline nicholas

Re: Faster directory listing possible?
« Reply #35 on: June 16, 2013, 10:06:02 PM »
Would this patched scsi.device help for the OP?

http://eab.abime.net/coders-system/67067-open-source-scsi-device.html
“Een rezhim-i eshghalgar-i Quds bayad az sahneh-i ruzgar mahv shaved.” - Imam Ayatollah Sayyed  Ruhollah Khomeini
 

Offline matthey

  • Hero Member
  • *****
  • Join Date: Aug 2007
  • Posts: 1294
Re: Faster directory listing possible?
« Reply #36 on: June 16, 2013, 10:34:47 PM »
Quote from: nicholas;738031
Well just for kicks I ran PatchFor020 over itself and amusingly it patched a few operations. :)


Don't do that! That's probably the code it's using to detect the slow code. If you patch it, it might not find what it's looking for to patch. There are some other types of critical code that should not be patched too. Patching some of the P96 libraries (and probably CGFX) for example will not work because there are a lot of exec/SetFunction() calls. You don't want to patch the 68060.library or SetPatch for obvious reasons also.

Quote from: nicholas;738031

Perhaps there will be a PatchFor060 one day? :)


From me? Probably not. I've thought about making an automated optimizer, but the code really needs to be disassembled correctly first to make a lot of changes. I have worked on a new version of ADis, which does a pretty good job but needs more work, and I've been involved with other projects recently. It would still never be completely safe to do an automated disassemble, patch and assemble, but it could be pretty good and give disassembly warnings. It's better to focus on getting compilers to generate better code.

Quote from: nicholas;738032
Would this patched scsi.device help for the OP?

http://eab.abime.net/coders-system/67067-open-source-scsi-device.html


Don Aden's a good guy and knows what he's doing. A faster transfer speed should give a speedup. I have a CSMK3 68060@75MHz with a 15k UltraSCSI hard drive that gives a sustained 30MB/s, PFS, PeterK's icon.library and Voodoo 4 and there really isn't much delay on large directory listings B).
 

Offline ChaosLord

  • Hero Member
  • *****
  • Join Date: Nov 2003
  • Posts: 2608
    • http://totalchaoseng.dbv.pl/news.php
Re: Faster directory listing possible?
« Reply #37 on: June 16, 2013, 11:18:42 PM »
Quote from: olsen;738005

Because of how Workbench loads icons, it has to talk to icon.library, and the programming interfaces of icon.library are just not up to snuff in terms of efficiency. For example, icon.library performs its own memory management and tracking.

This has to account both for image data and the rest. This is something which you cannot easily switch over to memory pools. For example, you would have to have a separate pool for each Workbench window in order to make pool management more efficient.


The way I figured it, if Workbench used its own pool and PeterK's icon.library used its own pool, then that could double the speed and/or reduce memory fragmentation by a factor of 10.  Or something like that :)

I assume this is why PeterK's icon.library is so much faster (+ all the other optimizations he did.)
Wanna try a wonderfull strategy game with lots of handdrawn anims,
Magic Spells and Monsters, Incredible playability and lastability,
English speech, etc. Total Chaos AGA
 

Offline nicholas

Re: Faster directory listing possible?
« Reply #38 on: June 16, 2013, 11:23:41 PM »
Quote from: matthey;738036
Don't do that! That's probably the code it's using to detect the slow code. If you patch it, it might not find what it's looking for to patch. There are some other types of critical code that should not be patched too. Patching some of the P96 libraries (and probably CGFX) for example will not work because there are a lot of exec/SetFunction() calls. You don't want to patch the 68060.library or SetPatch for obvious reasons also.


Ok, I was just playing around with it just to see how it worked.  Haven't used it on anything important yet.  I took a look at the code to try and understand it a bit more but my 68k assembly is as fluent as my Swahili. ;)


Quote
From me? Probably not. I've thought about making an automated optimizer, but the code really needs to be disassembled correctly first to make a lot of changes. I have worked on a new version of ADis, which does a pretty good job but needs more work, and I've been involved with other projects recently. It would still never be completely safe to do an automated disassemble, patch and assemble, but it could be pretty good and give disassembly warnings. It's better to focus on getting compilers to generate better code.


Please keep doing what you are doing.  It's very much appreciated!



Quote
Don Aden's a good guy and knows what he's doing. A faster transfer speed should give a speedup. I have a CSMK3 68060@75MHz with a 15k UltraSCSI hard drive that gives a sustained 30MB/s, PFS, PeterK's icon.library and Voodoo 4 and there really isn't much delay on large directory listings B).


I've got an ACard IDE to SCSI adapter for my CSMK2 with an SDHC card attached to it, but my A3000 is currently in pieces (with a PIV, a Deneb and a few other bits too) until I get some free time to put it back together. In the meantime I've got a nicely configured UAE setup that I keep tweaking until I rebuild the A3000 and can transfer it across.
“Een rezhim-i eshghalgar-i Quds bayad az sahneh-i ruzgar mahv shaved.” - Imam Ayatollah Sayyed  Ruhollah Khomeini
 

Offline ChaosLord

  • Hero Member
  • *****
  • Join Date: Nov 2003
  • Posts: 2608
    • http://totalchaoseng.dbv.pl/news.php
Re: Faster directory listing possible?
« Reply #39 on: June 16, 2013, 11:31:22 PM »
Quote from: paul1981;738021

I can honestly say I've never knowingly had a memory fragmentation problem, or any apparent slowdown due to memory fragmentation. Then again though, I've always used PoolMem!

Of course I used to use PoolMem too!  From the day it first came out I was using PoolMem and it was awesome!

But then TLSFmem came out which is 10x better than PoolMem.

I ran software that is very hard on the Amiga's memory system.  It creates a megabajillion memory fragments.  And TLSFmem was massively better than PoolMem.  It uses a newer, better algorithm.

You can always comment out PoolMem with a ; and then add TLSFmem in its place in your startup-sequence to see what I mean.  If you don't like it you can always switch back.
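In S:Startup-Sequence the swap would look something like this (the C: paths and >NIL: redirection are illustrative; check each tool's documentation for its actual options):

```
; PoolMem disabled for comparison:
; C:PoolMem >NIL:

; TLSFmem in its place:
C:TLSFMem >NIL:
```

Swapping the ; back restores PoolMem on the next reboot, so the experiment is easy to undo.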
Wanna try a wonderfull strategy game with lots of handdrawn anims,
Magic Spells and Monsters, Incredible playability and lastability,
English speech, etc. Total Chaos AGA
 

Offline nicholas

Re: Faster directory listing possible?
« Reply #40 on: June 16, 2013, 11:32:53 PM »
Quote from: ChaosLord;738046
Of course I used to use PoolMem too!  From the day it first came out I was using PoolMem and it was awesome!

But then TLSFmem came out which is 10x better than PoolMem.

I ran software that is very hard on the Amiga's memory system.  It creates a megabajillion memory fragments.  And TLSFmem was massively better than PoolMem.  It uses a newer, better algorithm.

You can always comment out PoolMem with a ; and then add TLSFmem in its place in your startup-sequence to see what I mean.  If you don't like it you can always switch back.


I prefer TLSFMem too.  It's rock solid stable on every machine I use it on, and it makes a really noticeable speed difference too.
“Een rezhim-i eshghalgar-i Quds bayad az sahneh-i ruzgar mahv shaved.” - Imam Ayatollah Sayyed  Ruhollah Khomeini
 

Offline matthey

  • Hero Member
  • *****
  • Join Date: Aug 2007
  • Posts: 1294
Re: Faster directory listing possible?
« Reply #41 on: June 16, 2013, 11:50:05 PM »
Quote from: ChaosLord;738046

I ran software that is very hard on the Amiga's memory system.  It creates megabajilion of memory fragments.  And TLSFmem was massively better than PoolMem.  It uses a new better algorithm.


Compiler? Listing a drawer full of icons doesn't fragment anything like compiling does. Vbcc is about the best test; it doesn't allocate much on the stack at all. I was surprised at the sheer volume of memory allocations: a log file covering only a few seconds of allocations and deallocations is several megabytes, and then I found some illegal deallocations that weren't even in the log :/. I think we are on the right track to getting at least some of the problems fixed. PoolWatch, BDebug (now working as a source debugger) and ThoR's MuTools have been great for catching problems that weren't even noticed on other operating systems. Only Amiga makes it possible ;).
 

Offline JimDrew

  • Lifetime Member
  • Full Member
  • ***
  • Join Date: Jun 2012
  • Posts: 241
Re: Faster directory listing possible?
« Reply #42 on: June 17, 2013, 04:37:55 AM »
I meant a real caching program, not just adding buffers to the FFS.  There were several different real caching programs that we used back in the day, and these greatly improved the speed of hard files and single block read devices (like floppies), so this would really help the case you're talking about here.  I don't recall the names of the caching programs... I will have to boot my A3000 development machine and see what I used last.
 

Offline olsen

Re: Faster directory listing possible?
« Reply #43 on: June 17, 2013, 09:29:36 AM »
Quote from: matthey;738011
No doubt. The original reason for a new workbench.library was to fix a few bugs.


No, not exactly. The original reason for updating workbench.library was to produce something which would make AmigaOS 3.5 a more attractive product. As Workbench is the "face" of AmigaOS, changes to make it more responsive (e.g. real-time scrolling) and more attractive (e.g. colourful icons) were at the top of the list. As for bugs, there were not that many to take care of. The code, architectural burdens aside, was in remarkably good shape.

The other half of the necessary changes happened to the workbench.library API, which was opened up, and also supplemented by an ARexx interface.

Quote
A simple cleanup and peephole optimizations that saves 10k is just a bonus and not an attempt to fix the AmigaOS 3.9 bloat :/.


Bloat? Well, the most work that went into Workbench itself was intended to open up the APIs, integrate the new icon.library functionality, make Workbench more responsive and of course replace the built-in "Information..." requester.

I wrote the new "Information..." requester from scratch, and it's this part which made workbench.library much larger than it previously used to be.

For the 3.5/3.9 updates we switched from Lattice 'C' 5.04 to SAS/C 6.55, which should have produced better code. But as the optimizer was set for speed (rather than size, stun or kill) the overall size of workbench.library went up rather than down.

Quote
Have you tried PeterK's icon.library? It gives a remarkable speedup (including low end systems) without changing the interface. There is a lot possible with assembler but it is a lot of work. It shouldn't be necessary if compilers were doing a better job.


No, I can't say I did. If I never have to see workbench.library or icon.library from the inside I won't regret it ;) I did my part in updating both (I rewrote icon.library from scratch for the 3.5 update), but like so many programmers before me, I was unable to address the architectural limitations of either. There is only so much you can do if the Workbench design is actively playing against you...
 

Offline olsen

Re: Faster directory listing possible?
« Reply #44 from previous page: June 17, 2013, 09:49:12 AM »
Quote from: ChaosLord;738040
The way I figured it, if Workbench used its own pool and PeterK's icon.library used its own pool then that could double the speed and/or reduce memory fragmentation by 10.  Or something like that :)


No, Workbench does not use memory pools. It pulls everything from the globally available memory, which leads to fragmentation.

Quote
I assume this is why PeterK's icon.library is so much faster (+ all the other optimizations he did.)


I don't know how it works, but there is only so much that can be done, given how Workbench reads icons. What breaks the camel's back is that icons are stored in individual files which must be found first and read quickly. Workbench tries to do that by scanning directory contents and switching back and forth between scanning and reading individual icons, which causes disk accesses to jump around wildly.

If this process could be sped up, it would have to be smarter about when to read the icons. For example, if you could cache the icons to be read from a directory then this would reduce the amount of time which Workbench spends on switching between scanning the directory and reading the icons.

Let's say you had an icon cache. If you could tell Workbench to hold off reading the icons until it has finished scanning the directory, then you could cut down the overall time spent by a significant amount. Of course, once Workbench has read the directory contents, reading the icons must be very quick. I think all of this would be doable.