Amiga.org
Amiga computer related discussion => Amiga Software Issues and Discussion => Topic started by: djkoelkast on June 15, 2013, 04:00:12 PM
-
I've got my old 3.9 BB2 installation running again. It's on an 8GB microdrive that has multiple partitions.
One of the partitions contains a LOT of folders with WHDLoad games, each folder has a game in it. When I open this partition in workbench it takes a few minutes before the listing shows. In the meantime I can't do anything else.
This partition is either in FFS or SFS, I don't quite remember, but it takes so long. Is there a way to speed up the directory listing?
I remember from older installations there was a .fastdir file that contained the listing? Never seen that on 3.9 though.
-
I think everything depends on what program you're using to read the directory.
WB is slow. Dopus is probably faster. Ordering (written in assembler?) is much faster than Dopus. ;)
-
Using Directory Opus is an excellent idea!
The other thing you should do is use PFS3, as it is very much faster than SFS at reading dirs, and at the same time it uses less CPU power, which means there is more CPU power available to poll that silly gawdawful IDE HD interface.
Another thing you can do to speed things up is switch to a quality SCSI DMA HD interface.
Another thing you can do is to buy a faster accelerator.
But really if you just switch to PFS3 + Directory Opus your dirs will be lightning fast.
Workbench is simply not coded for speed. The underlying AmigaOS is fast but the Workbench.exe is sloooow.
-
I've got my old 3.9 BB2 installation running again. It's on an 8GB microdrive that has multiple partitions.
One of the partitions contains a LOT of folders with WHDLoad games, each folder has a game in it. When I open this partition in workbench it takes a few minutes before the listing shows. In the meantime I can't do anything else.
This partition is either in FFS or SFS, I don't quite remember, but it takes so long. Is there a way to speed up the directory listing?
I remember from older installations there was a .fastdir file that contained the listing? Never seen that on 3.9 though.
Peter K's icon.library helps here: http://m68k.aminet.net/package/util/libs/IconLib_46.4
If you're not bothered about keeping the drawer icons for each game (2500 for example all in one drawer), you could always delete all the .info files for the game drawers, which will make directory listings in file managers much quicker. Of course, you will lose the ability to snapshot the drawer icon positions from within Workbench. This is okay for most people as they use a game launcher.
Check your disk buffers as well on your games partition...100 minimum, I use 300.
Even FBlit & FText help, though. The truth is, even after all the drawers have been read and the disk access has finished, it can still take AGES to sort and display the icons... so a SCSI setup won't make much of a difference... it's Workbench that's slow at sorting and displaying the drawer contents (does it still use chip RAM or something? I don't see much of a difference on fast machines compared to slower ones).
-
I do have a CyberSCSI MKII, but SCSI drives are noisy and expensive, that's why I'm using a microdrive currently.
-
I do have a CyberSCSI MKII, but SCSI drives are noisy and expensive, that's why I'm using a microdrive currently.
You are showing a CyberVision 64 so check that your sys:prefs/Workbench prefs have the gadget "Images in: Other Memory" selected. That with PeterK's icon.library will greatly accelerate icons. The 68060 version of PFS is much faster than alternative file systems for listing as mentioned.
-
How reliable is PFS btw? (and which version?)
Anyway.. is listing entries in CLI slow too?
GUIs tend to love resources of all kinds..
-
PFS3AIO version from Aminet is recommended by Toni Wilen, coder of WinUAE.
Everyone in Team Chaos (except me) has been using PFS3 for many many years and they all swear by it. Most of them tried SFS vs PFS3 and only then did they choose PFS, that was many years ago.
I have read the results of many timing tests performed by many different ppl over the years. PFS3 FTW!
-
You are showing a CyberVision 64 so check that your sys:prefs/Workbench prefs have the gadget "Images in: Other Memory" selected.
+1
That will speed things up a lot. Workbench is coded the slowest way possible so it always stores gfx in the slowest memory possible, which is just the wrong way to do things on AGA or on accelerated Amigas.
That with PeterK's icon.library will greatly accelerate icons. The 68060 version of PFS is much faster than alternative file systems for listing as mentioned.
There is a special version of PFS3 for 060? Awesome!
060 FTW!
-
That will speed things up a lot. Workbench is coded the slowest way possible so it always stores gfx in the slowest memory possible, which is just the wrong way to do things on AGA or on accelerated Amigas.
Workbench is coded in the most compatible way possible with settings and utilities to speed up and enhance the appearance.
There is a special version of PFS3 for 060? Awesome!
The original PFS3 package uploaded to Aminet has CPU specific versions. Toni's version does not but his version has a few minor bug fixes. The last regular version of PFS3 has been very stable for me though.
-
Workbench is coded in the most compatible way possible with settings and utilities to speed up and enhance the appearance.
I don't entirely agree with this.
If he opens his window with thousands of icons then Workbench will completely thrash the memory list and there will be thousands of memory fragments and the Amiga will be permanently slower from that point on. If he opens and closes that window multiple times and then runs random software he is in for a crash. It's like Workbench doesn't use memory pools at all and just bashes the OS with zillionz of malloc()/free().
I very strongly suggest that djkoelkast download TLSFmem from Aminet and install it near the top of the startup-sequence. This will dramatically reduce the memory fragmentation and dramatically increase the speed of the Amiga and dramatically lengthen the uptime.
TLSFmem is the best utility for Amiga ever. It greatly improves the speed of any program that performs a lot of memory allocations.
-
I'll try all possibilities asap, my Multisync monitor is suffering from a broken cable, the cable is directly attached to the monitor so it's not possible to just take another cable :(
-
What does TLSFmem do that reduces the load?
-
What does TLSFmem do that reduces the load?
TLSFmem helps to prevent memory fragmentation.
Once your memory becomes fragmented then any time a program allocates memory or frees memory it then takes 100x longer or 1000x longer or 10000x longer or memory allocations FAIL completely.
It happens to everyone's computer on every OS. That is why, after running certain programs, your 3GHz computer starts running at the speed of a 50MHz Amiga. It has happened to me countless times. It can be triggered by a certain flash ad or by opening a certain PDF. PDF viewers were quite notorious for fraggling your memory on Windoze for many years.
I have had the same problem as the original poster many times. I accidentally open a dir with Workbench that has a zillion icons and my Amiga becomes functionally unusable after that because my memory is so badly fragmented that it is totally impossible to allocate any large chunks of memory even though my memory is nearly empty. It doesn't matter how empty your memory is. If it is all fragmented it is completely useless.
If you are like me and you leave your computer running 24/7/365 then TLSFmem greatly speeds up your Amiga after a few days of use and greatly lengthens the uptime.
-
Windows XP is bad for doing this. It doesn't happen that often on Windows 7 unless a program has crashed. Programs that do that tend to be badly written 3rd-party programs.
Most cases that I see are either the above, a badly written driver, or 7 just not being set up correctly.
Have two server boxes here, one with Windows 7, which has been up and running a few months now without needing a restart.
The other is Server 2008 R2, which is solid as a rock (you can even disable the GUI if you wish).
Windows 8 doesn't seem to suffer from it; I've been running it for over a year now. The only thing I dislike is the new task manager (but I won't get into that).
Linux, now here's an OS that I can't kill. Had one server running for years without needing to reboot it, and that's using X Windows, not just the shell.
-
@ChaosLord, my curiosity was about how it prevents this fragmentation.
I think I found the answer here:
http://www.platon42.de/files/util/TLSFMem.readme
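The readme boils down to TLSF's "Two-Level Segregated Fit": a request size is mapped to a bin in O(1) using bit scans, so no free list ever has to be searched. A generic sketch of that size-to-bin mapping follows (based on the published TLSF design, not code from the TLSFMem package; `__builtin_clz` is a GCC/Clang builtin standing in for the 68k's BFFFO):

```c
/* TLSF's O(1) trick, sketched: map a request size to a
 * (first-level, second-level) bin index with two bit operations.
 * Generic illustration of the algorithm, not TLSFMem's code. */

#define SL_BITS 4   /* 16 second-level bins per power of two */

/* Assumes size >= (1 << SL_BITS) so the shift stays non-negative. */
static void tlsf_mapping(unsigned size, int *fl, int *sl)
{
    /* fl = index of the highest set bit
     * (BFFFO on 68k, CLZ on most other CPUs). */
    int f = 31 - __builtin_clz(size);
    *fl = f;

    /* sl = the next SL_BITS bits below the top bit, i.e. which
     * sixteenth of the [2^f, 2^(f+1)) range the size falls into. */
    *sl = (int)((size >> (f - SL_BITS)) & ((1u << SL_BITS) - 1));
}
```

A free-bin bitmap per level then lets the allocator find a suitable non-empty bin with one more bit scan, which is why allocation cost stays flat no matter how fragmented memory gets.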
-
Installing matthey's CopyMem from Aminet will speed up your 060 machine a great deal too.
-
If he opens his window with thousands of icons then Workbench will completely thrash the memory list and there will be thousands of memory fragments and the Amiga will be permanently slower from that point on. If he opens and closes that window multiple times and then runs random software he is in for a crash. It's like Workbench doesn't use memory pools at all and just bashes the OS with zillionz of malloc()/free().
I snooped the memory allocation functions and it looks like some of the allocations use memory pools and some don't, so TLSFmem should provide some speedup. AllocMem() and AllocVec() are probably faster than TLSFmem until the memory becomes fragmented. Some programs really do thrash memory like you say. TLSFmem uses the BFFFO instruction, which isn't exactly fast (and is slow relative to other instructions on the 68060). Memory pools are another solution for reducing memory fragmentation, but I expect they are slower too. A whole pool can be released at once, which is an advantage though. It's nice for memory cleanup. Vbcc compiled programs use memory pools for malloc()/free(), making the cleanup easy and guarding against memory that did not get deallocated.
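A memory pool of the kind described above can be sketched as a simple bump allocator over large chunks (a hypothetical illustration with made-up names, not exec's CreatePool()/AllocPooled() implementation):

```c
#include <stdlib.h>

/* Minimal bump-allocating pool: illustration only. */
struct chunk { struct chunk *next; };

struct pool {
    struct chunk *chunks;   /* list of big blocks we carved from  */
    size_t chunk_size;      /* usable size of each big block      */
    size_t offset;          /* bump pointer within current chunk  */
};

static struct pool *pool_create(size_t chunk_size)
{
    struct pool *p = malloc(sizeof *p);
    p->chunks = NULL;
    p->chunk_size = chunk_size;
    p->offset = chunk_size;      /* force a chunk on first alloc */
    return p;
}

/* Carve n bytes out of the current chunk, starting a new chunk
 * when the current one is full (assumes n <= chunk_size). */
static void *pool_alloc(struct pool *p, size_t n)
{
    n = (n + 7) & ~(size_t)7;    /* 8-byte alignment */
    if (p->offset + n > p->chunk_size) {
        struct chunk *c = malloc(sizeof *c + p->chunk_size);
        c->next = p->chunks;
        p->chunks = c;
        p->offset = 0;
    }
    void *mem = (char *)(p->chunks + 1) + p->offset;
    p->offset += n;
    return mem;
}

/* The pool's big win: one call frees every allocation at once. */
static void pool_destroy(struct pool *p)
{
    while (p->chunks) {
        struct chunk *next = p->chunks->next;
        free(p->chunks);
        p->chunks = next;
    }
    free(p);
}
```

The advantage mentioned above shows up directly: individual allocations are cheap, and pool_destroy() releases everything in one pass, so memory that "did not get deallocated" cannot leak past the pool's lifetime.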
Installing matthey's CopyMem from Aminet will speed up your 060 machine a great deal too.
It doesn't appear that Workbench, the workbench.library or the icon.library are using the exec/CopyMem() functions while listing icons in a workbench window. My commodity.library used CopyMem() with a 16 byte size several times while listing though. Any speedup from my CopyMem patch would probably be minor on a fast processor like the 060 or 040 in this case. Every little bit helps though :).
-
Ooh, I wasn't aware you'd done a commodity.library too. I'll have to grab myself a copy of that! :)
What's your opinion on the PatchFor020 tool on Aminet? Is it worth running over all the executables on an 3.9 install? I have a CSMK2 060@50MHz.
-
Ooh, I wasn't aware you'd done a commodity.library too. I'll have to grab myself a copy of that! :)
I haven't. I meant "my" unmodified commodity.library on "my" Amiga. Sorry for the confusion.
What's your opinion on the PatchFor020 tool on Aminet? Is it worth running over all the executables on an 3.9 install? I have a CSMK2 060@50MHz.
PatchFor020 works well most of the time but is limited in what it patches. It only patches compiler math functions to faster ones. It will patch in 64-bit integer instructions that are trapped on the 68060, making the code slower for the 68060! I've seen a bug in one of the patches where a signed multiply was used when an unsigned one should have been. Most of the time this would not cause a problem, but it could completely fail in rare cases. Make a backup before you apply the patch, then test the speed and output and check for MuForce/Enforcer hits. There are a lot more 68020 optimizations that can be done than math routines, although these offer some of the biggest gains and are big enough that finding and patching them is reasonably safe. Hand optimizing is slow and error prone but has the potential for a much greater speedup and memory savings. Sometimes 2x the speed and 1/2 the size is possible where the compiler wasn't very good.
-
Alternative approach.. patch workbench icon-hell? ;)
-
Alternative approach.. patch workbench icon-hell? ;)
I do have a working lightly modified (mostly vasm peephole optimizations) AmigaOS 3.9 workbench.library at 191168 bytes where the original was 200856 bytes. I've been testing it for a while for problems and haven't found any yet. I thought PeterK might want a working complete disassembly which is easier than patching the executable for his modified workbench.library but he was worried about it being more error prone which it is :). I don't want to release another version without his patches and I didn't think the code was too bad.
I'm currently working on trying to improve vbcc. It's much better to have the compiler do the work better to begin with. I'm also looking at possibly optimizing some AROS libraries and trying to incorporate them with AmigaOS 3.9.
-
Have you tried installing any type of device caching software? Single block reads (which are common when parsing the directory) is extremely slow with flash card type devices.
-
Have you tried installing any type of device caching software? Single block reads (which are common when parsing the directory) is extremely slow with flash card type devices.
Hey that is a very good point! When reading from a flash device the device can only read whole blocks at a time right? And 1 block of that drive is probably like 1MB, right? Even if it is only 512K that is still going to be really tremendously slow for that crummy IDE v1.0 interface that Mehdi Ali stuck us with.
SCSI 4ever! :)
-
When reading from a flash device the device can only read whole blocks at a time right? And 1 block of that drive is probably like 1MB, right?
No, you only have to read 512 bytes at a time (or on new sata drives it's 4k).
addbuffers gives more buffers to the filesystem; if that buffering is fast, then adding it at the device level as well is not worth it.
FFS gets slower if you add a lot of buffers, and someone did write some device patching software that allowed you to put buffering at the device layer.
My understanding is that SFS/PFS don't suffer from this.
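The device-level buffering being discussed can be sketched as a tiny read-through block cache (illustrative C with made-up names, not trackdisk.device or scsi.device code):

```c
#include <string.h>

/* Toy read-through block cache, illustrating what a device-level
 * cache buys you: repeated 512-byte reads of the same blocks stop
 * hitting the (slow) device.  Illustration only. */
#define BLOCK_SIZE  512
#define CACHE_LINES 8

static struct {
    long block;                    /* which block, -1 = empty */
    unsigned char data[BLOCK_SIZE];
} cache[CACHE_LINES];

static long device_reads;          /* how often we hit the device */

static void cache_init(void)
{
    for (int i = 0; i < CACHE_LINES; i++)
        cache[i].block = -1;
}

/* Pretend device: fills the buffer and counts the access. */
static void device_read(long block, unsigned char *buf)
{
    device_reads++;
    memset(buf, (int)(block & 0xff), BLOCK_SIZE);
}

/* Read through a direct-mapped cache: only go to the device when
 * the wanted block is not already in its cache line. */
static void cached_read(long block, unsigned char *buf)
{
    int line = (int)(block % CACHE_LINES);
    if (cache[line].block != block) {
        cache[line].block = block;
        device_read(block, cache[line].data);
    }
    memcpy(buf, cache[line].data, BLOCK_SIZE);
}
```

With a cache like this in the device layer, a directory scan that re-reads the same blocks over and over costs one device access per block, however many times the filesystem asks.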
-
If it's just WHDLoad listings that are slow due to the amount of entries, you could use a frontend called iGame; it lists all games after scanning a chosen directory. I've used iGame on 030 and 060 - it's fast on both :)
Robert.
-
I've got my old 3.9 BB2 installation running again. It's on an 8GB microdrive that has multiple partitions.
One of the partitions contains a LOT of folders with WHDLoad games, each folder has a game in it. When I open this partition in workbench it takes a few minutes before the listing shows. In the meantime I can't do anything else.
This partition is either in FFS or SFS, I don't quite remember, but it takes so long. Is there a way to speed up the directory listing?
Yes. But you would have to change how the data inside that folder is organized by breaking it down into subdrawers, each of which would hold a part of the whole set. You don't have much leverage here: the most effective change you can make is giving the operating system less work to do.
The performance problems have several sources.
File system performance is not too good on AmigaOS, both because of how file operations are performed by the operating system and because of how poorly the default Amiga file system works.
The way in which Workbench accesses the contents of a directory, figuring out which icons should be displayed, is very complex. Workbench has to collect the names of all icon files in the directory, then load the icons and place them in the drawer window. This may not sound like too much effort, but the way in which this work is done comes with a large overhead: whenever Workbench finds an icon, it switches to loading that icon, then resumes reading the directory (going back and forth between the two tasks is very costly). While it is scanning, Workbench cannot do anything else, except react to drawer windows being resized or brought to the foreground.
Lastly, if you did not snapshot the icons which Workbench found, additional time will be spent placing them in the window, so as not to have them overlap with other icons. The process by which the icon position is picked works reasonably well for small numbers of icons, but it quickly becomes slower the more icons are in a drawer.
The deck is stacked against you. All the software layers of the operating system which are involved in eventually showing icons in a Workbench drawer window were originally designed for the small computer system which the Amiga was in 1986. And for this target platform, the software just about worked (arguably, it worked poorly even back then).
What you can do today in order to obtain better performance is to take measures to make the Amiga do less work. Because the more work it has to do, the more time will be spent: for two times the work, you may see an increase of 4-6 times the time spent on doing it.
I remember from older installations there was a .fastdir file that contained the listing? Never seen that on 3.9 though.
That's because the ".fastdir" files are from an entirely different age, and a different software. They were created by a program called "CLImate", which could be considered one of the many precursors to "Directory Opus" (just to name the most prominent program of its kind). If I remember correctly "CLImate" was released in 1986/1987.
The ".fastdir" files were snapshots of the contents of the respective drawers which they were found in. By reading the ".fastdir" files, "CLImate" could quickly display what was stored in the drawer without going through the lengthy process of scanning it. Scanning a drawer involves reading about 40-50 times the amount of data that is actually going to be used for showing the drawers listing (name, size, date, etc.). Hence the name ".fastdir".
-
I don't entirely agree with this.
If he opens his window with thousands of icons then Workbench will completely thrash the memory list and there will be thousands of memory fragments and the Amiga will be permanently slower from that point on. If he opens and closes that window multiple times and then runs random software he is in for a crash. It's like Workbench doesn't use memory pools at all and just bashes the OS with zillionz of malloc()/free().
Problem is, Workbench and icon.library were designed to work together, but have completely separate programming interfaces. In fact, icon.library has always had a public programming interface, whereas there was no such thing for Workbench until Kickstart 3.x came along.
Because of how Workbench loads icons, it has to talk to icon.library, and the programming interfaces of icon.library are just not up to snuff in terms of efficiency. For example, icon.library performs its own memory management and tracking.
This has to account both for image data and the rest. This is something which you cannot easily switch over to memory pools. For example, you would have to have a separate pool for each Workbench window in order to make pool management more efficient. But Workbench just isn't designed to work that way, and it is really, really hard to change.
As I wrote before: if you want better performance out of the whole stack of software layers which Workbench sits on top of, you should put effort into making the whole stack do less work.
If you try to improve only part of how this whole mess comes together, chances are that any improvements will be consumed by the rest.
-
I do have a working lightly modified (mostly vasm peephole optimizations) AmigaOS 3.9 workbench.library at 191168 bytes where the original was 200856 bytes. I've been testing it for a while for problems and haven't found any yet. I thought PeterK might want a working complete disassembly which is easier than patching the executable for his modified workbench.library but he was worried about it being more error prone which it is :). I don't want to release another version without his patches and I didn't think the code was too bad.
Um, my advice, if you want to hear it, would be not to bother with attacking the workbench.library performance problems in this manner. The problems which cause Workbench to perform so poorly are due to poor choice of algorithms, data structures and overall architecture.
These choices go way back to 1985/1986 when the original Workbench was designed, and subsequent reworkings/enhancements made to this code did not resolve the limitations of the original design. If anything, they compounded the problems.
Workbench and its partner in crime icon.library are very closely-coupled and share dependencies which are spread through the entire Workbench code. These dependencies are what make it hard to change either part of the couple. Because Workbench and icon.library have to work under modest memory conditions, and with legacy software, there is little room for making larger changes. So this is why necessary architectural changes have rarely been implemented, if at all.
Looking at Workbench through the lens of 68k assembly language, and the optimizations that are possible at this level, will likely give you only a tiny picture of what makes Workbench tick, and what its chief problems look like (or even how these might be addressed). There is likely nothing you could do that will have a measurable effect on Workbench performance, no matter how much effort you put into it.
There is a reason why the long list of maintainers of the Workbench code each advocated throwing the code away and starting over from scratch.
-
Save images to memory was already enabled. I'll try the icon.library next, as far as I can work with a TFT monitor that tells me NOT OPTIMAL RESOLUTION every 2 minutes.
-
Um, my advice, if you want to hear it, would be not to bother with attacking the workbench.library performance problems in this manner. The problems which cause Workbench to perform so poorly are due to poor choice of algorithms, data structures and overall architecture.
No doubt. The original reason for a new workbench.library was to fix a few bugs. A simple cleanup and peephole optimizations that saves 10k is just a bonus and not an attempt to fix the AmigaOS 3.9 bloat :/.
Workbench and its partner in crime icon.library are very closely-coupled and share dependencies which are spread through the entire Workbench code. These dependencies are what make it hard to change either part of the couple. Because Workbench and icon.library have to work under modest memory conditions, and with legacy software, there is little room for making larger changes. So this is why necessary architectural changes have rarely been implemented, if at all.
Looking at Workbench through the lens of 68k assembly language, and the optimizations that are possible at this level, will likely give you only a tiny picture of what makes Workbench tick, and what its chief problems look like (or even how these might be addressed). There is likely nothing you could do that will have a measurable effect on Workbench performance, no matter how much effort you put into it.
Have you tried PeterK's icon.library? It gives a remarkable speedup (including low end systems) without changing the interface. There is a lot possible with assembler but it is a lot of work. It shouldn't be necessary if compilers were doing a better job.
-
not an attempt to fix the AmigaOS 3.9 bloat :/.
I thought AmigaOS was the pinnacle of tight OS ?
Or perhaps they got lost somewhere after KS 1.x .. ;)
-
I don't entirely agree with this.
If he opens his window with thousands of icons then Workbench will completely thrash the memory list and there will be thousands of memory fragments and the Amiga will be permanently slower from that point on. If he opens and closes that window multiple times and then runs random software he is in for a crash. It's like Workbench doesn't use memory pools at all and just bashes the OS with zillionz of malloc()/free().
I very strongly suggest that djkoelkast download TLSFmem from Aminet and install it near the top of the startup-sequence. This will dramatically reduce the memory fragmentation and dramatically increase the speed of the Amiga and dramatically lengthen the uptime.
TLSFmem is the best utility for Amiga ever. It greatly improves the speed of any program that performs a lot of memory allocations.
I've always used Thomas Richter's PoolMem, and in fact the whole MMU package of his. Here's PoolMem as a separate download:
http://aminet.net/package/util/sys/PoolMem
I can honestly say I've never knowingly had a memory fragmentation problem, or any apparent slowdown due to memory fragmentation. Then again though, I've always used PoolMem!
There are two programs within the download called "MemoryMess" and "FragMeter". I've run MemoryMess for a while whilst doing other stuff and it doesn't seem to affect my system in any way... Do I have some sort of super Amiga? :roflmao:
Maybe the slowdown from memory fragmentation isn't noticeable on faster Amigas? I seem to recall 68000 Amigas feeling "snappier" after a fresh boot, but I can't say I've noticed it on my 030 or 060.
-
I've always used Thomas Richter's PoolMem, and in fact the whole MMU package of his.
+1 to that. MMULib is a fantastic product.
I particularly have to recommend the MuRedox tool. It works much better than CyberPatcher does.
Executive is another mainstay of all my 3.x machines too.
http://aminet.net/package/util/misc/Executive
http://aminet.net/util/misc/Executive_key.lha
-
I haven't. I meant "my" unmodified commodity.library on "my" Amiga. Sorry for the confusion.
PatchFor020 works well most of the time but is limited in what it patches. It only patches compiler math functions to faster ones. It will patch in 64-bit integer instructions that are trapped on the 68060, making the code slower for the 68060! I've seen a bug in one of the patches where a signed multiply was used when an unsigned one should have been. Most of the time this would not cause a problem, but it could completely fail in rare cases. Make a backup before you apply the patch, then test the speed and output and check for MuForce/Enforcer hits. There are a lot more 68020 optimizations that can be done than math routines, although these offer some of the biggest gains and are big enough that finding and patching them is reasonably safe. Hand optimizing is slow and error prone but has the potential for a much greater speedup and memory savings. Sometimes 2x the speed and 1/2 the size is possible where the compiler wasn't very good.
Well just for kicks I ran PatchFor020 over itself and amusingly it patched a few operations. :)
Perhaps there will be a PatchFor060 one day? :)
-
Would this patched scsi.device help for the OP?
http://eab.abime.net/coders-system/67067-open-source-scsi-device.html
-
Well just for kicks I ran PatchFor020 over itself and amusingly it patched a few operations. :)
Don't do that! That's probably the code it's using to detect the slow code. If you patch it, it might not find what it's looking for to patch. There are some other types of critical code that should not be patched too. Patching some of the P96 libraries (and probably CGFX) for example will not work because there are a lot of exec/SetFunction() calls. You don't want to patch the 68060.library or SetPatch for obvious reasons also.
Perhaps there will be a PatchFor060 one day? :)
From me? Probably not. I've thought about making an automated optimizer, but the code really needs to be disassembled correctly first to make a lot of changes. I have worked on a new version of ADis which does a pretty good job but needs more work, and I've been involved with other projects recently. It would still never be completely safe to do an automated disassemble, patch and assemble, but it could be pretty good and give disassembly warnings. It's better to focus on getting compilers to generate better code.
Would this patched scsi.device help for the OP?
http://eab.abime.net/coders-system/67067-open-source-scsi-device.html
Don Aden's a good guy and knows what he's doing. A faster transfer speed should give a speedup. I have a CSMK3 68060@75MHz with a 15k UltraSCSI hard drive that gives a sustained 30MB/s, PFS, PeterK's icon.library and Voodoo 4 and there really isn't much delay on large directory listings B).
-
Because of how Workbench loads icons, it has to talk to icon.library, and the programming interfaces of icon.library are just not up to snuff in terms of efficiency. For example, icon.library performs its own memory management and tracking.
This has to account both for image data and the rest. This is something which you cannot easily switch over to memory pools. For example, you would have to have a separate pool for each Workbench window in order to make pool management more efficient.
The way I figured it, if Workbench used its own pool and PeterK's icon.library used its own pool then that could double the speed and/or reduce memory fragmentation by 10. Or something like that :)
I assume this is why PeterK's icon.library is so much faster (+ all the other optimizations he did.)
-
Don't do that! That's probably the code it's using to detect the slow code. If you patch it, it might not find what it's looking for to patch. There are some other types of critical code that should not be patched too. Patching some of the P96 libraries (and probably CGFX) for example will not work because there are a lot of exec/SetFunction() calls. You don't want to patch the 68060.library or SetPatch for obvious reasons also.
Ok, I was just playing around with it just to see how it worked. Haven't used it on anything important yet. I took a look at the code to try and understand it a bit more but my 68k assembly is as fluent as my Swahili. ;)
From me? Probably not. I've thought about making an automated optimizer, but the code really needs to be disassembled correctly first to make a lot of changes. I have worked on a new version of ADis which does a pretty good job but needs more work, and I've been involved with other projects recently. It would still never be completely safe to do an automated disassemble, patch and assemble, but it could be pretty good and give disassembly warnings. It's better to focus on getting compilers to generate better code.
Please keep doing what you are doing. It's very much appreciated!
Don Aden's a good guy and knows what he's doing. A faster transfer speed should give a speedup. I have a CSMK3 68060@75MHz with a 15k UltraSCSI hard drive that gives a sustained 30MB/s, PFS, PeterK's icon.library and Voodoo 4 and there really isn't much delay on large directory listings B).
I've got an ACard IDE-to-SCSI adapter for my CSMK2 with an SDHC attached to it, but my A3000 is currently in pieces (with a PIV, a Deneb and a few other bits too) until I get some free time to put it back together. I've got a nicely configured UAE setup that I keep tweaking until I rebuild the A3000 and can transfer it across.
-
I can honestly say I've never knowingly had a memory fragmentation problem, or any apparent slowdown due to memory fragmentation. Then again though, I've always used PoolMem!
Of course I used to use PoolMem too! From the day it first came out I was using PoolMem and it was awesome!
But then TLSFmem came out which is 10x better than PoolMem.
I ran software that is very hard on the Amiga's memory system. It creates megabajilion of memory fragments. And TLSFmem was massively better than PoolMem. It uses a new better algorithm.
You can always comment out PoolMem with a ; and then add TLSFmem in its place in your startup-sequence to see what I mean. If you don't like it you can always switch back.
-
Of course I used to use PoolMem too! From the day it first came out I was using PoolMem and it was awesome!
But then TLSFmem came out which is 10x better than PoolMem.
I ran software that is very hard on the Amiga's memory system. It creates megabajilion of memory fragments. And TLSFmem was massively better than PoolMem. It uses a new better algorithm.
You can always comment out PoolMem with a ; and then add TLSFmem in its place in your startup-sequence to see what I mean. If you don't like it you can always switch back.
I prefer TLSFMem too. It's rock solid stable on every machine I use it on and it makes a really noticeable speed difference too.
-
I ran software that is very hard on the Amiga's memory system. It creates a megabajillion memory fragments. And TLSFmem was massively better than PoolMem. It uses a new, better algorithm.
Compiler? Listing a drawer full of icons doesn't fragment anything like compiling. Vbcc is about the best test. It doesn't allocate much on the stack at all. I was surprised at the sheer volume of memory allocations. A log file of only allocations and deallocations from a few seconds is several megabytes, and then I found some illegal deallocations that weren't even in the log :/. I think we are on the right track to getting at least some of the problems fixed. PoolWatch, BDebug (now working as a source debugger) and ThoR's MuTools have been great for catching problems that weren't even noticed on other operating systems. Only Amiga makes it possible ;).
-
I meant a real caching program, not just adding buffers to the FFS. There were several different real caching programs that we used back in the day, and these greatly improved the speed of hard files and single block read devices (like floppies), so this would really help the case you're talking about here. I don't recall the names of the caching programs... I will have to boot my A3000 development machine and see what I used last.
-
No doubt. The original reason for a new workbench.library was to fix a few bugs.
No, not exactly. The original reason for updating workbench.library was to produce something which would make AmigaOS 3.5 a more attractive product. As Workbench is the "face" of AmigaOS, changes to make it more responsive (e.g. real-time scrolling) and more attractive (e.g. colourful icons) were at the top of the list. As for bugs, there were not that many to take care of. The code, architectural burdens aside, was in remarkably good shape.
The other half of the necessary changes happened to the workbench.library API, which was opened up, and also supplemented by an ARexx interface.
A simple cleanup and peephole optimizations that save 10k are just a bonus and not an attempt to fix the AmigaOS 3.9 bloat :/.
Bloat? Well, the most work that went into Workbench itself was intended to open up the APIs, integrate the new icon.library functionality, make Workbench more responsive and of course replace the built-in "Information..." requester.
I wrote the new "Information..." requester from scratch, and it's this part which made workbench.library much larger than it previously used to be.
For the 3.5/3.9 updates we switched from Lattice 'C' 5.04 to SAS/C 6.55, which should have produced better code. But as the optimizer was set for speed (rather than size, stun or kill) the overall size of workbench.library went up rather than down.
Have you tried PeterK's icon.library? It gives a remarkable speedup (including low end systems) without changing the interface. There is a lot possible with assembler but it is a lot of work. It shouldn't be necessary if compilers were doing a better job.
No, I can't say I did. If I never have to see workbench.library or icon.library from the inside I won't regret it ;) I did my part in updating both (I rewrote icon.library from scratch for the 3.5 update), but like so many programmers before me, I was unable to address the architectural limitations of either. There is only so much you can do if the Workbench design is actively playing against you...
-
The way I figured it, if Workbench used its own pool and PeterK's icon.library used its own pool then that could double the speed and/or reduce memory fragmentation by 10. Or something like that :)
No, Workbench does not use memory pools. It pulls everything from the globally available memory, which leads to fragmentation.
I assume this is why PeterK's icon.library is so much faster (+ all the other optimizations he did.)
I don't know how it works, but there is only so much that can be done, given how Workbench reads icons. What breaks the camel's back is that icons are stored in individual files which must be found first and read quickly. Workbench tries to do that by scanning directory contents and switching back and forth between scanning and reading individual icons, which causes disk accesses to jump around wildly.
If this process could be sped up, it would have to be smarter about when to read the icons. For example, if you could cache the icons to be read from a directory then this would reduce the amount of time which Workbench spends on switching between scanning the directory and reading the icons.
Let's say you had an icon cache. If you could tell Workbench to hold off reading the icons until it has finished scanning the directory, then you could cut down the overall time spent by a significant amount. Of course, once Workbench has read the directory contents, reading the icons must be very quick. I think all of this would be doable.
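As a toy illustration of why finishing the directory scan before reading the icons pays off, here is a sketch (all block numbers are invented for illustration) that totals the head travel for both access patterns:

```python
# Toy model: directory metadata lives near one block, icon files are
# scattered at higher block numbers. Compare total head travel (in
# block-number distance) for interleaved access (scan, read icon,
# scan, ...) against a batched two-pass approach.

def head_travel(seq):
    """Sum of absolute seek distances along a sequence of block numbers."""
    return sum(abs(b - a) for a, b in zip(seq, seq[1:]))

DIR_BLOCK = 1000                         # where the directory metadata lives
ICON_BLOCKS = [5000, 7000, 6000, 8000]   # scattered icon files (invented)

# Interleaved: scan a bit of the directory, read an icon, scan, read, ...
interleaved = []
for icon in ICON_BLOCKS:
    interleaved += [DIR_BLOCK, icon]

# Batched: finish scanning the directory first, then read every icon.
batched = [DIR_BLOCK] + ICON_BLOCKS

print("interleaved:", head_travel(interleaved))  # 37000
print("batched:    ", head_travel(batched))      # 9000
```

The numbers are made up, but the pattern is the point: jumping back and forth between the directory and the icons multiplies the seeking.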
-
OK, I booted up my A3000 (and WinUAE) and see that I use "FastCache". You can set the lines and such from the tooltype info. I tried this with my A1200 and it makes a huge difference (with the real HD as well as the 8GB CF drive).
-
OK, I booted up my A3000 (and WinUAE) and see that I use "FastCache". You can set the lines and such from the tooltype info. I tried this with my A1200 and it makes a huge difference (with the real HD as well as the 8GB CF drive).
Now install PFS3 v5.3:
http://aminet.net/disk/misc/PFS3_53.lha
up the buffers to about 300 and install PeterK's latest icon.library from here:
http://eab.abime.net/coders-system/64079-icon-library-46-4-test-versions.html
And then report back whether "FastCache" is faster :).
-
I always "just assumed" that the reason PFS3 was so fast was because it essentially had a secret built-in copy of FastCache.
Is that not true in some sense?
Everyone has told me that PFS3 uses a large chunk of RAM that FFS does not use. All that extra ram is used to cache stuff, right? Or ?
Does anyone know if FastCache works with FFS when I use 4K or 8K or 16K sectors?
Ppl always complain about FFS being slow but if you increase the sector size to 4K or larger you get a giant speed boost. No caching software needed. If you then add caching software you should then be around the same speed as SFS or PFS.
-
I think PFS is speedy simply because it is smart about things, or rather its author is.
-
I always "just assumed" that the reason PFS3 was so fast was because it essentially had a secret built-in copy of FastCache.
Is that not true in some sense?
Caching is helpful, but it is not essential. A cache will boost performance only if what you need to access next is already stored in the cache. This is the case with data which is accessed repeatedly, but often enough data on the disk is accessed once and then never again.
To provide for a consistent speed improvement you need to organize the file system's data structures in such a way as to make accessing the data fast. For example, instead of having to walk through a series of data blocks before you can use the data you are looking for, you could have a tree data structure, which would let you pinpoint within 3-4 steps what would take 15-30 data blocks to walk through in sequence.
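As a rough sketch of that step-count comparison (the fanout of 32 is an invented illustrative figure, not any real file system's layout):

```python
import math

# Rough step counts for locating one block among n file-system blocks:
# a linear chain touches about n/2 blocks on average, while a balanced
# tree with fanout f needs about ceil(log_f(n)) steps.

def linear_steps(n):
    return n // 2                      # average walk through a chain

def tree_steps(n, fanout=32):
    return math.ceil(math.log(n, fanout))

for n in (30, 1000, 100000):
    print(n, linear_steps(n), tree_steps(n))
```

With 100,000 blocks the chain walk averages tens of thousands of block touches while the tree needs only about four steps, which is why tree-based layouts scale and chained ones do not.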
The FFS consistently uses data structures which are very simplistic in construction, and in theory ought to be robust. But the robustness is just not there, and the simplicity only succeeds in dropping performance the more data has to be managed.
Ppl always complain about FFS being slow but if you increase the sector size to 4K or larger you get a giant speed boost. No caching software needed. If you then add caching software you should then be around the same speed as SFS or PFS.
Changing the block size (a block is made up of several sectors) will have a positive impact on performance only if your disk drive and the controller hardware do not make data accesses noticeably slower if you end up reading 4-8 times the previous amount of data.
You still have the same scalability issues with the FFS, which means that the more data you have to manage, the quicker FFS loses performance. Not all file systems lose performance as quickly as the FFS, as the amount of data increases. It's particularly nasty with the FFS, though.
There is one major drawback to increasing the block size with the FFS: if the file system data structures become corrupted, then the amount of data affected by the corruption will be much higher than with smaller block sizes. Given that the FFS lacks robustness by design, trading greater speed for even smaller robustness may be a bad idea...
-
I've got my old 3.9 bb2 installation running again. It's on a 8GB microdrive that has multiple partitions.
One of the partitions contains a LOT of folders with WHDLoad games, each folder has a game in it. When I open this partition in workbench it takes a few minutes before the listing shows. In the meantime I can't do anything else.
This partition is either in FFS of SFS, I don't quite remember, but it takes so long. Is there a way to speed up the directory listing?
I remember from older installations there was a .fastdir file that contained the listing? Never seen that on 3.9 though.
Sorry if someone's already mentioned this (not read the full thread) but why not use a WHDLoad front-end rather than launching them all from Workbench?
I just have the excellent, and very simple to use iGame in my dock and load all the games from that in a few seconds:
http://winterland.no-ip.org/igame/
-
There is this launcher too.
I've not used either myself though.
http://www.jimneray.com/xbench.html
-
X-Bench is really awesome. I use it on my A600 and it rox as iGame is really nice but not the best in speed on plain 030's.
-
Now install PFS3 v5.3:
http://aminet.net/disk/misc/PFS3_53.lha
up the buffers to about 300 and install PeterK's latest icon.library from here:
http://eab.abime.net/coders-system/64079-icon-library-46-4-test-versions.html
And then report back whether "FastCache" is faster :).
Ok, what are we using to benchmark this with? I think FastCache is faster - I am using an A3000 w/40MHz 040 w/80MB of RAM and an A1200 with 50MHz 030 w/256MB of RAM. The A1200 has the stock hard drive and an 8GB CF drive.
-
Ok, what are we using to benchmark this with? I think FastCache is faster - I am using an A3000 w/40MHz 040 w/80MB of RAM and an A1200 with 50MHz 030 w/256MB of RAM. The A1200 has the stock hard drive and an 8GB CF drive.
All I can say is this reviewer used DiskSpeed 4.2 in July 1993 to race FastCache vs. PowerCache. So if you use that exact version of that program then you can compare your own results to those of the reviewer.
http://de4.aminet.net/docs/rview/FastCache.txt
In 1997 both FastCache and PowerCache were updated:
FastCache 1.1 (1997): http://aminet.net/package/disk/cache/fcache11
And if you really want to figure out the best one there is also HyperCache Professional (commercial) and Dynamicache (commercial).
-
The new icon.library is in fact *a lot* faster; instead of minutes it "only" takes 30 seconds now. The only thing I did was change the icon.library, because the option to load images to other memory was already set.
It's still a bit of a wait, but not nearly as long as before. This is manageable.
If only I could connect 2x Microdrive to the thing. I can't :(
-
Install FastCache and see if you can get it down to 10 secs. You'll hafta make sure to set the settings to use FASTram and use a reasonably large cache.
-
Of course I used to use PoolMem too! From the day it first came out I was using PoolMem and it was awesome!
But then TLSFmem came out which is 10x better than PoolMem.
I ran software that is very hard on the Amiga's memory system. It creates a megabajillion memory fragments. And TLSFmem was massively better than PoolMem. It uses a new, better algorithm.
You can always comment out PoolMem with a ; and then add in TLSFmem in place of it in your startup-sequence to see what I mean. If u don't like it you can always switch back.
Yep, I tried TLSFmem a couple of years ago, and I have a newish version on my hard drive which I downloaded earlier this year. I did try it, but I couldn't tell whether it was any better than PoolMem or not. But if everyone on here says it is, then I'll have to take your word for it. The whole world can't be wrong. I may install it for good yet... LOL.
Thanks Chaos...
-
Install FastCache and see if you can get it down to 10 secs. You'll hafta make sure to set the settings to use FASTram and use a reasonably large cache.
How?
-
Yep, I tried TLSFmem a couple of years ago, and I have a newish version on my hard drive which I downloaded earlier this year. I did try it, but I couldn't tell whether it was any better than PoolMem or not. But if everyone on here says it is, then I'll have to take your word for it. The whole world can't be wrong. I may install it for good yet... LOL.
Thanks Chaos...
I still have my PoolMem in my startup-sequence too. But it is commented out. I have not uncommented it in years.
I am a software developer and there is some sort of hardcore debugging tool that I sometimes have to run (I can't remember which one... it's been years since I did something silly like code a bug :). Anyway, this debugging tool hacks into the AmigaOS memory list system. Only TLSFmem doesn't USE that system, which is why it is so much faster and less fraggly. So when running that hardcore debug tool I hafta switch back to PoolMem temporarily.
So like I say, ur not getting married to TLSFmem, or if u r then u can still cheat on the side with PoolMem when u get the urge. :D
-
How?
http://aminet.net/package/disk/cache/fcache11
FastCache Free Software + instructions are there. I think it might actually default to FASTram nowadays so that u don't hafta actually configure anything.
-
Using Directory Opus is an excellent idea!
The other thing you should do is use PFS3 as it is very much faster than SFS at reading dirs and at the same time it uses less CPU power which means there is more cpu power available to poll that silly gawdawful IDE HD interface.
Another thing you can do to speed things up is switch to a quality SCSI DMA HD interface.
Another thing you can do is to buy a faster accelerator.
But really if you just switch to PFS3 + Directory Opus your dirs will be lightning fast.
Workbench is simply not coded for speed. The underlying AmigaOS is fast but the Workbench.exe is sloooow.
HMMM!!!
My directory listing is plenty fast enough on Amiga Forever running on a Windows PC utilizing an AMD 6 core processor. That listing just zips on by running at 3.2 GHz.
-
Yes, 3.2 GHz is a lot more than say 50 MHz 68030..
-
How do I increase the cache on that partition? AddBuffers?
-
How do I increase the cache on that partition? AddBuffers?
I've got AddBuffers DH2: 600 on my PFS3 partition. That's on MorphOS with 1GB RAM though.
-
I have addbuffers Work: 1000 on my 32MB FFS A1200
1000 buffers = 512k (1000 × 512-byte blocks)
Addbuffers does speed things up noticeably.
But it is not as dramatic as FastCache or PowerCache or etc.
-
Should I use like only FastCache or FastCache and AddBuffers together?
-
I have addbuffers Work: 1000 on my 32MB FFS A1200
1000 buffers = 512k (1000 × 512-byte blocks)
Addbuffers does speed things up noticeably.
But it is not as dramatic as FastCache or PowerCache or etc.
The type of buffer which the file system manages through "AddBuffers" is only used for file system data structures. The contents of the files are not buffered in this manner.
It's possible that you can obtain the same performance with far fewer buffers thrown at the file system to manage. The old FFS has to walk through the entire list of buffers before it can find the one it is looking for.
As such, the more buffers you throw at the file system, the more time it will spend on managing the buffers, rather than making good use of what is in the buffer.
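To make that concrete, here is a toy cost model (the hit rate and the one-comparison-per-buffer cost are invented assumptions, not measured FFS behavior):

```python
# Toy model of an unordered buffer list: finding a cached block among
# n buffers costs about n/2 comparisons on a hit and n on a miss.
# With hit rate h, the expected comparisons per lookup are:

def expected_scan_cost(n_buffers, hit_rate):
    hit_cost = n_buffers / 2          # average position in the list
    miss_cost = n_buffers             # full walk before giving up
    return hit_rate * hit_cost + (1 - hit_rate) * miss_cost

for n in (30, 200, 1000):
    print(n, expected_scan_cost(n, 0.8))   # ~18, ~120, ~600
```

The scan cost grows linearly with the buffer count, so past some point each extra buffer adds more list-walking overhead than it saves in disk reads.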
-
Should I use like only FastCache or FastCache and AddBuffers together?
Because the file system is very limited in what it can do with the buffers assigned to it, you might be best served with a block caching solution (FastCache) doing the hard work, and the file system using only the bare minimum of buffers it needs (10-15 buffers).
-
I use addbuffers 200 with FastCache. Anything higher was just wasting memory. I remember PowerCache, but I didn't use it for some reason... I don't think it worked as well with hardfiles as FastCache.
FastCache is a night and day difference with the 8GB CF drive.
-
I use addbuffers 200 with FastCache. Anything higher was just wasting memory. I remember PowerCache, but I didn't use it for some reason... I don't think it worked as well with hardfiles as FastCache.
FastCache is a night and day difference with the 8GB CF drive.
Are you able to use more than one CF drive (I use MicroDrives, essentially the same, but really a hard drive instead of flash memory)? I can only use one at a time, and I really need both of them to work together.
-
I always list my huge directories from the cli. It makes finding stuff a lot easier, too.
-
Workbench sucks :) Use Dopus 4 or Dopus Magellan (free and open source now). Especially Magellan is pretty fast (but do yourself a favor and don't use it as a Workbench replacement; instead set it up to resemble Dopus 4, but that's just my opinion ;)).
-
Are you able to use more than one CF drive (I use MicroDrives, essentially the same, but really a hard drive instead of flash memory)? I can only use one at a time, and I really need both of them to work together.
I have not tried using more than one CF drive. I only have one currently.
-
So like I say, ur not getting married to TLSFmem, or if u r then u can still cheat on the side with PoolMem when u get the urge. :D
And the best part is... neither TLSFmem or PoolMem will remember a damn thing!
WIN WIN!! :laugh1:
-
X-Bench is really awesome. I use it on my A600 and it rox as iGame is really nice but not the best in speed on plain 030's.
X-Bench causes graphics corruption of window titles on my CGX screen, so I'm not using it at the moment.
-
Are you able to use more than one CF drive (I use MicroDrives, essentially the same, but really a hard drive instead of flash memory)? I can only use one at a time, and I really need both of them to work together.
What is it that is stopping you from using more than one "CF Drive" at a time?
AmigaOS supports a lot of drives.
The only way something could "stop you" from using multiple drives would be... a screwy hard drive controller? Or a screwy driver? Or ?
Addbuffers works with everything automatically without any problems. I believe FastCache also works with everything automatically without any problems.
-
What is it that is stopping you from using more than one "CF Drive" at a time?
AmigaOS supports a lot of drives.
The only way something could "stop you" from using multiple drives would be... a screwy hard drive controller? Or a screwy driver? Or ?
Or I don't know why but it does happen with my A4000's IDE. In fact, the CF card doesn't even work with a CDROM on the same cable. This is a Sandisk 4 GB.
-
Or I don't know why but it does happen with my A4000's IDE. In fact, the CF card doesn't even work with a CDROM on the same cable. This is a Sandisk 4 GB.
It's either a problem with the adapter that you use, or a problem with the CF card.
I've had 3 devices on my A1200 IDE for 13 years without problems, but I am cheating and using an IDEfix97 4-way adapter thingamajiggy to get faster speeds.
Countless ppl use 2 drives on A4000s and A1200s.
It's just that as soon as you try to use a CF card your chances of having problems go up dramatically. Lots of ppl can't even get 1 CF card all by itself to work. Lots of strange, bizarre problems. I have read many threads where it is stated that many CF cards do not 100% conform to IDE specifications.