
Author Topic: a question about block and buffer sizes in HDToolBox with OS3.1.4  (Read 6820 times)


Offline olsen

Contrary to what Gulliver said, I wouldn't push the number of buffers beyond a reasonable number, e.g. 256, unless you really have to (see above, for hardfiles). More buffers are not necessarily faster, because more buffers also mean that the FFS has to search a larger set to find the right block, which takes longer the more blocks there are.

And that is precisely the problem: in the FFS (in Workbench/Kickstart 1.3 of 1987 and beyond) and also its precursors (Workbench/Kickstart 1.1 and 1.2 of 1985/1986) the "buffers" managed by the AddBuffers command are not really a cache :(

A cache only has value if it already contains data that is likely to be retrieved next. For the Amiga ROM file system that seems to hold true only for a handful of blocks, namely the root block, the bitmap blocks, the metadata blocks of a file or directory currently being scanned, and that's mostly it (for the OFS variant, which has data blocks with checksums, these go into buffers, too).

Data in those buffers is not intended to stick around and age, with more frequently-used data lasting longer than only briefly-used data. The file system may notice if a block that's needed is already present in the buffers, but it's more a convenience than a goal to create such lucky accidents. The buffer management is intended to provide temporary storage on demand when reading or writing, and whatever is available is pulled off the rack and put to use. Never mind if the next disk access could have been avoided had what was just reused stuck around a bit longer. These file system buffers are "bounce buffers", not cache buffers.

Another unfortunate clue that this is not supposed to be a cache is that the file system is slow to retrieve a "useful" block from the buffers, because the currently unused buffers are stored in a linked list. Every access has to walk the entire list, so a single lookup costs O(n) in the number of buffers (and, everything considered, the overall cost comes to about O(n^2)).

So, with the 68k FFS and its precursors, it is good advice to keep the number of buffers small because larger numbers will slow things down noticeably. The design aspect which produced these side-effects probably mattered little at the time it was created, when the mass storage device of the day was the floppy disk (perfectly fine with 10 buffers) or the 20 MByte hard disk drive (perfectly fine with 20 buffers).

Side-note: because the FFS reimplementation for AmigaOS4 (and MorphOS) was a rewrite from scratch, its built-in buffers do act as a cache (with about O(log(n)) lookup complexity), and it has dedicated separate bounce buffers for the rare cases when these are needed.
 

Offline olsen

Re: a question about block and buffer sizes in HDToolBox with OS3.1.4
« Reply #1 on: December 10, 2019, 09:09:26 AM »
A question: What is the algorithm used to discard cached data when buffers are full? LRU? Are they just FIFO buffers? Or do they follow some other algorithm?

The algorithm, as far as I can tell, knows no purge conditions and has no deliberate replacement scheme. Its goal is to find a buffer which is not currently being used for a read or write operation in progress and put it to use. If data is to be read, it will make an attempt to check whether the data to be read is still sitting in a used buffer (the more buffers it has to check, the slower it becomes; you got lucky that 4000 buffers worked for you). If it's not sitting there, it will make do with the next conveniently available buffer.

This is not a cache.
 

Offline olsen

Re: a question about block and buffer sizes in HDToolBox with OS3.1.4
« Reply #2 on: December 11, 2019, 01:11:26 PM »
Just a slightly off-topic heads-up - if you use PFS (like pfs3aio for example), the default 30 translates to 150.
http://eab.abime.net/showpost.php?p=1227879&postcount=205

Hard to say what you're going to get with any file system if you use the AddBuffers command or change the number of buffers in the partition data or the mount file  :-\  The claim that 150 is the new 30 is probably misleading to some degree: which quantity exactly is being scaled?

The AmigaDOS documentation is unclear about the effects of increasing the number of buffers, but it suggests that more buffers will increase speed by reducing disk access time (who knew?). The thick binder which came with the Amiga 3000 even mentions the magic word "cache" and explains that each single buffer added will consume about 500 bytes of free memory (could it be 512, and why?).

Because the Amiga file system does not use the buffers primarily as a cache (caching happens more "accidentally"), yet adding more buffers can still improve performance, the speedup must have a different cause.

The buffers are statically allocated memory which the file system uses for temporary storage, e.g. when reading and writing blocks, and for the low-level metadata structures associated with those blocks. Because the buffers are preallocated, the file system is unlikely to run out of dynamically allocated memory while doing its job.

It will, however, run out of buffers if too many files/locks are active or too many disk access operations (e.g. updating the root directory block or the bitmap blocks) are in progress at the same time. At that point new operations will be delayed until one of the older operations finishes and releases the buffers it had claimed.

So any speedup that can be observed, other than the odd cached block that is reused instead of reread from disk, is likely the result of fewer operations having to wait their turn until the buffer they need becomes available: more buffers translate into fewer delays.
« Last Edit: December 11, 2019, 01:13:34 PM by olsen »