Considering the large number of Amiga FFS implementations, one would think it's a well-understood filesystem.
What the file system data structures look like, and how they work together, is reasonably well understood. One might still wish for some of the remaining mysteries to be resolved (e.g. why are the data block references in a file header processed in reverse order, and why does the root block record the size of the hash table when that size follows directly from the size of the block), but you generally get a pretty good idea of how the parts hang together.
Still, that isn't the whole story. The implementation language and its operating system set a few rules which you are supposed to know, except that nobody really wanted to explain them to you in the early days of the Amiga. Things didn't get much better, documentation-wise, in the decade that followed, save for Ralph Babel's "Amiga Guru Book", which succeeded in explaining everything that could be explained. The file system design assumes that the basic building block of all its data structures, the 32-bit word, is a signed quantity, yet both design and implementation tend to ignore the consequences. Put another way: the design is generally unaware of its limitations, and so is the implementation. This is why you should be worried if you are about to deal with files larger than 2,147,483,647 bytes: you cannot necessarily predict what is going to happen. The same goes for the volume size, where strange things may happen once more than 2,147,483,647 blocks are in play.
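As a hypothetical illustration (this is not FFS code), here is how a byte or block count stored in a signed 32-bit word silently wraps negative once it crosses that 2,147,483,647 boundary:

```python
import ctypes

def as_i32(value):
    # Interpret a Python integer the way a signed 32-bit word stores it.
    return ctypes.c_int32(value & 0xFFFFFFFF).value

# One byte past the limit: the size field wraps to a negative number.
file_size = 2_147_483_647 + 1
print(as_i32(file_size))        # -2147483648

# The same hazard applies to a volume size counted in blocks.
blocks = 2_147_483_648
print(as_i32(blocks))           # -2147483648
```

Any size comparison or offset arithmetic performed on such a wrapped value will go quietly wrong, which is exactly why the behaviour beyond the 2 GB mark is unpredictable.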
Things do not look sunny at all if this file system has to run on the Amiga operating system, because it must interface with dos.library and with application software which uses the dos.library API and/or the packet interface. What is often overlooked is just how complex the file system implementation has to be, because it has to schedule the packet I/O and treat each client fairly. Since dos.library doesn't do file systems any favours, what you end up with in the FFS, or rather the default ROM file system, is almost a self-contained multitasking operating system of its own.
The original BCPL version leaned on the BCPL runtime library to provide coroutine threading (think Python generators: they are the closest analogy I can offer for how these coroutines work). The assembly language versions which followed had to implement their own version of this, and as the file system functionality grew more complex, it became harder and harder to guarantee that certain operations actually worked out correctly.
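To make the generator analogy concrete, here is a loose sketch (my own, not a reconstruction of the BCPL runtime): each task suspends itself at a yield point and a trivial scheduler resumes the tasks in turn, which is roughly how cooperatively scheduled coroutines interleave their work.

```python
def worker(name, steps):
    # Each yield hands control back to the scheduler, much like a
    # coroutine suspending itself until the file system resumes it.
    for i in range(steps):
        yield f"{name}:{i}"

def round_robin(tasks):
    # Minimal cooperative scheduler: resume each task in turn
    # until every one of them has run to completion.
    trace = []
    while tasks:
        task = tasks.pop(0)
        try:
            trace.append(next(task))
            tasks.append(task)       # not finished: requeue it
        except StopIteration:
            pass                     # finished: drop it
    return trace

print(round_robin([worker("a", 2), worker("b", 2)]))
# ['a:0', 'b:0', 'a:1', 'b:1']
```

Nothing preempts a task here; progress depends entirely on each coroutine yielding at the right moments, which is precisely what makes correctness hard to guarantee as the operations grow more involved.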
For example, writing to a file can trigger a series of operations which all have to complete in a specific order and need to be reversible in case of trouble: the file may have to be extended, for which new storage space must be allocated; that storage space needs to be linked into the bookkeeping data structures, which may themselves have to be reallocated; then everything has to be recorded in the file header; and eventually the client needs to be notified that the write operation succeeded. Unless something goes wrong, of course, in which case the whole cascade has to be rolled back. And don't forget that errors might occur during the rollback, too.
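The shape of such a cascade can be sketched as a list of do/undo pairs (the stage names below are made up for illustration; real FFS stages are far more involved):

```python
def staged_write(stages):
    # Run each (do, undo) pair in order; on failure, undo the completed
    # stages in reverse order, mirroring the cascade described above.
    done = []     # undo actions for the stages that completed
    trace = []    # what actually happened, for illustration
    try:
        for do, undo in stages:
            trace.append(do())
            done.append(undo)
    except Exception:
        # Roll the cascade back. A real file system must additionally
        # cope with errors occurring during the rollback itself.
        while done:
            trace.append(done.pop()())
        trace.append("rolled back")
    return trace

def disk_full():
    raise IOError("no space left")

# Happy path: both hypothetical stages complete.
print(staged_write([(lambda: "alloc blocks", lambda: "free blocks"),
                    (lambda: "update header", lambda: "revert header")]))
# ['alloc blocks', 'update header']

# A failure in the second stage undoes the first, in reverse order.
print(staged_write([(lambda: "alloc blocks", lambda: "free blocks"),
                    (disk_full, lambda: "never runs")]))
# ['alloc blocks', 'free blocks', 'rolled back']
```

The sketch deliberately punts on the hardest part, noted in the comment: what to do when an undo action itself fails, which is exactly the situation the paragraph above warns about.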
All of this uses strange data structures which don't always suffice for carrying around all the information that is needed. Weird workarounds exist, such as stuffing everything that doesn't fit into a file handle or the "file info block" into the humble file lock, which then has interesting repercussions all over the place. It's not just that file locks become "relay batons" for multi-stage operations; the file system also has to watch out for race conditions (oh, the irony).
This threading model is one reason why the file system is so brittle, and disaster may follow if the "relay" is aborted (a crash, an ejected disk, etc.). The file system design generally does not care in which order the operations that modify its data structures complete, but the implementation should. FFS version 40 tries its best to modify the on-disk data structures in an order which minimizes the irreversible damage should one modification go awry. But this only goes so far: the Amiga file system has a write cache which tends to aggravate any problem that may occur in this domain.
I've seen the belly of the beast from the inside when I reimplemented the FFS in 'C'. This was, in retrospect, the toughest software project I have ever undertaken. It certainly is the most complex piece of software I have written so far.
That's what you get from seemingly simple designs...