Author Topic: Article: DCFS and LNFS Explained

Offline olsen

Re: Article: DCFS and LNFS Explained
« on: September 01, 2016, 05:45:31 PM »
Quote from: Krashan;813330
And slow like hell.


Well... I wouldn't want to conclude from the performance of the FFS (or rather, the lack of performance) that Amiga file systems have to be slow.

The fundamental design of the OFS/FFS on-disk data structures limits the scalability of that particular file system architecture. It was never intended to work with storage devices much larger than 20 MBytes, and it never even worked correctly with devices larger than about 50 MBytes (a bug meant that the file system did not know it could manage only a fixed number of bitmap blocks, so it could corrupt its root block by mistake).
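
For a sense of scale, here is my own back-of-the-envelope arithmetic, assuming the classic layout of 512-byte blocks, 25 bitmap block pointers in the root block and one checksum longword per bitmap block:

Code:
/* Rough estimate of how much storage the original bitmap scheme could
   cover; the constants below are the classic 512-byte block layout. */
#include <stdio.h>

int main(void)
{
    const unsigned long block_size       = 512;
    const unsigned long bitmap_pointers  = 25;                    /* bm_pages[] slots in the root block */
    const unsigned long words_per_bitmap = (block_size / 4) - 1;  /* 127 map longwords per bitmap block */
    const unsigned long blocks_covered   = bitmap_pointers * words_per_bitmap * 32;
    const unsigned long bytes_covered    = blocks_covered * block_size;

    printf("blocks covered: %lu\n", blocks_covered);
    printf("bytes covered:  %lu (about %lu MBytes)\n",
           bytes_covered, bytes_covered / (1024UL * 1024UL));
    return 0;
}

That works out to roughly 50 MBytes, which is just where the trouble with the additional bitmap blocks started.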

From the inside, the file system code is quite clever. The BCPL version (the original file system) even performs its own "multithreading" through coroutines. The FFS (written entirely in assembly language) is basically an asynchronous event-driven automaton. The actual processing overhead is very, very low.

What kills the mood is that the OFS/FFS has to read and process so much data to get anything done. Every algorithm used by that design is the simplest you could imagine. Singly-linked lists are the workhorse of all data structures on disk and in memory. Basically, everything you need from a file system the OFS/FFS does by brute force using O(n^2) complexity algorithms.
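
To make the brute force point concrete, here is a simplified in-memory model (just an illustration of the shape of the data structure, not the real FFS code or on-disk layout): with a singly-linked chain of block pointer tables, finding the block which holds a given file position means walking the chain from the front, every single time, and visiting all n blocks of a file that way adds up to O(n^2) block reads.

Code:
/* Simplified model of a singly-linked chain of block pointer tables,
   roughly the shape the OFS/FFS uses for large files. Illustration
   only; this is not the actual on-disk format. */
#include <stdio.h>

#define POINTERS_PER_TABLE 72   /* pointer slots in a 512-byte header block */

struct block_table {
    long                data_blocks[POINTERS_PER_TABLE];
    struct block_table *next;   /* singly-linked: no index, no shortcuts */
};

/* Finding the data block for byte offset 'pos' means hopping along the
   chain from the first table; each hop costs one more block read. */
static long block_for_offset(const struct block_table *table,
                             unsigned long pos, unsigned long block_size)
{
    unsigned long index = pos / block_size;

    while (table != NULL && index >= POINTERS_PER_TABLE) {
        index -= POINTERS_PER_TABLE;
        table  = table->next;
    }
    return (table != NULL) ? table->data_blocks[index] : -1;
}

int main(void)
{
    /* Build a three-table chain describing a 216-block file. */
    struct block_table t[3];

    for (int i = 0; i < 3; i++) {
        for (int j = 0; j < POINTERS_PER_TABLE; j++)
            t[i].data_blocks[j] = 1000 + i * POINTERS_PER_TABLE + j;
        t[i].next = (i < 2) ? &t[i + 1] : NULL;
    }
    printf("offset 100000 -> disk block %ld\n",
           block_for_offset(&t[0], 100000, 512));
    return 0;
}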

Each chunk of metadata on the disk is stored along with almost 90% redundant chaff which the file system has to read, process and usually pad and write back to disk. There are no strategies for keeping management, layout and retrieval of information under control.

None of that mattered back in the late 1970s and early 1980s because the storage devices were so slow and so small that their limited performance completely covered up the shortcomings of the file system design. Only around 1987/1988, when "larger" hard disks (well, 50 MBytes and larger) became available, did the need for the FFS enhancements become apparent. Even then the hard disks were so slow that their limited performance covered up the shortcomings of the file system design :(

The enhancements that went into the FFS and the DCFS were very modest tweaks to the on-disk data structures, but they yielded very noticeable performance improvements. Unfortunately, the changes never went so far as to work algorithmic improvements into the design, which would have really made a difference in reliability and performance. One such improvement would be adding journaling support to the FFS.
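
For what it's worth, the core idea behind journaling is simple enough to sketch. This is a toy model with made-up names, not a description of anything that ships: metadata updates go into a log area first, and only after the log has safely reached the disk are the blocks written in place, so that after a crash the log can be replayed instead of validating the entire volume.

Code:
/* Toy model of write-ahead journaling for metadata blocks. All names
   and sizes are invented for illustration. */
#include <stdio.h>
#include <string.h>

#define BLOCK_SIZE   512
#define NUM_BLOCKS   64
#define JOURNAL_SIZE 8

struct journal_entry {
    int           used;
    unsigned long block_number;
    unsigned char data[BLOCK_SIZE];
};

static unsigned char        disk[NUM_BLOCKS][BLOCK_SIZE]; /* stands in for the volume */
static struct journal_entry journal[JOURNAL_SIZE];        /* stands in for the log area */

/* Step 1: record the intended change in the journal first. */
static void journal_write(int slot, unsigned long block, const unsigned char *data)
{
    journal[slot].used         = 1;
    journal[slot].block_number = block;
    memcpy(journal[slot].data, data, BLOCK_SIZE);
}

/* Step 2: once the journal is safely on disk, apply the changes in
   place. After a crash the very same loop replays the journal instead
   of running a full validation pass over the volume. */
static void journal_commit(void)
{
    for (int i = 0; i < JOURNAL_SIZE; i++) {
        if (journal[i].used) {
            memcpy(disk[journal[i].block_number], journal[i].data, BLOCK_SIZE);
            journal[i].used = 0;
        }
    }
}

int main(void)
{
    unsigned char new_root[BLOCK_SIZE];

    memset(new_root, 0xAA, BLOCK_SIZE);
    journal_write(0, 0, new_root);  /* log the update to block 0 first */
    journal_commit();               /* then write it in place */
    printf("block 0 now starts with 0x%02X\n", disk[0][0]);
    return 0;
}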

Replacing the ubiquitous singly-linked lists in the FFS with extents, managing storage with search trees, and replacing directory entries with inodes and directory lists could make a real difference.
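
To illustrate what extents would buy (again just a sketch with invented names, not a proposal for an on-disk format): instead of storing one pointer per data block you store a start and a length, so a contiguously allocated file collapses into a handful of records, and mapping a file offset becomes a lookup in a short sorted table, or a search tree, instead of a walk along a chain.

Code:
/* One extent describes a contiguous run of blocks: an n-block file
   laid out contiguously needs one record instead of n pointers.
   Sketch only; this is not any actual on-disk format. */
#include <stdio.h>

struct extent {
    unsigned long file_block;   /* first file-relative block covered */
    unsigned long disk_block;   /* where the run starts on disk */
    unsigned long block_count;  /* length of the run */
};

/* Map a file-relative block number to a disk block by scanning a short,
   sorted extent table. With a search tree on top this becomes O(log n)
   instead of an O(n) walk along a singly-linked chain. */
static long lookup(const struct extent *table, int entries, unsigned long file_block)
{
    for (int i = 0; i < entries; i++) {
        if (file_block >= table[i].file_block &&
            file_block <  table[i].file_block + table[i].block_count)
        {
            return (long)(table[i].disk_block + (file_block - table[i].file_block));
        }
    }
    return -1;  /* a hole, or past the end of the file */
}

int main(void)
{
    /* A 20480-block (10 MBytes at 512 bytes per block) file in two runs. */
    const struct extent file[] = {
        {     0,  1000, 16384 },
        { 16384, 40000,  4096 },
    };

    printf("file block 20000 lives in disk block %ld\n",
           lookup(file, 2, 20000));
    return 0;
}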

All of this and journaling still wouldn't propel the FFS into the btrfs domain of performance and features, but then the FFS makes very modest demands on memory, which is something you can't say about "big iron" file systems ;)