Author Topic: Article: DCFS and LNFS Explained

Offline eliyahu (Topic starter)
Article: DCFS and LNFS Explained
« on: August 30, 2016, 05:47:40 PM »
Steven Solie just posted an update to the Hyperion developer blog letting us know about a new wiki article on FFS, DCFS, and LNFS. It's a good read....

Quote
The AmigaOS Fast File System (FFS) was created back in 1988 and, believe it or not, is still in use today by some die-hard enthusiasts who insist it is still pretty good. FFS has several modes which have enabled it to survive far past its expiry date. Two of those modes have never been described before. That is about to change.

Back in 1992, Randell Jesup added what is known as the Directory Caching File System (DCFS) mode to FFS. This was meant to speed up directory listings on floppy disks. A rarely used and mysterious mode with very little documentation.

Fast forward to 2001 when Olaf Barthel created a FFS reimplementation and added the Long Name File System (LNFS) mode. Up until this point users were stuck with 30 characters due to the original implementation.

Curious? Read all the gory details on the most complete and official AmigaOS Documentation Wiki.
There are some terrific docs on FFS from Olaf included with the AOS4 default install in the Documentation drawer, but additional details like these are always fun to read through.
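
(A side note on the 30-character limit the article mentions: the commonly published descriptions of the classic on-disk layout have the file name stored as a BCPL-style length-prefixed string in a fixed slot of the file header block, which is where the limit comes from. Below is a minimal sketch assuming that layout; the struct and field sizes are illustrative, not the actual FFS definitions, and the long-name variants relax the limit as the article explains.)

Code:
/* Illustrative sketch of a fixed, length-prefixed name slot like the one
 * usually described for the FFS file header block. Sizes are assumptions
 * for illustration, not the real on-disk definitions.
 */
#include <stdio.h>
#include <string.h>

#define NAME_FIELD 31   /* 1 length byte + up to 30 characters */

struct name_field {
    unsigned char len;
    char          text[NAME_FIELD - 1];  /* no NUL terminator on disk */
};

static int store_name(struct name_field *f, const char *name)
{
    size_t n = strlen(name);
    if (n > sizeof(f->text))
        return -1;                 /* too long for the classic name slot */
    f->len = (unsigned char)n;
    memcpy(f->text, name, n);
    return 0;
}

int main(void)
{
    struct name_field f;
    printf("fits: %d\n", store_name(&f, "startup-sequence"));
    printf("fits: %d\n", store_name(&f, "a-name-considerably-longer-than-thirty-chars"));
    return 0;
}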

-- eliyahu
"How do you know I’m mad?" said Alice.
"You must be," said the Cat, "or you wouldn’t have come here."
 

Offline psxphill

Re: Article: DCFS and LNFS Explained
« Reply #1 on: August 30, 2016, 08:17:24 PM »
Quote from: eliyahu;813233
There are some terrific docs on FFS from Olaf included with the AOS4 default install in the Documentation drawer, but additional details like these are always fun to read through.

Fun to read, but ex-Commodore employees have said they only developed new standards where existing ones weren't good enough. So I think that if Commodore were still around, they would have switched to btrfs. A 68000 RDB version of btrfs would be awesome.
 

Offline krashan
Re: Article: DCFS and LNFS Explained
« Reply #2 on: September 01, 2016, 07:29:26 AM »
Quote from: psxphill;813238
A 68000 RDB version of btrfs would be awesome.
And slow as hell.

Offline olsen

Re: Article: DCFS and LNFS Explained
« Reply #3 on: September 01, 2016, 05:45:31 PM »
Quote from: Krashan;813330
And slow as hell.


Well... I wouldn't want to conclude from the performance of the FFS (or rather, the lack of performance) that Amiga file systems have to be slow.

The fundamental design of the OFS/FFS on-disk data structures limits the scalability of that particular file system architecture. It was never intended to work with storage devices much larger than 20 MBytes, and it never even worked correctly with devices larger than about 50 MBytes (there was a bug: the file system didn't know that it could only manage a fixed number of bitmap blocks, and could therefore corrupt its root block by mistake).
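
As a back-of-the-envelope check of that ~50 MByte figure: assuming the classic 512-byte block size, a bitmap block consisting of one checksum longword plus map bits, and 25 bitmap block pointers in the root block (these field counts come from the commonly published layout descriptions, so treat them as assumptions), the arithmetic lands right around 50 MBytes:

Code:
/* Rough capacity estimate for FFS without bitmap extension blocks,
 * assuming 512-byte blocks and the usually described field counts.
 */
#include <stdio.h>

int main(void)
{
    const long block_size       = 512;                        /* bytes per block          */
    const long longs_per_block  = block_size / 4;              /* 128 longwords            */
    const long bits_per_bitmap  = (longs_per_block - 1) * 32;  /* minus checksum: 4064     */
    const long root_bitmap_ptrs = 25;                          /* bitmap pointers in root  */

    long blocks_covered = root_bitmap_ptrs * bits_per_bitmap;  /* 101600 blocks            */
    long bytes_covered  = blocks_covered * block_size;         /* ~52 million bytes        */

    printf("blocks covered by root bitmap pointers: %ld\n", blocks_covered);
    printf("capacity before extension blocks: ~%ld MB\n",
           bytes_covered / (1024 * 1024));
    return 0;
}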

From the inside, the file system code is quite clever. The BCPL version (the original file system) even performs its own "multithreading" through coroutines. The FFS (written entirely in assembly language) is basically an asynchronous event-driven automaton. The actual processing overhead is very, very low.

What kills the mood is that the OFS/FFS has to read and process so much data to get anything done. Every algorithm used by that design is the simplest you could imagine. Singly-linked lists are the workhorse of all data structures on disk and in memory. Basically, everything you need from a file system, the OFS/FFS does by brute force, using algorithms of O(n^2) complexity.
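
To make the brute-force point concrete, here is a deliberately simplified sketch in plain C (not actual FFS code) of the kind of singly-linked chain walk described above: one linear scan per lookup, so touching all n entries this way costs on the order of n^2 block reads.

Code:
/* Simplified model of a singly-linked chain of directory entry headers.
 * Each lookup walks the chain one "block read" at a time; doing that for
 * every one of n entries degenerates to O(n^2) work overall.
 */
#include <stdio.h>
#include <string.h>

struct header_block {                /* stand-in for an on-disk file header */
    char                 name[31];   /* classic 30-character name + NUL     */
    struct header_block *hash_chain; /* next header in the same chain       */
};

/* One lookup: linear walk of the chain, comparing names as we go. */
static struct header_block *lookup(struct header_block *chain, const char *name)
{
    for (; chain != NULL; chain = chain->hash_chain)
        if (strcmp(chain->name, name) == 0)
            return chain;
    return NULL;
}

int main(void)
{
    struct header_block c = { "readme",           NULL };
    struct header_block b = { "s",                &c   };
    struct header_block a = { "startup-sequence", &b   };

    /* Finding every entry by name re-walks the chain each time:
     * n lookups * O(n) hops each = O(n^2). */
    const char *names[] = { "startup-sequence", "s", "readme" };
    for (size_t i = 0; i < 3; i++)
        printf("%s -> %s\n", names[i],
               lookup(&a, names[i]) ? "found" : "missing");
    return 0;
}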

Each chunk of metadata on the disk is stored along with almost 90% redundant chaff, which the file system has to read, process and usually pad and write back to disk. There are no strategies for keeping the management, layout and retrieval of information under control.

None of that mattered back in the late 1970s and early 1980s because the storage devices were so slow and so small that their limited performance completely covered up the shortcomings of the file system design. Only around 1987/1988, when "larger" hard disks (well, 50 MBytes and larger) became available, did the need for the FFS enhancements become apparent. Even then the hard disks were so slow that their limited performance covered up the shortcomings of the file system design :(

The FFS and DCFS enhancements were very modest tweaks to the on-disk data structures, but they yielded very noticeable performance improvements. Unfortunately, the changes never went so far as to work algorithmic improvements into the design, which would have really made a difference in reliability and performance. It would still be possible to do that, for example by adding journaling support to the FFS.

Replacing the ubiquitous singly-linked lists in the FFS with extents, managing storage with search trees, and replacing directory entries with inodes and directory lists could make a real difference.
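
For illustration, here is a hypothetical sketch (not an existing AmigaOS structure) of what an extent buys you: one (start, count) pair can stand in for the hundreds of individual block pointers that FFS would otherwise have to store and chase.

Code:
/* Hypothetical extent record: describes a contiguous run of blocks with
 * one small pair instead of one pointer per 512-byte data block.
 */
#include <stdio.h>

struct extent {
    unsigned long first_block;  /* first block of the contiguous run */
    unsigned long block_count;  /* number of blocks in the run       */
};

int main(void)
{
    const unsigned long block_size = 512;
    const unsigned long file_size  = 1024 * 1024;             /* 1 MB file    */
    const unsigned long blocks     = file_size / block_size;  /* 2048 blocks  */

    /* Per-block pointers: one pointer per data block, spread across the
     * file header and extension blocks. */
    printf("per-block pointers needed: %lu\n", blocks);

    /* Extents: if the file happens to lie in, say, 4 contiguous runs,
     * 4 small records describe the whole thing. */
    struct extent runs[] = {
        { 1000, 512 }, { 2000, 512 }, { 4000, 512 }, { 8000, 512 }
    };
    printf("extent records needed: %lu\n",
           (unsigned long)(sizeof(runs) / sizeof(runs[0])));
    return 0;
}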

All of this, plus journaling, still wouldn't propel the FFS into the btrfs domain of performance and features, but then the FFS makes very few demands on memory, which is something you can't say about "big iron" file systems ;)