
Author Topic: Hyperion announces OS 3.1 update  (Read 90802 times)


Offline olsen

Re: Hyperion announces OS 3.1 update
« Reply #29 from previous page: December 31, 2017, 11:31:58 PM »
Quote from: LoadWB;834548
Sun implemented journaling in UFS in Solaris 7.  I am curious how it did so but I have never spent the time to dig up the information.

If I remember correctly, the NetBSD WAPBL feature did not require the file system to cooperate much. You could implement it as an abstraction layer on top of the low-level block access. It was probably intended to deliver a more optimized, higher-performance journaling implementation.

Quote
I liked it, but I think I enjoyed the first season more for Brian Blessed.

Now that you mention it... It's hard to think of Brian Blessed not making the kind of contribution which elevates the end result far beyond what it might otherwise have deserved.
 

Offline olsen

Re: Hyperion announces OS 3.1 update
« Reply #30 on: January 03, 2018, 03:39:58 PM »
Quote from: Gulliver;834620
@Thomas Richter
@olsen

A journaling filesystem would certainly improve reliability.

But why not better kill more birds with one stone?

A versioning filesystem would be a much better overall solution, because not only would it allow the user to roll back to previous versions of files in case of system corruption, but if the filesystem does the work twice (committing to disk), you at least do the job completely and not halfway as in a journaled filesystem, and you also keep the benefit of having an older backup automatically.

The drawback is, of course, increased disk space usage, but is that really a problem on an Amiga system, where files are really small and storage media keep getting bigger each day? And of course, you can also provide some algorithm for garbage collection, to free some space when the amount of backup data exceeds a certain threshold.


I don't quite see this yet. There is not a lot of wiggle room within the constraints which the FFS on-disk data structures impose upon you. With a log of changes growing as write operations take place, you will have to deal more and more with managing disk space. This is where this file system design is really weak. There are no data structures to assist in making optimal storage allocations under certain constraints (e.g. allocate closest to the tail end of the file, allocate closest to the file header block, pick the largest consecutive chunk of free space, etc.). As far as the FFS is concerned, the storage space is broken down into two big sets of almost identical size, with no block preferable to any other.

If I remember correctly how log-structured file systems work, I can see how they are more attractive for use with SSDs. You rarely rewrite anything, if at all. The downside is that you have to deal with accessing the versions (through the existing file system API? we can't conveniently upgrade dos.library), representing the available storage space (minus the versioned data which could be reclaimed) and with scrubbing the log. You have to have all three, and each of these is quite challenging to implement.

By comparison, journaling with write-ahead logging would be much simpler to implement. It could be integrated into the metadata block access operations and the remainder of the file system, its APIs and data structures would remain unchanged.
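To make the idea concrete, here is a minimal write-ahead-logging sketch for metadata block updates. Everything in it (the names `wal_begin`, `wal_log_block`, `wal_commit`, the sizes, the in-memory "disk") is made up for illustration and is not olsen's actual design; a real journal would live in reserved on-disk blocks. The key property it demonstrates: updates are recorded in the log first, and only after the commit marker is durable are the blocks copied to their home locations, so a crash either replays a complete transaction or discards an incomplete one.

```c
/* Hypothetical write-ahead-logging sketch; not AmigaOS code. */
#include <string.h>

#define BLOCK_SIZE   512
#define LOG_CAPACITY 16
#define DISK_BLOCKS  16

static unsigned char disk[DISK_BLOCKS][BLOCK_SIZE]; /* toy in-memory "disk" */

struct log_entry {
    unsigned long block_number;     /* home location of the block */
    unsigned char data[BLOCK_SIZE]; /* new block contents */
};

struct journal {
    struct log_entry entries[LOG_CAPACITY];
    int count;      /* entries in the current transaction */
    int committed;  /* set once the commit marker is durable */
};

/* Start a new transaction: the log is empty and not yet committed. */
void wal_begin(struct journal *j)
{
    j->count = 0;
    j->committed = 0;
}

/* Record a metadata block update in the log; nothing is written home yet. */
int wal_log_block(struct journal *j, unsigned long block_number,
                  const unsigned char *data)
{
    if (j->count >= LOG_CAPACITY)
        return -1;                  /* log full, caller must commit first */
    j->entries[j->count].block_number = block_number;
    memcpy(j->entries[j->count].data, data, BLOCK_SIZE);
    j->count++;
    return 0;
}

/* Commit: once the marker is down, the updates will survive a crash;
 * only then are the logged blocks copied to their home locations. */
void wal_commit(struct journal *j)
{
    int i;
    j->committed = 1;               /* the point of no return */
    for (i = 0; i < j->count; i++)
        memcpy(disk[j->entries[i].block_number],
               j->entries[i].data, BLOCK_SIZE);
    j->count = 0;                   /* log space can be reused */
}
```

Note how the file system proper never sees any of this: as described above, the logging can sit entirely inside the metadata block write path.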

I'd prefer to tinker with something that I understand up to a point, and which is extensible, rather than invent a big bag of new stuff, all of which would need a lot more testing just to make sure that it works as intended, including how it deals with defects.
 

Offline olsen

Re: Hyperion announces OS 3.1 update
« Reply #31 on: January 21, 2018, 01:29:14 PM »
Quote from: gregthecanuck;835247
On a related note, I would like to add.....

Stefan "bebbo" Franke continues to work on his 68K gcc6 port. He appears to be getting close to a working compiler. At the moment he is working on debugging his code optimizer.
 
There are a couple of developers filing some good bug reports and helping to iron out the last few issues.
 
I think now is a good time for some donations to keep Stefan motivated. He has mentioned he would like to get a working debugger as well (gdb) which would be very good.
 
Come on folks, pull out some spare change. A good base gcc compiler is important for many reasons. Let's give him some more motivation to keep going. I have done my part for now.
 
His PayPal donation link is part way down this page: https://github.com/bebbo/amigaos-cross-toolchain
 
Let's keep the party going!  :)


I for one would really, really like to see a decent source-level debugger become available. While we can make do with SAS/C and the CodeProbe debugger, if it's going to be GCC, there has to be a debugger to go along with it.
 

Offline olsen

Re: Hyperion announces OS 3.1 update
« Reply #32 on: January 22, 2018, 11:50:40 AM »
Quote from: LoadWB;835255
FANTASTIC!  Indeed, thank you, Olaf.  You know we all want to hear about it.


Bit of a dry subject, I suppose... Well, somebody asked for it ;)

The new Disk Doctor is the result of some hairy research into how the original version worked and why it failed. Commodore stopped shipping it with Workbench 2.1, and since then no file system repair or data recovery tool has been included with subsequent Workbench versions for Amigas with 68000 family CPUs.

The new Disk Doctor should solve a couple of thorny problems. Volumes can be much larger than they used to be in the 1980s and 1990s. Not only do you have to deal with storage media much larger than 4 Gigabytes, the number of files, directories and associated management data structures which have to be analyzed has grown, and their contents need to be kept in memory. Memory constraints are the biggest problem here, actually. The original Disk Doctor needed about 1.5% of the volume size as working memory (RAM). For example, in order to "repair" a 20 Megabyte hard disk partition, you would have to have at least 330 Kilobytes of free RAM available. This would not fly on the original Amiga 500/1000/2000. Now imagine how the math would work out for a 1 Gigabyte hard disk partition. For the new Disk Doctor I developed a special type of data structure which lowers the memory requirements to around 0.1% of the volume size. Which means that about 1 Megabyte may be sufficient to deal with a 1 Gigabyte partition, and 8 Megabytes for an 8 Gigabyte partition. At least, this is what testing has revealed so far.

Unlike the original Disk Doctor, the new version does not currently modify the contents of the volume. It only does two things: 1) examine the contents of the volume, looking for defects and 2) copy the contents of the directory tree to a different volume. It does what Dave Haynie's DiskSalv program did 32 years ago, but of course it does a lot more than that ;)

The "examination" begins by looking into every single block of the volume, taking note of the contents, the type of data found, damage to the contents. This is followed by another pass to figure out what files, directories, hard links and soft links exist and can be reached through the root directory (if there is one). Finally, this information is tied together so that one can tell which files, directories, etc. are still "sound" and undamaged, which ones are deleted, damaged or "orphaned", i.e. have no valid parent directories.

This information can be stored in a "disk and block information" (DABI) database file (actually, it's just a gzip-compressed plain text file) which might just become useful later. Instead of rerunning the examination (which takes quite a while to complete on a large volume), you can reuse the information gathered later.

You can use the new Disk Doctor just to check if a volume is in good shape, and not bother with the DABI files or the copying operation. But there's a lot more you can do. For example, you could use the DABI file to have Disk Doctor show you what it would have copied and then narrow down the set of files to copy through very simple filter restrictions (e.g. copy only "sound" files, copy only damaged files, copy only deleted files, copy only files matching a wildcard pattern). The copy process itself works very much like a "copy all clone #? .." command would, preserving all the properties (comment, user/group ID, modification time) of the original directory entries. When damaged files are copied, the damaged sections are skipped. The copy will retain the entire directory tree structure, if possible.

The new Disk Doctor takes a very thorough look at the state of the volume and its data structures. This includes, for example, making sure that all directories are consistent. It's possible for directory entries to show up when you list the directory contents, yet remain inaccessible when you try to open or delete them. The linkage information underlying hard links to directories and files is validated, too. Cycles in the many list data structures which the file system uses are detected. The root directory and its associated data structures (e.g. the bookkeeping information on what blocks are still available for use) are examined, too, which covers the bitmap extension block information. The extension blocks were added in 1987 with the introduction of the FFS. Their "Achilles heel" is the lack of a checksum which would make detection of corruption easier. The directory cache lists which give the directory cache (DCFS) file system variant its name are validated, too, and any differences found between a directory and the cache contents are recorded.

All this information is in part intended for a repair operation which is currently not implemented. A repair strategy would be needed first, and I have yet to come up with one. The more I learned about what can go wrong the more alternatives to dealing with the defects revealed themselves. You may be able to correct the smaller problems, such as restoring the consistency of directories, but if the root directory is damaged, how do you rebuild its bookkeeping information without destroying other data that might still be recoverable? So, after four months of work, this is still a research project, I'm afraid.

Finally, in case you wondered, the new Disk Doctor supports all Amiga file system variants which stem from the 1986 ROM file system and the 1987 Fast file system. This includes the international mode (introduced with Workbench 2.1), the directory cache mode (introduced with Kickstart 3.0) and the long name mode (introduced with AmigaOS 4). Hard links and soft links are supported (currently only soft links are restored by the copy operation, restoring the hard links still needs work). Volume block sizes of 512..65535 bytes per block are supported, too.

I can't promise you that using the new Disk Doctor will be an enjoyable experience (ha!), but it should be a whole lot less exciting than it used to be with the old Disk Doctor. If you need a tool to check if your volumes are sound and in good shape, and which will help you to recover data from them when you really need it, the new Disk Doctor should get the job done.
 

Offline olsen

Re: Hyperion announces OS 3.1 update
« Reply #33 on: January 23, 2018, 03:37:32 PM »
Quote from: psxphill;835286
When I've done similar data recovery, I've ended up using a "what if" algorithm that tries multiple things when there is ambiguity and picks the most consistent one.
It's a load of planning and a load of code. You shouldn't aim for perfect recovery in all cases, though, because it's impossible. Getting consistency back means you don't have to choose between reformatting and losing more data when the corruption upsets the file system.


Thank you, this is good advice. I do tend to spend a lot of time researching a problem and eventually overengineering a solution (why not? I don't like to return to the same project over and over again to fix issues I could have caught at the research & design stage). The simplest solution that works might just be the ticket here.

Hard to believe, but it appears that the original Disk Doctor's purpose was only to get the file system into a state which allowed its contents to be copied to a different medium, using the tools available at the time (the "Copy" command, for example). It did just enough to make the basic file system structures work again, even rebuilding the consistency of directories. Broken files, etc. remained broken, sometimes becoming even more smashed if you reran Disk Doctor. It was all too tempting to view this process as a repair operation, but it never really was one. If you only had one single disk drive, though, you had to hope for the volume to be "repaired", because you could not easily copy its contents to a separate volume (i.e. switch disks for each single file copied).

From that perspective the new Disk Doctor can already deliver what the original command tried to make possible: copy/salvage data from a compromised, if not defective medium.

If the new Disk Doctor is to repair a volume, it should be able to deliver a consistently readable and writable file system. The simplest working approach to make this happen would be to remove all the damaged contents and to rebuild both the root directory and its bookkeeping data structures. I reckon that this is doable without overengineering the solution too much ;)

Because the new Disk Doctor already supports a "preview" feature, it could show which files and directories would be cut. The user could then decide which data to copy before letting the new Disk Doctor make changes which would then result in loss of data.
 

Offline olsen

Re: Hyperion announces OS 3.1 update
« Reply #34 on: March 26, 2018, 02:24:45 PM »
Quote from: kolla;837844
I wonder how wise that is, I believe I have seen installer scripts use C:Edit, for altering startup-sequence and other things.

Because I made a mistake when I wrote the Roadshow installation script, I thought I could put the "Edit" command to good use in order to fix the problem I had created. The "Installer" tool has only very limited string/file manipulation functions, so "Edit" seemed like an option to try. However, it turned out that the V36 "Edit" command could not be used, because the editing commands I needed either did not work at all or crashed the command.

The "Edit" command which shipped with Workbench 2.0-3.1 suffered from several bugs and limitations which rendered it mostly unsafe for use. For example, the DTA, SA and SB commands never worked at all, the WIDTH and PREVIOUS arguments didn't work either, lines longer than 120 characters could cause Edit to slip up badly and path names longer than 120 characters would trash memory before even loading the respective file.

When I got curious after I ran into the "Edit" bugs I took the plunge and fixed the bugs, using the original BCPL implementation as a reference. The updated version is available in a form usable within the context of Workbench 3.1.4. As far as I can tell it's now on the same level as the Workbench 1.3 version (last updated in 1985) and "only" suffers from the same design limitations.

However, this is a Catch-22 situation bordering on satire: if "Edit" saw limited use because it was rarely safe to use in the first place, replacing it with a version which actually does what it's supposed to will accomplish exactly what? :(
« Last Edit: March 26, 2018, 02:33:06 PM by olsen »
 

Offline olsen

Re: Hyperion announces OS 3.1 update
« Reply #35 on: March 27, 2018, 12:58:32 PM »
Quote from: kolla;837920
I would say that depends entirely on whether it is CBM or post CBM, but whatever - doesn't look like anything of the v42 stuff from CBM ever went to H&P's OS3.5.


That is correct. Work on V42 stopped while development was still ongoing, and we were in no position to evaluate whether its state was preferable to that of the V40 code. Even the V40 code was still being worked on when Commodore went out of business. So we stuck with the V40 code.

Quote
So who has rights/ownership to the CBM v42 code?

Does it matter? As far as the shell commands, etc. are concerned, a rewrite based upon the V40 code isn't such a demanding task if you wanted to implement the same functionality or even something better. The shell commands were the "low-hanging fruit" anyway. Just look at how the "Sort" command does its job: it can be improved merely by randomizing the order in which the individual lines are added to the list. With very few exceptions, none of these commands had been updated since 1990.
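To illustrate the "Sort" remark with a sketch (this is not the actual command's source; the code and names are made up): with a naive quicksort that picks the first element as pivot, already-sorted input is the worst case, so randomizing the order in which the lines go in sidesteps exactly that.

```c
/* Illustrative only: naive quicksort plus a Fisher-Yates shuffle.
 * Feeding sorted input to this quicksort costs O(n^2) comparisons;
 * shuffling the input first makes that worst case unlikely. */
#include <stdlib.h>

static void swap(int *a, int *b) { int t = *a; *a = *b; *b = t; }

/* Naive quicksort with the first element as pivot (Lomuto partition). */
void quicksort(int *v, int lo, int hi)
{
    int pivot, i, last;
    if (lo >= hi)
        return;
    pivot = v[lo];
    last = lo;
    for (i = lo + 1; i <= hi; i++)
        if (v[i] < pivot)
            swap(&v[++last], &v[i]);
    swap(&v[lo], &v[last]);
    quicksort(v, lo, last - 1);
    quicksort(v, last + 1, hi);
}

/* Fisher-Yates shuffle: "randomizing the order in which the
 * individual lines are added to the list". */
void shuffle(int *v, int n)
{
    int i;
    for (i = n - 1; i > 0; i--)
        swap(&v[i], &v[rand() % (i + 1)]);
}
```

The same applies to sorted lists of lines; integers just keep the sketch short.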
 

Offline olsen

Re: Hyperion announces OS 3.1 update
« Reply #36 on: March 27, 2018, 01:05:19 PM »
Quote from: kolla;837921
This is AmigaOS, what's this "safe to use" thing you speak of? :laughing:

How about lack of constant embarrassment as the next best thing? ;) Much of the code which received the long overdue upgrade in Workbench 2.0 was never updated again, its limitations and failures known for a very long time indeed. Not much happened here in the next 3-4 years, and software such as "HDToolBox" and "prodprep" arguably became less stable and robust as development "progressed" on them.

Quote
I think it is great that Edit has been worked on, a standard scriptable editor is in my opinions quite important and useful for the OS. If it now is more stable and reliable, then the worst that can happen is that it will be used more often.

One small problem remains to be resolved: how does a script file figure out if it's using the more well-behaved "Edit" command or its more embarrassing predecessor?
 

Offline olsen

Re: Hyperion announces OS 3.1 update
« Reply #37 on: March 28, 2018, 09:37:56 AM »
Quote from: kolla;837955
Well, playing "roborally" in a text file using arexx isn't exactly hunky-dory either. In addition, it requires rexxmast to be running.

Yes, that's the uncomfortable truth: the "Edit" command is completely self-contained, free of dependencies, and ought to be the smallest solution to the problem (provided it doesn't crash and the editing commands you need are actually implemented and don't cause trouble).

String manipulation is one of the core features of the REXX language, so ARexx should be the more powerful tool. The Installer program can launch ARexx scripts through the REXX command, but it requires the RexxMast program to be already running, of course.

Quote
OS4 comes with an edit version 53.4, what about that?

That's the version which I mentioned, and which I recently backported.
 

Offline olsen

Re: Hyperion announces OS 3.1 update
« Reply #38 on: April 20, 2018, 11:38:01 AM »
Quote from: Orphan264;838620
I have *really* enjoyed reading this thread. Any further updates to share? :)

Um, since you've asked: The major contribution I am making is the new Disk Doctor, and the "third leg" of the feature set (1. Examine the volume to gather information for recovery, 2. Recover data from the volume, 3. Repair the volume) has finally taken shape.

At the moment I am tinkering with the repair process, and this is slow going. Part of the challenge is in figuring out how to deal with damage. The current plan is, to keep things simple for a first test release, to restore the volume to full working condition. This approach has side-effects, because it may mean deleting corrupted files, links and drawers from the volume altogether. Note that the affected data still remains on the volume, it's just that you can no longer access it once the repair process has concluded. Due to how the new Disk Doctor works, you will still be able to recover the data deleted by the repair process from the volume later if you change your mind.

Another challenge here is to find real world examples of damaged volumes which the recovery/repair operation can be tested with. Disk Doctor can simulate media damage as part of its test rig, but there's nothing like the real thing.

Because repairing the volume is already sufficiently complicated as it is, I made a decision on how to deal with DCFS (directory cache mode) volumes. For simplicity, the repair process will convert them to the respective international mode OFS/FFS format. This can be done without loss of data and is almost instant. It saves the time to rebuild all the affected directory caches, which for DCFS volumes easily takes up 50% of the total repair time.

Other repair tools left the job of rebuilding the directory cache and the block allocation information to the file system validator, which was never a popular choice. It always took too long, you never knew when it was finished, and if the repairs were insufficient the validation process could get stuck, even crash. The new Disk Doctor will, if repairs are possible, always leave the volume in a proper, validated state.

Disk Doctor aside, I have backported a few more shell commands to Workbench 3.1.4 in the hope that they may prove useful. This includes the "Sort" command, for example, which is interesting because it does not use the same sorting algorithm as the current Workbench 3.1.4 version (which is essentially an implementation of Quicksort, performed on a list).

Fun fact (for those who are into algorithms and not easily bored): the original Workbench 1.x "Sort" command would read a file one line at a time and then stick that line into a binary tree, making no attempt to balance the tree. The sort operation then involved walking through the tree "in-order". Whatever could go wrong?
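The Workbench 1.x approach can be sketched like this (illustrative code, all names made up; not the original source): insert each line into an unbalanced binary tree, then walk it in-order. What could go wrong is that already-sorted input degenerates the tree into a linked list, turning insertion into O(n^2) comparisons.

```c
/* Hypothetical sketch of an unbalanced binary-tree "sort". */
#include <stdlib.h>
#include <string.h>

struct node {
    const char *line;
    struct node *left, *right;
};

/* Insert a line; no rebalancing whatsoever, just like the old command. */
struct node *tree_insert(struct node *root, const char *line)
{
    struct node *n = malloc(sizeof *n);
    struct node **slot = &root;
    n->line = line;
    n->left = n->right = NULL;
    while (*slot != NULL)                      /* descend to a free slot */
        slot = (strcmp(line, (*slot)->line) < 0)
             ? &(*slot)->left : &(*slot)->right;
    *slot = n;
    return root;
}

/* In-order walk copies the lines, now sorted, into out[]; returns the
 * next free index, so the final return value is the line count. */
int tree_walk(const struct node *root, const char **out, int index)
{
    if (root != NULL) {
        index = tree_walk(root->left, out, index);
        out[index++] = root->line;
        index = tree_walk(root->right, out, index);
    }
    return index;
}
```

A balanced insertion scheme, such as the one the ASL file requester uses, keeps the tree shallow regardless of input order, which is what the backported command brings back.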

The backported "Sort" command brings back the binary tree idea, this time using the same algorithm which the ASL file requester and the Workbench use to keep a list sorted at all times, and in real time while it is being filled with data (without using recursion). In my tests I found that the new algorithm made the "Sort" command twice as fast.
 

Offline olsen

Re: Hyperion announces OS 3.1 update
« Reply #39 on: April 21, 2018, 12:20:44 PM »
Quote from: nyteschayde;838641
Me too! I really wish more Amiga development worked this way. Even better a github link would be nice. Too bad getting github source to an Amiga is hard enough without git support, let alone porting git.

I feel like there needs to be better source control on the Amiga. I think there is a SVN client for accelerated, memory capable, Amigas.


Ported by yours truly, this is still version 1.1.4, and it badly needs an update to handle TLS 1.2, now that nobody really deploys SSL/TLS 1.0/TLS 1.1 any more. With TLS 1.2 support you could access GitHub again. This is on my ever-growing TODO list...
 

Offline olsen

Re: Hyperion announces OS 3.1 update
« Reply #40 on: April 21, 2018, 12:24:02 PM »
Quote from: Orphan264;838643
Perhaps you mean Subversion (http://aminet.net/package/dev/misc/subversion-1.1.4)
I have used it a little, but only against local repositories.


We have been using that SVN client for AmigaOS development since around 2008, when the CVS repository of AmigaOS4 was eventually migrated (which was quite the adventure -- this is when I found out how many corrupted RCS files were part of the Amiga operating system source code tree, owing to ancient bugs in the RCS tools).
 

Offline olsen

Re: Hyperion announces OS 3.1 update
« Reply #41 on: April 21, 2018, 12:36:39 PM »
Quote from: kolla;838649
Not ironic at all :)


Careful, the irony in this has so many layers, it might actually be recursive by now, and nobody noticed.

Back in the 1990s, when I was working on the AmigaOS 3.1 build, the foundation for the development work was a set of RCS files which eventually came together in a single CVS repository through a tedious and thoroughly grinding process of massaging the files until the CVS server stopped choking on them. Building the operating system on my Amiga 3000UX took hours, but it worked. Turnaround times for fixing build/runtime errors were epically bad (but then the build times for intuition.library under NetBSD were even worse).

Things changed by the time Amithlon came around, which cut the build time to under an hour. When I upgraded my old laptop, I discovered that I could run WinUAE on it, and with the new JIT the whole setup was faster and more convenient than the Linux setup I was struggling with. While I still had an A4000T at the office, the combination laptop+WinUAE made me much more productive.

There haven't been that many changes since then. The build tools are still all 68k (no cross-compilation is involved), but now they either run within WinUAE or through the vamos emulation layer. The major limitations of building everything natively on 68k hardware were available memory and I/O bandwidth. Emulation and virtualization remove these boundaries. You still have to test the results on actual hardware, though (some things you cannot conveniently debug on emulated hardware; actual hardware will always manage to kick your butt and make you feel generally miserable).

The compounded irony is in that we still use Amiga-native tools to build the Amiga operating system, only that the tools are no longer running natively on Amiga hardware.
 

Offline olsen

Re: Hyperion announces OS 3.1 update
« Reply #42 on: April 22, 2018, 07:03:36 AM »
Quote from: kolla;838660
Wasn't AmigaOS built on other systems most of its lifespan anyways? :)

I think it's closer to 55:45 given the whole time span between 1983-1994 (assuming that the operating system took shape around 1983 and not 1984).

At Commodore they used Sun 2 and Sun 3 machines for development, but I don't remember what was in use at Hi Toro/Amiga: there was a reason why the Sun workstations were preferred, and it may have had something to do with how expensive they were.

Native Amiga development tools continued to improve through 1987/1988, and work on what became Kickstart 2.0 did use the native Lattice 'C' compiler, several versions of which were used during the development of the subsequent 2.04, 2.1, 3.0 and 3.1 operating system versions. The CDTV project most prominently used the Aztec 'C' compiler.

It could have been Commodore's penny pinching which drove the developers to native tools, but the improved performance of Amiga hardware, along with the better quality of the native development tools did yield better quality software. A whole battery of tools for QA work was created specifically for Kickstart/Workbench 2.0 which the original operating system developers could not use (due to lack of MMUs, for example).

One downside of this move from cross-development to native development was that the ability to build the entire operating system on a single machine was lost. The local builds (still using makefiles) drifted apart, and by 1994 you could expect that no two operating system components were built the same way. In fact, you probably had to wrangle three different compilers and two different assemblers, and start the respective builds manually. While it was still used in production work (1989-1994), that "build process" must have produced, and led to the resolution of, integration problems on a major scale :(

Quote
What I find ironic here is that ThoR uses Linux, of all things, to get work done.
Actually, one of the reasons why this approach was picked was the idea of having a build server which at the end of the day would crank out a complete AmigaOS build, or stop and complain if this didn't work out ("the daily build & smoke test"). We couldn't conveniently do this with an Amiga, but with the vamos setup it became possible.

I'm assuming you've done your share of software development work. One of the most important aspects of software development productivity is being able to build and rebuild the product as quickly as possible. If you have to wait hours just to find an integration error, chances are that the whole development process will make you utterly miserable.

Eating your own dog food, and everything that building the Amiga operating system on an Amiga and testing it in that context brings with it, is all well and good, but there is no single tool which can do everything well. I'm accepting that a build done on any POSIX system through vamos is now part of the process of building the 68k Amiga operating system. Irony be damned ;)
« Last Edit: April 22, 2018, 07:05:44 AM by olsen »
 

Offline olsen

Re: Hyperion announces OS 3.1 update
« Reply #43 on: April 22, 2018, 11:44:58 AM »
Quote from: Kronos;838676
https://en.wikipedia.org/wiki/SAGE_Computer_Technology


Yes, SAGE was my first idea (the SAGE II and SAGE IV models), but I know next to nothing about the roles these machines played during development of the Amiga operating system precursor. If I remember correctly, they were suitable both for software development and for hardware prototyping, but less powerful than the workstations of their age. But: CP/M-68K as a software development system? I can't quite picture that.

I don't know how expensive the SAGE machines were in comparison to the IBM-PCs of the same time. What I do know is that the firmware for the Amiga keyboard processor was written in 6502 assembly language, translated into production code using an MS-DOS based assembler program.

It's very well possible that SAGE machines were in use all over the place during the Hi Toro/Amiga days (one of the company's founders mentioned this in an interview which I read), but it's also possible that other computers available more cheaply could have played an important part, too.

For example, when Electronic Arts committed to the Amiga, their developers would use MS-DOS for development work. The source code to "Prism" (precursor to "Deluxe Paint") shows that the MS-DOS Lattice 'C' compiler was used to build it. IBM-PCs of the time (1984/1985) were probably much cheaper than the Sun 2/3 machines available then.