Amiga.org
Amiga computer related discussion => Amiga Software Issues and Discussion => Topic started by: Amigaz on May 09, 2010, 05:30:26 PM
-
It was ages ago that I last installed an Amiga HDD with PFS3, so I've forgotten what the best block size is for this filesystem.
I plan to use PFS3 on a SCSI hdd hooked up to a Cyberstorm MKIII
The AmigaGuide docs for PFS3 say nothing about the block size
-
IIRC it should be set to 512... KB (?)
-
^ 512 bytes, I think that should be.
-
what the best block size is for this filesystem?
512 bytes
-
Thanks, guys :-)
-
IIRC, someone here said before that PFS3 ignores the block size and always uses 512 bytes?
-
Actually I had it set to a 1024 block size; I thought changing it to 512 would ruin my partitions, but it didn't :)
I had SFS on this HDD before; the speed increase when opening a drawer is very noticeable with PFS3
-
IIRC, someone here said before that PFS3 ignores the block size and always uses 512 bytes?
correct.
-
Actually I had it set to a 1024 block size; I thought changing it to 512 would ruin my partitions, but it didn't :)
I had SFS on this HDD before; the speed increase when opening a drawer is very noticeable with PFS3
Check with Icon Information (or any other tool) and you will see that it is using 512 bytes even if you set 1024 in HDToolbox (or whatever you used for partitioning).
-
512 bytes
Surely this really depends on the modal average size of files (say, those under 8KB) and the size the device is optimised for, rather than a stock answer?
On NTFS it is 4KB (IIRC), but that is wasteful for files smaller than that, as they'd be padded; then again, that doesn't matter so much now with large HD capacities. I would think most modern hard disks would also be optimised for 4K block transfers based on expected NTFS usage, and CF would probably be FAT16 (i.e. camera) optimised; have a look at this table: http://support.microsoft.com/kb/140365
Yes these are MS standards but I suspect most modern hardware developers would at least look at these during performance considerations.
We need someone to analyse the average file size of a modern Workbench distribution and the average block transfer performance of a collection of CFs and hard disks, and base the recommendation on that for each category of device... Processor I/O performance might be another bottleneck to consider...
[EDIT] Also: "However, a larger cluster size reduces bookkeeping overhead and fragmentation, which may improve reading and writing speed overall. Typical cluster sizes range from 1 sector (512 B) to 128 sectors (64 KiB)." (http://en.wikipedia.org/wiki/Data_cluster) [/EDIT]
[EDIT2] I'm referring to file systems in general; I'm using FFS-DC myself. [/EDIT2]
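To put rough numbers on the padding point, here's a minimal sketch of the slack-space arithmetic (the file count and average size are made up for illustration, not measured from a real Workbench install):
[code]
/* Slack-space sketch: how much padding each cluster size wastes
   for a given (hypothetical) average file size. */
#include <stdio.h>

int main(void)
{
    unsigned long avg_file = 1024;   /* assumed average file size, bytes */
    unsigned long files    = 10000;  /* assumed file count */
    unsigned long sizes[]  = { 512, 1024, 4096, 8192 };
    int i;

    for (i = 0; i < 4; i++) {
        unsigned long cluster  = sizes[i];
        /* round each file up to a whole number of clusters */
        unsigned long per_file = (avg_file + cluster - 1) / cluster * cluster;
        unsigned long slack    = (per_file - avg_file) * files;
        printf("cluster %5lu: %8lu KB of slack across %lu files\n",
               cluster, slack / 1024, files);
    }
    return 0;
}
[/code]
With these made-up numbers, 4K and 8K clusters waste roughly 30MB and 70MB respectively, which is the trade-off being discussed.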
-
Surely this really depends on the modal average size of files (say, those under 8KB) and the size the device is optimised for, rather than a stock answer?
No, it doesn't.
Note: I answered the question.
-
I had SFS on this HDD before; the speed increase when opening a drawer is very noticeable with PFS3
PFS3 (68060 version) is about 50% faster on average than SFS on my CSMK3 UltraSCSI. Deleting is several times faster. The SFS delete-slowness bug, the guru on cold boots with a really fast UltraSCSI drive, and the corrupted-partition bugs all seem to have disappeared.
-
No, it doesn't.
Note: I answered the question.
I think you might have missed my second edit; I was referring to file systems in general (in the context of Amiga file systems).
IIRC the Amiga has PIO IDE rather than DMA, so you'd expect performance benefits if the block size is optimal in terms of clock cycles, i.e. the size that minimises the number of cycles needed to request a block across the IDE bus and check it. A faster CPU should improve this, I expect, as it would allow more CPU cycles to be dedicated to I/O, assuming a modern drive with fast I/O.
Later 68K processors (IMHO) benefit I/O-wise from having a data cache (i.e. not the 68000/68020). The 68030's data cache is 256B, the '040's is 4K and the '060's is 8K, so IMHO an '040 upwards using PIO and a decent FS should (assuming not a lot of small files) see benefits from using a 1-4K block size.
Does this sound right (I'm not a hardware engineer)?
-
Where do you guys get PFS3 all of a sudden? Have I missed something? Has it been released already?
-
It will have been released with Amiga Future (http://www.amigafuture.de/kb.php?mode=article&k=3431) magazine by now, I guess, but PFS3 is kinda old, so some of us have actually been using it for several years.
-
'Fraid not (http://amiga.org/forums/showthread.php?t=52358); I'm just querying the best block sizes for an Amiga in general.
-
I'm referring to the free release that's been talked about lately. So it's been released with AF by now? Have to say, it tastes a little fishy.
-
You could go read the thread; there's some drama, if you enjoy that kind of thing with your fish, but the release by AF is legit.
-
I think you might have missed my second edit; I was referring to file systems in general (in the context of Amiga file systems).
The absolute answer you questioned was the only correct answer to the question presented.
Regarding block size: using a large block size to gain performance only makes sense with poor filesystems that don't keep the data sequential (and thus don't perform mostly sequential access), or with hardware that actually uses a larger hardware block size.
FFS is notoriously slow, so with that you do get a performance boost.
PS. While some Amiga file systems allow 4K block sizes (and thus seem to be ready for the new 4K block size hard drives), there is still the issue of proper alignment. To obtain the best performance, the beginning of the partition must be aligned to the block size, too. I doubt the Amiga partitioning programs account for this.
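A minimal sketch of that alignment check (the values are hypothetical; it assumes a drive that reports 512-byte blocks while using 4K internally):
[code]
/* Alignment sketch: is the partition start (in 512-byte device
   blocks, e.g. LowCyl * blocks-per-cylinder from the RDB) a
   multiple of the drive's internal 4K block? */
#include <stdio.h>

int main(void)
{
    unsigned long start_block = 1003;             /* hypothetical partition start */
    unsigned long phys_block  = 4096;             /* drive's internal block size, bytes */
    unsigned long per_phys    = phys_block / 512; /* 512-byte blocks per physical block */

    if (start_block % per_phys == 0)
        printf("partition start is 4K-aligned\n");
    else
        printf("misaligned: each write may straddle two physical blocks\n");
    return 0;
}
[/code]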
-
'Fraid not (http://amiga.org/forums/showthread.php?t=52358); I'm just querying the best block sizes for an Amiga in general.
In general? :) And of course not everyone uses the built-in IDE controllers... so that's another thing for you to consider. But do post your results; that would be more interesting than speculation.
-
The absolute answer you questioned was the only correct answer to the question presented.
You missed the boat then re: the second edit, which was added specifically because I read the entirety of the thread after submitting the post. ;)
Regarding block size: using a large block size to gain performance only makes sense with poor filesystems that don't keep the data sequential (and thus don't perform mostly sequential access), or with hardware that actually uses a larger hardware block size.
I disagree. Since the block size is a minimum unit size, you are also discussing the impact on the CPU and the amount of usable data transferred in a given time. An extreme example: if the average file size being read is 1K and the cluster size is 8K (and thus the transfer block size), it'll be transferring more slack space than with, say, a 4K block, plus the CPU will be doing more redundant I/O operations (CRC, reads etc.). Sequential access only lowers seek latency as far as I'm aware...
FFS is notoriously slow, so with that you do get a performance boost.
Now that I agree with ;)
PS. While some Amiga file systems allow 4K block sizes (and thus seem to be ready for the new 4K block size hard drives), there is still the issue of proper alignment. To obtain the best performance, the beginning of the partition must be aligned to the block size, too. I doubt the Amiga partitioning programs account for this.
This probably doesn't matter for CompactFlash, which has a standardised sector of 512B, so I guess that part answers my initial question as to the best size for CF: either 512B or 1K :) Some hard disks have been using 4K since 2008, btw (http://en.wikipedia.org/wiki/Disk_sector).
-
In general? :) And of course not everyone uses the built-in IDE controllers... so that's another thing for you to consider. But do post your results; that would be more interesting than speculation.
"In general..." something to recommend using certain controllers/CPU combinations/media/fs/blue moons when someone asks what size should I use on a forum there'd be some guidelines. :)
I *think* based on reading around for this thread on hardware standards: for the internal controller 512B would be about the best for low end hardware 000,020 (CF or hard disk). 512B-1K would be better for 030,040,060 and probably 4K for the newer HD standard discussed (040 upwards though). Again this all assumes the alignment is correct as Piru mentions... and that I'm not talking out of my bottom ;)
-
Since the block size is a minimum unit size, you are also discussing the impact on the CPU and the amount of usable data transferred in a given time.
How? How is it different to transfer 64 blocks of 512 bytes vs 32 blocks of 1024 bytes?
An extreme example: if the average file size being read is 1K and the cluster size is 8K (and thus the transfer block size), it'll be transferring more slack space than with, say, a 4K block, plus the CPU will be doing more redundant I/O operations (CRC, reads etc.).
Why would the filesystem read the whole 8K in this case? Even if it did read the whole 8K, modern drives are so fast at sequential access that it makes little difference whether you read 1K or 8K.
What CRC are you talking about? If it's something in the filesystem (no Amiga filesystem CRCs the data, btw), why would the filesystem calculate a CRC for the whole 8K if only 1K of it is valid data? Let's not forget that most time is spent reading and writing the data. Metadata is insignificant in comparison.
This probably doesn't matter for CompactFlash, which has a standardised sector of 512B, so I guess that part answers my initial question as to the best size for CF: either 512B or 1K :)
It was about the new 4K hard disk drives, which report a 512-byte block size (for compatibility reasons) while actually using 4K blocks internally. They do work, but writing gets really slow if the partition is not aligned properly. Until recently both Windows and Linux created such badly aligned partitions.
As for flash, it has a totally different internal structure which the typical filesystem has no way of knowing about. The firmware tries to do some magic tricks to accommodate the silly things typical filesystems do when they assume a classical HDD. There's little that can be assumed about flash, except that small writes are typically really slow (due to the internal arrangement of flash).
http://en.wikipedia.org/wiki/UBIFS is an interesting approach to the problem: rather than going through the firmware, it accesses the flash directly. Obviously this requires special HW (that is: it's not usable with your off-the-shelf SSD drives). Here's some more about UBIFS: http://www.linux-mtd.infradead.org/doc/ubifs_whitepaper.pdf
-
As a friend reminded me: AmigaOS is one of the few OSes that doesn't do proper block-level caching by itself. This is another reason why a large block size gives a benefit with FFS.
It has gone as far as some filesystems implementing such caching, read-ahead etc. themselves (SFS, for example).
-
How? How is it different to transfer 64 blocks of 512 bytes vs 32 blocks of 1024 bytes?
A CRC-enabled device calculates a short, fixed-length binary sequence, known as the CRC code or just CRC, for each block of data and sends or stores them both together. (http://en.wikipedia.org/wiki/Cyclic_redundancy_check)
I submit that fewer blocks (transferred disk sectors) => less CRC checking, unless I am misunderstanding something here.
[EDIT]Also (again, I might be misunderstanding): as it is PIO, the CPU is dealing with the I/O and requesting the file at the lowest level, i.e. manipulating/copying blocks (in contrast to, say, DMA, where a single request is made while the CPU does something else).[/EDIT]
-
A CRC-enabled device calculates a short, fixed-length binary sequence, known as the CRC code or just CRC, for each block of data and sends or stores them both together. (http://en.wikipedia.org/wiki/Cyclic_redundancy_check)
I submit that fewer blocks (transferred disk sectors) => less CRC checking, unless I am misunderstanding something here.
A CRC-enabled device always calculates a checksum for its physical blocks. It does not know about logical clusters, and therefore it makes no difference whether larger clusters are used. Unfortunately the word "block" is used with many different meanings. The physical device block size (e.g. 512 bytes or 4K) is always fixed for SCSI or ATA devices. The file system block size (a.k.a. cluster size) can be varied by software, but it depends on the file system implementation whether larger clusters improve performance or not.
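A toy sketch of that logical/physical distinction (the file size is hypothetical, and it assumes a filesystem that reads whole clusters): the device transfers and checksums the same fixed 512-byte sectors either way; a larger cluster only changes how many of them get requested.
[code]
/* Logical cluster vs physical sector: sectors moved for a 1K file. */
#include <stdio.h>

int main(void)
{
    unsigned long file   = 1024;   /* hypothetical file size, bytes */
    unsigned long sector = 512;    /* fixed physical sector size */
    unsigned long clusters[] = { 512, 4096, 8192 };
    int i;

    for (i = 0; i < 3; i++) {
        unsigned long c = clusters[i];
        /* if the FS reads whole clusters, round the file up to one */
        unsigned long read = (file + c - 1) / c * c;
        printf("cluster %4lu: device transfers (and checksums) %lu sectors\n",
               c, read / sector);
    }
    return 0;
}
[/code]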
-
I've tried PFS in the past, but I much prefer to use SmartFileSystem myself. It's very easy to use and set up, with no problems with block sizes or suchlike. I've been using it for about 4 to 5 years now and have never lost a single bit of data.
Best of all, using SFS2 you can have partitions of up to 1 terabyte and no limit on file sizes, which is needed if you want to store backup ISO images of your DVDs, which can be up to 8GB in size. :)
-
I've tried PFS in the past, but I much prefer to use SmartFileSystem myself. It's very easy to use and set up, with no problems with block sizes or suchlike. I've been using it for about 4 to 5 years now and have never lost a single bit of data.
The sad thing about SFS is that it's inferior to PFS3, despite SFS being a much later development.
- PFS3 is faster than SFS.
- PFS3 doesn't generate massively fragmented files when two or more applications write files to disk. SFS does.
- PFS3 performance doesn't deteriorate over time. SFS does.
- PFS3 has a repair tool. Often with SFS the only option is to copy data over, reformat and copy data back (MorphOS does have an SFSDoctor tool, however).
-
The sad thing about SFS is that it's inferior to PFS3, despite SFS being a much later development.
- PFS3 is faster than SFS.
- PFS3 doesn't generate massively fragmented files when two or more applications write files to disk. SFS does.
- PFS3 performance doesn't deteriorate over time. SFS does.
- PFS3 has a repair tool. Often with SFS the only option is to copy data over, reformat and copy data back (MorphOS does have an SFSDoctor tool, however).
SFS being inferior is just your opinion. I've been using it for years, and both of my 500GB HDs are almost full. I've never noticed any performance deterioration or ever had the need to repair any files or partitions.
I'm perfectly happy with it, but that's just my opinion... :)
-
SFS being inferior is just your opinion
Benchmarks are out there; PFS3 is a lot faster than SFS. It's easy to reproduce the fragmentation issue by having multiple apps write files to an SFS volume at the same time (see the sketch below) and to verify the excessive fragmentation that results. Performance deterioration over time is a bit trickier to test; most likely it's a direct result of the fragmentation issue described, but other factors can be in play as well. The issue has been observed by many (more than just me). The lack of a repair tool (for anything but MorphOS) is a fact, too.
It's more than just an opinion.
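For what it's worth, a minimal sketch of such a test (the volume and file names are arbitrary; "Work:" stands for any SFS partition): two files written in alternating chunks, simulating two applications writing at once.
[code]
/* Interleaved-writer sketch: alternate 4K appends to two files.
   On a filesystem that allocates naively, the two files end up
   with their blocks interleaved, i.e. heavily fragmented. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *a = fopen("Work:frag_a", "wb");  /* hypothetical test volume */
    FILE *b = fopen("Work:frag_b", "wb");
    static char chunk[4096];
    int i;

    if (!a || !b) {
        if (a) fclose(a);
        if (b) fclose(b);
        return 20;                          /* AmigaDOS-style failure code */
    }
    memset(chunk, 0xAA, sizeof(chunk));
    for (i = 0; i < 256; i++) {             /* ~1MB per file, alternating */
        fwrite(chunk, 1, sizeof(chunk), a);
        fwrite(chunk, 1, sizeof(chunk), b);
    }
    fclose(a);
    fclose(b);
    return 0;
}
[/code]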
I've been using it for years, and both of my 500GB HDs are almost full. I've never noticed any performance deterioration or ever had the need to repair any files or partitions.
You're extremely lucky.
-
Benchmarks, schmenchmarks. Piru, I'm perfectly happy with SFS; I've never personally experienced any of the issues you described. I was simply saying that there are alternative file systems available.
Yup, maybe I am lucky... :)
-
Benchmarks, schmenchmarks. Piru, I'm perfectly happy with SFS; I've never personally experienced any of the issues you described. I was simply saying that there are alternative file systems available.
Yup, maybe I am lucky... :)
I'm a satisfied SFS user too; however, I am looking forward to testing PFS3 when the "free" version is openly released.
-
PFS3 FTW! :)
-
Benchmarks, schmenchmarks. Piru, I'm perfectly happy with SFS; I've never personally experienced any of the issues you described. I was simply saying that there are alternative file systems available.
Yup, maybe I am lucky... :)
Just wait; your hour will come.
-
PFS3 is faster than SFS.
No comparison on my system. PFS destroyed SFS in my SysSpeed tests.
PFS3 doesn't generate massively fragmented files when two or more applications write files to disk. SFS does.
I can't verify this. SFS is slower and thrashes the HD more with simultaneous writes.
PFS3 performance doesn't deteriorate over time. SFS does.
This is my experience too, although I wouldn't consider it a major issue with SFS.
PFS3 has a repair tool. Often with SFS the only option is to copy data over, reformat and copy data back (MorphOS does have an SFSDoctor tool, however).
SFS has a recovery tool that works under AmigaOS 3. It's very buggy and slow, but I used it to recover a partition. I did have to reformat.
The biggest issue with SFS is bugs.
Can't the mask force the buffer alignment to whatever is needed? I use MASK=0x7FFFFFFC to force longword alignment as that is faster.
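For reference, roughly what that mountlist MASK test amounts to (the buffer address is hypothetical): a buffer can be used for a direct transfer only if none of its address bits fall outside the mask, so MASK=0x7FFFFFFC rejects anything that isn't longword-aligned.
[code]
/* MASK sketch: 0x7FFFFFFC clears bits 0-1, demanding longword
   alignment (and an address below 0x80000000). */
#include <stdio.h>

int main(void)
{
    unsigned long mask = 0x7FFFFFFCUL;   /* from the mountlist/RDB */
    unsigned long addr = 0x00204F02UL;   /* hypothetical buffer address */

    if ((addr & ~mask) == 0)
        printf("buffer usable for direct transfer\n");
    else
        printf("rejected: the handler copies through an aligned buffer instead\n");
    return 0;
}
[/code]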
-
I have received various reports from playtesters that SFS has severe problems trying to do two or more simultaneous reads (even worse if there is a simultaneous read+write). Supposedly, SFS does not multitask internally. Maybe some knowledgeable person out there could shed some light on this aspect?
-
I can still only say that with SFS I have never experienced any of the above-mentioned problems...
I must be really lucky... :)
-
So this may be an incredibly stupid question, but if I were to use one of these file systems, how does one go about it?
I am assuming a fresh install would be needed, correct?
-
Correct!!!
Naw, only kidding. You can easily change a partition to SFS, but of course you still need to do a quick format and reinstall. It's well worth the hassle if you use large single files of up to 8GB like I do... :)
-
You don't need to reinstall. Just back up (for example, copy all files to another partition), change the file system, quick-format, and restore the backup.
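For example (device and volume names are hypothetical; DH1: is the partition being converted and DH0:Backup is the scratch space):
[code]
Copy FROM DH1: TO DH0:Backup ALL CLONE
; change DH1: to the new file system in HDToolbox and reboot, then:
Format DRIVE DH1: NAME Work QUICK
Copy FROM DH0:Backup TO DH1: ALL CLONE
[/code]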
-
You don't need to reinstall. Just back up (for example, copy all files to another partition), change the file system, quick-format, and restore the backup.
Someone mentioned in another thread that you need the RDB to contain the "driver" (or whatever it is) in order to boot from the disk too.
-
Someone mentioned in another thread that you need the RDB to contain the "driver" (or whatever it is) in order to boot from the disk too.
That's not specific to the boot partition. Before you can use the file system on any partition at all, it needs to be added to the RDB.