
Author Topic: Speed ok amiga network  (Read 1619 times)

Offline alewis

  • Full Member
  • ***
  • Join Date: Jul 2006
  • Posts: 176
Re: Speed ok amiga network
« on: August 08, 2006, 10:08:01 PM »
Long post - but DON'T set async transfer mode, it is waaay slow! See below.

There are a number of factors affecting transfer speed between two machines:

Hardware:
"Speed" of source/target medium; latency, rotational speed, medium<->interface transfer speed, sync/asyn mode

Device access method; PIO, DMA, etc

Bus bandwidth and contention; i.e. the speed of the computer bus, and the contention from other devices on the bus

Software:
Filing system used
Fragmentation
Cache

Network:
Topology; bus, ring, or star
Bandwidth; 10mbit -> 1gig
Contention/switch speed; hub or switch speed

Obviously, you can't transfer data over a network faster than the slowest device can read or write it. And Amiga SCSI disks are slow; in general they are 10MB/sec SCSI-2 devices. The max bandwidth on the SCSI bus is limited to 10MB/sec, including command overhead, and many SCSI-2 drives cannot transfer data between the drive and the host adaptor (SCSI card) at this rate anyway. Multiple drives on Amiga SCSI cards /could/ result in a further slow-down if the card does not support command queuing and disconnect/reconnect. The former allows a number of commands to queue on the controller, and even be sorted by device. The latter allows a "slow" device to receive a command, relinquish the bus, execute the command and read/write the data, and then send it over the bus. Latency and rotational speed speak for themselves: the lower the latency and the higher the rotational speed, the nimbler the drive. SCSI-2 drives are, in general, 3600-5400rpm disks, with perhaps some of the "high-end" [of the day] touching 7200.
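
To put rough numbers on the rotational-speed point: average rotational latency is half a revolution, so it falls straight out of the spindle speed. A quick Python sketch (the rpm figures are just the ones mentioned above; nothing here is measured):

def avg_rotational_latency_ms(rpm: float) -> float:
    # Average rotational latency = time for half a revolution, in ms
    return (60_000.0 / rpm) / 2

for rpm in (3600, 5400, 7200):
    print(f"{rpm} rpm -> {avg_rotational_latency_ms(rpm):.2f} ms avg rotational latency")

That prints 8.33ms at 3600rpm, 5.56ms at 5400, and 4.17ms at 7200 - and every read or write pays some of that before a single byte moves.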

Data transfer between the drive and the controller is either synchronous or asynchronous. Synchronous is faster, as the devices (counting the controller as a device) can send ACKs at the same time as receiving data. In async mode, each block of data is received and then ACK'ed, with the sender awaiting an ACK before sending the next block.
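
A toy model of why that per-block ACK hurts. All the figures (block size, bus rate, ACK turnaround) are assumptions for illustration, not real hardware numbers:

BLOCK_BYTES = 512
BUS_BYTES_PER_SEC = 10 * 1024 * 1024   # nominal 10MB/sec SCSI-2 bus
ACK_SECONDS = 20e-6                    # assumed per-block ACK turnaround

def transfer_time(total_bytes: int, overlap_acks: bool) -> float:
    blocks = total_bytes / BLOCK_BYTES
    per_block = BLOCK_BYTES / BUS_BYTES_PER_SEC
    if overlap_acks:                           # sync: ACK rides alongside the data
        return blocks * per_block
    return blocks * (per_block + ACK_SECONDS)  # async: ACK serialised after each block

size = 10 * 1024 * 1024
for mode, overlap in (("sync", True), ("async", False)):
    t = transfer_time(size, overlap)
    print(f"{mode:>5}: {size / t / 1024 / 1024:.2f} MB/sec effective")

With those made-up figures, async gives up roughly 30% of the bandwidth to ACK waits; the real penalty depends on the hardware, but sync always wins.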

How the card sends data is another factor. DMA transfers are faster than non-DMA; in the former, the card controls the data transfer and sends data direct to memory. In non-DMA (PIO), the CPU carries out the transaction. Depending on your processor, non-DMA can actually be faster than DMA, at the expense of CPU utilisation. However, this can be outweighed as the CPU also has to control the network transfer. This may seem strange in today's PC world, where uDMA-133 drives rule, and NICs are appearing which off-load network activity and IP stack processing.
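
A rough sketch of that trade-off: with PIO, the CPU cycles spent copying disk data are no longer available to run the network stack. The cycle costs below are pure assumptions, just to show the shape of it:

CPU_HZ = 25_000_000          # assumed: a 25MHz 68030-class machine
PIO_CYCLES_PER_BYTE = 4      # assumed cost of a programmed-I/O copy loop
STACK_CYCLES_PER_BYTE = 6    # assumed cost of network/IP-stack processing

def copy_ceiling_mb_per_sec(dma: bool) -> float:
    # Throughput the CPU can sustain for a disk->network copy
    cycles = STACK_CYCLES_PER_BYTE + (0 if dma else PIO_CYCLES_PER_BYTE)
    return CPU_HZ / cycles / 1e6

print(f"DMA disk I/O: {copy_ceiling_mb_per_sec(True):.1f} MB/sec ceiling")
print(f"PIO disk I/O: {copy_ceiling_mb_per_sec(False):.1f} MB/sec ceiling")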

Bus - ZorroII has a max bandwidth of, erm, I forget, and ZorroIII is only, what, 33mb/sec? Or lower? Either way, that bandwidth is also shared with any other device on the bus, such as a graphics card, and of course the network card. Bus contention means something has to wait its turn.

Software - at a low level, the filing system is a bottleneck. FFS has limits to how fast it can read/write/create files. PFS improved on this, AFS even more so. The cost is not just writing the data, but also updating the directory blocks.

Fragmentation is important, and is worsened given the hardware limitations. A slow drive, with target data all over it (or free space in non-contiguous blocks scattered over the drive) is going to be way way slower than a slow drive with contiguous data or free space. Even a fast drive is impacted by fragmentation.
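
To illustrate: every extra fragment costs roughly a seek plus half a revolution before any data flows. The timings below are assumed, not measured from any real drive:

SEEK_MS = 15.0           # assumed average seek on an old SCSI-2 drive
ROT_LATENCY_MS = 8.3     # ~half a revolution at 3600rpm
READ_MB_PER_SEC = 2.0    # assumed sustained media rate

def read_time_s(size_mb: float, fragments: int) -> float:
    positioning = fragments * (SEEK_MS + ROT_LATENCY_MS) / 1000.0
    return positioning + size_mb / READ_MB_PER_SEC

for frags in (1, 50, 500):
    t = read_time_s(10.0, frags)
    print(f"10 MB in {frags:>3} fragment(s): {t:5.2f} s ({10.0 / t:.2f} MB/sec)")

Same drive, same data: at 500 fragments the effective rate drops to well under a third of the contiguous figure.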

Cache. If you have a large block of memory as a write cache, this can perhaps be filled quicker than directly writing to a slow device, especially in conjunction with a large read cache on the source machine. Instead of reading in spurts (awaiting an ACK from the target machine as it receives and then writes each block), the source drive can fill the cache, which is then sent over the wire, into the cache at the other end. The logical transfer completes quicker.
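
In effect the caches let the read, wire, and write stages overlap instead of running one block at a time. A sketch with assumed stage rates:

READ_MBPS, WIRE_MBPS, WRITE_MBPS = 2.0, 1.1, 0.8   # assumed stage rates
SIZE_MB = 10.0

# Block-at-a-time: each block is read, sent, then written in turn,
# so the three stage times simply add up.
serialised = SIZE_MB / READ_MBPS + SIZE_MB / WIRE_MBPS + SIZE_MB / WRITE_MBPS

# Cached: the stages pipeline, so wall-clock time approaches the
# time of the slowest single stage.
pipelined = SIZE_MB / min(READ_MBPS, WIRE_MBPS, WRITE_MBPS)

print(f"block-at-a-time : {serialised:.1f} s")
print(f"cached/pipelined: {pipelined:.1f} s (bounded by the slowest stage)")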

Topology. Obviously 10mbit has less bandwidth than 100mbit. Cheapernet (one long bit of co-ax) is way slow; only one device can transmit at a time. Less obvious is that a hub network, using RJ-45, is logically the same as a single piece of co-ax, and only one device can talk at a time. If two devices try, the result is a collision, abort, and retry. Cheap hubs may introduce network latency, as the hub may not work at full 10mbit speed. The same applies, albeit to a lesser degree, when using the old 100mbit hubs.
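
The shared-medium point in numbers, ignoring collision/retry overhead (which only makes the hub case worse):

LINK_MBIT = 100

def per_pair_mbit(talking_pairs: int, switched: bool) -> float:
    if switched:
        return LINK_MBIT               # switch: each pair gets its own path
    return LINK_MBIT / talking_pairs   # hub: one shared collision domain

for pairs in (1, 2, 4):
    print(f"{pairs} pair(s) talking: hub {per_pair_mbit(pairs, False):5.1f} Mbit/sec each, "
          f"switch {per_pair_mbit(pairs, True):.1f} Mbit/sec each")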

Switches are the answer to this. They segment the network so any two devices can talk direct to each other, and 50% of the ports can be active at once (although cheap switches may slow down when this actually happens!). A good, reliable, full-speed switch is the Dell 2716; a 16-port 10/100/1000 gigabit switch, full-speed fabric, web manageable, and only £125 in the UK... Converted me from Cisco (for home use) at that price.

Overall. I remember benchmarking several SCSI controller and drive combos in the early 90s... I thought 800kb/sec was fast then... some were as slow as 80kb/sec. Sorta pales in comparison with today's drives!

How to increase speed:

Check SCSI bus is set to SYNC transfer
Check the MaxTransfer mask is set to the optimal level (google for this; it depends on your hardware)
Benchmark the drive and controller, and if funds allow, upgrade.
Ensure the drives are optimised and defragged. This allows for the quickest reads/writes at DOS level.
Consider AFS instead of FFS
If you have the memory, add a 3rd party disk cache program
Tweak the network packet-size (MTU) setting; depending on the stack, card driver, and card, this can be 1512, 1514, or 1524 bytes.
Review network topology; in order of increasing bandwidth: bus (co-ax), hub, switch. Invest in a switch, especially if you have multiple machines. And if you are using GigE, then get a decent GigE switch such as the Dell.
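
Finally, tying it all together: the end-to-end speed you see is bounded by the slowest stage in the chain. One last sketch - swap in your own benchmarked figures, the defaults here are illustrative guesses:

stages_mb_per_sec = {
    "source drive read" : 1.5,
    "source bus"        : 3.0,
    "network"           : 1.1,   # assumed ~10mbit minus protocol overhead
    "target bus"        : 3.0,
    "target drive write": 0.8,
}

bottleneck = min(stages_mb_per_sec, key=stages_mb_per_sec.get)
print(f"expected ceiling: {stages_mb_per_sec[bottleneck]:.2f} MB/sec "
      f"(limited by: {bottleneck})")

Speeding up anything other than the bottleneck buys you nothing, which is why the benchmark-first-then-upgrade step above matters.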