
Author Topic: Amiga vs PC  (Read 68226 times)


Offline vidarh

  • Sr. Member
  • Join Date: Feb 2010
  • Posts: 409
Re: Amiga vs PC
« on: August 13, 2010, 11:31:36 AM »
Quote from: Amiga_Nut;574591

Now had you pointed out that something like an A500/2000 etc didn't have a serial port capable of utilising a 33.6/56k modem unlike your average early 90s PC.............


Huh? 56k maybe - I never tried. But I ran a BBS off my A500 with a 33.6 modem, and did extensive downloads with the same modem. All at full speed with no problems.

I may have used an optimized serial port driver (I remember messing around with serial drivers a bit), but it certainly worked.
 

Offline vidarh

  • Sr. Member
  • Join Date: Feb 2010
  • Posts: 409
Re: Amiga vs PC
« Reply #1 on: August 17, 2010, 10:25:30 AM »
Quote from: Karlos;575071

The machine has 4GB of RAM and 896MB of video RAM, which just isn't possible within the 4GB address space of a 32-bit OS (unless the OS supports PAE). Plenty of the applications (read: games) I run in Windows are 32-bit, though drivers and codecs are 64-bit.

Generally, the benefits are that 64-bit optimised code runs faster on the CPU than legacy x86 code does (there are a few rare exceptions, even in some of my own code), since 64-bit code can make use of 16 64-bit general purpose registers for integer code and at least SSE2 for floating point/vector ops.
Furthermore, 32-bit applications in the 64-bit environment can allocate more physical RAM than they could in a 32-bit one, since on 32-bit only around 2GB was addressable in total (1GB of address space was reserved for OS/hardware, and another 1GB was used to map in the video memory).


As an example of the benefits of this: I'm working on a mapping application that runs on Linux. This app frequently has to work on datasets way larger than 2GB, often over 4GB.

For the legacy 32-bit version, which I'm phasing out, the code has to seek to a location and do one or more reads to get at specific data.
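
Roughly, that old path looks something like this (a minimal sketch, not the actual application code; the file name, offset and record size are made up):

Code:
/* Legacy-style access: one lseek() plus one read() per lookup.
 * File name, offset and record size are hypothetical.
 * On 32-bit Linux, build with -D_FILE_OFFSET_BITS=64 to get a 64-bit off_t. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("dataset.bin", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    off_t offset = 3000000000LL;   /* the record may live beyond the 2GB mark */
    char record[4096];

    /* Two system calls for every piece of data we want. */
    if (lseek(fd, offset, SEEK_SET) == (off_t)-1) { perror("lseek"); return 1; }
    if (read(fd, record, sizeof record) < 0) { perror("read"); return 1; }

    close(fd);
    return 0;
}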

For the 64-bit version, it just uses mmap() to map the entire data file into memory at once (about 2GB is really the practical limit for this on 32-bit Linux, regardless of PAE, since PAE only helps with the *total* amount of RAM in the system, not the per-process addressable space), reads from wherever it wants to, and leaves the OS to optimize the disk IO accordingly (e.g. how many/few bytes are worth reading).
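
And a minimal sketch of that mmap() route (again with a hypothetical file name and offset), where a lookup is just an index into the mapping:

Code:
/* mmap()-style access: map the whole file once, then read any offset
 * directly and let the kernel decide what to page in.
 * File name and offset are hypothetical; mapping multi-GB files like
 * this only really works in a 64-bit address space. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("dataset.bin", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    char *data = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (data == MAP_FAILED) { perror("mmap"); return 1; }

    /* No seek/read pair per lookup: just dereference the offset. */
    printf("byte at 3GB: %d\n", data[3000000000ULL]);

    munmap(data, st.st_size);
    close(fd);
    return 0;
}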

As it turns out, Linux does a pretty good job of that, and in any case not having to do explicit seek()'s and read()'s saves a massive amount of time by cutting down the number of system calls and context switches.