
Author Topic: Does Size Matter?  (Read 1947 times)


Offline Karlos

  • Sockologist
  • Global Moderator
  • Hero Member
  • *****
  • Join Date: Nov 2002
  • Posts: 16879
  • Country: gb
  • Thanked: 5 times
Re: Does Size Matter?
« on: July 27, 2003, 02:41:16 AM »
@Atheist

You keep asking this one :-)

Look, a 64-bit processor does not necessarily imply 64-bit opcodes. It may be true of some VLIW architectures but it is not universally so.

64-bit means that the processor can operate directly on 64-bit integer and pointer operands (thereby breaking the 4GB addressable memory barrier).

The 68020+ are all true 32-bit processors. However, the typical 680x0 opcode is still encoded in the 16-bit words used on the original 68000. 68020 code is not 2x the size of 68000 code merely by virtue of being truly 32-bit...

In a similar fashion, each opcode in the 32-bit PPC instruction set is 32 bits wide. The 64-bit implementations still use 32-bit sized opcodes; if they didn't, they wouldn't be code compatible with the earlier 60x / G3 / G4.

Ultimately the change is that the general purpose registers have expanded to 64 bits wide and some new instructions exist to handle 64-bit data.

I hope this clarifies it for you a bit :-)

-edit-

The statement about CISC v RISC clock cycles is not true. Most modern CISC architectures (x86 in particular) break each CISC instruction down into a series of micro-operations that are then fed to multiple execution units. This decomposition is a complex process.

The core of the chip borrows heavily from RISC design paradigms, using strategies such as pipelining, rename registers etc.

Basically, modern CISC chips only reach the performance they do by copying RISC ideas internally. It's basically a kludge.

If you think of an x86 executing several micro-ops from a single CISC instruction in parallel, consider that a genuine RISC processor will be executing several RISC instructions in parallel. It's common for current RISC CPUs to complete several instructions every cycle :-D
int p; // A
 

Offline Karlos

Re: Does Size Matter?
« Reply #1 on: July 27, 2003, 03:00:05 AM »
Quote

iamaboringperson wrote:
Quote
The 64-bit implementations will still use 32-bit sized opcodes,
that's the bit that I was unsure of! :lol: (at times I'm too lazy to go and do the research myself)


The PPC instruction set is completely orthogonal AFAIK. Adding opcodes that are 64 bits wide would really screw it up :-)

Quote

Karlos! I think you have just exposed the secrets of human intelligence! I could imagine pretty soon you will have a software human being (AI) in its beta testing stage ;-)


Like a real human, memory fragmentation over time will cause this algorithm to break down (unless we're lucky enough to get the storage for ideas recycled each time, that is ;-))

Offline Karlos

Re: Does Size Matter?
« Reply #2 on: July 27, 2003, 03:19:57 AM »
Fear not :-)

Whilst the code will doubtless increase in size by 50% or so, think how compact the OS actually is.

AmigaOS (and MorphOS and AROS) stick to the Amiga notion of using lots of shared code, the vast majority of which is fully reentrant.

Windows, by comparison, is extremely bad at this: it's not uncommon to have several instances of the same .dll in memory, which defeats the whole purpose of DLLs :-) Hell, if you go into Win2K's advanced options you'll find the option to run every window task in a separate code space (the idea being that it's more stable than allowing reentrant behaviour :lol:).

Add to that the need for huge memory buffers to hide the inadequacies of the filing systems, lazy programmers, and the assumption that the end user will always buy more RAM, and you get the present nightmare.

We have some really unique concepts in our OS too. Datatypes are one such example, reducing an application's dependency on having its own code to load/save data. Contrast that with other platforms, where every app is responsible for such things, and you can see how they bloat.

Amiga programmers, long since attuned to the need for compact, fast and reusable code to get performance from our 'older' hardware, have the opportunity to create a whole new wealth of applications on the new hardware. We live in interesting times :-)

:-D

Irrespective of the CPU type, Satan himself will be skating to work on ice the day any incarnation of AmigaOS and its applications requires anything like the resources Windows does :-)