
Author Topic: What's so bad about Intel 8086 in technical terms?


Offline Fats

  • Hero Member
  • Join Date: Mar 2002
  • Posts: 672
Re: What's so bad about Intel 8086 in technical terms?
« on: June 19, 2014, 10:11:54 PM »
Quote from: psxphill;767118
Berkeley RISC started because they noticed that when compiling Unix they were only using 30% of the 68000 instruction set. They weren't using all addressing modes of the add instruction, for instance. I don't know if they spent any time figuring out whether the compiler would have a hard time using them, or even confirming that the compiler would never use them; just that compiling Unix didn't use them.

...

Their motivation was simplifying the hardware so they could have dedicated hardware for parallel processing, a concept that was put to good use in the Amiga.
 
x86 compilers can now use a lot of the fancy CISC instructions, which inside the CPU are just macros for sets of RISC-like micro-operations anyway.
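
To illustrate that last point with a minimal sketch (the function is my own example, and the exact micro-op split is only the general shape; the real breakdown varies by microarchitecture): a compiler will happily emit a classic CISC read-modify-write instruction for code like this, and the CPU front end then cracks it into RISC-like micro-ops.

[code]
/* bump.c -- a read-modify-write on a memory operand.
 *
 * With gcc -O2 on x86-64 this typically compiles to a single
 * CISC-style instruction:
 *
 *     addq $42, (%rdi)
 *
 * which the decoder splits into RISC-like micro-ops, roughly:
 *
 *     load  tmp <- [rdi]       ; read the memory operand
 *     add   tmp <- tmp + 42    ; do the arithmetic
 *     store [rdi] <- tmp       ; write the result back
 *
 * (Micro-op counts and names differ per core; this is just the
 * general load/modify/store shape.)
 */
void bump(long *counter) {
    *counter += 42;
}
[/code]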


One thing to realise is that this CISC vs RISC discussion dates from a time when the cost of individual transistors was still significant. Moving complexity from the chip to the compiler could yield a more cost-effective solution back then.
Nowadays, when cache memories consume the majority of the transistor budget, that reasoning is no longer valid.

Also, nowadays when extensions to a CPU's instruction set are made, I think the compiler side is always included, to make sure the new instructions can be used effectively.
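
For example (a minimal sketch; the function name add4 is mine, but _mm_add_ps and <immintrin.h> are the standard Intel intrinsics), when SSE was added, compilers shipped intrinsics for it, so the new instructions were usable from C right away:

[code]
/* sse_add.c -- using an ISA extension (SSE) through a compiler
 * intrinsic. Build with e.g.: gcc -O2 -c sse_add.c
 * (SSE is baseline on x86-64, so no extra flag is needed there.)
 */
#include <immintrin.h>

/* Adds four packed single-precision floats; the intrinsic maps
 * directly onto the SSE addps instruction. */
__m128 add4(__m128 a, __m128 b) {
    return _mm_add_ps(a, b);
}
[/code]

And with -O3 and an appropriate -march flag, modern compilers will even auto-vectorise plain loops onto such extensions without any intrinsics at all.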
Trust me...                                              I know what I'm doing