
Author Topic: What's so bad about Intel 8086 in technical terms?  (Read 21046 times)


Offline matthey

  • Hero Member
  • *****
  • Join Date: Aug 2007
  • Posts: 1294
Re: What's so bad about Intel 8086 in technical terms?
« on: June 10, 2014, 08:57:23 PM »
Quote from: freqmax;766140
Most people here, I guess, find the Motorola 68000 a really good design given the limitations at the time (economy, tech and market). The Intel 8086 and its descendants were a less well thought out design. But what specific technical aspects of it were made worse than any circumstance would have enforced?


First off, there is a difference between how good the CPU design is and how good the ISA design is. IMO, for its time, the 68000 was an average implementation of a great ISA design. There were 8-bit processors upgraded to 16 bits that were faster than the 68000, although the 68000 had advantages with complex code, large code and ease of optimizing. The 68000 ISA was created with the foresight that it would become a 32-bit processor (32-bit registers and operations), so it had no major disadvantages when it became 32 bit.

The x86 started out 8 bits with no foresight for 16 bits. There is a big difference between 8-bit memory banks and 16-bit direct addressing. The x86's 8-bit ancestors used 2 variable registers with an accumulator taking the result, and only certain registers were allowed to do specific functions; orthogonality wasn't even conceived of when they were designed. Growing to 16 bits was a big change carrying 8-bit legacy baggage. Then came 32 bits and extensions, after most of the short, efficient instruction encodings were already taken. Some common instruction encodings were dropped (INC) and new instructions replaced old ones (MOVSX/MOVZX). Compatibility modes were created, adding complexity. Some things work and others still seem like bolt-ons and throwbacks to the 8-bit days.

The x86 does have some advantages. The original instructions and addressing modes were generally simple, so they were easy to clock fast. Certain instructions were more common and shorter, which was easier to optimize internally in the CPU. The code density was good, especially for 8-bit text. The x86's 8-bit ancestors weren't bad 8-bit processors. The ISA suffered from the 8->16->32 bit growth, while the 68k was designed for 32 bits from the beginning. Ironically, the x86 has been developed into the fastest consumer processor while the 68k was dropped after only 1 major ISA update.
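To make the accumulator vs. orthogonal-registers point concrete, here is a rough C sketch; the assembly in the comments is from memory and illustrative only, not verified output:

Code:
#include <stdio.h>

int main(void)
{
    int a = 2, b = 3, r;

    /* r = a + b
       8080 style (accumulator machine): the A register is the only
       ALU result register, so values get shuffled through it:
           MOV A,B   ; first operand into the accumulator
           ADD C     ; result is forced into A
           MOV D,A   ; move it out again to free the accumulator
       68000 style (orthogonal data registers): any Dn works as
       source and destination:
           ADD.L D1,D2
       (illustrative only, from memory) */
    r = a + b;
    printf("r = %d\n", r);
    return 0;
}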
 

Offline matthey

  • Hero Member
  • *****
  • Join Date: Aug 2007
  • Posts: 1294
Re: What's so bad about Intel 8086 in technical terms?
« Reply #1 on: June 11, 2014, 12:48:34 AM »
Quote from: bloodline;766150
Not sure if the AMD64 really counts as x86... I know it has full x86 compatibility, but the ISA is so far removed from the old x86 design, I would call it a new architecture.


Adding 8 somewhat general-purpose 16-bit registers and 16-bit addressing instead of segmented memory banks may have been a bigger ISA change than the jump from 16 bits to 32, or even from 32 bits to 64 with x86_64. It's true that the x86_64 ISA is so different from the original 8-bit x86 ancestry that it really isn't the same thing anymore. I would call it a new architecture too, but then I would do the same for any new ISA (Instruction Set Architecture).
 

Offline matthey

  • Hero Member
  • *****
  • Join Date: Aug 2007
  • Posts: 1294
Re: What's so bad about Intel 8086 in technical terms?
« Reply #2 on: June 18, 2014, 07:33:31 AM »
Quote from: freqmax;767007
What would you classify ARM Cortex-M and ARM Cortex-A as?
(presumably v7 and higher)


All ARM processors are load/store architectures, and load/store = RISC, therefore they are RISC.

ARM may have CISC-like encodings with Thumb, and complex addressing modes of the kind common on CISC, but it's still not a register-memory architecture; the sketch after the lists below shows the practical difference.

load/store architecture = RISC
register-memory architecture = CISC

Modern RISC: ARM (all variants), PPC/Power, MIPS, SPARC
Modern CISC: 68k, x86/x86_64, z/Architecture
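
A minimal C sketch of that distinction; the commented assembly is illustrative rather than verified compiler output:

Code:
#include <stdio.h>

/* The same statement on a register-memory machine vs. a load/store
   machine (assembly from memory, illustrative):
   68k (register-memory): one instruction can read-modify-write memory:
       ADDQ.L #1,(A0)
   ARM (load/store): memory is only touched by loads and stores:
       LDR r1,[r0]
       ADD r1,r1,#1
       STR r1,[r0]      */
static void bump(int *counter)
{
    *counter += 1;
}

int main(void)
{
    int n = 0;
    bump(&n);
    printf("n = %d\n", n);
    return 0;
}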
 

Offline matthey

  • Hero Member
  • *****
  • Join Date: Aug 2007
  • Posts: 1294
Re: What's so bad about Intel 8086 in technical terms?
« Reply #3 on: June 18, 2014, 10:33:03 AM »
Quote from: biggun;767012
RISC was an invention of a certain time...

There was a golden time when CPU designers tried to make CPU cores which are nice to program in ASM. Great examples are VAX and 68K.


Easier to program in assembler usually equates to easier to create good compilers for and easier debugging. The scursed (screwed and cursed?) 68020 addressing modes were easier for assembler programmers and compilers, but they must have forgotten to consult the chip designers. The (bd,An,Xn) addressing mode is quite nice, even if bd=32 bits is there more for completeness and crappy compilers. The double indirect modes wouldn't have been so bad either if they had been limited to LEA, PEA, JSR and JMP (12 bytes max length). Not allowing them for MOVE alone would reduce the max instruction length from 22 bytes to 14 bytes. There really wasn't a better way of encoding (bd,An,Xn), although double indirect could have been simplified and given a simpler encoding.
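
For anyone who hasn't used them: in C terms, the double indirect (memory indirect) modes were aimed at pointer-then-index accesses like the hypothetical lookup below. The mode details in the comment are from memory; treat them as illustrative:

Code:
#include <stdio.h>

struct obj { int field[8]; };

/* The access pattern the 68020 memory indirect modes target. With
   the postindexed form ([An],Xn.L*4,od), one effective address can
   fetch the pointer at (An), then add the scaled index and the
   offset of 'field'. A plain 68000 needs separate loads for this.
   (Details from memory, illustrative.) */
static int lookup(struct obj **slot, int i)
{
    return (*slot)->field[i];
}

int main(void)
{
    struct obj o = { {10, 11, 12, 13, 14, 15, 16, 17} };
    struct obj *p = &o;
    printf("%d\n", lookup(&p, 3));
    return 0;
}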

Quote from: biggun;767012

Then there was a time when chip technology allowed better clock rates, but some companies failed to reach them because of the complexity of their decoding logic and the complexity of their internal data paths. This was the time Motorola scursed some of their 68020 instruction enhancements because they limited the clock rate - and the time some people had the idea to avoid the problem by inventing new, much simpler decoding schemes.


But was instruction decoding the clock-rate-limiting bottleneck on the 68060? Wasn't the 68060 slower with longer instructions because of fetching rather than decoding? The timings are good for the complex addressing modes, provided the instructions are short. It looks to me like the 68060 solved many of the 68020+ complexity problems, only to be canned. It needed upgrading in some areas (like the instruction fetch) and more internal optimizations (more instructions that work in both pipes, more instruction fusing/folding, a link stack, etc.), but it was a very solid early foundation to build on. It also would have benefited from a more modern ISA and from ditching the transistor misers.
 

Offline matthey

  • Hero Member
  • *****
  • Join Date: Aug 2007
  • Posts: 1294
Re: What's so bad about Intel 8086 in technical terms?
« Reply #4 on: June 18, 2014, 10:48:21 PM »
Quote from: freqmax;767056
Much software can work with 32-bit space. So 64-bit environments may be stuck in some ways with more bits than really needed. Which will bloat code.

RISC route
Make fixed length instructions to simplify decoding.
Result: More fetch, memory and caches needed for larger code sizes

Reduce the number of instructions and addressing modes to increase the clock rate
Result: More instructions, larger programs and hotter processors

Use separate load/store instructions to simplify decoding and execution.
Result: Larger programs, more registers and OoO execution needed to avoid load/store bubbles

Move complexity to the compiler.
Result: Slower and larger programs needing more caches

Not enough address space and memory because programs are now too big
Result: Move to 64 bits, which slows clock speeds and makes programs even bigger

Progress!

The other route is to stay with the 32-bit 68k but enhance it, making programs even smaller. This reduces cache, memory and bandwidth requirements. The 68k will never clock as high as some other processors, but it does offer strong single-core/single-thread integer performance using low resources. Which is a better fit for the Amiga?
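
As a rough illustration of the density argument, here is the same copy loop with approximate encoded sizes in the comments (byte counts from memory):

Code:
#include <stdio.h>

/* Copy n longwords. Approximate encodings, from memory:
   68k (variable-length):           classic 32-bit RISC (fixed 4 bytes):
     loop: MOVE.L (A0)+,(A1)+ ;2      loop: LDR  r3,[r0],#4  ;4
           SUBQ.L #1,D0       ;2            STR  r3,[r1],#4  ;4
           BNE.S  loop        ;2            SUBS r2,r2,#1    ;4
     total: 6 bytes                         BNE  loop        ;4
                                      total: 16 bytes
   Denser loops mean the same instruction cache holds more of the
   program, which is the point about resource use. */
static void copy_longs(long *dst, const long *src, long n)
{
    while (n--)
        *dst++ = *src++;
}

int main(void)
{
    long a[4] = {1, 2, 3, 4}, b[4] = {0, 0, 0, 0};
    copy_longs(b, a, 4);
    printf("%ld %ld %ld %ld\n", b[0], b[1], b[2], b[3]);
    return 0;
}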
 

Offline matthey

  • Hero Member
  • *****
  • Join Date: Aug 2007
  • Posts: 1294
Re: What's so bad about Intel 8086 in technical terms?
« Reply #5 on: June 19, 2014, 12:10:54 AM »
Quote from: NorthWay;767087
Much of the original thinking behind RISC designs was that compilers had a hard time using all the fancy CISC instructions.

It is then not exactly rocket science to speed up the things they do use.
Compilers only use simple instructions -> make a CPU with simple instructions and speed those up.


This thinking was a bit naive, considering the same RISC fans decided to move the complexity into the very compilers that weren't smart enough to use complex instructions. Compilers can and do use complex instructions if they are the best option and fast. CISC often had slow instructions that couldn't be made fast, or whose functionality was either too specialized or not needed. These types of instructions are baggage for any processor, and it's not just CISC that has them. The PPC ISA has its fair share of baggage instructions now.
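
As a simple example of a compiler using a "complex" instruction when it's fast: indexed array access, where x86 compilers typically fold the whole address computation into one scaled-index load. The commented assembly is typical output from memory, not verified:

Code:
#include <stdio.h>

/* On x86 the load below typically becomes one scaled-index
   instruction, something like  mov eax,[rdi+rsi*4]  (illustrative),
   while a strict load/store RISC needs a separate shift/add before
   the load. */
static int element(const int *a, long i)
{
    return a[i];
}

int main(void)
{
    int v[4] = {5, 6, 7, 8};
    printf("%d\n", element(v, 2));
    return 0;
}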

Quote from: NorthWay;767087

"The perfect number of registers in a CPU are 0, 1, or infinite" - quote from a class I took sometime.
Compilers have no trouble juggling lots of registers and figuring out when to keep values in registers and when to purge them. Many registers were an answer to having to go to memory for immediates and to lower memory pressure in general. With code going in loops it works.


Processor logic outpacing memory speed is another limiting battle for modern processors. More registers do help, but RISC doesn't have as much of an advantage here as would be expected. Processor logic speed vs memory speed is less of an issue for an FPGA CPU. With less of a limitation here, FPGA processors may be able to do more work in parallel and come surprisingly close to hard processors that are clocked much higher.
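
The memory wall is easy to demonstrate: a chain of dependent loads runs at memory latency no matter how fast the core's logic is. A minimal sketch (the timing method is crude and the access pattern is only a rough prefetcher-defeater):

Code:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Dependent pointer chase: each load's address comes from the
   previous load, so the CPU cannot overlap them and the loop runs
   at roughly one memory (or cache) latency per step. */
#define N (1 << 20)

int main(void)
{
    size_t *next = malloc(N * sizeof *next);
    size_t p = 0;
    long steps = 20 * 1000 * 1000;
    clock_t t0, t1;

    if (!next) return 1;
    /* A shifted cycle so a simple next-line prefetcher can't keep up;
       a true random shuffle would be even better. */
    for (size_t i = 0; i < N; i++) next[i] = (i + 12345) % N;

    t0 = clock();
    for (long s = 0; s < steps; s++) p = next[p];
    t1 = clock();

    printf("%.1f ns/step (p=%zu)\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC * 1e9 / steps, p);
    free(next);
    return 0;
}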

Quote from: NorthWay;767087

RISC was a lot of good ideas that have had their advantages reduced by advances in manufacturing. RISC and CISC today are both struggling with instruction dependency and the IPC limits you get from that. EPIC and Mill have tried to break through that barrier. EPIC seems to be compiler-limited and possibly with too much junk in the trunk, and Mill is so far mostly an idea. I don't know if there are other designs working on this.


Multi-core and multi-threading are a good way to break through the dependency problems, but the memory limitation remains (to a lesser extent), multi-processing overhead and cache coherency eat up a lot of the gains, and some tasks can't be done in parallel. I think the Mill computer will have the same compiler complexity problems as VLIW processors. Good luck debugging that one when the compiler doesn't work right. I would still take CISC over all the choices even though it has the same limitations. It's simpler and easier to code for, with smaller programs. I like that.
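
The "some tasks can't be done in parallel" part is Amdahl's law, and it bites quickly; a quick sketch of the arithmetic:

Code:
#include <stdio.h>

/* Amdahl's law: with fraction p of the work parallelizable, the
   speedup on n cores is 1 / ((1-p) + p/n). Even at p = 0.9 the
   speedup can never pass 10x, no matter how many cores. */
int main(void)
{
    double p = 0.9;
    for (int n = 1; n <= 64; n *= 2)
        printf("%2d cores: %.2fx\n", n, 1.0 / ((1.0 - p) + p / n));
    return 0;
}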
 

Offline matthey

  • Hero Member
  • *****
  • Join Date: Aug 2007
  • Posts: 1294
Re: What's so bad about Intel 8086 in technical terms?
« Reply #6 on: June 19, 2014, 01:27:26 AM »
Quote from: freqmax;767093
What embedded CPU choice is a good one these days? There are a few 32-bit designs.. AVR, C28x, ColdFire, CPU32, ETRAX, PowerPC 603e, PowerPC e200, PowerPC e300, M-CORE, MIPS32 M4K, MIPS32 microAptiv MPU, MPC500, PIC, RISC, TLCS-900, TMS320C28x, TriCore, TX19A, etc. And the only VLIW seen in the flesh seems to be the products of Transmeta for an unattractive price.


Embedded has a few VLIW processors for specialized tasks. See the Fujitsu FR-V processors for example:

http://en.wikipedia.org/wiki/FR-V_%28microprocessor%29

They have amazing power efficiency but are very specialized. I recall another embedded VLIW processor too, but I can't remember the name. Embedded is about the only place where VLIW processors are used. None are general-purpose enough to be well known.
 

Offline matthey

  • Hero Member
  • *****
  • Join Date: Aug 2007
  • Posts: 1294
Re: What's so bad about Intel 8086 in technical terms?
« Reply #7 on: June 19, 2014, 02:59:33 AM »
Quote from: freqmax;767105
Nothing one wants to code C on?

The FR-V has C support with GNU tools and can even run multiple operating systems. I would expect programming it to have some similarities to a SIMD processor or GPU (where branches are evil), but I don't really know.
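
On the "branches are evil" point, the usual trick on SIMD/VLIW-style machines is to replace branches with branchless or predicated forms. A generic C illustration, not FR-V specific:

Code:
#include <stdio.h>

/* Branchless clamp-to-zero: on predicated/VLIW/SIMD machines the
   select compiles to a conditional move or mask, so every lane or
   slot does the same work with no branch to diverge on. */
static int clamp0_branchy(int x)    { return (x < 0) ? 0 : x; }
static int clamp0_branchless(int x)
{
    int mask = -(x > 0);    /* all ones if x > 0, else zero */
    return x & mask;
}

int main(void)
{
    for (int x = -2; x <= 2; x++)
        printf("%d -> %d / %d\n",
               x, clamp0_branchy(x), clamp0_branchless(x));
    return 0;
}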