
Author Topic: What's so bad about Intel 8086 in technical terms?  (Read 20863 times)


Offline biggun

  • Sr. Member
  • Join Date: Apr 2006
  • Posts: 397
  • http://www.greyhound-data.com/gunnar/
Re: What's so bad about Intel 8086 in technical terms?
« Reply #104 from previous page: June 18, 2014, 07:15:03 AM »
Quote from: freqmax;767007
What would you classify ARM Cortex-M and ARM Cortex-A as?
(presumably v7 and higher)


ARM chips are typical RISC designs.

Cortex-M cores are tuned for low power.
Cortex-A cores are available in various types.
Some are very simple in-order RISC designs with pipeline lengths similar to chips from the early '90s.
Some are fancier out-of-order designs with pipeline structures more similar to the PPC G3/G4.

Offline matthey

  • Hero Member
  • Join Date: Aug 2007
  • Posts: 1294
Re: What's so bad about Intel 8086 in technical terms?
« Reply #105 on: June 18, 2014, 07:33:31 AM »
Quote from: freqmax;767007
What would you classify ARM Cortex-M and ARM Cortex-A as?
(presumably v7 and higher)


All ARM processors are load/store architectures. Load/store = RISC, therefore they are RISC.

ARM may have CISC-like encodings with Thumb, and complex addressing modes common on CISC, but it is still not a register-memory architecture.

load/store architecture = RISC
register memory architecture = CISC

Modern RISC: ARM (all variants), PPC/Power, MIPS, SPARC
Modern CISC: 68k, x86/x86_64, z/Architecture
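The load/store vs. register-memory split above can be sketched as a toy instruction count. This is purely illustrative, not real ISA syntax; the mnemonics and operand names are made up:

```python
# Toy illustration (not real ISAs): add a value in memory to a register.
# A register-memory (CISC-style) machine allows a memory operand directly;
# a load/store (RISC-style) machine must load into a register first.

cisc_program = [
    ("add", "d0", "mem[1000]"),   # one instruction: memory operand allowed
]

risc_program = [
    ("load", "r1", "mem[1000]"),  # only loads/stores may touch memory
    ("add",  "r2", "r1"),         # ALU ops are register-to-register only
]

print(len(cisc_program))  # 1
print(len(risc_program))  # 2
```

Same work, one extra instruction on the load/store side; this is the trade-off the posts below keep returning to.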
 

Offline biggun

  • Sr. Member
  • ****
  • Join Date: Apr 2006
  • Posts: 397
    • Show only replies by biggun
    • http://www.greyhound-data.com/gunnar/
Re: What's so bad about Intel 8086 in technical terms?
« Reply #106 on: June 18, 2014, 08:14:32 AM »
RISC was an invention of a certain time...

There was a golden time when CPU designers tried to make CPU cores which are nice to program in ASM. Great examples are VAX and 68K.

Then there was a time when chip technology allowed better clock rates,
but some companies failed to reach them because of the complexity of their decoding logic
and the complexity of their internal data paths.
This was the time Motorola scursed some of their 68020 instruction enhancements because they limited the clock rate - and the time some people had the idea to avoid the problem by inventing new, much simpler decoding schemes.
This was the golden time of the RISC chips.
This was the time RISC chips reached much higher clock rates than CISC.
RISC chips avoid the decoding and memory-data challenges.
RISC chips traded a simpler internal design for sometimes having to execute more instructions to do the same amount of work.

Some of the CISC designs then died; the 68k and the VAX are good examples.
Some CISC families, like x86 and Z, continued and found solutions to the challenge.
Today CISC chips are the ones reaching the highest clock rates again.

Then CPU developers ran into another problem:
instruction dependencies. Neither CISC nor RISC solves this.
This problem limits the amount of superscalarity you can sensibly have.

Again, some people had ideas to "fix" this.
The idea was to create big macro instructions.
Keywords are EPIC and VLIW; the Itanium is a chip of this design.
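The dependency problem described above can be sketched with a toy two-wide issue model. Everything here is hypothetical (real schedulers also track WAW/WAR hazards, issue ports, and more); it only shows how read-after-write dependencies cap superscalar throughput:

```python
# Toy two-wide in-order issue: two consecutive instructions can issue in the
# same cycle only if the second does not read the register the first writes
# (a read-after-write hazard). Purely illustrative.

def cycles(program):
    # program: list of (destination_register, set_of_source_registers)
    total, i = 0, 0
    while i < len(program):
        if i + 1 < len(program) and program[i][0] not in program[i + 1][1]:
            i += 2          # independent pair: dual-issue in one cycle
        else:
            i += 1          # dependent (or last) instruction issues alone
        total += 1
    return total

independent = [("r1", {"r0"}), ("r2", {"r3"}), ("r4", {"r5"}), ("r6", {"r7"})]
dependent   = [("r1", {"r0"}), ("r2", {"r1"}), ("r3", {"r2"}), ("r4", {"r3"})]

print(cycles(independent))  # 2 cycles for 4 instructions
print(cycles(dependent))    # 4 cycles: each instruction waits on the previous
```

The dependent chain gets no benefit from the second pipe at all, which is exactly why wider superscalar designs hit diminishing returns regardless of RISC or CISC.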

The CISC designs are generally easier to program.
The RISC and EPIC designs came about to avoid challenges of the CISC or CISC/RISC hybrid designs.
RISC and EPIC added their own limitations.

Today a major factor is compiler support.
When the Itanium came out it was relatively strong, but it was also very hard to program and very hard to write a good compiler for - therefore the final performance of the software disappointed many.

Offline matthey

  • Hero Member
  • Join Date: Aug 2007
  • Posts: 1294
Re: What's so bad about Intel 8086 in technical terms?
« Reply #107 on: June 18, 2014, 10:33:03 AM »
Quote from: biggun;767012
RISC was an invention of a certain time...

There was a golden time when CPU designers tried to make CPU cores which are nice to program in ASM. Great examples are VAX and 68K.


Easier to program in assembler usually equates to easier to create good compilers and easier debugging. The scursed (screwed and cursed?) 68020 addressing modes were easier for assembler programmers and compilers, but they must have forgotten to consult the chip designers. The (bd,An,Xn) addressing mode is quite nice, even if the 32-bit bd is there more for completeness and crappy compilers. The double indirect modes wouldn't have been so bad either if they had been limited to LEA, PEA, JSR and JMP (12 bytes max length). Not allowing them for MOVE alone reduces the max instruction length from 22 bytes to 14 bytes. There really wasn't a better way of encoding (bd,An,Xn), although double indirect could have been simplified and given a simpler encoding.

Quote from: biggun;767012

Then there was a time when chip technology allowed better clock rates,
but some companies failed to reach them because of the complexity of their decoding logic
and the complexity of their internal data paths.
This was the time Motorola scursed some of their 68020 instruction enhancements because they limited the clock rate - and the time some people had the idea to avoid the problem by inventing new, much simpler decoding schemes.


But was instruction decoding the clock rate limiting bottleneck on the 68060? Wasn't the 68060 slower with longer instructions because of fetching and not decoding? The timings are good for the complex addressing modes, if the instructions are short. It looks to me like the 68060 solved many of the 68020+ complexity problems only to be canned. It needed upgrading in some areas (like the instruction fetch) and more internal optimizations (more instructions that worked in both pipes, more instruction fusing/folding, a link stack, etc.) but it was a very solid early foundation to build on. It also would have benefited from a more modern ISA and ditching the transistor misers.
 

Offline ElPolloDiabl

  • Hero Member
  • Join Date: May 2009
  • Posts: 1702
Re: What's so bad about Intel 8086 in technical terms?
« Reply #108 on: June 18, 2014, 11:08:18 AM »
At the time, would it have been worth continuing the 68k line? Was the MHz race a factor in dropping it? Going to PowerPC was meant to create a common architecture.

Was it the existing software that held back the 68k?
Go Go Gadget Signature!
 

Offline biggun

  • Sr. Member
  • Join Date: Apr 2006
  • Posts: 397
  • http://www.greyhound-data.com/gunnar/
Re: What's so bad about Intel 8086 in technical terms?
« Reply #109 on: June 18, 2014, 11:15:08 AM »
Quote from: matthey;767019
But was instruction decoding the clock rate limiting bottleneck on the 68060?
No it was not.

Quote from: matthey;767019
Wasn't the 68060 slower with longer instructions because of fetching and not decoding?
Yes, this was a limit in the 68060-A which they wanted to fix in the 68060-B.

The big problem - also known as "how the f_uc_k can I decode these instructions fast" - existed before the 68060 came out.
During the time of the 68040, Motorola had no good answer to this.
And these years, around the early '90s, were the golden years of RISC.

The 68060 came out late - by this time Intel and Motorola already had solutions for this.

So yes - when the 68060 came out there was no real need for the RISC trick anymore.
In theory Moto could have continued the 68k line at this time.
But the customers had already gone to other chips - so the market was lost.
And Moto wanted to focus on the PPC chips.

Offline psxphill

Re: What's so bad about Intel 8086 in technical terms?
« Reply #110 on: June 18, 2014, 11:18:43 AM »
Quote from: matthey;767011
load/store architecture = RISC
register memory architecture = CISC

That is only how they are defined now, because all of the other things that made a chip RISC have been taken on by CISC processors - to the point where it largely makes no difference whether a chip is RISC or CISC.

RISC was load/store because it allowed the instruction decoding to be simpler, which meant you didn't have to use microcode, which at the time allowed higher instruction throughput. Now RISC chips have complex instruction decoding, and both RISC and CISC can either be microcoded or not.
 
The only RISC processor that I like is 32 bit MIPS as all the others are horribly complex.
« Last Edit: June 18, 2014, 11:33:07 AM by psxphill »
 

Offline biggun

  • Sr. Member
  • Join Date: Apr 2006
  • Posts: 397
  • http://www.greyhound-data.com/gunnar/
Re: What's so bad about Intel 8086 in technical terms?
« Reply #111 on: June 18, 2014, 11:19:53 AM »
Quote from: matthey;767019
It looks to me like the 68060 solved many of the 68020+ complexity problems only to be canned. It needed upgrading in some areas (like the instruction fetch) and more internal optimizations (more instructions that worked in both pipes, more instruction fusing/folding, a link stack, etc.) but it was a very solid early foundation to build on. It also would have benefited from a more modern ISA and ditching the transistor misers.


This is absolutely true.

The 68060 did many things right.
The 68060-B, which was planned but never came out, would have been a great chip.

The enhancements you mentioned - fusion, a link stack, folding, conditional rewrite - would have made for super chips.

And with a minimal cleanup, and by ditching some "near useless" stuff, the 68k could have been a great architecture that could easily compete with and beat the others even today.

Offline TeamBlackFox

  • Master SPARC
  • Full Member
  • Join Date: Jan 2014
  • Posts: 220
Re: What's so bad about Intel 8086 in technical terms?
« Reply #112 on: June 18, 2014, 12:33:45 PM »
Quote from: psxphill;767025
The only RISC processor that I like is 32 bit MIPS as all the others are horribly complex.


I've only done a little MIPS64 ASM, but coding for the R14k in my Fuel has been pretty darn easy, and my friend who is learning ARM64 ASM says it's easy too. What's so complex between MIPS32 and MIPS64, other than the extended modes for 64-bit addressing and such?
After many years in the Amiga community I have decided to leave the Amiga community permanently. If you have a question about SGI or Sun computers please PM me and I will return your contact as soon as I can.
 

Offline biggun

  • Sr. Member
  • Join Date: Apr 2006
  • Posts: 397
  • http://www.greyhound-data.com/gunnar/
Re: What's so bad about Intel 8086 in technical terms?
« Reply #113 on: June 18, 2014, 12:53:25 PM »
Quote from: psxphill;767025

The only RISC processor that I like is 32 bit MIPS as all the others are horribly complex.


The original MIPS implementation was very simple but absolutely not future-proof.
Originally, MIPS forced the developer/compiler to take too much CPU-specific information into account.
This meant the original MIPS CPUs could not be properly upgraded / performance-enhanced without breaking all the old programs.

The 68k architecture is much more future-proof.

MIPS learned this too and changed their architecture.

Offline psxphill

Re: What's so bad about Intel 8086 in technical terms?
« Reply #114 on: June 18, 2014, 01:12:13 PM »
Quote from: TeamBlackFox;767033
Whats so complex between MIPS32 and MIPS64 other than the extended modes for 64-bit addressing and such?

I'm sure 64-bit MIPS is better on a technical level (more bits is better, right?), but I just prefer the 32-bit version. I thought that now the thread is derailed, I'd throw in my emotional preference.
 

Offline TeamBlackFox

  • Master SPARC
  • Full Member
  • Join Date: Jan 2014
  • Posts: 220
Re: What's so bad about Intel 8086 in technical terms?
« Reply #115 on: June 18, 2014, 03:16:58 PM »
> I'm sure 64 bit mips is better on a technical level (more bits is better right?), but I just prefer the 32 bit version. I thought maybe now the thread is derailed I'd throw in my emotional preference.

You're right, it is sort of derailed. But if you ever have an interest in trying out a MIPS64 machine, let me know, as I plan on setting up one of my SGIs headless soon. I'd be happy to hand out an ssh account. All of mine are MIPS64, so yeah...
 

Offline freqmax (topic starter)

  • Hero Member
  • Join Date: Mar 2006
  • Posts: 2179
Re: What's so bad about Intel 8086 in technical terms?
« Reply #116 on: June 18, 2014, 05:30:33 PM »
Much software can work within a 32-bit address space, so 64-bit environments may be stuck in some ways with more bits than really needed, which will bloat code.
 

Offline commodorejohn

  • Hero Member
  • Join Date: Mar 2010
  • Posts: 3165
  • http://www.commodorejohn.com
Re: What's so bad about Intel 8086 in technical terms?
« Reply #117 on: June 18, 2014, 05:46:50 PM »
Quote from: freqmax;767056
Much software can work with 32-bit space. So 64-bit environments may be stuck in some ways with more bits than really needed. Which will bloat code.
Depends on the architecture. Not every CPU confines itself to instructions exactly the same length as its word size. These days they try to make it less arbitrary, so as to keep instruction fetch simple (you want your instructions to always be an even divisor of the data-bus size, and to always be aligned on an even boundary, so that one instruction doesn't require two separate fetches), but plenty of 64-bit architectures use 32-bit instruction words.
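The fetch-alignment point can be checked with a little arithmetic. The 8-byte fetch block and instruction sizes below are hypothetical, just to show when an instruction straddles two fetches:

```python
# Sketch: with an 8-byte (64-bit) fetch block and fixed 4-byte instructions
# aligned on 4-byte boundaries, no instruction ever spans two fetch blocks.

FETCH = 8   # hypothetical bytes fetched per cycle
ILEN = 4    # fixed instruction length, an even divisor of FETCH

def straddles(addr, length=ILEN, fetch=FETCH):
    # True if the instruction's first and last bytes fall in different blocks
    return addr // fetch != (addr + length - 1) // fetch

aligned = range(0, 64, ILEN)
print(any(straddles(a) for a in aligned))  # False: each needs one fetch

# A 6-byte variable-length instruction at an awkward offset can straddle:
print(straddles(6, length=6))              # True: it needs two fetches
```

This is why fixed-length, aligned encodings keep the fetch stage simple, at the cost of the code-size bloat discussed above.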
Computers: Amiga 1200, DEC VAXStation 4000/60, DEC MicroPDP-11/73
Synthesizers: Roland JX-10/MT-32/D-10, Oberheim Matrix-6, Yamaha DX7/FB-01, Korg MS-20 Mini, Ensoniq Mirage/SQ-80, Sequential Circuits Prophet-600, Hohner String Performer

"'Legacy code' often differs from its suggested alternative by actually working and scaling." - Bjarne Stroustrup
 

Offline matthey

  • Hero Member
  • Join Date: Aug 2007
  • Posts: 1294
Re: What's so bad about Intel 8086 in technical terms?
« Reply #118 on: June 18, 2014, 10:48:21 PM »
Quote from: freqmax;767056
Much software can work with 32-bit space. So 64-bit environments may be stuck in some ways with more bits than really needed. Which will bloat code.

RISC route:
Make fixed-length instructions to simplify decoding.
Result: more fetch, memory and cache needed for larger code sizes.

Reduce the number of instructions and addressing modes to increase the clock rate.
Result: more instructions, larger programs and hotter processors.

Use separate load/store instructions to simplify decoding and execution.
Result: larger programs, and more registers and OoO execution needed to avoid load/store bubbles.

Move complexity to the compiler.
Result: slower and larger programs needing more caches.

Not enough addressing space and memory because programs are now too big.
Result: move to 64 bit, which slows clock speeds and makes programs even bigger.

Progress!

The other route is to stay with the 32-bit 68k but enhance it, making programs even smaller. This reduces cache, memory and bandwidth requirements. The 68k will never clock as high as some other processors, but it does offer strong single-core/thread integer performance using low resources. Which is a better fit for the Amiga?
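The code-density argument can be illustrated with made-up numbers. The byte counts below are hypothetical, not measured 68k or RISC encodings; they only show how variable-length encodings plus fewer instructions shrink the total fetch and cache footprint:

```python
# Toy code-density comparison (illustrative sizes only).
# A 68k-style variable-length encoding often uses 2-6 bytes per instruction;
# a classic fixed-length RISC encoding uses 4 bytes each, and the same work
# sometimes takes extra instructions (e.g. separate loads and stores).

variable_len = [2, 2, 4, 2, 6, 2, 4]   # hypothetical CISC instruction sizes
fixed_len = [4] * 9                    # same work with two extra RISC insns

print(sum(variable_len))   # 22 bytes
print(sum(fixed_len))      # 36 bytes: more fetch bandwidth and cache needed
```

With a fixed-size instruction cache, the denser encoding simply fits more of the program, which is the resource argument being made for a 32-bit 68k.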
 

Offline NorthWay

  • Full Member
  • Join Date: Jun 2003
  • Posts: 209
Re: What's so bad about Intel 8086 in technical terms?
« Reply #119 on: June 18, 2014, 10:54:23 PM »
Quote from: psxphill;767025
The only RISC processor that I like is 32 bit MIPS as all the others are horribly complex.

Didn't that one have the delay slot after branch instructions that was later considered a dead end?
(I.e. the instruction following a branch was always executed.)
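Yes, that behaviour can be sketched with a tiny toy interpreter. This is not real MIPS, just a minimal model of the delay-slot semantics described above (the branch is simply assumed taken):

```python
# Minimal sketch of a branch delay slot: the instruction immediately after
# a taken branch still executes before control transfers to the target.

def run(program):
    out, pc = [], 0
    while pc < len(program):
        op = program[pc]
        if op[0] == "beq_taken":            # pretend the branch is taken
            out.append(program[pc + 1][1])  # delay-slot instruction runs anyway
            pc = op[1]                      # then control reaches the target
        else:
            out.append(op[1])
            pc += 1
    return out

prog = [
    ("op", "A"),
    ("beq_taken", 4),   # branch to index 4...
    ("op", "B"),        # ...but this delay-slot instruction executes first
    ("op", "skipped"),  # never reached
    ("op", "C"),
]
print(run(prog))   # ['A', 'B', 'C'] -- 'B' ran despite the taken branch
```

Exposing this pipeline detail in the ISA is exactly the kind of CPU-specific assumption mentioned earlier: deeper pipelines need more than one delay slot, so old binaries stop matching the hardware.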