
Author Topic: What's so bad about Intel 8086 in technical terms?  (Read 21217 times)


Offline biggun

  • Sr. Member
  • ****
  • Join Date: Apr 2006
  • Posts: 397
    • Show all replies
    • http://www.greyhound-data.com/gunnar/
Re: What's so bad about Intel 8086 in technical terms?
« on: June 12, 2014, 12:04:24 PM »
Quote from: freqmax;766140
Most people here, I guess, find the Motorola 68000 a really good design given the limitations at the time (economy, tech and market). The Intel 8086 and descendants were a less well-thought-out design. But which specific technical aspects of it were made worse than circumstances would have forced?

I can think of some personal points:
 * Segmentation registers
 * Lacks the "MOVE" instruction?
 etc..



Maybe historically it makes sense to start the comparison with the PDP-11?

The PDP-11 was first.
The 8086 can be regarded as inspired by the PDP-11, but with limitations...
The 68000 can likewise be regarded as inspired by elements of the PDP-11.

The x86 has MOV, but it can only do either "mem to reg" or "reg to mem"; it can NOT do the "mem to mem" form like the 68000.
This is both a limitation and a big advantage for the x86.
Speed-wise, doing two instructions, (mem),reg and reg,(mem),
is the same as doing one (mem),(mem), since the limiting factor is the memory access.
The disadvantage for the x86 here was needing 2 instructions.
This sometimes makes the code a little bigger.
The big advantage was that this simpler encoding
was much shorter, so overall the code could save a lot of space.
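As a sketch (register choices are arbitrary):

```
; 68000: memory-to-memory copy, one instruction
        MOVE.W  (A0),(A1)

; 8086: two instructions, through a register
        MOV     AX,[SI]
        MOV     [DI],AX
```

Both versions still perform one memory read and one memory write, so the bus cost is about the same; the x86 pays only in instruction count here.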

The 68000, being 32-bit, was much more flexible than the 8086.
But the x86 "improved", and when you later compare the 486 with the 68030,
the x86 was not that bad anymore....

Re: What's so bad about Intel 8086 in technical terms?
« Reply #1 on: June 12, 2014, 01:27:14 PM »
Quote from: freqmax;766317
What about little endian on x86? I have always found that really annoying.

Historically little endian was a real advantage.

Let's give an example.

Say you have this number in your register:
$11FF

And you want to add from memory this value
B: dc.w $0001

Say your register is 16 bits, your memory bus is 8 bits, and your ALU can do 8 bits per cycle.

A big-endian machine needs to read the whole memory operand first
= 2 bus cycles
then it can add
= another 2 cycles


In little-endian byte order the value looks like this in memory:
B: dc.w $0100

A little-endian machine can read the first byte, "01", and add it right away, producing a carry out.
In the next cycle it can read the next byte and add it using the carry.
This means the little-endian machine can save 1 cycle compared to the big-endian machine.
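The timing above can be sketched like this (an idealized machine, not any specific chip):

```
; register = $11FF, memory operand B = $0001, 8-bit bus, 8-bit ALU

; big endian, B stored as: 00 01   (high byte first)
;   cycle 1: fetch 00              ; cannot add yet, low byte still unknown
;   cycle 2: fetch 01
;   cycle 3: add low:  FF+01 = 00, carry 1
;   cycle 4: add high: 11+00+1 = 12      -> result $1200

; little endian, B stored as: 01 00  (low byte first)
;   cycle 1: fetch 01
;   cycle 2: add low:  FF+01 = 00, carry 1; fetch 00 in parallel
;   cycle 3: add high: 11+00+1 = 12      -> result $1200
```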
« Last Edit: June 12, 2014, 01:29:16 PM by biggun »
 

Re: What's so bad about Intel 8086 in technical terms?
« Reply #2 on: June 12, 2014, 02:09:19 PM »
Quote from: freqmax;766322
There's no performance-enhancing technique to get around it?
I guess m68k suffers from an extra cycle?


Well, in 1979 this was a small advantage,
back when you had 32-bit registers and 16-bit memory.

Re: What's so bad about Intel 8086 in technical terms?
« Reply #3 on: June 12, 2014, 08:45:32 PM »
Quote from: psxphill;766256

The P4 had a successor that was cancelled.


Yeah, do you recall their marketing talk when they
announced that their next-generation Pentium would reach 10 gigahertz?

Re: What's so bad about Intel 8086 in technical terms?
« Reply #4 on: June 13, 2014, 08:51:01 AM »
Quote from: freqmax;766422
Actually when you need a machine that perform as fast as possible per watt used then ARM beats Intel.
...

The performance depends very much on the task / benchmark you run.
In my experience current Intel cores are generally very good at many different tasks.

Here is a benchmark which stresses the cache and conditional code execution.
The 68060 is not bad for its age.
It's faster than a 240 MHz ColdFire.

While modern Intel chips score very well,
the tested ARM chips were not impressive.

http://www.apollo-core.com/sortbench/index.htm?page=benchmarks

Re: What's so bad about Intel 8086 in technical terms?
« Reply #5 on: June 16, 2014, 02:57:36 PM »
While ARM and PPC are load/store machines, x86 and 68K are CISC machines.


Decoding is much simpler for RISC machines.
This means that if you compare really simple cores which do 1 instruction per cycle, the RISC machine is smaller and needs less power.

Decoding multiple instructions is, in the naive approach, also a lot simpler with RISC machines.
This means developing a superscalar decoder is simpler for RISC.
But Intel, AMD and also new 68K chips have found solutions to quickly decode several instructions per cycle as well.

Now a CISC machine also has several advantages.
1) CISC instructions are much more powerful than RISC instructions.
For example:
ADDi.L #12456,(48,A0,Dn*8)
That is 1 instruction on CISC - some CISC chips can even do it in 1 cycle.
= you need about 6 instructions to do the same on POWER
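A rough POWER expansion of that example (register choices are illustrative; assume r4 holds A0 and r5 holds Dn):

```
; 68k: one instruction
        ADDI.L  #12456,(48,A0,Dn*8)

; POWER sketch, 5-6 instructions:
        slwi    r6, r5, 3           ; Dn * 8
        add     r6, r6, r4          ; + A0
        lwz     r7, 48(r6)          ; load the memory operand
        addi    r7, r7, 12456       ; add the immediate
        stw     r7, 48(r6)          ; store the result back
```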

2) CISC instructions are much more compact.
This means caches can hold more instructions, and the cache can also supply more instructions per cycle to the CPU.

A well-designed CISC machine can do a lot of work per cycle.
It's not easy even for good RISC machines to keep up with this.

RISC has some clear advantages.
RISC chips are easier to design.
At low performance, simple RISC chips need little power.

When you go high end, the more complex CISC decoder is not the only problem anymore:
- instruction cache bandwidth limitations
- dependencies between instructions
Those are the important topics,
and RISC is no advantage there.

EPIC tries to address some of those, but it has its very own pitfalls.

So yes - I can see that ARM has, by design, an advantage in the low-performance region.
But in the high-performance region the problems are different, and RISC is not at an advantage anymore.

Re: What's so bad about Intel 8086 in technical terms?
« Reply #6 on: June 17, 2014, 10:44:18 PM »
Quote from: bloodline;766910
there's no such thing as CISC and RISC anymore, all modern processors are a hybrid of these two concepts.


Err no.

CISC chips = can operate on memory.

RISC chips = are load/store machines and can only operate on registers.


68K and x86 = CISC

MIPS/POWER/ARM = RISC


Whether your chip is internally hardwired, uses microcode, or has a pipeline has nothing to do with CISC or RISC.

Re: What's so bad about Intel 8086 in technical terms?
« Reply #7 on: June 18, 2014, 03:32:42 AM »
Quote from: bloodline;766997
Hahaha, when marketing becomes policy :)

My definition is the definition that CPU designers use ....


You are right that "marketing" often misused the RISC/CISC definitions.
And companies like IBM came up with those definitions for marketing reasons.


Today CPUs are classified as either

1)
LOAD-STORE / or REGISTER-REGISTER / or RISC =
these CPUs can only operate on registers and NOT on memory.

The opposite is
2) the CISC architecture, which can operate on memory.



Quote from: bloodline;766997

the x86 is actually a RISC machine! Since its non-orthogonal ISA often requires one to load data into registers for processing, which is then written back to the main memory.

The x86 can generally use 1 operand from memory.
The 68k can, for some operations, have 2 operands in memory.
A VAX can even have 3 operands in memory.


But a RISC machine can NEVER use an operand in memory.


Quote from: bloodline;766997

PPC an ARM are examples of RISC chips that have woefully complex instruction sets,

Complex?
No, not really...

They have many instructions, and some instructions also take more than 1 cycle.
But their instructions are all regular and not complex, neither in execution nor in decoding.

The main complexity that RISC took away from CISC was the decoding complexity.
The 68k supported instructions of up to 10 bytes in length. - This was difficult enough to decode.
With the 68020, Motorola broke that record and supported even over 20 bytes. - This complexity was a problem which made making the 68k fast really difficult.

The common denominator of CISC chips is the complex address modes.
And the fact that instructions can operate on memory, and sometimes even have more than one operand in memory, made the instructions very complex to decode.
So complex that it became very challenging for CPU developers
to invent decoders which are able to decode more than 1 instruction per cycle.

Not all CISC chips are equally complex to decode.
The 68000 was complex, but instruction size could be determined by decoding 16 bits. This is OK.
The Z chip, IBM's CISC mainframe design - its instruction size can be determined by evaluating only 2 bits. This is nice.
But the added address modes of the 68020+ make it necessary to look at 10 bytes = 80 bits to be able to decode the length. This change was a real big mistake by Motorola.


If you want to understand whether a chip is CISC or RISC, simply check a few points:

Can the chip support 3, 2 or even 1 operands in memory?
RISC can't.

Does the chip allow updating only parts of its registers, in a BYTE/WORD/LONGWORD fashion?
RISC doesn't.

Does it allow full-size immediates encoded in its instructions?
Like 32-bit or 64-bit immediates?
RISC doesn't.
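Concrete instances of those three points (syntax is illustrative):

```
; 1) memory operand
        ADD     AX,[BX]             ; x86: allowed
                                    ; RISC: must load to a register first

; 2) partial register update
        MOV     AL,5                ; x86: writes only the low byte of AX
        MOVE.B  #5,D0               ; 68k: writes only the low byte of D0
                                    ; RISC: writes replace the full register

; 3) full-size immediate in the instruction stream
        ADDI.L  #$12345678,D0       ; 68k: a full 32-bit immediate
                                    ; RISC: typically ~16 bits per instruction
```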


Of course not all chips in one category are the same;
the VAX was more CISCy than all other CISC chips.
The VAX could read two operands from memory, do an operation on them, and store the result as a third operand back to memory. And all this in a single instruction.

The 68k can use 2 memory operands for only a few instructions,
namely ADDX, SUBX, CMPM, MOVE, ABCD, SBCD.

The x86 generally allows only 1 memory operand.


RISC chips do not allow even 1 memory operand.
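The same scale, sketched (labels A, B, C stand for memory locations; register choices are arbitrary):

```
; VAX: three memory operands in one instruction
        ADDL3   A, B, C             ; C = A + B, all three in memory

; 68k: two memory operands, only for a few instructions
        ADDX.L  -(A0),-(A1)         ; predecrement memory form

; x86: at most one memory operand
        ADD     [BX], AX            ; mem += reg

; RISC (POWER-style): zero memory operands - load first, then add
        lwz     r5, 0(r4)
        add     r3, r3, r5
```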

Coding RISC chips is different than coding CISC chips.
With CISC chips you can use immediates easily.
With RISC chips you can only use small immediates embedded in your instruction stream.
All bigger constants you have to reference via a pointer from memory.
All bigger offsets from a pointer you cannot include in your instruction either; you have to create them with extra instructions. The default GCC compiler setting nowadays is the big data model.
This means that pointers to immediates are by default generated with 2 extra instructions.

This means that for something which looks "simple" to a CISC developer, such as
ADD #64bitimmediate,Register

the code generated by default on POWER is:

2 instructions to generate a 32-bit offset
1 instruction to load the data at that offset plus a base pointer into a temp register
1 instruction to add the temp register to the register.
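That four-step sequence might look like this in POWER assembly (the label bigconst and all register choices are hypothetical; r2 is assumed to be the base pointer):

```
        lis     r4, bigconst@h      ; \ 2 instructions to build
        ori     r4, r4, bigconst@l  ; / the 32-bit offset
        ldx     r5, r2, r4          ; load the constant via the base pointer
        add     r3, r3, r5          ; the actual add
```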


If you look at generated code you see many more examples of this.
You see such code very often when you compare SSE instructions with POWER instructions:
x86 needs 1 instruction, directly referencing 1 operand from memory;
POWER needs 4 instructions to do exactly the same work.

You also see this with typical integer code.
Good CISC chips like the 68060 or modern x86 are, clock for clock, very efficient at integer operations.
It is very difficult for RISC chips to keep their pace, as RISC chips need to execute many more instructions to do the same amount of work.

Re: What's so bad about Intel 8086 in technical terms?
« Reply #8 on: June 18, 2014, 07:15:03 AM »
Quote from: freqmax;767007
What would you classify ARM Cortex-M and ARM Cortex-A as?
(presumably v7 and higher)


ARMs are typical RISC chips.

Cortex-M cores are tuned for low power.
Cortex-A cores are available in various types.
Some are very simple in-order RISC designs with pipeline lengths similar to chips from the early 90s.
Some are fancier out-of-order designs with pipeline structures more similar to the PPC G3/G4.

Re: What's so bad about Intel 8086 in technical terms?
« Reply #9 on: June 18, 2014, 08:14:32 AM »
RISC was an invention of a certain time...

There was a golden time when CPU designers tried to make CPU cores which are nice to program in ASM. Great examples are the VAX and 68K.

Then there was a time when chip technology allowed better clock rates,
but some companies failed to reach them because of the complexity of their decoding logic
and the complexity of their internal data paths.
This was the time Motorola cursed some of their 68020 instruction enhancements because they limited the clock rate - and the time some people had the idea of avoiding the problem by inventing new, much simpler decoding schemes.
This was the golden time of the RISC chips.
This was the time RISC chips reached much higher clock rates than CISC.
RISC chips avoided the decoding and memory-data challenges.
RISC chips traded a simpler internal design for sometimes having to execute more instructions to do the same amount of work.

Some of the CISC designs then died; the 68k and VAX are good examples of this.
Some CISC, like x86 and Z, continued and found solutions to the challenge.
Today CISC chips are the chips reaching the highest clock rates again.

Then CPU developers ran into another problem:
instruction dependencies. Neither CISC nor RISC solves this.
This problem limits the amount of superscalarity you can sensibly have.

Again some people had ideas to "fix" this.
The idea was to create big macro instructions.
Keywords are EPIC or VLIW. The Itanium is a chip of this design.

The CISC designs are generally easier to program.
The RISC and EPIC designs came up to avoid challenges of the CISC or CISC/RISC designs.
RISC and EPIC added their own limitations.

Today a major factor is compiler support.
When the Itanium came out it was relatively strong, but also very hard to program and very hard to write a good compiler for; therefore the final performance of the software disappointed many.

Re: What's so bad about Intel 8086 in technical terms?
« Reply #10 on: June 18, 2014, 11:15:08 AM »
Quote from: matthey;767019
But was instruction decoding the clock rate limiting bottleneck on the 68060?
No it was not.

Quote from: matthey;767019
Wasn't the 68060 slower with longer instructions because of fetching and not decoding?
Yes, this was a limit in the 68060-A which they wanted to fix in the 68060-B.

The big problem, also known as "how the f_uc_k can I decode these instructions fast", came before the 68060 did.
During the time of the 68040, Motorola had no good answer to this.
And those years, around the early 90s, were the golden years of RISC.



The 68060 came out late - by this time Intel and Motorola already had solutions for this.


So yes - when the 68060 came out there was no real need for the RISC trick anymore.
In theory Moto could have continued the 68k line at this time.
But customers had already moved to other chips - so the market was lost.
And Moto wanted to focus on the PPC chips.

Re: What's so bad about Intel 8086 in technical terms?
« Reply #11 on: June 18, 2014, 11:19:53 AM »
Quote from: matthey;767019
It looks to me like the 68060 solved many of the 68020+ complexity problems only to be canned. It needed upgrading in some areas (like the instruction fetch) and more internal optimizations (more instructions that worked in both pipes, more instruction fusing/folding, a link stack, etc.) but it was a very solid early foundation to build on. It also would have benefited from a more modern ISA and ditching the transistor misers.


This is absolutely true.

The 68060 did many things right.
The 68060-B, which was planned but never came out, would have been a great chip.

The enhancements you mentioned, like fusion, a link stack, folding and conditional rewrite,
would have made super chips.

And with a minimal cleanup and the ditching of some near-useless stuff, the 68K could have been a great architecture which could easily compete with and beat the others even today.

Re: What's so bad about Intel 8086 in technical terms?
« Reply #12 on: June 18, 2014, 12:53:25 PM »
Quote from: psxphill;767025

The only RISC processor that I like is 32 bit MIPS as all the others are horribly complex.


The original MIPS implementation was very simple but absolutely not future-proof.
Originally MIPS forced the developer/compiler to take too much CPU-specific information into account.
This meant the original MIPS CPU could not be properly upgraded / performance-enhanced without breaking all old programs.

The 68K architecture is much more future-proof.

MIPS learned this too and changed their architecture.
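One well-known example of that exposed CPU detail is the MIPS I load delay slot: the instruction right after a load was not allowed to use the loaded value (register choices are illustrative):

```
        lw      $t0, 0($a0)        ; load word
        nop                        ; MIPS I: load delay slot, $t0 not valid yet
        addu    $t1, $t0, $t2      ; now $t0 may be used

        ; MIPS II and later interlock in hardware, so the nop is
        ; unnecessary - but code scheduled for the original pipeline
        ; had this implementation detail baked in.
```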

Re: What's so bad about Intel 8086 in technical terms?
« Reply #13 on: June 19, 2014, 09:11:48 AM »
Quote from: psxphill;767118

x86 compilers can now use a lot of the fancy CISC instructions, which internally in the CPU is just a macro for a set of RISC instructions anyway.


This description is not wrong, but also not right.
By this description, every CISC CPU ever made uses RISC instructions.

For example:
The 68000 was a CISC CPU.
The 68000 uses microcode for each instruction.
The "micro-code" pieces can be regarded as RISC.
This means the 68000 did an ADD (mem),Reg in microcode as:
* calc EA
* load mem to temp
* add temp to reg

So was the 68000 already a RISC chip?

Re: What's so bad about Intel 8086 in technical terms?
« Reply #14 on: June 19, 2014, 03:51:51 PM »
Quote from: psxphill;767126
No, microcode isn't like RISC. It's just a table that the cpu uses as part of running the standard opcodes.

I know what microcode is.

But how do you know that the microcode lines are not like RISC?
Microcode is a list of micro-instructions, each of which the CPU can do in a single cycle.
Where is the difference from what the "Pentium Pro" does?

If you call a Pentium Pro a RISC CPU with a CISC decoder,
why don't you call a 68000 the same?


Quote from: psxphill;767126
Modern CISC gives you the best of both worlds, because you can completely redefine your RISC architecture every time but you still have
This has nothing to do with modern CISC.
The instructions the programmer sees are always a "compressed" form of the internal signals a CPU needs and uses.

This means the original 68000 might internally use 80-bit-wide instructions.
But the programmer sees only a 16-bit word.

The 68010 might already have changed its internal structure slightly and might have 70 or 85 bits.

A RISC chip like the PowerPC also has internal signals totally different from the opcodes the programmer uses. And every different PPC chip might have slightly different internal signals.

This means every CPU does a decoding from instruction opcodes to internal signals.
And the internal design is different with every CPU generation.

This concept of translating CISC opcodes to an internal format is not new.
Every CISC CPU has done this since the ice age.
« Last Edit: June 19, 2014, 04:07:27 PM by biggun »