Author Topic: What's so bad about Intel 8086 in technical terms?

Offline NorthWay

  • Full Member
  • ***
  • Join Date: Jun 2003
  • Posts: 209
Re: What's so bad about Intel 8086 in technical terms?
« Reply #119 from previous page: June 18, 2014, 10:54:23 PM »
Quote from: psxphill;767025
The only RISC processor that I like is 32 bit MIPS as all the others are horribly complex.

Didn't that one have the delay slot after branch instructions that was later considered a dead-end?
(I.e. the instruction following a branch was always executed.)
 

Offline NorthWay

  • Full Member
  • ***
  • Join Date: Jun 2003
  • Posts: 209
Re: What's so bad about Intel 8086 in technical terms?
« Reply #120 on: June 18, 2014, 11:15:04 PM »
Much of the original thinking behind RISC designs was that compilers had a hard time using all the fancy CISC instructions.

It is not exactly rocket science, then, to speed up the things the compiler actually uses.
Compilers only use simple instructions -> make a CPU with simple instructions and speed those up.

"The perfect number of registers in a CPU are 0, 1, or infinite" - quote from a class I took sometime.
Compilers have no trouble juggling lots of registers and figuring out when to keep values in registers and when to purge them. Many registers were an answer for avoiding trips to memory and for lowering memory pressure in general. With code going in loops it works.

RISC was a lot of good ideas whose advantages have been reduced by advances in manufacturing. RISC and CISC today are both struggling with instruction dependency and the IPC limits you get from that. EPIC and Mill have tried to break through that barrier. EPIC seems to be compiler limited and possibly with too much junk in the trunk, and Mill is so far mostly an idea. I don't know if there are other designs working on this.
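A minimal C sketch of that dependency limit (my example, assuming a superscalar CPU and a compiler that doesn't already unroll the loop): the first loop is one serial add chain, so IPC is capped by the add latency; the second splits the work into four independent chains the scheduler can overlap.

    #include <stddef.h>

    /* Serial: every add depends on the previous one. */
    double sum_serial(const double *a, size_t n)
    {
        double s = 0.0;
        for (size_t i = 0; i < n; i++)
            s += a[i];
        return s;
    }

    /* Four independent accumulators: up to four adds in flight. */
    double sum_unrolled(const double *a, size_t n)
    {
        double s0 = 0.0, s1 = 0.0, s2 = 0.0, s3 = 0.0;
        size_t i;
        for (i = 0; i + 4 <= n; i += 4) {
            s0 += a[i];
            s1 += a[i + 1];
            s2 += a[i + 2];
            s3 += a[i + 3];
        }
        for (; i < n; i++)      /* leftover elements */
            s0 += a[i];
        return (s0 + s1) + (s2 + s3);
    }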
 

Offline matthey

  • Hero Member
  • *****
  • Join Date: Aug 2007
  • Posts: 1294
Re: What's so bad about Intel 8086 in technical terms?
« Reply #121 on: June 19, 2014, 12:10:54 AM »
Quote from: NorthWay;767087
Much of the original thinking behind RISC designs was that compilers had a hard time using all the fancy CISC instructions.

It is not exactly rocket science, then, to speed up the things the compiler actually uses.
Compilers only use simple instructions -> make a CPU with simple instructions and speed those up.


This thinking was a bit naive considering the same RISC fans decided to move the complexity into the compilers that weren't smart enough to use complex instructions. Compilers can and do use complex instructions if they are the best option and fast. CISC often had slow instructions that couldn't be made fast, or whose functionality was either too specialized or not needed. These types of instructions are baggage for any processor, and it's not just CISC that has them. The PPC ISA has its fair share of baggage instructions now.
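A small C example of a compiler picking a complex instruction when it's fast (assuming an x86 target; the generated code in the comment is typical for GCC/Clang at -O2, not guaranteed):

    /* The whole multiply-add folds into one scaled-index LEA,
       e.g.  leal 3(%rsi,%rdi,4), %eax  on x86-64. */
    int addr_calc(int x, int y)
    {
        return x * 4 + y + 3;
    }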

Quote from: NorthWay;767087

"The perfect number of registers in a CPU are 0, 1, or infinite" - quote from a class I took sometime.
Compilers have no trouble juggling lots of registers and figuring out when to keep values in registers and when to purge them. Many registers were an answer for avoiding trips to memory and for lowering memory pressure in general. With code going in loops it works.


Processor logic outpacing memory speeds is another limiting factor for modern processors. More registers do help, but RISC doesn't have as much of an advantage here as would be expected. Processor logic speeds vs memory speeds are not as much of an issue for an FPGA CPU. With less of a limitation here, FPGA processors may be able to do more work in parallel and come surprisingly close to hard processors that are clocked much higher.
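A small C sketch of that memory wall (my example): a dependent pointer chase runs at memory latency no matter how fast the core logic is, which is why a slow-clocked FPGA core with closely coupled memory gives up less than the raw clock ratio suggests.

    /* Each load depends on the previous one, so the loop advances at
       cache/DRAM latency, not at the core's clock rate. */
    struct node { struct node *next; };

    struct node *chase(struct node *p, long steps)
    {
        while (steps-- > 0 && p)
            p = p->next;
        return p;
    }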

Quote from: NorthWay;767087

RISC was a lot of good ideas whose advantages have been reduced by advances in manufacturing. RISC and CISC today are both struggling with instruction dependency and the IPC limits you get from that. EPIC and Mill have tried to break through that barrier. EPIC seems to be compiler limited and possibly with too much junk in the trunk, and Mill is so far mostly an idea. I don't know if there are other designs working on this.


Multi-core and multi-threading are a good way to break through the dependency problems, but the memory limitation remains (to a lesser extent), multi-processing overhead and cache coherency eat up a lot of the gains, and some tasks can't be done in parallel. I think the Mill computer will have the same compiler complexity problems as VLIW processors. Good luck debugging that one when the compiler doesn't work right. I would still take CISC over all the choices even though it has the same limitations. It's simpler and easier to code, with smaller programs. I like that.
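One concrete way coherency eats the gains is false sharing; a hedged C sketch (illustrative only, assuming 64-byte cache lines and POSIX threads):

    #include <pthread.h>

    /* Both counters share one cache line, so the line ping-pongs
       between the cores on every write.  Inserting "char pad[64];"
       between a and b gives each counter its own line and removes
       the coherency traffic. */
    static struct {
        long a;
        long b;
    } counters;

    static void *worker_a(void *arg)
    {
        (void)arg;
        for (long i = 0; i < 100000000L; i++)
            counters.a++;
        return 0;
    }

    static void *worker_b(void *arg)
    {
        (void)arg;
        for (long i = 0; i < 100000000L; i++)
            counters.b++;
        return 0;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, 0, worker_a, 0);
        pthread_create(&t2, 0, worker_b, 0);
        pthread_join(t1, 0);
        pthread_join(t2, 0);
        return 0;
    }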
 

Offline freqmax (Topic starter)

  • Hero Member
  • *****
  • Join Date: Mar 2006
  • Posts: 2179
Re: What's so bad about Intel 8086 in technical terms?
« Reply #122 on: June 19, 2014, 12:40:55 AM »
What embedded CPU choice is a good one these days? There are a few 32-bit designs.. AVR, C28x, ColdFire, CPU32, ETRAX, PowerPC 603e, PowerPC e200, PowerPC e300, M-CORE, MIPS32 M4K, MIPS32 microAptiv MPU, MPC500, PIC, RISC, TLCS-900, TMS320C28x, TriCore, TX19A, etc. And the only VLIW seen in the flesh seems to be the products of Transmeta for an unattractive price. ARM Cortex-M, and to some extent its more demanding counterpart ARM Cortex-A with DMA and external memory, seems to be taking over ever more market segments like a viral octopus. It's in your phone, HDD, photo frame, DSL modem, printer, switch, etc. So it seems to pay to get to know the ARM architecture, even though they enforce their patents a bit too much for my taste. Like on HDL code that implements an ARM processor in an FPGA.

I find it fascinating that these single chips have more power than some Amiga machines. They lack the memory on-chip and the graphics accelerator. But in terms of crunch performance they most likely run circles around many Amiga machines.
 

Offline matthey

  • Hero Member
  • *****
  • Join Date: Aug 2007
  • Posts: 1294
Re: What's so bad about Intel 8086 in technical terms?
« Reply #123 on: June 19, 2014, 01:27:26 AM »
Quote from: freqmax;767093
What embedded CPU choice is a good one these days? There are a few 32-bit designs.. AVR, C28x, ColdFire, CPU32, ETRAX, PowerPC 603e, PowerPC e200, PowerPC e300, M-CORE, MIPS32 M4K, MIPS32 microAptiv MPU, MPC500, PIC, RISC, TLCS-900, TMS320C28x, TriCore, TX19A, etc. And the only VLIW seen in the flesh seems to be the products of Transmeta for an unattractive price.


Embedded has a few VLIW processors for specialized tasks. See the Fujitsu FR-V processors for example:

http://en.wikipedia.org/wiki/FR-V_%28microprocessor%29

They have amazing power efficiency but are very specialized. I recall another VLIW embedded processor also but I can't remember the name. Embedded is about the only place where VLIW processors are used. None are general purpose enough to be well known.
 

Offline freqmax (Topic starter)

  • Hero Member
  • *****
  • Join Date: Mar 2006
  • Posts: 2179
Re: What's so bad about Intel 8086 in technical terms?
« Reply #124 on: June 19, 2014, 02:41:00 AM »
Nothing one wants to code C on?
 

Offline matthey

  • Hero Member
  • *****
  • Join Date: Aug 2007
  • Posts: 1294
Re: What's so bad about Intel 8086 in technical terms?
« Reply #125 on: June 19, 2014, 02:59:33 AM »
Quote from: freqmax;767105
Nothing one wants to code C on?

FR-V has C support with the GNU toolchain and can even run multiple operating systems. I would expect programming to have some similarities to a SIMD processor or GPU (where branches are evil), but I don't really know.
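A tiny C illustration of why branches are evil on such machines (my sketch, not FR-V specific): the branchy version serializes on an unpredictable compare, while the branchless version turns the choice into plain data flow that a wide or SIMD-style core can stream.

    /* Branchy: one unpredictable branch per call. */
    int clamp_branchy(int x, int hi)
    {
        if (x > hi)
            return hi;
        return x;
    }

    /* Branchless: build a mask from the compare and select with
       arithmetic, so there is nothing to mispredict. */
    int clamp_branchless(int x, int hi)
    {
        int mask = -(x > hi);          /* all ones if x > hi, else 0 */
        return (hi & mask) | (x & ~mask);
    }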
 

Offline psxphill

Re: What's so bad about Intel 8086 in technical terms?
« Reply #126 on: June 19, 2014, 08:54:01 AM »
Quote from: NorthWay;767087
Much of the original thinking behind RISC designs was that compilers had a hard time using all the fancy CISC instructions.

Berkeley RISC started because they noticed that when compiling Unix they were only using 30% of the 68000 instruction set. They weren't using all addressing modes of the add instruction, for instance. I don't know if they spent any time on figuring out whether the compiler would have a hard time using them, or even confirming whether the compiler would never use them. Just that compiling Unix didn't use them.
 
They then went on to design a cpu with register windows which the people writing the compilers realised was a terrible design choice. Investing in your compiler and cpu design before you commit to it is very important. Adding instructions because you can come up with a single assembler fragment that performs better is a naïve and essentially terrible idea.
 
They coined the term, but there was prior art. I believe the http://en.wikipedia.org/wiki/CDC_6600 was the earliest example of what inspired RISC. Their motivation was simplifying the hardware so they could have dedicated hardware for parallel processing, a concept that was put to good use in the Amiga.
 
x86 compilers can now use a lot of the fancy CISC instructions, which internally in the CPU are just macros for sets of RISC instructions anyway.
« Last Edit: June 19, 2014, 09:10:21 AM by psxphill »
 

Offline KimmoK

  • Sr. Member
  • ****
  • Join Date: Jun 2004
  • Posts: 319
Re: What's so bad about Intel 8086 in technical terms?
« Reply #127 on: June 19, 2014, 09:06:43 AM »
8086 is an example of an inferior design, made to succeed only with an insane amount of investment circumventing its defects; the rest is history.

Money matters more than anything. With enough money, everything is ok.
- KimmoK
// Windows will never catch us now.
// The multicolor AmigaFUTURE IS NOW !! :crazy:
 

Offline biggun

  • Sr. Member
  • ****
  • Join Date: Apr 2006
  • Posts: 397
    • http://www.greyhound-data.com/gunnar/
Re: What's so bad about Intel 8086 in technical terms?
« Reply #128 on: June 19, 2014, 09:11:48 AM »
Quote from: psxphill;767118

x86 compilers can now use a lot of the fancy CISC instructions, which internally in the CPU are just macros for sets of RISC instructions anyway.


This description is not wrong but also not right.
Under this description every CISC CPU ever made uses RISC instructions.

For example:
The 68000 was a CISC CPU.
The 68000 used microcode for each instruction.
The "micro-code" pieces can be regarded as RISC.
This means the 68000 did an ADD (mem),Reg in microcode as:
* calc EA
* load mem to temp
* add temp to reg

So was the 68000 already a RISC chip?

Offline psxphill

Re: What's so bad about Intel 8086 in technical terms?
« Reply #129 on: June 19, 2014, 11:14:24 AM »
Quote from: biggun;767121
So was the 68000 already a RISC chip?

No, microcode isn't like RISC. It's just a table that the cpu uses as part of running the standard opcodes.
 
Micro-ops are what I was referring to, so the 68060 and anything x86 since the Pentium Pro. Opcodes are fetched, then translated into one or more micro-ops, which are stored in cache and then executed by a dedicated core. In theory they could strip out the front end and let you write in micro-ops, but nobody does that because it is terrible for compatibility. Modern CISC gives you the best of both worlds, because you can completely redefine your RISC architecture every time but you still have backward compatibility with code written twenty years ago.
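A very rough C sketch of that translation step, with invented opcode and micro-op names (nothing here matches any real core's format): the front end expands one memory-operand CISC instruction into a short load/store micro-op sequence for the back end.

    /* Hypothetical micro-ops a load/store back end could execute. */
    enum uop_kind { UOP_LOAD, UOP_ADD };

    struct uop {
        enum uop_kind kind;
        int dst, src;                 /* register numbers */
    };

    /* Front end: expand a made-up "ADD (An),Dn" into micro-ops.
       Register 8 stands in for an internal temporary. */
    static int decode_add_mem_reg(int an, int dn, struct uop out[])
    {
        out[0] = (struct uop){ UOP_LOAD, 8, an };   /* temp <- (An)      */
        out[1] = (struct uop){ UOP_ADD,  dn, 8 };   /* Dn <- Dn + temp   */
        return 2;                                   /* micro-ops emitted */
    }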
« Last Edit: June 19, 2014, 11:20:11 AM by psxphill »
 

Offline freqmax (Topic starter)

  • Hero Member
  • *****
  • Join Date: Mar 2006
  • Posts: 2179
Re: What's so bad about Intel 8086 in technical terms?
« Reply #130 on: June 19, 2014, 03:05:39 PM »
Loading those micro-ops from an internal table instead of RAM will most likely be faster too. One CISC opcode generating several micro-ops internally certainly helps with that memory bottleneck.
 

Offline biggun

  • Sr. Member
  • ****
  • Join Date: Apr 2006
  • Posts: 397
    • http://www.greyhound-data.com/gunnar/
Re: What's so bad about Intel 8086 in technical terms?
« Reply #131 on: June 19, 2014, 03:51:51 PM »
Quote from: psxphill;767126
No, microcode isn't like RISC. It's just a table that the cpu uses as part of running the standard opcodes.

I know what microcode is.

But how do you know that the microcode lines are not like RISC?
Microcode is a list of micro-instructions, each of which the CPU can do in a single cycle.
Where is the difference from what the "Pentium Pro" does?

If you call a Pentium Pro a RISC CPU with a CISC decoder -
why don't you call a 68000 the same?


Quote from: psxphill;767126
Modern CISC gives you the best of both worlds, because you can completely redefine your RISC architecture every time but you still have
This has nothing to do with modern CISC.
The instructions the programmer sees are always a "compressed" form of the internal signals a CPU needs and uses.

This means the original 68000 might internally use 80-bit wide instructions.
But the programmer sees only a 16-bit word.

The 68010 might already have changed its internal structure slightly and might have 70 or 85 bits.

A RISC like PowerPC also has internal signals totally different from the opcodes the programmer uses. And every different PPC chip might have slightly different internal signals.

This means every CPU does a decoding from instruction opcodes to internal signals.
And the internal design is different with every CPU generation.

This concept of translating CISC opcodes to an internal format is not new.
Every CISC CPU has done this since the ice age.
« Last Edit: June 19, 2014, 04:07:27 PM by biggun »
 

Offline Fats

  • Hero Member
  • *****
  • Join Date: Mar 2002
  • Posts: 672
Re: What's so bad about Intel 8086 in technical terms?
« Reply #132 on: June 19, 2014, 10:11:54 PM »
Quote from: psxphill;767118
Berkeley RISC started because they noticed that when compiling Unix they were only using 30% of the 68000 instruction set. They weren't using all addressing modes of the add instruction, for instance. I don't know if they spent any time on figuring out whether the compiler would have a hard time using them, or even confirming whether the compiler would never use them. Just that compiling Unix didn't use them.

...

Their motivation was simplifying the hardware so they could have dedicated hardware for parallel processing, a concept that was put to good use in the Amiga.
 
x86 compilers can now use a lot of the fancy CISC instructions, which internally in the CPU are just macros for sets of RISC instructions anyway.


One thing to realise is that this CISC vs RISC discussion is from a time when the cost of a single transistor was still important. Moving complexity from the chip to the compiler could result in a more cost-effective solution at that time.
In recent times, where cache memories use up the majority of the transistor budget, this reasoning is not valid anymore.

Also, nowadays when extensions to a CPU instruction set are made, I think the compiler side is always included to make sure they can be used effectively.
Trust me...                                              I know what I'm doing
 

Offline psxphill

Re: What's so bad about Intel 8086 in technical terms?
« Reply #133 on: June 20, 2014, 12:06:16 AM »
Quote from: biggun;767143
But how do you know that the microcode lines are not like RISC?

How the 68000 works is in the patent. RISC instructions aren't full of flags like the 68000 microcode is. If anything it's a VLIW processor, but it's not RISC.
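For illustration, a horizontal micro-word can be pictured as a bundle of control fields firing in parallel, something like this C bitfield sketch (field names invented, not the 68000's actual layout):

    /* One made-up horizontal micro-word: many small fields act at
       once, which reads more like VLIW than like a compact RISC
       instruction. */
    struct microword {
        unsigned alu_op      : 4;   /* ALU function select            */
        unsigned src_bus     : 3;   /* which register drives the bus  */
        unsigned dst_latch   : 3;   /* which latch captures the bus   */
        unsigned mem_request : 1;   /* start a bus cycle this tick    */
        unsigned set_flags   : 1;   /* update the condition codes     */
        unsigned next_addr   : 10;  /* address of next micro-word     */
    };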
 
Quote from: biggun;767143
If you call a Pentium Pro a RISC CPU with a CISC decoder -

I didn't, I said the CISC instructions were translated into micro-ops at runtime. Translated means that as each instruction is fetched, the frontend writes a new program and stores it in fast cache RAM, which the backend then fetches, decodes, and executes.
 
Quote from: biggun;767143
This concept of translating CISC opcodes to an internal format is not new.
Every CISC CPU has done this since the ice age.

It's not new, it's been around since the 1990s. But it was new then, and it's a different concept entirely.
« Last Edit: June 20, 2014, 12:53:45 AM by psxphill »
 

Offline commodorejohn

  • Hero Member
  • *****
  • Join Date: Mar 2010
  • Posts: 3165
    • http://www.commodorejohn.com
Re: What's so bad about Intel 8086 in technical terms?
« Reply #134 on: June 20, 2014, 12:50:27 AM »
Quote from: psxphill;767158
I didn't, I said the CISC instructions were translated into micro-ops at runtime. Translated means that as each instruction is fetched the frontend writes a new program and stores it in fast cache ram which the backend then executes.
Does it really actually write the sequence to an internal writable control store? I'd think it would be simpler to just execute directly from an internal ROM, but I guess maybe that wouldn't have been fast enough...?
 
Quote
It's not new, it's been around since the 90's. But it was new then.
It's been around much, much longer than that, actually. Mainframes and minis had been doing it since the '70s at least.
Computers: Amiga 1200, DEC VAXStation 4000/60, DEC MicroPDP-11/73
Synthesizers: Roland JX-10/MT-32/D-10, Oberheim Matrix-6, Yamaha DX7/FB-01, Korg MS-20 Mini, Ensoniq Mirage/SQ-80, Sequential Circuits Prophet-600, Hohner String Performer

"\'Legacy code\' often differs from its suggested alternative by actually working and scaling." - Bjarne Stroustrup