Amiga.org

Amiga computer related discussion => Amiga Hardware Issues and discussion => Topic started by: freqmax on June 10, 2014, 08:04:30 PM

Title: What's so bad about Intel 8086 in technical terms?
Post by: freqmax on June 10, 2014, 08:04:30 PM
Most people here, I guess, find the Motorola 68000 a really good design given the limitations at the time (economy, tech and market). The Intel 8086 and descendants were a less well-thought-out design. But which specific technical aspects of it were made worse than any circumstance would have enforced?

I can think of some personal points:
 * Segmentation registers
 * Lacks the "MOVE" instruction?
 etc..
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: matthey on June 10, 2014, 08:57:23 PM
Quote from: freqmax;766140
Most people here, I guess, find the Motorola 68000 a really good design given the limitations at the time (economy, tech and market). The Intel 8086 and descendants were a less well-thought-out design. But which specific technical aspects of it were made worse than any circumstance would have enforced?


First off, there is a difference between how good the CPU design is and how good the ISA design is. IMO, for its time, the 68000 was an average implementation of a great ISA design. There were 8-bit processors upgraded to 16 bits that were faster than the 68000, although the 68000 had advantages with complex code, large code and ease of optimizing. The 68000 ISA was created with the foresight that it would become a 32-bit processor (32-bit registers and operations). It had no major disadvantages when it became 32 bit.

The x86 started out 8 bits with no foresight for 16 bits. There is a big difference between 8-bit memory banks and 16-bit direct addressing. The 8-bit x86 processors were using 2 variable registers with an accumulator for the result. Only certain registers were allowed to do specific functions. Orthogonality wasn't even conceived of when they were designed. Growing to 16 bits was a big change with 8-bit legacy baggage. Then came 32 bits and extensions, after most of the shorter, efficient instruction encodings were taken. Some common instructions were dropped (INC) and new ones added to replace them (MOVSX/MOVZX). Compatibility modes were created, adding complexity. Some things work and others still seem like bolt-ons and throwbacks to the 8-bit days.

The x86 does have some advantages. The original instructions and addressing modes were generally simple, so they were easy to clock fast. Certain instructions were more common and shorter, which was easier to optimize internally in the CPU. The code density was good, especially for 8-bit text. The 8-bit x86 processors weren't bad 8-bit processors. The ISA suffered from the 8->16->32 bit growth, while the 68k was designed for 32 bits from the beginning. Ironically, the x86 has been developed into the fastest consumer processor while the 68k was dropped after only 1 major ISA update.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: bloodline on June 10, 2014, 09:43:03 PM
Quote from: matthey;766146
Ironically, the x86 has been developed into the fastest consumer processor while the 68k was dropped after only 1 major ISA update.

Not sure if the AMD64 really counts as x86... I know it has full x86 compatibility, but the ISA is so far removed from the old x86 design, I would call it a new architecture.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: NorthWay on June 10, 2014, 11:15:40 PM
In an era when people still hand-coded in asm, the 68K had room to breathe and express ideas, while the x86 was so cramped it gave you headaches.

Use a high-level language and it doesn't matter really.
A good architecture will probably go faster than a worse one. Throw billions of transistors at it and architecture disappears in the tech fog.
When your transistor budget was limited it was hard and expensive to transform something bad to something good. Nowadays you don't see the stuff that makes it go fast.
You don't want to try to write optimal assembler code these days, that is better left to the compiler.

How times have changed.
There is however someone trying to do both architecture and technology at the same time to go faster - search for "The Mill". Or just go to their site at http://millcomputing.com/
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: commodorejohn on June 11, 2014, 12:00:10 AM
Ugly segmented architecture, slow memory addressing coupled with a smallish register file, just enough orthogonality to trick you into thinking orthogonally only to be confronted with all the stupidly special-purpose register functions, and the added insult of all of that being to achieve not even binary compatibility with the 8080, but only source-compatibility. Bleah.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: psxphill on June 11, 2014, 12:31:02 AM
Quote from: freqmax;766140
* Segmentation registers
* Lacks the "MOVE" instruction?

Segmentation registers are somewhat similar to address registers in the 68000.
It has MOV instead of MOVE.
 
The main benefit the 68000 had was loads of registers, and they were mostly orthogonal.
 
The main benefit of the 8086 was that it was source compatible with earlier Intel processors and it was cheap.
 
The 8086 was the kind of chip that Commodore would have put out in its 8-bit days.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: matthey on June 11, 2014, 12:48:34 AM
Quote from: bloodline;766150
Not sure if the AMD64 really counts as x86... I know it has full x86 compatibility, but the ISA is so far removed from the old x86 design, I would call it a new architecture.


Adding 8 somewhat general-purpose 16-bit registers and 16-bit addressing instead of segmented memory banks may have been a bigger ISA change than the jump from 16 bit to 32, or even 32 bit to 64 bit with x86_64. It's true that the x86_64 ISA is so different from the original 8-bit x86 that it really isn't the same thing anymore. I would call it a new architecture also, but I would do the same for any new ISA (Instruction Set Architecture).
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: danbeaver on June 11, 2014, 01:18:19 AM
And the rationale for IBM using an 8088 rather than the 8086? Well, other than the usual...
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: commodorejohn on June 11, 2014, 02:15:39 AM
Quote from: danbeaver;766178
And the rationale for IBM using an 8088 rather than the 8086? Well, other than the usual...
Just the usual: 8-bit support components were cheaper than 16-bit ones.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: persia on June 11, 2014, 03:38:22 AM
A Core i7 is so far removed from an 8086 it isn't funny.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: freqmax on June 11, 2014, 03:42:31 AM
Are you sure?

The P4 was at least an expensive air heater.. hot air if you like ;)
And then marketing seemed to learn that performance, not Hz, is the measure that the market uses.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: persia on June 11, 2014, 06:03:05 PM
The original Core Duos were basically two Pentium 4s on a chip; the i series developed out of this but fixed many of the Pentium core issues. But yeah, somewhere deep in the cores lie remnants of the old 8086...
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: commodorejohn on June 11, 2014, 06:51:27 PM
Actually, the Core line was derived from the Pentium M, which was basically a Pentium III with a DDR bus. The Pentium 4s were NetBurst, which was so bad that they pretty much abandoned it after that.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: bloodline on June 11, 2014, 06:56:54 PM
Quote from: persia;766227
The original Core Duos were basically two Pentium 4s on a chip; the i series developed out of this but fixed many of the Pentium core issues. But yeah, somewhere deep in the cores lie remnants of the old 8086...


Actually the Core Duo and later processors were developed from the Pentium M... which was itself developed from the Pentium III. The Pentium 4, known as NetBurst, was discontinued.

I doubt there is anything of the 8086 in there, except the real mode emulator that the chip boots in... Hmmm, that's only true if the system is using an IBM PC compatible BIOS... All my x86 machines use EFI, which AFAIK has no real mode code in it.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: psxphill on June 11, 2014, 10:30:29 PM
Quote from: bloodline;766236
The Pentium 4, known as NetBurst, was discontinued.

Intel knew they had a problem with the P4 before it launched. They hacked the design up to make it work, but that unbalanced the chip so that a lot of the design decisions no longer made sense. Hyper-Threading allowed more of the chip to be utilised and they managed to hold on long enough for the Core series to make it out of the door.
 
The P4 had a successor that was cancelled.
 
Quote from: bloodline;766236

I doubt there is anything of the 8086 in there, except the real mode emulator that the chip boots in... Hmmm, that's only true if the system is using an IBM PC compatible BIOS... All my x86 machines use EFI, which AFAIK has no real mode code in it.

They all boot in real mode, EFI just switches to protected mode within the first few instructions.
 
No matter what mode it's in, the CPU is executing instructions by translating them. So if you class real mode as an emulator then protected or x64 mode is also an emulator.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: bloodline on June 11, 2014, 10:49:25 PM
Quote from: psxphill;766256

They all boot in real mode, EFI just switches to protected mode within the first few instructions.


Yes, I think you're right. I was wondering if there might be some non-resettable fuses that they might burn at the factory to disable real mode booting... but I guess it just wouldn't be worth it.

Quote

No matter what mode it's in, the CPU is executing instructions by translating them. So if you class real mode as an emulator then protected or x64 mode is also an emulator.


Emulator isn't the right word, obviously; when the CPU is running in whichever mode, it's using a decoder for that ISA... I used the term emulator as I'm not sure how closely the x86-64 architecture maps to the 8086, but a quick glance at the AMD docs shows clearly that a real mode decoder would fit comfortably in there.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: Hans_ on June 11, 2014, 11:04:12 PM
Quote from: freqmax;766140
Most people here, I guess, find the Motorola 68000 a really good design given the limitations at the time (economy, tech and market). The Intel 8086 and descendants were a less well-thought-out design. But which specific technical aspects of it were made worse than any circumstance would have enforced?

I can think of some personal points:
 * Segmentation registers
 * Lacks the "MOVE" instruction?
 etc..


IIRC, the biggest problem with the 8086 was having memory divided up into segments/banks. This allowed the 16-bit CPU to access more than 64K,** but meant that programmers had to be very careful with setting the register that selects the bank before accessing memory. This made it a nightmare to program. To quote one of my university lecturers, "this set back programming by 10 years."
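To make the arithmetic concrete, here's a rough sketch in C (illustrative only; the function name is made up): the 8086 forms a 20-bit physical address as segment * 16 + offset, so many different segment:offset pairs name the same byte.

    #include <stdint.h>
    #include <stdio.h>

    /* 8086 real mode: physical address = (segment << 4) + offset */
    static uint32_t phys(uint16_t seg, uint16_t off)
    {
        return ((uint32_t)seg << 4) + off;   /* 20-bit result */
    }

    int main(void)
    {
        /* Two different segment:offset pairs, one physical byte: */
        printf("%05X\n", (unsigned)phys(0x1234, 0x0005));   /* prints 12345 */
        printf("%05X\n", (unsigned)phys(0x1000, 0x2345));   /* prints 12345 */
        return 0;
    }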

Fortunately, segmented/banked memory is long gone now.

Hans


** To put it another way, this allowed the CPU to access more memory without the expense of going to a 32-bit architecture.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: commodorejohn on June 12, 2014, 12:40:45 AM
Quote from: bloodline;766261
Emulator isn't the right word, obviously; when the CPU is running in whichever mode, it's using a decoder for that ISA... I used the term emulator as I'm not sure how closely the x86-64 architecture maps to the 8086, but a quick glance at the AMD docs shows clearly that a real mode decoder would fit comfortably in there.
It's a bit of a blurry issue in any case, since x86 has been microcode running on top of a RISC microarchitecture since the Pentium Pro. Either that counts as an "emulator" and everything is emulated, or real mode is just another non-emulation mode of the processor - take your pick.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: Hans_ on June 12, 2014, 05:35:22 AM
Quote from: commodorejohn;766266
It's a bit of a blurry issue in any case, since x86 has been microcode running on top of a RISC microarchitecture since the Pentium Pro. Either that counts as an "emulator" and everything is emulated, or real mode is just another non-emulation mode of the processor - take your pick.

Microcode is normally seen as just another method of implementing a CPU. It's a method that's very common for CISC architectures. The 68000 CPU is also microcode based,** as is the original 8086. So, having RISC-like microcode doesn't make it an emulator. After all, otherwise you'd have to say that the 8086 CPU emulates the 8086 instruction set. ;)

Hans


** IIRC, only the 68060 is hardwired; the rest of the 68K CPUs are microcode based.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: commodorejohn on June 12, 2014, 05:47:43 AM
That's an entirely reasonable take on it, yep.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: psxphill on June 12, 2014, 09:46:16 AM
Quote from: Hans_;766262
but meant that programmers had to be very careful with setting the register that selects the bank before accessing memory. This made it a nightmare to program. To quote one of my university lecturers, "this set back programming by 10 years."

Segmented memory made address calculation more complex, but you have to be very careful setting any register before accessing memory on any CPU.
 
MOVE.W 6(A6), D0

If you don't set A6 correctly on the 68000, it will fail as badly as not setting the DS/ES segment correctly on the 8086.
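In C terms (a contrived sketch, nothing Amiga-specific; the struct and names are made up): a base register you forgot to load is just a bad base pointer, whatever the CPU.

    struct Lib { short pad[3]; short version; };   /* version lives at offset 6 */

    short get_version(const struct Lib *a6)
    {
        return a6->version;   /* roughly MOVE.W 6(A6),D0 - garbage if a6 is wrong */
    }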
 
Your lecturer's quote sounds like hyperbole. The 8086 came out in 1978, the 68000 came out in 1979. There weren't any microprocessors ten years prior to that. Minicomputers and mainframes were still using segmented memory then. I'd be interested in which two years he thinks it was set back from and to, and why.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: darkcoder on June 12, 2014, 10:20:21 AM
Quote from: psxphill;766174
Segmentation registers are somewhat similar to address registers in the 68000.


IMHO, it's not fair to say that. The differences are much more important than the similarities.
Using Ax as the base register of a segment is one of the addressing modes of the 68000, while segment registers are *always* added to the "logical address" (using x86 parlance). This means that segmentation is an option for the 68000 while it is a constraint for the 8086.

Plus, 68000 address registers are much more similar to GPRs: you can use them as stack pointers, memory pointers, data variables.

Moreover, with the 80286 protected mode, segmentation registers can be protected and the semantics change completely, so it is clear that they have a very different function than that of 68000 address registers. But maybe this is offtopic since the OP asked about the 8086.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: itix on June 12, 2014, 11:25:16 AM
Quote from: Hans_;766262
Fortunately, segmented/banked memory is long gone now.

Such techniques are still used in some operating system you know very well ;-)

Alas, on the C64 you had to use memory banks to access all 64K of RAM, but for some reason the 6502 and its descendants are not considered "bad designs" like the 8088/8086. Certainly the 8088 was not too nice to program for, but when I did some coding on a 486 in Turbo Pascal it was not bad at all.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: biggun on June 12, 2014, 12:04:24 PM
Quote from: freqmax;766140
Most people here, I guess, find the Motorola 68000 a really good design given the limitations at the time (economy, tech and market). The Intel 8086 and descendants were a less well-thought-out design. But which specific technical aspects of it were made worse than any circumstance would have enforced?

I can think of some personal points:
 * Segmentation registers
 * Lacks the "MOVE" instruction?
 etc..



Maybe historically it might make sense to start the comparison with the PDP-11?

The PDP-11 was first.
The 8086 could be regarded as inspired by the PDP-11, but with limitations...
The 68000 could also be regarded as inspired by elements of the PDP-11.

The x86 has MOV, but it can only do either "mem to reg" or "reg to mem"; it can NOT do "mem to mem" like the 68000.
This is both a limitation and a big advantage for the x86.
Speed-wise, doing two instructions (mem),reg and reg,(mem) is the same as doing one (mem),(mem), as the limiting factor is the memory access.
The disadvantage for the x86 here was that 2 instructions were needed.
This sometimes makes the code a little bigger.
The big advantage was that this simpler encoding was much shorter, so the code could save a lot of space.

The 68000, being 32-bit, was much more flexible than the 8086.
But the x86 "improved", and when you later compare the 486 and the 68030 -
the x86 was not that bad anymore....
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: commodorejohn on June 12, 2014, 12:32:32 PM
Quote from: psxphill;766301
Segmented memory made address calculation more complex, but you have to be very careful setting any register before accessing memory on any CPU.
 
MOVE.W 6(A6), D0

If you don't set A6 correctly on the 68000, it will fail as badly as not setting the DS/ES segment correctly on the 8086.
 
Your lecturer's quote sounds like hyperbole. The 8086 came out in 1978, the 68000 came out in 1979. There weren't any microprocessors ten years prior to that. Minicomputers and mainframes were still using segmented memory then. I'd be interested in which two years he thinks it was set back from and to, and why.
The big problem wasn't even with assembler - as you point out, one more register to set isn't a huge deal when you're already directly working with registers all the time. The big problem was that it did awful things to higher-level languages - most 8086 compilers have distinct memory models you have to select based on whether you want more than 64KB for code, data, stack, or any combination thereof, and have distinct types for near (same-segment, 16-bit) or far (different-segment, 32-bit) pointers - which plays merry hell with C, where pointers and pointer manipulation are a way of life. They also frequently didn't allow arrays larger than 64KB (not a big deal in C, where arrays are just a special case of pointer and you can just use far pointer arithmetic almost interchangeably, albeit at a speed penalty, but a much bigger problem in less flexible languages.) And thanks to the incorporation of this kind of thing into the Windows API, programmers were stuck dealing with it even when the 386 had made it theoretically obsolete - it wasn't until Windows 95 that the stink of segmented addressing was finally washed off.
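For anyone who never suffered it, a sketch of what that dialect looked like (this assumes a 16-bit DOS compiler such as Turbo C; 'far' is a compiler extension, not standard C, and the 0xB800 segment is just the classic text-mode example):

    char buf[100];                               /* near data, in the default data segment */
    char far *video = (char far *)0xB8000000L;   /* far pointer: segment B800, offset 0000 */

    void demo(void)
    {
        char *np = buf;    /* near pointer: a bare 16-bit offset  */
        video[0] = 'A';    /* far access: explicit segment:offset */
        np[0]    = 'B';    /* near access: offset only, DS assumed */
        /* Mixing them is where the pain starts: the two pointer types
           have different sizes and arithmetic, and two far pointers to
           the same byte can still compare unequal unless normalized. */
    }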

Also, some minicomputers and mainframes did use segmented memory, but most of them weren't designed that way from the ground up the way the 8086 was (the PDP-11, for example, originally only had a 16-bit address bus, and wasn't provided with an MMU until later - same goes for the TI-990.) They're kludgy because that functionality was actually a kludge; the 8086 was weird and balky right from day one.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: freqmax on June 12, 2014, 01:10:45 PM
@psxphill, What's so special about A6 on m68k?

Quote from: darkcoder;766304
Moreover, with the 80286 protected mode, segmentation registers can be protected and the semantics change completely, so it is clear that they have a very different function than that of 68000 address registers. But maybe this is offtopic since the OP asked about the 8086.


I find the 286 etc interesting too ;)

Quote from: itix;766305
Alas, on the C64 you had to use memory banks to access all 64K of RAM, but for some reason the 6502 and its descendants are not considered "bad designs" like the 8088/8086.

The 6502/6510 didn't have any segmentation to wreck the program flow at its very core. ROM switching was more of an opportunity to get access to all that 64 kB of RAM.

Quote from: itix;766305
Certainly the 8088 was not too nice to program for, but when I did some coding on a 486 in Turbo Pascal it was not bad at all.

Coding in any high-level language tends to isolate the user from the intricacies of the CPU.

What about little endian on x86? I have always found that really annoying.
(oh and that segment vs pointer collision in C was hell)
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: biggun on June 12, 2014, 01:27:14 PM
Quote from: freqmax;766317
What about little endian on x86? I have always found that really annoying.

Historically, little endian was a real advantage.

Let's give an example:

Let's say you have this number in your register:
$11FF

And you want to add this value from memory:
B: dc.w $0001

Let's say your register is 16-bit, your memory bus is 8-bit, and your ALU can do 8 bits per cycle.

A big endian machine needs to read the memory content
= 2 bus cycles
then it can add
= this takes another 2 cycles

For little endian the value looks like this in memory (the same $0001, low byte first):
B: dc.w $0100

A little endian machine can read the first byte "01" and add it right away, then it gets a carry out.
In the next cycle it can read the next byte and add it using the carry.
This means the little endian machine can save a cycle compared to the big endian machine.
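A toy version of that in C (illustrative only, not cycle-accurate): with an 8-bit ALU, the little-endian layout lets you add each byte as it arrives and carry into the next.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint8_t reg[2] = { 0xFF, 0x11 };   /* $11FF, low byte first */
        uint8_t mem[2] = { 0x01, 0x00 };   /* $0001, stored little endian */
        unsigned carry = 0;

        for (int i = 0; i < 2; i++) {      /* one bus read + one 8-bit add each time */
            unsigned sum = reg[i] + mem[i] + carry;
            reg[i] = (uint8_t)sum;
            carry  = sum >> 8;
        }
        printf("$%02X%02X\n", reg[1], reg[0]);   /* prints $1200 */
        return 0;
    }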
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: freqmax on June 12, 2014, 02:05:09 PM
There's no performance-enhancing technique to get around it?
I guess the m68k suffers from an extra cycle?
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: biggun on June 12, 2014, 02:09:19 PM
Quote from: freqmax;766322
There's no performance-enhancing technique to get around it?
I guess the m68k suffers from an extra cycle?


Well, in 1979 this was a small advantage, when you had 32-bit registers and 16-bit memory.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: psxphill on June 12, 2014, 02:32:39 PM
Quote from: commodorejohn;766315
The big problem was that it did awful things to higher-level languages - most 8086 compilers have distinct memory models you have to select based on whether you want more than 64KB for code, data, stack, or any combination thereof, and have distinct types for near (same-segment, 16-bit) or far (different-segment, 32-bit) pointers - which plays merry hell with C, where pointers and pointer manipulation are a way of life.

I never had that much of a problem with memory models in C. I thought SAS/C also had the concept of near and far pointers.
 
Quote from: commodorejohn;766315
Also, some minicomputers and mainframes did use segmented memory, but most of them weren't designed that way from the ground up the way the 8086 was (the PDP-11, for example, originally only had a 16-bit address bus, and wasn't provided with an MMU until later - same goes for the TI-990.) They're kludgy because that functionality was actually a kludge; the 8086 was weird and balky right from day one.

The 8086 was mainly designed by one person & based in part on the 8085 architecture. You can call it a kludge all you want but the compromises made it commercially successful.
 
The iAPX 432 was their proper CPU, which was a failure (as proper projects tend to be).
 
The history of the 80xx series is interesting: http://research.microsoft.com/en-us/um/people/gbell/computer_structures_principles_and_examples/csp0631.htm The 8008 that started the series off was a contract to put a TTL CPU into a single package; I think Intel just did the design shrink and the manufacture was done by TI.
 
Quote from: freqmax;766322
There's no performance-enhancing technique to get around it?
I guess the m68k suffers from an extra cycle?

The 68000 doesn't allow unaligned access for 16-bit fetches, so all 16 bits will be available in one read. I don't know if the 68008 stalls until the 16 bits are fetched or whether it can start working with partial results; I suspect it stalls, though.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: itix on June 12, 2014, 02:37:40 PM
Quote from: freqmax;766317

What about little endian on x86? I have always found that really annoying.


Sometimes, when you have to edit memory layout manually, it is. Like ARGB pixmaps actually being BGRA pixmaps. :-)

But in normal coding you never stumble on endianness.
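A quick C illustration of that (assumes a little-endian machine like x86):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t argb = 0xAA112233;   /* A=AA R=11 G=22 B=33 */
        const uint8_t *p = (const uint8_t *)&argb;

        /* On little-endian x86 this prints 33 22 11 AA, i.e. B G R A: */
        printf("%02X %02X %02X %02X\n", p[0], p[1], p[2], p[3]);
        return 0;
    }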
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: commodorejohn on June 12, 2014, 02:59:14 PM
Quote from: psxphill;766324
I never had that much of a problem with memory models in C.
Well good for you, then.
 
Quote
The 8086 was mainly designed by one person & based in part on the 8085 architecture. You can call it a kludge all you want but the compromises made it commercially successful.
I never argued that it wasn't commercially successful - it's just ugly.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: psxphill on June 12, 2014, 05:32:12 PM
Quote from: commodorejohn;766326
I never argued that it wasn't commercially successful - it's just ugly.

People wanted a more advanced chip that could have CP/M software easily ported to it and then easily multitask. The 8086 is a good design for that requirement. The original IBM PC came with either 16K or 64K of RAM, so segments were still not a major issue.
 
I don't think anyone predicted how successful the PC would be and how important backward compatibility would become. Until DOS extenders came along you had to deal with segments, expanded and extended memory. Any decent programmer wasn't spending an appreciable amount of time on that, though.
 
It was successful because it was good enough. People don't care that programmers have to spend an extra 5 minutes a day writing their code.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: commodorejohn on June 12, 2014, 05:52:53 PM
I also never claimed that there weren't reasons for the ugliness - but it's still ugly.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: psxphill on June 12, 2014, 07:52:39 PM
Quote from: commodorejohn;766336
I also never claimed that there weren't reasons for the ugliness - but it's still ugly.

It was more complex for certain things, & those things aren't always necessary even in big programs. The 8086 wasn't designed for large data, because the applications it was designed for weren't using a lot of RAM at the time.
 
Ugly is an emotive term and we're having a technical discussion.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: commodorejohn on June 12, 2014, 08:04:27 PM
Ugly is an emotive description of technical design compromises. I'm not going to play at being some kind of impassive robot thing.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: biggun on June 12, 2014, 08:45:32 PM
Quote from: psxphill;766256

The P4 had a successor that was cancelled.


Yeah, do you recall their marketing talks when they announced that their next-gen Pentium would reach 10 gigahertz?
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: psxphill on June 12, 2014, 08:58:49 PM
Quote from: commodorejohn;766348
I'm not going to play at being some kind of impassive robot thing.

Then your value in any technical discussion is going to be limited.
 
Like if we were talking about Ford cars, you'd be screaming at the top of your lungs about how Ferraris are so much better.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: whabang on June 12, 2014, 09:08:10 PM
Quote from: biggun;766357
Yeah, do you recall their marketing talks when they announced that their next-gen Pentium would reach 10 gigahertz?

It is definitely technically possible - I mean, they did sell a few machines with factory-overclocked P4EEs running at over 4 GHz. Add the fact that NetBurst was fairly decent as long as you didn't try to use it for too many things (ridiculously long pipelines) and you have a decent CPU for video encoding and so forth.

Not that it'd be of much use today, when we'll just do it with the video card instead.

The problem with scaling was that the power consumption and heat emission were quite insane. Some of the later NetBurst CPUs have been clocked extremely high, and it would be expected that they could have increased speeds even further if they'd continued developing the architecture.

That being said, I'd rather stay at 2.5-3 GHz with a few extra cores if it saves power compared to a 10 GHz single core.

As for the 8086, I dunno. I'm not old enough. :D
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: commodorejohn on June 12, 2014, 09:13:01 PM
Quote from: psxphill;766359
Then your value in any technical discussion is going to be limited.
 
Like if we were talking about Ford cars, you'd be screaming at the top of your lungs about how Ferraris are so much better.
That's not even remotely how that works. I don't have an opinion on technology because I have emotional reactions to it, I have emotional reactions to it because I have an opinion on it - and one that I've quite clearly elucidated on in this very thread. That's the best any squishy, organic, hormonal human being can manage.

But whatever, you just keep insisting that the fact that I display emotional reactions proves that my assessment of the technical issues that induce those reactions must be wrong, Mr. Spock.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: psxphill on June 12, 2014, 10:03:11 PM
Quote from: commodorejohn;766362
But whatever, you just keep insisting that the fact that I display emotional reactions proves that my assessment of the technical issues that induce those reactions must be wrong, Mr. Spock.

I don't know what the assessment of the technical issues is, because you just keep repeating that it's ugly. That isn't a technical term for why the 8086 is "so bad", and how much of an emotional reaction you have says more about you than about the design of the 8086. And yes, emotions generally do make you make incorrect assessments. Which is why people end up in harmful relationships, or become addicted to drink/illegal substances/etc.
 
Many people can cope with discussing things rationally, not just fictional science officers.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: commodorejohn on June 12, 2014, 10:07:28 PM
Quote from: psxphill;766368
I don't know what the assessment of the technical issues is, because you just keep repeating that it's ugly.
For those who (http://amiga.org/forums/showpost.php?p=766169&postcount=5) weren't paying attention... (http://amiga.org/forums/showpost.php?p=766315&postcount=25)

Quote
That isn't a technical term for why the 8086 is "so bad", and how much of an emotional reaction you have says more about you than about the design of the 8086.
Which is why I provided my technical assessment in those posts that you evidently either didn't read or forgot about before claiming that I only railed against ugliness in purely emotive terms.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: ElPolloDiabl on June 12, 2014, 10:10:41 PM
8086: an ugly machine
RISC CPU: A streamlined work of art

Those are the type of comments people made about them.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: Hans_ on June 12, 2014, 10:49:43 PM
Quote from: psxphill;766301
Segmented memory made address calculation more complex, but you have to be very careful setting any register before accessing memory on any CPU.
 
MOVE.W 6(A6), D0

If you don't set A6 correctly on the 68000, it will fail as badly as not setting the DS/ES segment correctly on the 8086.
I don't understand your comparison. Of course you have to use the right address. However, at least you don't have the "oops, I'm writing to the wrong memory bank" problem that segmented memory creates.
 
Quote from: psxphill;766301
Your lecturer's quote sounds like hyperbole. The 8086 came out in 1978, the 68000 came out in 1979. There weren't any microprocessors ten years prior to that. Minicomputers and mainframes were still using segmented memory then. I'd be interested in which two years he thinks it was set back from and to, and why.
Naturally it's hyperbole, but one that was made to illustrate a point. Also, we're talking about a guy who had used minicomputers and mainframes. Computer programming didn't begin with the first microprocessor...

EDIT: He may have said that IBM's decision to use the 8086 for the PC set back programming by about 10 years. It was a while ago, so I can't remember exactly what he said.

Hans
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: Hans_ on June 12, 2014, 11:23:22 PM
Quote from: itix;766305
Such techniques are still used in some operating system you know very well ;-)

I was expecting someone to make a snarky comment like this...

Yes, Extended Memory Objects bear a strong resemblance to segmented memory/memory banks, and it is a compromise.

However, unlike memory banks, mapping in an ExtMem object does not redirect the entire address space to the newly mapped memory. So accesses to all 2GiB of "normal" RAM go unimpeded; no bank switching necessary. That does make it less problematic, although developers should still heed Hans-Joerg's advice to treat it more like a file that's accessed by offset rather than as RAM.

Quote from: itix;766305
Alas, on the C64 you had to use memory banks to access all 64K of RAM, but for some reason the 6502 and its descendants are not considered "bad designs" like the 8088/8086. Certainly the 8088 was not too nice to program for, but when I did some coding on a 486 in Turbo Pascal it was not bad at all.

Possibly because the "C64 was cool." Or maybe because the 6502 & descendants didn't go on to become the core of mainstream computers.

Yes, a compiler can take care of the bank switching for you. Plus, the 80486 also had a mode allowing full 32-bit RAM access without switching.

Hans
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: commodorejohn on June 12, 2014, 11:38:59 PM
Quote from: Hans_;766386
Yes, a compiler can take care of the bank switching for you. Plus, the 80486 also had a mode allowing full 32-bit RAM access without switching.
As did the 386 - unfortunately, it took a good long while for the OS to catch up.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: itix on June 13, 2014, 12:07:52 AM
Quote from: Hans_;766386

Possibly because the "C64 was cool." Or maybe because the 6502 & descendants didn't go on to become the core of mainstream computers.


There was a 16-bit variant which had two operation modes, emulation mode and native mode (like the 80286 had real mode and protected mode), and yet another variant was used by Nintendo in the SNES.

Yes I see you only mentioned mainstream computers ;-)

Quote

Yes, a compiler can take care of the bank switching for you. Plus, the 80486 also had a mode allowing full 32-bit RAM access without switching.


The 80386 had that already. The 386 was actually quite a decent chip, unlike its predecessor, the 80286.

But to me, as a developer, it does not matter if there is bank switching involved or if the CPU is using a slower base+index segment memory model. Only the user experience is important.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: Hans_ on June 13, 2014, 12:38:53 AM
Quote from: itix;766398
The 80386 had that already. The 386 was actually quite a decent chip, unlike its predecessor, the 80286.

Yes, the 386 already had it. Intel had a hard time getting people to use it, but the capability was there.

Quote from: itix;766398
But to me, as a developer, it does not matter if there is bank switching involved or if the CPU is using a slower base+index segment memory model. Only the user experience is important.


AFAIK, it did hurt the user experience early on due to the inevitable higher occurrence of bugs that it caused. It's easy to be dismissive when you're used to using compilers that hide these kinds of complexities and don't need to write hand-optimised assembly, but programmers weren't always so well equipped.

Hans
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: commodorejohn on June 13, 2014, 01:01:57 AM
Quote from: Hans_;766401
Yes, the 386 already had it. Intel had a hard time getting people to use it, but the capability was there.
Well, they had a hard time getting DOS and Windows to use it, on account of the PC BIOS, MS-DOS, and pre-NT/95 Windows being designed for real mode. Xenix supported protected mode all the way back in 1987.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: bbond007 on June 13, 2014, 01:19:12 AM
I had a 286 at 20 MHz. The 386SX or 386DX were what most people were getting, but I was trying to save a little money. Even with the crappy instruction set it was very fast for a lot less money than about anything else. I really just ran DOS games and Turbo Pascal, BBS programs.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: persia on June 13, 2014, 02:56:52 AM
Really, at this point in time there is no competition on the desktop/laptop. The i5 and i7 far outclass anything offered by ARM, and there are no other practical competitors left.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: commodorejohn on June 13, 2014, 04:13:32 AM
ARM's getting to the point of being quite adequate for users who don't require a high-performance gaming machine, though - and it does it at a lower price and a lot less power consumption. Its biggest handicap is that the only software support it has outside of iOS and Android is experimental nerd OSes like the free Unices, AROS, or RISC OS. Interesting stuff to be sure, but nothing that could make it a serious competitor in the general market.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: persia on June 13, 2014, 04:31:44 AM
I don't see an end to the divided market with ARM on the tablet and phone and intel on the desktop/laptop.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: freqmax on June 13, 2014, 05:06:42 AM
Quote from: psxphill;766324
The iAPX 432 was their proper CPU, which was a failure (as proper projects tend to be).

Why did it fail?

Quote from: commodorejohn;766348
Ugly is an emotive description of technical design compromises. I'm not going to play at being some kind of impassive robot thing.

One could also call it a judgment based on experience of good engineering.

Quote from: commodorejohn;766389
As did the 386 - unfortunately, it took a good long while for the OS to catch up.

BSD operating systems (or most Unixes) provided an abstraction on x86 to do away with the dysfunctional segmented memory handling, mainly by activating the protected mode memory model and providing a compiler environment that did away with it.

Quote from: persia;766413
Really, at this point in time there is no competition on the desktop/laptop. The i5 and i7 far outclass anything offered by ARM, and there are no other practical competitors left.

Actually, when you need a machine that performs as fast as possible per watt used, then ARM beats Intel. This also goes for solderability of the chip package and stable chip offerings. Intel has a habit of EOLing chips while you design a circuit board for them..

Also, a single-chip ARM is now approaching the capacity of an A500, with RAM of 192 kB vs 512 kB and flash/ROM of 1024 kB vs 256 kB. So you could fit the Amiga ROM + Workbench into ONE ARM chip and use the powerful DMA circuits to do graphics and sound. And all that at 168 MHz for like 10 EUR. Neat!
(for serious stuff one needs more DRAM, but those 192 kB go a long way..)

What Intel has going for it is an existing software base and knowledge among developers. Their processors are also most likely the fastest for single-threaded applications. In many cases the x86 platform is also the cheapest per instruction per second. But the platform is a hodgepodge of bad compromises. They have lately also abandoned the open nature of the platform (UEFI, TPM, CPUID etc).
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: biggun on June 13, 2014, 08:51:01 AM
Quote from: freqmax;766422
Actually, when you need a machine that performs as fast as possible per watt used, then ARM beats Intel.
...

The performance depends very much on the task / benchmark you run.
In my experience current Intel cores are generally very good at many different tasks.

Here is a benchmark which stresses cache and conditional code execution.
The 68060 is not bad for its age.
It's faster than a 240 MHz ColdFire.

Modern Intel chips score very well,
while the tested ARM chips were not impressive.

http://www.apollo-core.com/sortbench/index.htm?page=benchmarks
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: freqmax on June 13, 2014, 11:40:45 AM
I think the keyword here is real-world scenarios. Also, the ARM can go really low in power consumption in absolute terms.

(Dunno how MIPS fares in this)
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: Faerytale on June 13, 2014, 12:20:48 PM
x86 design was good enough for World domination.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: bloodline on June 13, 2014, 12:55:05 PM
Quote from: Faerytale;766433
x86 design was good enough for World domination.
Nothing stands in the way of progress more than "good enough".
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: commodorejohn on June 13, 2014, 01:24:39 PM
Quote from: bloodline;766437
Nothing stands in the way of progress more than "good enough".
But, but, but bloodline! Don't you know that Worse is Better? People on the Internet said it, so it must be true!
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: freqmax on June 13, 2014, 02:40:55 PM
Off-topic: IBM was early with computing for commercial companies and thus had a foothold with business people (MBA beancounters). IBM wanted a bite of the personal computer market, so they threw together a team that put together some crappy chips from some crappy chip designer like Intel. They then tried to get CP/M but were too important to wait for a good deal, so they got crap software to go with this design.

The next phase is that because they considered it an unimportant product, they released drawings and documentation. Businesses bought it because of (1) IBM's reputation and (2) it being cheap compared to a mainframe. And then it took off.

So what's needed for world domination is contacts and reputation..
x86 was bought by clueless MBAs because it had "IBM" stamped onto it. Now the actual people doing the buying may be different people, but the management culture still permeates. And once there was software for the crap hw/sw, the other software had to be compatible with the former, which was used in the all-important "business environment" ..

Take out the reputation and contacts from IBM in the 1980s and the problem would likely have been a lot less severe. Add a compatibility layer to other platforms to snuff out the compatibility aspect and there might be a solution. The current solution seems to be that x86 is too inefficient and Windows is just too big a blob of code for mobile environments where power and resource utilization really count. Besides, Microsoft was just too busy entrenching themselves in a market that was soon to be competing with a whole new market they perhaps didn't "get".
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: nicholas on June 13, 2014, 04:24:04 PM
Even if IBM had chosen the 68k as the CPU and CP/M as the OS it would still have been crap.  The Atari ST is proof.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: psxphill on June 13, 2014, 04:44:21 PM
Quote from: freqmax;766447
IBM wanted a bite of the personal computer market, so they threw together a team that put together some crappy chips from some crappy chip designer like Intel. They then tried to get CP/M but were too important to wait for a good deal, so they got crap software to go with this design.

The 68000 was on the table for the PC but Intel came in and priced the 8086 aggressively to make sure their chip got used. Motorola missed their chance.
 
Microsoft sent IBM to Digital Research, but Gary Kildall wanted a royalty and not an outright payment & didn't want it sold as PC DOS. So Microsoft bought QDOS and sold it to IBM; they didn't want a per-copy royalty but wanted to be able to sell it to other people. The "Gary Kildall was out flying" story is a lie. IBM did a deal with Digital Research, including an advance on royalties (and according to Tom Rolander, a payment for writing the BIOS). The IBM PC wasn't bundled with an operating system. PC DOS was $40, CP/M-86 was $240. The majority spoke and PC DOS became the standard.
 
Quote from: nicholas;766453
The current solution seems to be that x86 is too inefficient and Windows is just too big a blob of code for mobile environments where power and resource utilization really count. Besides, Microsoft was just too busy entrenching themselves in a market that was soon to be competing with a whole new market they perhaps didn't "get".

The problem with being big is that it becomes hard to move. People have wanted more and more functionality from Windows, and dropping functionality to make it smaller isn't always that easy (maybe you need to drop 50% of every subsystem, which effectively means you're rewriting it all). They could have done like Apple did and just start mostly from scratch, but Microsoft try to avoid fragmentation.
 
Android is the most fragmented, but because it's so cheap it has really taken off.
 
Quote from: nicholas;766453
Even if IBM had chosen the 68k as the CPU and CP/M as the OS it would still have been crap. The Atari ST is proof.

Interestingly, the person that drove the PC project through at IBM wanted to buy Atari, to get their expertise at designing and manufacturing consumer-level microcomputers.
 
Instead of the ST being sold by Atari, it would have had the less catchy name of Tramel Technology, Ltd.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: nicholas on June 13, 2014, 05:29:10 PM
double post
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: nicholas on June 13, 2014, 05:44:23 PM
Quote from: psxphill;766454
Microsoft sent IBM to Digital Research, but Gary Kildall wanted a royalty and not an outright payment & didn't want it sold as PC DOS. So Microsoft bought QDOS and sold it to IBM. The "Gary Kildall was out flying" story is a lie.

Quote
When the IBM team arrived in Pacific Grove they met with Dorothy and worked with company attorney Gervaise "Gerry" Davis to settle the terms of a non-disclosure agreement. Gary, who had flown his aircraft to Oakland to meet an important customer, returned as scheduled to discuss technical matters. The meeting ended in an impasse over financial terms. IBM wished to purchase CP/M outright, whereas DRI sought a per-copy royalty payment in order to protect its existing base of business. With some alternative approaches in mind, Kildall tried to renew the negotiations a week later but IBM did not respond.

In the meantime, Gates negotiated terms to purchase 86-DOS from Brock. He then sold a one-time, non-exclusive license to IBM, who used the designation PC DOS, but retained the right to license the product as MS-DOS to others. When Kildall discovered that the function calls of the programmer's application interface were identical to those of the CP/M Interface Guide that was copyrighted and marked "Proprietary to Digital Research" he threatened IBM with a lawsuit.

Kildall and Davis negotiated a resolution that required IBM to market CP/M-86 alongside PC DOS. However the list price differential, $40 vs. $240 for the DRI product, discouraged consumer interest in the latter. Davis says "IBM clearly betrayed the impression they gave Gary and me."
http://www.computerhistory.org/atchm/gary-kildall-40th-anniversary-of-the-birth-of-the-pc-operating-system/

More historical info from the original developer of QDOS here: http://dosmandrivel.blogspot.co.uk/
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: Nlandas on June 13, 2014, 06:08:34 PM
Quote from: psxphill;766174

The 8086 was the kind of chip that Commodore would have put out in its 8-bit days.


I really want a LIKE(vote up) function on Amiga.org.

I wish we could find archives of all the old discussions on this topic. LOL!

The really sad thing is that Motorola didn't position the 68000 into markets that would end up winning the PC wars.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: psxphill on June 13, 2014, 06:26:06 PM
Quote from: nicholas;766456
http://www.computerhistory.org/atchm/gary-kildall-40th-anniversary-of-the-birth-of-the-pc-operating-system/
 
More historical info from the original developer of QDOS here: http://dosmandrivel.blogspot.co.uk/

I'll go with the account of the person who was in the plane at the time that IBM turned up.
 
http://www.podtech.net/scobleshow/technology/1593/the-rest-of-the-story-how-bill-gates-beat-gary-kildall-in-os-war-part-1
 
Gary Kildall was in a plane on that day, but IBM waited and they sorted out the NDA issue. But IBM didn't want to pay a royalty and so they went away. Digital Research contacted IBM because they were going to sue over QDOS, and IBM gave in to all their licensing demands & said they weren't going to bundle either PC DOS or CP/M-86 and just let the customers choose, plus giving them $100,000 to do the BIOS. Gary signed thinking that IBM wouldn't be successful but they should just take the money and run; he was wrong.
 
You can argue that IBM overpriced CP/M-86 on purpose, but Gary let them do it because he didn't think it would matter. But PC DOS winning over CP/M-86 had nothing to do with IBM not wanting to wait.
 
CP/M-86 supposedly supported multitasking, but for people buying the 16K model you had so little memory that multitasking wasn't viable. The choice of PC DOS and the 8086 made less sense once shipping in quantity pushed the prices of all the chips down, but nobody predicted that volume would be reached. There isn't enough technical detail about CP/M-86 online, but it appears that it's pretty much the same as DOS 1 apart from the multitasking (it's not even clear that what they shipped supported multitasking). I'm not convinced it would have made much technical difference if CP/M-86 had been the one and only OS; it just wouldn't be Bill Gates that made all the money.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: nicholas on June 13, 2014, 06:51:06 PM
Quote from: psxphill;766460
I'll go with the account of the person who was in the plane at the time that IBM turned up.
 
http://www.podtech.net/scobleshow/technology/1593/the-rest-of-the-story-how-bill-gates-beat-gary-kildall-in-os-war-part-1
 
Gary Kildall was in a plane on that day, but IBM waited and they sorted out the NDA issue. But IBM didn't want to pay a royalty and so they went away. Digital Research contacted IBM because they were going to sue over QDOS, and IBM gave in to all their demands & said they weren't going to bundle either PC DOS or CP/M-86 and just let the customers choose, plus giving them $100,000 to do the BIOS. Gary signed thinking that IBM wouldn't be successful but they should just take the money and run; he was wrong.
 
You can argue that IBM overpriced CP/M-86 on purpose, but Gary let them do it because he didn't think it would matter. But PC DOS winning over CP/M-86 had nothing to do with IBM not wanting to wait.
 
CP/M-86 supposedly supported multitasking, but for people buying the 16K model you had so little memory that multitasking wasn't viable. The choice of PC DOS and the 8086 made less sense once shipping in quantity pushed the prices of all the chips down, but nobody predicted that volume would be reached.

Which is the same as what the write-up in the computerhistory article narrates.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: psxphill on June 13, 2014, 07:08:45 PM
Quote from: nicholas;766464
Which is the same as what the write-up in the computerhistory article narrates.

Not really, it says:
 
"With some alternative approaches in mind, Kildall tried to renew the negotiations a week later but IBM did not respond."
 
And according to the interview, that didn't happen. He wrote to them saying they were considering suing over QDOS, and IBM did respond.
 
"Kildall and Davis negotiated a resolution that required IBM to market CP/M-86 alongside PC DOS"
 
According to the interview, it was IBM that suggested both operating systems be available, & the deal that IBM came up with was a no-brainer for them; they were so quick to bite IBM's hand off that they didn't negotiate anything.
 
It also doesn't have anything about IBM paying $100,000 for BIOS development.
 
But you're right, it doesn't validate the "IBM not wanting to wait for Gary to come back from flying his plane" story (which I think Bill Gates might have made up).
 
Without being there we can't know what happened; supposedly he was there, and there is nobody else who was there that has covered the story in so much detail. So unless someone can offer better evidence we might as well take his word (some of Bil Herd's stories change with every telling, and because so many of them have been documented online you can compare how they diverge, but you can't tell which one is true).
 
 
Quote from: Nlandas;766458
The really sad thing is that Motorola didn't position the 68000 into markets that would end up winning the PC wars.

They were pricing it for the minicomputer market; if they'd priced it to compete with the 8086 then they'd have lost money from the minicomputer sales they were making. Someone (or lots of someones) at Motorola failed to predict the microcomputer taking over. They also priced the 6809 too high, which is how the 6502 came about in the first place. You could argue that Motorola were evil money grabbers.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: A6000 on June 14, 2014, 06:22:33 AM
I read somewhere that IBM developed a version of the 68000 that executed the IBM 360 instruction set, I wonder if they did anything with that.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: ElPolloDiabl on June 14, 2014, 09:35:39 AM
The IBM PC had upgrade cards from the beginning.
There's one reason why the IBM PC beat the Amiga... mass-produced generic add-on cards.
We had to wait for any large production of cards.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: psxphill on June 14, 2014, 09:58:10 AM
Quote from: A6000;766556
I read somewhere that IBM developed a version of the 68000 that executed the IBM 360 instruction set, I wonder if they did anything with that.

They tried to sell it in the XT/370, I don't know how successful they were.
 
I don't know how true the custom microcode story is http://marc.info/?l=classiccmp&m=109279766418496&w=2
It might be that there was external hardware that translated the instructions.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: freqmax on June 14, 2014, 06:09:14 PM
Makes you wonder what would have happened if the "PC" got an 68000 and CP/M to go with it ;)
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: psxphill on June 14, 2014, 06:20:55 PM
Quote from: freqmax;766646
Makes you wonder what would have happened if the "PC" got an 68000 and CP/M to go with it ;)

CP/M had stagnated due to lack of competition & it was expensive, so I'd expect the PC wouldn't have been so successful. At that point it's hard to predict, we might not have become so reliant on computers at all. The butterfly effect of something that big could have caused the Amiga to never exist. It's unlikely the Atari ST would have ended up with software by digital research if they had IBM breathing down their necks for new stuff.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: freqmax on June 14, 2014, 06:46:38 PM
Perhaps Amiga would have gone with MIPS and some more unix like sw?
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: commodorejohn on June 14, 2014, 07:02:09 PM
Quote from: freqmax;766649
Perhaps the Amiga would have gone with MIPS and some more Unix-like software?
Good Lord no. MIPS was still professional Unix workstation stuff in 1985. The Amiga didn't pick the 68k because IBM didn't; they picked it because it was a powerful but cost-effective architecture for the time.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: freqmax on June 15, 2014, 04:34:52 AM
Seems the conclusion on x86 is that it was all haphazard and then nobody wanted to do a clean break. Well, until smartphones forced the issue due to power constraints.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: commodorejohn on June 15, 2014, 05:56:02 AM
Precisely. The whole thing hinged on compatibility with the IBM PC architecture pretty much from the start. There were a few points (OS/2 on PPC and NT on Alpha) where it looked like there might have been a chance of breaking off and doing legacy support in emulation, but it never took.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: freqmax on June 16, 2014, 05:06:54 AM
So now that processors have hit a frequency ceiling, the businesses that stay with x86 will see their competitors run other stuff way faster due to efficiency gains... ;)
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: persia on June 16, 2014, 11:41:02 AM
ARM is the sole survivor, and they are concentrating on the tablet/phone half of the market. PPC is for all intents and purposes dead after being abandoned by the game console makers. Freescale isn't in the competition for speed at all.

Quote from: freqmax;766823
So now that processors have hit a frequency ceiling, the businesses that stay with x86 will see their competitors run other stuff way faster due to efficiency gains... ;)
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: commodorejohn on June 16, 2014, 12:16:48 PM
Quote from: persia;766847
ARM is the sole survivor, and they are concentrating on the tablet/phone half of the market. PPC is for all intents and purposes dead after being abandoned by the game console makers.
Keep repeating that; it won't make it true.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: ElPolloDiabl on June 16, 2014, 12:36:11 PM
No, we'll be running Amiga OS on old PowerPC servers in ten years' time. lol

Personally, the lack of software available on MorphOS and OS4 is a big turn-off. The 68k Amiga has just enough to keep me going.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: bloodline on June 16, 2014, 12:49:40 PM
Quote from: commodorejohn;766851
Keep repeating that; it won't make it true.


I will quote Neil deGrasse Tyson: "The good thing about science is that it's true whether or not you believe in it".

I'm not sure if your gripe was about the emergence of ARM as the next major processor architecture, or about PPC now being a dead platform with only a few legacy devices left in the supply chain. Both statements seem reasonable to me, and backed up by the available evidence.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: psxphill on June 16, 2014, 12:59:43 PM
Quote from: freqmax;766697
Seems the conclusion on x86 is that it was all haphazard and then nobody wanted to do a clean break.

Most successful things are haphazard. The last successful good design I have seen is the PlayStation, but even that has some hardware bugs that they had to maintain throughout the life of the console because fixing them would hurt compatibility.
 
Quote from: freqmax;766823
So now that processors have a frequency ceiling the businesses that stay with x86 will see their competitors run other stuff way faster due efficiencies .. ;)

x86 has always run faster than ARM; the only thing ARM has is lower power consumption, which is very important in a phone, tablet or handheld games console. When the device is constantly tethered to the mains it becomes a less important consideration. I have an ARM-powered NAS, because it's cheap and quiet, but it's woefully underpowered.
 
Intel have managed to get power usage for their phone chipsets down a lot in the last few years though. In some cases they have performed identically with lower power; ARM continues to dominate the market because of momentum.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: bloodline on June 16, 2014, 01:12:35 PM
Quote from: psxphill;766854
Most successful things are haphazard. The last successful good design I have seen is the PlayStation, but even that has some hardware bugs that they had to maintain throughout the life of the console because fixing them would hurt compatibility.


It always comes down to the real world vs the perfect world. A concept might be beautiful and elegant. But in the real world, compromises must be made.

Quote

Intel have managed to get power usage for their phone chipsets down a lot in the last few years though.


But as you will find if you try a low-power Intel chip, when they get the power usage down to ARM levels they struggle to offer the performance that ARM can. The converse is also true: as ARM ramps up performance, power usage increases to Intel x86 levels.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: biggun on June 16, 2014, 02:57:36 PM
While ARM and PPC are load/store machines, x86 and 68K are CISC machines.


The decoding is much simpler for RISC machines.
This means that when you compare really simple cores which do one instruction per cycle, the RISC machine is smaller and needs less power.

Decoding multiple instructions is also, in the naive approach, a lot simpler with RISC machines.
This means developing a superscalar decoder is simpler for RISC.
But Intel, AMD and also new 68K chips have found solutions that let them quickly decode several instructions per cycle.

Now a CISC machine also has several advantages.
1) CISC instructions are much more powerful than RISC instructions.
For example:
ADDi.L #12456,(48,A0,Dn*8)
One instruction on CISC - some CISC chips can even do this in one cycle.
You need about six instructions to do the same on POWER.
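Roughly, that is because a load/store machine must build the address and move the operand through a register explicitly. A sketch of the equivalent POWER sequence - register assignments purely illustrative (r4 standing in for A0, r5 for Dn, r10/r11 as temporaries):

  slwi  r10,r5,3        # scale the index: Dn*8
  add   r10,r10,r4      # add the base register (A0)
  lwz   r11,48(r10)     # load the memory operand at displacement 48
  addi  r11,r11,12456   # add the immediate (12456 happens to fit in 16 bits)
  stw   r11,48(r10)     # store the result back

That is five instructions here; an immediate too big for 16 bits needs an extra addis, which is where the "about 6" comes from.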

2) CISC instructions are much more compact.
This means caches can hold more instructions, and the cache can also supply more instructions per cycle to the CPU.

A well-designed CISC machine can do a lot of work per cycle.
It's not easy even for good RISC machines to keep up with this.

RISC has some clear advantages.
RISC chips are easier to design.
At the low-performance end, simple RISC chips need little power.

When you go high end, the more complex CISC decoder is not the only problem anymore.
- Instruction cache bandwidth limitations
- Dependencies between instructions
Those are the important topics.
RISC has no advantage here.

EPIC tries to address some of those, but also has its very own pitfalls.

So yes - I can see that ARM has, by design, an advantage in the low-performance region.
But in the high-performance region the problems are different - and RISC is not at an advantage here anymore.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: freqmax on June 16, 2014, 05:53:47 PM
Lets not forget MIPS..
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: TeamBlackFox on June 16, 2014, 06:16:32 PM
Agreed, I'd love to see a good MIPS desktop come about - but until then my 2.5 GHz G5 quad is trucking right along. I just checked the cooling unit the other day; it's in fantastic shape. The case has seen better days, but that's just superficial.

I have an SGI Fuel running at 600 MHz and it works very well for basic things - but if I need the oomph my G5 can take it. I also have a dream of getting an SGI Tezro, but that will be when my finances are better.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: Nlandas on June 16, 2014, 06:32:00 PM
Quote from: bloodline;766853
I will quote Neil deGrasse Tyson: "The good thing about science is that it's true whether or not you believe in it".


Yes, he usually says that right before he makes up some nonsense that you have to take on faith and that has little to do with actual science.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: psxphill on June 16, 2014, 08:33:43 PM
Quote from: bloodline;766855
But as you will find if you try a low-power Intel chip, when they get the power usage down to ARM levels they struggle to offer the performance that ARM can. The converse is also true: as ARM ramps up performance, power usage increases to Intel x86 levels.

The benchmarks I saw showed identical performance, with Intel drawing less power. Supposedly the problem for Intel today is that they haven't got a chipset with 4G support.
 
The ARM architecture has changed a lot since the beginning; it's not a simple RISC processor anymore.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: bloodline on June 16, 2014, 08:46:31 PM
Quote from: psxphill;766903
The benchmarks I saw showed identical performance, with Intel drawing less power. Supposedly the problem for Intel today is that they haven't got a chipset with 4G support.


I'd be intrigued to see that. I've not seen the ARM bested in power consumption stats.

Quote

The ARM architecture has changed a lot since the beginning; it's not a simple RISC processor anymore.


Hahahahah, there's no such thing as CISC and RISC anymore; all modern processors are a hybrid of these two concepts.

The best solutions to most problems are hybrids.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: commodorejohn on June 16, 2014, 09:13:28 PM
Quote from: bloodline;766910
The best solutions to most problems are hybrids.
Like Windows 8!
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: bloodline on June 16, 2014, 09:21:01 PM
Quote from: commodorejohn;766913
Like Windows 8!


Well yes, in a way... Windows does use a hybrid kernel that has features of both microkernels and monolithic kernels.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: psxphill on June 16, 2014, 11:13:36 PM
Quote from: bloodline;766910
Hahahahah, there's no such thing as CISC and RISC anymore; all modern processors are a hybrid of these two concepts.

There is such a thing as RISC; it just happens that ARM no longer fits the description.
 
Due to Moore's law, CISC processors have room for lots of cache and registers, which used to be available only to RISC processors because their cores took up less chip space. But those features weren't what defined RISC.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: TeamBlackFox on June 17, 2014, 01:59:52 AM
Quote from: bloodline;766910
The best solutions to most problems are hybrids.

Eh, I still prefer the monolithic kernels - mostly the BSD and System V kernels, for performance reasons - BUT DragonFly BSD is a hybrid kernel, and it may just turn out to be the magic bullet against GNU/Linux - if it ever gets anywhere. Ten years in, and while it's usable, it's painful...
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: freqmax on June 17, 2014, 04:07:18 AM
Microkernels are nice, IF the CPU sports a generous on-board cache and the architecture in general doesn't amplify the performance penalty that microkernels seem to incur.

Any tips for a microcontroller BSD Unix for MMU-less stuff?
(i.e. one that runs in 512 kB flash and considerably less RAM)
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: TeamBlackFox on June 17, 2014, 03:15:37 PM
Well Freqmax, I like the kernel used in the Amiga family, as it's one of the few that doesn't incur severe overhead (Mach and Hurd suck as anything more than hobbyist kernels due to overhead/message-passing lag).

I don't know enough about coding to know how to get a modern BSD onto something that small and without an MMU - NetBSD/Amiga requires an MMU to run, for example: http://www.de.netbsd.org/ports/amiga/

If you're interested in trying to code it, you may want to look at 4.4BSD-Lite or the open-source Research UNIX versions - as far as archaic UNIXes go, they're the best bet for finding one that isn't dependent on an MMU.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: freqmax on June 17, 2014, 08:58:50 PM
The 4.4BSD-Lite2.tar.gz source archive is 44.23 MB. I suspect one might run out of flash memory...

Perhaps 2.11BSD is small enough.

If the C64 can run Unix, then surely an ARM CPU can too. But one might have to strip out a lot. The practical way is to use an address relocation table and trust programs to behave. And of course (ab)use the clock timers to create pre-emptive task scheduling.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: TeamBlackFox on June 17, 2014, 09:29:34 PM
Alright then. You'll need to be a guru at K&R-style C. Plus, you do realize the 4.4BSD install will be smaller after you build it, right?
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: psxphill on June 17, 2014, 09:48:34 PM
Quote from: freqmax;766975
If the C64 can run Unix, then surely an ARM CPU can too.

The C64 can't run Unix; it can run a multitasking OS that has a cut-down POSIX-ish C runtime. A lot of work went into that for pretty much no reward.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: biggun on June 17, 2014, 10:44:18 PM
Quote from: bloodline;766910
there's no such thing as CISC and RISC anymore; all modern processors are a hybrid of these two concepts.


Err, no.

CISC chips = can operate on memory.

RISC chips = are load/store machines and can only operate on registers.


68K and x86 = CISC

MIPS/POWER/ARM = RISC


Whether your chip is internally hardwired, uses microcode, or has a pipeline has nothing to do with CISC or RISC.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: bloodline on June 18, 2014, 01:28:55 AM
Quote from: biggun;766984
Err, no.

CISC chips = can operate on memory.

RISC chips = are load/store machines and can only operate on registers.


68K and x86 = CISC

MIPS/POWER/ARM = RISC


Whether your chip is internally hardwired, uses microcode, or has a pipeline has nothing to do with CISC or RISC.


Hahaha, when marketing becomes policy :)

By your simplistic (though not wholly inaccurate) definition, the x86 is actually a RISC machine! Its non-orthogonal ISA often requires one to load data into registers for processing and then write the result back to main memory.

To be frank, only MIPS ever really fully implemented all the RISC concepts... and look where that is now! I come back to my original statement: modern CPUs have features of both RISC and CISC designs. PPC and ARM are examples of RISC chips that have woefully complex instruction sets, and the x86 is a great example of a CISC chip that has been on a diet to give it RISC-like features.

Check out the ARM64 ISA: it is so complex it could be CISC, but so carefully crafted for throughput that it's clearly RISC in origin!
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: commodorejohn on June 18, 2014, 01:35:20 AM
Quote from: bloodline;766997
By your simplistic (though not wholly inaccurate) definition, the x86 is actually a RISC machine! Its non-orthogonal ISA often requires one to load data into registers for processing and then write the result back to main memory.
That's not what "load-store architecture" means and you freakin' know it. Load-store means only performing operations on registers. x86 is more than happy to do quite a number of operations with one or more operands being in memory.

Quote
To be frank, only MIPS ever really fully implemented all the RISC concepts... and look where that is now!
Yeah, I mean, it was only in the PSP, that's all! That was only the second most popular handheld gaming system on the market in its recently-concluded run!
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: biggun on June 18, 2014, 03:32:42 AM
Quote from: bloodline;766997
Hahaha, when marketing becomes policy :)

My definition is the definition that CPU designers use...


You are right that "marketing" often misused the RISC/CISC definitions.
And companies like IBM came up with those definitions for marketing reasons.


Today CPUs are classified as either:

1)
LOAD-STORE / REGISTER-REGISTER / RISC =
these CPUs can only operate on registers and NOT on memory;

or the opposite:
2) CISC architectures, which can operate on memory.



Quote from: bloodline;766997

the x86 is actually a RISC machine! Its non-orthogonal ISA often requires one to load data into registers for processing and then write the result back to main memory.

The x86 can generally use one operand from memory.
The 68k can, for some operations, have two operands in memory.
A VAX can even have three operands in memory.


But a RISC machine can NEVER use an operand in memory.


Quote from: bloodline;766997

PPC and ARM are examples of RISC chips that have woefully complex instruction sets,

Complex?
No, not really...

They have many instructions, and some instructions take more than one cycle.
But their instructions are all regular, and complex neither in execution nor in decoding.

The main complexity that RISC took away from CISC was decoding complexity.
The 68k supported instructions of up to 10 bytes in length - this was difficult enough to decode.
With the 68020 Motorola broke that record and supported even over 20 bytes - this complexity was a problem which made making the 68k fast really difficult.

The common denominator of CISC chips is the complex addressing modes.
And the fact that instructions can operate on memory, and sometimes even have more than one operand in memory, made the instructions very complex to decode.
So complex that it became very challenging for CPU developers
to invent decoders able to decode more than one instruction per cycle.

Not all CISC chips are equally complex to decode.
The 68000 was complex, but instruction size could be determined by decoding 16 bits. This is OK.
The Z chip, IBM's CISC mainframe design - its instruction size can be decoded by evaluating only 2 bits. This is nice.
The added addressing modes of the 68020+, though, make it necessary to look at 10 bytes = 80 bits to be able to decode an instruction's length. This change was a really big mistake by Motorola.


If you want to understand whether a chip is CISC or RISC, simply check a few points:

Can the chip support 3, 2 or even 1 operands in memory?
RISC chips can't.

Does the chip allow updating only part of a register, in a BYTE/WORD/LONGWORD fashion (see the sketch after this list)?
RISC chips don't.

Does it allow full-size immediates encoded in its instructions?
Like 32-bit or 64-bit immediates?
RISC chips don't.
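To make the partial-update point concrete, a small 68k-flavoured sketch (values arbitrary):

  MOVE.B #$12,D0    ; writes only bits 7-0 of D0; bits 31-8 keep their old contents
  MOVE.W #$3456,D0  ; writes bits 15-0; the upper word is left untouched

On a typical RISC, a write always replaces the whole register, which spares the pipeline the merge dependencies that such partial updates create.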


Of course not all chips in one category are the same;
the VAX was more CISCy than all other CISC chips.
The VAX could read two operands from memory, do an operation with them and store the result as a third operand back to memory - all in a single instruction.

The 68k can use two memory operands in only a few instructions,
namely ADDX, SUBX, CMPM, MOVE, ABCD, SBCD.

The x86 generally allows only one memory operand.


RISC chips do not allow even one memory operand.

Coding RISC chips is different from coding CISC chips.
With CISC chips you can use immediates easily.
With RISC chips you can only use small immediates embedded in your instruction stream.
All bigger constants you have to reference via a pointer from memory.
All bigger offsets from a pointer cannot be included in the instruction; you have to create them with extra instructions. The default GCC setting is the big data model nowadays.
This means that pointers to immediates are by default generated with two extra instructions.

This means that for something which looks "simple" to a CISC developer, such as
ADD #64bitimmediate,Register

the code generated by default on POWER is:

2 instructions to generate a 32-bit offset
1 instruction to load the data at offset-plus-base-pointer into a temp register
1 instruction to add the temp register to the target register


If you look at generated code you see many more examples of this.
You see such code very often when you compare SSE instructions with POWER instructions:
x86 needs one instruction, directly referencing one operand in memory;
POWER needs four instructions to do exactly the same work.
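Spelled out as a sketch - the offset value and register numbers are illustrative, with r2 as the conventional base/TOC pointer of the 64-bit PowerPC ELF ABI:

  lis   r10,0x1234       # two instructions to build the 32-bit offset:
  ori   r10,r10,0x5678   # high half, then low half
  ldx   r11,r2,r10       # load the constant from base pointer + offset into a temp register
  add   r3,r3,r11        # finally add the temp register to the target register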

You also see this with typical integer code.
Good CISC chips like the 68060 or modern x86 are, clock for clock, very efficient in integer operations.
It's very difficult for RISC chips to keep their pace, as RISC chips need to execute many more instructions to do the same amount of work.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: freqmax on June 18, 2014, 05:48:39 AM
What would you classify ARM Cortex-M and ARM Cortex-A as?
(presumably v7 and higher)
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: biggun on June 18, 2014, 07:15:03 AM
Quote from: freqmax;767007
What would you classify ARM Cortex-M and ARM Cortex-A as?
(presumably v7 and higher)


ARMs are typical RISC chips.

Cortex-M parts are tuned for low power.
Cortex-A parts are available in various types.
Some are very simple in-order RISC designs with pipeline lengths similar to chips from the early 90s.
Some are fancier out-of-order designs with pipeline structures more similar to the PPC G3/G4.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: matthey on June 18, 2014, 07:33:31 AM
Quote from: freqmax;767007
What would you classify ARM Cortex-M and ARM Cortex-A as?
(presumably v7 and higher)


All ARM processors are load/store architectures. Load/store = RISC, therefore they are RISC.

ARM may have CISC-like encodings with Thumb, and complex addressing modes common on CISC, but it's still not a register-memory architecture.

load/store architecture = RISC
register-memory architecture = CISC

Modern RISC: ARM (all variants), PPC/Power, MIPS, SPARC
Modern CISC: 68k, x86/x86_64, z/Architecture
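A minimal illustration of the two equations above (registers arbitrary, syntax schematic):

  ADD.L (A0),D0       ; register-memory (CISC, 68k): one instruction reads memory and adds

  LDR   r1,[r0]       ; load/store (RISC, ARM): the memory operand must pass
  ADD   r2,r2,r1      ; through a register before the ALU can touch it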
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: biggun on June 18, 2014, 08:14:32 AM
RISC was an invention of a certain time...

There was a golden time when CPU designers tried to make CPU cores which were nice to program in ASM. Great examples are the VAX and 68K.

Then there was a time when chip technology allowed better clock rates,
but some companies failed to reach them because of the complexity of their decoding logic
and the complexity of their internal data paths.
This was the time Motorola scursed some of their 68020 instruction enhancements because they limited the clock rate - and the time some people had the idea of avoiding the problem by inventing new, much simpler decoding schemes.
This was the golden time of the RISC chips.
This was the time RISC chips reached much higher clock rates than CISC.
RISC chips avoided the decoding and memory-data challenges.
RISC chips traded a simpler internal design for sometimes having to execute more instructions to do the same amount of work.

Some of the CISC designs then died; the 68k and VAX are good examples of this.
Some CISC, like x86 and Z, continued and found solutions to the challenge.
Today CISC chips are the chips reaching the highest clock rates again.

Then CPU developers ran into another problem:
instruction dependencies. Neither CISC nor RISC solves this.
This problem limits the amount of superscalarity you can sensibly have.

Again some people had ideas to "fix" this.
The idea was to create big macro instructions.
Keywords are EPIC and VLIW. Itanium is a chip of this design.

The CISC designs are generally easier to program.
The RISC and EPIC designs came up to avoid the challenges of the CISC or CISC/RISC designs.
RISC and EPIC added their own limitations.

Today a major factor is compiler support.
When Itanium came out it was relatively strong, but also very hard to program and very hard to write a good compiler for - therefore the final performance of the software disappointed many.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: matthey on June 18, 2014, 10:33:03 AM
Quote from: biggun;767012
RISC was an invention of a certain time...

There was a golden time when CPU designers tried to make CPU cores which were nice to program in ASM. Great examples are the VAX and 68K.


Easier to program in assembler usually equates to easier to create good compilers and easier debugging. The scursed (screwed and cursed?) 68020 addressing modes were easier for assembler programmers and compilers, but they must have forgotten to consult the chip designers. The (bd,An,Xn) addressing mode is quite nice, even if bd=32 bit is there more for completeness and crappy compilers. The double indirect wouldn't have been so bad either if they had limited it to LEA, PEA, JSR and JMP (12 bytes max length). Not allowing it for MOVE alone reduces the max instruction length from 22 bytes to 14 bytes. There really wasn't a better way of encoding (bd,An,Xn), although double indirect could have been simplified and given a simpler encoding.
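For reference, the two modes in common Motorola notation (operand values illustrative):

  MOVE.L (123456,A0,D1.L*4),D0       ; (bd,An,Xn): 32-bit base displacement + base + scaled index
  MOVE.L ([123456,A0],D1.L*4,8),D0   ; memory indirect: fetch a long pointer at 123456+A0 first, then add the scaled index and an outer displacement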

Quote from: biggun;767012

Then there was a time when chip technology allowed better clock rates,
but some companies failed to reach them because of the complexity of their decoding logic
and the complexity of their internal data paths.
This was the time Motorola scursed some of their 68020 instruction enhancements because they limited the clock rate - and the time some people had the idea of avoiding the problem by inventing new, much simpler decoding schemes.


But was instruction decoding the clock rate limiting bottleneck on the 68060? Wasn't the 68060 slower with longer instructions because of fetching and not decoding? The timings are good for the complex addressing modes, if the instructions are short. It looks to me like the 68060 solved many of the 68020+ complexity problems only to be canned. It needed upgrading in some areas (like the instruction fetch) and more internal optimizations (more instructions that worked in both pipes, more instruction fusing/folding, a link stack, etc.) but it was a very solid early foundation to build on. It also would have benefited from a more modern ISA and ditching the transistor misers.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: ElPolloDiabl on June 18, 2014, 11:08:18 AM
At the time, would it have been worth continuing the 68k line? Was the MHz race a factor in dropping it? Going to PowerPC was meant to create a common architecture.

Was it the existing software that held back the 68k?
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: biggun on June 18, 2014, 11:15:08 AM
Quote from: matthey;767019
But was instruction decoding the clock rate limiting bottleneck on the 68060?
No it was not.

Quote from: matthey;767019
Wasn't the 68060 slower with longer instructions because of fetching and not decoding?
Yes, this was a limit in the 68060-A which they wanted to fix in the 68060-B.

The big problem, also known as "how the f_uc_k can I decode these instructions fast", existed before the 68060 came out.
During the time of the 68040, Motorola had no good answer to it.
And those years around the early 90s were the golden years of RISC.



The 68060 came out late - by this time Intel and Motorola already had solutions for this.


So yes - when the 68060 came out there was no real need for the RISC trick anymore.
In theory Moto could have continued the 68k line at this point.
But customers had already gone to other chips - so the market was lost.
And Moto wanted to focus on the PPC chips.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: psxphill on June 18, 2014, 11:18:43 AM
Quote from: matthey;767011
load/store architecture = RISC
register-memory architecture = CISC

That is only how they are defined now, because all of the other things that made a chip RISC have been taken on by CISC processors - to the point where it largely makes no difference whether something is RISC or CISC.
 
RISC was load/store because it allowed the instruction decoding to be simpler, which meant you didn't have to use microcode, which at the time allowed higher instruction throughput. Now RISC chips have complex instruction decoding, and both RISC and CISC can be microcoded or not.
 
The only RISC processor that I like is 32 bit MIPS as all the others are horribly complex.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: biggun on June 18, 2014, 11:19:53 AM
Quote from: matthey;767019
It looks to me like the 68060 solved many of the 68020+ complexity problems only to be canned. It needed upgrading in some areas (like the instruction fetch) and more internal optimizations (more instructions that worked in both pipes, more instruction fusing/folding, a link stack, etc.) but it was a very solid early foundation to build on. It also would have benefited from a more modern ISA and ditching the transistor misers.


This is absolutely true.

The 68060 did many things right.
The 68060-B, which was planned but never came out, would have been a great chip.

The enhancements you mentioned - fusion, a link stack, folding, conditional rewrite -
would have made super chips.

And with a minimal cleanup, and ditching some "near useless" stuff, the 68K could have been a great architecture which even today could easily compete with and beat the others.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: TeamBlackFox on June 18, 2014, 12:33:45 PM
Quote from: psxphill;767025
The only RISC processor that I like is 32 bit MIPS as all the others are horribly complex.


I've only done a little MIPS64 ASM, but coding for the R14k in my Fuel has been pretty darn easy, and my friend who is learning ARM64 ASM says it's easy too. What's so complex between MIPS32 and MIPS64, other than the extended modes for 64-bit addressing and such?
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: biggun on June 18, 2014, 12:53:25 PM
Quote from: psxphill;767025

The only RISC processor that I like is 32 bit MIPS as all the others are horribly complex.


The original MIPS implementation was very simple but absolutely not future-proof.
Originally, MIPS forced the developer/compiler to take too much CPU-specific information into account (the exposed load and branch delay slots being the classic examples).
This meant the original MIPS CPU could not be properly upgraded or performance-enhanced without breaking all old programs.

The 68K architecture is much more future-proof.

MIPS learned this too and changed their architecture.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: psxphill on June 18, 2014, 01:12:13 PM
Quote from: TeamBlackFox;767033
What's so complex between MIPS32 and MIPS64, other than the extended modes for 64-bit addressing and such?

I'm sure 64-bit MIPS is better on a technical level (more bits is better, right?), but I just prefer the 32-bit version. I thought that now the thread is derailed I'd throw in my emotional preference.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: TeamBlackFox on June 18, 2014, 03:16:58 PM
> I'm sure 64-bit MIPS is better on a technical level (more bits is better, right?), but I just prefer the 32-bit version. I thought that now the thread is derailed I'd throw in my emotional preference.

You're right, it has sort of derailed. But if you ever have an interest in trying out a MIPS64 machine, let me know, as I plan on setting up one of my SGIs headless soon. I'd be happy to hand out an SSH account. All of mine are MIPS64, so yeah...
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: freqmax on June 18, 2014, 05:30:33 PM
Much software can work within a 32-bit space, so 64-bit environments may in some ways be stuck with more bits than really needed, which will bloat code.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: commodorejohn on June 18, 2014, 05:46:50 PM
Quote from: freqmax;767056
Much software can work within a 32-bit space, so 64-bit environments may in some ways be stuck with more bits than really needed, which will bloat code.
Depends on the architecture. Not every CPU confines itself to using instructions exactly the same length as its word size. These days they try to make it less arbitrary, so as to keep instruction fetch simple (you want your instructions to always be an even divisor of the data bus size, and to always be aligned on an even boundary, so that one instruction doesn't require two separate fetches), but plenty of 64-bit architectures use 32-bit instruction words.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: matthey on June 18, 2014, 10:48:21 PM
Quote from: freqmax;767056
Much software can work within a 32-bit space, so 64-bit environments may in some ways be stuck with more bits than really needed, which will bloat code.

RISC route:
Make fixed-length instructions to simplify decoding.
Result: more fetch, memory and cache needed for the larger code size

Reduce the number of instructions and addressing modes to increase the clock rate.
Result: more instructions, larger programs and hotter processors

Use separate load/store instructions to simplify decoding and execution.
Result: larger programs, with more registers and OoO execution needed to avoid load/store bubbles

Move complexity to the compiler.
Result: slower and larger programs needing more cache

Not enough address space and memory because programs are now too big.
Result: a move to 64 bits, which slows clock speeds and makes programs even bigger

Progress!

The other route is to stay with the 32-bit 68k but enhance it, making programs even smaller. This reduces cache, memory and bandwidth requirements. The 68k will never clock as high as some other processors, but it does offer strong single-core/thread integer performance using few resources. Which is a better fit for the Amiga?
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: NorthWay on June 18, 2014, 10:54:23 PM
Quote from: psxphill;767025
The only RISC processor that I like is 32 bit MIPS as all the others are horribly complex.

Didn't that one have the delay-slot after branch instruction that was later considered a dead-end?
(I.e. the instruction following a branch was always executed.)
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: NorthWay on June 18, 2014, 11:15:04 PM
Much of the original thinking behind RISC designs was that compilers had a hard time using all the fancy CISC instructions.

It is not exactly rocket science to then speed up the things the compiler does use:
compilers only use simple instructions -> make a CPU with simple instructions and speed those up.

"The perfect number of registers in a CPU is 0, 1, or infinite" - a quote from a class I took sometime.
Compilers have no trouble juggling lots of registers and figuring out when to keep values in registers and when to purge them. Many registers were an answer to having to go to memory for immediates, and to lowering memory pressure in general. With code going in loops it works.

RISC was a lot of good ideas whose advantages have been reduced by advances in manufacturing. RISC and CISC today are both struggling with instruction dependency and the IPC limits you get from that. EPIC and the Mill have tried to break through that barrier. EPIC seems to be compiler-limited, possibly with too much junk in the trunk, and the Mill is so far mostly an idea. I don't know if there are other designs working on this.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: matthey on June 19, 2014, 12:10:54 AM
Quote from: NorthWay;767087
Much of the original thinking behind RISC designs was that compilers had a hard time using all the fancy CISC instructions.

It is not exactly rocket science to then speed up the things the compiler does use:
compilers only use simple instructions -> make a CPU with simple instructions and speed those up.


This thinking was a bit naive, considering the same RISC fans decided to move the complexity into the compilers that supposedly weren't smart enough to use complex instructions. Compilers can and do use complex instructions if they are the best option and fast. CISC often had slow instructions that couldn't be made fast, or whose functionality was either too specialized or not needed. These types of instructions are baggage for any processor, and it's not just CISC that has them. The PPC ISA has its fair share of baggage instructions now.

Quote from: NorthWay;767087

"The perfect number of registers in a CPU are 0, 1, or infinite" - quote from a class I took sometime.
Compilers have no trouble juggling lots of registers and figuring out when to keep values in registers and when to purge them. Many registers was an answer for having to go to memory for immediates and to lower memory pressure in general. With code going in loops it works.


Processor logic outpacing memory speeds is another limiting battle of modern processors. More registers do help, but RISC doesn't have as much of an advantage here as would be expected. Processor logic speed vs memory speed is not as much of an issue for an FPGA CPU. With less of a limitation here, FPGA processors may be able to do more work in parallel and come surprisingly close to hard processors that are clocked much higher.

Quote from: NorthWay;767087

RISC was a lot of good ideas whose advantages have been reduced by advances in manufacturing. RISC and CISC today are both struggling with instruction dependency and the IPC limits you get from that. EPIC and the Mill have tried to break through that barrier. EPIC seems to be compiler-limited, possibly with too much junk in the trunk, and the Mill is so far mostly an idea. I don't know if there are other designs working on this.


Multi-core and multi-threading are a good way to break through the dependency problems, but the memory limitation remains (to a lesser extent), multi-processing overhead and cache coherency eat up a lot of the gains, and some tasks can't be done in parallel. I think the Mill computer will have the same compiler-complexity problems as VLIW processors. Good luck debugging that one when the compiler doesn't work right. I would still take CISC over all the other choices even though it has the same limitations. It's simpler and easier to code, with smaller programs. I like that.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: freqmax on June 19, 2014, 12:40:55 AM
What embedded CPU choice is a good one these days? There are a few 32-bit designs.. AVR, C28x, ColdFire, CPU32, ETRAX, PowerPC 603e, PowerPC e200, PowerPC e300, M-CORE, MIPS32 M4K, MIPS32 microAptiv MPU, MPC500, PIC, RISC, TLCS-900, TMS320C28x, TriCore, TX19A, etc. And the only VLIW seen in the flesh seems to be the products of Transmeta for an unattractive price. ARM Cortex-M, and to some extent its more demanding counterpart ARM Cortex-A with DMA and external memory, seems to take over ever more market sectors like a viral octopus. It's in your phone, HDD, photo frame, DSL modem, printer, switch, etc. So it seems to pay to get to know the ARM architecture, even though they enforce their patents a bit too much for my taste - like on HDL code that implements an ARM processor in an FPGA.

I find it fascinating that these single chips have more power than some Amiga machines. They lack the on-chip memory and the graphics accelerator. But in terms of crunch performance they most likely run circles around many Amiga machines.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: matthey on June 19, 2014, 01:27:26 AM
Quote from: freqmax;767093
What embedded CPU choice is a good one these days? There are a few 32-bit designs.. AVR, C28x, ColdFire, CPU32, ETRAX, PowerPC 603e, PowerPC e200, PowerPC e300, M-CORE, MIPS32 M4K, MIPS32 microAptiv MPU, MPC500, PIC, RISC, TLCS-900, TMS320C28x, TriCore, TX19A, etc. And the only VLIW seen in the flesh seems to be the products of Transmeta for an unattractive price.


Embedded has a few VLIW processors for specialized tasks. See the Fujitsu FR-V processors, for example:

http://en.wikipedia.org/wiki/FR-V_%28microprocessor%29

They have amazing power efficiency but are very specialized. I recall another embedded VLIW processor too, but I can't remember the name. Embedded is about the only place where VLIW processors are used. None are general-purpose enough to be well known.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: freqmax on June 19, 2014, 02:41:00 AM
Nothing one wants to code C on?
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: matthey on June 19, 2014, 02:59:33 AM
Quote from: freqmax;767105
Nothing one wants to code C on?

The FR-V has C support with GNU tools and can even run multiple operating systems. I would expect programming it to have some similarities to a SIMD processor or GPU (where branches are evil), but I don't really know.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: psxphill on June 19, 2014, 08:54:01 AM
Quote from: NorthWay;767087
Much of the original thinking behind RISC designs was that compilers had a hard time using all the fancy CISC instructions.

Berkeley RISC started because they noticed that when compiling Unix they were only using 30% of the 68000 instruction set. They weren't using all the addressing modes of the add instruction, for instance. I don't know if they spent any time figuring out whether the compiler would have a hard time using them, or even confirming whether the compiler would never use them - just that compiling Unix didn't use them.
 
They then went on to design a CPU with register windows, which the people writing the compilers realised was a terrible design choice. Investing in your compiler and CPU design before you commit to it is very important. Adding instructions because you can come up with a single assembler fragment that performs better is a naïve and essentially terrible idea.
 
They coined the term, but there was prior art. I believe the http://en.wikipedia.org/wiki/CDC_6600 was the earliest example of what inspired RISC. Their motivation was simplifying the hardware so they could have dedicated hardware for parallel processing, a concept that was put to good use in the Amiga.
 
x86 compilers can now use a lot of the fancy CISC instructions, which internally in the CPU are just macros for sets of RISC-like instructions anyway.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: KimmoK on June 19, 2014, 09:06:43 AM
The 8086 is an example of an inferior design, made to succeed only by an insane amount of investment circumventing its defects; the rest is history.

Money matters more than anything. With enough money, everything is OK.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: biggun on June 19, 2014, 09:11:48 AM
Quote from: psxphill;767118

x86 compilers can now use a lot of the fancy CISC instructions, which internally in the CPU are just macros for sets of RISC-like instructions anyway.


This description is not wrong, but it is also not right.
Under this description every CISC CPU ever made uses RISC instructions.

For example:
The 68000 was a CISC CPU.
The 68000 used microcode for each instruction.
The "micro-code" pieces can be regarded as RISC.
This means the 68000 did, for an ADD (mem),Reg, in microcode:
* calc EA
* load mem to temp
* add temp to reg

So was the 68000 already a RISC chip?
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: psxphill on June 19, 2014, 11:14:24 AM
Quote from: biggun;767121
So was the 68000 already a RISC chip?

No, microcode isn't like RISC. It's just a table that the CPU uses as part of running the standard opcodes.
 
Micro-ops are what I was referring to, so the 68060 and anything x86 since the Pentium Pro. Opcodes are fetched, then translated into one or more micro-ops, which are stored in cache and then executed by a dedicated core. In theory they could strip out the front end and let you write in micro-ops, but nobody does that because it is terrible for compatibility. Modern CISC gives you the best of both worlds, because you can completely redefine your RISC architecture every time but you still have backward compatibility with code written twenty years ago.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: freqmax on June 19, 2014, 03:05:39 PM
Loading those micro-ops from an internal table instead of RAM will most likely be faster too. One CISC opcode generating several micro-ops internally certainly helps with that memory bottleneck.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: biggun on June 19, 2014, 03:51:51 PM
Quote from: psxphill;767126
No, microcode isn't like RISC. It's just a table that the CPU uses as part of running the standard opcodes.

I know what microcode is.

But how do you know that the microcode lines are not like RISC?
Microcode is a list of micro-instructions - each of which the CPU can do in a single cycle.
Where is the difference from what the "Pentium Pro" does?

If you call a Pentium Pro a RISC CPU with a CISC decoder -
why don't you call the 68000 the same?


Quote from: psxphill;767126
Modern CISC gives you the best of both worlds, because you can completely redefine your RISC architecture every time but you still have
This has nothing to do with modern CISC.
The instructions the programmer sees are always a "compressed" form of the internal signals a CPU needs and uses.

This means the original 68000 might internally use 80-bit-wide instructions.
But the programmer sees only a 16-bit word.

The 68010 might already have changed its internal structure slightly and might have 70 or 85 bits.

A RISC like PowerPC also has internal signals totally different from the opcodes the programmer uses. And every different PPC chip might have slightly different internal signals.

This means every CPU decodes instruction opcodes into internal signals.
And the internal design is different with every CPU generation.

This concept of translating CISC opcodes into an internal format is not new.
Every CISC CPU has done this since the ice age.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: Fats on June 19, 2014, 10:11:54 PM
Quote from: psxphill;767118
Berkeley RISC started because they noticed that when compiling Unix they were only using 30% of the 68000 instruction set. They weren't using all the addressing modes of the add instruction, for instance. I don't know if they spent any time figuring out whether the compiler would have a hard time using them, or even confirming whether the compiler would never use them - just that compiling Unix didn't use them.

...

Their motivation was simplifying the hardware so they could have dedicated hardware for parallel processing, a concept that was put to good use in the Amiga.
 
x86 compilers can now use a lot of the fancy CISC instructions, which internally in the CPU are just macros for sets of RISC-like instructions anyway.


One thing to realise is that this CISC vs RISC discussion is from a time when the cost of a single transistor was still important. Moving complexity from the chip to the compiler could result in a more cost-effective solution at that time.
In recent times, when cache memories use the majority of the transistor budget, this reasoning is not valid anymore.

Also, nowadays when extensions to a CPU instruction set are made, I think the compiler side is always included, to be sure the extensions can be used effectively.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: psxphill on June 20, 2014, 12:06:16 AM
Quote from: biggun;767143
But how do you know that the microcode lines are not like RISC?

How the 68000 works is in the patent. RISC instructions aren't full of flags like the 68000 microcode is. If anything it's a VLIW processor, but it's not RISC.
 
Quote from: biggun;767143
If you call a Pentium Pro a RISC CPU with a CISC decoder -

I didn't, I said the CISC instructions were translated into micro-ops at runtime. Translated means that as each instruction is fetched, the frontend writes a new program and stores it in fast cache RAM, which the backend then fetches, decodes and executes.
 
Quote from: biggun;767143
This concept of translating CISC opcodes into an internal format is not new.
Every CISC CPU has done this since the ice age.

It's not new; it's been around since the 1990s. But it was new then, and it's a different concept entirely.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: commodorejohn on June 20, 2014, 12:50:27 AM
Quote from: psxphill;767158
I didn't, I said the CISC instructions were translated into micro-ops at runtime. Translated means that as each instruction is fetched, the frontend writes a new program and stores it in fast cache RAM, which the backend then executes.
Does it really actually write the sequence to an internal writable control store? I'd think it would be simpler to just execute directly from an internal ROM, but I guess maybe that wouldn't have been fast enough...?
 
Quote
It's not new; it's been around since the 1990s. But it was new then.
It's been around much, much longer than that, actually. Mainframes and minis had been doing it since the '70s at least.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: biggun on June 20, 2014, 09:05:23 AM
Quote from: commodorejohn;767160
Does it really actually write the sequence to an internal writable control store?

Chips like the Pentium Pro / Pentium II, AMD K6, Pentium III etc. do not do this.
They just execute the code - just like the 68000 did.

What these CPUs, like the Pentium Pro, do is "mark" superscalar possibilities in the I-cache.
Btw, the 68K Apollo / Phoenix does the same.

Newer cores like the P4 (NetBurst) started to write trace caches where they cache micro-ops.
But not all new cores do this. The uop caches are expensive in hardware, and this concept was
often not used by later chips.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: biggun on June 20, 2014, 09:11:28 AM
Quote from: psxphill;767158

I didn't, I said the CISC instructions were translated into micro-ops at runtime. Translated means that as each instruction is fetched, the frontend writes a new program and stores it in fast cache RAM, which the backend then fetches, decodes and executes.

All CISC chips have always translated the instructions that the user sees into internal signals.
It has to be like this.
In the case of multi-cycle instructions, these were often translated into one operation per cycle.

Today many CISC chips have several execution units.
These units can be EA units or ALU units.
These units can be organized vertically - like in the 68060, some VIA x86 chips, or the Intel Atom - or horizontally, like in some other x86 chips.

Having more units is always good for increasing performance,
as you see in the example of the 68060, which did nearly every EA calculation for free.
Vertical organisation has the advantage of being able to hide cache latency more easily.

A horizontal unit layout works well only if the core has strong out-of-order capabilities. If you want to go out of order, laying your units out horizontally also makes this a little bit easier.


Someone said "today all CISC cores are RISC cores with a CISC decoder".
I wanted to clear this up.
Today's CISC cores are not RISC - they have an advanced design, that's all.


The method of translating CISC user instructions (what programmers write) into internal execution codes is not new - this concept has been there since the dawn of the computer age. Every CISC chip did this.

Some use microcode for this.
Some even use millicode on top of this.
Some hardwire it.
But the translation was and is always there.



Listen, I did not want to attack anyone.
But I work as a CPU designer for a living.
I just wanted to explain some of the stuff which seemed to be causing confusion here.
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: persia on June 20, 2014, 02:56:11 PM
I haven't touched assembler since the old single-core days; how do you program multicore chips?
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: freqmax on June 20, 2014, 03:50:22 PM
@biggun, "But I work as CPU designer for a living." What kind of CPU do you design? I thought that was something only very few companies did..
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: biggun on June 20, 2014, 04:34:30 PM
Quote from: freqmax;767208
@biggun, "But I work as CPU designer for a living." What kind of CPU do you design? I thought that was something only very few companies did..

I work for a US company with a three-letter name
that produces the biggest and most expensive
CISC chips, and which also produces big and expensive RISC chips.

I created a mainboard chip,
I worked on accelerator chips for the CISC brand,
and I did parts of two of the latest big RISC chips.

But my personal evil world domination plans are this:
http://www.apollo-core.com
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: nicholas on June 26, 2014, 10:29:29 PM
Quote from: biggun;767212
I work for a US company with a three-letter name
that produces the biggest and most expensive
CISC chips, and which also produces big and expensive RISC chips.

I created a mainboard chip,
I worked on accelerator chips for the CISC brand,
and I did parts of two of the latest big RISC chips.

But my personal evil world domination plans are this:
http://www.apollo-core.com


KFC chips are delicious. ;)
Title: Re: What's so bad about Intel 8086 in technical terms?
Post by: wawrzon on June 27, 2014, 01:57:55 AM
Quote from: biggun;767212

But my personal evil world domination plans are this:
http://www.apollo-core.com

Gunnar, you are posting too much! Back to work with you, we wanna see results real soon!