
Author Topic: What's so bad about Intel 8086 in technical terms?  (Read 21044 times)


Offline psxphill

Re: What's so bad about Intel 8086 in technical terms?
« Reply #14 from previous page: June 16, 2014, 12:59:43 PM »
Quote from: freqmax;766697
Seems the conclusion on x86 is that it was all haphazard and then nobody wanted to do a clean break.

Most successful things are haphazard. The last genuinely good design I have seen is the PlayStation, but even that has some hardware bugs that had to be maintained throughout the console's life, because fixing them would hurt compatibility.
 
Quote from: freqmax;766823
So now that processors have a frequency ceiling, the businesses that stay with x86 will see their competitors run other stuff way faster due to efficiencies... ;)

x86 has always run faster than ARM; the only thing ARM has is lower power consumption, which is very important in a phone, tablet or handheld games console. When the device is constantly tethered to the mains, it becomes a much less important consideration. I have an ARM-powered NAS because it's cheap and quiet, but it's woefully underpowered.
 
Intel have managed to get power usage for their phone chipsets down a lot in the last few years, though. In some cases they have performed identically with lower power; ARM continues to dominate the market because of momentum.
« Last Edit: June 16, 2014, 01:21:54 PM by psxphill »
 

Offline psxphill

Re: What's so bad about Intel 8086 in technical terms?
« Reply #15 on: June 16, 2014, 08:33:43 PM »
Quote from: bloodline;766855
But as you will find if you try a low power intel chip, when they get the power usage down to ARM levels they struggle to offer the performance that ARM can. The converse is also true, as ARM ramp up performance, power usage increases to Intel x86 levels.

The benchmarks I saw showed identical performance, with Intel drawing less power. Supposedly the problem for Intel today is that they haven't got a chipset with 4G support.
 
The ARM architecture has changed a lot since the beginning; it's not a simple RISC processor anymore.
 

Offline psxphill

Re: What's so bad about Intel 8086 in technical terms?
« Reply #16 on: June 16, 2014, 11:13:36 PM »
Quote from: bloodline;766910
Hahahahah, there's no such thing as CISC and RISC anymore; all modern processors are a hybrid of these two concepts.

There is such a thing as RISC; it just happens that ARM no longer fits the description.
 
Due to Moore's law, CISC processors now have room for lots of cache and registers, which used to be available only to RISC processors because their cores took up less chip space. But those features weren't what defined RISC.
 

Offline psxphill

Re: What's so bad about Intel 8086 in technical terms?
« Reply #17 on: June 17, 2014, 09:48:34 PM »
Quote from: freqmax;766975
If the C64 can run Unix, then surely an ARM CPU can too.

The C64 can't run Unix; it can run a multitasking OS that has a cut-down, POSIX-ish C runtime. A lot of work went into that for pretty much no reward.
 

Offline psxphill

Re: What's so bad about Intel 8086 in technical terms?
« Reply #18 on: June 18, 2014, 11:18:43 AM »
Quote from: matthey;767011
load/store architecture = RISC
register memory architecture = CISC

That is only how they are defined now, because all of the other things that made a chip RISC have since been taken on by CISC processors, to the point where it largely makes no difference whether a design is RISC or CISC.
 
RISC was load/store because it allowed the instruction decoding to be simpler, which meant you didn't have to use microcode, which at the time allowed higher instruction throughput. Now RISC processors have complex instruction decoding, and both RISC and CISC can be microcoded or not.
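 
To make the load/store distinction concrete, here is a toy sketch in C (my own illustration on an invented mini machine, not any real instruction set): on a load/store machine, arithmetic only ever touches registers, so operating on memory takes explicit loads and stores, while a register-memory machine folds the memory access into the add itself.
 
Code:
#include <stdio.h>

/* Toy illustration of load/store vs register-memory. Invented for
   illustration; not any real ISA. */

static int mem[16] = { [0] = 40, [1] = 2 };  /* pretend data memory   */
static int reg[4];                           /* pretend register file */

/* RISC style: arithmetic only ever reads and writes registers. */
static void load_r (int rd, int addr)       { reg[rd] = mem[addr]; }
static void store_r(int rs, int addr)       { mem[addr] = reg[rs]; }
static void add_rr (int rd, int ra, int rb) { reg[rd] = reg[ra] + reg[rb]; }

/* CISC style: the add itself may take a memory operand, so one
   "instruction" hides an implicit load. */
static void add_rm(int rd, int addr)        { reg[rd] += mem[addr]; }

int main(void)
{
    /* Load/store machine: mem[2] = mem[0] + mem[1] takes four instructions. */
    load_r(0, 0);
    load_r(1, 1);
    add_rr(2, 0, 1);
    store_r(2, 2);

    /* Register-memory machine: the same job in three, because the add
       reads memory directly. */
    load_r(3, 0);
    add_rm(3, 1);
    store_r(3, 3);

    printf("load/store: %d, register-memory: %d\n", mem[2], mem[3]);
    return 0;
}
 
Both reach the same answer; the difference is purely in how much work a single instruction is allowed to do, which is exactly what the decoder has to cope with.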
 
The only RISC processor that I like is 32-bit MIPS, as all the others are horribly complex.
« Last Edit: June 18, 2014, 11:33:07 AM by psxphill »
 

Offline psxphill

Re: What's so bad about Intel 8086 in technical terms?
« Reply #19 on: June 18, 2014, 01:12:13 PM »
Quote from: TeamBlackFox;767033
What's so complex about MIPS64 compared to MIPS32, other than the extended modes for 64-bit addressing and such?

I'm sure 64-bit MIPS is better on a technical level (more bits is better, right?), but I just prefer the 32-bit version. I thought that now the thread is derailed, I'd throw in my emotional preference.
 

Offline psxphill

Re: What's so bad about Intel 8086 in technical terms?
« Reply #20 on: June 19, 2014, 08:54:01 AM »
Quote from: NorthWay;767087
Much of the original thinking behind RISC designs was that compilers had a hard time using all the fancy CISC instructions.

Berkeley RISC started because they noticed that when compiling Unix they were only using 30% of the 68000 instruction set; they weren't using all the addressing modes of the add instruction, for instance. I don't know if they spent any time figuring out whether the compiler would have a hard time using them, or even confirming that the compiler would never use them, just that compiling Unix didn't use them.
 
They then went on to design a CPU with register windows, which the people writing the compilers realised was a terrible design choice. Investing in your compiler and CPU design before you commit to it is very important; adding instructions because you can come up with a single assembler fragment that performs better is a naïve and essentially terrible idea.
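 
To illustrate what register windows were trying to achieve, here is a minimal sketch in C (sizes and conventions are invented, loosely in the spirit of Berkeley RISC and SPARC): a call just slides a window over a large register file, so the caller's outgoing registers become the callee's incoming ones and arguments never touch memory, until the windows run out and have to be spilled.
 
Code:
#include <stdio.h>

/* Toy register-window sketch. Sizes and conventions are invented for
   illustration, loosely in the spirit of Berkeley RISC / SPARC. */

#define WINDOW  8   /* registers visible per window                */
#define OVERLAP 4   /* caller "out" regs alias the callee's "in"s  */
#define FILE_SZ 64  /* total physical register file                */

static int regfile[FILE_SZ];
static int cwp = 0;  /* current window pointer into regfile */

/* Visible register r of the current window. */
static int *reg(int r) { return &regfile[cwp + r]; }

/* A call slides the window so the caller's last OVERLAP registers
   become the callee's first OVERLAP registers: arguments are passed
   without touching memory. A real CPU would spill old windows to
   memory once the register file fills up. */
static void call(void) { cwp += WINDOW - OVERLAP; }
static void ret (void) { cwp -= WINDOW - OVERLAP; }

int main(void)
{
    *reg(4) = 40;   /* caller puts arguments in its "out" registers */
    *reg(5) = 2;

    call();
    *reg(2) = *reg(0) + *reg(1);  /* callee sees them as "in" regs 0,1 */
    *reg(0) = *reg(2);            /* return value goes back via overlap */
    ret();

    printf("returned: %d\n", *reg(4));  /* caller reads it from reg 4 */
    return 0;
}
 
Part of the compiler writers' complaint, as I understand it, was that a fixed window size fits no real call graph well, and the hardware is spent on something a good register allocator does better.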
 
They coined the term, but there was prior art. I believe the CDC 6600 (http://en.wikipedia.org/wiki/CDC_6600) was the earliest example of what inspired RISC. Their motivation was simplifying the hardware so they could have dedicated hardware for parallel processing, a concept that was put to good use in the Amiga.
 
x86 compilers can now use a lot of the fancy CISC instructions, which the CPU internally treats as little more than macros for sets of RISC-like micro-ops anyway.
« Last Edit: June 19, 2014, 09:10:21 AM by psxphill »
 

Offline psxphill

Re: What's so bad about Intel 8086 in technical terms?
« Reply #21 on: June 19, 2014, 11:14:24 AM »
Quote from: biggun;767121
So was the 68000 already a RISC chip?

No, microcode isn't like RISC. It's just a table that the CPU uses as part of running the standard opcodes.
 
Micro-ops are what I was referring to, so the 68060 and anything x86 since the Pentium Pro. Opcodes are fetched, then translated into one or more micro-ops, which are stored in a cache and then executed by a dedicated core. In theory they could strip out the front end and let you write in micro-ops directly, but nobody does that because it would be terrible for compatibility. Modern CISC gives you the best of both worlds: you can completely redefine your internal RISC-like architecture every time, but you still have backward compatibility with code written twenty years ago.
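 
To show what I mean by the translation, here is a toy frontend/backend sketch in C (the micro-op encoding and names are invented, not Intel's or Motorola's): a CISC-style "add register, memory" instruction is decoded into simpler micro-ops, which a separate execution loop then runs without knowing anything about the original instruction set.
 
Code:
#include <stdio.h>

/* Toy frontend/backend split: a CISC-style instruction is translated
   into simpler micro-ops that a separate backend executes. Encoding
   and op names are invented for illustration. */

enum uop_kind { U_LOAD, U_ADD, U_STORE };

struct uop { enum uop_kind kind; int dst, src, addr; };

static int mem[8] = { 40, 2 };
static int reg[4];

/* Frontend: "decode" one CISC-ish op, add rd,[addr], into micro-ops. */
static int decode_add_rm(struct uop *buf, int dst, int addr)
{
    buf[0] = (struct uop){ U_LOAD, 1, 0, addr };  /* r1 = mem[addr] */
    buf[1] = (struct uop){ U_ADD, dst, 1, 0 };    /* rd += r1       */
    return 2;  /* number of micro-ops emitted */
}

/* Backend: executes micro-ops, knowing nothing about the original ISA. */
static void execute(const struct uop *buf, int n)
{
    for (int i = 0; i < n; i++) {
        switch (buf[i].kind) {
        case U_LOAD:  reg[buf[i].dst]  = mem[buf[i].addr]; break;
        case U_ADD:   reg[buf[i].dst] += reg[buf[i].src];  break;
        case U_STORE: mem[buf[i].addr] = reg[buf[i].dst];  break;
        }
    }
}

int main(void)
{
    struct uop buf[4];
    reg[0] = mem[1];                   /* r0 = 2     */
    int n = decode_add_rm(buf, 0, 0);  /* add r0,[0] */
    execute(buf, n);
    printf("r0 = %d\n", reg[0]);       /* prints 42  */
    return 0;
}
 
Note the backend only ever sees micro-ops, so it can be redesigned freely as long as the frontend keeps accepting the old opcodes.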
« Last Edit: June 19, 2014, 11:20:11 AM by psxphill »
 

Offline psxphill

Re: What's so bad about Intel 8086 in technical terms?
« Reply #22 on: June 20, 2014, 12:06:16 AM »
Quote from: biggun;767143
But how do you know that the microcode lines are not like RISC?

How the 68000 works is described in its patent. RISC instructions aren't full of flags the way the 68000's microcode is. If anything it's a VLIW-style format, but it's not RISC.
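 
To show what I mean by "full of flags", here is a hedged sketch in C (the field names are invented, not the 68000's actual microword layout): a horizontal microcode word is a bundle of control-line bits that all fire in the same cycle, which is much closer to a VLIW bundle than to an encoded RISC instruction.
 
Code:
#include <stdio.h>

/* Sketch of why a microcode line looks more like VLIW than RISC: a
   horizontal microword is a set of control-line flags that all take
   effect in one cycle. Field names invented; not the 68000's real
   microcode format. */

struct microword {
    unsigned alu_op     : 3;  /* which ALU function             */
    unsigned latch_a    : 1;  /* load ALU input latch A         */
    unsigned latch_b    : 1;  /* load ALU input latch B         */
    unsigned mem_read   : 1;  /* drive a bus read cycle         */
    unsigned mem_write  : 1;  /* drive a bus write cycle        */
    unsigned reg_write  : 1;  /* write ALU result to a register */
    unsigned update_ccr : 1;  /* update condition codes         */
    unsigned next_row   : 7;  /* address of the next microword  */
};

int main(void)
{
    /* One microword asserts several control lines at once, the way a
       VLIW bundle issues several operations in parallel. */
    struct microword add_step = {
        .alu_op = 2, .latch_a = 1, .latch_b = 1,
        .reg_write = 1, .update_ccr = 1, .next_row = 0x15,
    };
    printf("flags firing this cycle: latch_a=%u latch_b=%u "
           "reg_write=%u update_ccr=%u\n",
           add_step.latch_a, add_step.latch_b,
           add_step.reg_write, add_step.update_ccr);
    return 0;
}
 
A RISC instruction, by contrast, is one compactly encoded operation; the decoder expands it into those control signals itself.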
 
Quote from: biggun;767143
If you call a Pentium Pro a RISC CPU with a CISC decoder -

I didn't; I said the CISC instructions were translated into micro-ops at runtime. Translated means that as each instruction is fetched, the frontend writes a new program and stores it in fast cache RAM, which the backend then fetches, decodes and executes.
 
Quote from: biggun;767143
This concept of translating CISC opcodes to an internal format is not new.
Every CISC CPU has done this since the ice age.

It's not new; it's been around since the 1990s. But it was new then, and it's a different concept entirely.
« Last Edit: June 20, 2014, 12:53:45 AM by psxphill »