Amiga.org
Amiga computer related discussion => General chat about Amiga topics => Topic started by: ElPolloDiabl on September 01, 2012, 04:48:10 AM
-
These things are expensive, and those lucky enough to use them reckon they have plenty of power. I suppose at 8 cores and 3 GHz or over, that's plenty of power.
See this link:
http://www.theregister.co.uk/2012/08/31/ibm_power7_plus_processors/ (http://www.theregister.co.uk/2012/08/31/ibm_power7_plus_processors/)
I wish they would come back to the consumer market, but I'm wondering whether they're no longer a generation (more often half a generation) ahead of the competition.
Are CPU bragging rights important?
By the way, some people (only one or two) are still a bit delusional about what happened to Amiga. The business failed; get over it. Amiga could have been lots of things, but the last real chance was with Escom.
I have a little bitterness, but I don't let it bother me. I like it as a niche system; some people prefer it as a hobby.
-
Oh, huh, I was going to kvetch that IBM PPCs don't support AltiVec, outside the G5, and therefore it'd be annoying for running software that supported it, but I looked and they've actually incorporated it into POWER6 on up. Whaddya know.
Yeah, it is too bad they have no interest in the desktop market anymore - do they even still make POWER-based workstations, or did that end when they sold the PC business to Lenovo?
-
Not sure I would want a Power7... One of these though: http://en.wikipedia.org/wiki/Gulftown_(microprocessor)
-
Oh, huh, I was going to kvetch that IBM PPCs don't support AltiVec, outside the G5, and therefore it'd be annoying for running software that supported it, but I looked and they've actually incorporated it into POWER6 on up. Whaddya know.
AltiVec is also supported in the Xbox 360 and PS3 CPUs :) They're a bit cheaper, but not really your typical desktop CPU design.
I think IBM gave up the desktop when they realised how small the profit margins were going to be. Better to make CPUs for high-end servers, mainframes, routers/switches/networking and automotive usage all of which have much larger profit margins.
-
I would like to play around with a PPC desktop running OpenSolaris PPC; it would give me plenty of fun hours.
So a PPC-based, normally priced desktop would be fun to see again. Not much of a market for them, I'm afraid :(
-
I think IBM gave up the desktop when they realised how small the profit margins were going to be. Better to make CPUs for high-end servers, mainframes, routers/switches/networking and automotive usage all of which have much larger profit margins.
And collect U.S. Government charity handout money to oligopolies to fund their research and development ($244 million from DARPA). We love and support our big-business oligopolies/monopolies in the U.S., even though our hypocritical politicians (all the way to the top) say they are for small business.
If the x1000 can sell for $3000, maybe Hyperion should have a Power 7 thrown into a computer. They could bring out the x10000, supporting 1 of its 32 threads/cores, and sell it for $30,000 with the AmigaOS 4 faithful lining up.
-
Ya all a bunch of dreamers!!!
-
Not sure I would want a Power7... One of these though: http://en.wikipedia.org/wiki/Gulftown_(microprocessor)
That's way old:
http://en.wikipedia.org/wiki/List_of_Intel_Core_i7_microprocessors#.22Gulftown.22_.2832_nm.29
Sandy Bridge-E is newer:
http://en.wikipedia.org/wiki/List_of_Intel_Core_i7_microprocessors#.22Sandy_Bridge-E.22_.2832_nm.29
It supports faster memory & AVX: http://en.wikipedia.org/wiki/Advanced_Vector_Extensions
-
Ya all a bunch of dreamers!!!
Why? Let PowerPC chips be developed, and we might see some in action with MorphOS, AmigaOS or AROS.
-
If the x1000 can sell for $3000
It can't. Simple as that.
-
It can't. Simple as that.
It was $2700, but they already did. They sold their entire first run. Which is why they're looking into a second.
Not saying I think it's anything like a reasonable price, but the fact is they did sell.
-
It was $2700, but they already did. They sold their entire first run. Which is why they're looking into a second.
Not saying I think it's anything like a reasonable price, but the fact is they did sell.
And they're rapidly selling their second run too. I have one waiting for me at home.
So, not only did they sell they are also selling. :)
-
Was the price on the second run any lower, out of curiosity?
-
I'd have to check. I paid over $3k in total including shipping, but I also ordered several other things too.
-
Power 7 chips have the TDP of a nuclear reactor (they only run, AFAIR, with water cooling) and the price of a small hatchback car... Which means it's the logical choice for the X2000 :D
-
And they're rapidly selling their second run too. I have one waiting for me at home.
So, not only did they sell they are also selling. :)
So they are closing in on 100 total units sold?
-
So they are closing in on 100 total units sold?
Hahaha. Sorry, I could not help laughing, you just made today's best comment ;)
Edit: (no offence intended to anyone)
-
I am actually curious - I can't remember, but I thought the first run was mentioned to have been 100 units? In that case they'd have already surpassed it...
-
A faster way to run software that simply doesn't exist to take advantage of it, at least on the Amiga side.
Give me a 9 GHz PPC CPU on my SAM and I'll still be asking for software that even taxes the 600 MHz chip that's on it now.
It's like overclocking a calculator when things are essentially a "one trick pony" ordeal.
-
Any CPU engineer will tell you instantly we have hit the wall and all this multi-core desktop CPU stuff is just a scam really. The maximum number of cores without losing efficiency of code execution is effectively 3. Not 4, not 8, but 3. You cannot utilise much more than this without actually starting to waste cycles of CPU time delaying/setting up use of threads to run on other cores.
There are many, many situations where the 4.2 GHz AMD PC will outgun an Intel i7-3770K, fact. One of my lecturers at uni was pretty high up in CPU design for IBM in the 80s and you can bet your ass they tested every possible scenario for server and desktop OS efficiency as far as parallel processing goes. For desktop computers 3 or 4 is about it. So Moore's law is f**ked well and truly unless we start seeing 5 and 6 GHz CPUs QUICKLY!
What you want in a desktop computer is intelligent design of the motherboard. The 360 did it, the PS3 did it; their gains come from a better architecture than the turbo-charged dinosaur that is the x86 PC motherboard of 2007. What you won't get is a better CPU than x86-64 for price/performance.
The new Xbox and the new PlayStation are both confirmed to have x86 64-bit CPUs already, and this means that those CPUs will be dropped into a much better motherboard architecture than any PC for sale in 2013/2014 with the same CPU, to compete on price/performance.
And so the console vs x86 PC merry-go-round continues its cycle :)
@OP Anyway, Amiga died the day Commodore died. Escom were never going to catch up after 2 years of limbo waiting to own an already lukewarm, late update to the A1000 chipset (AGA); they just printed some new logos and stuck them on the same old-hat machines in 1996.
Actually, Amiga died the day the A500 Plus was launched (and was diagnosed with cancer of the incompetent engineer and manager when I saw the specs of the butt-ugly, cheap and plasticky A500 joke after 12 months of zero marketing for the A1000 in 1986 *PUKE*).
-
So they are closing in on 100 total units sold?
You mean 10 times more than C-USA has sold? No idea.
-
Moore's Law is boned in the long run anyway; there's only so far you can go before you start hitting hard physical limits on circuit density, and I'm willing to bet that we'll hit practical limits well before that. Of course, nobody wants to believe that unless forced to, which means that the software industry will continue to write shoddier and shoddier software, counting on ever-increasing computing power, until the day they finally wake up to the headline that we've actually reached peak computing capacity. Then everybody will start running around like chickens with their heads cut off, panicking about what's going to happen now that you can't just wait a year for better hardware to run your bloaty software on, and trying every possible approach they can imagine to avoid having to face the fact that suddenly efficiency will matter again.
And I will laugh.
-
All of you ought to check out the specs for the WiiU's CPU.
http://www.vgleaks.com/world-premiere-wii-u-specs/
http://www.eurogamer.net/articles/2012-08-30-how-powerful-is-the-wii-u-really
I think you'll be surprised.
This technology IS getting down to consumer products. The WiiU's processor is supposed to be slower than the XBOX360's or the PS3's, but it's out-of-order (whereas they are less powerful in-order processors).
If this was available in a PC it would be pretty formidable.
-
Power 7 chips have the TDP of a nuclear reactor (they only run, AFAIR, with water cooling) and the price of a small hatchback car... Which means it's the logical choice for the X2000 :D
Oh my God, it's Amiga, and it's selling. Even in small quantities. Unlike some.
-
Oh my God, it's Amiga, and it's selling. Even in small quantities. Unlike some.
Yet again Vox shows the true depths of his ignorance.
-
"Moore's Law is boned in the long run anyway"
I don't think computational power is in any danger of peaking anytime soon.
It will just be moved to more multiprocessor tech.
Maybe the # of cores will begin to double every 2 years instead of the computational speed of each core. It's still doubling processor power.
8 and 16 core desktops will be commonplace soon enough. But yes, this will just lead to less and less efficient software I think.
-
I could see myself that multicore was an excuse for the fact that CPUs were not getting faster some years back. However, even going multi-core has physical limitations, since you still need physical space to stack all the cores onto. Suppose the CPU is no longer a flattish rectangle but becomes a giant square IC! :-)
However it is done, both x86 and PPC face the same limitations: they are old CPU designs with modern ideas retrofitted onto them, even the PC I think. x86[64] is just a hacked-up 16-bit CPU from the '80s and PPC has processor limitations of the '90s...
If computers really want to make use of multicore they will need proper multi-core CPUs. x86 and even PPC are useless at this point, especially x86, which gains as many CPU opcode extensions as it gains cores in every redesign. Computers need a CPU that can run code on multiple cores so that its speed is multiplied, without all the bottlenecks of trying to run what is at its basic level a '70s instruction set that gets converted inside the CPU. And there's a warning for you!
Thanks to Windows, which does not even acknowledge DOS anymore, computer technology has been held back for years because of this stupid obsession with compatibility. Throw off the shackles! It looked like the PC was moving forward, but no, they are still using an old CPU design. It's like they insist on using wheels to transport the CPU where a turbo jet should be powering it! :-?
Apple showed Bill Gates you can go from PPC to x86 in under 6 months. So what's stopping x86 Windows going to a TNG CPU next year? What? They forgot to design it! They did, but bolted x86 emulation back on? And we are back where we started... :-)
-
"Moore's Law is boned in the long run anyway"
I don't think computational power is in any danger of peaking anytime soon.
It will just be moved to more multiprocessor tech.
Maybe the # of cores will begin to double every 2 years instead of the computational speed of each core. It's still doubling processor power.
8 and 16 core desktops will be commonplace soon enough. But yes, this will just lead to less and less efficient software I think.
Or a redesign of how software is coded, like they did for gfx cards with hundreds of cores.
-
I don't think computational power is in any danger of peaking anytime soon.
It will just be moved to more multiprocessor tech.
Maybe the # of cores will begin to double every 2 years instead of the computational speed of each core. It's still doubling processor power.
8 and 16 core desktops will be commonplace soon enough. But yes, this will just lead to less and less efficient software I think.
I'm not going to say "in the immediate future" or "x years from now" (though I'd bet on sooner rather than later, myself.) Maybe it isn't coming until ten, twenty, sixty years from now, but it is coming. Multicore is great (something we should've been doing pervasively a decade ago, honestly,) but it's nothing at all like a perfect, problem-solved solution to the issue of peak circuit density.
For one thing, there's still the problem of things needing to be implemented in actual, physical silicon - with multicore it doesn't have to be denser, but if it's not it's going to take more die space, and the larger the die, the more signal transfer times become an issue - you can't just have a die a foot across filled with 80 quadzillion cores and expect them to all function in perfect sync with each other, at gigahertz-plus speeds. There's also the matter of control logic for the whole thing, like cache-coherence logic - the complexity of which is going to go up with every core by something like N² - N. The further you push N, the more absurd that's going to get. And speaking of complexity, there's the software side to consider - the operating system that has to schedule threads, pass messages, and arbitrate access to system resources across multiple cores. I don't know that that will necessarily be as bad as the hardware side of things, but then it's still an issue of complexity that's going to scale at least linearly with the number of cores - and worse, it's one that's going to eat up the very CPU time you gain by adding them! Which makes multicore as a whole ultimately a prospect of diminishing returns.
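To put rough numbers on that (purely illustrative figures on my part - an assumed 10% serial fraction, nothing measured from real hardware), here's a tiny C sketch that prints how the number of core pairs the coherence logic has to track (N² - N) explodes while the best-case Amdahl's-law speedup flattens out:

/* Illustrative only: assumed 10% serial fraction, no real-world data. */
#include <stdio.h>

int main(void)
{
    const double serial = 0.10;                 /* assumed serial fraction */
    for (int n = 2; n <= 64; n *= 2) {
        long pairs = (long)n * (n - 1);         /* N^2 - N coherence pairs */
        double speedup = 1.0 / (serial + (1.0 - serial) / n);
        printf("%2d cores: %4ld coherence pairs, %5.2fx best-case speedup\n",
               n, pairs, speedup);
    }
    return 0;
}

Under those assumptions, even at 64 cores the best case is still under a 9x speedup, while the coherence bookkeeping has grown to thousands of pairs.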
(All of that is, I'm sure, less true for special-purpose tasks like 3D rendering/shading than it is for general-purpose computing, but the problem is that general-purpose computing is what it is most being falsely relied on to fix.)
So no. Multicore processing is great for what it is, but it isn't a solution to the problem of never being able to reach infinity, and relying on it to be that will just mean putting off the confrontation further, and making it harder when it finally does hit.
Not that that will stop anybody.
-
All of you ought to check out the specs for the WiiU's CPU.
http://www.vgleaks.com/world-premiere-wii-u-specs/
http://www.eurogamer.net/articles/2012-08-30-how-powerful-is-the-wii-u-really
I think you'll be surprised.
This technology IS getting down to consumer products. The WiiU's processor is supposed to be slower than the XBOX360's or the PS3's, but it's out-of-order (whereas they are less powerful in-order processors).
I'm disappointed to hear that only two tablets will be able to run concurrently because of bandwidth issues:
"According to Nash, bandwidth issues associated with having four concurrent streams coming from one Wii U machine to four GamePad screens stamped out the idea of support for four GamePad controllers."
That means two player games instead of four and I'm wondering if that will affect online interaction as well.
"But at the moment it's purely limited to processing and signal transmission bandwidth and a combination thereof."
-
However it is done, both x86 and PPC face the same limitations: they are old CPU designs with modern ideas retrofitted onto them, even the PC I think. x86[64] is just a hacked-up 16-bit CPU from the '80s and PPC has processor limitations of the '90s...
If computers really want to make use of multicore they will need proper multi-core CPUs. x86 and even PPC are useless at this point, especially x86, which gains as many CPU opcode extensions as it gains cores in every redesign. Computers need a CPU that can run code on multiple cores so that its speed is multiplied.
Which is why we need ARM! HeHe :python:
-
Which is why we need ARM!
(My sarcasm detector isn't working very well - that was tongue-in-cheek, right?)
-
We have a Power7 IBM 720 where I work (along with a bunch of older Power6 and Power5 servers). I tend to think that it should be a pretty hefty piece of computing hardware, but it's slow! I personally think it's because the company I work for are retards and aren't using it correctly anyhow.
But yes, I think a Power7 based desktop would be pretty awesome.
slaapliedje
-
(My sarcasm detector isn't working very well - that was tongue-in-cheek, right?)
Yeah, we're getting a lot of ARM promotion here lately and it's not like the CPU is anything new.
We have a Power7 IBM 720 where I work (along with a bunch of older Power6 and Power5 servers). I tend to think that it should be a pretty hefty piece of computing hardware, but it's slow! I personally think it's because the company I work for are retards and aren't using it correctly anyhow.
I know the feeling. I've dealt with a lot of people who couldn't figure out how to properly use Radisys OS-9 based systems.
Your employers probably aren't implementing optimal coding solutions.
-
Aside from the rambling about what the future of CPU design is, the actual discussion about PPC & x86/64 is quite interesting.
I'm not sure what CPU ISA the PS4 or Xbox Next are going to use because I've not been working on them this time (last 10 months). The rumours around them are going all over the place so until there's something official announced or I get to work on one :) I'm not willing to bet. It's been everything from 16-core PPC to 4-core 8-thread x86/64 and AMD/Ati to nVidia GPUs so who knows what will actually be in the Xbox 720/Next/Durango.
You can get some stunning, genuinely stunning performance out of either ISA and games consoles have the advantage of not running a full multitasking OS etc. There's certainly mileage left in PPC and x86/64 for the consoles and the desktop.
-
Yeah, it's strange.
We know what the Xbox720's GPU will be based on, but no one's certain about the CPU.
We have a really good idea of what the WiiU will be like.
And my guess is that the PS4 will follow a much more evolutionary path than previous Sony consoles (remaining Cell-based).
-
Power 7 chips have the TDP of a nuclear reactor (they only run, AFAIR, with water cooling)
No, they run just fine with air. We have plenty of them at work. IBM knows how to move air...
There is a special cabinet that can be connected to water for claimed higher efficiency in the cooling.
-
IBM knows how to move air...
Now my sarcasm detector is failing.
THAT definitely was sarcasm, wasn't it?
-
Yeah, it's strange.
We know what the Xbox720's GPU will be based on, but no one's certain about the CPU.
We have a really good idea of what the WiiU will be like.
And my guess is that the PS4 will follow a much more evolutionary path than previous Sony consoles (remaining Cell-based).
Dunno, seen specs with PPC + Ati and seen Intel + nVidia so not sure what's really in the Xbox720.
However, fairly certain that the PS4 won't have Cell :) I heard they were going with an AMD CPU!
As I said, I think we can safely say that none of us have a clue until they're released, there's just too many conflicting rumours.
-
Dunno, seen specs with PPC + Ati and seen Intel + nVidia so not sure what's really in the Xbox720.
However, fairly certain that the PS4 won't have Cell :) I heard they were going with an AMD CPU!
As I said, I think we can safely say that none of us have a clue until they're released, there's just too many conflicting rumours.
We won't know about Sony's product till they're ready to tell us, but since IBM has some connection with AMD/ATI that wouldn't be completely surprising.
Imagine, an AMD-based device that's not built by GlobalFoundries.
As to Microsoft, I'm almost certain about the ATI GPU.
And, again, as Microsoft also uses IBM's foundries, who knows what CPU will pop up.
-
Yeah, we're getting a lot of ARM promotion here lately and it's not like the CPU is anything new.
Indeed; everybody wants to ride the hot new ARM wave that's surely going to last forever, and nobody seems to remember when the new wave was PPC... ;)
-
"DEC MicroPDP-11/23+ (15MHz, 32MB HD, 256KB RAM)"
Love that one, John.
I was interested in that CPU family before there were pre-built computers.
16bit before the 8bit IBM PC was introduced.
-
Yeah, I've been admiring the architecture from afar for years now. I'd love to have one of the full-fledged minicomputer models, but I've got neither the cash nor the space. This was just a very lucky find at the recycle center :)
-
"DEC MicroPDP-11/23+ (15MHz, 32MB HD, 256KB RAM)"
Love that one, John.
I was interested in that CPU family before there were pre-built computers.
16bit before the 8bit IBM PC was introduced.
The 68k is old and it was heavily influenced by the PDP-11/VAX-11, which would be ancient now but was quite an innovative design back then. I think it would be difficult to make fast on modern hardware but I think the 68k could be modernized and run well enough. A modern 68k would be the easiest to use and have the best code density of any "modernized" CPU (x86, ARM and PPC are old designs too). I think it could compete with ARM for small electrical devices. What do you think of this modernized 68k ISA:
OpenOffice Writer
http://www.heywheel.com/matthey/Amiga/68kF_PRM.odt
PDF
http://www.heywheel.com/matthey/Amiga/68kF_PRM.pdf
html
http://www.heywheel.com/matthey/Amiga/68kF_PRM.html
I know there is no multiprocessing or caching instructions at this point but they are more dependent on the implementation. What do you like and dislike? Any love for the 68k besides me?
-
Yet again Vox shows the true depths of his ignorance.
What ignorance? That the X1000 has sold its small offerings despite having a price in the range of the fake Amiga mini?
-
What ignorance? That the X1000 has sold its small offerings despite having a price in the range of the fake Amiga mini?
And if you believe that X1000s are running a nuclear power station, it's time to take your meds.
Power 7 is generations more advanced than the CPU in the X1000; hell, my Power 5 box would run rings round it.
-
The 68k is old and it was heavily influenced by the PDP-11/VAX-11, which would be ancient now but was quite an innovative design back then. I think it would be difficult to make fast on modern hardware but I think the 68k could be modernized and run well enough. A modern 68k would be the easiest to use and have the best code density of any "modernized" CPU (x86, ARM and PPC are old designs too). I think it could compete with ARM for small electrical devices. What do you think of this modernized 68k ISA:
OpenOffice Writer
http://www.heywheel.com/matthey/Amiga/68kF_PRM.odt
PDF
http://www.heywheel.com/matthey/Amiga/68kF_PRM.pdf
html
http://www.heywheel.com/matthey/Amiga/68kF_PRM.html
I know there is no multiprocessing or caching instructions at this point but they are more dependent on the implementation. What do you like and dislike? Any love for the 68k besides me?
http://www.freescale.com/webapp/sps/site/homepage.jsp?code=PC68KCF
-
The 68k is old and it was heavily influenced by the PDP-11/VAX-11, which would be ancient now but was quite an innovative design back then. I think it would be difficult to make fast on modern hardware but I think the 68k could be modernized and run well enough. A modern 68k would be the easiest to use and have the best code density of any "modernized" CPU (x86, ARM and PPC are old designs too). I think it could compete with ARM for small electrical devices. What do you think of this modernized 68k ISA:
OpenOffice Writer
http://www.heywheel.com/matthey/Amiga/68kF_PRM.odt
PDF
http://www.heywheel.com/matthey/Amiga/68kF_PRM.pdf
html
http://www.heywheel.com/matthey/Amiga/68kF_PRM.html
I know there is no multiprocessing or caching instructions at this point but they are more dependent on the implementation. What do you like and dislike? Any love for the 68k besides me?
Well Matt,
Unless there was a lot of revision (and I love the 68K and used to sell 68K based hardware), no contest.
An in order, cacheless processor vs a modern CPU?
We'd lose.
-
http://www.freescale.com/webapp/sps/site/homepage.jsp?code=PC68KCF
ColdFire is cool, Zylesea, but they won't sell you a V5 and they never produced the V6.
-
@zylesea
I'm aware of the ColdFire. Freescale weakened and stripped the 68k so much that they ruined it and lost most of the 68k supporters and programmers. They did finally add some useful instructions back in but what a feeble effort. I guess they were successful in their marketing attempt to drive 68k people to the PPC (or ARM and x86 more likely). I was talking about a robust powerful 68k ISA. The 68kF ISA I showed would increase the code density considerably and remove many short branches while allowing for nearly 100% backward compatibility of the 68k family. The CF accomplished none of these.
Well Matt,
Unless there was a lot of revision (and I love the 68K and used to sell 68K based hardware), no contest.
An in order, cacheless processor vs a modern CPU?
We'd lose.
I think full OoOE is wasteful, especially for an electricity-conserving device. Doing a few instructions like division out of order can make sense though. The 68060 is a proven excellent processor and could be scaled up with today's tech, modernized and enhanced. The 68kF has a lot of what I'm talking about. I need to finish the addressing modes and add floating point. The 68k has some great advantages that Motorola/Freescale never explored and instead threw away.
-
I'd love to see an updated 68k, myself - it's a terrifically friendly architecture. That's one of the biggest reasons I hope Natami does come to fruition - the massive chipset improvements are neat, but I really would like to see the 68k core in action.
-
I will agree with you about code density Matt.
But then I come from a 6809 background.
That processor was designed with position-independent, reentrant code in mind.
I can still write code for that that takes up a small fraction of what a "modern" CPU would use.
-
@Iggy
Almost every choice made with the 68kF improves code density. I would expect 5%-15% better code density than 68020 or ColdFire code. The ColdFire has some code density improvements (MVS/MVZ, MOV3Q, BYTEREV, etc.) but they are offset by what they took away (Byte and Word instruction sizes, addressing modes, bitfield instructions, etc.)
-
The only problem with the argument about the importance of code density is that memory is cheap.
-
Just use i7s, much better bang for buck.
-
Just use i7s, much better bang for buck.
Nah, Phenom II X4s (while they're still available).
I picked up a 3.2 GHz 955 for <$80.
Runs at 3.4-3.5.
X86 is cheap.
-
The only problem with the argument about the importance of code density is that memory is cheap.
Code density matters for small electrical devices, especially ones with batteries. That is why I talked about competing with ARM and not x86 on the desktop. I'm thinking of laptops, pads, netbooks, smart phones, embedded devices, fanless desktops where ARM leaves something to be desired and x86_64 is like taking a Mack truck to the grocery store. Better code density also means more instructions fit in the instruction cache and a smaller instruction fetch is needed. Less memory usage is still a small advantage in general, more so on low-end electrical devices.
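As a back-of-the-envelope illustration (the average instruction sizes below are assumptions for the sake of the example, not measured figures), a denser encoding simply fits more instructions into the same instruction cache:

/* Assumed averages only: ~3 bytes/instruction for a 68k-style encoding,
   4 bytes for a fixed 32-bit RISC encoding. */
#include <stdio.h>

int main(void)
{
    const double icache_bytes = 32.0 * 1024.0;  /* a common 32 KB I-cache */
    const double avg_dense = 3.0;               /* assumed 68k-style average */
    const double avg_fixed = 4.0;               /* fixed 32-bit encoding */

    printf("dense encoding : ~%.0f instructions cached\n", icache_bytes / avg_dense);
    printf("fixed 32-bit   : ~%.0f instructions cached\n", icache_bytes / avg_fixed);
    return 0;
}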
Just use i7s, much better bang for buck.
Can you show me how to program utilizing all the cores? I have a bug at work in a Windows file-sharing component (which is in use) that crashes. Can you disassemble the component if I send it to you and fix it for me? While you are at it, can you stop our GHz computers from hanging and being unresponsive for several seconds? I suppose I can upgrade to the i7 and it will probably be fast until Windows 9 comes out (the fix for the Windows 8 every-other-generation mistake). We still use Windows XP so we shouldn't need an i7 CPU, but Windows slows down the more we use it. Can you fix that too? Intel can always quadruple the number of cores, move to 128 bits and add a few terabytes of memory so we can finally have CPU Nirvana, since we couldn't get there by GHz alone. I would rather have an efficient, flexible CPU that I can program than a high-latency, resource-hogging DSP monster of a CPU, in the same way that I would rather drive a nimble little sports car with 300 HP than a fire-breathing Mack truck with 3000 HP. Unfortunately, most people choose the Mack truck because it's faster in a straight line for a few seconds (on an open road) and 3000 HP is a much bigger number than 300 HP. There aren't very many true CPU connoisseurs any more, just as sports car connoisseurs are a dying breed too :(.
-
Power 7 is generations more advanced than the CPU in the X1000; hell, my Power 5 box would run rings round it.
If I'm not mistaken, PA6T is a mobile PPC970 ... and IIRC, PPC970 is a Power6 stripped and modified for desktop?
So it is not generations behind, even if the Power7 runs circles around it (the PA6T might be better in performance per watt).
Other than that...
Nice to see that Freescale is finally doing something with the PPC. But as long as our niche OSs use only one core and one thread, we would get some 4000 MIPS from a 130,000 MIPS T4 chip. (It would be like running DOS on an i7 system.)
-
Can you show me how to program utilizing all the cores? I have a bug at work in a Windows file-sharing component (which is in use) that crashes. Can you disassemble the component if I send it to you and fix it for me? While you are at it, can you stop our GHz computers from hanging and being unresponsive for several seconds? I suppose I can upgrade to the i7 and it will probably be fast until Windows 9 comes out (the fix for the Windows 8 every-other-generation mistake). We still use Windows XP so we shouldn't need an i7 CPU, but Windows slows down the more we use it. Can you fix that too?
None of that is down to the CPU. You'd encounter the same issues running a super-68k CPU because it's software. When you don't have to deal with a modern OS and the thousands of processes it has to manage then you can get plenty of performance out of ANY of these architectures.
Bog it down then it will go slow.
Honestly everyone goes on about AmigaOS vs Windows 7/8 but frankly AOS does absolutely nothing in comparison to it. If, and it's obviously a hypothetical if, AmigaOS development had continued in parity with Windows and x86 over the years then we'd all be whining about the same things. It's not some wonder-CPU-architecture that made AOS usable, it was simply because it was extremely primitive compared to modern operating systems.
-
Any CPU engineer will tell you instantly we have hit the wall and all this multi-core desktop CPU stuff is just a scam really. The maximum number of cores without losing efficiency of code execution is effectively 3. Not 4, not 8, but 3. You cannot utilise much more than this without actually starting to waste cycles of CPU time delaying/setting up use of threads to run on other cores. ... For desktop computers 3 or 4 is about it. So Moore's law is f**ked well and truly unless we start seeing 5 and 6 GHz CPUs QUICKLY!
It depends on the desktop's use how it can share the load. Some applications can split their work into hundreds of small items to be processed in parallel, while some other tasks run fastest in one pipe. Some jobs can even be split across the SIMD and shader units of the system.
Easy to share/split are: rendering, video encoding, compiling, etc... I have even used ten-CPU clusters that work very well indeed.
The latest telecom chips have 32 cores, perhaps more. And on workstations you see things like multiple i7 chips on one motherboard. They are there because they work.
And finally... the latest overclocked CPUs run faster than 9 GHz, even if you can only buy 5 GHz parts from the (IBM) shop.
-
If I'm not mistaken, PA6T is a mobile PPC970 ... and IIRC, PPC970 is a Power6 stripped and modified for desktop?
So it is not generations behind, even if the Power7 runs circles around it (the PA6T might be better in performance per watt).
Nope, PA6T is a completely different design to the PPC970.
The PPC970 was derived from POWER4 IIRC.
Power7 is a very nice chip, but the closest we will get to it as consumers will probably be a next-generation console, if any of them use the basic core in their designs (e.g. Wii U, Xbox 720 - the PS4 is most likely x86 according to the rumours).
-
Honestly everyone goes on about AmigaOS vs Windows 7/8 but frankly AOS does absolutely nothing in comparison to it. If, and it's obviously a hypothetical if, AmigaOS development had continued in parity with Windows and x86 over the years then we'd all be whining about the same things. It's not some wonder-CPU-architecture that made AOS usable, it was simply because it was extremely primitive compared to modern operating systems.
Windows has a lot of features (and people really use 1% of them), but M$ has failed at basic things.
AOS is primitive? By modern standards, perhaps. But AOS is flexible and simple. To me it seems to offer enough to do all desktop tasks on top of it; we mainly need the SW on top (and to ease up SW development we need some things in the OS). Unless AOS is totally broken by its implementers, I doubt it will ever be as sluggish as the mainstream. (Not even memory protection should break responsiveness; that has been demonstrated by RTOSs.)
-
None of that is down to the CPU. You'd encounter the same issues running a super-68k CPU because it's software. When you don't have to deal with a modern OS and the thousands of processes it has to manage then you can get plenty of performance out of ANY of these architectures.
I can debug, disassemble, fix bugs and optimize code in the AmigaOS because the CPU is easy to use and the code is small. Programming is also easier on a flexible, low-latency CPU than on a DSP/SIMD-like high-latency CPU. The CPU does matter, to me at least. Give me a superscalar N68070 @ 500 MHz using the 68kF ISA and I'll be happy ;).
Honestly everyone goes on about AmigaOS vs Windows 7/8 but frankly AOS does absolutely nothing in comparison to it. If, and it's obviously a hypothetical if, AmigaOS development had continued in parity with Windows and x86 over the years then we'd all be whining about the same things. It's not some wonder-CPU-architecture that made AOS usable, it was simply because it was extremely primitive compared to modern operating systems.
The AmigaOS has the basics and it's extensible. That's better than being stuck with whatever bloat is thrown into Windows.
AOS is primitive? By modern standards, perhaps. But AOS is flexible and simple. To me it seems to offer enough to do all desktop tasks on top of it; we mainly need the SW on top (and to ease up SW development we need some things in the OS).
Yep. Simple and extensible.
Unless AOS is totally broken by its implementers, I doubt it will ever be as sluggish as the mainstream. (Not even memory protection should break responsiveness; that has been demonstrated by RTOSs.)
I think partial memory protection could be implemented (with MMU) and not affect responsiveness. I'm talking about protecting code and read only data but NOT copying messages. A full sandbox would likely impact the responsiveness.
-
Code density matters for small electrical devices, especially ones with batteries. That is why I talked about competing with ARM and not x86 on the desktop. I'm thinking of laptops, pads, netbooks, smart phones, embedded devices, fanless desktops where ARM leaves something to be desired and x86_64 is like taking a Mack truck to the grocery store. Better code density also means more instructions fit in the instruction cache and a smaller instruction fetch is needed. Less memory usage is still a small advantage in general, more so on low-end electrical devices.
Actually, ARM has definite advantages in that area.
It's very low power.
x86 isn't quite there yet.
And the 68K never was a low-power device.
So arguing the code density issue from that point makes little sense.
Can you show me how to program utilizing all the cores?
I noticed that a lot of people have mentioned code modularity.
In the '80s we had a 6809-based point-of-sale system that had about 255 memory-resident concurrent tasks that were all assigned priority levels (Microware OS-9 again).
This type of system would have moved very well into an SMP environment.
These days I still tend to code this way, writing small routines that can thread info to other modules and call other tasks.
With an SMP-capable OS this allows the operating system to spread tasks across multiple CPUs. And the software will still run in a single-CPU environment.
It's not so much writing code for multiple CPUs as it is writing code that can run better in a multiple-CPU environment.
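A minimal sketch of that idea in C with POSIX threads (the thread count and workload here are arbitrary choices for the example): split the work into small independent chunks, let the OS scheduler spread them across whatever cores exist, and the same code still runs on a single CPU - it just won't overlap.

#include <pthread.h>
#include <stdio.h>

#define N        1000000
#define WORKERS  4

static int data[N];
static long partial[WORKERS];

static void *worker(void *arg)
{
    long id = (long)arg;                       /* worker index passed by value */
    long sum = 0;
    for (long i = id; i < N; i += WORKERS)     /* each worker sums its own slice */
        sum += data[i];
    partial[id] = sum;
    return NULL;
}

int main(void)
{
    pthread_t t[WORKERS];
    for (long i = 0; i < N; i++)
        data[i] = 1;

    for (long id = 0; id < WORKERS; id++)
        pthread_create(&t[id], NULL, worker, (void *)id);

    long total = 0;
    for (long id = 0; id < WORKERS; id++) {
        pthread_join(t[id], NULL);             /* wait, then combine partial sums */
        total += partial[id];
    }
    printf("total = %ld (expected %d)\n", total, N);
    return 0;
}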
-
Any CPU engineer will tell you instantly we have hit the wall and all this multi-core desktop CPU stuff is just a scam really. The maximum number of cores without losing efficiency of code execution is effectively 3. Not 4, not 8, but 3. You cannot utilise much more than this without actually starting to waste cycles of CPU time delaying/setting up use of threads to run on other cores.
Apparently the GPU guys didn't get that memo. Top-end GPUs run half a million threads on hundreds of cores in parallel.
Building faster single threaded CPUs is getting more and more difficult.
The 5GHz variant of the Pentium 4 was cancelled because it was going to use 150W. They went to a more efficient design with multiple cores after that because it is far more power efficient.
Even mobile phones are quad core these days. Expect to get a lot more cores in the future.
-
PS4 is most likely x86 according to the rumours.
But it won't be *just* x86.
Their CTO mentioned what it has, in very vague terms, in a speech. If there's an x86 in there it'll just be one of many processors.
I took it to mean the PS4 will be x86 + Cell + GPU, with an FPGA thrown in for good measure.
The Xbox 720 is said to have a 16-core CPU that IBM designed, but no one seems to know what the cores are. According to a leaked Microsoft doc they could be x86 or ARM.
The Wii U is supposed to have 3 PPC cores, reportedly based on POWER7 cores.
-
@matthey
Yes, but that's my point. You can't compare AmigaOS from the 1980s to Windows 7 from 201X, hence my hypothetical example: IF AmigaOS had continued to be updated then it'd be in the same sluggish state.
I'm just saying that you're falling into the old-OS vs new-OS fanboy behaviour. You cannot compare the two, they're both OS's but from 30 years apart that do completely different jobs.
There are things to be said for the simplicity of AmigaOS, but if you like that sort of thing then you should look at HaikuOS and see how achieving its minimal feature set still requires some serious CPU performance before you start running any programs on it.
I agree that you can definitely take the 68k design and get a lot more performance out of it. The 68060 design was already going down the path that x86 successfully followed. Superscalar design, multiple ALUs, out-of-order execution, branch prediction, op-fusion + cache, pre-fetching, etc. - these are all things that x86 & PPC (and other ISAs) have successfully integrated, and they're as applicable to 68k as to anything else. Although some are more applicable to hard designs than to FPGA-based ones, apparently.
I think you're on the right track if you take something like the TG68 FPGA design and start to do things like improve it to make instructions run in a single cycle, add a 2nd ALU, improve the cache performance, ad infinitum.
Andy
-
Actually, ARM has definite advantages in that area.
It's very low power.
x86 isn't quite there yet.
And the 68K never was a low-power device.
So arguing the code density issue from that point makes little sense.
ARM has a simple decoder and consumes very little electricity, but the integer CPU could be more powerful. x86 is more powerful but has a very complex decoder wasting electricity. The 68k would fit in between, with a moderately complex decoder but similar integer performance to, if not better than, x86 (assuming basic enhancements like in the 68kF and no 64-bit for a low-power target). While consuming more electricity than ARM, the 68k has the best code density, which helps performance and allows for a smaller memory footprint. The 68060 had pretty low power consumption for its performance and time.
I have rated the 4 most common processors according to what I think is important for a modern integer CPU. The asterisks are stars with 5 asterisks being the best:
1) electrical consumption/decoder and pipeline simplicity
2) powerful integer instructions and addressing modes
3) conditional and branch hazard performance
4) code density
5) ease of use
68kF in a modernized 68060-like implementation: 1) *** 2) ***** 3) *** 4) ***** 5) ****
ARM with Thumb: 1) ***** 2) *** 3) *** 4) **** 5) ***
x86: 1) * 2) **** 3) ** 4) **** 5) *
PPC: 1) **** 2) ***** 3) **** 4) * 5) **
Agree or disagree with these ratings? Of course the x86 is a pig made to fly with plenty of money.
-
Now my sarcasm detector is failing.
THAT definitely was sarcasm, wasn't it?
Nope, a P795 does not simply have a fan, it has an Air Moving Device. Which is not a fan blade as you might expect, but a rather big riverboat-style paddle(?) - a rotating mouse wheel with blades digging into the air as it rotates.
You can _hear_ the box power up from a long way away when that thing starts moving. And that in an already noisy datacenter. It works. The box keeps its cool.
-
I'm just saying that you're falling into the old-OS vs new-OS fanboy behaviour. You cannot compare the two, they're both OS's but from 30 years apart that do completely different jobs.
I like what works well. AmigaOS does and Windows does not.
There are things to be said for the simplicity of AmigaOS, but if you like that sort of thing then you should look at HaikuOS and see how achieving its minimal feature set still requires some serious CPU performance before you start running any programs on it.
Haiku looks pretty cool from the videos I've seen. I might give it a try if support for something other than x86 gets better. Otherwise, AROS is getting better :).
I think you're on the right track if you take something like the TG68 FPGA design and start to do things like improve it to make instructions run in a single cycle, add a 2nd ALU, improve the cache performance, ad infinitum.
It would be better to start with the N68050, if Jens ever releases it like he's been talking about ;).
-
Apparently the GPU guys didn't get that memo. Top-end GPUs run half a million threads on hundreds of cores in parallel.
There's a big, big difference between massive multicore in special-purpose applications, and massive multicore for general-purpose computing, though. Graphics in particular is a task that's pretty much tailor-made for massive parallelism. Word processors, file managers, web browsers, and other unglamorous productivity software? Not so much.
-
ARM has a simple decoder and consumes very little electricity, but the integer CPU could be more powerful. x86 is more powerful but has a very complex decoder wasting electricity. The 68k would fit in between, with a moderately complex decoder but similar integer performance to, if not better than, x86 (assuming basic enhancements like in the 68kF and no 64-bit for a low-power target). While consuming more electricity than ARM, the 68k has the best code density, which helps performance and allows for a smaller memory footprint. The 68060 had pretty low power consumption for its performance and time.
As I understand it the 68K was running into difficulties when it was getting to things like the 68060. That was one of the reasons they abandoned it.
Having a complex and powerful ISA might be wonderful from the programmer point of view but it's most likely the opposite from the hardware designer's point of view. Someone has to implement all those commands in hardware and this can lead to some very tricky situations.
e.g. What happens if your processor is doing some complex operation and an interrupt comes in? Do you hold the interrupt and keep going? What if the operation takes a long time and involves reading from RAM? You probably can't wait that long, so you have to find a way of halting the processor, storing the state mid-instruction, handling the interrupt, recovering the state and restarting where you left off.
That's the sort of problem the hardware designers have to deal with. Then you have to build it and test it, including that particular behaviour. There's a reason no one but IBM and Intel use CISC these days - and they both tried to get rid of it.
-
As I understand it the 68K was running into difficulties when it was getting to things like the 68060. That was one of the reasons they abandoned it.
Having a complex and powerful ISA might be wonderful from the programmer point of view but it's most likely the opposite from the hardware designer's point of view. Someone has to implement all those commands in hardware and this can lead to some very tricky situations.
This is true enough, but x86 faced the exact same hurdle at the same time - they solved it with the Pentium Pro by moving to a RISC-like microarchitecture that implements the CISC instruction set, and they've stuck with that approach ever since. I don't see any reason the 68k couldn't do the same. It's a kluge, admittedly, but it's a kluge that could give us a rich, friendly instruction set with increased performance, and that's nothing to sneer at.
e.g. What happens if your processor is doing some complex operation and an interrupt comes in? Do you hold the interrupt and keep going? What if the operation takes a long time and involves reading from RAM? You probably can't wait that long, so you have to find a way of halting the processor, storing the state mid-instruction, handling the interrupt, recovering the state and restarting where you left off.
I don't know of any processor that supports breaking for an interrupt mid-instruction - that would be overly complex to implement, introduce a highly undesirable degree of non-determinacy, and not get you anything more than slightly lower interrupt latency for the trouble.
Besides which, most of the 68k instructions that are particularly long-ish are that way mostly because they haven't been implemented in a particularly efficient way - multiplication on the original 68000, for example. One of the key things you'd want to do in a new 68k design is address some of those, anyway.
-
As I understand it the 68K was running into difficulties when it was getting to things like the 68060. That was one of the reasons they abandoned it.
B.S. This was just anti-marketing of the 68060 because Motorola had decided they were going the PPC route. The 68060 was outperforming the early PPC processors. Apple made their OS incompatible with the 68060 so that it wouldn't be the fastest Macintosh available. In the meantime, Intel was having no problems upping the performance of their x86 line, which is more difficult to enhance than the 68k family.
Having a complex and powerful ISA might be wonderful from the programmer point of view but it's most likely the opposite from the hardware designer's point of view. Someone has to implement all those commands in hardware and this can lead to some very tricky situations.
Many times you are correct, but the 68kF was created with performance considerations like:
1) Address registers only allow full 32-bit register updates.
2) Most new instructions update the full 32-bit register.
3) SELcc and SBcc were added instead of MOVcc.
4) Smaller 32-bit immediates are compressed (using sign extend instead of shift).
5) Bitfield instructions are retained as they can be fast and update the whole register.
6) Trashing registers is avoided where possible in many different ways.
7) Better orthogonality; address registers are allowed more, so less register shuffling is needed.
8) Many new instructions have a stealth 3-op format without requiring more units.
It is helpful to know a little bit about how a processor works before creating an ISA ;).
e.g. What happens if your processor is doing some complex operation and an interrupt comes in? Do you hold the interrupt and keep going? What if the operation takes a long time and involves reading from RAM? You probably can't wait that long, so you have to find a way of halting the processor, storing the state mid-instruction, handling the interrupt, recovering the state and restarting where you left off.
It's more complex but it's already handled well in the 68060. The 68040 was kind of a mess though. There are more complex problems addressed all the time in modern processors than this. Take branch hazards for instance.
That's the sort of problem the hardware designers have to deal with. Then you have to build it and test it, including that particular behaviour. There's a reason no one but IBM and Intel use CISC these days - and they both tried to get rid of it.
Freescale with the ColdFire?
-
There's a big, big difference between massive multicore in special-purpose applications, and massive multicore for general-purpose computing, though.
I know, that's why I mentioned single-threaded next :-)
Graphics in particular is a task that's pretty much tailor-made for massive parallelism.
Yup.
Word processors, file managers, and other unglamorous productivity software? Not so much.
True, but it's now got to the point that a lot of software doesn't require a high end processor anyway.
On the other hand a lot of the stuff that does require high end processors can be parallelised. I just bought a new high end laptop with a quad core CPU because I run things that can max it out - video editing, recording music and photo editing.
These can all run across multiple cores. That said, editing 22-megapixel images will happily use all 8 hardware threads, but it seems to be more limited by the hard disk (which is actually a s**t-hot fast SSD).
web browsers
Actually these do quite a lot at the same time, so they can take advantage of multiple processors.
-
Indeed; everybody wants to ride the hot new ARM wave that's surely going to last forever, and nobody seems to remember when the new wave was PPC... ;)
Because they're cheap and made in vast quantities - much, much higher than x86.
But the real reason:
Go to eBay and search for "Android Laptop 10", buy it now only.
10" laptop for £80 (including postage) that's faster than any of the Sam boards.
That's why.
Welcome to the world of ultra low cost computing.
-
I don't know of any processor that supports breaking for an interrupt mid-instruction - that would be overly complex to implement, introduce a highly undesirable degree of non-determinacy, and not get you anything more than slightly lower interrupt latency for the trouble.
Doing it for an interrupt is unnecessary, but a 68010 or above will stop mid-instruction, save the processor state on the stack, and allow you to resume the failed instruction on a bus error.
-
@matthey
Yes, but that's my point. You can't compare AmigaOS from the 1980s to Windows 7 from 201X, hence my hypothetical example: IF AmigaOS had continued to be updated then it'd be in the same sluggish state.
There are things to be said for the simplicity of AmigaOS, but if you like that sort of thing then you should look at HaikuOS and see how achieving its minimal feature set still requires some serious CPU performance before you start running any programs on it.
Andy
Try one of the various Linux distros: this runs modern software, runs very fast, and has a faster boot time than Windows. Also, it has never crashed in my experience, although I haven't flogged the thing as much as Windows.
I have tried HaikuOS; it may look like BeOS, but it isn't.
-
Try one of the various Linux distros: this runs modern software, runs very fast, and has a faster boot time than Windows. Also, it has never crashed in my experience, although I haven't flogged the thing as much as Windows.
I had more frequent and severe crashes running Linux than I've ever had running XP. Didn't boot any faster, either.
I have tried HaikuOS; it may look like BeOS, but it isn't.
Mind elaborating on this? I missed out on the BeOS salad days, so I really don't know what Haiku has yet to achieve on that front (though it certainly could use some newer features, e.g. a proper GUI wireless client.)
-
One of my lecturers at uni was pretty high up in CPU design for IBM in the 80s and you can bet your ass they tested every possible scenario for server and desktop OS efficiency as far as parallel processing goes.
One of your lecturers who has no experience in processor design in the last 30 years tells you something and you believe him? Priceless.
While some algorithms are inherently serial and can't be split across cores easily, there are a lot that can be. However it does take intelligence to design your code to work that way & unfortunately most programmers lack intelligence. They only know what they've been taught by people who couldn't hack it, who then become lecturers.
In the real world there are people who have written code that is efficient and can take advantage of a large number of cores. Maybe one day you'll meet one.
There's a big, big difference between massive multicore in special-purpose applications, and massive multicore for general-purpose computing, though. Graphics in particular is a task that's pretty much tailor-made for massive parallelism. Word processors, file managers, web browsers, and other unglamorous productivity software? Not so much.
Out of your examples, web browsing is the only one that really needs a performance boost. With JavaScript and compressed video there is definitely a need for more performance, and there is no reason that can't come from using more cores.
There is a rise in using SQL databases for storage in a lot of applications that you wouldn't have thought of; even Android uses SQL. A decent SQL engine will make use of multiple cores.
The future is multiple cores for general purpose computing, programmers will need to adapt.
-
Out of your examples, web browsing is the only one that really needs a performance boost. With JavaScript and compressed video there is definitely a need for more performance, and there is no reason that can't come from using more cores.
Yeah, web browsers are definitely more so than the other examples. On the other hand, half the reason that's even true is the omnipresence of terribly slow (and almost always completely unnecessary) JavaScript in modern websites, so even though there is a benefit from a properly multithreaded browser, it's mostly in making bad code less crippling.
The future is multiple cores for general purpose computing, programmers will need to adapt.
Again, I'm not arguing that multicore isn't a good thing; it's lovely for what it is. It's just not an Ultimate Solution to the fact that eventually computers are going to reach a point of maximum practically-attainable power, and all the people who have been blithely counting on Moore's Law to cover for their many, many sins of bloat and shoddy coding and poor optimization are going to run straight into that wall at full steam and then start crying about having to learn to be efficient again.
-
I had more frequent and severe crashes running Linux than I've ever had running XP. Didn't boot any faster, either.
On some of my x86 machines some Windows versions did not even manage to install, etc. Linux went in far more smoothly, booted faster and was more stable.
I think I've had a thousand crashes with Windows and only a few with Linux (plus perhaps a hundred crashes of the Linux desktop GUI, which really is not Linux's fault).
And modern Linux remembers what you had running when you powered off; it can restore your work's state pretty nicely.
(The latest Windows crash happened just 2 hours ago with my work laptop: when I lifted the lid, a bluescreen gave me its warm welcome. And yesterday the laptop just locked up when IE was used; a hard power-off worked. I know people blame idiotic IT support, but without M$ there would be no need for such IT support.)
The main modern Linux distros seem to require more than 4 GHz of computing power. I've been forced to move to lighter desktops, like LXDE and Enlightenment.
Nice to see Linux getting into places at work too. Even BeagleBone hardware is in the lab handling various I/O things and routing.
-
On some of my x86 machines some Windows versions did not even manage to install, etc. Linux went in far more smoothly, booted faster and was more stable.
I think I've had a thousand crashes with Windows and only a few with Linux (plus perhaps a hundred crashes of the Linux desktop GUI, which really is not Linux's fault).
Well that's lovely for you. Doesn't change the fact that I've had XP bluescreen exactly once, and that on account of a crap video driver, while I lost hours of work to sudden core-dumps during my switchover attempts. (And that's not even counting the years during which I was using DSLinux as a portable text editor.)
And modern Linux remembers what you had running when you powered off; it can restore your work's state pretty nicely.
Beg pardon, are you telling me that it actually maintains enough stability in a crash to instruct applications to save your work, and then dumps and reboots? Because merely remembering what I had running is peanuts - I can remember what I had running without help.
-
When I suggested Linux I was being friendly. Now that it has gone hostile:
Your OS choice could depend a lot on experience. Hardware incompatibility is also a problem.
My Windows install (usually XP) becomes more and more troublesome after a number of resource-hungry applications are installed, including the anti-virus (solves one problem, creates another).
When I start running a lot of 3rd party software on Linux I will get back to you on how stable it is.
I think we should get back to discussing Amiga OS varieties?
-
Any CPU engineer will tell you instantly that we have hit the wall and all this multi-core desktop CPU stuff is really just a scam. The maximum number of cores without losing efficiency of code execution is effectively 3. Not 4, not 8, but 3. You cannot utilise much more than this without actually starting to waste CPU cycles on delaying and setting up threads to run on the other cores.
And this is why consumer devices are getting the ability to handle more threads? The Xbox 360 has 3 cores and can handle 6 threads. The Wii U is supposedly based on the POWER7 and uses 3 cores that can handle 12 threads. Why do Tegra and other ARM SoCs add cores and integrate more functionality with each subsequent version, if consumer applications of multithreading are worthless? Yes, I'm extending "desktop" to "consumer" since in my mind it's the same thing.
Consumers can get applications for their iPhones that let them take multiple pictures and stitch them together into one giant panorama--that's a CPU-intensive and parallelizable workload. Any video transcoding will also benefit. Efficiency of code execution? You're not talking about prefetch units, decode units, schedulers, caches, registers, nothing? Just that if you have more than X cores, it hurts code efficiency? I'm not sure any CPU engineer would make a blanket statement like that without knowing the instruction mix or the details of the CPU implementation.
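Amdahl's law is the standard way to put numbers on that trade-off: the useful core count depends on how much of the work is actually parallel, not on some fixed magic number. A quick back-of-the-envelope calculation in C (the 70% and 95% parallel fractions are just illustrative, not measurements of any real workload):

```c
/* Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n),
 * where p is the fraction of the work that can run in parallel.
 * The fractions below are illustrative, not measurements. */
#include <stdio.h>

static double amdahl(double p, int n)
{
    return 1.0 / ((1.0 - p) + p / n);
}

int main(void)
{
    const double fractions[] = { 0.70, 0.95 };
    const int    cores[]     = { 2, 3, 4, 8, 16 };

    for (int f = 0; f < 2; f++) {
        printf("parallel fraction %.0f%%:\n", fractions[f] * 100.0);
        for (int c = 0; c < 5; c++)
            printf("  %2d cores -> %.2fx speedup\n",
                   cores[c], amdahl(fractions[f], cores[c]));
    }
    return 0;
}
```

For a 70%-parallel job the gains flatten out after about 3-4 cores, which is probably where the "3 cores" claim comes from; for a 95%-parallel job (video transcoding, panorama stitching) 8 or 16 cores still pay off handsomely.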
There are many, many situations where a 4.2GHz AMD PC will outgun an Intel i7 3770K, fact. One of my lecturers at uni was pretty high up in CPU design for IBM in the '80s, and you can bet your ass they tested every possible scenario for server and desktop OS efficiency as far as parallel processing goes. For desktop computers 3 or 4 cores is about it. So Moore's law is f**ked well and truly unless we start seeing 5 and 6GHz CPUs QUICKLY!
You misunderstand Moore's law. Here: http://arstechnica.com/gadgets/2008/09/moore/
We've still got a ways to go, too:
http://arstechnica.com/science/2012/02/we-can-do-no-moore-a-transistor-from-single-atom/
http://arstechnica.com/science/2012/08/researchers-open-door-to-electronics-made-from-single-layers-of-atoms/
I'm leaning towards agreeing with psxphill in that you need to broaden your pool of knowledge--that one guy might talk a good talk, but I think his is a minority opinion.
The new Xbox and the new PlayStation are both already confirmed to have 64-bit x86 CPUs, and this means those CPUs will be dropped into a much better motherboard architecture design than any PC on sale in 2013/2014 with the same CPU, to compete on price/performance.
Anything you can cite from the vendors on this? Or is it just rumors as to what CPUs they will use?
I am interested in CPU-related junk and some of your info seems just plain wrong. Nothin' personal.
-
I'm leaning towards agreeing with psxphill in that you need to broaden your pool of knowledge--that one guy might talk a good talk, but I think his is a minority opinion.
FYI, when I started to work at my current place (http://www.imec.be) there were people (including university professors) claiming we would never need submicron technology, as it would be too expensive and wouldn't be needed anyway. At the time Macs still contained Motorola processors and could hardly multitask.
Currently we are at the 20nm node, more than 10 nodes further on, and still scaling...
greets,
Staf.
-
I suppose I need context for that. If it's a current university professor espousing such things, yeah, I don't see how he could still believe that in the present day.
If it was at university 10+ years ago, OK, maybe I buy that he wasn't in a minority opinion at the time.
More information needed but it's a minor point of contention, I think. Anyone alive TODAY should be able to form their own opinions based on current data, not relying on one professor. Even if it's a present-day professor, it doesn't hurt to question their opinion. Teachers have their own biases which are sometimes at odds with the real world.
-
I had WinXP BSOD on me a couple of months ago while my system was booting and it corrupted the drive rendering it unbootable.
Don't think I've had such a drastic crash since the days of DOS based Windows.
Since then I've sold most of my Windows hardware and I'm relying on a Win7 equipped netbook.
I've got a copy of Snow Leopard and Parallels7. From now on I'm restricting my Windows use to virtualization.
I just don't trust Microsoft's software. It blows up too readily.
-
A faster CPU just gives you a faster way to run software that would take advantage of it, and that software simply doesn't exist, at least on the Amiga side.
Give me a 9GHz PPC CPU on my SAM and I'll still be asking for software that even tasks the 600MHz chip that's on it now.
It's like overclocking a calculator when things are essentially a "one trick pony" ordeal.
While Linux could use that kind of power, we need to make full use of the existing hardware and develop some real software first. No need for new hardware yet. But everyone who can buy a SAM or X1000 should do so.
Meanwhile, some updated AROS PPC, more support of these 3 boards from Linux PPC Distros and AmigaOS/MorphOS development would be nice.
-
But everyone who can buy a SAM or X1000 should do so.
Seriously considering a 460, Vox.
Can't justify an X1000, but I'm hoping that whatever Varisys designs as its successor is more affordable.
-
Seriously considering a 460, Vox.
Can't justify an X1000, but I'm hoping that whatever Varisys designs as its successor is more affordable.
Have been doing the same, but after the painful experience of building DJ Nick's SAM 460 board into a fully usable system, and also comparing board expandability and performance, the X1000 is in my eyes the better solution. Not to mention the additional OS 4.2 licence for the SAM.
However, whoever doesn't have the patience to save up and wait again for an X1000 should go for the SAM 460. A 1GHz CPU, DDR2 and SATA2 is a nice setup. Would love to see MorphOS for it too. Debian Wheezy and OS 4.1.5 for now...
-
Yeah, the MorphOS development team has pretty much ruled out support for Acube hardware, but I wouldn't mind it either.
And the neat thing about the 460 (as opposed to the 440) is that you can use modern video cards on it.
A Radeon HD 4650 or 4670 should work with the same drivers used on the X1000.
-
Yeah, the MorphOS development team has pretty much ruled out support for Acube hardware, but I wouldn't mind it either.
And the neat thing about the 460 (as opposed to the 440) is that you can use modern video cards on it.
A Radeon HD 4650 or 4670 should work with the same drivers used on the X1000.
It is sad they don't want to be available for new HW (it would be a much better representation to the world, even with the high price of the SAMs; the 440 could now replace the Efika as the low end and the 460 could be about a Peg2 with PCI-E and SATA2). Some licences would be bought and, more importantly, it would give the OS4 crowd the possibility to test, and maybe even recompile and create, for MorphOS.
The PCIe bus is surely the major advancement of the SAM 460.
But when compared on expandability (USB, PCI...) as well as on board peculiarities like having awful integrated everything, the X1000 is my choice. Hopefully the board will be fully supported and OS 4.2 out by the time I save up the $3000 + P&P + import fees... in a country with a $500 average salary.
-
I suppose I need context for that. If it's a current university professor espousing such things, yeah, I don't see how he could still believe that in the present day.
If it was at university 10+ years ago, OK, maybe I buy that he wasn't in a minority opinion at the time.
It was how people thought 10+ years ago, and the professor was certainly not alone in that line of thinking. The professor is retired now, and I attended a lecture from him some months ago. He is now predicting a stop to the scaling around the 7nm node. Making transistors smaller than that is claimed to be too difficult from a physics point of view, and chips should by then be able to do all we want from them...
My personal opinion is that as long as a chip can't run a game with
a) real-time movie quality graphics
b) opponents with human level intelligence
c) do that continuously for, let's say, one week when running off a battery
we will keep on scaling.
greets,
Staf.
-
7nm is the current silicon wall that everybody agrees on (although it used to be 10nm, and before that 14nm, so don't bet on it not being worked around eventually).
-
So just make the chips bigger... I don't see a problem with that. Make the chip the size of current motherboards for all I care... Just make faster chips, whatever it takes. :laugh1:
-
So just make the chips bigger... I don't see a problem with that. Make the chip the size of current motherboards for all I care... Just make faster chips, whatever it takes. :laugh1:
Two problems:
Speed of light. If a signal can't get from one end of the chip to the other in a clock cycle, you effectively have to start designing asynchronous / clockless systems, for starters. But more importantly, our current programming model starts falling apart, and every app that wants to take advantage of it will need to work like a large distributed system, with all the complexities that involves. (Some rough numbers are sketched below.)
Heat. Pumping heat away fast enough is the biggest current obstacle to upping clock speed. E.g. with custom (read: expensive) cooling, doubling or tripling current clock frequencies is doable, but you end up with a cooling system that's many times larger than the CPU with current tech.
So at some point we'll need to figure out smarter approaches instead of just throwing more transistors at the performance problem.
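To put rough numbers on the speed-of-light point above, here's a quick sketch in C (using vacuum c as an upper bound; real on-chip signal propagation is considerably slower than this):

```c
/* Back-of-the-envelope: how far can a signal travel per clock cycle?
 * Uses the vacuum speed of light as an upper bound; real on-chip
 * propagation is considerably slower. */
#include <stdio.h>

int main(void)
{
    const double c_mm_per_ns = 299.792458;   /* speed of light in mm/ns */
    const double clocks_ghz[] = { 1.0, 3.0, 6.0, 10.0 };

    for (int i = 0; i < 4; i++) {
        double cycle_ns = 1.0 / clocks_ghz[i];
        printf("%5.1f GHz: cycle = %.3f ns, light travels at most %.1f mm\n",
               clocks_ghz[i], cycle_ns, c_mm_per_ns * cycle_ns);
    }
    return 0;
}
```

Even in the best case a 3GHz signal covers under 10cm per cycle, and real wires manage only a fraction of that, which is why a motherboard-sized chip at today's clock speeds can't be treated as a single synchronous domain.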
-
So at some point we'll need to figure out smarter approaches instead of just throwing more transistors at the performance problem.
Yep, smarter, more efficient designs that make better use of those transistors.
BTW - everyone take a good look at Vox's posts (at least as long as they aren't deleted). He seemed like a positive member of the community.
-
So just make the chips bigger... I don't see a problem with that. Make the chip the size of current motherboards for all I care... Just make faster chips, whatever it takes. :laugh1:
Unfortunately yield drops by the square of the increase in chip area.
Fortunately, technologies such as silicon interposers, through-silicon vias, etc., will allow many smaller dies to be used as a single chip, via various implementations of die stacking.
http://semiaccurate.com/2012/09/05/hot-chips-talks-all-about-chip-stacking-good-and-bad/
http://semiaccurate.com/2012/09/06/die-stacking-has-promise-and-problems/
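For a feel of the yield/area point above, here's a sketch using the simple Poisson yield model, Y = exp(-D × A); the defect density of 0.2 defects/cm² is purely an assumption for illustration, and the exact curve depends on which yield model you pick, but the trend is the same: bigger dies get disproportionately worse yields.

```c
/* Poisson yield model: Y = exp(-D * A), where D is defects per cm^2
 * and A is die area in cm^2.  D = 0.2/cm^2 here is purely illustrative. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    const double defect_density = 0.2;                  /* defects per cm^2 (assumed) */
    const double die_area_cm2[] = { 1.0, 2.0, 4.0, 8.0 };

    for (int i = 0; i < 4; i++) {
        double yield = exp(-defect_density * die_area_cm2[i]);
        printf("%4.1f cm^2 die -> %.0f%% yield\n",
               die_area_cm2[i], yield * 100.0);
    }
    return 0;
}
```

With those (made-up) numbers a 1cm² die yields around 82%, while an 8cm² die is down near 20%, which is exactly why die stacking is attractive: several small, high-yield dies instead of one huge, low-yield one.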
-
On the other hand, half the reason that's even true is because of the omnipresence of terribly slow (and almost always completely unnecessary) Javascript in modern websites, so even though there is a benefit from a properly multithreaded browser, it's mostly in making bad code less crippling.
I'm not going to defend every use of javascript, as there are some very bad examples out there. However, there will always be good uses for it, and because websites have become more important, web browsers are becoming more standardised, so we're unlikely to see major shifts in the way websites are designed. HTML5 is mainly just a load of old technologies that have been standardised. I blame Tim Berners-Lee, but they knighted him.
Teachers have their own biases which are sometimes at odds with the real world.
Which is usually why they become teachers. It's much easier to tell people incorrect things if they don't know any better.
-
I'm not going to defend every use of javascript, as there are some very bad examples out there. However, there will always be good uses for it, and because websites have become more important, web browsers are becoming more standardised, so we're unlikely to see major shifts in the way websites are designed. HTML5 is mainly just a load of old technologies that have been standardised. I blame Tim Berners-Lee, but they knighted him.
There are indeed some good uses for it - and they aren't the problem. Good Javascript, when it can be found, does only what static page content and server-side scripting can't, and does so efficiently, such that even an old browser can handle it. (I can log into GMail from iWeb on my A1200, for example.) The problem is bad Javascript, which is far more common, almost omnipresent in these days of glitz-focused "Web 2.0" design.
Bad Javascript eats up CPU cycles like popcorn, usually for no other purpose than to make "dynamic" a page that would have been perfectly fine static, or to badly reimplement basic browser functionality. (There is nothing in the world that makes me want to harm my fellow man the way Javascript links do.) The only way to avoid it is to have a Javascript whitelist feature such as NoScript, because it's currently still illegal to kill someone for bad web design. And it gets worse every year.
Good Javascript needs no special measures to work. Bad Javascript is the primary (almost the only) reason people keep having to throw more horsepower at a web browser.
As far as HTML5 goes, I know that HTML4 needed a cleanup, but this just gives more free rein to bad Javascript programmers. It isn't what was needed in the slightest.
-
Speed of light. If a signal can't get from one end of the chip to the other in a clock cycle
This problem was encountered and worked around years ago. IIRC the Pentium 4 was suspected to have pipeline stages dedicated just to moving signals around the chip.