Amiga.org
Amiga computer related discussion => General chat about Amiga topics => Topic started by: uncharted on October 28, 2007, 12:12:35 PM
-
Reading through some of the comments in the OS X Leopard thread decrying OS X as bloatware got me thinking.
Is AmigaOS really mean and lean because of its design philosophy or is it more to do with its situation?
If you think about it, could it be that AmigaOS is so lean because it was never given the resources by Commodore and has been left to rot ever since?
Back in the early 90's AmigaOS was, size-wise, on a par with MacOS. Could it be that had development continued with a decent amount of resources that AmigaOS would be as fully featured and as large as modern OSes?
Perhaps the "efficiency" crown jewel of the community is nothing more than a symptom of undernourishment and wasted potential.
Discuss.
-
The mouse driver distribution for my Logitech cordless is just over 60 megabytes. That's just a mouse driver. I think it's the other way round. From that, it looks like Windows lacks the resources and needs special help to get a mouse to have proper behavior.
-
Thanks to Windows, the complexity of an OS has increased a lot: shadowed, fading windows, 3D rotating stuff, compatibility with older OSes. This adds a lot of functionality and is never good for speed. If AmigaOS had continued to evolve I'm sure it would be slower, but machines would be faster.
If the OS is kept clean and anything you add isn't part of the OS but just a commodity, then it could stay small and quick. :)
-
TinyXP gave me no problems and was vastly shrunk in size...
-
hamtronix wrote:
TinyXP gave me no problems and was vastly shrunk in size...
Still 200 megs...
Pity I haven't got the time to install it though...
-
Is AmigaOS really mean and lean because of its design philosophy or is it more to do with its situation?
Kickstart 3.1 was shipped with six DD floppies and you could boot to a full Workbench from a single floppy. It ran on an Amiga 500 (68000/7MHz) with 1MB RAM. Try this with Haage & Partner's OS 3.5 or 3.9!
Today we could not imagine MorphOS or OS4 running with so little.
-
Though it's a good question, one I have thought about before, there's one thing that made me reach a conclusion: BeOS. As tiny as 10 megs (inside a 500 meg image, where the rest of the space was used as BeOS-formatted disk storage, and it could be run from Windows), compared to the 300 megs of Windows 98 at the time, and it had HUGE functionality; it even supported my TV card as standard back then. It was super fast too. I loved that OS to bits.
Pity it's obsolete today, not supporting my USB keyboard, USB hard disks and other hardware. But back then, everything worked out of the box :-D
Just wiki'd it, and I saw it was made by former Apple developers. And it was later acquired by Palm. Another manufacturer whose OS I love :-)
-
@uncharted
Interesting idea. I think that the efficiency benefits are also visible on higher-end systems. Being forced to be efficient may well be a contributing factor. Having said that, I'm convinced that the base system design was efficient to start with.
I'm currently working on schemes to get around memory bandwidth limitations. If I had a brand new machine, it probably wouldn't be necessary. However, the schemes (work in blocks that can be kept in the CPU caches, nothing new really) will also benefit faster systems. Maybe all developers should be given a machine with restricted resources, and told to make it work usably on that.
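As a rough sketch of what I mean (the function and block size below are made up for illustration, not my actual code):

#include <stddef.h>

/* Process a big array in cache-sized blocks, so the second pass over
 * each block is served from the CPU cache instead of main memory.
 * BLOCK is an invented tuning value; size it to fit your cache. */
#define BLOCK 4096

void scale_then_offset(float *data, size_t n, float scale, float offset)
{
    for (size_t start = 0; start < n; start += BLOCK) {
        size_t end = (start + BLOCK < n) ? (start + BLOCK) : n;

        for (size_t i = start; i < end; i++)   /* pass 1: pulls the block into cache */
            data[i] *= scale;

        for (size_t i = start; i < end; i++)   /* pass 2: hits the cached block */
            data[i] += offset;
    }
}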
Hans
-
AmigaOS has the most simplified structure and can therefore be advanced much further than any other OS.
Therefore I agree that AmigaOS could be the most efficient and resource-saving OS ever.
I think it would be best if we all had the same system specs, or as near as possible, so that we can chart the future more lavishly.
Every other OS is bullied by services, DLLs, paging and virtual memory, which is no good for a disk-based OS.
The only way the Amiga should have paging is if it's run from solid state or actual RAM, or in an embedded environment similar to QNX.
DEATH TO SPINNING DISKS
Regards
-
Paradox wrote:
DEATH TO SPINNING DISKS
Regards
You don't want to say that out loud, do you? :nervous: I haven't backed up my hard disks yet :nervous:
-
Hans_ wrote:
...Maybe all developers should be given a machine with restricted resources, and told to make it work usably on that.
That was literally the case with the BeBox developers. The machines had two processors, but they were only allowed to have around 16 MB of RAM installed (look it up).
-
Back in the 80s/90s, AmigaOS was a lot more efficient than DOS/Windows, and even MacOS to some degree (Macs couldn't really get far without a hard disk, for example). So this claim isn't just comparing to modern machines.
I'm sure if Commodore hadn't gone bust, modern Amigas would have higher system requirements, but obviously it's impossible to judge something which never happened. OS X isn't even the same OS as the Mac OS of the 80s/90s anyway, so one can't really say that Mac OS grew more bloated; rather, Apple ditched it for another OS. Who knows, if Commodore hadn't gone bust, modern Amigas might be running the bloated Vista... (I believe the plans for next-generation Amigas at one time included HP PA-RISC machines running Windows NT?)
-
If there's one thing I have learned over the years, it is that software will always grow to fill the hardware. Amiga hardware has been caught in a 1993 time warp, so the OS and software run within those hardware constraints. It gets interesting if you emulate some of the different Amiga OS distributions out there: AmiKit looks like the most modern AmigaOS you can get in terms of eye candy and "features", but it is far slower than AmigaSys or classic Amiga, which aren't as "feature" rich. Still, the Amiga would have bloated over time, but not nearly as much as x86. Don't forget the x86 hardware platform is basically designed to suit the Windows way of doing things, i.e. it's Windows that demands the hardware be designed to suit it, not Windows that is made to run on the hardware. Much of the efficiency that comes from the Amiga is because the hardware and OS are tightly integrated. But you can't get that with generic mass-produced hardware; it has to be proprietary, and no users want that because it's expensive, e.g. remember the PowerPC Macs cost nearly twice as much as equivalent PC hardware at the time.
-
Really, it all boils down to this:
"Necessity is the mother of invention."
Because most Amigas had a 7MHz processor and half a meg of RAM, coders and developers got good at fitting a lot into relatively little.
But there was a time when I thought 48K was a lot!
-
@stefcep2
AmiKit looks like the most modern AmigaOS you can get in terms of eye candy and "features", but it is far slower than AmigaSys or classic Amiga, which aren't as "feature" rich.
http://www.amikit.amiga.sk/benchmarks.htm
-
Bloatware typically refers to useless or seldom used features. Roj's 60MB mouse driver? Well, the driver is probably just some 100K DirectInput hook which maps each mouse button to a system signal. The rest of it is the installer, fancy uncompressed BMP graphics, miles and miles of XML exported from a lousy PowerPoint clone, the updater which constantly runs in the background (and is almost always a service and not a task), etc. The drivers for my printer are 85MB, but the printer works just fine using the vanilla driver that comes with XP.
Also keep in mind that the Amiga really didn't have a lot of drivers for things. The whole OS was pretty much hard-coded just for the chipset, which of course was its greatest downfall. Take a look at the Linux kernel, and you'll find tons of hacks to make hundreds of devices work. Even the Macintosh, a closed hardware platform, has to support huge numbers of different hardware configurations, and the OS is expected to adapt to each one, not require you to re-install every time you swap out one or two parts. Any OS by itself is usually quite lean.
Now, the window manager is a different story.
Don't even get me going about game consoles. It bothers me when I hear PS3 developers talking about how they "need" Blu-ray. I can't imagine even filling up a DVD given all the work that's been done with procedural synthesis. Aren't the Cell's vector processors designed with synthesis in mind?
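To put the procedural synthesis point in perspective, here's a throwaway sketch (the pattern and sizes are arbitrary, nothing a real game would ship): about twenty lines of C that emit a megabyte of texture data. Build it with -lm and redirect stdout to a file.

#include <stdio.h>
#include <math.h>

/* Writes a 1024x1024 greyscale PGM image (~1 MB of pixel data) to
 * stdout, generated entirely from a couple of sine terms. */
int main(void)
{
    const int w = 1024, h = 1024;
    printf("P5\n%d %d\n255\n", w, h);
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++) {
            double v = sin(x * 0.02) * cos(y * 0.03) + sin((x + y) * 0.01);
            putchar((int)((v + 2.0) / 4.0 * 255.0));   /* map [-2,2] to [0,255] */
        }
    return 0;
}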
Hellcoder: Thanks to Windows, the complexity of an OS has increased a lot: shadowed, fading windows, 3D rotating stuff, compatibility with older OSes. This adds a lot of functionality and is never good for speed.
Shadow effects and the like are easy to do.
The problem is putting things together into monolithic libraries. In any one session, you might be using less than 5% of the software's features, but it all has to be loaded into memory, just in case you might use it.
But, hey, we just swap it out to VM, so why bother worrying about how big a library is? Just include everything and let the hard drive swap its butt off!
It's too difficult from a developer's standpoint to split libraries into individual tools, and compilers are still too stupid to do it for you. Optimization gets all the attention; packaging doesn't. Hell, how many times have you worked on a group project where the dependencies don't even build properly? Nobody takes proper packaging seriously.
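One way around it, on POSIX-ish systems at least, is to load a feature only when it's actually asked for. This is just a sketch; libspellcheck.so and spell_check_word are made-up names:

#include <stdio.h>
#include <dlfcn.h>   /* link with -ldl */

/* Load the spell-checking feature on demand instead of linking one
 * monolithic library at startup. If it isn't there, the rest of the
 * application keeps working. */
int main(void)
{
    void *lib = dlopen("libspellcheck.so", RTLD_LAZY);
    if (!lib) {
        fprintf(stderr, "spell checker unavailable: %s\n", dlerror());
        return 0;   /* not fatal; the other 95% of the app still runs */
    }

    int (*check)(const char *) = (int (*)(const char *))dlsym(lib, "spell_check_word");
    if (check)
        printf("'teh' looks %s\n", check("teh") ? "fine" : "misspelled");

    dlclose(lib);
    return 0;
}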
Hans: Maybe all developers should be given a machine with restricted resources, and told to make it work usably on that.
That's a law in many good software firms.
Paradox: AmigaOS has the most simplified structure and can therefore be advanced much further than any other OS.
Therefore I agree that AmigaOS could be the most efficient and resource-saving OS ever.
Interesting username. ;-)
The reality is that AmigaOS doesn't do very much. Yes, it's lean on resources, but it doesn't do most of the things you'd expect from a modern OS.
If I wrote a piece of sorting code that was 5 lines long in C, does that make it efficient and fast? Using more RAM can actually have huge long-term benefits. You just have to gauge how many resources your customers will have in real-world situations and not go over those limits.
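For example, both versions below are deliberately naive, just to make the point: the first sort is a handful of lines and uses no extra RAM but is O(n^2); the second burns 256 KB of scratch space and finishes in linear time on large arrays of 16-bit keys.

#include <string.h>

/* Tiny, no extra memory, O(n^2) slow. */
void tiny_sort(int *a, int n)
{
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)
            if (a[j] < a[i]) { int t = a[i]; a[i] = a[j]; a[j] = t; }
}

/* Spends 256 KB of scratch RAM to sort 16-bit keys in O(n + 65536). */
void counting_sort_u16(unsigned short *a, int n)
{
    static int count[65536];               /* the "extra" RAM */
    memset(count, 0, sizeof count);
    for (int i = 0; i < n; i++) count[a[i]]++;
    for (int v = 0, k = 0; v < 65536; v++)
        while (count[v]--) a[k++] = (unsigned short)v;
}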
It's also faster and more reliable to write good, maintainable code that gets the job done, rather than cryptic code that is amazingly efficient. What good is it that you use 30% less RAM, but the code takes ten times longer to develop and debug?
Much of the efficiency that comes from the Amiga is because the hardware and OS are tightly integrated. But you can't get that with generic mass-produced hardware; it has to be proprietary, and no users want that because it's expensive, e.g. remember the PowerPC Macs cost nearly twice as much as equivalent PC hardware at the time.
I disagree. Even X86 is pretty efficient if you think about it, because hardware engineers cannot be anywhere near as sloppy as software engineers. That's the result of cutthroat competition, limited (or rather, costly) resources, and a deep awareness of time. Hardware engineers always have to know when things happen, while software tends to sit around, wait for other things to get done, and allow everything to get out of sync.
To improve responsiveness, maybe software compilers should introduce more controls to monitor time, like hardware compilers. Not like I have any experience with hardware design, of course.
-
Probably the closest thing to what AmigaOS would have become, if not for the bankruptcy, is QNX. I don't think it would have gone the Leopard route. But the whole question is ridiculous, because Amiga could not have survived. Apple, a far bigger company, survived because of Steve Jobs's personality; all the other alternative OSes died, sucked into the great Microsoft void. Even with Jobs, it still took a Microsoft bailout and the iPod revolution to save Apple. The odds against Amiga's survival were, in retrospect, astronomical.
The current Amiga survives by outsourcing American jobs to India, which is far less endearing than the iPod...
-
Waccoon wrote:
...It's also faster and more reliable to write good, maintainable code that gets the job done, rather than cryptic code that is amazingly efficient. What good is it that you use 30% less RAM, but the code takes ten times longer to develop and debug?
...Even X86 is pretty efficient if you think about it, because hardware engineers cannot be anywhere near as sloppy as software engineers. That's the result of cutthroat competition, limited (or rather, costly) resources, and a deep awareness of time. Hardware engineers always have to know when things happen, while software tends to sit around, wait for other things to get done, and allow everything to get out of sync.
To improve responsiveness, maybe software compilers should introduce more controls to monitor time, like hardware compilers.
Hi Waccoon,
I'm also not really an expert, but I know enough to know that you have raised an important point here. Hardware processes are timing critical, but most software processes are not, particularly when multithreading, multiprocessing and multitasking are being performed. Generally, if a software process really is timing critical, it will demand the sole context focus of the hardware it's running on; if it is merely order critical, it will either wait for particular stages of completion or have its entire task written in a single thread. More often, with personal computers, tasks are separated into threads of order-critical jobs, and multiple threads of a given task, and multiple tasks, can run concurrently.
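A trivial pthreads sketch of what I mean by waiting for a stage of completion (the stage and the names are invented; error handling omitted):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t stage_done = PTHREAD_COND_INITIALIZER;
static int stage = 0;

/* Worker does the order-critical part, then flags that stage 1 is done. */
static void *worker(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);
    stage = 1;
    pthread_cond_signal(&stage_done);
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, worker, NULL);

    pthread_mutex_lock(&lock);
    while (stage < 1)                      /* block until stage 1 completes */
        pthread_cond_wait(&stage_done, &lock);
    pthread_mutex_unlock(&lock);

    puts("stage 1 finished, carrying on");
    pthread_join(t, NULL);
    return 0;
}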
The drawbacks you mentioned (a detailed analysis of the software's runtime behaviour, and a lot of effort put into optimisation) are always critical considerations in commercial development. High-level abstraction and modularisation also really help to keep the code manageable, portable, adaptable, etc.
There are very expensive compilers which do a great job of restructuring programs for efficient, out-of-order processing and so on, but many developers don't have the opportunity to compile with these. Sometimes the processing savings just aren't worth it, either. Some universities actually lease out the idle time on their expensive compilers.
At uni, I did an analysis of different compilers' optimisation of common sets of code. For some types of task it really makes a huge difference. Some object-oriented code can become a laughable mess when built with free compilers.
-
itix wrote:
Is AmigaOS really mean and lean because of its design philosophy or is it more to do with its situation?
Kickstart 3.1 was shipped with six DD floppies
???
My Kickstart 3.1 was 1 ROM chip for each of my A500s and two ROM chips for A1200 or A4000.
What came on six disks was OS3.1 (Workbench 3.1)...
itix wrote:
and you could boot to a full Workbench from a single floppy. It ran on an Amiga 500 (68000/7MHz) with 1MB RAM.
I can remember when I had the OS running: "Kind Words" was printing a huge document in the background, "ACall" had been downloading something from a BBS in the background for quite a while, "Reflections" was rendering an image in the background while I was playing a game - and all that on my basic A500 with 512kB CHIPMEM...
itix wrote:
Try this with Haage & Partner's OS 3.5 or 3.9!
...
As far as I remember, OS3.5 and OS3.9 require a 68020 CPU and a CD-ROM drive.
A standard A500 doesn't have these - you would have to expand it first to run one of the mentioned OS versions.
-
Dandy wrote:
itix wrote:
Is AmigaOS really mean and lean because of its design philosophy or is it more to do with its situation?
Kickstart 3.1 was shipped with six DD floppies
???
My Kickstart 3.1 was 1 ROM chip for each of my A500s and two ROM chips for A1200 or A4000.
What came on six disks was OS3.1 (Workbench 3.1)...
That's why he wrote *with* and not *on*.
-
AmiKit wrote:
@stefcep2
AmiKit looks like the most modern AmigaOS you can get in terms of eye candy and "features", but it is far slower than AmigaSys or classic Amiga, which aren't as "feature" rich.
http://www.amikit.amiga.sk/benchmarks.htm
Yep, OK, CPU speed depends on the x86 CPU hardware, so it will always trump 680x0 these days, but the interesting one is the Intuition benchmark. In my day-to-day use I would say AmiKit with Magellan or OS3.9 feels far slower, compared to AIAB and AmigaSys, than this benchmark would indicate. Much of this has to do with the font anti-aliasing, which does look great.
-
Hi,
By the time OS3.0 appeared, you needed a Mac way larger than the basic Amiga to run Mac OS, and its multitasking was very poor.
It is unfair to compare WB3.0 with Windows 3.1, because Windows 3.1 is older. It would be better to compare it with Win95, and that one is bigger, slower, requires more memory, and its multitasking was still way inferior.
I believe that if development of AmigaOS had kept up with Mac and Windows we would have a system of around 100MB or maybe 200MB, mainly due to extra tools and better graphics. There is no need to waste more than that to look cool, as AROS has proved :-D
-
Waccoon wrote:
Also keep in mind that the Amiga really didn't have a lot of drivers for things. The whole OS was pretty much hard-coded just for the chipset, which of course was its greatest downfall. Take a look at the Linux kernel, and you'll find tons of hacks to make hundreds of devices work. Even the Macintosh, a closed hardware platform, has to support huge numbers of different hardware configurations, and the OS is expected to adapt to each one, not require you to re-install every time you swap out one or two parts. Any OS by itself is usually quite lean.
This is my point: the fact that computers need to install and load device drivers is the Windows way of doing things. Each device could have its driver built into ROM, so that the driver automatically interfaces with the OS as soon as the device is plugged in. Nothing should have to be installed or loaded at boot; that is true plug and play, not plug and pray.
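The Amiga's Autoconfig already worked roughly that way for Zorro boards: the board's own ROM announces its manufacturer and product IDs (and, for boot boards, a driver), and the OS binds it at boot. A rough sketch of walking that list with expansion.library, written from memory and not checked against the includes:

#include <stdio.h>
#include <libraries/configvars.h>
#include <proto/exec.h>
#include <proto/expansion.h>

struct Library *ExpansionBase;

/* List the Zorro/Autoconfig boards the OS found at boot. Every board
 * identified itself from its own on-board ROM; nothing was installed. */
int main(void)
{
    ExpansionBase = OpenLibrary("expansion.library", 0);
    if (!ExpansionBase)
        return 20;

    struct ConfigDev *cd = NULL;
    while ((cd = FindConfigDev(cd, -1, -1)) != NULL)
        printf("board: manufacturer %d, product %d at $%08lx\n",
               (int)cd->cd_Rom.er_Manufacturer,
               (int)cd->cd_Rom.er_Product,
               (unsigned long)cd->cd_BoardAddr);

    CloseLibrary(ExpansionBase);
    return 0;
}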
-
The reality is that AmigaOS doesn't do very much. Yes, it's lean on resources, but it doesn't do most of the things you'd expect from a modern OS.
Such as? The OS is merely the vehicle whereby the user interacts with the computer. The real "doing" happens when an app is used to write a letter, play a game, play video, CDs or MP3s, edit video, burn discs, do a 3D render, browse the web, do email, and so on. The OS does none of this, but provides an onscreen display, mouse and keyboard to let the user issue commands, via the app, to the hardware. AmigaOS in its current form is perfectly capable of letting the user interact with the hardware every bit as well as Vista; it's just that the Amiga apps don't have all the functionality of the Vista apps, because there's been no app development to speak of for 10 years.
Getting back to core OS functions, how can it possibly REQUIRE megabytes of programming code to work out where the mouse pointer is on the screen? Why does an old P3 500 with 512MB boot XP as fast as a dual core running at 3000MHz with four times as much RAM, a much faster hard drive and bigger caches? Yeah, hard drive spin speeds are not much faster, but data density is so much higher, so more bytes per revolution are being read, and even so it still takes 30+ seconds to boot!
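For comparison, this is roughly all the application-side code it takes on AmigaOS to follow the pointer (a sketch from memory, untested, with error handling trimmed):

#include <stdio.h>
#include <intuition/intuition.h>
#include <proto/exec.h>
#include <proto/intuition.h>

struct IntuitionBase *IntuitionBase;

/* Open a window that reports mouse moves, then print the pointer
 * position until the close gadget is hit. */
int main(void)
{
    IntuitionBase = (struct IntuitionBase *)OpenLibrary("intuition.library", 37);
    if (!IntuitionBase)
        return 20;

    struct Window *win = OpenWindowTags(NULL,
        WA_Title,       (ULONG)"Pointer demo",
        WA_Width,       200,
        WA_Height,      100,
        WA_DragBar,     TRUE,
        WA_CloseGadget, TRUE,
        WA_ReportMouse, TRUE,
        WA_IDCMP,       IDCMP_MOUSEMOVE | IDCMP_CLOSEWINDOW,
        TAG_DONE);

    if (win) {
        BOOL done = FALSE;
        while (!done) {
            struct IntuiMessage *msg;
            WaitPort(win->UserPort);
            while ((msg = (struct IntuiMessage *)GetMsg(win->UserPort)) != NULL) {
                if (msg->Class == IDCMP_MOUSEMOVE)
                    printf("pointer at %d,%d\n", msg->MouseX, msg->MouseY);
                else if (msg->Class == IDCMP_CLOSEWINDOW)
                    done = TRUE;
                ReplyMsg((struct Message *)msg);
            }
        }
        CloseWindow(win);
    }
    CloseLibrary((struct Library *)IntuitionBase);
    return 0;
}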
The point is that the user experience of interacting with the computer (i.e. using the OS) has not improved despite hardware speed and capacity increasing 1000-fold. That's an issue related to the fact that the x86 hardware design is dictated by the OS that will run on it, not the other way around.
-
stefcep2 wrote:
That's an issue related to the fact that the x86 hardware design is dictated by the OS that will run on it, not the other way around.
In what way is an x86 processor limited by the OSes that run on it? Windows could be ported to any processor architecture and still look and feel the same as it does on x86, provided the processor was fast enough to run it. Similarly, any OS could be ported to x86.
Waccoon wrote:
I disagree. Even X86 is pretty efficient if you think about it, because hardware engineers cannot be anywhere near as sloppy as software engineers
One of the main factors that prevents x86 being efficient is the legacy of previous chip designs. One example of x86 legacy is the A20 line gate. See here:
http://en.wikipedia.org/wiki/A20_line
I believe the 64bit chips leave some of this hardware legacy behind.
Going back to the original topic, is AmigaOS bloated? Depends on what you're comparing it to. Compared to Windows XP/Vista and OSX it is certainly not bloated. It is also small compared to most modern Linux distributions. However, compared to an OS like MenuetOS it is large. Take a look:
http://www.menuetos.net/
What we should really be asking is does AmigaOS do what we need it to do? I believe the answer is yes. Most of the functionality of computers should come from apps anyway.
-
Yeah, an OS only goes so far. How much the average user cares about an application loader is debatable. The Amiga has, at this point, no killer apps and lacks even most of the basic apps needed to get into the game.
-
HenryCase wrote:
In what way is an x86 processor limited by the OSes that run on it? Windows could be ported to any processor architecture and still look and feel the same as it does on x86, provided the processor was fast enough to run it. Similarly, any OS could be ported to x86.
I don't know enough details about x86 **processors**; I was talking about the whole PC architectural design, which nowadays is x86-based for home use. (Nevertheless, per clock cycle it's my understanding that x86 processors did less than the 68k.) Whether you run Windows, OS X or Linux, they are still running on the same hardware design, along with all its limitations.
Waccoon wrote:
I disagree. Even X86 is pretty efficient if you think about it, because hardware engineers cannot be anywhere near as sloppy as software engineers
What we should really be asking is does AmigaOS do what we need it to do? I believe the answer is yes. Most of the functionality of computers should come from apps anyway.
Agreed. The question that needs to be asked is: why does this functionality need 1000 times the hardware resources under Windows? Or, put another way, why can't the user interact with the PC 1000 times faster?
-
Absolutely, an OS is nothing without apps, but I think we should care when we buy hardware that is 1000 times faster and more spacious, only to find we are 50% slower at controlling those apps just because we "upgraded" the OS.
-
HenryCase wrote:
stefcep2 wrote:
That's an issue related to the fact that the x86 hardware design is dictated by the OS that will run on it, not the other way around.
In what way is an x86 processor limited by the OSes that run on it? Windows could be ported to any processor architecture and still look and feel the same as it does on x86, provided the processor was fast enough to run it. Similarly, any OS could be ported to x86.
Waccoon wrote:
I disagree. Even X86 is pretty efficient if you think about it, because hardware engineers cannot be anywhere near as sloppy as software engineers
One of the main factors that prevents x86 being efficient is the legacy of previous chip designs. One example of x86 legacy is the A20 line gate. See here:
http://en.wikipedia.org/wiki/A20_line
This is not applicable to EFI-enabled Mactels, some Gateway MCEs and some Xeon server boards, unless you want to boot non-EFI-enabled boot loaders (via EFI's Compatibility Support Module (CSM)). Early IA-32 Mactels don't have a CSM.
In post-1997 PCs, modern IBM PC compatibles do not have the separate chips physically on board (keyboard controller, interrupt controller, etc.); they have a single chip, the Super I/O chip, which emulates all of these IBM PC-compatible chips. The Super I/O chip emulates the A20 line and the 'Fast A20 gate' options.
The A20 gate issue doesn't affect computation performance once x86 enters Protected Mode.
-
hamtronix wrote:
TinyXP gave me no problems and was vastly shrunk in size...
Dig deep enough and you will find problems. Buttons that no longer work, settings tabs that are completely ghosted, and software that just won't install because it needs some obscure service that you never used before. Yes, Windows can be made much smaller, but it's far from problem free.
As for AmigaOS being bloated, it's all relative. My Nokia N95 has about 100MB of software by default. Compared to that the 200MB for OS4 is quite good.
-
stefcep2 wrote:
If there's one thing I have learned over the years, it is that software will always grow to fill the hardware. Amiga hardware has been caught in a 1993 time warp, so the OS and software run within those hardware constraints. It gets interesting if you emulate some of the different Amiga OS distributions out there: AmiKit looks like the most modern AmigaOS you can get in terms of eye candy and "features", but it is far slower than AmigaSys or classic Amiga, which aren't as "feature" rich. Still, the Amiga would have bloated over time, but not nearly as much as x86. Don't forget the x86 hardware platform is basically designed to suit the Windows way of doing things, i.e. it's Windows that demands the hardware be designed to suit it, not Windows that is made to run on the hardware.
The development platform for 64-bit Windows 2000 was DEC's Alpha. That code base forms the basis for the IA-64 and AMD64 (codename Anvil) editions of Windows XP (NT 5.2) and 2K3 (NT 5.2). The NT 5.2 codebase served as the basis for the second Windows Vista development refresh, i.e. the early Windows Vista development builds were based on NT 5.1 (Windows XP 32-bit edition).
Actually, PowerPC's little-endian mode was designed with Windows NT in mind. This mode was dropped in the PowerPC 970. For Xbox 360 development, Microsoft had Windows NT 5.x-based kernels and a modified DirectX9 stack running on Apple's PowerMac G5s, and it didn't require PowerPC's little-endian mode.
The Xbox 360 carries a superset of the DirectX9 stack, and its Xenon CPU includes instructions for Direct3D.
The Win32 layer is portable enough to run on MIPS, Alpha, ARM, PowerPC, etc. Windows CE (with desktop) on an ARM/MIPS-based device looks and feels like Windows 9x/NT4, btw.
http://the-gadgeteer.com/review/moreio_ezpad_ce_net_device_review
Much of the efficiency that comes from the Amiga is because the hardware and OS are tightly integrated. But you can't get that with generic mass-produced hardware,
Factor in AROS X86.
it has to be proprietary, and no users want that because it's expensive, e.g. remember the PowerPC Macs cost nearly twice as much as equivalent PC hardware at the time.
Windows NT runs fine on MIPS and Alphas. Remember NewTek's LightWave accelerator workstations, such as the Raptor, in CU Amiga/Amiga Format magazines?
-
HenryCase wrote:
stefcep2 wrote:
That's an issue related to the fact that the x86 hardware design is dictated by the OS that will run on it, not the other way around.
In what way is an x86 processor limited by the OSes that run on it? Windows could be ported to any processor architecture and still look and feel the same as it does on x86, provided the processor was fast enough to run it. Similarly, any OS could be ported to x86.
Waccoon wrote:
I disagree. Even X86 is pretty efficient if you think about it, because hardware engineers cannot be anywhere near as sloppy as software engineers
One of the main factors that prevents x86 being efficient is the legacy of previous chip designs. One example of x86 legacy is the A20 line gate. See here:
http://en.wikipedia.org/wiki/A20_line
For the Pentium (P5) and greater, the processor has a Fast A20 option that bypasses the keyboard controller completely. To set the A20 line there is no need for delay loops or polling, i.e. you only need three simple asm instructions:
in al, 0x92     ; read System Control Port A
or al, 2        ; set bit 1 (A20 enable) and leave the other bits, especially bit 0 (fast reset), untouched
out 0x92, al    ; write the value back
Fast A20 capability must be detected first, or something unexpected can happen. Boot loaders such as GRUB can enable the A20 gate for you.
-
Roj wrote:
The mouse driver distribution for my Logitech cordless is just over 60 megabytes. That's just a mouse driver. I think it's the other way round. From that, it looks like Windows lacks the resources and needs special help to get a mouse to have proper behavior.
For basic mouse functions, Windows XP and Vista don't need Logitech's SetPoint software.
Logitech's SetPoint 4 software includes photos of the different Logitech mice.
-
stefcep2 wrote:
In what way is an x86 processor limited by the OSes that run on it? Windows could be ported to any processor architecture and still look and feel the same as it does on x86, provided the processor was fast enough to run it. Similarly, any OS could be ported to x86.
I don't know enough details about x86 **processors**; I was talking about the whole PC architectural design, which nowadays is x86-based for home use.
AMD's EV6 bus architecture (for the K7 Athlon) was based on DEC's Alpha EV6, i.e. a "big tin" or workstation platform.
There are K7 Athlon (slot version) motherboards that support both Alphas and K7 Athlons. With its x86 motherboard partners, AMD managed to make EV6-based motherboards cheaper.
AMD's HyperTransport (for K8/K10) is based on Alpha EV7’s bus architecture.
(Nevertheless, per clock cycle it's my understanding that x86 processors did less than the 68k.)
The Motorola 68060's IPC would be blown away by the AMD K7/K8 and Intel Pentium III/Pentium M/Core/Core 2.
The Motorola 68060 is not in the same league as DEC's Alpha at running LightWave.
In basic terms, the AMD K7/K8/K10 Athlon and Intel Pentium III/Pentium M/Core can issue and retire 3 x86 instructions per cycle. The Intel Core 2 Duo can issue and retire 4 x86 instructions per cycle.
The 68060 can only issue two instructions (one integer and one float) per cycle, or two integer instructions and one branch instruction per clock cycle.
The AMD K7's 3-way issue can be any mix of float and integer instructions, e.g. 3 integer, 3 float, 2 integer + 1 float, or 1 integer + 2 float.
x86 is just an ISA, and modern x86 processors (e.g. the Intel Pentium Pro and AMD K5 onwards) translate these CISC (variable-length) instructions into RISC-like internal instructions (i.e. fixed-length operations) executed over multiple pipelines (e.g. 6-wide for the K7 Athlon).
-
I think the majority want bloatware, and if you look at the typical Linux distro that is exactly what you get. Although you can make Linux run from a floppy, it generally doesn't do much. Not only do they add lots of functionality to the Linux kernel to make a distro which is arguably decent, but then they go further: instead of making an OS that is as lean as possible yet full-featured, they stick bundles of applications INTO THE OS DISTRO itself, often WITHOUT the option of NOT installing them. This is a real put-off!
-
@Hammer
Sounds like you know your processors in a level of detail that few do.
However, comparing the 68060, which came out in 1994, to a PIII or K7 (1999) is not very fair.
What were the abilities of the 1993 Pentium or the 1995 Pentium Pro in terms of work per clock cycle?
Was a Pentium better than a 68060, performance-wise, at the same clock speed?
If the 68k line had not effectively been killed (as far as desktop CPUs go), would it not have been improved to stay competitive?
Or is there some architectural reason that would have kept it from the same kind of improvements that keep x86 viable?
-
they stick bundles of applications INTO THE OS DISTRO itself, often WITHOUT the option of NOT installing them. This is a real put-off!
Eh? Which distro does not give you the option of not installing the bundled apps? I've worked with RedHat, Debian and Gentoo and I selected what I wanted to install in all of them.
-
stefcep2 said:
Absolutely, an OS is nothing without apps, but I think we should care when we buy hardware that is 1000 times faster and more spacious, only to find we are 50% slower at controlling those apps just because we "upgraded" the OS.
I agree that wasting the power of a computer system is foolish, and one of the ways this is done is to use a bloated OS. However, the fact of the matter is a true upgrade will always have to offer new features so that people will upgrade, and if you want these new features to be running alongside the old features you are going to have a greater drain on resources. The real issue is that most OS upgrades are unnecessary.
@Hammer
I would just like to say that I mentioned the A20 line as an example of x86 legacy, but I don't know all the details of legacy support for older software. Do you know of any other examples?
zhulien said:
I think the majority want bloatware, and if you look at the typical Linux distro that is exactly what you get.
One of the strengths (and weaknesses) of Linux is the sheer diversity of distros. Most mainstream distros do come with a lot of software ready to install from the disk. I am grateful for this because I find installing Linux software a pain (the lack of an .exe-type program container is one major reason Linux isn't ready for widespread acceptance, IMHO).
However, Linux distros range from super-bloated (Sabayon, for example) to small yet usable (Puppy Linux, DSL, etc.). You also have distros built for speed (Arch Linux, VectorLinux, etc.) and for customisation (Gentoo is a good example). Of course, the most customisable distro would be an LFS (Linux From Scratch); more info here: http://www.linuxfromscratch.org/
-
I thought I'd sit back and see what people's opinions were on the matter. There are a couple of interesting thoughts in here, but it seems that most people either completely missed the point or are still hung up on the same old arguments from 1996.
-
uncharted wrote:
I thought I'd sit back and see what people's opinions were on the matter. There are a couple of interesting thoughts in here, but it seems that most people either completely missed the point or are still hung up on the same old arguments from 1996.
That's because there was nothing to discuss. Could AmigaOS have become the most bloated OS of all time? Yes. Was the size of the OS limited by the storage options? Yes. Does that answer your original questions?
It's more interesting to debate how AmigaOS compares with more modern OS's, especially considering the role of the OS has stayed constant.
-
AmigaOS wouldn't be mean and lean if you were running the following items when it booted up:
A web server
SQL Server
Encryption Software
You guys should really think about what people have running in the background on their Windows and MacOS machines all the time these days. Services, for instance. If you shut these things down, you can't do half as much with the particular OS, but even those systems run faster when they are pared down and less is running.
-
DonnyEMU wrote:
You guys should really think about what people have running in the background on their Windows and MacOS machines all the time these days. Services, for instance. If you shut these things down, you can't do half as much with the particular OS, but even those systems run faster when they are pared down and less is running.
But how many of those services are necessary to keep running all the time? Also, I'm pretty sure that the services don't get used fully; it would make sense to keep the size of them to a minimum (i.e. split them into smaller chunks).
-
HenryCase wrote:
That's because there was nothing to discuss.
Really? Perhaps you'd like to explain that further?
Could AmigaOS have become the most bloated OS of all time? Yes. Was the size of the OS limited by the storage options? Yes. Does that answer your original questions?
No. Because those were not questions I was asking. Did you read the original post?
It's more interesting to debate how AmigaOS compares with more modern OS's, especially considering the role of the OS has stayed constant.
How is that more interesting? It's so {bleep}ing tedious reading through the same antiquated arguments over and over again.
I give up. :-(
-
uncharted wrote:
I give up. :-(
Please don't give up. I apologise for the tone of my previous post; it was a little rude.
Could AmigaOS have become the most bloated OS of all time? Yes. Was the size of the OS limited by the storage options? Yes. Does that answer your original questions?
No. Because those were not questions I was asking. Did you read the original post?
Yes I did read the OP. These were the questions you asked:
1. Is AmigaOS really mean and lean because of its design philosophy or is it more to do with its situation?
2. If you think about it, could it be that AmigaOS is so lean because it was never given the resources by Commodore and has been left to rot ever since?
3. Back in the early 90's AmigaOS was, size-wise, on a par with MacOS. Could it be that had development continued with a decent amount of resources that AmigaOS would be as fully featured and as large as modern OSes?
My answers:
1. The hardware AmigaOS runs on did stop Commodore going too over the top with the design. However, you have to consider the time when AmigaOS was designed (the first version; the rest followed the same template). AmigaOS was certainly flashier than the other OSes released at the same time; it only seems small in comparison with modern OSes.
2. AmigaOS had no need to be bloated, especially considering it wasn't an open platform (all Amiga h/w was produced by Commodore at the time), so they could make it compact. I'm a bit worried that you're looking for a bloated OS (that's how I'm interpreting it; apologies if I'm wrong).
3. I already answered this, but yes, as long as new Amiga h/w was being produced there is nothing intrinsic in the design of the OS that would have prevented AmigaOS becoming bloated. Whether the developers would have chosen to take this route is another matter.
It's more interesting to debate how AmigaOS compares with more modern OS's, especially considering the role of the OS has stayed constant.
How is that more interesting? It's so {bleep}ing tedious reading through the same antiquated arguments over and over again.
You may not have found it interesting, but others might have. Your questions only make sense when you consider the historical context so it is natural to compare what AmigaOS is now with the modern computing world.
uncharted wrote:
HenryCase wrote:
That's because there was nothing to discuss.
Really? Perhaps you'd like to explain that further?
My comment was a reaction to your comment: "I thought I'd sit back and see what people's opinions were on the matter. There are a couple of interesting thoughts in here, but it seems that most people either completely missed the point or are still hung up on the same old arguments from 1996."
From that I was, quite rightly, assuming that you weren't interested in listening to a debate about bloat in AmigaOS vs modern OSes. That's why I gave you the two answers to the questions we could ask if we weren't going to discuss AmigaOS in comparison with newer/other systems.
Does that help you see my point of view?
-
@HenryCase
Don't mind me, I'm just being a grumpy sod today.
-
uncharted wrote:
@HenryCase
Don't mind me, I'm just being a grumpy sod today.
I should be the one apologising uncharted, I was a little out of order with my tone before. Sorry.
Getting (almost) back on topic, how long do you think OS4 (not the version we have now) would have taken to come out if Commodore hadn't stopped producing Amigas?
Here's some release info for key versions of Workbench:
v1.0 - 1985
v2.0 - 1990
v3.0 - 1992
v3.1 - 1994
So I'm thinking v4 would have been around 1995/1996? Of course it wouldn't have been as good as the version we have now. I suppose it would have been launched with the AAA chipset Amigas.
-
HenryCase wrote:
uncharted wrote:
@HenryCase
Don't mind me, I'm just being a grumpy sod today.
I should be the one apologising uncharted, I was a little out of order with my tone before. Sorry.
Getting (almost) back on topic, how long do you think OS4 (not the version we have now) would have taken to come out if Commodore hadn't stopped producing Amigas?
Here's some release info for key versions of Workbench:
v1.0 - 1985
v2.0 - 1990
v3.0 - 1992
v3.1 - 1994
So I'm thinking v4 would have been around 1995/1996? Of course it wouldn't have been as good as the version we have now. I suppose it would have been launched with the AAA chipset Amigas.
No, '95-'96 would have been post-AAA, Hombre chipset. AAA was to be 3.0, but CBM put its development on pause, instead releasing the interim AGA. When they restarted AAA development, they soon found themselves too far behind the curve, so they began Hombre, slated for release in '95.
-
My guess is that we would be looking at AmigaOS 8.0 today: a single-DVD distro, requiring just half a gig of RAM on a Core 2 Duo machine.
The custom chips would long since have been done away with; it's easier and cheaper to use off-the-shelf video cards.
Tripos would long since have been replaced with a BSD variant.
The Toaster, now 100% software, would be included with every Amiga.
The Amiga would have several FireWire ports and a ton of USB ports, as well as DVI and Ethernet.
-
downix wrote:
No, '95-'96 would have been post-AAA, Hombre chipset. AAA was to be 3.0, but CBM put its development on pause, instead releasing the interim AGA. When they restarted AAA development, they soon found themselves too far behind the curve, so they began Hombre, slated for release in '95.
Thanks for this info.
Just out of interest, if AAA had been released instead of AGA (i.e. at the same time) how would it have compared, tech specs wise, with IBM-PC compatible and Apple graphics h/w?
-
HenryCase wrote:
downix wrote:
No, '95-'96 would have been post-AAA, Hombre chipset. AAA was to be 3.0, but CBM put its development on pause, instead releasing the interim AGA. When they restarted AAA development, they soon found themselves too far behind the curve, so they began Hombre, slated for release in '95.
Thanks for this info.
Just out of interest, if AAA had been released instead of AGA (i.e. at the same time) how would it have compared, tech specs wise, with IBM-PC compatible and Apple graphics h/w?
AAA compared well to IBM/Apple graphics hardware of roughly 1995, but was due to come out in 1990/1992. The main advantage of AAA was the ability to grow the system; that is, the chipset was no longer bound to the CPU but was modular, using an interconnect bus (originally the AMI bus, later replaced with PCI), enabling an upgrade path to Hombre when that was due to ship in 1995.
AAA was based on ECS, not AGA, don't forget, so the talk of releasing it today is still a step backwards, as it would break AGA apps.
-
persia wrote:
The custom chips would have long been done away with, it's easier and cheaper to use off the shelf video cards.
I must disagree with you here; if the last 12 years have taught us anything, it is that it is IMPOSSIBLE to build an Amiga at a competitive price using off-the-shelf parts.
The custom chipset has always been of critical importance to the success of the Amiga, and I think that any new Amiga cannot compete on price and specification UNLESS it uses a third-generation custom chipset, even if it is little more than an ASIC gluing together some third-party chips.
The AmigaOne was extortionately priced and was only bought by the true believers; it did not add any new users to our community.
I hope that the next Minimig version does not copy AGA but goes straight to an HDTV-ready graphics mode, i.e. 1920 x 1080 x 24 with an external DSP for 24-bit audio. The custom chipset meant that the Amiga could do more than a PC yet cost less; if the Amiga is to survive another 25 years we must return to that concept.
-
Another problem with off-the-shelf parts is the very short life cycle of parts nowadays. We suffered greatly when Motorola announced the end of the 68k line, yet there is a suggestion that we use a whole raft of third-party chips, which will result in obsolescence and the need to redesign the Amiga every time one of those chips is discontinued. At least in future, if an Amiga manufacturer were to use custom chips, they would have more control over the life cycle of their products.
Also, an integrated custom chipset results in tighter Kickstart code and a more stable platform.
-
A6000 wrote:
I hope that the next Minimig version does not copy AGA but goes straight to an HDTV-ready graphics mode, i.e. 1920 x 1080 x 24 with an external DSP for 24-bit audio. The custom chipset meant that the Amiga could do more than a PC yet cost less; if the Amiga is to survive another 25 years we must return to that concept.
Fully agreed. While, sure, modern-day GPUs are more than adequate, the truth is the rest of a PC or Mac's chipset is downright anemic for performance. I build these things every day and deal with these limitations. Take, for example, the common AC97 sound system that's universal nowadays: it is tied to the CPU, which means you lose performance just by having it in the memory map, and not all BIOSes allow it to be disabled. Same with the disk controllers, USB controller, etc.
A from-scratch design would definitely stand out, and with the reduced costs for prototyping and custom fabbing, it could really be done. I've been working on my chipset for how long now? 10 years as of last Friday. I know these costs, and I've witnessed the cost to produce go through the floor. My first-gen design would have cost me $480,000 for each of the 8 chips it used. Today, $1,850 for the same chipset. My current design of 2 chips would cost me roughly $4,500 for a full-speed prototype, vs the millions back in 1997.
Look at nVidia and XGI, two fabless semiconductor companies that have risen over the past few years. See where they're going. It's more than possible for us.
-
I must disagree with you here; if the last 12 years have taught us anything, it is that it is IMPOSSIBLE to build an Amiga at a competitive price using off-the-shelf parts.
Huh? All the problems associated with building an Amiga with off-the-shelf parts are due to no serious company trying. It's even more impossible to build an Amiga with custom chips than it is with off-the-shelf parts when you're producing anything less than 100,000 units. Remember Phase5? They needed to sell 20,000 PPC cards to break even and they sold half that, and their boards did not even have custom ASICs. Imagine how many more you'd have to sell to break even with a fully custom chipset.
-
AmiGR wrote:
I must disagree with you here; if the last 12 years have taught us anything, it is that it is IMPOSSIBLE to build an Amiga at a competitive price using off-the-shelf parts.
Huh? All the problems associated with building an Amiga with off-the-shelf parts are due to no serious company trying. It's even more impossible to build an Amiga with custom chips than it is with off-the-shelf parts when you're producing anything less than 100,000 units. Remember Phase5? They needed to sell 20,000 PPC cards to break even and they sold half that, and their boards did not even have custom ASICs. Imagine how many more you'd have to sell to break even with a fully custom chipset.
Exactly. By not using custom logic, Phase5 was forced to pay far more per board than if they had combined the parts into an ASIC to reduce the overall cost of production. That is why the VIC-20 could undercut the TI-99/4A on price so much: Commodore custom-made the chips, resulting in a lower cost to produce. Yes, the R&D and initial costs are higher, but the end price is far lower.
The MiniMig, don't forget, uses a "custom made" single chip to replace four chips, which themselves were custom made to reduce the cost of producing the original multi-thousand-chip Lorraine unit. Your argument about cost is a paper tiger; the cost of production is nothing compared to the savings from reducing the overall number of parts in the product.
-
Custom graphics chips were a bad idea. First of all, you are reinventing the wheel: companies like NVIDIA and ATI spend millions of dollars designing video cards; don't tell me that a company that can't pony up an additional $7K for its OS could possibly do what NVIDIA and ATI do.
Also, having the video on a card means that you are in control: maybe you buy your system with a low-end video card and expand later, or you replace an old card with a new one. Either way you are in control.
Of course, if you make the computer then a custom video chipset can be a big dongle, I suppose, but the bigger and better dongle is Intel's trusted platform technology; that's what Apple uses.
I admit speculating on what Amiga would have done had they survived is difficult, and there was virtually zero chance of survival, but the point is that had Amiga survived, AmigaOS today would look absolutely nothing like AmigaOS 4.
-
Exactly. By not using custom logic, Phase5 was forced to pay far more per board than if they had combined the parts into an ASIC to reduce the overall cost of production. That is why the VIC-20 could undercut the TI-99/4A on price so much: Commodore custom-made the chips, resulting in a lower cost to produce. Yes, the R&D and initial costs are higher, but the end price is far lower.
According to Laire, the licences to use the VHDL synthesis software were half a million. Plus R&D, they really would have had no chance to break even, even if they had used ASICs. At the numbers they sold, the cost savings in production would not outweigh the R&D and setup costs.
The MiniMig, don't forget, uses a "custom made" single chip to replace four chips, which themselves were custom made to reduce the cost of producing the original multi-thousand-chip Lorraine unit. Your argument about cost is a paper tiger; the cost of production is nothing compared to the savings from reducing the overall number of parts in the product.
That is true when we're talking about volume, but look at the post I replied to and tell me: what would be the chances of the AmigaOne, for instance, being cheaper had it not been based on off-the-shelf hardware? This market does not really have the numbers to allow companies to produce and sell enough to cover the cost of custom hardware and make a profit.
-
Exactly. By not using custom logic, Phase5 was forced to pay far more per board than if they had combined the parts into an ASIC to reduce the overall cost of production. That is why the VIC-20 could undercut the TI-99/4A on price so much: Commodore custom-made the chips, resulting in a lower cost to produce. Yes, the R&D and initial costs are higher, but the end price is far lower.
Too bad this scheme did not continue with the Amiga series. I wonder why Commodore was selling the Amiga 1000 with such an insane price tag.
-
itix wrote:
Exactly. By not using custom logic, Phase5 was forced to pay far more per board than if they had combined the parts into an ASIC to reduce the overall cost of production. That is why the VIC-20 could undercut the TI-99/4A on price so much: Commodore custom-made the chips, resulting in a lower cost to produce. Yes, the R&D and initial costs are higher, but the end price is far lower.
Too bad this scheme did not continue with the Amiga series. I wonder why Commodore was selling the Amiga 1000 with such an insane price tag.
Hrm? $1000 for a machine that beat $4000 workstations, graphically?
Incidentally, the cost to produce the VIC-20 was $165, vs $485 for the similar-spec TI-99/4A.
-
AmiGR wrote:
Exactly. By not using custom logic, Phase5 was forced to pay far more per board than if they had combined the parts into an ASIC to reduce the overall cost of production. That is why the VIC-20 could undercut the TI-99/4A on price so much: Commodore custom-made the chips, resulting in a lower cost to produce. Yes, the R&D and initial costs are higher, but the end price is far lower.
According to Laire, the licences to use the VHDL synthesis software were half a million. Plus R&D, they really would have had no chance to break even, even if they had used ASICs. At the numbers they sold, the cost savings in production would not outweigh the R&D and setup costs.
Funny, at the same time I paid a lot less for my software. For Eddas development, I shelled out approx. $1,200 for my software package.
The MiniMig, don't forget, uses a "custom made" single chip to replace four chips, which themselves were custom made to reduce the cost of producing the original multi-thousand-chip Lorraine unit. Your argument about cost is a paper tiger; the cost of production is nothing compared to the savings from reducing the overall number of parts in the product.
That is true when we're talking about volume, but look at the post I replied to and tell me: what would be the chances of the AmigaOne, for instance, being cheaper had it not been based on off-the-shelf hardware? This market does not really have the numbers to allow companies to produce and sell enough to cover the cost of custom hardware and make a profit.
Actually, very good chances. I did a cost breakdown for a similar move at about the same time: by migrating to a fixed ASIC and integrating as much as possible, you'd have saved almost $45 per board in production costs. The tool-up would have cost approx. $37,000, mind you, so you'd have to sell around 825 boards to break even. But this would have eliminated the whole MAI supply issue, and given you a faster chipset to boot.
-
persia wrote:
Custom graphics chips were a bad idea. First of all, you are reinventing the wheel: companies like NVIDIA and ATI spend millions of dollars designing video cards; don't tell me that a company that can't pony up an additional $7K for its OS could possibly do what NVIDIA and ATI do.
Also, having the video on a card means that you are in control: maybe you buy your system with a low-end video card and expand later, or you replace an old card with a new one. Either way you are in control.
Of course, if you make the computer then a custom video chipset can be a big dongle, I suppose, but the bigger and better dongle is Intel's trusted platform technology; that's what Apple uses.
I admit speculating on what Amiga would have done had they survived is difficult, and there was virtually zero chance of survival, but the point is that had Amiga survived, AmigaOS today would look absolutely nothing like AmigaOS 4.
Who said graphics? Above I said that the current crop of graphics chips do a fine job. It's the rest of them that can drive the cost up.
My suggestion, if you were to take it, would be to license one of the GPUs out there (nVidia or XGI would be my pick), then design a chipset around that and the CPU, giving you a trifecta of performance. GPUs are underutilized even under heavy loads in several cases, due to bad system management thanks, in part, to the system chipsets under them. A solid supporting chipset could cut production costs and make for a more powerful solution.
-
Actually, very good chances. I did a cost breakdown for a similar move at about the same time: by migrating to a fixed ASIC and integrating as much as possible, you'd have saved almost $45 per board in production costs. The tool-up would have cost approx. $37,000, mind you, so you'd have to sell around 825 boards to break even. But this would have eliminated the whole MAI supply issue, and given you a faster chipset to boot.
And if you add the cost of hiring someone to design and test such a chipset? It's not as if Eyetech had anyone with the skills to do that; they tried to hire Escena to design a custom chipset (on an FPGA, IIRC, but that's a different story) but failed.
-
AmiGR wrote:
Actually, very good chances. I did a cost breakdown for a similar move at about the same time: by migrating to a fixed ASIC and integrating as much as possible, you'd have saved almost $45 per board in production costs. The tool-up would have cost approx. $37,000, mind you, so you'd have to sell around 825 boards to break even. But this would have eliminated the whole MAI supply issue, and given you a faster chipset to boot.
And if you add the cost of hiring someone to design and test such a chipset? It's not as if Eyetech had anyone with the skills to do that; they tried to hire Escena to design a custom chipset (on an FPGA, IIRC, but that's a different story) but failed.
Quite correct. Don't forget, I'm talking as a guy that does know VHDL and Verilog, and finds hardware a fun thing to play with.
Note, I can make as good a case for using commodity chipsets in such a solution as well. I just don't like seeing both sides of any argument dismissed so casually, as most likely the best solution would be a mixture of both.
-
Note, I can make as good a case for using commodity chipsets in such a solution as well. I just don't like seeing both sides of any argument dismissed so casually, as most likely the best solution would be a mixture of both.
Agreed. I won't pretend to have any real-world experience with chipset design or that I've produced any at any point myself anyway. ;-)
-
AmiGR wrote:
Note, I can make as good a case for using commodity chipsets in such a solution as well. I just don't like seeing both sides of any argument dismissed so casually, as most likely the best solution would be a mixture of both.
Agreed. I won't pretend to have any real-world experience with chipset design or that I've produced any at any point myself anyway. ;-)
Well, how about I present the proposal I made to one of the guys a while back?
Xilinx offers free cores for a few functions, including a HyperTransport (HT) module and a PPC bus. Take those, add a DDR2 controller, and voilà, you now have a fully functioning module that can substitute for an AM2 Athlon on a motherboard. Using the CPU fan mount for support, you can even fit it into an existing socket. Now, the performance wouldn't be worth the work, but if you swapped out the BIOS for OpenFirmware, you'd have a fully functional PPC-based machine without needing to develop a new motherboard, beyond custom-making the firmware. That would limit you to a specific motherboard, or a limited selection of motherboards, which you retail at a slight markup.
-
Xilinx offers free cores for a few functions, including a HyperTransport (HT) module and a PPC bus. Take those, add a DDR2 controller, and voilà, you now have a fully functioning module that can substitute for an AM2 Athlon on a motherboard. Using the CPU fan mount for support, you can even fit it into an existing socket. Now, the performance wouldn't be worth the work, but if you swapped out the BIOS for OpenFirmware, you'd have a fully functional PPC-based machine without needing to develop a new motherboard, beyond custom-making the firmware. That would limit you to a specific motherboard, or a limited selection of motherboards, which you retail at a slight markup.
I like the idea and I'd be willing to bet that the performance, worth the effort or not, would be better than any ArticiaS machine. ;-)
-
AmiGR wrote:
Xilinx offers free cores for a few functions, including a HyperTransport (HT) module and a PPC bus. Take those, add a DDR2 controller, and voilà, you now have a fully functioning module that can substitute for an AM2 Athlon on a motherboard. Using the CPU fan mount for support, you can even fit it into an existing socket. Now, the performance wouldn't be worth the work, but if you swapped out the BIOS for OpenFirmware, you'd have a fully functional PPC-based machine without needing to develop a new motherboard, beyond custom-making the firmware. That would limit you to a specific motherboard, or a limited selection of motherboards, which you retail at a slight markup.
I like the idea and I'd be willing to bet that the performance, worth the effort or not, would be better than any ArticiaS machine. ;-)
Um... yeah, you could say that again. 8)
The beauty of the design was that it was made to support up to four CPUs; a single CPU would have been a waste of memory bandwidth. Still, the cost of the unit would have more than offset the cost savings on the motherboard, but performance would have been better, I'd have bet.
-
Too bad this scheme did not continue on the Amiga series. I wonder why Commodore was selling the Amiga 1000 with such an insane price tag.
Hrm? $1000 for a machine that beat $4000 workstations, graphically?
Amiga 1000 sales were struggling while the cheaper (yet still expensive, but technically inferior) Atari ST was selling like hot cakes here in Europe. Consumers simply could not afford an Amiga 1000.
But of course they finally got it right, and the inexpensive Amiga 500 made the Amiga successful. I just wonder why it did not happen with the Amiga 1000, when they are based on the same chipset.
-
itix wrote:
Too bad this scheme did not continue on the Amiga series. I wonder why Commodore was selling the Amiga 1000 with such an insane price tag.
Hrm? $1000 for a machine that beat $4000 workstations, graphically?
Amiga 1000 sales were struggling while the cheaper (yet still expensive, but technically inferior) Atari ST was selling like hot cakes here in Europe. Consumers simply could not afford an Amiga 1000.
But of course they finally got it right, and the inexpensive Amiga 500 made the Amiga successful. I just wonder why it did not happen with the Amiga 1000, when they are based on the same chipset.
But they're not. Actually, your argument is a perfect example of Commodore's willingness to cut costs through the use of custom chipsets. The A1000's chipset was just the three main chips plus the CIAs. For the A500, they migrated logic that had previously been on the board into Agnus, making Fat Agnus, *AND* consolidated yet more logic into the Gary chip. In short, they cut production costs by almost a third through the use of custom chip logic, consolidating chips from the previous design into newer, cheaper units.
-
Every day our community shrinks. For the Amiga to survive we need new users; otherwise, when we die, the Amiga will die.
We need to persuade new users to buy a third-generation Amiga (when they become available) instead of:
1. A PC
2. A games console
3. A Mac
4. A Linux box
5. Anything else
To do this, the new Amiga must be as far ahead of the PC as it was in 1984, and be reasonably priced.
This can only be done by a manufacturer with a business plan to make, market and sell a minimum of 500,000 units a year.
The cost of designing a custom chipset will be amortised over 500,000 units, and the production costs will allow the Amiga to compete on price.
The major costs come in persuading people to buy the machine (marketing).
The Amiga does not have to replace the PC, simply carve out a niche for itself.
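To put rough numbers on that amortisation point, here is a tiny sketch in C. The $5 million design cost is purely an assumed figure for illustration, not a real quote; the 500,000-unit volume is the one from the post.

/* Amortising a hypothetical custom-chipset design cost over a production run.
 * The $5M NRE figure is an assumption for illustration only; the volume is
 * the 500,000 units/year mentioned above. */
#include <stdio.h>

int main(void)
{
    double design_cost = 5000000.0; /* hypothetical one-off design (NRE) cost */
    long   units       = 500000;    /* yearly production volume */

    printf("Design cost added per unit: $%.2f\n",
           design_cost / (double)units); /* $10.00 with these inputs */
    return 0;
}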
-
A6000 wrote:
Every day our community shrinks. For the Amiga to survive we need new users; otherwise, when we die, the Amiga will die.
We need to persuade new users to buy a third-generation Amiga (when they become available) instead of:
1. A PC
2. A games console
3. A Mac
4. A Linux box
5. Anything else
To do this, the new Amiga must be as far ahead of the PC as it was in 1984, and be reasonably priced.
This can only be done by a manufacturer with a business plan to make, market and sell a minimum of 500,000 units a year.
The cost of designing a custom chipset will be amortised over 500,000 units, and the production costs will allow the Amiga to compete on price.
The major costs come in persuading people to buy the machine (marketing).
The Amiga does not have to replace the PC, simply carve out a niche for itself.
Quite right, and it is more than doable. I would even strongly suggest partnering with nVidia, as they have a GPU and chipset design with some merit but need a non-PC platform to utilize it on. Some smart development, a custom ASIC for the CPU, maybe a better memory controller, and we'd have something. Imagine a true legacy-free system paired with a super-thin OS embedded into the mobo.
-
Our saintly OEM could participate in the open-source OpenGL graphics processor projects; nVidia's processors are discontinued after 2-3 years, and we don't want that pain again.
-
Quite right, and it is more than doable. I would even strongly suggest partnering with nVidia, as they have a GPU and chipset design with some merit but need a non-PC platform to utilize it on. Some smart development, a custom ASIC for the CPU, maybe a better memory controller, and we'd have something. Imagine a true legacy-free system paired with a super-thin OS embedded into the mobo.
Or you could just focus on the software, which is what really matters.
-
koaftder wrote:
Quite right, and it is more than doable. I would even strongly suggest partnering with nVidia, as they have a GPU and chipset design with some merit but need a non-PC platform to utilize it on. Some smart development, a custom ASIC for the CPU, maybe a better memory controller, and we'd have something. Imagine a true legacy-free system paired with a super-thin OS embedded into the mobo.
Or you could just focus on the software, which is what really matters.
And wind up at a dead end as you sink 18 months into developing "only software", just to have the one hardware piece you relied on dry up, with no alternative?
Have you not learned the lessons of the A1 and Pegasos?
-
downix wrote:
koaftder wrote:
Quite right, and it is more than doable. I would even strongly suggest partnering with nVidia, as they have a GPU and chipset design with some merit but need a non-PC platform to utilize it on. Some smart development, a custom ASIC for the CPU, maybe a better memory controller, and we'd have something. Imagine a true legacy-free system paired with a super-thin OS embedded into the mobo.
Or you could just focus on the software, which is what really matters.
And wind up at a dead end as you sink 18 months into developing "only software", just to have the one hardware piece you relied on dry up, with no alternative?
Have you not learned the lessons of the A1 and Pegasos?
Where they went wrong was tying themselves down to a custom board. Stuffing some CPU core on an ASIC and having NVidia roll out a chip doesn't do anything but make it cost more. Here's an idea: do what Apple did, and float your platform on standard PC hardware. Nobody cares what chips are in the box. Advances in hardware don't impress people anymore. This is the late 1980s.
-
koaftder wrote:
downix wrote:
koaftder wrote:
Quite right, and it is more than doable. I would even strongly suggest partnering with nVidia, as they have a GPU and chipset design with some merit but need a non-PC platform to utilize it on. Some smart development, a custom ASIC for the CPU, maybe a better memory controller, and we'd have something. Imagine a true legacy-free system paired with a super-thin OS embedded into the mobo.
Or you could just focus on the software, which is what really matters.
And wind up at a dead end as you sink 18 months into developing "only software", just to have the one hardware piece you relied on dry up, with no alternative?
Have you not learned the lessons of the A1 and Pegasos?
Where they went wrong was tying themselves down to a custom board. Stuffing some CPU core on an ASIC and having NVidia roll out a chip doesn't do anything but make it cost more. Here's an idea: do what Apple did, and float your platform on standard PC hardware. Nobody cares what chips are in the box. Advances in hardware don't impress people anymore. This is the late 1980s.
And gain vendor lock-in like Apple is suffering from now? With AMD, Intel and nVidia all forcing Apple to cancel products ahead of schedule, delaying the rollout of products, and generally hampering platform development? Sure, sign me up, and watch as we go *poof*. Apple can get away with it because of their user base; we can't. We're the other guys, the guys nobody bets on! If we want a future, we can't be the other guys, we have to be the best guys.
So, I'm willing to discuss this option. Tell me, how do you propose gaining the documentation to enable us to even port our OS to the next-gen Intel or AMD CPUs? The next-gen chipsets? Next-gen GPUs? Now now, no bringing up today's products; we're talking a new platform. We need to hit the ground with the new OS and new apps all running on the new CPU Intel ships in 18 months, before anyone else has had a chance to code for it. Propose to me how we manage that, with our market as it is today.
-
Actually, it's 2007 :lol:
The Amiga is/was a very special machine; we do not want a PC clone by another name.
Whilst it is possible that Apple's OS could replace Windows (if they let it), it is more likely that most Macs will be running Vista eventually.
-
The Amiga is/was a very special machine; we do not want a PC clone by another name.
I find Downix's argument far more convincing. What is a "PC clone" supposed to be? Computers are standard nowadays, no matter what OS they run.
Whilst it is possible that Apple's OS could replace Windows (if they let it), it is more likely that most Macs will be running Vista eventually.
You really do not understand Macs, then. Macs are not Wintel PCs. They can run Windows, yes, but I know of not a single person who has bought a Mac to run Windows exclusively. I know of people who used to use Boot Camp to run a few apps, and they are now running their Windows apps on virtual machines alongside MacOS, and even in patched VM environments that make Windows applications appear as windows on the MacOS desktop. No-one spends the money on a Mac to get a Vista machine; people can buy cheaper and more powerful Vista machines elsewhere.
-
AmiGR wrote:
The Amiga is/was a very special machine; we do not want a PC clone by another name.
I find Downix's argument far more convincing. What is a "PC clone" supposed to be? Computers are standard nowadays, no matter what OS they run.
PCs are standard; the Amiga is different.
Whilst it is possible that Apple's OS could replace Windows (if they let it), it is more likely that most Macs will be running Vista eventually.
You really do not understand Macs, then. Macs are not Wintel PCs. They can run Windows, yes, but I know of not a single person who has bought a Mac to run Windows exclusively. I know of people who used to use Boot Camp to run a few apps, and they are now running their Windows apps on virtual machines alongside MacOS, and even in patched VM environments that make Windows applications appear as windows on the MacOS desktop. No-one spends the money on a Mac to get a Vista machine; people can buy cheaper and more powerful Vista machines elsewhere.
Guilty as charged.
The important consideration is software availability: if a new Amiga runs on a "standard" PC motherboard, then software producers will make their job easier by insisting we buy a copy of Vista and run PC software.
-
And gain vendor lock-in like Apple is suffering from now? With AMD, Intel and nVidia all forcing Apple to cancel products ahead of schedule, delaying the rollout of products, and generally hampering platform development? Sure, sign me up, and watch as we go *poof*. Apple can get away with it because of their user base; we can't. We're the other guys, the guys nobody bets on! If we want a future, we can't be the other guys, we have to be the best guys.
So, I'm willing to discuss this option. Tell me, how do you propose gaining the documentation to enable us to even port our OS to the next-gen Intel or AMD CPUs? The next-gen chipsets? Next-gen GPUs? Now now, no bringing up today's products; we're talking a new platform. We need to hit the ground with the new OS and new apps all running on the new CPU Intel ships in 18 months, before anyone else has had a chance to code for it. Propose to me how we manage that, with our market as it is today.
Your argument doesn't make much sense on the processor side. Sure, new processors come out every six months or a year, but architecturally they are the same.
The graphics processors and other types of devices, yup, moving targets. It's just something you have to deal with. Creating your own just means that you get stuck in an expensive and never-ending cycle of obsolescence while the bigger players eat your lunch.
A small company like Amiga doesn't have the funds to go and talk to nVidia or IBM and pay for customized versions of different types of hardware just for the sake of having all the specs, let alone actually having them fabbed.
Processor specs are easily available. The Linux folks are putting together software that runs on all kinds of stuff, the BSD guys are doing it, hell, even AROS is running on cheap common hardware.
AROS will have a web browser before you could even work out a spec for a custom processor, or get all the paperwork done to get access to the latest specs on an up-and-coming NVidia or ATI device.
-
I'm gonna get flamed for this, but here goes.
In my opinion, the smartest thing Amiga Inc ever did was AmigaDE. Too bad they failed.
-
Still available as the new and improved Amiga Anywhere :lol:
-
koaftder wrote:
And gain vendor lock-in like Apple is suffering from now? With AMD, Intel and nVidia all forcing Apple to cancel products ahead of schedule, delaying the rollout of products, and generally hampering platform development? Sure, sign me up, and watch as we go *poof*. Apple can get away with it because of their user base; we can't. We're the other guys, the guys nobody bets on! If we want a future, we can't be the other guys, we have to be the best guys.
So, I'm willing to discuss this option. Tell me, how do you propose gaining the documentation to enable us to even port our OS to the next-gen Intel or AMD CPUs? The next-gen chipsets? Next-gen GPUs? Now now, no bringing up today's products; we're talking a new platform. We need to hit the ground with the new OS and new apps all running on the new CPU Intel ships in 18 months, before anyone else has had a chance to code for it. Propose to me how we manage that, with our market as it is today.
Your argument doesn't make much sense on the processor side. Sure, new processors come out every six months or a year, but architecturally they are the same.
SSE4 or SSE4a, anyone? Yes, the general core design is constant, but the add-ons touted by both companies are increasingly proprietary. AMD will soon be shipping CPUs with ATI GPUs embedded in them. Would it not make sense to be targeting something like that? (A quick detection sketch follows at the end of this post.)
The graphics processors and other types of devices, yup, moving targets. It's just something you have to deal with. Creating your own just means that you get stuck in an expensive and never-ending cycle of obsolescence while the bigger players eat your lunch.
A small company like Amiga doesn't have the funds to go and talk to nVidia or IBM and pay for customized versions of different types of hardware just for the sake of having all the specs, let alone actually having them fabbed.
Processor specs are easily available. The Linux folks are putting together software that runs on all kinds of stuff, the BSD guys are doing it, hell, even AROS is running on cheap common hardware.
AROS will have a web browser before you could even work out a spec for a custom processor, or get all the paperwork done to get access to the latest specs on an up-and-coming NVidia or ATI device.
Amiga has plenty of funds to approach the "other white meat" GPU vendors, such as ST Micro, VIA and XGI. Each of them is hungry for a vendor to include them by default, and is willing to be wined and dined. But you still do not know the cost of fabbing; I do. It is not as expensive as it once was.
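On the SSE4/SSE4a point above, here is a minimal detection sketch in C, assuming GCC or Clang on x86 and their <cpuid.h> helper. The feature bits are the commonly documented ones (SSE4.1/SSE4.2 in CPUID leaf 1, ECX bits 19/20; SSE4a in AMD's extended leaf 0x80000001, ECX bit 6); it only illustrates the kind of runtime probing an OS or app ends up doing once the two vendors' extensions diverge.

/* Sketch: probing for SSE4.1/SSE4.2 (Intel and AMD) and SSE4a (AMD-only).
 * Assumes GCC/Clang on x86, using the <cpuid.h> helper. */
#include <cpuid.h>
#include <stdio.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    if (__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
        printf("SSE4.1: %s\n", (ecx & (1u << 19)) ? "yes" : "no");
        printf("SSE4.2: %s\n", (ecx & (1u << 20)) ? "yes" : "no");
    }
    if (__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx)) {
        printf("SSE4a : %s\n", (ecx & (1u << 6)) ? "yes" : "no");
    }
    return 0;
}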
-
And how do you convince a graphics card maker you are serious?
I can imagine it now:
Chip Maker: How many computers do you produce?
Amiga: Well, we have never actually produced any, but the company we bought the name from, well, actually they didn't either, but the company before them, they, well, they could have produced computers, and so could the one before that, if it hadn't ended up bankrupt.
Chip Maker: I see, and how many are you planning to produce?
Amiga: Well, we had this guy who said he was going to produce a lot of Amigas, but, well, you know, he doesn't have a website, or a company actually, but I'm sure it'll all turn out.
Chip Maker: What exactly is Amiga Inc's business?
Amiga: Well, we basically outsource American jobs to India.
And let's not even get to the City of Kent fiasco...
-
Just as I thought I was out, they drag me back in!
THERE IS NO SUGGESTION OR POSSIBILITY THAT AMIGA INC WILL DO THIS.
What I am suggesting here must be done by an OEM with SERIOUS intentions of Mass Production of Amiga computers. A company fully committed to the Amiga concept. A company staffed with people with the vision and talent of the original Hi-Toro company, but with a lot more money.
-
A6000 wrote:
Just as I thought I was out, they drag me back in!
THERE IS NO SUGGESTION OR POSSIBILITY THAT AMIGA INC WILL DO THIS.
What I am suggesting here must be done by an OEM with SERIOUS intentions of Mass Production of Amiga computers. A company fully committed to the Amiga concept. A company staffed with people with the vision and talent of the original Hi-Toro company, but with a lot more money.
This I don't think will happen, but it does not need to happen either. Hi-Toro needed a lot of cash due to the nature of the chip market back then. Today, with modern HDL toolkits, they could have done a lot more with a lot less cash.
-
persia wrote:
And how do you convince a graphics card maker you are serious?
Easily. You actually talk to them. Chip makers are more than willing, 9 times out of 10, to talk and even give you specs for little more than NDAs. I know: I've gotten a ton of docs on a variety of chipsets, from high-end audio DSPs to next-gen CPUs, and I'm a guy working out of his closet.
-
downix wrote:
A6000 wrote:
Just as I thought I was out, they drag me back in!
THERE IS NO SUGGESTION OR POSSIBILITY THAT AMIGA INC WILL DO THIS.
What I am suggesting here must be done by an OEM with SERIOUS intentions of Mass Production of Amiga computers. A company fully committed to the Amiga concept. A company staffed with people with the vision and talent of the original Hi-Toro company, but with a lot more money.
This I don't think will happen, but it does not need to happen either. Hi-Toro needed a lot of cash due to the nature of the chip market back then. Today, with modern HDL toolkits, they could have done a lot more with a lot less cash.
Anything less will fail. Mass production is necessary to achieve a competitive retail price, and a lot of money is required for marketing.
-
A6000 wrote:
downix wrote:
This I don't think will happen, but it does not need to happen either. Hi-Toro needed a lot of cash due to the nature of the chip market back then. Today, with modern HDL toolkits, they could have done a lot more with a lot less cash.
Anything less will fail. Mass production is necessary to achieve a competitive retail price, and a lot of money is required for marketing.
It is? Funny, the company I work for spends less on marketing than most companies spend on staples, yet we do well.
Work smarter, not harder, and you can do a lot for a little, my friend.
-
AmiGR wrote:
they go stick bundles of applications INTO THE OS DISTRO itself. Often WITHOUT the option of NOT installing them. This is a real put-off!
Eh? Which distro does not give you the option of not installing the bundled apps? I've worked with RedHat, Debian and Gentoo and I selected what I wanted to install in all of them.
E.g. Ubuntu doesn't, Puppy doesn't, Vector doesn't.
-
Ah.. Guys. You've all gone way off topic.
So I will too. ;-)
Look. Clearly the future is x86 for the desktop. Stuffing around with PPC is clearly a dead end for so many reasons, and you will never, ever be anywhere near the cutting edge. PPC will only buy us a couple more years, based on the current state of AmigaOS (or similar) development.
That said, the implementation of x86 should be done like Apple's, i.e. on custom hardware, for the obvious reason of minimising hardware support. So you pick a manufacturer (some unknown Taiwanese maker, for instance) to partner with, rebrand some of their x86 hardware as Amiga, and include AmigaOS as standard, hopefully in a nice case. That's gotta be too easy.
As for worrying about GPUs and the like, forget it. There are no developer resources to take advantage of them anyway. You couldn't sign console devs to work on it because the return would be too low. The latest GPUs would only come in handy if you could dual-boot Windows for Windows games, which is what we're trying to avoid anyway.
My vote would be to partner with the company producing those wedge-shaped computers reminiscent of the A500 of old, as you wouldn't be expecting much from them hardware-wise, and wouldn't be picked on for not being able to utilise it fully anyway. I think it would be a great stepping stone, and maybe I am alone in thinking they are cool.
(http://www.blogsmithmedia.com/www.engadget.com/media/2007/03/cybernet-zpc-945sl.jpg)
If you can't get your OS5 ready in time, or an x86 port of OS4, you simply use AROS, or you get an OS licence for something that can act as a host for AROS libs, like, say, QNX (QNX would sit well with Amigans). Think OS X running on BSD. There, done. You've got the Amiga API combined with a modern OS with modern features like memory protection etc., you've got apps already ported to QNX like browsers and Office stuff, and you've got all this sitting on a fully branded and stable, PC/Windows-compatible Amiga computer. And it would eventually run AROS natively too if you wanted, or even Windows in a virtual machine. How Amiga Inc could lose with this approach I do not know.
But now I will go BACK on topic and at last mention my take on OS bloat.
Bloat is a function of abstraction more than anything else. Abstraction is a boon to developers and allows them to design and develop software at a higher level, or with a higher-level language or API. I'm no expert, but generally I imagine more low-level knowledge is required of software devs on AmigaOS than on most others. Abstraction lends itself to modularisation, which also increases the ability to coordinate efforts, which in turn increases developer output and leads to more ambitious productions. Abstraction typically creates software layers, which can be better maintained, and all this is made possible by the growth in computing power. So while you lose out in terms of speed in most cases, you gain software that is likely to be more ambitious, interoperable, frequently updated, and easily maintained. AmigaOS isn't bloated in comparison, but it probably isn't considered as feature-rich either.
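To make the abstraction point concrete, here is a toy sketch in C: the same write done directly versus through a generic "stream" layer. The names and interface are made up for illustration. The layered call costs an extra pointer indirection, which is the small per-call price; what it buys is that the caller no longer cares what sits behind the interface.

/* Toy illustration of abstraction: a direct call vs. a generic "stream" layer.
 * The layered version costs an extra indirection per call, but the caller no
 * longer needs to know (or be rebuilt) when the backend changes. */
#include <stdio.h>
#include <string.h>

/* A hypothetical generic interface, for illustration only. */
struct stream {
    void (*write)(struct stream *s, const char *data, size_t len);
};

static void console_write(struct stream *s, const char *data, size_t len)
{
    (void)s;                      /* this backend needs no extra state */
    fwrite(data, 1, len, stdout);
}

int main(void)
{
    const char msg[] = "hello\n";

    /* Direct: fast, but welded to one backend. */
    fwrite(msg, 1, strlen(msg), stdout);

    /* Layered: one indirection more, but the backend is swappable. */
    struct stream con = { console_write };
    con.write(&con, msg, strlen(msg));
    return 0;
}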
Ok. Going back to my happy place.
-
Why not put a CPU core (PPC or simplified x86), a GPU and a decent amount of RAM all on the same silicon?
Isn't that enough to be revolutionary? An AmigaOS running decently on a matchbox-sized computer.
If only Clone-A could squeeze AGA, some memory and a PPC/680x0 core onto its silicon too, we'd be part way there and loving it.
I'm not saying all the RAM needs to be on the chip. It'd be like Amiga Chip RAM.
Ok. Going back to my happy place now.
-
No-one spends the money on a Mac to get a Vista machine; people can buy cheaper and more powerful Vista machines elsewhere.
I work at a university. All of the high-end Macs that have been purchased this year have been used solely as Windows machines.
This is because they were cheaper than the equivalent PCs from tier 1 suppliers. However, they are running XP, not Vista. Apple may have to reconsider their generous educational discounts.
-
I would just like to say that I mentioned the A20 line as an example of x86 legacy, but I don't know all the details of legacy support for older software. Do you know of any other examples?
The stack-model x87 FPU and the FXCH instruction. Both were fixed (worked around) in AMD's K7 Athlon, i.e. a hardwired FXCH (effective latency of 0 cycles) and a hardware translator from the FPU stack model to a register FPU model.
x87's lack of fused FADD and FMUL instructions; Intel's Core 2 fixes this issue by detecting (in hardware) dependent FADD and FMUL instructions and fusing them together.
The stack FPU model was dumped in AMD64/Intel 64/x64 modes.
-
Or is there some architectural reason that would have limited it from the same type of improvements that make the x86 still viable?
RISC hype...
What were the abilities of the 1993 Pentium or 1995 Pentium Pro in regards to clock cycles?
Like the other P6-class cores (e.g. Pentium II, Pentium III, Pentium M, Core 1), the Pentium Pro has three x86 decoders, i.e. 3 x86 instructions per cycle.
The Pentium Classic can issue two x86 instructions per cycle (with limitations).
The P6 has a partially pipelined FPU (for multiply instructions). Like the Pentium Classic's FPU, the 68060's FPU is not pipelined.
The K7 Athlon has a fully superpipelined FPU.
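To put rough numbers on those decoder widths, here is a small sketch in C. It only computes peak decode throughput (width times clock); real sustained IPC is far lower once memory stalls and dependencies bite, and the clock speeds below are just typical parts of the era, not exact matches.

/* Peak x86 decode throughput = decoders per cycle * clock.
 * Clock figures are typical parts of the era, used as assumptions. */
#include <stdio.h>

int main(void)
{
    struct { const char *cpu; int width; double mhz; } chips[] = {
        { "Pentium (1993)",     2,  66.0 },
        { "Pentium Pro (1995)", 3, 200.0 },
    };

    for (int i = 0; i < 2; i++)
        printf("%-20s ~%.0f million x86 instructions/s peak\n",
               chips[i].cpu, chips[i].width * chips[i].mhz);
    return 0;
}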
-
So what about modern PCs and Macs which all use multi-cores, how do they compare with the old style P4s?
Hammer wrote:
Or is there some architectural reason that would have limited it from the same type of improvements that make the x86 still viable?
RISC hype...
What were the abilities of the 1993 Pentium or 1995 Pentium Pro in regards to clock cycles?
Like the other P6-class cores (e.g. Pentium II, Pentium III, Pentium M, Core 1), the Pentium Pro has three x86 decoders, i.e. 3 x86 instructions per cycle.
The Pentium Classic can issue two x86 instructions per cycle (with limitations).
The P6 has a partially pipelined FPU (for multiply instructions). Like the Pentium Classic's FPU, the 68060's FPU is not pipelined.
The K7 Athlon has a fully superpipelined FPU.
-
HenryCase wrote:
downix wrote:
No, '95-'96 would have been post-AAA, the Hombre chipset. AAA was to be 3.0, but CBM put its development on pause, instead releasing the interim AGA. When they restarted AAA development, they soon found themselves too far behind the curve, so they began Hombre, slated for release in '95.
Thanks for this info.
Just out of interest, if AAA had been released instead of AGA (i.e. at the same time) how would it have compared, tech specs wise, with IBM-PC compatible and Apple graphics h/w?
About on par with ATI's Mach 32.
http://en.wikipedia.org/wiki/ATI_Mach
Factor in that Intel had its i860/i960 RISC 3D hybrid chips...
-
persia wrote:
So what about modern PCs and Macs which all use multi-cores, how do they compare with the old style P4s?
This is a large topic.
http://arstechnica.com/articles/paedia/cpu/core.ars
http://arstechnica.com/articles/paedia/cpu/core.ars/4
http://arstechnica.com/articles/paedia/cpu/core.ars/5
This refers to Intel's Core 2 Duo/Quads.
The PowerPC G4's AltiVec implementation was good, but it was throttled by a crap bus and chipsets. If the G4 had used the EV6 bus, the outcome would have been different. Both AMD and Intel have 128-bit FP hardware units with plenty of bus bandwidth.
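For a rough sense of the bus gap being described, here is a small sketch in C. Peak bandwidth is just bus width times effective transfer rate; the bus speeds below are typical parts rather than exact matches, and real-world throughput is lower still.

/* Peak front-side-bus bandwidth = (width in bytes) * effective transfer rate.
 * The bus speeds are typical parts of the era, used here as assumptions. */
#include <stdio.h>

int main(void)
{
    struct { const char *bus; int bytes; double mtps; } buses[] = {
        { "G4 MPX bus, 133 MHz (SDR)", 8, 133.0 },
        { "EV6 (Athlon), 100 MHz DDR", 8, 200.0 },
        { "EV6 (Athlon), 200 MHz DDR", 8, 400.0 },
    };

    for (int i = 0; i < 3; i++)
        printf("%-28s ~%.1f GB/s peak\n",
               buses[i].bus, buses[i].bytes * buses[i].mtps / 1000.0);
    return 0;
}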
-
BigBenAussie wrote:
Ah.. Guys. You've all gone way off topic.
So I will too. ;-)
Look. Clearly the future is x86 for the desktop. Stuffing around with PPC is clearly a dead end for so many reasons, and you will never, ever be anywhere near the cutting edge. PPC will only buy us a couple more years, based on the current state of AmigaOS (or similar) development.
There are more chips out there than PPC and x86. ARM, MIPS, SuperH and SPARC are all still viable, and each has its own unique strengths that lend themselves to a desktop platform, such as more efficiency, a better op-per-clock ratio, and... they're LICENSABLE. You could take them, embed them into a system-on-chip and produce a more cost-efficient system than any x86 machine could hope to be. Broaden your horizons sometime; I did, and I haven't looked back. *pets his SPARC*
-
Fully agreed. While, sure, modern-day GPUs are more than adequate, the truth is that the rest of a PC or Mac's chipset is downright anemic performance-wise. I build these things every day and deal with these limitations. For example, the common AC'97 sound system that's universal nowadays.
As for AC'97 or HDA, Lintel/Wintel/Mactel chipsets are designed with a modern x86/x64 processor in mind.
-
There are more chips out there than PPC and x86. ARM, MIPS, SuperH and SPARC are all still viable, and each has its own unique strengths that lend themselves to a desktop platform, such as more efficiency, a better op-per-clock ratio, and... they're LICENSABLE.
Let's see SPARC IV compete against the Celeron/Pentium Dual-Core (Core 2 based), K8 Sempron and Athlon 64s in the race to the bottom (for price).
-
Hammer wrote:
persia wrote:
So what about modern PCs and Macs which all use multi-cores, how do they compare with the old style P4s?
This is a large topic.
http://arstechnica.com/articles/paedia/cpu/core.ars
http://arstechnica.com/articles/paedia/cpu/core.ars/4
http://arstechnica.com/articles/paedia/cpu/core.ars/5
This refers to Intel's Core 2 Duo/Quads.
The PowerPC G4's AltiVec implementation was good, but it was throttled by a crap bus and chipsets. If the G4 had used the EV6 bus, the outcome would have been different. Both AMD and Intel have 128-bit FP hardware units with plenty of bus bandwidth.
Fully agreed, hence why I'm currently modifying the SPARC T1 to use a HyperTransport bus rather than its current proprietary bus design. Cuts costs *and* boosts speed. But, lacking an AltiVec-like design of my own at the moment, it's still little more than a design exercise.
-
Hammer wrote:
There are more chips out there than PPC and x86. ARM, MIPS, SuperH and SPARC are all still viable, and each has its own unique strengths that lend themselves to a desktop platform, such as more efficiency, a better op-per-clock ratio, and... they're LICENSABLE.
Let's see SPARC IV compete against the Celeron/Pentium Dual-Core (Core 2 based), K8 Sempron and Athlon 64s in the race to the bottom (for price).
The IV? Egads, guy, get into at least 2005. Not that I'm one to talk, I run a IIi.
Entry-level ATX boards start at under $200, including a 600MHz CPU. While performance-wise the Core 2 is higher at this price point, admittedly, it's not so far ahead that it's a blowout.
-
I've thought the same thing, but know little of the OS, so I can't answer intelligently; but I loved the way you ended your post......discuss.....made me laugh. :lol: :lol: :lol: :lol:
-
downix wrote:
Hammer wrote:
persia wrote:
So what about modern PCs and Macs which all use multi-cores, how do they compare with the old style P4s?
This is a large topic.
http://arstechnica.com/articles/paedia/cpu/core.ars
http://arstechnica.com/articles/paedia/cpu/core.ars/4
http://arstechnica.com/articles/paedia/cpu/core.ars/5
This refers to Intel's Core 2 Duo/Quads.
The PowerPC G4's AltiVec implementation was good, but it was throttled by a crap bus and chipsets. If the G4 had used the EV6 bus, the outcome would have been different. Both AMD and Intel have 128-bit FP hardware units with plenty of bus bandwidth.
Fully agreed, hence why I'm currently modifying the SPARC T1 to use a HyperTransport bus rather than its current proprietary bus design. Cuts costs *and* boosts speed.
Officially, Sun Microsystems is currently evaluating AMD's Torrenza for all Sun platforms.
http://www.amd.com/us-en/Corporate/VirtualPressRoom/0,,51_104_543~112780,00.html