
Author Topic: Does Linux have an Amiga feel?  (Read 41319 times)

Offline vidarh

  • Sr. Member
  • ****
  • Join Date: Feb 2010
  • Posts: 409
Re: Does Linux have an Amiga feel?
« Reply #14 from previous page: July 18, 2013, 05:00:56 PM »
Quote from: Crumb;741188
Linux with its monolithic kernel seems to be the past. It's nowhere as extensible as AmigaOS was in its first day. Any BSD seems much more evolved and advanced than Linux, at least in extensibility.


What part of Linux exactly is it you think is hard to extend?

Quote

The rest of the OS (GNU) is not my cup of tea, starting from the lack of coherence between its parts (core and GUI) and all the heterogeneous and badly integrated apps.


We can agree on the lack of integration - it's one of the things I miss from AmigaOS, and I see the same problem on OS X and Windows.

Quote

Linux is slow, no matter what you do. Any AmigaOS flavour runs rings in terms of speed.


That's true - if you can run them on the same hardware. But AmigaOS also does far less. The comparison is like saying a Ferrari will outcompete a tank on speed. Of course it does: it has no armour to drag along, no heavy cannon, and it's not built to handle rough terrain. Great, until you're on a battlefield where someone fires shells at you. The problem is that very few users and applications are consistently well behaved - unless you spend a lot of time teaching users by making their machine crash when they make mistakes, and selecting applications by rejecting anything that isn't close to perfect.

Users overwhelmingly opted for armour over raw speed and elegance back in the 90's.

It's also true that there are aspects of Linux that could be made faster, or at least *more responsive to user input*. OS X and Windows have the same issue: the OS does not do nearly enough to favour user input over background work. Part of the problem is simply that hardware is "fast enough" to mask most of the effects, and partly that Linux users and developers, like Windows and OS X users, have grown up expecting that this is what computers are like, and don't know any better.

Quote

All the rest of OSes: OSX, Windows, BeOS... are far more integrated, intuitive and usually faster.


I don't know where you get that idea. I use OS X at work. It regularly freezes up for me - I have to reboot at least once a week (vs "never" for my Linux laptop at home). It's slow. It's bloated. Whenever I have to use Windows, I cringe at how horribly it performs on hardware hundreds of times faster than hardware I used to comfortably use Linux on. The only machine I regularly use that performs well runs Ubuntu.

These days Linux sometimes even beats Windows on games performance - an area where it has traditionally lagged because of weaker 3D driver support.

Quote

AmigaOS flavours are already enjoyable out of the box, these are already fast, no need to waste hours tweaking them like Linux.


Are you serious? I spent hundreds of hours tweaking my Amiga systems back in the day, and by modern standards what I ended up with was still primitive compared to what I get out of the box with Linux. I still love Amiga, and the overall feel is still great, but a bare bones AmigaOS system is extremely primitive.

Quote

Amigans enjoy tweaking their systems but it's not mandatory at all.


Perhaps not mandatory, but I would be unable to use one productively for more than five minutes without tweaking it.

Quote

Android apps suddenly die and leave your phone frozen and you have to reboot it. It's funny because Android devices are the perfect example of Linux: these require incredible high amounts of resources to do stuff that would work much better on AmigaOS. And memory protected or not Android apps crash and slow down your phone so much that you have to reboot it. I have to reboot phones with "memory protection" much more often than I have to reboot AmigaOS flavours just for the simple fact that Linux is coded like memory was infinite and never exhausted.


How often do you think what crashed was the Linux kernel as opposed to the Android framework on top? This is like complaining that application X crashed, so Windows is ****.

My bet is the crash is caused by the higher level frameworks 99.999% of the time, given that I have Linux boxes at work that regularly handle far more crap than any of us throw at our phones for 5+ *years* without crashing (or rebooting) a single time. With the right software you can even upgrade the running kernel without rebooting.

Nor is Linux coded as if memory were infinite and never exhausted. It offers fine-grained control. You can choose whether or not to use swap (try disabling it on OS X and you'll see "fun" stuff happen - I tried recently and soon had the kernel using 100% CPU). You can switch off overcommit (in which case applications will get errors when trying to allocate too much, just like on AmigaOS). And if you don't, and the system runs out of memory, the kernel will kill processes to reclaim enough memory to keep the system as a whole running.
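To make the overcommit point concrete, here's a minimal sketch (my own illustration, not from any distro's docs) that reads the standard Linux sysctl controlling this behaviour; the file path and mode values are documented in the kernel's overcommit-accounting documentation:

```python
# Minimal sketch: read Linux's overcommit policy.
# /proc/sys/vm/overcommit_memory is the standard sysctl file;
# it won't exist on non-Linux systems.
from pathlib import Path

def overcommit_mode():
    """Return the overcommit mode, or None when not running on Linux."""
    p = Path("/proc/sys/vm/overcommit_memory")
    if not p.exists():
        return None
    # 0 = heuristic overcommit (the default), 1 = always overcommit,
    # 2 = strict accounting: allocations beyond the commit limit fail,
    #     much like AmigaOS returning failure from an allocation.
    return int(p.read_text())
```

Setting mode 2 (as root, via sysctl) is the closest Linux gets to the AmigaOS behaviour of refusing allocations up front.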

Regardless of these settings, the Linux *kernel* will not crash because you run out of memory. It will kill applications if necessary to keep as much as possible running, and it is very good at it. It may become unresponsive for ages if there is a lot of swap space and applications are poorly written, but the kernel will survive, and the system will almost always be able to recover even in the face of the most brutally abusive application (in terms of memory allocations).

That said, I don't like the default choices (on Windows or OS X either). The irony is that the reason I don't like them today is that I so rarely run into memory limits that running out of memory is almost always a genuine failure - to the point that I now finally prefer the Amiga way of forcing applications to deal with failing memory allocations.
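A sketch of the "Amiga way" I mean - the application checks every allocation and degrades gracefully instead of assuming success (the function names are mine, purely illustrative):

```python
# Illustrative sketch: treat allocation failure as an expected,
# recoverable outcome rather than something that crashes you later.
def try_allocate(n_bytes):
    """Return a buffer of n_bytes, or None if the system refuses."""
    try:
        return bytearray(n_bytes)
    except MemoryError:
        return None

def load_image(size):
    buf = try_allocate(size)
    if buf is None:
        # Degrade gracefully - fall back to something smaller, or
        # tell the user - instead of dereferencing a bad pointer.
        return "out of memory, giving up cleanly"
    return f"loaded {len(buf)} bytes"
```

For example, `load_image(64)` returns "loaded 64 bytes", while an absurdly large request on a strict-accounting system would take the clean failure path instead.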

Quote

Most of stuff that comes with these OSes that take gigabytes of space is rubbish or are 14 outdated GUIs for a cli tool that got recently updated and crashes and burns.


None of which you *need* to install if you don't want it. If you don't want a wide selection built in, get a minimal distro - there are ones that fit in a few MB. Most Linux users see the selection as a *feature*: start the installer, pick the package subsets you want, and have almost every piece of software you need already installed when you're done. But you're not forced to do that. Conversely, I have built Linux installs for embedded devices with 4MB of total storage - it's not hard.

Quote

All in all: when Linux crashes and burns you always end up having to edit weird config files located at random paths, instead of having a GUI emergency boot that boots with basic VGA modes and 640x480 I guess it's much more intuitive for these bearded kernel hackers.


This is about a decade out of date for most situations - or you've tried a distro aimed at hardcore users.

I don't even remember where the X config is on my Ubuntu machine, as I've never had a reason to look at it.

Quote

If I wanted to run all that GNU apps I would run them on a unix environment (but a more modern one that use microkernel, not a monolithicly obsolete one like Linux).

Monolithic kernels are so 70s...


Which "modern unix environment" is it you have in mind? If you're thinking OS X, think again. While Mach (that OS X's Darwin kernel is based on) did eventually evolve into a microkernel system, the Mach version OS X is based on is not.

You could run them on the Hurd - say Debian GNU/Hurd (Debian is mainly a Linux distro, but it also ships an experimental version using the Hurd kernel, as well as a FreeBSD variant).

But as it stands, in almost every respect Linux is the most modern unix environment out there, and most of the "real" Unix versions have died off as Linux has become the de facto standard - for good reasons. The only real alternative with an actual Unix license is Solaris and its derivatives (OpenSolaris, Illumos, SmartOS), but these are almost exclusively used on servers.

Or, I guess, you could run them on Linux/L4 - a port that runs parts of the Linux kernel on top of the L4 nano-kernel. It's even more buzzword-compliant than a microkernel, and has some intriguing properties. In fact, using Linux to kickstart micro-, nano- and exo-kernel projects is pretty much its own research field these days: Linux provides a huge array of device drivers and a lot of other useful code (like filesystems), so many microkernel projects (taking the term to include nano- and exo-kernels) use parts of Linux to bootstrap their systems. There are a number of solutions for "wrapping" various parts of the Linux kernel, such as network or filesystem drivers, so they can run on foreign kernels - or for simply running the entire Linux kernel on top of the research kernel du jour.

It's also worth considering that while the Linux kernel is monolithic, it is not much more monolithic than AmigaOS: the entire kernel runs in the same address space, without kernel threads being protected from each other. But at the same time, you can load and unload kernel modules with the same ease as you can load libraries and device drivers on AmigaOS (or most other modern OSes). It has not been monolithic in the "old" sense - a kernel built as a single unit - since the 90's. The core of the Linux kernel, the bits that absolutely must be built as one unit, is tiny (it would fit nicely in a later Kickstart ROM...).
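For instance, the modules currently loaded into the running kernel are visible in /proc/modules - the same data `lsmod` pretty-prints. A quick sketch, assuming a Linux system:

```python
# Sketch: list loaded kernel modules by reading /proc/modules,
# the file that `lsmod` formats. Returns [] on non-Linux systems.
from pathlib import Path

def loaded_modules():
    p = Path("/proc/modules")
    if not p.exists():
        return []
    # Each line starts with the module name,
    # e.g. "ext4 743424 2 - Live 0x0000000000000000"
    return [line.split()[0]
            for line in p.read_text().splitlines() if line.strip()]
```

Loading and unloading those modules at runtime (via `modprobe` and `rmmod`) is the day-to-day equivalent of opening and closing libraries and devices on AmigaOS.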

There are lots of things about the Linux kernel design I don't like, but it is nothing like a 70's kernel either. The sources for both are available - why don't you take a look?
 

Offline vidarh

  • Sr. Member
  • ****
  • Join Date: Feb 2010
  • Posts: 409
Re: Does Linux have an Amiga feel?
« Reply #15 on: July 18, 2013, 05:02:43 PM »
Quote from: Mrs Beanbag;741225
I don't know what you are talking about here. I have absolutely NEVER heard of client software running through a client-side window manager. How would that even work? Every time you ran software on a different client you'd have to start another window manager on it. And then every window might look and behave differently, which would be insane!


I am running out of the office now, so I can't reply to the rest right now, but what you describe is simply not how X window managers work. An X window manager connects to the X *server* just like any other client, and uses special requests to take over window management. Clients connect to the X *server* as well. Your client doesn't even need to know that a window manager is running.
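A toy model of the relationship (not real X11 code - the class names are mine): the window manager is just one more client that has asked the server to be told about other clients' windows, and ordinary clients never talk to it directly:

```python
# Toy model of the X11 roles described above; not the real protocol.
class XServer:
    """Owns the display. Apps and the window manager both connect here."""
    def __init__(self):
        self.window_manager = None   # at most one WM per screen

    def register_wm(self, wm):
        # Roughly analogous to a WM requesting substructure redirection.
        self.window_manager = wm

    def map_window(self, client, window):
        # The *server* notifies the WM about new windows; the client
        # never needs to know a WM exists at all.
        if self.window_manager is not None:
            self.window_manager.manage(window)

class AppClient:
    """An ordinary client: connects to the server, opens windows."""
    def __init__(self, server):
        self.server = server

    def open_window(self, name):
        self.server.map_window(self, name)

class WindowManager:
    """Just another client, connected to the same server."""
    def __init__(self, server):
        self.managed = []
        server.register_wm(self)

    def manage(self, window):
        self.managed.append(window)   # decorate: borders, title bar, etc.

server = XServer()
wm = WindowManager(server)
app = AppClient(server)       # knows nothing about the WM
app.open_window("terminal")   # the server routes this to the WM
```

This is why you never start "another window manager per client": there is one WM connection to the server, and the server fans window events out to it regardless of which client created the window.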