
Author Topic: Reinvent the OS cont  (Read 2135 times)


guest11527

  • Guest
Re: Reinvent the OS cont
« Reply #74 from previous page: June 28, 2015, 05:39:56 PM »
Quote from: kolla;791761
I can think of quite a few, most notably perhaps is Microsoft. But really, you are totally overstating this "problem", changes between releases are typically small, big changes in sub systems are announced and discussed to death long before they are actually implemented in a release.
Probably you have never been in the position of maintaining a large-scale system. I'm not talking about hobbyist systems here. You cannot simply take "off the shelf" hardware for Linux and hope that just because it runs now, it will continue to run with the next kernel.

Examples? Last week, kernel update, but vmware no longer works. Some obscure call in the network layer of Linux now takes one argument instead of two, where the second argument was always zero. The cost of including a backwards-compatible call that ignores the second argument (which in this particular case is the right approach) would have been close to zero. The cost of patching vmware is (for us) larger, because it includes downtime and man-hours of work.
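The kind of backwards-compatible shim argued for here is cheap to sketch. A conceptual illustration in Python (the function names are invented for illustration, not the actual Linux symbols):

```python
def net_queue_xmit(skb):
    """Hypothetical 'new' call: the flags argument was dropped because
    every in-tree caller always passed 0."""
    return 0 if skb is not None else -1  # pretend to transmit

def net_queue_xmit_compat(skb, flags=0):
    """Backwards-compatible shim keeping the old two-argument signature
    alive for out-of-tree callers. The flags argument, historically
    always zero, is simply ignored."""
    return net_queue_xmit(skb)

# Old callers keep working unchanged:
assert net_queue_xmit_compat({"payload": b"hi"}, 0) == 0
# New in-tree callers use the slimmer interface:
assert net_queue_xmit({"payload": b"hi"}) == 0
```

The point is the cost asymmetry: the shim is a couple of lines maintained once, while every out-of-tree module broken by the signature change pays in downtime and engineering hours.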

Another example? Apparently, some kernel call the NVIDIA driver needs is no longer in place, so if you update your kernel, graphics no longer work, and re-compiling the NVIDIA driver fails. There is a new driver, but it has other bugs that were not fixed. This is "annoying" for a personal user because the procedure for updating such drivers is more complex than on other systems, but it causes real costs on large-scale systems.
Quote from: kolla;791761
So your company wants to have drivers in the kernel, but you are too cheap and/or lazy to actually maintain them?
I don't have a company, but I work in a computing center. Even if I had a company, I would prefer to sell products that continue to work for ten years with drivers unchanged, and have the operating system maintainer take care of it, rather than continuously invest money in a product that no longer creates any profit because it is no longer sold. Somehow the engineers who do all the patching and cleanup behind the kernel hackers have to be paid, too. You seem to forget that.

I'm not a fan of Windows and I don't use it privately, but today I can take a driver from Windows Vista (probably ten years old), install it under Windows 8.1, and it works. You can think of this what you like, but if you ask me, it shows at least that MS cares about its products and keeps the interface stable. That creates an ecosystem which makes it worthwhile for hardware vendors to invest: sell a product once and keep consumers happy. Of course MS makes its customers pay for this, but one way or another, nothing is free.

For Linux, you pay in service time, because you need somebody to clean up behind the kernel hackers who constantly change interfaces. For Windows, you pay for the product. For an enterprise, it is often simpler and easier to just pay for the product and get the warranty that it continues to run. For Linux, this model also exists in the form of support contracts (e.g. from SuSE, the one we have), but my overall impression is that even then you have to invest more time when you upgrade SuSE to the next release, because there are so many dependencies between the installations that you cannot simply carry your software over. Basically, you have to replace the entire software layer with a new release.

Open source and closed source are just two different models. You pay for either, one way or another. Closed source is often easier for the user, and not necessarily more expensive, depending on how you value your time. If I run my Linux system in my spare time, that's all fine for me. But if I had to write invoices to my boss for that, it would be a quite expensive system.
Quote from: kolla;791761
That unmaintained code stops working is a good thing, good riddance.

Actually, no. It rather means that the OS "vendor" does not have the goal of creating a commercial ecosystem that hardware vendors could invest in. That is pretty much the reason why open source never reached the mainstream. Unixoid systems are successful where the vendor takes care of the system itself, uses a much stricter development model, and couples hardware and OS much more tightly than in the PC market. For example, see Android: that's "Linux with a Windows business model". Because the OS vendor and the hardware vendor are identical, or strongly dependent on each other, there is no problem.
 

Offline Iggy

  • Hero Member
  • *****
  • Join Date: Aug 2009
  • Posts: 5348
Re: Reinvent the OS cont
« Reply #75 on: June 28, 2015, 05:55:49 PM »
Quote from: kolla;791762
Linux, for some strange reason, runs on just about anything, despite these so-called flaws that you point out, flaws that btw you pretty much have made up in your fantasies.

Unless you show the world a better way yourself, you are in no position to question the software engineering skills of Torvalds, whose software projects reach worldwide, affecting almost everyone's lives at this point.

Actually, I feel I have EVERY right to question Torvalds, since he merely ripped off UNIX, a project that took Bell Labs and universities across the United States years to create.
And his position on microkernel OSes is asinine.
I've noticed that as Linux has moved forward, there has been an attempt to absorb features from better-structured OSes, like the ability to dynamically load and unload some drivers.
But, for the most part, it's still an oversized, poorly structured mess.
Oh, and those devices you think I'm benefiting from?
Experience with Android may be the one thing that could drive me to buy a new Apple product.
The only "fantasy" I have is that all the deluded fanboys of this crap wake up.
There is a reason it's free; most people would not pay for crap like that.

BTW - I've actually had a hand in creating real hardware and software, so attacking my ego won't work. My opinions come from real experience.
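The "dynamically load and unload drivers" idea mentioned above can be shown with a toy registry; this is a conceptual sketch with invented names, not any real kernel's module API:

```python
# name -> callable implementing the driver's "read" operation
drivers = {}

def register_driver(name, read_op):
    """'Module load': make a driver available at runtime."""
    if name in drivers:
        raise ValueError(f"driver {name!r} already loaded")
    drivers[name] = read_op

def unregister_driver(name):
    """'Module unload': the driver code can go away entirely."""
    drivers.pop(name, None)

def device_read(name):
    """Dispatch through whatever driver is currently loaded."""
    if name not in drivers:
        raise LookupError(f"no driver loaded for {name!r}")
    return drivers[name]()

register_driver("ramdisk", lambda: b"data from ramdisk")
assert device_read("ramdisk") == b"data from ramdisk"

unregister_driver("ramdisk")
try:
    device_read("ramdisk")
except LookupError:
    pass  # accessing an unloaded driver fails cleanly instead of crashing
```

The design point is that only drivers actually in use occupy memory, and the dispatch table, not the kernel image, decides what is present at any moment.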
« Last Edit: June 29, 2015, 12:46:01 AM by Iggy »
"Not making any hard and fast rules means that the moderators can use their good judgment in moderation, and we think the results speak for themselves." - Amiga.org, terms of service

"You got to stem the evil tide, and keep it on the inside" - Roger Waters

"God was never on your side" - Lemmy

Amiga! "Our appeal has become more selective"
 

Offline Karlos

  • Sockologist
  • Global Moderator
  • Hero Member
  • *****
  • Join Date: Nov 2002
  • Posts: 16867
  • Country: gb
  • Thanked: 4 times
Re: Reinvent the OS cont
« Reply #76 on: June 29, 2015, 04:10:32 AM »
Kernel.

That is all. ;-)

Quote
Actually, I feel I have EVERY right to question Torvalds, since he merely ripped off UNIX, a project that took Bell Labs and universities across the United States years to create

Sounds like revisionist history combined with kernel/entire OS conflation to me. Unix is a complete OS.

Torvalds wrote a kernel, initially as an experiment, which was subsequently used for GNU since the intended kernel for the latter was not (and in truth still is not) ready.

GNU is the "rip-off" of Unix that you are thinking of, not Torvalds' work. And kudos to Stallman and co. for doing it; without their work, computing would be far more closed, restrictive and expensive than it is.
« Last Edit: June 29, 2015, 04:24:10 AM by Karlos »
int p; // A
 

Offline KimmoK

  • Sr. Member
  • ****
  • Join Date: Jun 2004
  • Posts: 319
Re: Reinvent the OS cont
« Reply #77 on: June 29, 2015, 06:54:30 AM »
Quote from: Iggy;791755
...Carve it down to its minimum, and move anything unnecessary to loadable modules.
It would have lots of advantages, but Torvalds doesn't understand this.
And THAT makes me distrust his skill as a software engineer.

At least Torvalds is a higher-caliber OS talent than Gates. ;-)
(Gates is better at business/making money.)

20 years ago I noticed that Torvalds did not think the Amiga had proper multitasking, because applications can disable task switching as they wish. Sure, it is not ideal, most likely because Carl S. did not have time to implement other means of handling some things.

Torvalds took advantage of the fast pace of CPU development instead of trying to implement a small, fast, responsive and efficient OS. He succeeded better than anyone else at competing with M$ in generic mainstream systems, and he was far ahead of M$ in 64-bit, SMP, and more.

If Torvalds had had a big-box Amiga + RAM + HDD when he was a kid... who knows what would have happened...

If we one day have all the modern key features in our OS... perhaps Linus might help us with the Linux/POSIX sandbox build.... :crazy:

+++
It seems Linus is pretty unhappy with the desktop; check "Desktop environment criticism":
https://en.wikipedia.org/wiki/Linus_Torvalds
(It seems I'm not the only one who is unhappy with the Linux desktop evolution of the past few years...)
++++
I did not know Linus was/is a PowerPC fan!
"In 2008, Torvalds stated that he used the Fedora distribution of Linux because it had fairly good support for the PowerPC processor architecture, which he had favoured at the time."
(or perhaps he just happened to buy PowerPC Apple HW)
« Last Edit: June 29, 2015, 11:26:46 AM by KimmoK »
- KimmoK
// Windows will never catch us now.
// The multicolor AmigaFUTURE IS NOW !! :crazy:
 

Offline agami

  • Sr. Member
  • ****
  • Join Date: Sep 2010
  • Posts: 320
  • Country: au
  • Gender: Male
Re: Reinvent the OS cont
« Reply #78 on: June 29, 2015, 09:15:34 AM »
Any reinvention of the Amiga OS should not use/copy anything from Linux. You will never find a more wretched hive of clump and complexity.

I use it for servers. When I did use it for desktops, I spent too much time in Linux forums instead of actually being productive. I look forward to trying SteamOS; perhaps the folks at Valve have abstracted away all the crap I don't want to deal with.
---------------AGA Collection---------------
1) Amiga A4000 040 40MHz, Mediator PCI, Voodoo 3 3000, Creative PCI128, Fast Ethernet, Indivision AGA Mk2 CR, DVD/CD-RW, OS 3.9 BB2
2) Amiga A1200 040 25MHz, Indivision AGA Mk2 CR, IDEfix, PCMCIA WiFi, slim slot load DVD/CD-RW, OS 3.9 BB2
3) Amiga CD32 + SX1, OS 3.1
 

Offline Iggy

  • Hero Member
  • *****
  • Join Date: Aug 2009
  • Posts: 5348
Re: Reinvent the OS cont
« Reply #79 on: June 29, 2015, 12:53:32 PM »
Quote from: agami;791797
Any reinvention of the Amiga OS should not use/copy anything from Linux. You will never find a more wretched hive of clump and complexity.

I use it for servers. When I did use it for desktops, I spent too much time in Linux forums instead of actually being productive. I look forward to trying SteamOS; perhaps the folks at Valve have abstracted away all the crap I don't want to deal with.

"wretched hive of clump and complexity" - Nice turn of phrase, I like it.

OK, now to address the two prior posts. Kolla, Karlos, I know you both use Linux, and it is not my intent to disturb your "universal scheme of things".
But Torvalds' kernel is intended to be the core of a UNIX clone.
This HAS resulted in something that at times definitely rises above what Microsoft has managed to do.
Could it be a lot better?
Considering its origin, probably not.
But if your argument is that it's better than the alternatives, well, that's a tough argument to counter.
Still, we have been retreading ideas with roots over forty years old.

With the hardware resources currently at hand, it's easier to generate oversized, inefficient code than to create the tighter work our much more limited hardware encouraged in the past.

I've mentioned Xenix in this thread once before, because Linux REALLY reminds me of it:
large, cumbersome, and incapable of true real-time response.

Finally, this was started as a discussion on the merits of recreating the OS.
I didn't intend for it to be subverted into a discussion on the merits of Linux or any other OS.
It has just been my contention that we could do better by avoiding a monolithic kernel.
« Last Edit: June 29, 2015, 12:58:20 PM by Iggy »
"Not making any hard and fast rules means that the moderators can use their good judgment in moderation, and we think the results speak for themselves." - Amiga.org, terms of service

"You got to stem the evil tide, and keep it on the inside" - Roger Waters

"God was never on your side" - Lemmy

Amiga! "Our appeal has become more selective"
 

Offline Fats

  • Hero Member
  • *****
  • Join Date: Mar 2002
  • Posts: 672
Re: Reinvent the OS cont
« Reply #80 on: June 29, 2015, 01:42:10 PM »
Quote from: Thomas Richter;791754
You forget one thing. Even if I release the source code, it means that the source requires continuous updating and maintenance to keep it working with the kernel. Now, which company wants to make this commitment, namely not only to release the source, but also to employ engineers who continuously update it whenever the kernel interface changes - i.e. with every release? Using the passive here means that you don't know who maintains it. It is like throwing source code over a wall and hoping somebody picks it up, and that this "somebody" has enough knowledge and experience to do a good job. That is a problem for professional software engineering.

If you are lucky, you do not need further commitment. If you are not so lucky, your driver will stop working with the next kernel release and your investment is lost.


True that just releasing the source code is not enough; you need to do the job of getting your code up to kernel standards and upstreaming it. But this is a one-time effort, and from then on your driver will be maintained by the kernel devs; if some internal ABI changes, your driver will be updated if needed before the ABI change is included in a release kernel.

Quote from: Thomas Richter;791770
Examples? Last week, kernel update, but vmware no longer works. Some obscure call in the network layer of Linux now takes one argument instead of two, where the second argument was always zero. The cost of including a backwards-compatible call that ignores the second argument (which in this particular case is the right approach) would have been close to zero. The cost of patching vmware is (for us) larger, because it includes downtime and man-hours of work.

Another example? Apparently, some kernel call the NVIDIA driver needs is no longer in place, so if you update your kernel, graphics no longer work, and re-compiling the NVIDIA driver fails. There is a new driver, but it has other bugs that were not fixed. This is "annoying" for a personal user because the procedure for updating such drivers is more complex than on other systems, but it causes real costs on large-scale systems. I don't have a company, but I work in a computing center. Even if I had a company, I would prefer to sell products that continue to work for ten years with drivers unchanged, and have the operating system maintainer take care of it, rather than continuously invest money in a product that no longer creates any profit because it is no longer sold. Somehow the engineers who do all the patching and cleanup behind the kernel hackers have to be paid, too. You seem to forget that.


These are two examples of companies that have chosen not to do the right thing and upstream their drivers: they use other people's open source code but do not want to open their own. Yes, in that case they are on their own, and it means you should stick to the configurations those companies support.

Quote from: Thomas Richter;791770
It rather means that the Os "vendor" does not have the goal of creating a commercial ecosystem that would allow hardware vendors to invest into.


Sure, the core kernel devs are not marketing-driven; one can debate whether that is good or bad. On the other hand, Red Hat seems to be doing pretty well at commercializing Linux. Recently, long-term-supported older kernels have also been made more official.

Quote from: Thomas Richter;791770
Which is pretty much the problem why open source never reached mainstream. Unixoid systems are successful where its vendor keeps care about it itself and uses a much stricter development model, and where hardware and Os are much more coupled than in the PC market.


As if PC drivers don't cause problems. The number of people doing PC support in our company is much bigger than for Linux, and we are doing EDA, so Linux is not a minority over here. Some of those PC support people do nothing other than test drivers, and even user-land software, to see that they don't cause problems before rolling them out company-wide.
Trust me...                                              I know what I'm doing
 

guest11527

  • Guest
Re: Reinvent the OS cont
« Reply #81 on: June 29, 2015, 04:30:17 PM »
Quote from: Fats;791808
True that just releasing the source code is not enough; you need to do the job of getting your code up to kernel standards and upstreaming it. But this is a one-time effort, and from then on your driver will be maintained by the kernel devs; if some internal ABI changes, your driver will be updated if needed before the ABI change is included in a release kernel.
That's wishful thinking. In reality, if there is nobody testing the kernel module and checking its correctness, including the API to your hardware, chances are better than even that it breaks, even if it compiles.

Examples? I'm currently trying to get a couple of ancient Intel 830M graphics-based laptops working. They used to work with the i810 Intel X11 driver. That driver was made obsolete and replaced by the intel driver, which requires a kernel module. This kernel module failed to support the i830 for a long time, so I could only run VESA on these laptops. The kernel devs, most notably the folks from Intel, made a huge effort to re-create a kernel driver from scratch, though it never entirely worked due to some exotic DVO chips present in the mentioned laptops. I committed myself to fixing this up, and kernel 3.19.x included the necessary fixes for the Intel DVO and a NatSemi DVO, plus workarounds for some BIOS/ACPI bugs of the machines. Only to find that in 4.1.x, i830 support is broken again, simply because the kernel devs do not have such ancient machines themselves. So I have to test the code again and find the bugs, introduced by modifying code that, quite frankly, would better have been put into a separate kernel module and left alone. Yes, one unique driver for all Intel chipset graphics might be nice, clean and lean, but as this example shows, it is not a practical approach.

You have to continuously test the code, file bug reports, and help the devs keep it working. Unlike what you say, it is *not* a one-time commitment.
 

Offline psxphill

Re: Reinvent the OS cont
« Reply #82 on: June 30, 2015, 05:09:20 PM »
Quote from: Thomas Richter;791814
You have to continuously test the code, file bug reports, and help the devs keep it working. Unlike what you say, it is *not* a one-time commitment.

Exactly the problem we are already seeing with Android phones.

If you're lucky, a teenager has figured out how to hack together all the different cherry-picks and has cobbled something together that barely works. Otherwise you're stuck with the OS that shipped on the phone and can't run new apps.
 

Offline Iggy

  • Hero Member
  • *****
  • Join Date: Aug 2009
  • Posts: 5348
Re: Reinvent the OS cont
« Reply #83 on: July 02, 2015, 12:55:48 AM »
So, considering Thomas' issues, is anyone ready to admit that I might be right and that drivers don't belong inside the kernel?
I think these issues clearly illustrate one of the major weaknesses of a monolithic kernel.
Beyond that, the ever-changing kernel makes every new revision a chore to adapt to.
This is far different in a microkernel OS, where the core processes are relatively stable and the external modules are well documented.
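The isolation claim can be illustrated without any real microkernel: below, an ordinary user-space process stands in for a driver and talks to the "core" over a pipe. This is a minimal sketch assuming a Unix-like host, with an invented message protocol; a fault kills only the driver process, and the core can restart it.

```python
from multiprocessing import Process, Pipe

def driver(conn):
    """Stand-in for a user-space driver server: answer requests until told to stop."""
    while True:
        req = conn.recv()
        if req == "crash":
            raise RuntimeError("simulated driver bug")
        if req == "stop":
            return
        conn.send(("ok", req.upper()))  # pretend to do device work

def start_driver():
    parent_end, child_end = Pipe()
    proc = Process(target=driver, args=(child_end,))
    proc.start()
    return proc, parent_end

proc, conn = start_driver()
conn.send("read")
assert conn.recv() == ("ok", "READ")

conn.send("crash")          # the driver dies...
proc.join()
assert proc.exitcode != 0   # ...but only the driver process

proc2, conn2 = start_driver()  # the "core" simply restarts it
conn2.send("read")
assert conn2.recv() == ("ok", "READ")
conn2.send("stop")
proc2.join()
```

In a monolithic kernel the same fault would have run in kernel space and could take the whole system down; the price of the separation is the message-passing overhead.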
"Not making any hard and fast rules means that the moderators can use their good judgment in moderation, and we think the results speak for themselves." - Amiga.org, terms of service

"You got to stem the evil tide, and keep it on the inside" - Roger Waters

"God was never on your side" - Lemmy

Amiga! "Our appeal has become more selective"
 

Offline KimmoK

  • Sr. Member
  • ****
  • Join Date: Jun 2004
  • Posts: 319
Re: Reinvent the OS cont
« Reply #84 on: July 02, 2015, 06:52:24 AM »
Quote from: Iggy;791934
So, considering Thomas' issues, is anyone ready to admit that I might be right and that drivers don't belong inside the kernel....

It has always been clear to me. ;-)

(+ modern OSes come with 2 GB of drivers, but not all of them run all the time even when the kernel is monolithic, so I personally need to learn how it's done on those monster OSes)

Is there a study anywhere of how monolithic vs. true microkernel designs perform on modern HW...?
During ancient times, monolithic designs performed better.
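One way to see where that old performance gap came from: in a monolithic kernel a driver call is a plain function call, while in a microkernel it is a message round trip between address spaces, with context switches. A rough sketch, with Python processes standing in for kernel and driver (the absolute numbers are meaningless, only the ratio matters):

```python
import time
from multiprocessing import Process, Pipe

def direct_driver(x):
    return x + 1            # stand-in for an in-kernel function call

def driver_server(conn):
    while True:
        x = conn.recv()
        if x is None:
            return
        conn.send(x + 1)    # same work, but behind IPC

N = 2000

t0 = time.perf_counter()
for i in range(N):
    direct_driver(i)
direct_s = time.perf_counter() - t0

parent_end, child_end = Pipe()
proc = Process(target=driver_server, args=(child_end,))
proc.start()
t0 = time.perf_counter()
for i in range(N):
    parent_end.send(i)
    parent_end.recv()       # wait for the "driver" reply
ipc_s = time.perf_counter() - t0
parent_end.send(None)
proc.join()

assert ipc_s > direct_s     # the message round trips dominate
```

Whether that overhead still matters on modern hardware, with the fast IPC paths of kernels like QNX or seL4, is exactly the open question; the raw ratio is large, but driver calls are rarely the bottleneck.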


And what if the kernel is able to update itself on the fly, as on QNX?
(I suppose it does not change anything regarding driver usage.)

++
Just spotted from mzone... it seems the Linux kernel now has 20 million lines of code.
Now considering that the original AmigaOS had only 1 million lines of code... WOW.

As an old-school programmer, I tend to think that the more lines of code you have, the more bugs you most likely have.
So, by that theory, the whole of AmigaOS should have fewer bugs than the Linux kernel alone.
Then I imagine... on Linux those 20 million lines run in kernel space (no memory protection between them), vs. 1 million lines for the total AmigaOS...

(I know it's not that simple, and perhaps there are 15 million testing-related lines in the Linux kernel. ;) )
« Last Edit: July 02, 2015, 10:51:49 AM by KimmoK »
- KimmoK
// Windows will never catch us now.
// The multicolor AmigaFUTURE IS NOW !! :crazy:
 

Offline Iggy

  • Hero Member
  • *****
  • Join Date: Aug 2009
  • Posts: 5348
Re: Reinvent the OS cont
« Reply #85 on: July 02, 2015, 04:25:06 PM »
Quote from: KimmoK

...Is there a study anywhere of how monolithic vs. true microkernel designs perform on modern HW?...

...During ancient times, monolithic designs performed better...

...Just spotted from mzone... it seems the Linux kernel now has 20 million lines of code...

...As an old-school programmer, I tend to think that the more lines of code you have, the more bugs you most likely have...


Actually, it has always been my experience that microkernel OSes have significantly better real-time response. And it's an obvious fact that monolithic kernels are far too large.
"Not making any hard and fast rules means that the moderators can use their good judgment in moderation, and we think the results speak for themselves." - Amiga.org, terms of service

"You got to stem the evil tide, and keep it on the inside" - Roger Waters

"God was never on your side" - Lemmy

Amiga! "Our appeal has become more selective"