
Author Topic: NetSurf OS3.x Issues  (Read 40990 times)


Offline utri007

Re: NetSurf OS3.x Issues
« Reply #224 from previous page: February 27, 2016, 02:04:04 PM »
I'll spend this evening with beer and friends, so tomorrow evening is next test time for me.
ACube Sam 440ep Flex 800mhz, 1gb ram and 240gb hd and OS4.1FE
A1200 Micronic tower, OS3.9, Apollo 060 66mhz, xPert Merlin, Delfina Lite and Micronic Scandy, 500Gb hd, 66mb ram, DVD-burner and WLAN.
A1200 desktop, OS3.9, Blizzard 060 66mhz, 66mb ram, Ide Fix Express with 160Gb HD and WLAN
A500 OS2.1, GVP+HD8 with 4mb ram, 1mb chip ram and 4gb HD
Commodore CDTV KS3.1, 1mb chip, 4mb fast ram and IDE HD
 

Offline jennadk

  • Newbie
  • Join Date: Mar 2015
  • Posts: 26
Re: NetSurf OS3.x Issues
« Reply #225 on: February 27, 2016, 02:37:38 PM »
Am I correct in assuming this version requires an FPU?

Edit: I don't have one and simply get the error "8000000B" when trying to start it. (Amiga 1200 / ACA1231 / 3.1 ROMs / OS3.9BB4)
________________________________________________
Amiga 1200 ACA1231|IDE2CF|16GB CF|Indivision|3ComLAN
Morphos Mini G4 1.4Ghz|1GB RAM
 

Offline wawrzon

Re: NetSurf OS3.x Issues
« Reply #226 on: February 27, 2016, 05:42:52 PM »
Quote from: chris;804753
OK, this is more like I would expect - everything running consistently at a slower speed.

I know exactly what is causing the slowdown now.  clib2 uses memory pools with a puddle size and expected allocation size of 4K.  I modified that in the newer build to use normal memory allocations instead.

What is happening is that early memory allocations are fast and efficiently allocated in 4K chunks.  Then, when bits of memory are de-allocated, holes are left behind.  When new memory blocks are allocated, the OS - and this is where I'm not sure of the implementation details - tries to fill in the gaps in the already-allocated pools?  With a lot of pools it may take some time to search through them and find a gap of the correct size, which is similar to how normal memory allocations work when searching through all of RAM (and thus a similar speed).

Quite simply, we are allocating and de-allocating so much memory that we quickly lose any advantage of memory pools.

To fix it... well, that's tricky.  The correct way would be to pool together elements of the same size to avoid fragmentation, but I can't do that in the core and all libraries without re-writing all the memory allocations (which would definitely not be popular).  Note I already do this in the frontend everywhere it is practical (this was one of my earlier OS3 optimisation attempts!)

It may simply be a case of making the memory pools bigger, and I will try that first.

It's pretty much what I imagined happening. Okay, I would expect the system to take care of it, but perhaps it's worth trying whether it can be tweaked in the application code.
 

Offline chris

Re: NetSurf OS3.x Issues
« Reply #227 on: February 27, 2016, 05:54:37 PM »
Quote from: jennadk;804762
Am I correct in assuming this version requires an FPU?

Edit: I don't have one and simply get the error "8000000B" when trying to start it. (Amiga 1200 / ACA1231 / 3.1 ROMs / OS3.9BB4)


It *shouldn't*; certainly early versions were working without an FPU, but recent ones I can't run either (although I've not tried for a while).

It's built with soft float, but maybe not everywhere.
"Miracles we do at once, the impossible takes a little longer" - AJS on Hyperion
Avatar picture is Tabitha by Eric W Schwartz
 

Offline utri007

Re: NetSurf OS3.x Issues
« Reply #228 on: February 27, 2016, 10:37:26 PM »
Chris's version should work on every Amiga with AGA and enough memory.
ACube Sam 440ep Flex 800mhz, 1gb ram and 240gb hd and OS4.1FE
A1200 Micronic tower, OS3.9, Apollo 060 66mhz, xPert Merlin, Delfina Lite and Micronic Scandy, 500Gb hd, 66mb ram, DVD-burner and WLAN.
A1200 desktop, OS3.9, Blizzard 060 66mhz, 66mb ram, Ide Fix Express with 160Gb HD and WLAN
A500 OS2.1, GVP+HD8 with 4mb ram, 1mb chip ram and 4gb HD
Commodore CDTV KS3.1, 1mb chip, 4mb fast ram and IDE HD
 

Offline jennadk

  • Newbie
  • Join Date: Mar 2015
  • Posts: 26
Re: NetSurf OS3.x Issues
« Reply #229 on: February 27, 2016, 11:41:38 PM »
Fair enough; oddly, if I launch from the CLI it hangs rather than crashes. Specifically, it hangs here (at my 64-color AGA screen being detected):

global_history_init: Loaded global history
(193.284900) content/llcache.c:1355 llcache_process_metadata: Retriving metadata
(193.413112) content/fs_backing_store.c:889 get_store_entry: url:http://www.google.com/favicon.ico
(193.570131) content/fs_backing_store.c:897 get_store_entry: Failed to find ident 0x634ae90c in index
(193.727449) content/fs_backing_store.c:1923 fetch: entry not found
(193.839826) content/fetchers/curl.c:280 fetch_curl_setup: fetch 0x8e6d558, url 'http://www.google.com/favicon.ico'
(194.015665) amiga/font.c:54 ami_font_setdevicedpi: WARNING: Using diskfont.library for text. Forcing DPI to 72.
(194.182763) amiga/plotters.c:102 ami_init_layers: Screen depth = 6

Also, for the installer script to run I have to rename the readme to get rid of the "_os3" line. Thank you so much for all your efforts!
________________________________________________
Amiga 1200 ACA1231|IDE2CF|16GB CF|Indivision|3ComLAN
Morphos Mini G4 1.4Ghz|1GB RAM
 

Offline apj

Re: NetSurf OS3.x Issues
« Reply #230 on: February 28, 2016, 10:09:30 AM »
@Chris
I think I've found out why my libnix version is not getting the clib2 slowdown.
I've based it on libnix 3.0 by Diego Casorran, who made some improvements to the memory operations code.

Changes between libnix 2.1 and 3.0:
* stdlib/realloc.c: Changed CopyMem() to bcopy()
* string/memchr.c: Speed Improvement
* string/memcmp.c: Speed Improvement
* string/memcpy.c: Replaced CopyMem() by internal bcopy()
* string/memset.c: Optimized small operations
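As a rough illustration of what "optimized small operations" tends to mean in practice (this is not Diego Casorran's actual libnix code, just a portable-C sketch of the technique): a memset() can handle short fills with a plain byte loop and switch to wider, aligned stores only when the length justifies the setup cost.

```c
#include <stddef.h>
#include <stdint.h>

void *my_memset(void *dest, int c, size_t n)
{
    unsigned char *d = dest;
    unsigned char byte = (unsigned char)c;

    /* tiny fills: the alignment setup below would cost more than it saves */
    if (n < 16) {
        while (n--)
            *d++ = byte;
        return dest;
    }

    /* advance to 4-byte alignment (removes at most 3 bytes, n >= 16 here) */
    while (((uintptr_t)d & 3) != 0) {
        *d++ = byte;
        n--;
    }

    /* bulk fill with 32-bit stores */
    uint32_t word = byte * 0x01010101u;  /* replicate byte into all 4 lanes */
    uint32_t *w = (uint32_t *)d;
    while (n >= 4) {
        *w++ = word;
        n -= 4;
    }

    /* trailing bytes */
    d = (unsigned char *)w;
    while (n--)
        *d++ = byte;
    return dest;
}
```

On a 68000-class CPU the threshold and store width would need tuning, but the shape of the optimisation is the same.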

Offline chris

Re: NetSurf OS3.x Issues
« Reply #231 on: February 28, 2016, 11:34:51 AM »
Quote from: jennadk;804782
Fair enough; oddly, if I launch from the CLI it hangs rather than crashes. Specifically, it hangs here (at my 64-color AGA screen being detected):

global_history_init: Loaded global history
(193.284900) content/llcache.c:1355 llcache_process_metadata: Retriving metadata
(193.413112) content/fs_backing_store.c:889 get_store_entry: url:http://www.google.com/favicon.ico
(193.570131) content/fs_backing_store.c:897 get_store_entry: Failed to find ident 0x634ae90c in index
(193.727449) content/fs_backing_store.c:1923 fetch: entry not found
(193.839826) content/fetchers/curl.c:280 fetch_curl_setup: fetch 0x8e6d558, url 'http://www.google.com/favicon.ico'
(194.015665) amiga/font.c:54 ami_font_setdevicedpi: WARNING: Using diskfont.library for text. Forcing DPI to 72.
(194.182763) amiga/plotters.c:102 ami_init_layers: Screen depth = 6

I might have to add some extra debug output to track this down, but first: are you using the version from my test link (ntlworld) or from Aminet?  If the latter, please try the test version instead.

Secondly, try opening your Choices file (in Users/your-username) and adding:
friend_bitmap:1
Or change the value to 0 if it is already present.

Edit: is this the 8000000B error?

Quote
Also, for the installer script to run I have to rename the readme to get rid of the "_os3" line. Thank you so much for all your efforts!

Thanks, fixed.
« Last Edit: February 28, 2016, 11:48:15 AM by chris »
"Miracles we do at once, the impossible takes a little longer" - AJS on Hyperion
Avatar picture is Tabitha by Eric W Schwartz
 

Offline olsen

Re: NetSurf OS3.x Issues
« Reply #232 on: February 28, 2016, 02:07:41 PM »
Quote from: chris;804753
OK, this is more like I would expect - everything running consistently at a slower speed.

I know exactly what is causing the slowdown now.  clib2 uses memory pools with a puddle size and expected allocation size of 4K.  I modified that in the newer build to use normal memory allocations instead.

What is happening is that early memory allocations are fast and efficiently allocated in 4K chunks.  Then, when bits of memory are de-allocated, holes are left behind.  When new memory blocks are allocated, the OS - and this is where I'm not sure of the implementation details - tries to fill in the gaps in the already-allocated pools?  With a lot of pools it may take some time to search through them and find a gap of the correct size, which is similar to how normal memory allocations work when searching through all of RAM (and thus a similar speed).

Quite simply, we are allocating and de-allocating so much memory that we quickly lose any advantage of memory pools.

To fix it... well, that's tricky.  The correct way would be to pool together elements of the same size to avoid fragmentation, but I can't do that in the core and all libraries without re-writing all the memory allocations (which would definitely not be popular).  Note I already do this in the frontend everywhere it is practical (this was one of my earlier OS3 optimisation attempts!)

It may simply be a case of making the memory pools bigger, and I will try that first.
I suspect that this may not make much of a difference. The memory pools, which are what the malloc()/alloca()/realloc()/free() functions in clib2 are built upon, were intended to avoid fragmenting main memory. This is accomplished by having all allocations smaller than the preset puddle size draw from a puddle that still has enough room left for them to fit. Fragmentation happens inside that puddle.

The problems begin when the degree of fragmentation inside these puddles becomes so high that the only recourse is to allocate more puddles and allocate memory from that. The number of puddles in use increases over time, and when you try to allocate more memory, the operating system has to first find a puddle that still has room and then try to make the allocation work. Both these operations take more time the more puddles are in play, and the higher the fragmentation within these puddles is. Allocating memory will scale poorly, and what goes for allocations also goes for deallocations.

The other problem is with memory allocations whose length exceeds the puddle size. These allocations will be drawn from main memory rather than from the puddles. This will likely increase main memory fragmentation somewhat, but the same problems that exist with the puddles apply to main memory, too: searching for a chunk to draw the allocation from takes time, and the same goes when deallocating that chunk. There's an additional burden on this procedure because the memory pool has to keep track of that "larger than puddle size" allocation, too.

Because all the memory chunk/puddle, etc. allocations and deallocations use the humble doubly-linked Exec list as their fundamental data structure, the amount of time spent finding the right memory chunk, and putting the fragments back together, scales poorly. Does this sound familiar?

From the clib2 side I'm afraid that the library can only leverage what the operating system provides, and that is not well-suited for applications which have to juggle a large number of allocated memory fragments.

The question is what size of memory chunk is common for NetSurf, how many chunks are in play, and how large they are. If you have not yet implemented it, you might want to add a memory allocation debugging layer and collect statistics from it over time.

It may be worth investigating how the NetSurf memory allocations could be handled by an application-specific, custom memory allocator that sits on top of what malloc()/alloca()/realloc()/free() can provide and which should offer better scalability.
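A minimal sketch of the kind of allocation-statistics layer Olsen describes, in portable C. All names here (stats_alloc(), stats_report(), the power-of-two buckets) are invented for illustration; NetSurf has no such layer, and a real one would also want to wrap realloc() and record peak counts.

```c
/* Hypothetical allocation-statistics layer: wraps malloc()/free()
 * to build a histogram of allocation sizes, so you can see which
 * chunk sizes the application actually uses. */
#include <stdio.h>
#include <stdlib.h>

#define BUCKETS 12  /* power-of-two size classes: <=16, <=32, ... */

static unsigned long alloc_count[BUCKETS];
static unsigned long live_allocs;

static int size_bucket(size_t size)
{
    int b = 0;
    size_t limit = 16;
    while (b < BUCKETS - 1 && size > limit) {
        limit <<= 1;
        b++;
    }
    return b;
}

void *stats_alloc(size_t size)
{
    /* store the size just before the returned block so that
     * a real stats_free() could account for deallocations too */
    size_t *block = malloc(sizeof(size_t) + size);
    if (block == NULL)
        return NULL;
    *block = size;
    alloc_count[size_bucket(size)]++;
    live_allocs++;
    return block + 1;
}

void stats_free(void *ptr)
{
    if (ptr == NULL)
        return;
    live_allocs--;
    free((size_t *)ptr - 1);
}

void stats_report(void)
{
    size_t limit = 16;
    for (int b = 0; b < BUCKETS; b++) {
        printf("<= %6zu bytes: %lu allocations\n", limit, alloc_count[b]);
        limit <<= 1;
    }
    printf("live allocations: %lu\n", live_allocs);
}
```

Logging the histogram once per page load would show whether the traffic is dominated by small structures, cache blocks, or both.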
 

Offline itix

  • Hero Member
  • Join Date: Oct 2002
  • Posts: 2380
Re: NetSurf OS3.x Issues
« Reply #233 on: February 28, 2016, 05:44:13 PM »
Quote from: olsen;804804

The question is what size of memory chunk is common for NetSurf, how many chunks are in play, and how large they are. If you have not yet implemented it, you might want to add a memory allocation debugging layer and collect statistics from it over time.

It may be worth investigating how the NetSurf memory allocations could be handled by an application-specific, custom memory allocator that sits on top of what malloc()/alloca()/realloc()/free() can provide and which should offer better scalability.


I would just take a shortcut and install TLSFMem: http://dump.platon42.de/files/

Other than that, there is no real solution. Designing a good memory allocator is an art of its own, where one has to consider memory fragmentation, allocation performance, and deallocation performance.

Yeah, I read on the previous page that NetSurf is crashing with TLSFMem, but this is very likely due to internal memory trashing somewhere in NetSurf... with the good old memory lists and standard memory pools, buffer under/overflows often go unnoticed, but with TLSF you are likely to crash right away.

Of course, a Wipeout session could reveal this, albeit it is going to be a painfully slow experience with such a complex application.
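The point about overflows going unnoticed can be illustrated with the guard-byte ("red zone") idea that debugging allocators like Wipeout and MungWall are built on. This is a hypothetical portable-C sketch of the technique, not how either tool is actually implemented (they also fill freed memory, track call sites, and hook the system allocator).

```c
/* Guard-byte overflow detection: pad each allocation with known
 * bytes after the payload, and check them to detect writes past
 * the end of the block. */
#include <stdlib.h>
#include <string.h>

#define REDZONE    8
#define GUARD_BYTE 0xDB

/* layout: [size_t size][payload ...][REDZONE guard bytes] */
void *guarded_alloc(size_t size)
{
    unsigned char *block = malloc(sizeof(size_t) + size + REDZONE);
    if (block == NULL)
        return NULL;
    memcpy(block, &size, sizeof(size_t));
    memset(block + sizeof(size_t) + size, GUARD_BYTE, REDZONE);
    return block + sizeof(size_t);
}

/* returns nonzero if the guard bytes after the payload were overwritten */
int guarded_check(void *ptr)
{
    unsigned char *payload = ptr;
    size_t size, i;
    memcpy(&size, payload - sizeof(size_t), sizeof(size_t));
    for (i = 0; i < REDZONE; i++)
        if (payload[size + i] != GUARD_BYTE)
            return 1;
    return 0;
}

void guarded_free(void *ptr)
{
    free((unsigned char *)ptr - sizeof(size_t));
}
```

With a plain first-fit allocator a one-byte overrun usually lands in slack space and does nothing; a checker like this (or a tighter allocator like TLSF) makes it visible immediately.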
My Amigas: A500, Mac Mini and PowerBook
 

Offline jennadk

  • Newbie
  • Join Date: Mar 2015
  • Posts: 26
Re: NetSurf OS3.x Issues
« Reply #234 on: February 28, 2016, 06:47:24 PM »
This time, when launching from the CLI, it gave me "8100 0005", which is a corrupted list in FreeMem, and then my machine hard-rebooted. I am using the latest & greatest from your site. At this point I'm willing to concede there's maybe an odd library incompatibility? Something making my memory management not work quite right. My particular 1230 card, while lacking an FPU, does have an MMU.

I'd be happy to run a debug build, but I don't have a cross-compiler set up. Any programming I do is in Python or PHP, which says exactly how much I like compilers.

Edit: this is what I mean about the corrupted list: http://eab.abime.net/showthread.php?t=64159

Edit Edit: Okay, I'm getting inconsistent and different freezes/crashes depending on how I launch this, so I'm probably going to try a clean OS install next. I still get the original 8000000B error when trying the regular double-click launch instead of launching from the CLI, which makes no sense, as I get different errors or a hang there.
« Last Edit: February 28, 2016, 07:02:56 PM by jennadk »
________________________________________________
Amiga 1200 ACA1231|IDE2CF|16GB CF|Indivision|3ComLAN
Morphos Mini G4 1.4Ghz|1GB RAM
 

Offline kamelito

Re: NetSurf OS3.x Issues
« Reply #235 on: February 28, 2016, 06:56:50 PM »
@Olsen
Lists... this reminds me of this: https://isocpp.org/blog/2014/06/stroustrup-lists

Kamelito
 

Offline chris

Re: NetSurf OS3.x Issues
« Reply #236 on: February 28, 2016, 11:24:15 PM »
Quote from: jennadk;804816
Edit Edit: Okay, I'm getting inconsistent and different freezes/crashes depending on how I launch this, so I'm probably going to try a clean OS install next. I still get the original 8000000B error when trying the regular double-click launch instead of launching from the CLI, which makes no sense, as I get different errors or a hang there.

I'm confident that there is some memory trashing happening in NetSurf, I just wish I knew how to figure out where.

There is a slightly different code path depending on whether it is started from WB or CLI, which might account for the differences in how it crashes.  The trashing must happen before the branch, which narrows it down a bit, but not much.

I can add some more logging which might help see which function is crashing, and running something like Enforcer might help (although I have no way of interpreting any output).
"Miracles we do at once, the impossible takes a little longer" - AJS on Hyperion
Avatar picture is Tabitha by Eric W Schwartz
 

Offline chris

Re: NetSurf OS3.x Issues
« Reply #237 on: February 28, 2016, 11:38:59 PM »
Quote from: olsen;804804
I suspect that this may not make much of a difference. The memory pools, which are what the malloc()/alloca()/realloc()/free() functions in clib2 are built upon, were intended to avoid fragmenting main memory. This is accomplished by having all allocations smaller than the preset puddle size draw from a puddle that still has enough room left for them to fit. Fragmentation happens inside that puddle.

Hi Olaf, thanks for commenting.  I increased the puddle size to 16K from the default 4K and it seems to have helped in my limited testing (no feedback yet from anybody else).  I figure this reduces the number of puddles in the list which need to be searched through, as well as allowing larger allocations into the pool.

Quote
From the clib2 side I'm afraid that the library can only leverage what the operating system provides, and that is not well-suited for applications which have to juggle a large number of allocated memory fragments.

Which is exactly what you get in a web browser, with lots of memory being allocated for one page, and then for the next, and some of the old memory being deallocated...

Quote
The question is what size of memory chunk is common for NetSurf, how many chunks are in play, and how large they are. If you have not yet implemented it, you might want to add a memory allocation debugging layer and collect statistics from it over time.

I haven't.  I think there are cache statistics in the log which might offer some clues, though.  Beyond the cache data, everything else is structures, which should all be very small.
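For the "lots of very small structures" case, the same-size pooling Chris mentions earlier can be sketched in portable C. On AmigaOS the backing memory would come from exec's CreatePool()/AllocPooled(); here plain malloc() stands in, and all names are illustrative rather than NetSurf code.

```c
/* Same-size element pool: freed elements go on a singly-linked
 * free list, so allocation and deallocation are O(1) instead of
 * a linear search through puddles, and same-size reuse avoids
 * fragmentation entirely. */
#include <stdlib.h>

struct free_node {
    struct free_node *next;
};

struct elem_pool {
    size_t elem_size;        /* every element has this size */
    struct free_node *free;  /* list of returned elements */
};

void pool_init(struct elem_pool *pool, size_t elem_size)
{
    /* elements must be big enough to hold the free-list link */
    if (elem_size < sizeof(struct free_node))
        elem_size = sizeof(struct free_node);
    pool->elem_size = elem_size;
    pool->free = NULL;
}

void *pool_alloc(struct elem_pool *pool)
{
    if (pool->free != NULL) {
        /* reuse a previously freed element: O(1), no search */
        struct free_node *node = pool->free;
        pool->free = node->next;
        return node;
    }
    return malloc(pool->elem_size);
}

void pool_free(struct elem_pool *pool, void *ptr)
{
    struct free_node *node = ptr;
    node->next = pool->free;
    pool->free = node;
}
```

One pool per frequently-used structure type is the usual pattern; the cost is that memory held on a free list is never returned to the system until the pool is torn down.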
"Miracles we do at once, the impossible takes a little longer" - AJS on Hyperion
Avatar picture is Tabitha by Eric W Schwartz
 

Offline wawrzon

Re: NetSurf OS3.x Issues
« Reply #238 on: February 29, 2016, 12:21:51 AM »
Quote from: olsen;804804
I suspect that this may not make much of a difference. The memory pools, which are what the malloc()/alloca()/realloc()/free() functions in clib2 are built upon, were intended to avoid fragmenting main memory. This is accomplished by having all allocations smaller than the preset puddle size draw from a puddle that still has enough room left for them to fit. Fragmentation happens inside that puddle.

The problems begin when the degree of fragmentation inside these puddles becomes so high that the only recourse is to allocate more puddles and allocate memory from that. The number of puddles in use increases over time, and when you try to allocate more memory, the operating system has to first find a puddle that still has room and then try to make the allocation work. Both these operations take more time the more puddles are in play, and the higher the fragmentation within these puddles is. Allocating memory will scale poorly, and what goes for allocations also goes for deallocations.

The other problem is with memory allocations whose length exceeds the puddle size. These allocations will be drawn from main memory rather than from the puddles. This will likely increase main memory fragmentation somewhat, but the same problems that exist with the puddles apply to main memory, too: searching for a chunk to draw the allocation from takes time, and the same goes when deallocating that chunk. There's an additional burden on this procedure because the memory pool has to keep track of that "larger than puddle size" allocation, too.

Because all the memory chunk/puddle, etc. allocations and deallocations use the humble doubly-linked Exec list as their fundamental data structure, the amount of time spent finding the right memory chunk, and putting the fragments back together, scales poorly. Does this sound familiar?

From the clib2 side I'm afraid that the library can only leverage what the operating system provides, and that is not well-suited for applications which have to juggle a large number of allocated memory fragments.

The question is what size of memory chunk is common for NetSurf, how many chunks are in play, and how large they are. If you have not yet implemented it, you might want to add a memory allocation debugging layer and collect statistics from it over time.

It may be worth investigating how the NetSurf memory allocations could be handled by an application-specific, custom memory allocator that sits on top of what malloc()/alloca()/realloc()/free() can provide and which should offer better scalability.


Thanks for the confirmation, Olaf. This is exactly what I had in mind. However, designing an application-specific memory allocator or allocation method is probably not the right way...
 

Offline chris

Re: NetSurf OS3.x Issues
« Reply #239 on: February 29, 2016, 12:50:04 AM »
Leap day treat: I've managed to get NetSurf compiling with optimisations enabled!
It doesn't provide much noticeable speed-up (most of the heavy processing happens in the libraries, which were already built with optimisations on), and Javascript still doesn't work (fatal error on launch).

Usual place, built with optimisations but without Javascript, for testing.
"Miracles we do at once, the impossible takes a little longer" - AJS on Hyperion
Avatar picture is Tabitha by Eric W Schwartz