
Author Topic: newb questions, hit the hardware or not?  (Read 63566 times)


Offline psxphill

Re: newb questions, hit the hardware or not?
« on: July 16, 2014, 12:28:37 AM »
Quote from: matthey;769066
The mentality of some of the so called next generation Amiga guys is to get away from hardware dependency.

If you can't hit the hardware to read the mouse buttons then it's not an Amiga. However, that only supports two ports; if you want more than two then you need to use the OS.
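For illustration, here is a minimal C sketch of that kind of direct hardware access, assuming an Amiga target: the left mouse button for each game port is visible in the CIAA PRA register at $BFE001 (bit 6 for port 0, bit 7 for port 1, active low).

Code: [Select]
/* Minimal sketch: polling the left mouse buttons straight from the
 * hardware. CIAA PRA ($BFE001) bit 6 is the button on game port 0 and
 * bit 7 the button on game port 1, both active low. This is exactly the
 * kind of direct access that ties you to two ports and to this chipset. */
#include <stdint.h>

#define CIAA_PRA (*(volatile uint8_t *)0xBFE001)

int left_button_down(int port)                 /* port: 0 or 1 */
{
    uint8_t mask = (port == 0) ? 0x40 : 0x80;
    return (CIAA_PRA & mask) == 0;             /* 0 = pressed (active low) */
}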
 

Offline psxphill

Re: newb questions, hit the hardware or not?
« Reply #1 on: July 17, 2014, 03:20:34 PM »
Quote from: commodorejohn;769198
Low-level optimization does not prevent you from doing high-level optimization

It does not completely prevent you from doing it, but writing complex code in assembler is a lot harder than writing it in C#/Java because you have to do everything yourself.
 
I've replaced assembler with C in projects and ended up with the C code being quicker. It was rewritten in C to make it portable, but in the process I noticed simple ways to optimise the algorithm. The effort required to optimise the assembler code wasn't worth it, and neither was writing another version of it for a different assembler.
 
Quote from: commodorejohn;769198
(and the idea that assembly is more prone to bugs than high-level languages is a myth. Bugs come from sloppy thinking, not from lack of language features.)

It's not a myth. It's much easier to spot bugs in a high-level language than it is to spot one in assembler. If you have infinite time to study a small assembler program then yes, you can get it to zero bugs, and sometimes that is important. Under time pressure, though, if you can't spot a bug by speed-skimming your code as it scrolls by, then it's likely to ship. Automated testing and code analysis also help reduce bugs, but again these are easier to do with high-level languages than with assembler.
 
I guess you have never had to spend a couple of months writing 400 lines of code a day, to have it tested for a couple of days before it's deployed to thousands of users who work outside of office hours. You need that to work. The language doesn't give you that magically, but spending the time to evaluate whether you could do it faster in assembler is likely to break your budget.
 
Having the same source compiled for PPC and 68k is more important than shaving off a few microseconds. Even if you could speed layer operations up by a further 10%, it would only make a noticeable difference if software was constantly performing layer operations. If it spends its time doing anything else, that diminishes the return you get.
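To put a number on that diminishing return, here is a quick back-of-the-envelope sketch (the 5% share of run time is an assumed example figure, not a measurement):

Code: [Select]
/* Amdahl's-law style estimate: if layer operations are only a small
 * fraction of total run time, a 10% speed-up of them barely moves the
 * overall figure. */
#include <stdio.h>

int main(void)
{
    double layer_fraction = 0.05;   /* assumed share of run time in layer ops */
    double layer_speedup  = 1.10;   /* the hypothetical 10% improvement */

    double overall = 1.0 / ((1.0 - layer_fraction)
                            + layer_fraction / layer_speedup);

    printf("overall speed-up: %.4fx\n", overall);   /* prints ~1.0046x */
    return 0;
}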
 
If your argument is that every single piece of software ever written should be micro-optimised, then you're likely to be dead before any of the software is finished. It would be cheaper to phone up Motorola and pay them to design a faster 68k just for you.
 

Offline psxphill

Re: newb questions, hit the hardware or not?
« Reply #2 on: July 17, 2014, 05:08:22 PM »
Quote from: commodorejohn;769216
I just take exception to your claims that using assembler is never relevant.

All I've seen is him taking exception to criticism that he hasn't written it in assembler, with people justifying that criticism by saying it's a myth that writing in assembler takes longer and is more error-prone.
 
Quote from: commodorejohn;769212
That was never what I was arguing. I was simply saying that the fact that algorithm optimization should come first doesn't make low-level optimization irrelevant.

His argument appears to be: before you do any optimization you should determine how much time the code is actually running for. In this circumstance he believes it doesn't run often enough for writing it in assembler to have any noticeable effect. This goes for any change that increases the ongoing maintenance cost. Sometimes optimising an algorithm in C has no visible benefit because it's not called often enough, and it's more cost-effective to throw away the optimised version.
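As a minimal sketch of "measure before you optimise", something like this (with made-up stand-in workloads) tells you what share of the run time the candidate code actually accounts for before anyone reaches for assembler:

Code: [Select]
/* Crude timing harness: the two functions are dummy stand-ins for "the
 * code you want to hand-optimise" and "everything else the program does". */
#include <stdio.h>
#include <time.h>

static volatile double sink;

static void candidate_code(void)     /* stand-in for the suspect routine */
{
    for (int i = 0; i < 100000; i++)
        sink += i * 0.5;
}

static void everything_else(void)    /* stand-in for the rest of the program */
{
    for (int i = 0; i < 10000000; i++)
        sink += i * 0.25;
}

int main(void)
{
    clock_t t0 = clock();
    candidate_code();
    clock_t t1 = clock();
    everything_else();
    clock_t t2 = clock();

    double share = (double)(t1 - t0) / (double)(t2 - t0);
    printf("candidate code: %.1f%% of run time\n", 100.0 * share);
    return 0;
}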
« Last Edit: July 17, 2014, 05:22:36 PM by psxphill »
 

Offline psxphill

Re: newb questions, hit the hardware or not?
« Reply #3 on: July 17, 2014, 08:45:13 PM »
Quote from: Thorham;769222
Of course it takes longer and is easier to mess up (doesn't mean you end up with bug riddled code, like someone claimed).

How does "easier to mess up" not mean "you end up with bug riddled code"?
 
Quote from: Sean Cunningham;769225
The developers on the original PSX titles were coding games in C and couldn't achieve anything but porkish performance and absolutely nothing for the first couple generations could achieve anything approaching arcade quality responsiveness and framerates.

I believe every single PSX game ever made was mostly written in C, although there are a couple of early games that might have had parts written in assembler, judging by the bugs in them. Tekken 2 has a bug that mostly goes unnoticed due to luck, but which requires emulators to be more accurate than Sony had envisaged. Otherwise it looks like http://smf.mameworld.info/img/tekk0009.png
 
Tekken and Tekken 2 were written for the arcade System 11 hardware first and then ported. System 11 appears to have been originally based on one of the PSX prototypes, as the GPU is very different. At the end of System 11's life they started shipping it with the PSX GPU, so a lot of games detect which GPU is fitted and adapt to it. Some versions of arcade Tekken 2 will run on the newer hardware, but the early versions won't. I suspect they used different toolchains or libraries for those games, which might be where the Tekken 2 bug comes from.
 
I think one of the Tomb Raider games has some odd register usage, which someone speculated made it look like it was partly written in assembler too. The Saturn version was much worse than the PSX version. https://www.youtube.com/watch?annotation_id=annotation_2637363251&feature=iv&src_vid=q6oh_y9Tdao&v=z3GalI7AVj8 https://www.youtube.com/watch?v=NgQP7JOqgsk I believe the Saturn was the lead platform, and it shipped three months earlier than the PSX version. http://www.game-rave.com/psx_galleries/battle_tombraider/index.htm
 
The reason why the software became better was mostly the performance analyser telling you what was actually making your software slow. Up until then, people were blindly making low-level optimisations and crossing their fingers that they would work; instead, the analyser would tell you that the slowdown was actually caused by cache misses, which means you need to rewrite or restructure your engine to fit the cache better. Other causes could be the GPU being starved because you were overloading the GTE, or the GPU being saturated because you were trying to draw too much or use too many textures. You needed to be able to rapidly change your engine all the way through development, and that puts assembler out of the question.
 
I believe Gran Turismo was the first game to be developed using the Performance Analyser. Namco did a faster version of Ridge Racer which was bundled with Ridge Racer Type 4; it ran at 60fps instead of the original 30fps. I don't know whether they just used their experience or whether this benefited from the performance analyser.
 
When Namco wrote Ridge Racer the PSX didn't exist in its finished form, and the hardware was actually quite different. Once SN Systems talked them into using PCs for development and putting the console hardware onto ISA cards, the target boxes were returned to Sony, so not many of the DTL-H500 target boxes exist and it's hard to tell how different the hardware was. I don't think Namco went back and optimised it for the final hardware. The CD drive didn't exist when they wrote the game either, which is one of the reasons it is a single load and only uses the drive for Red Book audio at run time. They only got hold of a prototype drive after the game was finished.
 
I don't believe that Sega ever had any tools like the ones Sony had, so the PSX games just kept getting better. While the Saturn had some good games, they were generally poor. It did well for 2D games because I think its 2D fill rate might have been higher than the PSX's. Also, Sony banned 2D games in some regions for a while because they wanted to focus on 3D games, which might have been why the 2D shooters ended up on the Saturn.
 
Quote from: Sean Cunningham;769225
The Saturn didn't have the true 3D acceleration that the PSX had

The Saturn had the exact same "3D" capabilities as the PSX. Both had hardware to do the 3D-to-2D transforms, as both GPUs could only render 2D; the main difference was that the PSX scanned triangles and looked up the textures, while the Saturn scanned the textures and plotted quads. The Saturn could draw the same screen coordinate more than once or not at all, which made the graphics look a bit wonky and made it hard to do transparency and Gouraud shading. The PSX GPU could accept quads, but it split them into two or more triangles for rendering (it also has to split triangles sometimes, as the renderer has some specific requirements to reduce texture-coordinate rounding errors), and this itself causes other rendering issues (though these can be worked around more easily than the Saturn issues).
 
The Saturn also had a 2D display chip and a second CPU, which for games released on both formats were probably underutilised. You couldn't justify taking a game that already ran and spending another year making it 20% quicker when the market was so much smaller.
 
The only major low-level optimisation that Sony introduced was inlining the GTE opcodes (the geometry transform engine that does the 3D-to-2D transformations). Originally you called them through a function, as Sony tried to hide and abstract everything about the hardware so that future consoles could be backward compatible. They backed off in this circumstance because they measured the effect. Sony really tried hard to make developers write software that was portable to different hardware. There were three main revisions of the retail PSX, which all ran at different speeds. Games with race conditions are a problem if you only test on one speed of console, but it mostly worked out. It wasn't until the PS2, where the PSX GPU is emulated in software, that they had to patch games to make them run properly. They had to do something similar for the PS3's backward compatibility; they advertised that job on their web site. There are no 100% accurate PSX emulators out there, because nobody even knows what that means (including, it seems, Sony, as they can't even emulate the GTE 100% accurately).
 
IMO the PSX is like the Amiga, while the Saturn is like the ST. In the next generation the PS2 was like the Saturn, the Dreamcast and Xbox were like a PC, and the GameCube was the nicest hardware. The 360 and PS3 were pretty similar due to Microsoft buying the PS3 CPU from IBM (read the book http://www.amazon.co.uk/Race-New-Game-Machine-The/dp/0806531010). Sony kept their tradition of making more and more complex hardware that required low-level optimisation to work properly, which is what finished Ken Kutaragi's career. They've both gone back to PC hardware now, with RAM type being the main difference, which introduces interesting issues for cross-platform games.
« Last Edit: July 17, 2014, 10:35:12 PM by psxphill »
 

Offline psxphill

Re: newb questions, hit the hardware or not?
« Reply #4 on: July 17, 2014, 11:34:36 PM »
Quote from: Thorham;769236
Just because it's easier to make mistakes doesn't mean you can't properly debug your code.

In a lot of cases, if the bug doesn't jump out of the screen at you and it only fails in specific cases, then it will get shipped. That is why there has been so much money spent on better compilers and code analysis tools.
 
Quote from: Thorham;769236
Also, the bug riddled thing makes it sound like you can't write good software in assembly language, which is obviously nonsense.

It's not impossible, but neither is winning the jackpot on the lottery; it's just very unlikely. If you're on a death march (http://en.wikipedia.org/wiki/Death_march_(project_management)) then you will release the software as soon as you can because you're sick of it. Writing it in a high-level language will definitely increase the chance of releasing it without major bugs.
 
It's also more likely to be good if the source code is from an existing project that has already had many hours of testing, like layers V45.
 
Quote from: biggun;769237
Yea - I know what you mean,
God blessed me with the gift that I can write the most complex algorithms and they are always bugfree.
I never need to debug. Whether I write in C or ASM or right away in hexcode. My code is always bug free.
;-)
 
So if you are like me then coding everything right away in ASM is fine.
But I was told that some people find coding in C easier.

You either have a different definition of complex than I do, or that is sarcasm, or both.
« Last Edit: July 17, 2014, 11:42:08 PM by psxphill »
 

Offline psxphill

Re: newb questions, hit the hardware or not?
« Reply #5 on: July 18, 2014, 05:30:33 PM »
Quote from: Sean Cunningham;769263
None of them invested in coding the way the AM divisions at SEGA did.

That was more because they already had experience with 3d. Most developers had no experience of 3d at all at the beginning as they had megadrive/snes/amiga backgrounds. That is why Sony got involved with Namco.
 
I always found the Saturn ports of games to be very disappointing and nothing like the frame rate or resolution of the arcade games they were supposed to be. I just watched a video of Fighting Vipers and it runs at quite a low frame rate.
 
Quote from: Bif;769279
The R3000 was interesting in its instruction pairings and I think the compiler wasn't daring enough to get as aggressive as it could.

FWIW It's not an R3000, although Sony went to great lengths to make you think it was.
 
Sony licensed the R33300 from LSI and modified it. You get the HDL and you can change what you want, like adding the GTE and making it so the data cache could only run in scratchpad mode (the R33300 can be switched at run time between a traditional cache and a scratchpad). You then go back to LSI and they turn it into a sea of gates. The MDEC, DMA, IRQ and the RAM interface were also included here; if you look at a decap of the chip it's pretty much just one big blob of algorithmically generated gates, whereas layouts designed by humans tend to have well-defined areas for each piece of functionality. If we had the HDL, it could be converted to run on an FPGA.
 
There aren't instruction pairings as such; it's pipelined so that each instruction should finish in a cycle and there is no register stalling (apart from the GTE and mult/div). So if you read from a register that hasn't been written to yet by the previous instruction, you get the old contents unless an interrupt has occurred. There is a FIFO write cache so writes don't always stall (this is a standard R33300 feature which can be turned on or off at runtime; they didn't bother crippling that), and it can throw you off if you don't know about it.
 
Quote from: Bif;769279
With these older compilers it could be a lot of trial in error in how a loop is constructed, pointer increment vs. index increment, the magic amount of times to unroll a loop, etc.

The instruction cache has only one set, so it's very easy to churn it when you call a function. If the entire function plus the functions it calls cannot fit in the cache, then just moving them around in memory can make a huge difference. But that can happen whether you write your application in C or assembler; the key is to have your code written in such a way that you can easily refactor it, and that isn't assembler.
 
Quote from: Bif;769279
Now for PSX, I believe one of the things that really dragged down early game performance was the piss poor Sony APIs we were forced to use. Not only did they not always make a lot of sense, their performance was atrocious in some cases (for no great reasons, just brain dead code / API design), with no legal way around it.

There is some interesting code in their libraries, partly caused by having to work around bugs in the hardware. Some of the later APIs were better, some were worse. The problem was that Sony only wanted you to use their libraries, because then they only needed to make sure that the next hardware would work with those libraries. They should have spent more effort on them to start with, because they were reluctant to improve them later on. Even the BIOS has some pretty poor code in it, which they didn't fix because they didn't want to hurt compatibility. It was definitely a lesson for them.
 
Quote from: biggun;769276
Haha lol.
But its true that I write sometimes a handfull instructions in Hexcode directly.

I do too, but too infrequently and for too many different CPUs to remember the opcodes. I generally look them up and poke them into something with a disassembler, as I don't usually have a cross assembler.
 
Quote from: Bif;769279
though I would say I'd certainly avoid 100% ASM coding just for the sake of it, I'm too old for that crap now.

There is a lot more investment in better compilers these days. If anyone likes staring at ASM trying to figure out ways of making it faster and wants better Amiga software then writing a new back end for gcc or clang would probably be the best bet.
« Last Edit: July 18, 2014, 06:12:21 PM by psxphill »
 

Offline psxphill

Re: newb questions, hit the hardware or not?
« Reply #6 on: July 19, 2014, 10:18:44 AM »
Quote from: Bif;769355
If you didn't throw an instruction outside the end of a loop you wasted cycles.

Yeah, that is where it got its name: "Microprocessor without Interlocked Pipeline Stages". Even a branch modifying the program counter is performed without interlocks, so the following instruction is still executed.
 
Branch and load delay slots made the hardware much simpler by moving the complexity to the compiler. The CPU in your PC is doing peephole optimisation constantly, which could be done at compile time instead. The disadvantage is that you're baking a lot of CPU architecture into the binary, which is why virtual machines are more interesting. The ART runtime on Android is moving away from JIT and going from bytecode to optimised code at install time.
 
Quote from: Bif;769355
I'm only now remembering a bit more where I recall designing every loop to do at least 6 loads before anything else. Or maybe that was the R5900, my memory is not that reliable.

I can't think why doing six loads would make a difference without a data cache, so it probably is the R5900. The cache prefetch on the R5900 is interesting: it triggers the data to be fetched from RAM into the cache but doesn't stall the application waiting for the result. So you can request that all your data is fetched, then do some calculation on data previously loaded into the cache before finally loading the newly cached data into registers. This is the kind of thing that is really hard to get optimal even in assembler, because you might end up flushing data out of the cache that you will need again soon.
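As a rough C analogue of that pattern, GCC/Clang's __builtin_prefetch hint starts pulling the next chunk of data towards the cache while you work on data that is already there; this is only an illustration of the idea, not R5900 code:

Code: [Select]
/* Software-prefetch sketch: hint the next elements into the cache, then do
 * the work on elements that were hinted earlier. The prefetch distance of
 * 16 is an arbitrary example value. */
#include <stddef.h>

void scale(float *data, size_t n, float factor)
{
    const size_t ahead = 16;                       /* prefetch distance */

    for (size_t i = 0; i < n; i++) {
        if (i + ahead < n)
            __builtin_prefetch(&data[i + ahead]);  /* non-faulting hint */
        data[i] *= factor;                         /* work on cached data */
    }
}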
 
The PS1 was definitely the best design that Sony did. The PS2 and PS3 were too complex and it's hard to think of the PS4 as anything other than a fixed specification PC.
 
I believe that if Commodore had ignored AAA and started AGA earlier, included some chunky 8-bit and 16-bit modes and some simple texture mapping in the blitter, and released it in 1990, then they would have stood a chance against the 3D consoles, Doom on the PC, etc. AGA was designed in a year and AAA was started in 1988, so giving them two years should have been enough.
« Last Edit: July 19, 2014, 10:42:36 AM by psxphill »
 

Offline psxphill

Re: newb questions, hit the hardware or not?
« Reply #7 on: July 19, 2014, 01:26:15 PM »
Quote from: spirantho;769360
Floppy access can be MUCH faster without the OS overheads on smaller chunks of data, and less memory footprint, which is very important. Plus to use the OS routines, you need the OS in memory too, which can be a very large chunk of available memory.
1% is massively understating the potential gains, in speed and memory.

You should be able to see the result by using WHDLoad.
 
I think the reason why games kept using custom disk loading was due to piracy and not enough people caring about running games from a hard disk.
 
There are plenty of PC games that did exactly the same thing in the mid-80s, but eventually publishers decided that allowing games to be installed on a hard disk would boost sales. http://www.vintage-computer.com/vcforum/showthread.php?16334-PC-Floppy-Disk-Games-Copy-Protection
 
I remember removing the floppy disk protection check from one of the lemmings games on the PC so it could run without the original disk in the drive.
« Last Edit: July 19, 2014, 01:29:16 PM by psxphill »
 

Offline psxphill

Re: newb questions, hit the hardware or not?
« Reply #8 on: July 19, 2014, 08:30:05 PM »
Quote from: Thomas Richter;769374
Why? Do you think that games use higher magic for loading? The trackdisk device also reads the data in full tracks, and then decodes the entire track in a single go, buffering the result. The limiting factor is the I/O speed of the floppy, and the timing of step motor. Everything else is just software and quite a bit faster than any type of I/O operation.

trackdisk.device is awful; in the Abacus book there was a program that patched the 1.2 trackdisk to double its speed (http://issuu.com/ivanguidomartucci/docs/amiga-disk-drives-inside-and-out---ebook-eng page 249, real page 240). Either the person at Commodore/Amiga who wrote trackdisk didn't understand the hardware, or it was written before the functionality was added/working and the code was never revisited. I think Commodore improved it in Release 2, but it was a little late by then as Amiga games were already in decline.
 
If the OS had been loaded from flash ROM or a hard disk instead of mask ROM, then it would have made more sense to use the OS.
 
Final Fight uses the OS for disk loading during levels, so it's entirely possible. But I guess it is slower and leaves less RAM and CPU for the game in the process.
When dos.library and the filesystem were brought in, they were only minimally changed to fit into the Amiga, and it wasn't a great design in the first place. Commodore also improved it in Release 2, but there are some things they couldn't change because of compatibility.
« Last Edit: July 19, 2014, 08:49:21 PM by psxphill »
 

Offline psxphill

Re: newb questions, hit the hardware or not?
« Reply #9 on: July 20, 2014, 07:40:46 AM »
Quote from: commodorejohn;769393
No, no, no, Thorham! You can't manage a large-scale project in assembler, therefore that doesn't exist!

It's a monolithic kernel and has little hardware support, yet it's taken 9 years. That sounds unmanageable to me; you seem to be confusing something being unmanageable with something not existing. I'll give you the benefit of the doubt that you just don't understand the meaning of the words, rather than trying to bend the meaning on purpose (if you want an analogy: unmanageable hair doesn't mean you're bald).
 
http://www.osnews.com/story/1385/Comparing-MenuetOS-SkyOS-and-AtheOS/
 
Quote from: LiveForIt;769394
The issue is that Amiga500 does not have lot RAM, there is space for pre fetching blocks.
Instead the disk has rotate to correct sector read a block discard the rest, rotate to next sector read a block and discard the rest.

trackdisk.device only ever reads and buffers whole tracks.
 
If you read the Abacus chapter you'll see that the trackdisk in 1.2 doesn't use the word sync to find where the track starts; it reads more than a track's worth and then uses the CPU to search through the result. I think they might have stopped doing that in Release 2.
 
The disk format wasn't optimal for the hardware either. For reading it would make more sense if there were just one $4489 per track; this wouldn't affect writing, as you have to write an entire track even if you have only modified one byte anyway. It looks like they wanted to allow sector writing, because Paula can search for the sync word when writing, but it doesn't have any way of checking which sector it would be writing to. My guess is that the disk format was decided on and the code hacked to work on the hardware that existed, but nobody had the time, or thought it would be a good idea, to go back and review the design after the hardware was finished.
« Last Edit: July 20, 2014, 08:23:26 AM by psxphill »
 

Offline psxphill

Re: newb questions, hit the hardware or not?
« Reply #10 on: July 20, 2014, 04:33:23 PM »
Quote from: LiveForIt;769411
Well it has to read the RAW data, decode the MFM, after the MFM is decoded can know whats on it, to see what block that was requested.

It doesn't need to do that for every block, though; if you request a block from the last track read, then it can skip those steps.
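A minimal sketch of that track-buffer behaviour, with made-up names and sizes rather than trackdisk.device internals:

Code: [Select]
/* Serve sector reads out of a whole-track buffer: only hit the drive (and
 * re-decode the MFM) when the requested block lies on a different track
 * from the one already buffered. */
#include <string.h>

#define SECTORS_PER_TRACK 11
#define SECTOR_SIZE       512

static unsigned char track_buf[SECTORS_PER_TRACK * SECTOR_SIZE];
static int buffered_track = -1;

/* Placeholder for the real "read raw track and MFM-decode it" step. */
static void read_and_decode_track(int track, unsigned char *dest)
{
    (void)track;
    memset(dest, 0, sizeof track_buf);
}

void read_sector(int track, int sector, unsigned char *dest)
{
    if (track != buffered_track) {       /* different track: do the slow path */
        read_and_decode_track(track, track_buf);
        buffered_track = track;
    }
    memcpy(dest, &track_buf[sector * SECTOR_SIZE], SECTOR_SIZE);
}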
 
Quote from: commodorejohn;769429
And I'll give you the benefit of the doubt and assume that you're a robot from space who does not understand the thing we hu-mans refer to as a "joke."

We humans call it sarcasm. I just thought your mobbing attempt should be debunked for its inaccuracy.
 
Quote from: Thorham;769421
You're right, I haven't. Doesn't mean it's impossible.

There are lots of things that are theoretically possible but practically impossible, usually because people overestimate their abilities and their theory was incomplete.
 
Quote from: Thomas Richter;769417
Actually, no. If you only had a single sync word, then PAULA had only a single chance of finding the sync word per track

Is this a big deal? Why wouldn't it find the sync word? If there are frequent bit errors then the floppy disk is going to fail anyway as there is no redundancy for all the data bits.
 
The only advantage I can think of is that you can start reading sooner, as you don't need to wait for the start of a track, but I'm not sure how much help that is if you need to rearrange the track in RAM anyway, especially with the hokum that Kickstart 1.x uses.
 
Quote from: LiveForIt;769411
On the other hand with no sectors, the be only one CRC for large block of 4489, so more data being lost if there was a read/write error, maybe diving it into sector made it more reliable I don't know.

You could use multiple CRCs, but if you're expecting errors then you could also lose the only $4489. I don't know how well trackdisk copes with damaged sectors, especially if it's the sector number that gets corrupted.
 
Quote from: Thomas Richter;769417
With the relatively short sector gap the Amiga trackdisk layout has, this would be rather impossible. The chance of overwriting the next sector would be very high. For the PCs, the uncertainty in write alignment is compensated with the higher inter-sector gap (i.e. the sector can overflow a little bit behind its natural location, then fills the sector gap without overwriting the next sector header).

Yes, you would need to make the sector gap larger, and I think they would have done that if Paula had the functionality. In write mode with sync turned on it does read from the disk and wait for the word sync; however, there is no sector-number comparison. My conjecture is that they may have started down this path and given up due to time or available gates, rather than out of a desire to do it all with the CPU.
 
Quote from: Thomas Richter;769417
From the RKRM description, you would have only gotten unaligned MFM data in the buffer after the track gap, and hence would need to re-align manually - which is what they did. However, PAULA is not that stupid.

I'd assumed that the hardware reference manual was written after trackdisk.device was. There will have been documentation, but exactly what we'll never know. The software and hardware engineers had the opportunity to talk to each other about how the hardware worked, and supposedly they did on other occasions. It kind of worked well enough, and fixing it might not have been seen as a priority. The developer might have been arrogant about his ability and never bothered to discuss it with anyone, or he might have been arrogant enough to say that he'd tried making the hardware work and it didn't, so he'd been forced to do it that way, and nobody ever took him up on it.
 
The Kickstart 1.2 easter egg would suggest that it was released before development moved to Commodore. My guess is that the trackdisk developer didn't transition to Commodore, allowing any misinformation as to why it was coded like that to disappear.
 
Quote from: Thorham;769412
Anyone who says that managing big assembly language projects is impossible, is basically saying that we humans are too damned stupid for that. Speak for yourself, please.

You might want to read this:
 
http://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect
 
My favourite quotes
 
"One of the painful things about our time is that those who feel certainty are stupid, and those with any imagination and understanding are filled with doubt and indecision."
 
"If you’re incompetent, you can’t know you’re incompetent. […] the skills you need to produce a right answer are exactly the skills you need to recognize what a right answer is."
 
Over the years I've often found people say things are impossible when they are possible, but say they are possible when they are impossible. It's usually down to whether they want to do something rather than any practical reasons.
 
Quote from: Thorham;769434
I'm talking about the larger projects like Gnome and KDE. These were undoubtedly not written in a couple of weeks.

You'd need to design it first, or you'll code yourself into a corner and end up constantly rewriting it all to get some new functionality that you think of tomorrow (and the next day). The problem with doing something as a hobby is that you don't have any external influence. If you don't manage scope then you're going to get bored.
« Last Edit: July 20, 2014, 05:14:36 PM by psxphill »
 

Offline psxphill

Re: newb questions, hit the hardware or not?
« Reply #11 on: July 20, 2014, 05:45:07 PM »
Quote from: Thorham;769437
I never said that I can take on large, complex projects in assembly language by myself and get it right, I said that it's not impossible to do large, complex projects in assembly language and do them properly. Although I'm fairly confident, I'd have to try and see for myself if I could do it or not.

You don't have any idea what your own or anyone else's ability to achieve it is. You also don't have enough experience to evaluate your own or anyone else's results.
 
If you read and understood the link I posted, then you'd see that saying it's possible and being fairly confident you could do it yourself is a bad sign for your ability to actually do it. It will definitely have an effect on the gap between how long you think it will take and how long it will actually take.
 
Quote from: Thomas Richter;769438
For the history lesson on the sync register, I found that apparently trackdisk was written before PAULA was completed (back then called "Portia") and CBM decided not to update trackdisk for the new features but rather rush it out. AmigaOs was late anyhow.

Well, that was my initial guess. While AmigaOS was late, so were the chips, although that was partly Commodore themselves switching Agnus from YUV to RGB colour space. When they were hawking the breadboards around there was no floppy; sound and RS-232 would have been much more useful than a floppy disk. I got the impression that trackdisk predated dos.library, but I find that kind of history really interesting, so any links etc. would be great.
 
Quote from: Thomas Richter;769438
It's also interesting to note that the trackdisk in Kickstart 2.0 and above no longer requries user buffers in chip memory. In worst case, it copies data to a chip mem buffer, or decodes using the CPU, bypassing the blitter. The track buffer remains, of course, in chip ram. Thus, the ugly BufMemType hack for the FFS is actually no longer required (and should not be required by any sane device.)

I knew that you could use it to read into fast RAM, but my knowledge of (or at least my memory of) how that affects decoding is limited. The MFM decoding using the blitter is pretty insane; they could have implemented one-pass MFM decoding and encoding in the blitter. On a chip-RAM-only 68000 system the blitter is probably still faster than using the CPU, though; on faster CPUs with fast RAM, using a lookup table is probably much better.
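For reference, the heart of the CPU-side decode is just the odd/even-bits merge that the blitter method also implements; a minimal sketch, assuming the odd and even longword halves have already been located in the track buffer:

Code: [Select]
/* Amiga MFM stores each data longword as its odd bits followed by its even
 * bits; decoding masks off the clock bits and ORs the two halves together. */
#include <stdint.h>
#include <stddef.h>

#define MFM_DATA_MASK 0x55555555UL

void mfm_decode(const uint32_t *odd, const uint32_t *even,
                uint32_t *out, size_t longwords)
{
    for (size_t i = 0; i < longwords; i++)
        out[i] = ((odd[i] & MFM_DATA_MASK) << 1)
               |  (even[i] & MFM_DATA_MASK);
}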
 
I think you'd need to specify MEMF_24BITDMA in BufMemType for a Zorro II SCSI card in a Zorro III system with fast RAM, but you might not consider that sane.
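For what it's worth, what that BufMemType amounts to is the filesystem allocating its buffers from memory that a 24-bit (Zorro II) DMA master can actually reach; a sketch, assuming AmigaOS 3.x headers:

Code: [Select]
/* MEMF_24BITDMA restricts the allocation to memory reachable by a 24-bit
 * DMA master, even on a machine that also has 32-bit fast RAM. The buffer
 * size here is just an example value. */
#include <exec/memory.h>
#include <proto/exec.h>

#define BUF_SIZE (16 * 1024)

APTR alloc_dma_reachable(void)
{
    return AllocMem(BUF_SIZE, MEMF_PUBLIC | MEMF_24BITDMA | MEMF_CLEAR);
}

void free_dma_reachable(APTR buf)
{
    if (buf != NULL)
        FreeMem(buf, BUF_SIZE);
}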
« Last Edit: July 20, 2014, 06:02:20 PM by psxphill »
 

Offline psxphill

Re: newb questions, hit the hardware or not?
« Reply #12 on: July 20, 2014, 11:30:31 PM »
Quote from: Thomas Richter;769449
dos is just a rushed port of Tripos

Yeah, they do seem to have ripped the guts out of it and transplanted exec devices into it reasonably well. But the whole BCPL thing is tragic. I don't think anything other than the 1.x C: directory ever used the A5 global vector (which was why those commands were impervious to PowerPackerPatcher and you had to use the ARP equivalents). I guess AROS 68k doesn't implement the global vector either, but I don't know.
 
Quote from: Thomas Richter;769449
It is insane. Any driver worth its money should know which memory it can reach (by DMA or otherwise), and should take appropriate means to get the data into memory regions where it is accessible for it, possibly using additional buffers.

Additional buffers are the wrong way to do it. Ideally you'd be able to ask for memory that both you and another task can access, and there would be some way for trackdisk.device or scsi.device to tell Exec what memory it needed; the MMU pages for your task and the other task would then get set up properly so that the memory could be accessed.
 
This would change quite a lot, though; I think with the way AllocMem works you need BufMemType. I'm not that bothered about that; MaxTransfer is much higher up my list of WTF.
« Last Edit: July 20, 2014, 11:35:18 PM by psxphill »
 

Offline psxphill

Re: newb questions, hit the hardware or not?
« Reply #13 on: July 21, 2014, 07:00:08 AM »
Quote from: itix;769461
Wasnt MaxTransfer workaround for buggy harddisks that could not transfer more than 64K at once?

You should be able to set MaxTransfer to 0xffffffff on ATA hard drives; there is a fixed upper limit of 0x1fffe which is impossible to exceed and should be hard-coded in the device, and you are then supposed to ask the drive how many words to transfer after issuing a command. Commodore don't seem to know about the 0x1fffe limit, and they assume the drive will transfer what they request.
 
The highest most ATA drives can transfer is 0x1fe00, because they always transfer multiples of 0x200 and 0x20000 would overflow the count. There is nothing to stop a drive that can normally transfer 0x1f800 bytes from only being able to transfer 0x800 on one occasion, because of temporary buffer constraints or sector remapping. If the drive doesn't transfer as much as you need it to, then afterwards you're supposed to issue more commands.
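A sketch of the loop a driver is supposed to implement, with a hypothetical ata_read() helper standing in for issuing one command and reporting how much the drive actually transferred:

Code: [Select]
/* Cap each command at what the drive can take and keep issuing commands
 * until the whole request is satisfied, instead of assuming the drive will
 * transfer exactly what was asked for. ata_read() is a made-up helper. */
#include <stddef.h>
#include <stdint.h>

#define ATA_MAX_BYTES 0x1FE00UL   /* 255 sectors of 0x200 bytes per command */

extern size_t ata_read(uint32_t lba, void *dest, size_t bytes);

int read_blocks(uint32_t lba, uint8_t *dest, size_t bytes)
{
    while (bytes > 0) {
        size_t chunk = (bytes > ATA_MAX_BYTES) ? ATA_MAX_BYTES : bytes;
        size_t done  = ata_read(lba, dest, chunk);   /* may be short */

        if (done == 0 || (done % 0x200) != 0)
            return -1;                               /* treat as an error */

        lba   += (uint32_t)(done / 0x200);
        dest  += done;
        bytes -= done;
    }
    return 0;
}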
 
There might be hard disks that react differently to how Commodore expected them to, but those hard disks are operating within the specification. Commodore either never read it, or decided it was too hard to implement properly (maybe because it was written in assembler?).
 
Any problems with SCSI drives are likely to be caused by similar bugs in the relevant .device code. It sounds better if you can convince everyone that it's a sign of the Amiga's superiority that it requires kludges to work around bugs in cheap hard disks that were made by lesser people for the inferior PC.
« Last Edit: July 21, 2014, 07:39:20 AM by psxphill »
 

Offline psxphill

Re: newb questions, hit the hardware or not?
« Reply #14 on: July 21, 2014, 02:16:42 PM »
Quote from: Thomas Richter;769476
I would not have a problem with an interface that provides the *ideal* memory type for such users that want to optimize the throughput, but the overall design principle should be that a device can handle whatever the user provides, regardless of the buffer memory type or the transfer size. Everthing else is just asking for trouble.

I would prefer that you ask for memory that is applicable, rather than adding lots of layers which just slow everything down when you pass the wrong type of RAM.
 
 
Quote from: Thorham;769442
I know we can go to the moon, and drive remote controllable vehicles around on Mars, so I know that human beings have the ability to pull off some damn difficult things, and that's all I need to know.

Do you know that they spent a lot of money on high-level language development so they didn't have to rely on someone writing complex assembly language for the software on those projects?
« Last Edit: July 21, 2014, 02:23:01 PM by psxphill »