
Author Topic: newb questions, hit the hardware or not?  (Read 56560 times)


Offline Thorham

  • Hero Member
  • Join Date: Oct 2009
  • Posts: 1149
Re: newb questions, hit the hardware or not?
« Reply #89 from previous page: July 17, 2014, 07:27:44 PM »
Quote from: Leffmann;769228
That's a bit over the top :) he's perfectly merited to be this assertive, and he is right in what he says
No, he's not, because he's saying assembly language is a waste of time. To me, my hobby is NOT a waste of time, thank you very much. It would be a different story if he said that it's a waste of time for himself, but he acts as if it's a waste of time for everyone.

Quote from: Leffmann;769228
- there are no gains to be gotten from withering away doing micro-optimizations on parts that have little or no bearing on the performance of the program.
Obviously. It's just that when you write everything in assembler from the start (hobby!), you wouldn't write compiler style crap in the first place.

Quote from: matthey;769229
It's like a puzzle with beauty in the simplest and most logical code.
Indeed :)

Quote from: matthey;769229
Some people have to code for a living
Fortunately I don't :)
 

Offline wawrzon

Re: newb questions, hit the hardware or not?
« Reply #90 on: July 17, 2014, 08:24:23 PM »
This thread is becoming unnecessarily personal. Apparently everybody agrees that high-level languages are best for maintaining huge modular projects, while asm is best for in-place optimizations. Nobody needs to convince others of their personal interests or choices, and it especially makes no sense to insult or attack others. Use your skill where you want, or where it's used best. People are different for a reason; they just have to realize they can cooperate in complementary ways instead of quarrelling.
 

Offline psxphill

Re: newb questions, hit the hardware or not?
« Reply #91 on: July 17, 2014, 08:45:13 PM »
Quote from: Thorham;769222
Of course it takes longer and is easier to mess up (doesn't mean you end up with bug riddled code, like someone claimed).

How does "easier to mess up" not mean "you end up with bug riddled code"?
 
Quote from: Sean Cunningham;769225
The developers on the original PSX titles were coding games in C and couldn't achieve anything but porkish performance and absolutely nothing for the first couple generations could achieve anything approaching arcade quality responsiveness and framerates.

I believe every single PSX game was mostly written in C, although a couple of early games might have had parts written in assembler, judging by the bugs in them. Tekken 2 has a bug that mostly goes unnoticed due to luck, but which requires emulators to be more accurate than Sony had envisaged. Otherwise it looks like http://smf.mameworld.info/img/tekk0009.png
 
Tekken and Tekken 2 were written for the arcade System 11 hardware first and then ported. System 11 appears to have been originally based on one of the PSX prototypes as the GPU is very different. At the end of System 11 life they started shipping with the PSX GPU, so a lot of games detect which GPU is fitted and adapt to it. Some versions of arcade Tekken 2 will run on the newer hardware, but the early versions won't. I suspect they used different tool chains or libraries for those games, which might be where the Tekken 2 bug comes from.
 
I think one of the tomb raider games has some odd register usage which someone speculated made it look like it was partly written in assembler too. The Saturn version was much worse than the PSX version. https://www.youtube.com/watch?annotation_id=annotation_2637363251&feature=iv&src_vid=q6oh_y9Tdao&v=z3GalI7AVj8 https://www.youtube.com/watch?v=NgQP7JOqgsk I believe the Saturn was the lead platform & it shipped three months earlier than the PSX version. http://www.game-rave.com/psx_galleries/battle_tombraider/index.htm
 
The reason the software became better was mostly the performance analyser telling you what was actually making your software slow. Up until then people were blindly making low-level optimisations and crossing their fingers that they would work; instead it would tell you that the real cost was cache misses, which means you need to rewrite/restructure your engine to fit in the cache better. Other reasons could be that the GPU was being starved because you were overloading the GTE, or that the GPU was saturated because you were trying to draw too much or use too many textures. You needed to be able to rapidly change your engine all the way through development, and that puts assembler out of the question.
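To give a rough idea of the kind of restructuring that means (a generic C sketch, not code from any actual PSX engine): splitting the per-frame "hot" data away from rarely touched data is a typical fix for cache misses.

Code:
#include <stddef.h>

/* Array-of-structs: every per-frame update drags the rarely used fields
   through the cache along with the hot ones. */
struct EntityFat {
    float x, y, z;        /* touched every frame      */
    char  name[32];       /* touched almost never     */
    int   ai_state[16];   /* touched only on AI ticks */
};

/* Hot/cold split: the per-frame loop walks a dense array of just the hot
   fields, so far more entities fit in the cache at once. */
struct EntityHot  { float x, y, z; };
struct EntityCold { char name[32]; int ai_state[16]; };

void move_all(struct EntityHot *e, size_t n, float dz)
{
    for (size_t i = 0; i < n; i++)
        e[i].z += dz;     /* streams linearly through memory */
}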
 
I believe Gran Turismo was the first game to be developed using the Performance Analyser. Namco did a faster version of Ridge Racer which was bundled with Ridge Racer Type 4; it ran at 60fps instead of the original 30fps. I don't know whether they just used their experience or whether it benefited from the performance analyser.
 
When Namco wrote Ridge Racer the PSX didn't exist in its finished form and the hardware was actually quite different. Once SN Systems talked them into using PCs for development and putting the console hardware onto ISA cards, the Target boxes were returned to Sony. Not many of the DTL-H500 target boxes exist, so it's hard to tell how different it was. I don't think Namco went back and optimised the game for the final hardware. The CD drive didn't exist when they wrote the game either, which is one of the reasons it is a single load and only uses the drive for Red Book audio at run time. They only got hold of a prototype drive after the game was finished.
 
I don't believe that Sega ever had any tools like the ones Sony had, so the PSX games just kept getting better. While the Saturn had some good games, they were generally poor. It did well for 2d games because I think the fill rate for 2d might have been higher than the PSX's. Also, Sony banned 2d games in some regions for a while because they wanted to focus on 3d games, which might be why the 2d shooters ended up on the Saturn.
 
Quote from: Sean Cunningham;769225
The Saturn didn't have the true 3D acceleration that the PSX had

The Saturn had the exact same "3d" capabilities as the PSX. Both had hardware to do the 3d to 2d transforms, as both GPUs could only render 2d; the main difference was that the PSX scanned triangles and looked up the textures, while the Saturn scanned the textures and plotted quads. The Saturn could draw the same screen coordinate more than once or not at all, which made the graphics look a bit wonky and made it hard to do transparency and gouraud shading. The PSX GPU could accept quads, but it split them into two or more triangles for rendering (it also has to split triangles sometimes, as the renderer has some specific requirements to reduce texture coordinate rounding errors), and this itself causes other rendering issues (though they can be worked around more easily than the Saturn issues).
 
The Saturn had a 2d display chip as well and a second CPU, which for games that were released on both formats was probably underutilised. You couldn't justify taking a game that already ran and spending another year making it run another 20% quicker when the market was so much smaller.
 
The only major low-level optimisation that Sony introduced was inlining the GTE opcodes (the geometry transform engine that does the 3d to 2d transformations). Originally you called them through a function, as Sony tried to hide and abstract everything about the hardware so that future consoles could be backward compatible. They backed off in this case because they measured the effect. Sony really tried hard to make developers write software that was portable to different hardware. There were three main revisions of the retail PSX, which all ran at different speeds. Games with race conditions are a problem if you only test on one speed of console, but it mostly worked out. It wasn't until the PS2, where the PSX GPU is emulated in software, that they had to patch games to make them run properly. They had to do something similar for PS3 backward compatibility; they advertised that job on their web site. There are no 100% accurate PSX emulators out there, because nobody even knows what that means (including, it seems, Sony, as they can't even emulate the GTE 100% accurately).
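As a rough sketch of why inlining mattered (plain C, with an ordinary saturating add standing in for a GTE opcode; none of this is the actual Sony API):

Code:
#include <stdint.h>

/* Library-style: every operation pays for a call, argument passing and a
   return.  This is the shape of the original call-a-function GTE interface. */
int32_t sat_add(int32_t a, int32_t b)
{
    int64_t r = (int64_t)a + (int64_t)b;
    if (r > INT32_MAX) r = INT32_MAX;
    if (r < INT32_MIN) r = INT32_MIN;
    return (int32_t)r;
}

/* Inlined: the same work is emitted straight into the caller, so a loop
   doing this thousands of times per frame skips the call overhead. */
static inline int32_t sat_add_inline(int32_t a, int32_t b)
{
    int64_t r = (int64_t)a + (int64_t)b;
    if (r > INT32_MAX) r = INT32_MAX;
    if (r < INT32_MIN) r = INT32_MIN;
    return (int32_t)r;
}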
 
IMO the PSX is like the Amiga, while the Saturn is like the ST. In the next generation the PS2 was like the Saturn, the Dreamcast and Xbox were like PCs, and the GameCube was the nicest hardware. The 360 and PS3 were pretty similar due to Microsoft buying the PS3 CPU from IBM (read the book http://www.amazon.co.uk/Race-New-Game-Machine-The/dp/0806531010). Sony kept their tradition of making more and more complex hardware that required low-level optimisation for it to work properly, which is what finished Ken Kutaragi's career. They've both gone back to PC hardware now, with RAM type being the main difference, which introduces interesting issues for cross-platform games.
« Last Edit: July 17, 2014, 10:35:12 PM by psxphill »
 

Offline Thorham

  • Hero Member
  • Join Date: Oct 2009
  • Posts: 1149
Re: newb questions, hit the hardware or not?
« Reply #92 on: July 17, 2014, 08:50:06 PM »
Quote from: psxphill;769235
How does "easier to mess up" not mean "you end up with bug riddled code"?
Just because it's easier to make mistakes doesn't mean you can't properly debug your code. Writing good software in assembly language just takes longer. Also, the bug riddled thing makes it sound like you can't write good software in assembly language, which is obviously nonsense.
 

Offline biggun

  • Sr. Member
  • Join Date: Apr 2006
  • Posts: 397
    • http://www.greyhound-data.com/gunnar/
Re: newb questions, hit the hardware or not?
« Reply #93 on: July 17, 2014, 09:01:40 PM »
Quote from: Thorham;769236
Just because it's easier to make mistakes doesn't mean you can't properly debug your code. Writing good software in assembly language just takes longer. Also, the bug riddled thing makes it sound like you can't write good software in assembly language, which is obviously nonsense.


Yea - I know what you mean,
God blessed me with the gift that I can write the most complex algorithms and they are always bugfree.
I never need to debug. Whether I write in C or ASM or right away in hexcode. My code is always bug free.
;-)

So if you are like me then coding everything right away in ASM is fine.
But I was told that some people find coding in C easier.

Offline psxphill

Re: newb questions, hit the hardware or not?
« Reply #94 on: July 17, 2014, 11:34:36 PM »
Quote from: Thorham;769236
Just because it's easier to make mistakes doesn't mean you can't properly debug your code.

In a lot of cases, if you can't see the bug because it doesn't jump out at you from the screen and it only fails in specific cases, then it will get shipped. That is why so much money has been spent on better compilers and code analysis tools.
 
Quote from: Thorham;769236
Also, the bug riddled thing makes it sound like you can't write good software in assembly language, which is obviously nonsense.

It's not impossible, but neither is winning the jackpot on the lottery; it's just very unlikely. If you're on a death march (http://en.wikipedia.org/wiki/Death_march_(project_management)) then you will release software as soon as you can because you're sick of it. Writing it in a high-level language will definitely increase the chance of releasing it without major bugs.
 
It's also more likely to be good if the source code is from an existing project that has already had many hours of testing, like layers V45.
 
Quote from: biggun;769237
Yea - I know what you mean,
God blessed me with the gift that I can write the most complex algorithms and they are always bugfree.
I never need to debug. Whether I write in C or ASM or right away in hexcode. My code is always bug free.
;-)
 
So if you are like me then coding everything right away in ASM is fine.
But I was told that some people find coding in C easier.

You either have a different definition of complex than I do, or that is sarcasm, or both.
« Last Edit: July 17, 2014, 11:42:08 PM by psxphill »
 

Offline Sean Cunningham

  • Jr. Member
  • Join Date: Apr 2014
  • Posts: 95
    • http://www.imdb.com/name/nm0192445
Re: newb questions, hit the hardware or not?
« Reply #95 on: July 18, 2014, 01:18:23 AM »
Sorry, but the Saturn games were not "poor". The AM divisions' games were outstanding and offered arcade feel, something virtually no PSX game ever did across its entire lifetime. The closest to that feel that I ever got was from Psygnosis' Wipe-out, but it still didn't have the refresh. The Tekken series was okay but still didn't feel "arcade" and felt laggy compared to VF2, though the Tekken series was light years better than Toshinden, ugh.

None of the PSX 3D fighters had the same responsiveness or arcade feel as the VF series, Fighting Vipers, Last Bronx, etc. and the 2D fighters that were available for both platforms played better on the Saturn.  One of the few 3D fighters available for both, Dead or Alive, was better on the Saturn (I had it for both, to counter your Tomb Raider example).  The PSX had some clever games but it was a major disappointment and rarely got pulled out at my house, and I had both systems the first day they went on sale.  

I played Ridge Racer some, along with Midnight Club and Rage Racer, but they didn't have the arcade feel of Sega Rally Championship. The top PSX track-and-field game was disappointing after playing Sega's high-resolution 60fps game.

Sorry, nope.  3rd party offerings on the Saturn were generally not too good, I'll give you that, unless they were 2D.  None of them invested in coding the way the AM divisions at SEGA did.  But playing PSX games it was like Sony had never been to an arcade before.  NAMCO was likely the most successful but they still seemed like they couldn't quite get there.
« Last Edit: July 18, 2014, 01:30:41 AM by Sean Cunningham »
 

Offline LiveForIt

Re: newb questions, hit the hardware or not?
« Reply #96 on: July 18, 2014, 03:12:58 AM »
Quote from: biggun;769237
Yea - I know what you mean,
God blessed me with the gift that I can write the most complex algorithms and they are always bug free.
I never need to debug. Whether I write in C or ASM or right away in hex code. My code is always bug free.
;-)

Few people remember the hex values. Do you have an exceptional memory, or did you spend a lot of time looking at hex code?

Quote
But I was told that some people find coding in C easier.

It's easier to debug something if you have debug symbols; otherwise you need to remember the assembler code, and some people have problems with that if they have not looked at the code in a while. That's why debug symbols and stack traces are a big help, even more so if you're trying to fix something someone else wrote.

Quote
So if you are like me then coding everything right away in ASM is fine.

I think that comes with experience. If he knows what he wants to do, and knows how to do it, then that's fine.

But sometimes you don't know what the best approach is and you need to try different methods to find the best one. Unless you already know the best machine code to use, it may not be the best idea to spend too much time optimizing a bad idea that you're going to throw away later for a different approach.

I have seen a few examples of people spending a lot of time writing assembler code and ending up doing everything on the CPU, instead of using existing routines or OS functions that take advantage of DMA and hardware acceleration.

It's also silly not to use an existing routine that someone has spent years perfecting, only to end up writing your own routine that turns out to be slower. So it's a good idea to do some benchmarking.

And if you did write a better routine, why not replace or optimize the old routine instead of bloating the code with duplication?

And when it comes to bug-free code, I have seen my share of programs that were so-called bug free but wrote outside of their allocated memory blocks.

It's a good idea to run Enforcer/MungWall: a program can run without crashing yet still corrupt memory for other applications or the OS, if the blocks next to the block that was overwritten were reserved by another program. This might not be the case while you're writing the program, so you might not notice it.

For example, allocating 256 bytes of memory and then counting to 256 instead of 255; that's a mistake that is so easy to make.
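In C the classic form of that mistake looks like this (a minimal illustration, not anyone's actual code):

Code:
#include <stdlib.h>

void clear_buffer(void)
{
    unsigned char *buf = malloc(256);   /* valid indexes are 0..255 */
    if (!buf)
        return;

    /* BUG: "<= 256" runs 257 times; the write to buf[256] lands one byte
       past the end of the allocation, exactly the kind of overrun MungWall catches. */
    for (int i = 0; i <= 256; i++)
        buf[i] = 0;

    /* Correct: 256 iterations, last index is 255. */
    for (int i = 0; i < 256; i++)
        buf[i] = 0;

    free(buf);
}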
« Last Edit: July 18, 2014, 05:14:54 AM by LiveForIt »
 

Offline itix

  • Hero Member
  • Join Date: Oct 2002
  • Posts: 2380
Re: newb questions, hit the hardware or not?
« Reply #97 on: July 18, 2014, 04:21:39 AM »
Quote from: Thorham;769236
Just because it's easier to make mistakes doesn't mean you can't properly debug your code. Writing good software in assembly language just takes longer. Also, the bug riddled thing makes it sound like you can't write good software in assembly language, which is obviously nonsense.


There is a limit to how large a code base you can manage yourself. With assembly language you hit this limit sooner than with C, and in C you probably hit it sooner than with C++. And so on. Very likely a project written in assembly language never grows to "full scale", because you get tired of maintaining a huge code base.

Asm makes it easier to create bugs because you must remember more small details. That was not a problem for me when I was writing 68k asm, but on the Amiga, for example, you must remember to assign parameters to the right registers. Using the stack for temporary variables in asm is also more difficult than in C: you must count how many bytes you need from the stack and then calculate the correct offset to access each variable. With a good C compiler, register usage is possibly more efficient, because at least in theory it can compute optimal register usage for a function. In practice it isn't so, it seems.
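A small C sketch of that point (the 68k in the comment is only illustrative): the compiler does the frame and offset bookkeeping that you would otherwise have to track by hand.

Code:
/* In C the compiler allocates the stack frame, picks the offsets and
   decides which values live in registers: */
long sum_scaled(const long *v, long n, long scale)
{
    long total = 0;              /* register or stack slot: compiler's choice */
    for (long i = 0; i < n; i++)
        total += v[i] * scale;
    return total;
}

/* Hand-written 68k needs the same bookkeeping done manually, roughly:
 *
 *     link   a6,#-8        ; reserve 8 bytes for locals
 *     move.l 8(a6),a0      ; fetch the first stack argument
 *     ...
 *     unlk   a6
 *
 * and whenever a local is added or removed, every offset has to be
 * rechecked by hand. */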

But coding in 68k asm can be fun. I did that several years in the past.
My Amigas: A500, Mac Mini and PowerBook
 

Offline biggun

  • Sr. Member
  • Join Date: Apr 2006
  • Posts: 397
    • http://www.greyhound-data.com/gunnar/
Re: newb questions, hit the hardware or not?
« Reply #98 on: July 18, 2014, 07:11:46 AM »
Quote from: psxphill;769258
You either have a different definition of complex than I do, or that is sarcasm, or both.



Haha lol.
I thought a little bit of fun doesn't hurt, right :-D
Better than people getting at each other's throats here.


But it's true that I do sometimes write a handful of instructions directly in hex code.
As you know, I developed the instruction decoders for three full 68K CPUs (68050/Apollo/Phoenix),
so I have quite good practice in knowing how every 68K instruction is encoded....

guest11527

  • Guest
Re: newb questions, hit the hardware or not?
« Reply #99 on: July 18, 2014, 07:29:27 AM »
I recommend:

http://xkcd.com/378/

http://www.pbm.com/~lindahl/real.programmers.html

Now, to be serious: anyone recommending assembler for a full-scale project probably either has never done it, or has a different definition of "full scale" than I do. "layers" is a small project. (Yes, really.)

ViNCEd is fully assembler, but still only medium size. Just to give you an idea, there are four years of work in this code. The same in C would have taken a quarter of the time. Besides debugging, which was one problem, the major problem was extending and enhancing the project. In a higher-level language, you rewrite a couple of functions or classes, change the interfaces here and there, and the compiler warns you about the places where it no longer fits, most of the time at least. In assembler, I had to go through the complete code base over and over again. The version you have is version 3, which is more or less "rewritten for the third time", because that's more or less the only viable option you have with assembler. Well, not exactly rewritten, but for each new revision I went through the complete code, every single function, line by line. Each version took about three months to complete, and more than a year to debug completely to my satisfaction.

As a hobbyist, you may find the time to do that for a mid-size project. For a professional who needs to ship software at some point, and who needs to attack somewhat larger-scale projects than this, such a development pace is unacceptable. The ratio of work to generated code is quite bad.

I also learned during development that my assembler code looked more and more like compiler output. To keep the project manageable, you had better pick a strict register allocation policy (so you know which registers are scratch registers and which are not) and stick to single entry, single exit and a single return value. That keeps the code manageable. Code that does not becomes unmanageable quite early on. ARexx is such an example: completely in assembler, multiple return values, no clear register allocation policy, not maintainable anymore, a big pile of mess.
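Expressed in C terms, the discipline described above looks something like this (a minimal sketch, not code from ViNCEd):

Code:
/* One entry, one exit, one return value: every path funnels through the
   single return at the bottom, so the result and any cleanup live in one place. */
int parse_header(const unsigned char *buf, long len, long *out_size)
{
    int ok = 0;                                  /* the single result */

    if (buf != NULL && len >= 2) {
        long size = ((long)buf[0] << 8) | buf[1];
        if (size <= len) {
            *out_size = size;
            ok = 1;
        }
    }

    return ok;                                   /* the only exit point */
}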

Then I learned to use the compiler for larger projects and to rely on assembler only where needed. VideoEasel is such a project. That's a bit larger scale (still not big, but larger): C everywhere, except where assembly is really required. It also took three attempts to get the project done. First some prototypes; then I started in assembler, but learned that it would be unmanageable, and version 2 was never completed. Then I started again, version 3, in C. That version was completed.

Thus, I really learned the hard way *not* to use assembler. Anyone who claims that assembler is the way to go has not yet tried to write a full-fledged, full-scale application with it. Been there, done that. Live and learn.

Greetings, Thomas
 

Offline Bif

  • Full Member
  • Join Date: Aug 2009
  • Posts: 124
Re: newb questions, hit the hardware or not?
« Reply #100 on: July 18, 2014, 09:19:22 AM »
Quote from: Sean Cunningham;769225
Your name made me recall the first generation of next-gen consoles and how they relate directly to this discussion.  The developers on the original PSX titles were coding games in C and couldn't achieve anything but porkish performance and absolutely nothing for the first couple generations could achieve anything approaching arcade quality responsiveness and framerates.

The PlayStation series is an interesting study on this topic. I think each machine faced different problems and required something quite different to achieve high performance.

For PSX, indeed I'm really not aware of much ASM being used on projects. I don't think I ever used any in my area. I think this was for two reasons: 1) all the graphics and sound heavy lifting needed was done via dedicated hardware, and the main CPU was really too slow to waste precious cycles doing heavy lifting, and 2) there wasn't anything terribly special about the CPU that would let ASM produce vastly better performance than C (no vector unit, etc.). I think the interesting irony here is that of all the PlayStations, the PSX hardware setup (mediocre CPU with custom hardware to do the heavy lifting) is closest to the Amiga. In this case it's the old slow platform that required no ASM to squeeze performance out of it, the opposite of what you might expect.

Now for PSX, I believe one of the things that really dragged down early game performance was the piss poor Sony APIs we were forced to use. Not only did they not always make a lot of sense, their performance was atrocious in some cases (for no great reasons, just brain dead code / API design), with no legal way around it. In my game area, using their stuff robbed the R3000 of 10% of its total cycles across the whole game. I'm sure this was pretty much true for almost any game that ever shipped. I got frustrated and bypassed the problem area, 10% game cycles back. I did fear the trouble I might get in - they eventually found out what I was doing through performance analysis of our games, but just gave me the nudge nudge wink wink as having games perform that much better is not going to look bad for their brand. I'm sure other gameplay areas ran into similar issues and worked out improvements over time.

For PS2 I spent a crapload of time writing ASM. Gobs of heavy-lifting code written for both the main vector unit and the R3000. Luckily it supported inline ASM, so you only had to code the critical part of each function in ASM; it's really not bloody exciting, fun or useful coding function call/entry code in ASM. At that point ASM was the only way to use the vector unit to full advantage, and it can provide a huge boost in performance. In the PS2 the R3000 also sat pretty much unused, so I used the crap out of it for my stuff, and I think coding in ASM did help in many cases. When a loop to get something done is just several cycles, it can really help to knock one or two cycles off. The R3000 was interesting in its instruction pairings, and I think the compiler wasn't daring enough to get as aggressive as it could. I think I also got a lot of performance out of trial and error with straight C code though. With these older compilers it could be a lot of trial and error in how a loop is constructed: pointer increment vs. index increment, the magic number of times to unroll a loop, etc.
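The inline-ASM pattern being described is roughly this (GCC-style extended asm, with a trivial MIPS integer add standing in for real VU or R3000 loop code):

Code:
#include <stdint.h>

/* Only the critical instruction is hand-written; the function entry/exit
   and the loads and stores around it stay in C for the compiler to handle. */
static inline uint32_t add_asm(uint32_t a, uint32_t b)
{
    uint32_t r;
    __asm__("addu %0, %1, %2" : "=r"(r) : "r"(a), "r"(b));
    return r;
}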

The PS3 requires even more gobs of ASM to make it shine as all the power is in the SPUs, and there are lesser amounts of other hardware you can offload work to. You need ASM to take advantage of the vector processing in the SPUs. Actually, that is not fully true - unless you are insane, you use "intrinsics" instead to get at vector or other instructions that a compiler cannot easily use. Intrinsics are essentially ASM instructions, but they do not take registers as arguments, they take C variables. The compiler then does the work of register allocation. It's a beautiful compromise as register allocation/tracking is always what drives me totally nuts about ASM programming when dealing with a more complex algorithm, and a good compiler is going to do a better job of this than you unless you REALLY want to spend a lot of time thinking about how to beat it. I did have to work with a code base that was actually programmed in a large amount of real SPU ASM, probably out of stubbornness as I couldn't see a performance advantage to it - I really wanted to bitchslap the programmer as it is brutally hard to try to understand and debug that amount of someone else's ASM.
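The same idea shown with x86 SSE intrinsics, purely as an analogue of the SPU set (the concept is identical: the intrinsic takes C variables rather than named registers, and the compiler allocates the registers):

Code:
#include <xmmintrin.h>   /* SSE intrinsics, standing in for the SPU ones */

/* Multiply four floats by a scalar.  No registers are named anywhere; the
   compiler decides which vector registers to use and how to schedule them. */
void scale4(float *dst, const float *src, float s)
{
    __m128 v = _mm_loadu_ps(src);             /* load 4 floats        */
    __m128 k = _mm_set1_ps(s);                /* broadcast the scalar */
    _mm_storeu_ps(dst, _mm_mul_ps(v, k));     /* multiply, store back */
}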

Now I've not touched a bit of Amiga code in 25 years, but if I had to get something going, I think I'd be inclined to code in C/C++ as much as possible, and only ASM-optimize the heavy-lifting tight loops where the compiler is sucking. I'd try to use the libraries provided, but if they became a problem, I'd bypass them. I'm just saying I'm not sure there is one 100% right or wrong way to do anything, it will depend, though I would say I'd certainly avoid 100% ASM coding just for the sake of it; I'm too old for that crap now.
« Last Edit: July 18, 2014, 09:24:34 AM by Bif »
 

Offline psxphill

Re: newb questions, hit the hardware or not?
« Reply #101 on: July 18, 2014, 05:30:33 PM »
Quote from: Sean Cunningham;769263
None of them invested in coding the way the AM divisions at SEGA did.

That was more because they already had experience with 3d. Most developers had no experience of 3d at all at the beginning, as they had Megadrive/SNES/Amiga backgrounds. That is why Sony got involved with Namco.
 
I always found the Saturn ports of games to be very disappointing, and nothing like the frame rate or resolution of the arcade games they were supposed to be. I just watched a video of Fighting Vipers and it looks like quite a low frame rate.
 
Quote from: Bif;769279
The R3000 was interesting in its instruction pairings and I think the compiler wasn't daring enough to get as aggressive as it could.

FWIW It's not an R3000, although Sony went to great lengths to make you think it was.
 
Sony licensed the R33300 from LSI and modified it. You get the HDL and you can change what you want, like adding the GTE and making it so the data cache could only run in scratchpad mode (the R33300 can be switched at run time between a traditional cache and a scratchpad). You then go back to LSI and they turn it into a sea of gates. The MDEC, DMA, IRQ and RAM interface were also included here; if you look at a decap of the chip it's pretty much just one big blob of algorithmically generated gates, while gates designed by humans tend to have well-defined areas for each piece of functionality. If we had the HDL, it could be converted to run on an FPGA.
 
There aren't instruction pairings as such; it's pipelined so that each instruction should finish in a cycle, and there is no register stalling (apart from the GTE and mult/div). So if you read from a register that hasn't been written to yet by the previous instruction, you get the old contents, unless an interrupt has occurred. There is a FIFO write cache so writes don't always stall (this is a standard R33300 feature which can be turned on or off at runtime; they didn't bother crippling that), and it can throw you off if you don't know about it.
 
Quote from: Bif;769279
With these older compilers it could be a lot of trial in error in how a loop is constructed, pointer increment vs. index increment, the magic amount of times to unroll a loop, etc.

The instruction cache has only one set, so it's very easy to churn the instruction cache when you call a function. If the entire function plus the functions it calls cannot fit in the cache, then just moving them around in memory can make a huge difference. But that can happen whether you write your application in C or assembler; the key is to have your code written in such a way that you can easily refactor it, and that isn't assembler.
 
Quote from: Bif;769279
Now for PSX, I believe one of the things that really dragged down early game performance was the piss poor Sony APIs we were forced to use. Not only did they not always make a lot of sense, their performance was atrocious in some cases (for no great reasons, just brain dead code / API design), with no legal way around it.

There is some interesting code in their libraries; it was in part caused by having to work around bugs in the hardware. Some of the later APIs were better, some of them were worse. The problem was that Sony only wanted you to use their libraries, because then they only needed to make sure that the next hardware would work with those libraries. They should have spent more effort on them to start with, because they were reluctant to improve them later on. Even the BIOS has some pretty poor code in it, which they didn't fix because they didn't want to hurt compatibility. It was definitely a lesson for them.
 
Quote from: biggun;769276
Haha lol.
But its true that I write sometimes a handfull instructions in Hexcode directly.

I do too, but too infrequently and for too many different CPUs to remember the opcodes. I generally look them up and poke them into something with a disassembler, as I don't usually have a cross assembler.
 
Quote from: Bif;769279
though I would say I'd certainly avoid 100% ASM coding just for the sake of it, I'm too old for that crap now.

There is a lot more investment in better compilers these days. If anyone likes staring at ASM trying to figure out ways of making it faster and wants better Amiga software then writing a new back end for gcc or clang would probably be the best bet.
« Last Edit: July 18, 2014, 06:12:21 PM by psxphill »
 

Offline Bif

  • Full Member
  • Join Date: Aug 2009
  • Posts: 124
Re: newb questions, hit the hardware or not?
« Reply #102 on: July 19, 2014, 09:11:55 AM »
Quote from: psxphill;769317
There aren't instruction pairings as such, it's pipelined so that each instruction should finish in a cycle and there is no register stalling (apart from GTE and mult/div).


Yeah, you are right. I wasn't thinking of superscalar pairing; my memory was bringing back the weirdness with the branch delay slot, where the instruction after the branch is always executed. If you didn't throw an instruction in after the end of a loop you wasted cycles. I can hardly remember any of this stuff; your memory and knowledge are really quite amazing, you live up to your moniker. I'm only now remembering a bit more, where I recall designing every loop to do at least 6 loads before anything else. Or maybe that was the R5900; my memory is not that reliable.
 
Quote from: psxphill;769317
There is a lot more investment in better compilers these days. If anyone likes staring at ASM trying to figure out ways of making it faster and wants better Amiga software then writing a new back end for gcc or clang would probably be the best bet.


Yeah, I agree; I was going to say the same thing but got too tired typing all that out. I still think there is room to use ASM to leverage some things that compilers can probably never be good at. E.g. in the early days, on integer-only machines, you could do tricks with add-with-carry type instructions to shave an instruction or two off a tight loop. That's the kind of stuff I'd be looking at if I went down to ASM.
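For example (plain C, just to show what the carry flag gives you for free): a 64-bit add on a 32-bit-only machine needs the carry out of the low word, which is exactly what an ADDX/ADC-style instruction provides in one step.

Code:
#include <stdint.h>

/* 64-bit addition built from 32-bit pieces. */
void add64(uint32_t a_hi, uint32_t a_lo,
           uint32_t b_hi, uint32_t b_lo,
           uint32_t *r_hi, uint32_t *r_lo)
{
    uint32_t lo    = a_lo + b_lo;
    uint32_t carry = (lo < a_lo);        /* carry out of the low 32 bits */

    *r_lo = lo;
    *r_hi = a_hi + b_hi + carry;         /* ADDX/ADC does this in one go */
}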
 

Offline psxphill

Re: newb questions, hit the hardware or not?
« Reply #103 on: July 19, 2014, 10:18:44 AM »
Quote from: Bif;769355
If you didn't throw an instruction outside the end of a loop you wasted cycles.

Yeah, that is where it got its name: "Microprocessor without Interlocked Pipeline Stages". Even a branch modifying the program counter is performed without interlocks, so the following instruction is still executed.
 
Branch and load delay slots made the hardware much simpler, moving the complexity to the compiler. The CPU in your PC is doing peephole optimisation constantly, which could be done at compile time. The disadvantage is that you're baking a lot of CPU architecture into the binary, which is why virtual machines are more interesting. The ART runtime on Android is moving away from JIT and going from byte code to optimised code at install time.
 
Quote from: Bif;769355
I'm only now remembering a bit more where I recall designing every loop to do at least 6 loads before anything else. Or maybe that was the R5900, my memory is not that reliable.

I can't think why doing 6 loads would make a difference without a data cache, so it probably is the R5900. The cache prefetch on the R5900 is interesting, as it triggers the data to be fetched from RAM into the cache but doesn't stall the application waiting for the results. So you can request that all your data is fetched, then do some calculation on data previously loaded into the cache, before finally loading the newly cached data into registers. This is the kind of thing that is really hard to get optimal even when coding in assembler, because you might end up flushing data out of the cache that you will need again soon.
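A rough C analogue of that pattern, using GCC's __builtin_prefetch in place of the R5900's pref instruction (only a sketch; n is assumed to be a multiple of 4 to keep it short):

Code:
/* Ask for data a few blocks ahead, then do useful work on data that is
   already in the cache while the fetch is in flight. */
void sum_blocks(const float *data, long n, float *out)
{
    float total = 0.0f;
    for (long i = 0; i < n; i += 4) {
        __builtin_prefetch(&data[i + 16]);   /* non-blocking hint to the cache */
        total += data[i] + data[i + 1] + data[i + 2] + data[i + 3];
    }
    *out = total;
}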
 
The PS1 was definitely the best design that Sony did. The PS2 and PS3 were too complex and it's hard to think of the PS4 as anything other than a fixed specification PC.
 
I believe that if Commodore had ignored AAA and started AGA earlier, but included some chunky 8- and 16-bit modes and some simple texture mapping in the blitter, and released it in 1990, then they would have stood a chance against the 3d consoles, Doom on the PC, etc. AGA was designed in a year; AAA was started in 1988, so giving them two years should have been enough.
« Last Edit: July 19, 2014, 10:42:36 AM by psxphill »
 

Offline ppcamiga1

Re: newb questions, hit the hardware or not?
« Reply #104 on: July 19, 2014, 10:36:20 AM »
More than twenty years ago, when I bought the Amiga 1200, the most annoying thing was that games did not work with a hard drive, because some idiots "optimized" those games to read their data from floppy disk without the operating system. On an Amiga with only a floppy drive those games were maybe about 1% faster, but on an Amiga with a hard drive they were useless, because they did not use the hard disk. The same games work better on the PC because there they do use the hard disk.

The second most annoying thing on the Amiga 1200 was that software did not work with a VGA monitor. Because, again, some idiots "optimized" the software, users lost the ability to connect a low-cost VGA monitor to the Amiga. Those idiots may have gained maybe 1.5% in performance, maybe not. A VGA cable to connect to the Amiga 1200 costs maybe 4 Euro or less. Users should not be forced to purchase a scandoubler for 150 Euro or more because some developers are too stupid to give up a useless "optimization". AGA has to be programmed one way to use an ordinary monitor and another way to use a VGA monitor. It is sad, but this is what Commodore did many years ago, and developers just have to accept it.

Access to the hard disk on the classic Amiga should be made only through the system; the original IDE interface is too slow, and software for the classic Amiga should work with FastATA. Access to the graphics on the classic Amiga should be made only through the system, so that users can connect a low-cost VGA monitor to the Amiga. Access to the keyboard and mouse on the classic Amiga should be made only through the system, so that users can use a USB mouse and keyboard with just a USB interface and without additional hardware.