While I am coding an effect, my only concern is enjoying myself doing something
difficult and proving to myself that I can solve the problems. I don't think about
people's reaction when they see my demos.
Of course, I agree with you on this: I code for fun. If it wasn't fun I wouldn't do it, but what people think about a production is important (although not as important as the fun I have coding). For example, at the last Euskal Party I might have placed second in the intro compo, because only one intro was going to be presented. But all I had was a rotozoom and some plasma with no music, and I thought that although I had fun doing the code, it didn't have the quality to be presented. The effects are quite optimized (even though they are in C), but showing two effects with no music sounds pretty crap to me. My AGA support was crap (I used WritePixelArray) because I couldn't mix asm with my code back then. Now I'm adding an asm c2p (but it's done by Azure). I hope someday I'll show something of quality, but for now I'm still learning and I can't.
That "more or less" is not satisfactory to me. The "simple copying" of data is
often done using the gfx card blitter, which is not accessible to the coder.
The simple copying of data in 3D demos is done by the CPU, just as people do when they copy their c2p buffer from fast RAM to chip memory.
When you copy data from fast RAM (or write it directly to the gfx RAM), you do it at maximum speed if you have coded it correctly. If I write a 32-bit word to gfx RAM, it is going to run at the maximum speed regardless of whether the card sits on Zorro III, PCI or AGP.
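To illustrate the point about 32-bit writes: a trivial sketch of the kind of copy loop meant here, in plain C, with an ordinary buffer standing in for chip or gfx RAM. The function name is mine; a real routine would use the screen address handed out by the OS and would likely be unrolled in asm.

```c
#include <stdint.h>
#include <stddef.h>

/* Copy a buffer 32 bits at a time, one full bus write per word,
 * instead of byte by byte. Assumes 'bytes' is a multiple of 4
 * and both pointers are 32-bit aligned. */
static void copy32(uint32_t *dst, const uint32_t *src, size_t bytes)
{
    size_t words = bytes / 4;
    while (words--)
        *dst++ = *src++;
}
```

Whether the destination is chip RAM over the custom bus or gfx memory over Zorro III/PCI/AGP, the loop itself is the same; only the bus decides the throughput.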
OK, I think there are some special transfer modes for AGP, but they are only used by the card to fetch textures from fast RAM when its own memory is full. That doesn't help if I'm writing pixels directly.
With AGA I can decide which resolution and which refresh rate I
want to use. A gfx board allows many different resolutions and rates, hence I
cannot reach their limit.
With a gfx card you can force it to run at one resolution, and you may force it to use only one refresh rate, but it's better to read the frequency of the screen and use that value as the maximum fps. That way you will reach the limit.
You can use the blitter functions of RTG if you want, but most people don't use them, because most demos are 3D and don't need special custom chip magic ;-D
I have a lot of respect for coders (like you) who still get the maximum out of the chipset. I understand you when you say that you aren't going to support RTG because you use AGA to the max. That's very reasonable, because making it work with RTG would be a lot of work.
But I think you should agree with some of us when we say that with 3D demos it's very easy to add RTG support. When you do 3D stuff and write a function that uses your CPU at 100%, you will get little speedup from RTG: you have already reached the maximum speed, because the bottleneck is the CPU.
And if the Zorro II bus has less bandwidth, you will still have reached the maximum speed, because it will not be possible to get any more out of it.
A1s, Peggies and PCs with emulators have PCI and AGP cards. I guess in
some months we will see programs (maybe demos) requiring DE FACTO AGP cards.
This is good for Amiga in general, but not for the demo scene.
That will not change my point in any way, because a 68k version of that demo will still use the bus to the maximum. You don't write different code for cards with different buses; you code them much the way PC users did under MS-DOS: you get the address of the screen and you have almost full control. You have the added bonus of being able to use RTG functions the way they use DirectX, so you get the good things of both worlds.
Moreover, I heard (but maybe I am wrong) that there are 2 different RTG standards,
the Picasso96 and the CyberGfx. So one should even support 2 different
standards?
No, you don't have to add support for both if you don't want to. Picasso96 is CyberGraphX compatible, so you can get the address of the screen through CGX alone if you want. I have support for both, but I don't see any speedup, so supporting CGX will be more than enough.
So with AGA I can do BOTH CPU-only chunky effects AND chipset-supported funny
tricks. And you can join these techniques together.
OK, as I have said, I agree with you about not supporting RTG if you use special AGA features like the copper etc., but most demos are 3D-only these days.
(BTW, how many of these "new generation coders" are able to properly
initialize the audio state machine to begin audio output?)
I don't know, but "new generation demos" work a lot better on every machine than "old generation demos". I don't have any problem with audio initialization in 99% of modern demos. That cannot be said of old demos.
I have an A4000, and some people didn't support it in the old times, so it's funny to read that. The "new generation coders" support everything. That's fine by me.
With RTG, instead, I can only do one sort of effect. I think the
real progress is not the contraposition of old and new, but rather a synthesis of
old (in this case coding techniques) and new.
You can do every kind of effect, but you will have to use the CPU and the gfx card blitter instead of AGA. You lose the copper, that's true. :-/
Sometimes you have to use different techniques with AGA and RTG; RTG effects are sometimes more difficult (for example, with a well-chosen palette it's easier to simulate transparency with AGA using bitplanes, but you will have to work hard to make it move smoothly with RTG). It's fun because you are also limited. :-)
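For comparison, the chunky/RTG way of doing that 50% transparency is a per-pixel average, which is exactly the per-pixel work AGA can avoid by spending an extra bitplane and a suitable palette on it. A minimal sketch (the helper name and 8-bit channel assumption are mine):

```c
#include <stdint.h>

/* 50% blend of two 8-bit channel values: the RTG/chunky cost is one
 * add and one shift per channel, per pixel, every frame. */
static uint8_t blend50(uint8_t a, uint8_t b)
{
    return (uint8_t)(((unsigned)a + (unsigned)b) / 2);
}
```

With AGA the same visual effect can fall out of the bitplane layout for free, which is why the two targets sometimes want genuinely different techniques.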
When I see a 3D-only demo I just think: hmm, they could have made it RTG compatible, and that would have made it run smoother. For example, Loonies did a great job with "The Castle", supporting both AGA and RTG.
Talk to coders who do stuff for both AGA and RTG and you will see that if your demo is 3D-only, the way the code looks is similar and you reach the maximum speed of the machine.
Anyway, keep on milking the power of AGA :-)
I love to see AGA prods that use AGA to the max.
Best Regards ;-)