Kronos wrote:
Since you already cut the stream into pieces with a "best used by" date, you shouldn't even have to worry about timing at all.
The question is just whether the maximum time a filter needs, on a specific computer in a specific load situation, is still within limits. That's something decided by the user and the coder of the filter. If a package is done faster, no problem....
Each frame is 1 ms in size, so it must get from the start to the end of the effects chain within 1 ms, or it gets dropped. I would probably add a check at every DSP to make sure the frame is still valid before wasting CPU time on it.
If the user specifies larger frames, each one has more time to pass through the effects chain... but it also takes longer to process at each DSP.
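Roughly what I have in mind for that check, as a minimal sketch: each frame carries its own "best used by" time (the names Frame, deadline, and stillValid are made up for illustration, not actual code):

Code:
#include <chrono>
#include <cstdint>
#include <vector>

using Clock = std::chrono::steady_clock;

struct Frame {
    std::vector<int16_t> samples;  // 16-bit PCM samples for this frame
    Clock::time_point deadline;    // the frame's "best used by" time
};

// Each DSP would call this before doing any work: if the frame has
// already missed its deadline, drop it instead of burning CPU on it.
bool stillValid(const Frame& f) {
    return Clock::now() < f.deadline;
}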
The signal starts life with a DSP object, which either mathematically creates the signal or reads a sample buffer, or maybe a bit of both ;-). It then passes through other DSP objects (which act upon the frame and then pass it on) and ends at the play buffer, from which the DACs play the sound.
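Continuing the sketch above, the chain itself could look something like this (the DSP/push/process interface is just my guess at how it might be wired up):

Code:
// Rough sketch of the chain, reusing Frame and stillValid() from above.
struct DSP {
    DSP* next = nullptr;                // next stage in the effects chain

    virtual ~DSP() = default;
    virtual void process(Frame& f) = 0; // act upon the frame

    void push(Frame& f) {
        if (!stillValid(f)) return;     // missed its deadline: drop it
        process(f);                     // apply this stage's effect...
        if (next) next->push(f);        // ...then pass it on down the chain
    }
};

struct PlayBuffer : DSP {
    void process(Frame& f) override {
        // hand the samples to the sound card's play buffer here
        (void)f;
    }
};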
The code for the DSP objects is not really time dependent, other than that it must execute as fast as possible and behave the same each time.
My idea for the 1 ms frame comes from the idea that each audio signal is 16-bit and sampled at 44100 Hz... so a full second of audio is about 86 KB, and a single 1 ms frame is only about 88 bytes. Even a big batch of frames easily fits into a modern L2 cache and probably some L1 caches (though my Athlon64 only has a 64 KB L1 data cache). Now if the processor concentrates on one audio stream at a time, it will spend pretty much all its time working from the cache.
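The arithmetic behind those figures, assuming mono 16-bit PCM (stereo doubles them):

Code:
// Frame and per-second sizes at 44100 Hz, 16-bit mono.
constexpr int sampleRate  = 44100;               // samples per second
constexpr int bytesPerSec = sampleRate * 2;      // 88200 B, about 86 KB
constexpr int bytesPerMs  = bytesPerSec / 1000;  // about 88 bytes per 1 ms frame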
Now I need to get the DSP interpreter to fit into the L1 cache, so that main memory access is minimal.
So the question is, how many free cycles you still have after applying the effect to every bit of sound in the stream, and whether that is enough for decoding.
That's a good question. Tests have shown that my old Athlon 600 MHz could pump a single 1 ms frame through about 12 hardcoded lowpass filter objects in 1 ms, though getting that data to the sound card took about 50 ms. Obviously a virtual DSP object would be much slower. But speed isn't an issue as I now have a much better CPU :-D
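For illustration, a hardcoded lowpass of that sort could be as small as a one-pole filter like the one below (this is not the benchmarked code; the coefficient formula and float state are my own choices):

Code:
#include <cmath>
#include <cstdint>

// A minimal one-pole lowpass: y[n] = y[n-1] + a * (x[n] - y[n-1])
struct OnePoleLowpass {
    float a;          // smoothing coefficient derived from the cutoff
    float z = 0.0f;   // filter state (previous output)

    OnePoleLowpass(float cutoffHz, float sampleRate)
        : a(1.0f - std::exp(-6.2831853f * cutoffHz / sampleRate)) {}

    // Filter one frame of 16-bit samples in place.
    void process(int16_t* samples, int count) {
        for (int i = 0; i < count; ++i) {
            z += a * (float(samples[i]) - z);
            samples[i] = int16_t(z);
        }
    }
};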
If not, you'll need pre-compiling (no decoding during run-time).
Hah, now I'm quite happy again doing the completely timing-issue-free stuff I'm doing :-D
:-D Nah, this is more fun.