Well, I don't care if the resulting dataset is larger than the original sound, since I'm not interested in this for the purpose of compression. I just want to try it out of curiosity, to see if it really does work :-) (and no, I can't just take your word for it ;-) )
I understand that the frequencies and amplitudes of a complex sound vary over time, and thus so will the amplitudes of the constituent pure tones. So how would you get around this? Split the sound into short chunks of samples, perform the Fourier analysis on each chunk, generate that chunk's pure tones for the duration of the chunk, then play back all of the resulting (different) chunks consecutively? Something like the sketch below, maybe.
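Here is a minimal sketch of what I mean, in Python with NumPy. The frame size of 1024 and the two-tone test signal are just placeholders I've made up, and I'm assuming the signal length is a multiple of the frame size. (Yes, np.fft.irfft would do the resynthesis in one call, but explicitly summing a cosine per frequency bin is the whole point of the exercise.)

    import numpy as np

    def analyse_frame(frame):
        # One chunk's Fourier analysis: amplitude and phase of each pure tone.
        n = len(frame)
        spectrum = np.fft.rfft(frame)
        amps = np.abs(spectrum) / n
        phases = np.angle(spectrum)
        # every bin except DC and Nyquist stands for a +/- frequency pair,
        # so its real cosine gets twice the weight
        amps[1:-1] *= 2.0
        return amps, phases

    def synthesise_frame(amps, phases, n):
        # Rebuild the chunk by literally summing one cosine per bin frequency.
        t = np.arange(n)
        frame = np.zeros(n)
        for k, (a, p) in enumerate(zip(amps, phases)):
            frame += a * np.cos(2 * np.pi * k * t / n + p)
        return frame

    def roundtrip(signal, frame_size=1024):
        # Analyse and resynthesise the signal one chunk at a time.
        out = np.zeros_like(signal)
        for start in range(0, len(signal) - frame_size + 1, frame_size):
            a, p = analyse_frame(signal[start:start + frame_size])
            out[start:start + frame_size] = synthesise_frame(a, p, frame_size)
        return out

    if __name__ == "__main__":
        # made-up test signal: two pure tones, 8192 samples at 8 kHz
        sr = 8000
        t = np.arange(8192) / sr
        test = 0.5 * np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 660 * t)
        rebuilt = roundtrip(test)
        print("max reconstruction error:", np.max(np.abs(test - rebuilt)))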
Again, I know this is pointless, since the goal is to end up with pretty much the same sound you started with; it is purely out of personal interest, not for any practical application, that I want to try this. Imagine how exciting it would be to take a sample of your own voice saying "hello", perform the necessary analyses to determine the parameters of its constituent pure tones, then generate all of those tones simultaneously and magically hear your own voice saying "hello"!
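The "hello" experiment would then just be a matter of feeding a real recording through the same round trip, reusing roundtrip() from the sketch above. The file names here are made up, and I'm assuming a mono 16-bit WAV read with scipy.io.wavfile:

    from scipy.io import wavfile
    import numpy as np

    sr, data = wavfile.read("hello.wav")          # assumed mono, 16-bit PCM
    signal = data.astype(np.float64) / 32768.0
    usable = (len(signal) // 1024) * 1024         # drop the ragged tail
    rebuilt = roundtrip(signal[:usable])
    wavfile.write("hello_rebuilt.wav", sr, (rebuilt * 32767).astype(np.int16))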
--
moto