bblackwood wrote on Fri, 11 June 2004 16:31 
chrisj wrote on Fri, 11 June 2004 15:10  Since libsamplerate ('Secret Rabbit Code', as used in Audacity) is _software_ async SRC using essentially a virtual analog intermediate stage and NOT just a very-high-rate intermediate stage, I still think you should check it out.

What does that mean? The best SRCs I've seen simply upsample to a very high fs before downsampling. I might have missed it, but what on earth is 'a virtual analog intermediate stage'?

Maybe Zoesch can help with this one if you're serious about wanting to know. There's a reason why libsamplerate gets qualitatively different results than high-fs converters: if you use a really low quality level, what happens is you start losing the sharpness of the brickwall filter rather than starting to get artifacts. I'll try to explain, though... I got it a bit wrong earlier: we're talking about 'periodic sinc interpolation', not windowed sinc interpolation. However, it works out to be a window anyway, or sort of like a wavelet.
http://www-ccrma.stanford.edu/~jos/resample/Theory_Ideal_Bandlimited_Interpolation.html
I don't understand a lot of that math either, but look at the pictures!
Essentially, in order to do this, you set up a sinc function that looks like a tiny sine wave with a lot of pre-ring and post-ring. It forms a sort of filter. For every output sample, you line up your original waveform's samples against this sinc waveform, take the sinc's value at the points where it coincides with the source samples, multiply, and add them all together. The composite result is the convolution of the original file with this funny sinc wave thing.
That works as a filter. Specifically, it works as a really good brickwall filter at the sample rate you're converting to. It also means that the more surrounding samples you use in the convolution, the better accuracy you can get. So to do a super-high-quality version you might be using all the samples from a tenth of a second around the immediate sample, so you'd be doing math on about four thousand samples for every output sample. It gets slow, but that's how it works.
And because you're convolving the input source data against a mathematical construct, the output accuracy isn't limited by an intermediate sample rate at all. There is no intermediate sample rate, just the mathematical shape of the desired waveform at whatever accuracy your computer will support. It becomes strictly a matter of how many adjacent samples you're willing to consider; in other words, CPU load.
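To make that concrete, here's a rough Python sketch of the idea (the function names and the 32-sample half-width are mine, not libsamplerate's):

```python
import math

def sinc(x):
    """Normalized sinc: sin(pi*x)/(pi*x), with sinc(0) = 1."""
    return 1.0 if x == 0.0 else math.sin(math.pi * x) / (math.pi * x)

def interpolate(samples, t, half_width=32):
    """Bandlimited value of `samples` at fractional sample time `t`.

    Centre the sinc kernel on t, take its value wherever it coincides
    with a source sample, multiply, and sum -- the convolution described
    above.  There is no intermediate sample rate anywhere: t can be any
    real number, and half_width (how many neighbouring samples you
    consider) is the accuracy-vs-CPU knob.
    """
    n0 = int(math.floor(t))
    total = 0.0
    for n in range(n0 - half_width + 1, n0 + half_width + 1):
        if 0 <= n < len(samples):
            total += samples[n] * sinc(t - n)
    return total
```

Because sinc is zero at every nonzero integer, asking for t exactly on an input sample hands back that sample unchanged; asking for t between samples reconstructs the bandlimited waveform there.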
This shows some pictures of how it works when your output sample's time position is nowhere near the input sample positions:
http://www-ccrma.stanford.edu/~jos/resample/Implementation.html
The mathematical formula for that wave is what moves along with the output sample position, and it sums the intersections of the input samples as they go across the wave. Also note that some of the samples, like the ones right near the point you're working on, are gonna be subtracted, not added! If a sample falls where the sinc is under the horizontal line representing zero, you'll be taking an attenuated version of that sample and subtracting it, not adding it. How attenuated? That depends on how far from the zero line the sinc sits at that sample's time position, and again, that's a mathematical calculation with NO intermediate sample rate to be concerned with. If you were using a high-resolution math library (like the ones that can give you 100 pages of the value of pi) you could reasonably expect to get billions of times the 'intermediate resolution' of the high-fs converters. In practice, you just get way more 'intermediate resolution'. It's like the difference between 16-bit fixed and 32-bit float: hard to specify exactly how much better the resolution is when you're using a floating-point variable mantissa, but in practice it's 'way better'. Less crunchy.
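You can see the subtraction directly by printing the sinc weights around one output position (the 0.3-sample offset here is just an arbitrary example of mine):

```python
import math

def sinc(x):
    return 1.0 if x == 0.0 else math.sin(math.pi * x) / (math.pi * x)

# Output position falls 0.3 of a sample past input sample 0.
# Weights that land in the kernel's negative lobes come out below
# zero: those samples are attenuated AND subtracted, by an amount
# that's pure math -- no intermediate sample rate involved.
t = 0.3
for n in range(-3, 4):
    print(f"sample {n:+d}: weight {sinc(t - n):+.4f}")
```

Running it, the samples at n = -1, -3, and +2 (among others) get negative weights, while the two nearest samples get the big positive ones.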
*chirp* *chirp*
Sorry. Zoesch? Bueller?
Anyway it's good, mmkay?