R/E/P Community


Author Topic: 24/96 to 16/44.1 - Best Method?  (Read 9154 times)

chrisj

  • Hero Member
  • Posts: 959
Re: 24/96 to 16/44.1 - Best Method?
« Reply #30 on: June 11, 2004, 05:33:44 PM »

bblackwood wrote on Fri, 11 June 2004 16:31

chrisj wrote on Fri, 11 June 2004 15:10

Since libsamplerate ('Secret Rabbit Code', as used in Audacity) is _software_ async SRC using essentially a virtual analog intermediate stage and NOT just a very-high-rate intermediate stage, I still think you should check it out.

What does that mean? The best SRCs I've seen do simply upsample to a very hi fs before downsampling. I might have missed it but what on earth is 'a virtual analog intermediate stage'?


Maybe Zoesch can help with this one if you're serious about wanting to know. There's a reason libsamplerate gets qualitatively different results from high-fs converters: if you pick a really low quality setting, what happens is that you start losing the sharpness of the brickwall filter rather than picking up artifacts. I'll try to explain, though I got it a bit wrong earlier: we're talking about 'periodic sinc interpolation', not windowed sinc interpolation. In practice it works out to be a window anyway, or sort of like a wavelet.

http://www-ccrma.stanford.edu/~jos/resample/Theory_Ideal_Bandlimited_Interpolation.html

I don't understand a lot of that math either, but look at the pictures! :)

Essentially, you set up a sinc function that looks like a tiny sine wave with a lot of pre-ring and post-ring; it forms a sort of filter kernel. For every output sample, you line the original waveform's samples up against this sinc curve, multiply each source sample by the value of the curve at that sample's position, and add them all together. The composite result is the convolution of the original file with this funny sinc wave thing.

That works as a filter. Specifically, it works as a really good brickwall filter at the sample rate you're converting to. It also means the more surrounding samples you use in the convolution, the better the accuracy. To do a super-high-quality version you might use all the samples from a tenth of a second around the current position, so you'd be doing math on roughly four thousand samples for every output sample. It gets slow, but that's how it works.
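To make that a bit more concrete, here's a minimal numpy sketch of the windowed-sinc idea described above. It's not libsamplerate's actual code; the function name, the Hann taper, and the half_width parameter are all illustrative choices.

```python
import numpy as np

def sinc_resample(x, ratio, half_width=64):
    """Toy bandlimited (windowed-sinc) resampler.

    ratio = output_rate / input_rate. For every output sample we centre a
    sinc kernel on the exact (usually fractional) input-time position and
    sum the weighted neighbouring input samples."""
    n_out = int(np.floor(len(x) * ratio))
    y = np.zeros(n_out)
    # When downsampling, the sinc is widened so its cutoff sits at the
    # *output* Nyquist; that same kernel is the brickwall anti-alias filter.
    cutoff = min(1.0, ratio)
    for m in range(n_out):
        t = m / ratio                            # input-time position of output sample m
        k0 = int(np.floor(t))
        k = np.arange(k0 - half_width + 1, k0 + half_width + 1)
        valid = (k >= 0) & (k < len(x))
        k, d = k[valid], k[valid] - t            # d = time offset of each input sample
        taper = 0.5 * (1.0 + np.cos(np.pi * d / half_width))   # Hann window -> finite kernel
        weights = cutoff * np.sinc(cutoff * d) * taper
        y[m] = np.dot(x[k], weights)             # the convolution; some weights are negative
    return y
```

More taps (a bigger half_width) means a sharper brickwall and a slower conversion, which is exactly the quality/CPU trade-off described above.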

And because you're convolving the input source data against a mathematical construct, the output accuracy isn't limited by an intermediate sample rate at all. There is no intermediate sample rate, just the mathematical shape of the desired waveform at whatever precision your computer supports. It becomes strictly a matter of how many adjacent samples you're willing to consider, in other words, CPU load.
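For instance, feeding a 96 kHz file through the hypothetical sketch above, the fractional ratio 44100/96000 goes straight into the kernel math; there's no upsampled intermediate stream anywhere:

```python
# 96 kHz -> 44.1 kHz with the sinc_resample() sketch from above.
in_rate, out_rate = 96000, 44100
t = np.arange(in_rate) / in_rate           # one second of test signal
x = np.sin(2 * np.pi * 1000 * t)           # 1 kHz sine sampled at 96 kHz
y = sinc_resample(x, out_rate / in_rate)   # ~44100 output samples, no intermediate fs
```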

This shows some pictures of how it works when your output sample time position is nowhere near the input sample position: http://www-ccrma.stanford.edu/~jos/resample/Implementation.html

The mathematical formula for that wave is what moves along with the output sample position, and it sums the contributions of the input samples wherever they fall across the wave. Also note that some of the samples, like some of the ones near the point you're working on, are gonna be subtracted, not added! If the sinc curve is below the horizontal line representing zero at a sample's time position, you take an attenuated version of that sample and subtract it rather than adding it. How attenuated? That depends on the value of the curve at that sample's time offset, and again, that's a pure mathematical calculation with NO intermediate sample rate to be concerned with.

If you were using a high-precision math library (like the ones that can give you 100 pages of the value of pi) you could reasonably expect billions of times the 'intermediate resolution' of the high-fs converters. In practice, you just get way more 'intermediate resolution'. It's like the difference between 16-bit fixed and 32-bit float: hard to specify exactly how much better the resolution is with a floating-point mantissa, but in practice it's way better. Less crunchy.
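As a tiny illustration of those negative lobes (again using numpy, not libsamplerate itself): the weight an input sample gets is just the sinc value at its time offset from the output position, and between roughly one and two samples away that value dips below zero, so the sample is attenuated and subtracted.

```python
# Kernel weight as a pure function of the time offset (in input samples).
offsets = np.array([0.0, 0.37, 1.37, 2.37])
print(np.sinc(offsets))
# -> [ 1.     0.789 -0.213  0.123]  (approx.)
# The negative value at 1.37 samples away is a sample that gets subtracted.
```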

*chirp* *chirp*

Sorry. Zoesch? Bueller?

Anyway, it's good, mmkay? :)

bblackwood

  • Hero Member
  • Posts: 7036
Re: 24/96 to 16/44.1 - Best Method?
« Reply #31 on: June 11, 2004, 05:39:59 PM »

Wow, that looks fascinating. Gonna have to read more about this...
Brad Blackwood
euphonic masters