ruffrecords wrote on Thu, 26 October 2006 10:37 |
danlavry wrote on Tue, 24 October 2006 22:48 |
Hi Ian,
In principle, any truncation should be dithered because, as I pointed out, truncating without dither can concentrate energy at specific frequencies. Those concentrations can have peaks far above the noise floor of a dithered signal. The average value and the RMS value of the undithered signal may be lower, but the peaks of the undithered signal may be higher than the dithered noise floor.
So in principle, any word length reduction (truncation) should be dithered.
|
Hi Dan,
I agree. I wonder what happens though in a typical DAW? Suppose you make a stereo sub mix of a drum kit, for example; presumably this sub mix has been dithered during its creation? Repeat this for other elements of the music then make a final stereo mix of the sub mixes. Presumably dither is applied again? I know the answer will be DAW specific but there must be some general principles to follow.
Ian
|
I am not sure what all the DAWs do. Ideally, the DAW would keep everything without truncation until the end of the process, when one MUST truncate to "fit" the data into some format (such as AES or SPDIF).
I do not know what word length is used for the "sub mix". Ideally one keeps it stored with wide words, such as the width offered by the mix bus. Dither is needed only when you reduce the word length (fewer bits).
Say you have a sub mix of some word length and you want to add another track or another sub mix to it, or do some EQ... The DAW should allow you to do such operations, and when the operations call for more bits (to the left and/or to the right), there is no reason to truncate, thus no dither is required.
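A toy sketch of that word-growth point (not any specific DAW's code; the numbers are illustrative): summing two full-scale 24-bit samples needs one more bit on the MSB side, and a fixed-point gain multiply adds bits on the LSB side. Nothing here forces a truncation until the final export.

```python
def bits_needed(value: int) -> int:
    """Bits needed to hold a non-negative sample plus a sign bit."""
    return value.bit_length() + 1

a = (1 << 23) - 1                # full-scale positive 24-bit sample
b = (1 << 23) - 1                # another track at full scale
mix = a + b                      # summing two tracks: now needs 25 bits
print(bits_needed(mix))          # 25

gain_q16 = int(1.5 * (1 << 16))  # ~+3.5 dB gain as Q16 fixed point
scaled = mix * gain_q16          # product carries 16 extra fractional LSBs
print(bits_needed(scaled))       # grown well past 24 bits
```

As long as the intermediate words stay this wide, nothing has been thrown away, so no dither is called for yet.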
From a dither standpoint, I would treat a sub mix the same way I treat a single track: it has some given number of bits. The more significant bits (hopefully 15-21 bits) carry music; the lower bits (22 to whatever) carry noise.
I have said it before a number of times, but here again:
There is the issue of bits for digital processing and DAW.
There is the issue of bits for conversion.
The DAW needs a lot of bits: adding tracks, amplifying, attenuating, EQ, reverb... all take "work space", so you need a lot of bits to do the processing. Once done, you reduce the number of bits.
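A minimal sketch of that final word-length reduction, assuming TPDF dither (a common choice; Dan does not name a specific dither here) scaled to the new LSB and added before rounding:

```python
import random

def truncate_with_tpdf_dither(sample: int, in_bits: int = 24,
                              out_bits: int = 16) -> int:
    """Reduce word length with TPDF dither: the sum of two uniform
    random values spans +/-1 LSB of the OUTPUT format, added before
    rounding away the discarded low bits."""
    shift = in_bits - out_bits                 # bits being discarded
    lsb = 1 << shift                           # one output LSB, in input units
    tpdf = (random.uniform(-0.5, 0.5) +
            random.uniform(-0.5, 0.5)) * lsb   # triangular PDF dither
    return int(round((sample + tpdf) / lsb))   # rounded 16-bit result

random.seed(1)                                 # repeatable for the example
out = truncate_with_tpdf_dither(0x123456)
print(out)                                     # within 1 LSB of 0x1234
```

The dither randomizes the rounding decision, which is what keeps truncation error from concentrating into the audible peaks described above.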
The conversion bits are a different story. Say your music is recorded in a 24 bit format where the noise floor is such that only the top 16 bits carry music. In this example the bottom 8 bits are of no value; all that noise is due to limitations such as mic pre noise, AD noise or what not...
Just because one loads such a file into a DAW with, say, a 48 bit bus does not mean at all that the extra DAW bits will make the music on that track end up better than 16 bits. It will not. Say you decide to do an EQ and boost 10-20KHz by 6dB. The computation immediately calls for a lot more bits, one on the MSB side and many on the LSB side. However, when you boost the music signal over 10-20KHz, you also boost the noise over 10-20KHz...
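A toy calculation of that point (the numbers are made up for illustration): take a "signal" with a noise floor about 16 bits down. A +6 dB boost applied in a wide accumulator scales signal and noise by the same factor, so the signal-to-noise ratio, and hence the effective resolution of the source, is unchanged.

```python
import math

signal_rms = 1.0                     # arbitrary units: music in the top 16 bits
noise_rms = signal_rms / (2 ** 16)   # noise floor ~16 bits down (toy value)

snr_before = 20 * math.log10(signal_rms / noise_rms)

gain = 10 ** (6 / 20)                # +6 dB boost, done in a wide word
snr_after = 20 * math.log10((signal_rms * gain) / (noise_rms * gain))

print(round(snr_before, 1), round(snr_after, 1))  # identical: the gain cancels
```

The extra bus bits prevent the computation itself from adding error; they cannot remove the noise that was recorded with the track.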
So how many bits? For conversion, 20 bits of real resolution out of an AD is very rare! For a DAW, 20 bits is very limiting; you need much more than that....
Regards
Dan Lavry
http://www.lavryengineering.com