blueintheface wrote on Mon, 05 June 2006 12:28 |
Hi - this is my first post here, though I've been a voyeur for some time.
That mammoth thread that was here somewhere on the virtues - or otherwise - of high sample-rates was as informative and interesting as anything I've ever read anywhere - in the audio field. Anyone bookmark a link?
Anyway, Roger Nichols is not my favourite person at the moment - Elemental Audio and price hikes and all - but this isn't about that. This is about Roger's article in Sound On Sound May 2006.
Either I'm not understanding something, or the science is a bit dubious - like MOTU's demos of the superiority of high sample rates!
Roger's main assertion is that the reason 24-bit audio sounds better - 'particularly at the bass end' - is because:
Quote: | The 256 times higher resolution is in effect everywhere in the waveform, from the lowest levels to the highest peaks. A sample point nearing 0dB full scale is 256 times more accurate than the same sample recorded at 16-bit.
|
Hmmm.
Isn't it more accurate to say that the 24-bit sample is digitally described with greater resolution? And does that mean what you get post-D/A is 256 times more accurate?
Quote: | Let's cut down the confusion with bit sizes, let's use the smallest bit in the 24-bit scale as a reference and call it a step. The difference between Sample A and Sample B in the 24-Bit recording is 16 steps. The difference between the same samples in the 16-Bit recording is 112 steps. That is 96 steps away from where it should have been - a 700% error in low-frequency signal.
|
Again, I'm not disputing the superiority of 24-Bit resolution, I'm just skeptical of the 'science' behind these explanations. Anyone?
Edit: spelling
|
Are you quoting the statements accurately?
The first statement (as posted) was about having 256 times more accuracy with 24 bits (than with 16 bits). That one is correct IN THEORY. Each additional bit is a factor of 2 improvement, so with 8 more bits you have 2*2*2*2*2*2*2*2 = 256. From an ear standpoint, each bit adds about 6dB, so 8 more bits will improve the dynamic range by about 48dB.
But first, even in theory, note that the improvement is about fine detail BELOW the 96dB range offered by a 16-bit format. In other words, a perfect 16 bits already yields 0.001526% accuracy (one part in 65,536), so the additional bits improve on that.
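The theoretical numbers above can be checked with a few lines of arithmetic (Python, purely illustrative - no converter modeled here):

```python
# Each extra bit halves the quantization step, i.e. doubles the resolution.
steps_16 = 2 ** 16            # 65,536 levels in a 16-bit word
steps_24 = 2 ** 24            # 16,777,216 levels in a 24-bit word
print(steps_24 // steps_16)   # 256 -- the "256 times" factor

# One 16-bit step as a fraction of full scale, in percent:
print(100.0 / steps_16)       # ~0.001526 -- the accuracy of perfect 16 bits

# Dynamic range at roughly 6.02 dB per bit:
print(6.02 * 16)              # ~96.3 dB for 16 bits
print(6.02 * 24)              # ~144.5 dB for 24 bits
```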
Second, we can talk about 24 bits all day long, but there is no converter that will yield a real 24 bits. The lowest bits are noise. In fact, take a mic, any mic. Take a mic pre, any mic pre. Set the mic pre gain to say 30-40dB. You now have enough noise to bury the bottom 5-6 bits, making them useless. Your real-world statement becomes: my 20-bit AD is receiving enough noise to make it function as an 18-bit AD (or much less), so I have a 4 times improvement over a 16-bit machine, that is 12dB more accuracy.
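One way to sketch that "noise eats the bottom bits" point is the standard effective-resolution formula, where an ideal N-bit converter has a dynamic range of about 6.02*N + 1.76 dB (the noise-floor figures below are assumed for illustration, not measurements of any real chain):

```python
def effective_bits(noise_floor_dbfs):
    """Roughly how many bits remain useful when the analog noise
    floor sits at noise_floor_dbfs (ideal-converter model)."""
    return (-noise_floor_dbfs - 1.76) / 6.02

print(effective_bits(-110))   # ~18 bits -- a "20-bit AD acting like 18"
print(effective_bits(-144))   # ~23.6 bits -- what a true 24 bits would demand
```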
Regarding the second statement: it is completely flawed. Using the lowest bit as the reference is what produces that very misleading conclusion about a 700% error.
Say we have a million-dollar deal, and I call a million 100%.
Say I got short changed by a dollar. What is the percent “error”? It is only 0.0001%.
Say I instead use a dollar as a “reference”, making it the “100% point”. Then a missing dollar is a 100% error. Such an “approach” is of course ridiculous! It is 100% out of 100,000,000%, where the maximum starting point (when talking percentage) should be 100%.
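The dollar analogy reduces to two divisions - same missing dollar, wildly different "percent error" depending on the chosen reference (toy numbers from the example above):

```python
deal = 1_000_000   # the full scale of the deal, in dollars
error = 1          # short-changed by one dollar

# Error relative to the full deal -- the sensible reference:
print(100.0 * error / deal)   # 0.0001 percent

# Error relative to one dollar -- the misleading reference:
print(100.0 * error / 1)      # 100.0 percent, for the same missing dollar
```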
Not to mention that that lowest step is buried in a huge amount of noise to start with.
Not to mention that you do not need 24 bits' worth of dynamic range (144dB). Having 120dB is a fantastic range from the ear's standpoint.
The rest of the comment, about sample A vs. sample B having a 112-step error, is weird. Why 112?
But the weirdest statement was the one about a “700% error in low-frequency signal”. It is totally and completely out to lunch. What does any of it have to do with frequency? Nothing! In theory, one can have 256 times more accuracy BELOW the 0.0015%. In practice, nowhere near it. And that holds for any signal and ANY FREQUENCY.
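The frequency-independence point can be demonstrated numerically: quantize a full-scale sine to 16 bits at a low and a mid frequency and compare the RMS error. This is a toy model (plain rounding, no dither, assumed 48kHz sample rate), but both errors come out near the textbook step/sqrt(12), regardless of frequency:

```python
import math

def rms_quantization_error(freq_hz, bits=16, sample_rate=48_000, n=48_000):
    """Quantize one second of a full-scale sine over [-1, 1] and
    return the RMS quantization error (undithered toy model)."""
    step = 2.0 / (2 ** bits)
    err2 = 0.0
    for i in range(n):
        x = math.sin(2 * math.pi * freq_hz * i / sample_rate)
        q = round(x / step) * step     # nearest-step quantization
        err2 += (x - q) ** 2
    return math.sqrt(err2 / n)

# Same order of magnitude at both frequencies:
print(rms_quantization_error(50))      # low-frequency signal
print(rms_quantization_error(5_000))   # mid-frequency signal
```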
Regards
Dan Lavry
http://www.lavryengineering.com