Nika Aldrich wrote on Sat, 11 December 2004 01:18 |
Dan,
If you have a range - say a to b, which requires two quantization steps, or 1 bit, and you divide this range into two (to add 6dB of dynamic range), how many quantization steps did you add? How many quantization steps are there total?
Now add two quantization steps - equivalent to adding a whole bit - into how many regions is the area cut? How much dynamic range have you added?
Again, you said that I'm flat-out wrong on this. I'm trying to get you to show how. You're the engineer. It's all math. Math doesn't lie, and engineers don't make mistakes, right?
So, if you go from 1 bit (2 steps) of quantization to 2 bits (4 steps) what happens to the dynamic range? +6dB? Really?
Nika
|
Nika,
No, the a to b range is not about 2 quantization steps. It is the range of operation of the whole converter, where you are always above the lower limit and below the upper limit. The focus is not about how many steps there are or how many lines defining the boundary between codes. It is about the size of the smallest step.
When your signal takes you from code to code, crossing boundaries, that is SIGNAL. The part of the signal that stays between the lines is NOISE. Take the same converter range and add a bit: you double the number of regions between transition points (I call them codes), and each step is half the size it used to be. The noise is halved, because it is defined by the step size, which is halved. In terms of voltage it is proportionally so.
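As a quick arithmetic check (a minimal Python sketch; the full-scale value here is an arbitrary assumption, not from the post), each added bit halves the step size, and halving a voltage is a 20·log10(2) ≈ 6.02dB reduction:

```python
import math

full_scale = 2.0  # hypothetical converter range, e.g. -1 V to +1 V

for bits in range(8, 12):
    step = full_scale / 2 ** bits                  # size of 1 LSB at this depth
    next_step = full_scale / 2 ** (bits + 1)       # one more bit: half the step
    drop_db = 20 * math.log10(step / next_step)    # noise voltage ratio, in dB
    print(f"{bits} -> {bits + 1} bits: step halves, noise drops {drop_db:.2f} dB")
```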
When modeling the behavior, make sure to model it in a way that yields the following: the quantization noise is assumed to be equally probable anywhere within the range of one step.
For example:
Take a random number generator producing values between, say, 0 and 10, find the rms AC power, and from it the rms voltage (take the square root). Now take a random number between, say, 0 and 5 and do the same. The rms voltage is halved.
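That experiment can be run directly (a sketch in numpy; the sample count and seed are my choices). A uniform distribution of width W has an AC rms of W/sqrt(12), so halving the width halves the rms voltage:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

a = rng.uniform(0, 10, n)   # "step" of size 10
b = rng.uniform(0, 5, n)    # "step" of size 5

# AC power: remove the DC (mean) component, then average the squared values;
# the rms voltage is the square root of that.
rms_a = np.sqrt(np.mean((a - a.mean()) ** 2))
rms_b = np.sqrt(np.mean((b - b.mean()) ** 2))

print(rms_a / rms_b)        # close to 2: half the step, half the noise voltage
```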
I just “forced” you to assume that the signal within a step has an equal probability of landing anywhere. Is that a good assumption? It is a great one when you have a signal range broken into, say, 256 steps (8 bits). Even a very deliberate signal, say a 1KHz sine wave, shares very little with such a fine grid, not to mention a 16 bit converter with 65536 steps. The assumption of a flat random distribution holds to well within statistical certainty.
So our 6dB per bit (more precisely, 20*log10(2) ≈ 6.02dB) really does hold for the practical cases of conversion.
That assumption falls apart when using, say, a 1KHz sine wave with a 1 bit converter. If the sine wave is 0.1 of the step size, one cannot claim it has the same probability of being at 0.1 of the step as at, say, 0.9 of the step. So we lost our randomness. This is exactly why we get quantization distortion and noise when the signal approaches a few LSBs (very few quantization transitions). BTW, we add dither to regain that randomness.
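Here is a small numpy sketch of that loss of randomness (the amplitude, dither shape, and seed are my illustrative assumptions, not from the post). A sine only 0.1 of a step tall rounds to zero on every sample, so the "error" is just the inverted signal; adding TPDF dither before quantizing makes the error uncorrelated with the signal, while the signal itself survives in the output codes:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
t = np.arange(n)
x = 0.1 * np.sin(2 * np.pi * t / 1000.0)     # sine 0.1 of a step (1 LSB = 1.0)

q_plain = np.round(x)                        # undithered quantizer: all zeros!
# TPDF dither: sum of two uniform +/-0.5 LSB sources, added before quantizing.
dither = rng.uniform(-0.5, 0.5, n) + rng.uniform(-0.5, 0.5, n)
q_dith = np.round(x + dither)                # dithered quantizer

err_plain = q_plain - x                      # equals -x exactly
err_dith = q_dith - x                        # total error of the dithered path

print(np.corrcoef(x, err_plain)[0, 1])       # -1: the error IS the (lost) signal
print(abs(np.corrcoef(x, err_dith)[0, 1]))   # near 0: randomness restored
print(np.corrcoef(x, q_dith)[0, 1])          # positive: signal survives the codes
```

Averaging (low-pass filtering) the dithered output would recover the sub-LSB sine; the undithered output has nothing left to recover.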
So when figuring that 6dB per bit, we rely on the concept of a flat random statistical distribution, and we can do so because we assume 8 or more bits (that is plenty).
So we do not need to worry about the fact that a 1 bit converter's noise is not random. We start by saying that if the noise is highly random (as is the case with 6 or more bits), each additional bit yields half the noise.
With enough bits, the MSB is “more connected to the signal”. In fact, our 1KHz sine wave will yield 500usec of 0’s followed by 500usec of 1’s at the MSB – predictable and not random, so the noise error is not at the MSB. The next bit down is still pretty systematic and still makes for a very predictable pattern. The LSB is just plain flat random noise.
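The predictability of the MSB is easy to see numerically (a sketch; the 1 MHz sample rate and 8-bit offset-binary coding are my choices for illustration):

```python
import numpy as np

fs = 1_000_000                         # 1 MHz sampling: 1000 samples per cycle
t = np.arange(2000) / fs               # two cycles of the 1KHz sine
x = np.sin(2 * np.pi * 1000 * t)       # full scale is -1..+1

# 8-bit offset-binary codes, 0..255:
codes = np.clip(np.round((x + 1) / 2 * 255), 0, 255).astype(int)
msb = codes >> 7                       # most significant bit

# The MSB simply tracks the sign of the sine: ~500usec of 1's, then ~500usec
# of 0's -- a predictable square wave, not random noise.
print(msb[100:400].min(), msb[100:400].max())   # 1 1 (positive half-cycle)
print(msb[600:900].min(), msb[600:900].max())   # 0 0 (negative half-cycle)
```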
It is this noise that determines the dynamic range.
So when modeling a 1 or 2 bit converter in order to predict the behaviour of 8 or more bits, do it with flat random quantization noise, and the outcome will be correct.
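As a closing sanity check of the flat-random model (my sketch; the test frequency and bit depths are arbitrary choices): quantize a full-scale sine at several bit depths and compare the measured signal-to-error ratio against the textbook 6.02·N + 1.76dB that falls out of the uniform-noise assumption:

```python
import numpy as np

def sine_snr_db(bits, n=200_000):
    """Quantize a full-scale sine and measure signal power over error power."""
    t = np.arange(n)
    x = np.sin(2 * np.pi * 0.1234567 * t)   # frequency unrelated to the grid
    step = 2.0 / 2 ** bits                  # smallest step: full scale is -1..+1
    q = np.round(x / step) * step           # uniform mid-tread quantizer
    err = q - x
    return 10 * np.log10(np.mean(x ** 2) / np.mean(err ** 2))

for bits in (8, 12, 16):
    print(f"{bits} bits: measured {sine_snr_db(bits):.1f} dB, "
          f"theory {6.02 * bits + 1.76:.1f} dB")
```

With enough bits the measured figure lands within a fraction of a dB of the formula, and each added bit buys about 6dB, exactly as the flat-noise model predicts.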
I hope this helps,
Dan Lavry