Nika Aldrich wrote on Mon, 10 May 2004 23:08 |
I believe what you might be talking about is not a multibit design, in general terms, but rather a Successive Approximation Register, or SAR, design. This type of design has been largely abandoned in the industry as having too much distortion (or noise), negating any benefits it might bring. It was superseded first by heavily oversampled 1-bit designs, which have now become heavily oversampled multibit designs. The weakness of the 1-bit designs was particular types of distortion that would present themselves under certain, very specific conditions, but the benefit was that they provided nearly perfect integral and differential linearity. Further, the noiseshaping (an inherent part of a DSM) was capable of pushing the noisefloor down far enough to allow much improved performance in the audible range. The new multibit DSM designs couple the benefit of SAR converters (not having the specific distortion components yielded by 1-bit designs) with the benefits of 1-bit designs (a low noisefloor in the audible range and low differential and integral non-linearity). I think you would have an extremely difficult time designing a SAR-based converter that exceeded the performance of even the mediocre designs on the market right now.
Nika.
|
Hi Nika,
yes, by saying multibit, I mean a SAR design. I think they are very good for audio, because they can sample and convert a complete 18-bit-precision value at a single, exact point in time, whereas a sigma-delta ADC operates on many very low-precision samples that are averaged.
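(For readers unfamiliar with the mechanism: a SAR conversion is essentially a binary search against an internal DAC, deciding one bit per clock cycle on the held input. A minimal sketch, with hypothetical names, not modeled on any particular part:)

```python
def sar_convert(vin, vref=1.0, bits=18):
    """Successive-approximation conversion: a binary search over the
    DAC range, deciding one bit per clock cycle, MSB first.
    Purely illustrative; real converters work on a held sample."""
    code = 0
    for bit in reversed(range(bits)):
        trial = code | (1 << bit)
        # Keep the trial bit if the input is at or above the DAC
        # output for this trial code
        if vin >= trial * vref / (1 << bits):
            code = trial
    return code
```

For a half-scale input, the very first (MSB) comparison already lands on the answer and every lower bit is rejected, so `sar_convert(0.5, 1.0, 18)` returns the mid-scale code `1 << 17`.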
I agree that a 4- or 5-level sigma-delta converter may be much better than a 1-bit one. Unfortunately, I did not find the number of bits in the respective datasheets.
However, I consider a high-precision 18-bit conversion much better, in that it performs a real conversion, a sample at time point X, without averaging over time and further computation. It is compatible with the theory of sampling, whereas I do not see the validity of the sigma-delta process.
Reading the datasheet of the AKM5394 (which I think is one of the very good sigma-delta ADCs), there is a very interesting THD plot on page 21. It shows that THD+N is in fact very low (-104dB) for a full-scale input. But at a -30dB input level, THD rises to -88dB, and the diagram suggests that for a -60dB input the THD would rise to really unacceptable levels.
Exactly the same pattern holds for sigma-delta DACs. They have very low distortion for a full-scale output, but at low levels, where low distortion is needed most, they completely fail next to a good R2R multibit DAC.
For a modern SAR ADC, on the other hand, the distortion is specified as flat at about -100dB for input signals down to -60dB. That suggests to me that a multibit ADC is much better for music, as it is able to precisely convert low-level inputs, and it is low-level signals that need a low distortion figure.
As I understand it, sigma-delta ADCs need the noise-shaping process because of the incredibly high quantization noise that comes out of the 1-bit (or very-low-bit) modulator.
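(A first-order 1-bit modulator makes this concrete: the quantizer alone is hopelessly coarse, and only the integrating feedback loop, followed by filtering and decimation, recovers resolution. A minimal sketch, with names of my own invention:)

```python
def first_order_dsm(samples):
    """First-order 1-bit delta-sigma modulator (illustrative sketch).
    The integrator accumulates the error between the input and the
    fed-back output; the loop shapes quantization noise toward high
    frequencies, where the downstream decimation filter removes it."""
    integ, y = 0.0, 0.0
    bits = []
    for x in samples:
        integ += x - y                     # integrate input minus feedback
        y = 1.0 if integ >= 0.0 else -1.0  # 1-bit quantizer
        bits.append(y)
    return bits

# For a constant input of 0.5, the +1/-1 bitstream settles into a
# pattern whose average is ~0.5 -- the value is encoded in the
# bit density, not in any single sample.
stream = first_order_dsm([0.5] * 1000)
print(sum(stream) / len(stream))
```

Each individual output sample is only +1 or -1; the input value only emerges after averaging many of them, which is exactly the "averaging over time" objected to above.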
A high-precision multibit ADC has extremely low quantization error, and thus the whole sigma-delta process of averaging, decimating, and calculating down to largely insignificant 24-bit numbers is not necessary.
It is just straightforward precision sampling.
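(The textbook figure supports the contrast: for a full-scale sine into an ideal N-bit quantizer, SNR ≈ 6.02·N + 1.76 dB. So 18 bits already sits near 110 dB before any oversampling, while a raw 1-bit quantizer starts at about 7.8 dB and must win everything back through noise shaping and decimation:)

```python
def ideal_quantization_snr_db(bits):
    # Classic result for a full-scale sine through an ideal
    # N-bit quantizer: SNR = 6.02 * N + 1.76 dB
    return 6.02 * bits + 1.76

print(ideal_quantization_snr_db(18))  # about 110.1 dB
print(ideal_quantization_snr_db(1))   # about 7.8 dB
```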
I actually don't know what is better.
I think I have to try it out.
Charles