Dan:
I don't want to assume too much or speak for too many people, but I suspect it is here (above) that many are not following you. You have mentioned the speed versus accuracy trade-off many times; however, this aspect is not very intuitive, is therefore not well understood, and so it keeps coming up again and again.
In the paper-design stage (presumably before we think about how long it takes to charge a cap) it isn't obvious why 96 kHz mandates less precision. The way most people tend to think about sampling, a higher rate means MORE precision in terms of acquiring the signal.
So... in THEORY (before we think about the practical reality of having to build the circuit), do we give up precision to sample faster, or do we only give it up in practice, due to the various limitations of parts and physics we have to work within?
Thanks
David Stewart
Thank you for your comments. I know that some of the concepts regarding sampling are NOT intuitive. It is difficult to explain that more samples are not better in a world where more pixels are better, but the fact remains: samples are not pixels, and there are issues that are not easy to convey to people who did not choose an EE or math career. I wrote my paper to try to simplify things, but I guess it is still too difficult for many to follow.
So let’s just say that Nyquist was right, and we have 100 years of hands-on experience, including test equipment, the communications industry, digital video, digital audio and much more.
And even without that experience, it has been solidly proven mathematically that samples beyond what Nyquist requires add ZERO content and are totally redundant.
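To make the "totally redundant" point concrete, here is a small numerical sketch (my own illustration, not from Dan's paper, and the rates and lengths are arbitrary): the in-between samples that a doubled-rate converter would capture can be predicted exactly from the lower-rate samples by Whittaker-Shannon (sinc) interpolation, so they carry no new information about a band-limited signal.

```python
import numpy as np

fs = 48_000          # base sample rate (Hz); illustrative numbers
f0 = 1_000           # band-limited test tone, well below fs/2
N  = 4_000
n  = np.arange(N)
x  = np.sin(2 * np.pi * f0 * n / fs)      # the 48 kHz samples

# Predict the "extra" samples a 96 kHz converter would take -- the points
# halfway between the 48 kHz samples -- using only the 48 kHz samples and
# sinc interpolation: x(t) = sum_k x[k] * sinc(fs*t - k).
m = np.arange(1800, 2200)                 # central region, to keep the
t = (m + 0.5) / fs                        # truncated-sum edge error tiny
xhat  = (x * np.sinc(fs * t[:, None] - n)).sum(axis=1)
xtrue = np.sin(2 * np.pi * f0 * t)        # what the 96 kHz converter sees
max_err = np.abs(xhat - xtrue).max()      # tiny: limited only by truncating
                                          # the interpolation sum
```

The residual error comes purely from cutting the (infinite) sinc sum off at the ends of the record, not from any missing information in the 48 kHz samples.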
Regarding that speed vs. accuracy tradeoff, that is easier to understand. Analogies can be misleading, but say you take on a task to color a picture with crayons and “stay within the lines”. The picture is intricate. I bet doing it in 10 seconds will be a lot less accurate than if you took 10 minutes. The same statement applies to so many things. Devices and circuits also have speed limitations (and speed is in fact bandwidth). A given size capacitor takes time to charge, a logic gate takes time to change states, and so on. Doing things fast goes against doing things accurately. Devices and circuits can be optimized for maximum speed, power, accuracy and more. They are most often optimized to provide a combination of acceptable tradeoffs. When you relax one requirement, you end up with more “breathing room” for the other requirements.
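The capacitor-charging point can be put in numbers. An RC stage settles toward its target as exp(-t/tau), so landing within half an LSB of an N-bit result needs exp(-t/tau) < 2^-(N+1), i.e. N = t/(tau·ln 2) - 1. The sketch below is my own back-of-envelope illustration, not any real converter's front end: the time constant and the assumption that half of each sample period is available for acquisition are hypothetical.

```python
import math

def achievable_bits(fs, tau, duty=0.5):
    """Bits of settling accuracy available from an RC stage.

    An RC circuit settles as exp(-t/tau); to be within half an LSB of an
    N-bit result you need exp(-t/tau) < 2**-(N+1), i.e.
    N = t/(tau*ln 2) - 1.  Here t = duty/fs, assuming (hypothetically)
    that half of each sample period is available for acquisition.
    """
    t = duty / fs
    return t / (tau * math.log(2)) - 1

TAU = 1e-6   # illustrative time constant, e.g. R = 10 kOhm, C = 100 pF
for rate in (44_100, 96_000, 192_000):
    print(f"{rate:>7} Hz -> about {achievable_bits(rate, TAU):4.1f} bits")
```

With these made-up numbers, 44.1 kHz leaves room for roughly 15 bits of settling, 96 kHz for under 7, and 192 kHz for under 3: the same circuit, simply given less time.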
Regarding the sigma-delta design, yes, in theory you give up accuracy for speed. The noise shaping concept is about moving noise from a frequency range you wish to use for the signal to other frequencies. Think of it as digging a hole. You can either dig a deep hole of small diameter, or a very shallow hole of large diameter. It is the same amount of dirt, but a different result. The depth of the hole is analogous to the accuracy; the diameter represents the bandwidth. Do you want great 20 kHz, or not-so-great 100 kHz?
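The hole-digging picture can be seen in a simulation. Below is a standard textbook first-order sigma-delta model (my own sketch, not necessarily the topology Dan has in mind): integrate the input-minus-feedback error and quantize the integrator to one bit. The coarse 1-bit stream is wildly inaccurate sample by sample, yet its low-frequency content recovers the input, because the quantization noise has been shoveled up toward high frequencies.

```python
import numpy as np

def sigma_delta(x):
    """First-order sigma-delta modulator (textbook model)."""
    s, fb = 0.0, 0.0
    out = np.empty(len(x))
    for i, xi in enumerate(x):
        s += xi - fb                    # integrator accumulates the error
        fb = 1.0 if s >= 0.0 else -1.0  # 1-bit quantizer, fed back
        out[i] = fb
    return out

N = 8192
x = np.full(N, 0.3)          # a DC input inside the +/-1 full-scale range
y = sigma_delta(x)

# The 1-bit stream still carries the input: its average recovers it.
dc_err = abs(y.mean() - 0.3)

# Noise shaping: the quantization error energy sits near Nyquist, not DC.
spec = np.abs(np.fft.rfft(y - x)) ** 2
low_power  = spec[1:N // 32].sum()      # near-DC band (DC bin excluded)
high_power = spec[-(N // 32):].sum()    # band near Nyquist
```

Comparing `low_power` with `high_power` shows the "dirt" piled up at high frequencies: a narrow, deep hole of accuracy at low frequencies, paid for with noise in the band you were not going to use anyway.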
That answers your question about paper design. But I am an engineer and therefore equally interested in the real parts and circuits. Speed vs. accuracy is a solid concept. Speed vs. power is another, and there are others. Those concepts are no different from the first law of thermodynamics: never proven, but no one has yet come up with a single example to contradict it.
Regards
Dan Lavry