Nika,
Well, you’ve identified my major concern with the suggestion. The way I thought about it was that you would take the jitter spectrum, as you suggested earlier in the thread, and feed it into a program that would spit out the THD+N graph. By using a program as a ‘virtual converter’, you get around the idiosyncrasies of any real converter that might be used in a given test. After all, real converters have their own flaws, and you wouldn’t want a clock’s jitter spec to be influenced by a particular choice of converter.
As usual, the devil is in the details. First you’d have to take the output of a real jitter spectrum measurement, such as is available from the AP setup. That’s the easy part. Then a program would have to model that spectrum as a linear combination of noise and periodic jitter components. Steve’s committee would have to agree on basis functions for modeling jitter. There aren’t many reasonable choices, and they’re all roughly equivalent. For example, you might choose A*gaussian(mean, variance) as the template for a noise term and B*phi(omega*t) (for some periodic function phi) as the template for a periodic term. In any case, the general idea is that you would derive a function J(t) that behaved like the real device and whose terms were a linear combination of the basis functions. The value of J(t) would represent the expected error of the real clock at a given time, t.
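To make the modeling step concrete, here’s a minimal Python sketch of what building J(t) from agreed-upon basis functions might look like. The function name, the NumPy dependency, and the particular parameterization (a Gaussian noise term plus a list of sinusoidal periodic terms) are all my own assumptions for illustration, not anything a committee has settled on:

```python
import numpy as np

def make_jitter_model(noise_std_s, periodic_terms, seed=0):
    """Build J(t): the modeled clock error, in seconds, at time t.

    noise_std_s    -- std deviation of the Gaussian noise term (the 'A' term)
    periodic_terms -- list of (amplitude_s, freq_hz, phase_rad) tuples,
                      each a sinusoidal 'B' term fit to the jitter spectrum
    seed           -- RNG seed, so a given model run is reproducible
    """
    rng = np.random.default_rng(seed)

    def J(t):
        t = np.asarray(t, dtype=float)
        # Broadband noise component of the clock error
        err = rng.normal(0.0, noise_std_s, size=t.shape)
        # Periodic (spur) components of the clock error
        for amp, freq, phase in periodic_terms:
            err = err + amp * np.sin(2 * np.pi * freq * t + phase)
        return err

    return J
```

In a real tool the amplitudes and frequencies would be fit from the measured jitter spectrum rather than supplied by hand; the fitting step is exactly where the committee’s choice of basis functions would bite.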
So the error at a given sample would simply be sin(omega*t) – sin(omega*(t + J(t))), since J(t) shifts the instant at which the sample is actually taken. To get the THD+N for a given frequency you’d just run the program for some nominal period (say 10 seconds) at the given sampling rate and resolution, and compute the RMS error.
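The RMS step is the easy part to sketch. Here’s one possible shape for it in Python, treating J(t) as a time offset to the sampling instant and reporting the error level in dB relative to the test tone; the function name, defaults, and dB presentation are my own choices, not part of any proposed spec:

```python
import numpy as np

def jitter_error_db(J, test_freq_hz, fs_hz=48_000, duration_s=10.0):
    """RMS error of a jittered sine vs. an ideal sine, in dB re: the tone.

    J            -- callable mapping sample times (s) to clock error (s)
    test_freq_hz -- frequency of the test sine
    fs_hz        -- sampling rate of the simulated converter
    duration_s   -- nominal run length over which to accumulate the error
    """
    n = int(fs_hz * duration_s)
    t = np.arange(n) / fs_hz
    omega = 2 * np.pi * test_freq_hz
    ideal = np.sin(omega * t)                 # perfect clock
    jittered = np.sin(omega * (t + J(t)))     # clock displaced by J(t)
    err_rms = np.sqrt(np.mean((ideal - jittered) ** 2))
    ref_rms = np.sqrt(np.mean(ideal ** 2))
    return 20 * np.log10(err_rms / ref_rms)
```

Sweeping test_freq_hz and plotting the result would give the THD+N-style spectral plot; note that quantization to the given resolution would also have to be simulated for a full THD+N figure, which this sketch omits.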
Lots of details to work out, and after writing this, I’m not so sure you could get any group to converge on the myriad decisions required for this to work. There are definitely compromises to be made, and of course, the programs that did all the computations would have to be open-sourced to ensure that there wasn’t any cheating. Furthermore, after all this simulation and what-not, would you have a spec that truly represented a device’s capabilities and limitations?
In spite of all these obstacles, I really do think some sort of THD+N spectral plot would be infinitely more useful to audio engineers than a jitter spectrum.
-Dennis