I've found an online applet that estimates jitter from SSB phase noise specs supplied as dBc/Hz at various offsets (Hz) from the fundamental. What I'm wondering is how such a computation is made; that is, given an SSB phase noise curve, how does one compute the RMS jitter?
In a given datasheet, I see both phase jitter and SSB phase noise specified: the jitter is given as 0.2 ps with the condition "1σ" (does the sigma mean standard deviation?), and the SSB phase noise is listed as −95 dBc/Hz at 100 Hz, −125 dBc/Hz at 1 kHz, −140 dBc/Hz at 10 kHz, and −145 dBc/Hz at 100 kHz. So which is more important for audio: the jitter spec, or the phase noise (I'm assuming they are related, but I don't understand exactly how)? And if the latter, does it matter more at the lower or at the higher offsets?
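For concreteness, here is my rough attempt at the computation in Python, based on the formula I keep seeing, σ_t = sqrt(2·∫L(f)df)/(2π·f0), where L(f) is the SSB phase noise in dBc/Hz. I'm assuming L(f) is piecewise-linear in dB versus log-frequency between the datasheet points, and I've picked a hypothetical 24.576 MHz audio master clock for f0 since the datasheet excerpt above doesn't state one. Is this roughly what the applet does?

```python
import math

def rms_jitter_ps(f0_hz, offsets_hz, dbc_per_hz):
    """RMS period jitter (ps) from SSB phase noise points.

    Assumes L(f) in dBc/Hz is piecewise-linear in dB vs log10(f),
    integrates A = integral of 10^(L/10) df over the given offset range,
    then converts: sigma_t = sqrt(2*A) / (2*pi*f0).
    """
    total = 0.0
    points = list(zip(offsets_hz, dbc_per_hz))
    for (f1, l1), (f2, l2) in zip(points, points[1:]):
        # Slope of L in dB per decade across this segment.
        slope = (l2 - l1) / (math.log10(f2) - math.log10(f1))
        k = slope / 10.0  # L(f) segment is 10^(l1/10) * (f/f1)^k in linear units
        if abs(k + 1.0) < 1e-9:
            # Special case: slope of exactly -10 dB/decade integrates to a log.
            seg = 10 ** (l1 / 10) * f1 * math.log(f2 / f1)
        else:
            # Closed-form integral of the power-law segment from f1 to f2.
            seg = 10 ** (l1 / 10) * f1 / (k + 1) * ((f2 / f1) ** (k + 1) - 1)
        total += seg
    return math.sqrt(2.0 * total) / (2.0 * math.pi * f0_hz) * 1e12

# Datasheet numbers from above, with my assumed 24.576 MHz clock:
jitter = rms_jitter_ps(
    24.576e6,
    [100.0, 1e3, 1e4, 1e5],
    [-95.0, -125.0, -140.0, -145.0],
)
```

With these numbers I get on the order of 1 ps, dominated by the −95 dBc/Hz point at 100 Hz, which is noticeably more than the 0.2 ps spec. Does that mean the datasheet integrates over a narrower band (I've seen 12 kHz to 20 MHz quoted for telecom parts), or is my assumed formula wrong?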