chikkenguy wrote on Fri, 01 April 2005 12:39 |
that makes sense. thats pretty much what i thought it was... could you explain why a pll has intrinsically worse performance than an internal master clock?
|
The A/D chip does not actually use a 44.1kHz master clock. It uses something like a 2.8MHz master clock, so the actual clock signal needs to run at that rate. That rate is much too high, however, to transmit through 10' cables with any accuracy. So instead, the clock is generated at 2.8MHz in an external box, divided down to 44.1kHz, and transmitted through a 10' cable; the PLL at the receiving end then not only slaves an internal clock to it, but also multiplies it back up to 2.8MHz.
The question is, which is better?:
A. internal chip running at 2.8MHz sends a clock pulse about 1" on a circuit board trace, or
B. external box has a clock running at 2.8MHz, divides it down to 44.1kHz, sends it through a 10' cable to another device (through connectors and whatnot, in the middle of a noisy EMI/RFI environment), which uses a PLL to try to both reduce jitter artifacts and multiply it back up to 2.8MHz, and then sends it 1" on a circuit board trace.
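The arithmetic behind option B can be sketched in a few lines. The 64x ratio below is an assumption on my part (64 x 44.1kHz = 2.8224MHz, which matches the "2.8MHz" figure above); the actual multiplier depends on the converter chip.

```python
# Sketch of the clock-rate arithmetic in option B. The oversampling
# ratio of 64 is assumed for illustration; real converters vary.
fs = 44_100            # word clock rate in Hz
oversampling = 64      # assumed ratio between master clock and word clock
master = fs * oversampling
print(master)          # 2822400, i.e. the ~2.8MHz master clock

# The external box divides the master clock down by this factor for
# transmission, and the receiving PLL must multiply the recovered
# 44.1kHz word clock back up by the same factor.
division_factor = master // fs
print(division_factor) # 64
```

The point is that every jitter artifact picked up on the 44.1kHz word clock gets carried along when the PLL multiplies it back up, which is what option A avoids entirely.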
Quote: |
i was also thinking that to minimize jitter in a pll, you could put another pll after the first one. sort of like a second stage of jitter filtration... is this done?
|
If the first PLL is designed for the slowest response possible, then putting a second one after it effectively doubles the response time and makes it too long - the clocks can fall out of sync. Better to have one PLL that is tuned to provide the maximum amount of buffering short of falling out of sync.
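The "doubled response time" claim can be illustrated with a toy model. This is a loose abstraction of my own, not how any particular PLL is built: each PLL's jitter-tracking loop is treated as a first-order low-pass filter, and the time constant is invented purely for illustration.

```python
# Rough sketch of why chaining two slow PLLs doubles the effective
# response time. Each PLL's tracking loop is modeled (very loosely)
# as a first-order low-pass filter.
def lowpass(signal, tau, dt):
    """First-order low-pass, y' = (x - y) / tau, via forward Euler."""
    y, out = 0.0, []
    for x in signal:
        y += (x - y) * dt / tau
        out.append(y)
    return out

def effective_delay(step_response, dt):
    """Area above the step response = effective time constant."""
    return sum(1.0 - v for v in step_response) * dt

tau, dt, n = 1.0, 0.001, 20_000
step = [1.0] * n                      # reference clock suddenly shifts

one_pll = lowpass(step, tau, dt)      # single PLL tracking the shift
two_plls = lowpass(one_pll, tau, dt)  # second PLL chained after the first

print(round(effective_delay(one_pll, dt), 2))   # ~1.0 time constant
print(round(effective_delay(two_plls, dt), 2))  # ~2.0: the delays add
```

In this toy model the effective delays of the two stages simply add, which is the cascading problem described above: the combined loop responds so slowly that the slaved clock can drift out of sync before it catches up.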
Nika