SRC is liable to encode jitter (phase modulation) into the audio, so it's indeed wise to be careful about using it throughout the production chain. Use house sync to keep all devices locked, and if you need different sample rates, use a synchronous SRC box like the Weiss SFC2 or a software solution like Weiss Saracon or BarbaBatch (Audio Ease).
But that wasn't the question I suppose.
The Benchmark is not the only DAC to use an ASRC as a front end. The Crane Song HEDD is another one, and I've seen them pop up in consumer products too. Benchmark is the only DAC maker I know of that makes their use of an ASRC their unique selling proposition. That's a bit of a pity, since their real USP is that the product is just perfectly engineered throughout, with all details taken care of in an effective, elegant and economical way. I suppose just doin' a good job is not something you can put on a sales leaflet, even if that's what it is truly about.
There are three main processes in an ASRC.
* Ratio estimation: measure the ratio of the output frequency to the input frequency.
* Upsampling and decimation.
* Interpolation. Most ASRCs use some form of polynomial (spline) interpolation. This doesn't work very well on the straight samples, but precision increases dramatically after upsampling. A 5th-order interpolator keeps absolute errors below -120 dBFS when preceded by an 8-times upsampling filter. Decimation is optional, but improves precision further.
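As a toy illustration of why upsampling helps the interpolator, here's a pure-Python sketch. It uses a 4-point cubic rather than the 5th-order interpolators real chips use, and it cheats by generating the 8x-dense grid directly from the sine instead of through an upsampling filter; the test frequency (0.45 cycles per original sample, i.e. near Nyquist) is just picked to make the effect obvious:

```python
import math

def cubic_interp(y, k, t):
    """4-point (3rd-order) Lagrange interpolation between y[k] and
    y[k+1] at fractional position 0 <= t < 1 (nodes at -1, 0, 1, 2)."""
    ym1, y0, y1, y2 = y[k - 1], y[k], y[k + 1], y[k + 2]
    return (-t * (t - 1) * (t - 2) / 6 * ym1
            + (t + 1) * (t - 1) * (t - 2) / 2 * y0
            - (t + 1) * t * (t - 2) / 2 * y1
            + (t + 1) * t * (t - 1) / 6 * y2)

def worst_error(oversample, f=0.45, n=64, steps=16):
    """Max interpolation error for a sine at f cycles per *original*
    sample, interpolated on a grid 'oversample' times denser."""
    fs = oversample
    y = [math.sin(2 * math.pi * f * k / fs) for k in range(n * fs + 3)]
    worst = 0.0
    for k in range(1, n * fs):
        for s in range(steps):
            t = s / steps
            est = cubic_interp(y, k, t)
            true = math.sin(2 * math.pi * f * (k + t) / fs)
            worst = max(worst, abs(est - true))
    return worst

err_direct = worst_error(1)   # interpolate the straight samples
err_8x = worst_error(8)       # interpolate the 8x-dense grid
print(f"direct: {err_direct:.3e}  after 8x: {err_8x:.3e}")
```

The cubic's error shrinks roughly as the 4th power of the sample spacing, so the 8x grid buys several orders of magnitude even though the interpolator itself is unchanged.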
The current breed of ASRC chips has interpolation algorithms that are accurate beyond the 24-bit level. The quality of the interpolation is no longer an issue. This leaves the ratio estimation and the up/down sampling chain as the dominant factors.
For reasons of latency, the filters' pass-band performance is usually specified somewhere around 0.01 dB. The audibility of the "end stops" (pre- and post-echoes) of such filters is the subject of a minor controversy (minor in that most people don't think they are audible), but it should be mentioned if only to point out that "-140 dB THD+N" does not imply that the output signal tracks the input to within 1 LSB.
The ratio estimator is another can o' worms. It compares the input vs output clock rate and phase and low-pass filters the result in order to get an accurate measure. Both the input and output frequencies are analogue quantities (time is analogue), so the ratio estimator entails an implicit analogue-to-digital conversion. ASRCs are not purely digital, even though you can write them entirely in DSP code.
Any instabilities in the ratio signal will get encoded as phase modulation into the output. Such instabilities may stem from jitter. The whole jitter attenuation capability of an ASRC hinges on the low-pass filter. The jitter attenuation characteristic equals the frequency response of the ratio estimator's post filter. This filter fulfils the same function as the loop filter in an analogue clock recovery PLL. The advantage of the ASRC is then that you need only one crystal oscillator to cover all sampling rates.
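To make the "jitter attenuation equals the post filter's frequency response" point concrete, here's a pure-Python toy. A one-pole low-pass stands in for the real post filter, and the numbers (5 Hz corner, a 1 kHz jitter tone of arbitrary amplitude riding on the raw ratio estimate) are made up for the demo:

```python
import math

def one_pole_lowpass(xs, fc_hz, fs_hz):
    """First-order low-pass: y[n] = y[n-1] + a * (x[n] - y[n-1])."""
    a = 1.0 - math.exp(-2.0 * math.pi * fc_hz / fs_hz)
    y, out = xs[0], []
    for x in xs:
        y += a * (x - y)
        out.append(y)
    return out

fs = 48000.0            # rate at which the ratio estimate is updated
fc = 5.0                # post-filter corner frequency
f_jit = 1000.0          # jitter tone modulating the raw ratio estimate
amp = 1e-6              # its amplitude (dimensionless ratio units)
r_true = 44100 / 48000  # the underlying true ratio

raw = [r_true + amp * math.sin(2 * math.pi * f_jit * k / fs)
       for k in range(96000)]
smooth = one_pole_lowpass(raw, fc, fs)

tail = smooth[48000:]                    # after the filter has settled
residual = (max(tail) - min(tail)) / 2   # leftover jitter amplitude
print(f"residual jitter: {residual:.2e} (input was {amp:.0e})")
```

A 1 kHz tone sits far above the 5 Hz corner, so it comes out attenuated by roughly fc/f_jit, while the DC value (the true ratio) passes through untouched. That's exactly the behaviour you'd design a PLL loop filter for.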
Of course the ratio estimator, being an implicit ADC, suffers from quantisation. The phase between the two clocks is quantised to a time span equal to 1 period of the highest frequency clock in the chip (sometimes a multiple of the output rate, sometimes a separate master clock signal). This error is added to the input jitter before being attenuated by the lowpass filter. Whether this effect is detectable at all depends on the spectral distribution of the quantisation error which in turn depends on the ratio of the input and output clocks.
If you're using an ASRC in a production chain you're not free to choose these things, but if you're using an ASRC as a DAC front end, use an odd-ball output frequency to minimise the odds of the quantisation error concentrating into detectable tones.
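A quick way to see the ratio dependence: in a toy model, the phase-quantisation error pattern repeats after a number of output samples equal to the denominator of the reduced input/output rate ratio. A short period piles the error energy into a few discrete tones; a long one smears it out like noise. The 96001 Hz figure below is just a made-up "odd ball" rate for illustration:

```python
from fractions import Fraction

def error_repeat_period(fin_hz, fout_hz):
    """Output samples after which frac(k * fin/fout) -- a stand-in for
    the phase-quantisation error sequence -- starts repeating."""
    # Fraction reduces to lowest terms; the denominator is the period.
    return Fraction(fin_hz, fout_hz).denominator

p_related = error_repeat_period(44100, 48000)  # rationally related rates
p_oddball = error_repeat_period(44100, 96001)  # made-up odd-ball rate
print(p_related, p_oddball)  # 160 vs 96001
```

44.1k into 48k repeats every 160 samples, so the error shows up as tones at multiples of fs/160; the odd-ball rate shares no factor with 44100, so the pattern takes 96001 samples to repeat and the error looks noise-like instead.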
If you take care to use such an odd output frequency, and if you can live with the in-band ripple issue, ASRCs are a practical alternative to analogue PLLs, especially when multiple input sampling rates must be supported.