R/E/P > Dan Lavry

Bypass 192 I/O?


I am looking for an off-the-shelf solution.

"Here is my dilemma: Using an HD/Accel 3 system, it seems that Digi is intent on keeping their I/O in the signal path. So if I use a converter, I still have to go through their circuitry, which I don't want to do unless there is some overwhelming reason specifically related to sonic quality. I understand Apogee is coming out with their own card that accepts Digi protocol, and also that Prism has "used" the Digi protocol in their designs…”

Hello to you.

Being a moderator is new for me. I am going to do my best to stay on technical tracks. As a maker of audio gear, I must be particularly careful not to promote my own gear, and in all fairness to other manufacturers, I must not get into any specifics about anyone's gear.

Regarding the technical side of your comments:

”My ultimate goal is to create surround masters (up to 7.1), for now in a laboratory setting. Can you speak as to the pros/cons of the various output media (DSD, etc.), and the best conversion process to get there? And if the "best conversion process" can bypass the Digi box, or Apogee, etc., how would this happen? Would I have to scrap Protools?”

Other than cost issues, I am not aware of any downside to a properly done digital transfer into a workstation or a storage device. If the unit doing the transfer works properly, and each bit finds its place on the hard drive (or whatever), all is fine. There are no issues regarding clock time jitter when going into the storage media, or into real-time processing hardware. As long as each bit is properly captured and handled (no errors), all is fine. So far, the issue of data handling is relatively easy.

What about retrieving data from digital storage, a workstation or whatnot? This time, one may or may not care about the issue of time jitter. You care when the data drives a DA converter and you are listening to it. Time jitter all of a sudden becomes important, because it impacts the sonic outcome. But in all fairness, the main responsibility for cleaning up the jitter is on the DA circuitry (such as the PLL circuitry of the digital audio receiver). True, less incoming jitter helps a bit. Some go for a re-clocking scheme based on an SRC circuit. I view it as trading off "jitter sonics" for "SRC sonics" (known as widening of the main lobe on the FFT). I cannot talk here about my own methods.
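To put a rough number on why jitter matters only at the conversion point (illustrative figures of my own, not from the post above): the worst-case error from clocking a full-scale sine with a jittery clock is roughly the signal's slew rate times the timing error, which caps the achievable signal-to-noise ratio.

```python
import math

def jitter_snr_limit_db(freq_hz, t_jitter_s):
    """Approximate SNR ceiling imposed by clock jitter on a full-scale
    sine wave: SNR ~ -20*log10(2*pi*f*tj)."""
    return -20 * math.log10(2 * math.pi * freq_hz * t_jitter_s)

# 100 ps of jitter on a 20 kHz full-scale tone:
print(round(jitter_snr_limit_db(20e3, 100e-12), 1))  # ~ 98.0 dB
```

So even 100 pSec of jitter on the conversion clock already limits a 20 kHz tone to roughly 16-bit performance, while the same 100 pSec is utterly harmless to data transfer.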

So cost issues aside, I do not think the data I/O is where the problems raise their ugly head.

Regarding your question of how to achieve 192KHz conversion:
Almost all makers of 192KHz AD gear use AKM chips or Crystal (Cirrus) chips for the conversion process.

Dan Lavry  

Am I going overboard here, or should I really focus on the D/A like I think you might have suggested? Why is the D/A such a problem that it can't simply be sync'd to some baseline and do a data redundancy check? Or store the file history of the changes made during mixing, and force the D/A to write from a set of change parameters referenced to the original file? As long as the "change parameters" can be translated sonically, this should be feasible (perhaps, as a quick-and-dirty suggestion, a relative orientation to the spectral analysis of the original dataset based on the time domain?). In this case, the computer would simply act as a "relative carrier," rather than providing the baseline for the entire D/A process. The amount of computer memory required would be phenomenal, but isn't this what the computer is allowing us to do? To me, this would also help in the processing of 3-D sonic data, because each W,X,Y,Z could be maintained in a matrix, and even visualized.

I thank you very much for your response to my earlier question, and I hope you can find the time to answer this somewhat-abstract question related to "removing sources of error".  


Okay...I performed a few models to test this theory. The theory was that rather than using the signal from the computer to drive the D/A, an alternative method can be used which takes the original dataset from the A/D and, after mixing/editing the file in the computer, uses the original dataset plus the "change file" to directly guide the D/A...so the D/A can be driven from the original signal and overcome jitter problems.

The test involved:

1) taking the A/D signal and passing it directly to the D/A as a baseline.  I force-clocked the D/A, so there should be no disconnect between what the A/D is outputting and the D/A is inputting.  

2) I then performed the same test, this time passing the signal through the computer.  So I went from the A/D into the computer and then into the D/A.  I fed the original clocking signal from the A/D into the D/A.  I saved a copy of the file from the A/D as a reference file to be used later.  The signal from this test was nearly identical to the first.  I attribute the actual difference to the processor...

3) I then took the saved file on the computer from step (2), modified it, and saved it as a copy.  I compared the two binary files (the original with the modified one), and subtracted the original one from the modified one.  This was called my binary "change" file.  I then re-added the "change" file to the original file, and sent it out through the D/A, using my original A/D clocking to force-clock the D/A.  The conversion worked!  

From this, I gather that the D/A process can be driven by the original A/D clock parameters.  To accomplish this, it is necessary to preserve the original file from the A/D, and then subsequent changes to this original file.  It is also necessary to save the actual clocking used during the A/D conversion.  Each change made to the file in the computer is saved as a new file, and subtracted from the sum of the previous set of files, and then when it is time to output the data, the D/A can be driven by the original set of parameters used to input them into the computer in the first place.  So the sum of the "change files" plus the original file drives the process, rather than the final file itself.  
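The "change file" bookkeeping described above can be sketched in a few lines (hypothetical code using plain integer sample arrays; the names are mine, not from the experiment): store the difference between the edited file and the original, and original plus difference reconstructs the edited data bit-exactly.

```python
import numpy as np

original = np.array([0, 1000, -2000, 3000, 4000], dtype=np.int32)  # A/D capture
edited   = np.array([0, 1500, -2000, 2500, 4000], dtype=np.int32)  # after mixing

change_file = edited - original           # step 3: subtract original from modified
reconstructed = original + change_file    # re-add the delta before sending to the D/A

print(np.array_equal(reconstructed, edited))  # True: bit-exact reconstruction
```

Note that the reconstruction is exact by construction; the open question in the experiment is only about clocking, not about the data itself.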

Mathematically, I believe the variance between these two approaches will differ by four cosine factors: 1) due to obliquity of the wire connectors relative to an ideal signal path; 2) due to transistor effects in the computer; 3) and 4) due to increased distance of cabling from the clock. The parameterization, I expect, should be somewhere around (cos(theta))^4. This factor should be correctable via a polynomial mapping of the difference, which can be taken out during calibration.
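Whatever the true shape of the systematic difference, the calibration idea suggested here (fit the measured difference between the two paths with a polynomial, then subtract it out) can be sketched as follows; the error shape and numbers are invented purely for illustration:

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 200)            # input level (normalized)
true_error = 0.01 * x**3 - 0.002 * x       # pretend systematic path error
measured = x + true_error                  # path B output relative to path A

# Calibration: model the difference with a polynomial, then remove it.
coeffs = np.polyfit(x, measured - x, deg=3)
corrected = measured - np.polyval(coeffs, x)

print(bool(np.max(np.abs(corrected - x)) < 1e-9))  # True: residual is tiny
```

This only works for errors that are repeatable functions of level; random jitter noise has no such mapping and cannot be calibrated out this way.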

In all, an interesting experiment.  

My World

From what I can tell thus far, it seems to be something like (assuming a properly tuned room):
-- 3-d input format, such as W,X,Y,Z (B-format)
-- into a closely-held mic pre (I like your idea of the pre being close to the mic, in fact I think you should patent this idea if you haven't already)
-- into the front end (all the stuff that makes it "sound good" like compressors, EQ's, etc.)
-- into a pristine A/D
-- into a solid computer with no data loss, probably through some signal carrier
-- out to a pristine D/A to monitors, and format conversion to DSD, etc.

Much of what you say is about sonics, which is a never-ending debate… For example, some would prefer an analog EQ or compressor, and they are not all wrong either. It is one thing to do it before the AD conversion, but after the AD, the price (cost and sonic) is a whole other AD and DA…

Apparently from your previous post, the D/A can be a significant source. This is interesting to me because the D/A always struck me as a low-jitter process, assuming it was clocked properly to begin with. And proper clocking I have always attributed to proper calibration...bypassing the jitter characteristics of the computer and clocking the D/A directly from the clock on the A/D…

There are 3 places where even a tiny jitter (less than 100pSec) will impact the signal:
1. AD conversion jitter on the input circuitry (sample and hold circuit)
2. DA conversion jitter on the output circuitry (input to the analog filter circuitry)
3. Sample rate converter clocks (both input and output clocks)

All issues regarding data transfer and handling can tolerate orders of magnitude more jitter: many nsec, or even tens of nsec. There are 1000psec in a nsec.
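A sense of scale (my own numbers, not Dan's): on an AES3/S-PDIF style link, one unit interval, the shortest cell of the biphase-mark code, is 1/(128 × sample rate), so even several nanoseconds of jitter is a small fraction of a cell and data recovery is unaffected.

```python
def unit_interval_ns(sample_rate_hz):
    """AES3 unit interval in nanoseconds: 128 biphase-mark cells
    per 64-bit frame (2 subframes x 32 bits x 2 cells)."""
    return 1e9 / (128 * sample_rate_hz)

print(round(unit_interval_ns(48_000), 1))   # ~ 162.8 ns at 48 kHz
print(round(unit_interval_ns(192_000), 1))  # ~ 40.7 ns at 192 kHz
```

Compare those cell widths to the sub-100pSec budget at the conversion points above: the data path has thousands of times more margin.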

So it is not so much about keeping everything (AD, DA and computer) on one clock. It is a cool idea to keep the AD and DA using the same clock, but your computer would have to be "locked to" (properly buffered against) the same clock as well. The upside of such an approach is the ability to use one good internal clock for both AD and DA. The downside is loss of flexibility. Given a choice between slaving the AD or the DA, I would certainly prefer the AD to get the best clock (such as an internal crystal), and let the DA operate on a PLL. Why? Because the AD is where you define the data. Whatever is lost in the AD is gone forever. Good AD data played on a bad DA clock (or bad DA device) can be "fixed" by changing to a good DA clock (or device).

So a question comes up--it's no surprise that each vendor builds things differently...different wiring, different connectors, etc. And each wire composition (silver, copper, cryogenic, single-crystal, etc.) and capacitors, etc. pass the information differently...and the "consumer-friendly" units, on the whole, don't factor in sonics as much as production. So then, to help keep the signal "pristine," but with all the correct elements, do I have to go to the trouble of re-wiring the system with common wires, etc.?
I think this was the reason I related it to the Digi I/O...because I am going to great lengths to fine-tune my signal path, and then the Digi I/O comes along as somewhat of a "transfer box" that I could really just re-wire myself, with my choice of manufacturers' philosophies.

I have a lot of friends in audio who are forever listening to types of op amps, transistors, capacitors… chasing materials, listening again and again… I know people who would rule out bipolar-transistor-based amps, and others who dislike FETs… It is pretty nuts out there… You could fill an encyclopedia with what is mostly nonsense!

The fact is: polystyrene caps are great for sample and hold, but may be less than ideal elsewhere. Some op amps will be very clean in one circuit configuration, and distort in another. In general, the whole issue of which materials and types of electronic components work best is VERY MUCH DEPENDENT on the circuit itself.

Not unlike words in the English language, it is the way you put them together that makes a sentence. Of course it is not a perfect analogy. Of course there are well made parts and poorly made parts out there. But the key to good performance is about good use of parts as an integral part of a circuit.

Of course, most of the "quest" for better parts has been on the analog side. It started way before digital audio, and "everyone knows" that digital is at least a step away from the analog signal. Say we are looking for an X2 gain stage. As a designer, I would be very happy to have good results, and good results are no more and no less than a perfect magnification by a factor of 2. Say I met the goal. Should I then care about the way it was accomplished? Was it bipolar? Tube? A "piece of wood" with 2 wires and ground?
Unfortunately, in audio, too often the focus is on how one gets there, not on how good the result is.

Regarding wiring that "Digi transfer box": it is not about rewiring, it is about a Digi decision to make their hardware proprietary, like a "pass protection code". I respect their right to intellectual property. The couple of manufacturers you mentioned "broke the code" (not that difficult), and so far Digi has decided not to act on it.
In any case, it would be more than wiring a transfer box, and there is no reason to do it. The digital I/O box works fine. It is a digital data transfer box, with no AD and DA conversion.

Am I going overboard here, or should I really focus on the D/A like I think you might have suggested? Why is the D/A such a problem that it can't simply be sync'd to some baseline and do a data redundancy check?

A D/A has to be very precise in terms of converting from digital to analog. The issue is not about receiving correct data. Say you want "only" 16 bits of precision. For a 10V p-p signal, there are 65536 small steps, each about 150uV (micro volts). For 20 bits, each step is about 10uV. The codes change every 22uSec (micro seconds) at the 44.1KHz CD rate, and the voltage should follow the code very precisely…
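The arithmetic behind those step sizes can be checked in a couple of lines (a sketch; the 10V p-p full scale is taken from the post above):

```python
def lsb_volts(bits, full_scale_v=10.0):
    """Size of one code step for a given bit depth over a 10V p-p range."""
    return full_scale_v / 2**bits

print(round(lsb_volts(16) * 1e6, 1))  # 16 bits: ~ 152.6 uV per step
print(round(lsb_volts(20) * 1e6, 1))  # 20 bits: ~ 9.5 uV per step
print(round(1e6 / 44_100, 1))         # sample period at 44.1 kHz: ~ 22.7 uSec
```

Settling to within a few microvolts every 22 microseconds is the analog precision problem; it has nothing to do with whether the bits arrived intact.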

Dan Lavry

