R/E/P Community


Author Topic: Bypass 192 I/O?  (Read 5818 times)

12345

  • Full Member
  • Posts: 140
Bypass 192 I/O?
« on: October 01, 2004, 05:53:01 PM »

I am looking for an off-the-shelf solution.
MW

danlavry

  • Hero Member
  • Posts: 997
Re: Bypass 192 I/O?
« Reply #1 on: October 01, 2004, 10:00:27 PM »

"Here is my dilemma: Using an HD/Accel 3 system, it seems that Digi is intent on keeping their I/O in the signal path. So if I use a converter, I still have to go through their circuitry, which I don't want to do unless there is some overwhelming reason specifically related to sonic quality. I understand Apogee is coming out with their own card that accepts Digi protocol, and also that Prism has "used" the Digi protocol in their designs…”

Hello to you.

Being a moderator is new for me. I am going to do my best to stay on technical tracks. As a maker of audio gear, I must be particularly careful not to promote my own gear, and in all fairness to other manufacturers, I must not get into specifics about anyone’s gear.

Regarding the technical side of your comments:

”My ultimate goal is to create surround masters (up to 7.1), for now in a laboratory setting. Can you speak as to the pros/cons of the various output media (DSD, etc.), and the best conversion process to get there? And if the "best conversion process" can bypass the Digi box, or Apogee, etc., how would this happen? Would I have to scrap Protools?”

Other than cost issues, I am not aware of any downside to a properly done digital transfer into a workstation or a storage device. If the unit doing the transfer works properly, and each bit finds its place on the hard drive (or whatever), all is fine. There are no issues regarding clock time jitter when going into the storage media, or into real-time processing hardware. As long as each bit is properly captured and handled (no errors), all is fine. So far, the issue of data handling is relatively easy.

What about retrieving data from digital storage, a workstation or what not? This time, one may or may not care about the issue of time jitter. You care when the data drives a DA converter and you are listening to it. Time jitter all of a sudden becomes important, because it impacts the sonic outcome. But in all fairness, the main responsibility for cleaning up the jitter is on the DA circuitry (such as the PLL circuitry of the digital audio receiver). True, less incoming jitter helps a bit. Some go for a re-clocking scheme based on an SRC circuit. I view it as trading off “jitter sonics” for “SRC sonics” (seen as widening of the main lobe on the FFT). I cannot talk here about my own methods.
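
To put rough numbers on this, here is a minimal Python sketch (my illustration, not Dan's method): it samples a sine at randomly jittered instants and reports the error relative to ideal sampling as an SNR. The sample rate, tone frequency, and jitter amounts are arbitrary values chosen for the demonstration.

    # Toy model: error introduced by sampling-clock jitter on a sine wave.
    # The error grows with both tone frequency and RMS jitter, which is why
    # jitter matters at the conversion points, not during data transfer.
    import numpy as np

    fs = 192_000                  # sample rate, Hz (arbitrary)
    f = 10_000                    # test tone, Hz (arbitrary)
    n = np.arange(65_536)
    rng = np.random.default_rng(0)

    for jitter_rms in (100e-12, 1e-9, 10e-9):   # 100 psec, 1 nsec, 10 nsec
        t = n / fs + rng.normal(0.0, jitter_rms, n.size)  # jittered instants
        ideal = np.sin(2 * np.pi * f * n / fs)
        err = np.sin(2 * np.pi * f * t) - ideal
        snr = 10 * np.log10(np.mean(ideal**2) / np.mean(err**2))
        print(f"{jitter_rms * 1e12:6.0f} psec RMS jitter -> SNR {snr:5.1f} dB")

For a 10KHz tone, 100 psec of RMS jitter already limits the result to roughly 100 dB, consistent with the sub-100pSec sensitivity mentioned below.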

So cost issues aside, I do not think the data I/O is where the problems raise their ugly heads.

Regarding your question about how to achieve 192KHz conversion:
Almost all makers of 192KHz AD gear use AKM chips or Crystal (Cirrus) chips for the conversion process.

BR
Dan Lavry  


12345

  • Full Member
  • Posts: 140
Re: Bypass 192 I/O?
« Reply #2 on: October 02, 2004, 01:12:08 AM »

Am I going overboard here, or should I really focus on the D/A like I think you might have suggested? Why is the D/A such a problem that it can't simply be sync'd to some baseline and do a data redundancy check? Or store the file history of the changes made during mixing, and force the D/A to write from a set of change parameters referenced to the original file? As long as the "change parameters" can be translated sonically, this should be feasible (perhaps, as a quick-and-dirty suggestion, a relative orientation to the spectral analysis of the original dataset based on the time domain?). In this case, the computer would simply act as a "relative carrier," rather than providing the baseline for the entire D/A process. The amount of computer memory required would be phenomenal, but isn't this what the computer is allowing us to do? To me, this would also help in the processing of 3-D sonic data, because each W,X,Y,Z could be maintained in a matrix, and even visualized.

I thank you very much for your response to my earlier question, and I hope you can find the time to answer this somewhat-abstract question related to "removing sources of error".  

Regards,
MW

12345

  • Full Member
  • Posts: 140
Re: Bypass 192 I/O?
« Reply #3 on: October 04, 2004, 05:30:05 PM »

Okay...I performed a few models to test this theory. The theory was that rather than using the signal from the computer to drive the D/A, an alternative method could take the original dataset from the A/D and, after the file has been mixed/edited in the computer, use the original dataset plus the "change file" to directly guide the D/A...so the D/A can be driven from the original signal and overcome jitter problems.

The test involved:

1) taking the A/D signal and passing it directly to the D/A as a baseline.  I force-clocked the D/A, so there should be no disconnect between what the A/D is outputting and the D/A is inputting.  

2) I then performed the same test, this time passing the signal through the computer.  So I went from the A/D into the computer and then into the D/A.  I fed the original clocking signal from the A/D into the D/A.  I saved a copy of the file from the A/D as a reference file to be used later.  The signal from this test was nearly identical to the first.  I attribute the actual difference to the processor...

3) I then took the saved file on the computer from step (2), modified it, and saved it as a copy.  I compared the two binary files (the original with the modified one), and subtracted the original one from the modified one.  This was called my binary "change" file.  I then re-added the "change" file to the original file, and sent it out through the D/A, using my original A/D clocking to force-clock the D/A.  The conversion worked!  

From this, I gather that the D/A process can be driven by the original A/D clock parameters.  To accomplish this, it is necessary to preserve the original file from the A/D, and then subsequent changes to this original file.  It is also necessary to save the actual clocking used during the A/D conversion.  Each change made to the file in the computer is saved as a new file, and subtracted from the sum of the previous set of files, and then when it is time to output the data, the D/A can be driven by the original set of parameters used to input them into the computer in the first place.  So the sum of the "change files" plus the original file drives the process, rather than the final file itself.  
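
One note on the arithmetic (my own sketch, with illustrative names and values): subtracting the original file from the modified one and re-adding the difference is an exact identity on the sample data, so step (3) is guaranteed to reconstruct the modified file bit for bit, independent of any clocking.

    # The "change file" round trip: (modified - original) + original == modified.
    # This holds bit-exactly for integer sample data, regardless of how the
    # D/A is eventually clocked.
    import numpy as np

    original = np.array([0, 1024, -2048, 512, 0], dtype=np.int32)  # A/D capture
    modified = original.copy()
    modified[2] //= 2                                   # some edit to the file
    change = modified - original                        # the "change" file
    assert np.array_equal(original + change, modified)  # always true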

Mathematically, I believe the variance between these two approaches will differ by four cosine factors: 1) due to obliquity of the wire connectors relative to an ideal signal path; 2) due to transistor effects in the computer; 3) and 4) due to increased distance of cabling from the clock. The parameterization, I expect, should be somewhere around (cos(theta))^4. This factor should be correctable via a polynomial mapping of the difference, which can be taken out during calibration.

In all, an interesting experiment.  

Sincerely,
My World

danlavry

  • Hero Member
  • Posts: 997
Re: Bypass 192 I/O?
« Reply #4 on: October 04, 2004, 08:30:42 PM »

“From what I can tell thus far, it seems to be something like (assuming a properly tuned room):
-- 3-d input format, such as W,X,Y,Z (B-format)
-- into a closely-held mic pre (I like your idea of the pre being close to the mic, in fact I think you should patent this idea if you haven't already)
-- into the front end (all the stuff that makes it "sound good" like compressors, EQ's, etc.)
-- into a pristine A/D
-- into a solid computer with no data loss, probably through some signal carrier
-- out to a pristine D/A to monitors, and format conversion to DSD, etc.”

Much of what you say is about sonics, which is a never-ending debate… For example, some would prefer an analog EQ or compressor, and they are not all wrong either. It is one thing to do it before the AD conversion, but after the AD, the price (cost and sonic) is a whole other AD and DA…

“Apparently from your previous post, the D/A can be a significant source. This is interesting to me because the D/A always struck me as a low-jitter process, assuming it was clocked properly to begin with. And proper clocking I have always attributed to proper calibration...bypassing the jitter characteristics of the computer and clocking the D/A directly from the clock on the A/D…”

There are 3 places where even a tiny jitter (less than 100pSec) will impact the signal:
1. AD conversion jitter on the input circuitry (Sample and hold circuit)  
2. AD conversion jitter on the output circuitry (input to the analog filter circuitry)
3. Sample rate converter clocks (both input and output clocks)

All issues regarding data transfer and handling can tolerate orders of magnitude more jitter: many nsec, or even tens of nsec. There are 1000 psec in a nsec.

So it is not so much about keeping everything (AD, DA and computer) on one clock. It is a cool idea to keep the AD and DA on the same clock, but your computer would have to be “locked to” (properly buffered against) the same clock as well. The up side of such an approach is the ability to use one good internal clock for both AD and DA. The down side is loss of flexibility. Given a choice between slaving the AD or the DA, I would certainly prefer the AD to get the best clock (such as an internal crystal), and let the DA operate on a PLL. Why? Because the AD is where you define the data. Whatever is lost in the AD is gone forever. Good AD data played on a bad DA clock (or bad DA device) can be “fixed” by changing to a good DA clock (or device).

“So a question comes up--it's no surprise that each vendor builds things differently...different wiring, different connectors, etc. And each wire composition (silver, copper, cryogenic, single-crystal, etc.) and capacitors, etc. pass the information differently...and the "consumer-friendly" units, on the whole, don't factor in sonics as much as production. So then, to help keep the signal "pristine," but with all the correct elements, do I have to go to the trouble of re-wiring the system with common wires, etc.?
I think this was the reason I related it to the Digi I/O...because I am going to great lengths to fine-tune my signal path, and then the Digi I/O comes along as somewhat of a "transfer box" that I can really just re-wire myself, with my choice of manufacturer's philosophies.”

I have a lot of friends in audio who are forever listening to types of OP amps, transistors, capacitors… chasing materials, listening again and again… I know people who would rule out bipolar-transistor-based amps, and others who dislike FETs… It is pretty nuts out there… You could fill an encyclopedia with what is mostly nonsense!

The fact is: polystyrene caps are great for sample-and-hold, but may be less than ideal elsewhere. Some OP amps will be very clean in one circuit configuration and distort in another. In general, which materials and types of electronic components work best is VERY MUCH DEPENDENT on the circuit itself.

Not unlike words in the English language, it is the way you put them together that makes a sentence. Of course it is not a perfect analogy. Of course there are well made parts and poorly made parts out there. But the key to good performance is about good use of parts as an integral part of a circuit.

Of course, most of the “quest” for better parts has been on the analog side. It started way before digital audio, and “everyone knows” that digital is at least a step away from the analog signal. Say we are looking for an X2 gain stage. As a designer, I would be very happy to have good results, and good results are no more and no less than a perfect magnification by a factor of 2. Say I met the goal. Should I then care about the way it was accomplished? Was it bipolar? Tube? A “piece of wood” with 2 wires and ground?
Unfortunately, in audio, the focus is too often on how one gets there, not on how good the result is.

Regarding wiring that “Digi transfer box”: it is not about rewiring, it is about a Digi decision to make their hardware proprietary, like a “pass protection code”. I respect their right to intellectual property. The couple of manufacturers you mentioned “broke the code” (not that difficult), and so far Digi has decided not to act on it.
In any case, it would take more than wiring a transfer box, and there is no reason to do it. The digital I/O box works fine. It is a digital data transfer box, with no AD or DA conversion.

“Am I going overboard here, or should I really focus on the D/A like I think you might have suggested? Why is the D/A such a problem that it can't simply be sync'd to some baseline and do a data redundancy check?”

A D/A has to be very precise in converting from digital to analog. The issue is not about receiving correct data. Say you want “only” 16 bits of precision. For a 10V p-p signal, there are 65536 small steps, each about 150uV (microvolts). For 20 bits, each step is about 10uV. The codes change every 22uSec (microseconds) at the 44.1KHz CD rate, and the voltage should follow the code very precisely…
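
As a quick check of those magnitudes, here is a back-of-envelope Python sketch assuming the same 10V p-p full-scale range:

    # LSB step size for a 10 V p-p range, plus the CD-rate sample period.
    for bits in (16, 20, 24):
        step_uV = 10.0 / 2**bits * 1e6          # one code step, in microvolts
        print(f"{bits} bits: {step_uV:7.2f} uV per step")
    print(f"44.1KHz sample period: {1e6 / 44_100:.1f} uSec")

This prints about 152.59uV for 16 bits, 9.54uV for 20 bits, and 0.60uV for 24 bits, with a 22.7uSec sample period.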

BR
Dan Lavry

12345

  • Full Member
  • Posts: 140
Re: Bypass 192 I/O?
« Reply #5 on: October 06, 2004, 12:30:59 AM »

Thanks for your response.  

I think the Digi I/O was a great idea by Digi...for them.  But now I am locked into their circuitry, connectors...  It is something I cannot control unless I switch platforms.

I am glad you think the A/D deserves the best clock--I too support the idea that the signal must start from purity and be preserved until the digital storage stage. (The reason I am glad is that I respect your work, and hearing an expert say it is nice for my ego.) After that, it can always be accessed without loss...and re-sent through different D/A's. Honestly, I have done very little work on the D/A side, because I don't even consider it important until the A/D is worked out. I have spent so much time trying to configure the front end that I am *hoping* it can more or less be applied in reverse when the time comes (hopefully?).

Now I was wondering: if the sound headed for the 3 critical areas requiring <100psec accuracy could be filtered out of the time domain before hitting those stages, what would happen? What if I decided that having real-time access to my data is not important? What if I don't need to hear the music at all, for that matter? What if I could input an analog signal, let the converter cache it and process it for 10 hours if necessary to keep it pristine, and after 10 hours end up with the data on hard disc? I know this is mostly what A/D's do, but how about taking it to the extreme?

What if I sent the sound through an isolation or power transformer to reduce line distortion, such as by restricting electromagnetic energy to a very narrow frequency pass band? This could attenuate higher-harmonic and spike distortions that fall outside the narrow range of the transformer's low-pass filter.

I have seen work such as this on bound capacitor-toroid systems (by toroid I mean a large toroid) based on an increase of internal series inductance, a reduction in primary-to-secondary capacitance, and phase cancellation. At low frequencies, the capacitor acts as an open switch. At high frequencies, the capacitor behaves as a closed switch. This is tuned by the choice of capacitor and series inductance. Overall, the system resolves both differential- and common-mode noise.
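
The "open switch / closed switch" picture is just the capacitor's impedance falling with frequency; a quick Python check (the component value is an arbitrary illustration):

    # |Z| = 1 / (2*pi*f*C): large at low frequencies ("open switch"),
    # small at high frequencies ("closed switch").
    import math

    C = 1e-6                            # 1 uF, arbitrary
    for f in (50, 1_000, 100_000):      # Hz
        z = 1.0 / (2 * math.pi * f * C)
        print(f"{f:>7} Hz: |Z| = {z:8.2f} ohms")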

As the sound passes through this massive "filter," what if we permit the entire system to "clean" itself, so that although it takes a long time to get the data, when we get it, it is ready for action in the computer? It would slow mixing and tracking time in a fast-paced studio, but for those who don't care so much about throughput, it seems like allowing the system to "take its time" could yield benefits in the time domain. In this case, the *preservation* of the signal would take precedence over *clocking* of the signal. Some people would call it "slow," but some people might like it.

Couldn't this type of filter "clean up" the 150uV to 10uV steps...and make the 22uSec timespans less stringent?

12345

  • Full Member
  • Posts: 140
Re: Bypass 192 I/O?
« Reply #6 on: October 06, 2004, 03:08:24 PM »

Just a note--the previous suggestion for the "pass band" filter was only a brainstorm...I was simply suggesting a way to "verify" or "clean" the signal that might take more time, but might result in a higher level of confidence.  There are 3 or 4 implementations that could work...but the above method seems like it would be inexpensive and for the most part, off-the-shelf.  

I am curious to hear how many people might be interested in giving up "real-time" mixing for the sake of a possibly cleaner signal.  I am almost afraid to hear, because I think that for the most part the answer would be "no."  But if this is possible, then I am curious to hear what the outcome would sound like.  

Something tells me that this might lead back to questioning the original signal-approximation technique of conversion...I think.

I have also been thinking of a completely different approach to conversion involving interferometry. It would go something like this: frequencies input into some controller would be passed under 3 reference signals...such as a 50 Hz signal, a 52 Hz signal, and a 48 Hz signal (these are arbitrary). All 3 signals would then be overlaid with the original signal and interfered. An interference pattern would result that can be digitally extrapolated back to the original signal. I believe this approach would yield higher spatial accuracy and also resolve some time-domain issues, since the interference pattern can be calculated outside of the time domain. In this case, we would not be sampling the signal, but rather the resultant interference of the signal. A nice thing about interferometry is that the output is pre-calculated, rather than having to worry about signal integrity issues.
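
As a rough sanity check of the beat-pattern idea (my own toy sketch, not a converter design): multiplying an unknown tone by each nearby reference produces a component at the difference frequency, which can be read off an FFT and mapped back to the unknown.

    # Interfere (multiply) an unknown tone with nearby references; the product
    # contains a beat component at |f_unknown - f_ref|, visible in the FFT.
    import numpy as np

    fs = 8_000
    t = np.arange(0, 4.0, 1 / fs)
    f_unknown = 50.3                                # Hz, tone to be inferred
    sig = np.sin(2 * np.pi * f_unknown * t)
    freqs = np.fft.rfftfreq(t.size, 1 / fs)
    low = (freqs > 0.1) & (freqs < 20.0)            # beat region, excluding DC
    for f_ref in (48.0, 50.0, 52.0):
        product = sig * np.sin(2 * np.pi * f_ref * t)
        spec = np.abs(np.fft.rfft(product))
        beat = freqs[low][np.argmax(spec[low])]
        print(f"ref {f_ref:4.1f} Hz -> beat ~{beat:.2f} Hz "
              f"(unknown ~{f_ref + beat:.2f} or {f_ref - beat:.2f} Hz)")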

Regards,
My World

Bob Olhsson

  • Hero Member
  • Posts: 3968
Re: Bypass 192 I/O?
« Reply #7 on: October 06, 2004, 07:04:44 PM »

My World wrote on Tue, 05 October 2004 23:30

...I am locked into their circuitry, connectors...  It is something I cannot control unless I switch platforms...  

I wouldn't say you are really locked into anything since all of their interfaces support AES/EBU as do all of the better converters. Another platform just requires another company's interface boards or cards. I think a lot of the advertising claiming "support" is little more than marketing hype.