R/E/P Community

R/E/P => R/E/P Archives => Dan Lavry => Topic started by: danlavry on October 13, 2004, 06:08:42 PM

Title: Time delay problems, real or not?
Post by: danlavry on October 13, 2004, 06:08:42 PM
Time delays: when are they real?

Some time delays are important to watch for. Others are of academic interest or of little practical use. Let us examine some cases:

An important, and often overlooked case is when mixing (adding) a signal that appears on more than one track. Perhaps the simplest example is a stereo recording, when some portion of the sound arrives at both the L and R channel. The common practice in stereo is to use a stereo converter with equal delay on both channels. Yet, any additional processing done to one channel but not the other may make the delays unequal.

Of course, the same situation applies to multi channel recording. Not unlike stereo, it is best (from a time-matching standpoint) to use a multi channel AD utilizing a common clock. Mixing AD's made by different manufacturers is likely to introduce time delays between channels. Again, keeping the portion of the sound (signals) shared by more than one channel at equal delay is a good idea. Maintaining equal delay all the way to the mix can prevent problems.

What are the problems?

Say you wish to add 2 simple signals. Both are equal 1KHz sine wave tones. The expected result is to double the amplitude. But a 1KHz cycle lasts 1000usec, so if one tone is delayed by, say, 500usec (half a cycle), the signals are exactly out of phase and the addition will yield a total cancellation.

Reducing the delay to less than 500usec will cause a partial cancellation. The concept of cancellation or partial cancellation (attenuation) does not require equal amplitude waves, or even equal waves. Such signal attenuation due to time delay happens to the portion of the sound wave that is shared by the channels being added (mixed).

A lot of delay is required to cause attenuation of very low frequency energy, but higher frequencies are much more susceptible. For example, a 20KHz signal cycle lasts 50usec. Half a cycle is 25usec, therefore 25usec is the point of maximum attenuation. The same 25usec inter-channel delay will have little effect on a 100Hz tone, where a cycle lasts 10000usec.

How good of a time match?

Of course, the answer depends on how much delay is acceptable and at what frequency.
Below is some reference data I computed for those interested:

25usec delay at 1KHz attenuates by -.027dB
25usec delay at 5KHz attenuates by -.688dB
25usec delay at 10KHz attenuates by -3.01dB
25usec delay at 15KHz attenuates by -8.343dB
25usec delay at 20KHz attenuates completely (no signal)

10usec delay at 1KHz attenuates by -.004dB
10usec delay at 5KHz attenuates by -.108dB
10usec delay at 10KHz attenuates by -.436dB
10usec delay at 15KHz attenuates by -1.002dB
10usec delay at 20KHz attenuates by -1.841dB

5usec delay at 1KHz attenuates by -.001dB
5usec delay at 5KHz attenuates by -.027dB
5usec delay at 10KHz attenuates by -.108dB
5usec delay at 15KHz attenuates by -.243dB
5usec delay at 20KHz attenuates by -.436dB

1usec delay at 1KHz attenuates by -.00004dB
1usec delay at 5KHz attenuates by -.001dB
1usec delay at 10KHz attenuates by -.004dB
1usec delay at 15KHz attenuates by -.009dB
1usec delay at 20KHz attenuates by -.017dB

The data above is a good indicator of the amount of attenuation, due to a given inter-channel delay (25, 10, 5 or 1usec), when mixing a 1KHz, 5KHz, 10KHz, 15KHz or 20KHz tone with a delayed copy of itself.
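The table follows from summing a tone with an equal-amplitude delayed copy of itself: the summed amplitude is scaled by |cos(pi*f*d)|. For anyone who wants to check the numbers, here is a small Python sketch (the function name is my own, chosen for illustration):

```python
import math

def sum_attenuation_db(freq_hz: float, delay_s: float) -> float:
    """Attenuation (dB) when a tone is mixed with an equal-amplitude copy
    of itself delayed by delay_s, relative to the ideal in-phase sum.

    sin(w*t) + sin(w*(t-d)) = 2*cos(w*d/2)*sin(w*(t-d/2)), so the summed
    amplitude is scaled by |cos(pi*f*d)|.
    """
    scale = abs(math.cos(math.pi * freq_hz * delay_s))
    if scale < 1e-12:
        return float("-inf")          # total cancellation
    return 20.0 * math.log10(scale)

# Reproduce two rows of the table:
print(round(sum_attenuation_db(10_000, 25e-6), 2))  # -3.01
print(sum_attenuation_db(20_000, 25e-6))            # -inf
```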

This is one case when delay can make a big difference. Note that I am talking about ELECTRICAL SIGNAL DELAY, not acoustic delay of sound in the air. It is difficult (if not impossible) to control the acoustic delay to, say, 1usec. Yet, keeping the AD conversion and processing delay EQUAL will guard against such cancellation. I AM NOT TALKING ABOUT AN ACOUSTIC ISSUE SUCH AS MIC PLACEMENT. I AM TALKING ABOUT AN ELECTRICAL SIGNAL HANDLING ISSUE.

To be continued...

Br
Dan Lavry
     
Title: Re: Time delay problems, real or not?
Post by: Kendrix on October 14, 2004, 09:02:59 PM
Another interesting topic.

In the domain of sound waves the generally accepted rule of thumb is the 3:1 minimum ratio of distances for positioning 2 mics to record the same source.  As I understand it, this distance/delay is enough to minimize the phase-induced "attenuation" by offsetting the signals sufficiently to cause them to be effectively uncorrelated phase-wise for most frequencies/sources.  In this case no cancellation occurs. (Real-world sources are not pure sine wave generators.)

If we apply such a rule to the electronic/digital domain, it suggests that you might be OK if the differential delay between correlated channels is greater than X.  If you can tolerate X delay from a musical/timing standpoint then the phase-induced attenuation might not be an issue.

What value might X have? I suspect something on the order of 10 milliseconds would have sonic impact at key frequencies.

So, if you can't keep all the channels sufficiently tight ( delay wise - for instance because of differential processing) then it might be good to really loosen them up.

Am I not accounting for something?
Title: Re: Time delay problems, real or not?
Post by: bobkatz on October 14, 2004, 09:49:23 PM
Kendrix wrote on Thu, 14 October 2004 21:02



In the domain of sound waves the generally accepted rule of thumb is the 3:1 minimum ratio of distances for positioning 2 mics to record the same source.  As I understand it, this distance/delay is enough to minimize the phase induced "attenuation" by offsettng the signals sufficiently to cause them to be effectively uncorrelated phase-wise for most frequencies/sources.  In this case no cancellation occurs. ( real world sources are not pure sine wave generators)




Right, and this applies to real world acoustical situations when the two mikes are being combined to the same channel (in mono). Burroughs determined that in an anechoic space you would need far more than a 3:1 ratio, but normal acoustics, the normal attenuation due to air, and the tolerance of the ear make 3:1 acceptable.

BUT

Quote:



If we apply such a rule to the electronic/digital domain it suggests that you might be OK if the differential delay between correlataed channels is greater than X.  If you can tolerate X delay from a musical/timing standpoint then the phase-induced attenuation might not be an issue.  




Well, in multitracking, there usually are no correlated channels... everything's isolated if it's overdubbed. Or, if not isolated, then the correlation comes from the room acoustics and it falls into 3:1 anyway. Can you give an example of your idea and explain why this is different from the acoustical case, where you generate correlated musical signals at different times and your idea of increasing the delay to avoid comb filtering becomes relevant?

I'm trying to think of a for instance... When would you even get two correlated sources into two different tracks of a multitrack except through use of microphones and normal acoustics anyway?
Title: Re: Time delay problems, real or not?
Post by: Rick Sutton on October 14, 2004, 10:26:57 PM
I've got a real world example that I'd like to get your opinion on. I'm recording a solo acoustic guitar album and am using 4 mics into 4 channels of Pro Tools. Two of the mics get processed through Lavry Blue AD/DA. Because I only have two channels of Lavry AD/DA, the other two channels are processed through Digi 888 AD/DA. Both converters are slaved to a Rosendahl clock. All four channels are returned to separate faders on an analog console for mixing. Am I getting timing errors (other than the acoustic ones) due to different processing times in the two converters? If I am getting errors, are they potentially significant considering that the mics on the Lavry are a spread pair and the other mics (on the 888) are positioned close together and in the middle of the spread pair? Obviously there are phase differences present in the mic positioning, and I'm wondering if this negates any need to match converters... or does it make it more important that all mics go through the same converters?
Thanks, Rick
Title: Re: Time delay problems, real or not?
Post by: bobkatz on October 15, 2004, 10:23:40 AM
Rick Sutton wrote on Thu, 14 October 2004 22:26

I've got a real world example that I'd like to get your opinion on. I'm recording a solo acoustic guitar album and am using 4 mics into 4 channels of pro tools.






The time delay differences I've observed between different models of A/D (even when synchronized) have been as low as "less than a sample" to as much as a few samples. Very very rarely in the milliseconds.

My guess is that if you are using two sets of mikes that obey the 3:1 rule or greater you will not run into any trouble with the additional delay differences of different models of converters. For example, a stereo pair close and a pair in the ambient field of the room. I once recorded an ensemble that exact same way, and analysed the delay by sending a short click through line inputs of the console into all the converters. But the delay, which was LESS than a sample and therefore not alignable in the EDL by simply sliding, was totally academic, as the second converter was on a pair of ambience mikes many feet away from the main mikes.

BK
Title: Re: Time delay problems, real or not?
Post by: Kendrix on October 15, 2004, 01:53:47 PM
bobkatz wrote on Fri, 15 October 2004 02:49

Kendrix wrote on Thu, 14 October 2004 21:02

 




Well, in multitracking, there usually are no correlated channels... everything's isolated if it's overdubbed. Or, if not isolated, then the correlation comes from the room acoustics and it falls into 3:1 anyway. Can you give an example of your idea and explain why this is different from the acoustical case, where you generate correlated musical signals at different times and your idea of increasing the delay to avoid comb filtering becomes relevant?

I'm trying to think of a for instance... When would you even get two correlated sources into two different tracks of a multitrack except through use of microphones and normal acoustics anyway?



Well - this is a bit of a theoretical discussion, however: Dan's original post suggested one case related to the common component of a stereo miking pair when some sort of differential processing might be applied.  Another might be duplicating a track, compressing one copy, and mixing it with the other/uncompressed track.  Sometimes folks dupe a track, apply different EQ or other processing to each, and mix them together.
Title: Re: Time delay problems, real or not?
Post by: danlavry on October 15, 2004, 02:31:37 PM
bobkatz wrote on Fri, 15 October 2004 15:23

Rick Sutton wrote on Thu, 14 October 2004 22:26

I've got a real world example that I'd like to get your opinion on. I'm recording a solo acoustic guitar album and am using 4 mics into 4 channels of pro tools.






The time delay differences I've observed between different models of A/D (even when synchronized) have been as low as "less than a sample" to as much as a few samples. Very very rarely in the milliseconds.
BK


Hi Bob,

I appreciate your participation, I learn from your comments  and most often I am in agreement with you.

Regarding time delay differences, please look at my numbers and you will see that a few samples of delay can cause a mess. A one sample delay difference at 44.1KHz is about 23uSec - close to the 25uSec case above: 3dB loss at 10KHz, 8.3dB loss at 15KHz...

Say you are doing some high frequency processing (such as some EQ on one channel); you may be adding a lot of delay, and that will cause cancellation.

My comments so far were made to warn people about potential delay problems in terms of attenuation. My "tables" for attenuation at 1usec... 25usec suggest that 25usec is a potential problem, thus it is good to stay with better delay matching.

Let me expand the statement some: longer delays are not going to manifest as "only high frequency attenuation". One ends up with comb-filter-like EQ. You will find deep notches at all sorts of frequencies! The action we are talking about is in fact a 2 tap analog FIR filter. The amplitude of the first mic is the first coefficient, the amplitude of the second mic is the second coefficient, and the time delay between the arrival of the first and second signal is the tap delay of the FIR filter.

Delays of a few samples can "dig big holes" in the mix, like notch filters do. It is not totally like analog notch filters: being an FIR with equal tap weights, the effect is linear phase - it does not add frequency-dependent phase distortion.
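That 2-tap FIR view is easy to check numerically. Below is a small Python sketch of the magnitude response (the function name and the 100usec example delay are my own, chosen only for illustration):

```python
import math

def two_tap_response_db(freq_hz: float, a1: float = 1.0, a2: float = 1.0,
                        delay_s: float = 100e-6) -> float:
    """Magnitude response (dB) of the 2-tap 'analog FIR' described above:
    y(t) = a1*x(t) + a2*x(t - delay).  |H(f)| = |a1 + a2*exp(-j*2*pi*f*d)|."""
    re = a1 + a2 * math.cos(2 * math.pi * freq_hz * delay_s)
    im = -a2 * math.sin(2 * math.pi * freq_hz * delay_s)
    mag = math.hypot(re, im)
    return 20 * math.log10(max(mag, 1e-12))  # clip the infinite notches

# With a 100usec delay, notches fall at f = (2k+1)/(2*delay): 5KHz, 15KHz, ...
print(round(two_tap_response_db(10_000), 1))  # +6.0 dB (copies in phase)
print(round(two_tap_response_db(5_000), 1))   # deep notch (clipped)
```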

BR
Dan Lavry        
Title: Re: Time delay problems, real or not?
Post by: danlavry on October 15, 2004, 06:40:12 PM
Time delay, when is it less real?

I have no reason to question reports stating that the ear can hear a 2usec interaural delay. It is true that a change in time delay of, say, 2usec is a very tiny acoustic difference, 0.024 inch!!!  How can anyone hear that? Can one hold their head that steady?

Clearly such reports do not suggest that you leave the room and come back to the same sitting position within .024 inches. What they tell you is that a sudden change in acoustic distance (or time delay) can be heard.

Say you want to repaint a part of your wall. The color is, say, light blue. The place where the old and new paint meet requires a near perfect match, because "side by side" comparison "magnifies" the difference. In fact, the old paint serves as a reference point for comparison with the new paint. But say you decided to paint the whole room with the new paint. Without a reference, the new paint may look indistinguishable from the old paint…

I am pretty sure that anyone hearing 2usec or 100usec interaural differences is talking about a "sudden change in delay". So the report is interesting, but does it have any value in the music world? Just because the ear can hear a sudden change, do we need to be concerned with it? Not without the existence of mechanisms that introduce such sudden changes while listening to music. I am not aware of any.

Some have proposed that interaural sensitivity should dictate faster impulse response, thus possibly faster sample rates. This is of course another confusion between apples and oranges, thus another fruit salad:

One can take a fast impulse or a slow impulse and delay it in time. The impulse width is determined by the audio bandwidth: impulse and bandwidth are one and the same! But you can take a 10usec, 1000usec or any signal (impulse or not) and delay it by 1nsec, a second or an hour. Impulse width and delay are independent of each other. The interaural response is about delay, not about impulse width.

The confusion stems from the fact that indeed, it is easier to detect small timing differences when listening to an impulse (short duration with a fast and distinct attack). But it is wrong to assume that by making the impulse narrower and narrower (faster and faster), such interaural audibility will become better and better. Why?

What happens when we run say a 2usec impulse through a device capable of handling no faster than 20usec impulse? Recalling that we can view impulse as a bandwidth issue, the question can be reworded:
What happens when you run a signal containing 1MHz energy through a device limiting the bandwidth to 100KHz? The answer is: only 100KHz is going to pass through.
We can now look at the answer in the time domain: 100KHz bandwidth? It is a 20usec impulse. The narrow impulse changed into a wider one.

So making the impulse narrower beyond some limit will not yield a faster and more distinct sudden sound. The impulse simply does not get narrower beyond some point. It does get weaker, because the bandwidth limitation imposed by the lowest bandwidth device in the chain removes some of the energy from the signal. A weaker signal, yes; a narrower impulse, no.

Again, what is the limiting factor? It is the lowest bandwidth device in the chain (mic, speaker or anything else). The proposals to go for 1MHz sampling do not make sense, not even with 500KHz microphones, speakers, amplifiers, converters… There is still the ear, which cannot accommodate 500KHz. The ear will only react to the energy portion within the hearing bandwidth, and will filter out anything else. The ear will react to and interpret a super fast impulse as a lower amplitude, wider impulse.
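The widening/weakening effect is easy to demonstrate numerically. The sketch below band-limits a 1-sample impulse with a crude 10-sample boxcar average - a stand-in chosen purely for illustration, not a model of any real converter or filter:

```python
import numpy as np

# At a 1MHz illustration rate, 1 sample = 1usec.
x = np.zeros(2048)
x[1024] = 1.0                         # a 1usec-wide impulse

# Crude 100KHz-class band limit: average over 10 samples (10usec).
h = np.ones(10) / 10
y = np.convolve(x, h, mode="same")

print(x.max())                        # 1.0 -> narrow, full amplitude
print(round(float(y.max()), 2))       # 0.1 -> ~10x wider, ~10x weaker peak
print(bool(np.isclose(x.sum(), y.sum())))  # True -> the area is preserved
```

The band-limited pulse is wider and weaker, but its area is unchanged: the energy was spread out, not created or destroyed.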

To be continued…

BR
Dan Lavry

 


Title: Re: Time delay problems, real or not?
Post by: bobkatz on October 15, 2004, 11:35:25 PM
danlavry wrote on Fri, 15 October 2004 14:31


Hi Bob,

I appreciate your participation, I learn from your comments  and most often I am in agreement with you.




I'm glad, because you're the authority, Dan!

Quote:



Regarding time delay differences, please look at my numbers and you will see that a few samples delay can cause a mess. A one sample delay differance at 44.1Khz is almost 25uSec - 3dB loss at 10KHz, 8.3dB loss at 15KHz...




This is quite evident and I even teach this in my book! But the question was not about processing a mix through different delay mechanisms and trying to mix the signals together. It was about whether there are any practical problems recording multiple microphones through different models of A to D converters. And I answered, "not if you obey the 3:1 rule". I was answering one question and you're answering another...

Bottom line, as you say: If there is a time delay between any two CORRELATED signals which are going to be mixed together into a single channel, and the time delay is shorter than about 20 ms and/or the level difference between the two channels is less than about 10-15 dB, the result will be audibly-significant comb filtering. No argument there! But that wasn't the question I was answering...
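That 10-15 dB level-difference figure can be sanity-checked with a little arithmetic. For two ideal correlated copies whose levels differ by D dB, the comb's peaks and deepest notches (relative to the louder copy alone) follow from the amplitude ratio r = 10^(-D/20); a quick sketch (the function name is mine):

```python
import math

def comb_ripple_db(level_diff_db: float) -> tuple[float, float]:
    """(peak boost, deepest notch) in dB, relative to the louder copy
    alone, when two correlated copies with the given level difference
    are mixed with enough delay to comb."""
    r = 10 ** (-level_diff_db / 20)   # amplitude of the quieter copy
    peak = 20 * math.log10(1 + r)
    notch = 20 * math.log10(1 - r) if r < 1 else float("-inf")
    return round(peak, 1), round(notch, 1)

print(comb_ripple_db(0))   # (6.0, -inf): equal levels, full cancellation
print(comb_ripple_db(10))  # (2.4, -3.3): ripple already fairly tame
print(comb_ripple_db(15))  # (1.4, -1.7)
```

By 10-15 dB of level difference the ripple is down to a few dB, which is consistent with the audibility threshold Bob cites.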
Title: Re: Time delay problems, real or not?
Post by: Rick Sutton on October 16, 2004, 06:56:17 PM
Dan, and everyone contributing to this thread, I am very glad that this topic was raised, as what I'm learning here is positively impacting my current project. As was stated in my previous post, I am using two different converters to record a solo acoustic guitar album. What I didn't make entirely clear is that the center mic (two different center mics, C12 and KM56c, were recorded to allow for choice later) is the same distance from the source as the outer pair.
With the "heads up" that  this thread has provided, I have done several tests and come up with the following results.
Compared to an 888 converter the Lavry Blue is 48 samples behind (@44.1) on the A/D and 22 samples behind on the D/A for a total of 70 samples as it appears back on the analog mixing board.
Armed with this data I copied three tracks on the PT system and brought the center mic (recorded with the 888) back 48 samples (the A/D difference) on one set of tracks, leaving the original set as recorded. So the original set is coming back on the converters that they were recorded with, and the second set is coming back strictly on 888's but with the Lavry tracks and 888 tracks aligned. I had help setting up a "blind" test to see which I preferred and found that the one with the clearest, widest stereo image was the aligned version. Even though the version I picked was being played with the inferior converters, the difference in alignment was the more important factor.
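A measurement like the one Rick describes - feeding the same click through both converter chains and comparing the recordings - can be sketched with a cross-correlation. The synthetic signals and names below are my own illustration; a real test would use the actual recorded clicks:

```python
import numpy as np

def measure_offset_samples(ref: np.ndarray, other: np.ndarray) -> int:
    """Estimate how many samples 'other' lags 'ref', via the peak of the
    full cross-correlation (positive result = 'other' arrives later)."""
    corr = np.correlate(other, ref, mode="full")
    return int(np.argmax(corr) - (len(ref) - 1))

# Hypothetical check with a synthetic click delayed by 48 samples:
rng = np.random.default_rng(0)
click = rng.standard_normal(64)
ref = np.concatenate([np.zeros(100), click, np.zeros(300)])
other = np.concatenate([np.zeros(148), click, np.zeros(252)])  # 48 late
print(measure_offset_samples(ref, other))  # 48
```

Sub-sample offsets would not show up this way; they need interpolation or a phase-based method, but whole-sample converter latencies like the 48 samples above are exactly what this catches.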
Now I either have to re-align the tracks as I record the rest of the project or shell out more $ and get enough Lavry converters to cover all tracks with the same converter.
Time to buy some lottery tickets I guess.
Best regards, Rick
Title: Re: Time delay problems, real or not?
Post by: danlavry on October 17, 2004, 12:02:19 AM
Bob Katz said:

"Bottom line, as you say: If there is a time delay between any two CORRELATED signals which are going to be mixed together into a single channel, and the time delay is shorter than about 20 ms and/or the level difference between the two channels is less than about 10-15 dB, the result will be audibly-significant comb filtering. No argument there! But that wasn't the question I was answering..."


I see, you are correct. We are talking about different issues.
I am sorry I had it confused and I stand corrected.

My issue is strictly about what happens when correlated signals are added electrically with delay. Of course, one way to avoid it is to make sure the condition does not exist, or is at least minimized.

What is the origin of the 3:1 rule? Is it based on experience?

BR
Dan Lavry

Title: Re: Time delay problems, real or not?
Post by: Rick Sutton on October 17, 2004, 03:00:04 AM
Quote:

What is the origin of the 3:1 rule? Is it based on experience?


Are you guys talking about the old 3 to 1 ratio that goes back to Lou Burroughs, Electro Voice's mic "expert" of the fifties and sixties? Maybe even goes back before him, but he's the one I remember always expounding it.

Title: Re: Time delay problems, real or not?
Post by: bobkatz on October 17, 2004, 10:42:43 AM
Rick Sutton wrote on Sat, 16 October 2004 18:56



With the "heads up" that  this thread has provided, I have done several tests and come up with the following results.
Compared to an 888 converter the Lavry Blue is 48 samples behind (@44.1) on the A/D and 22 samples behind on the D/A for a total of 70 samples as it appears back on the analog mixing board.





Great, Rick! That's exactly what you had to do. And for a 3-mike setup I'm sure it will be fine, though I think it would be intuitive (and perhaps redundant) to say that the outer pair of mikes (L and R) should go through one converter and the center mike through another. There could be some subsample differences between the analog filter implementations in each A/D, and while it is probably inaudible in the case of a spaced stereo pair, I would err on the side of caution. Anyway, it seems logical from the point of view of stereo imaging to use the same model on the L versus R.

Which brings up an instance I ran into many years ago. The plug-in record cards for the 2 channels of a Studer A80 turned out to be slightly different revisions. The stereo image was a little "phasey" or skewed, though I had aligned the machine to within a micro-inch of its life, and the record frequency responses and biases were well matched. But there was obviously some phase shift between the two channels because of a slightly different R/C combination in the pre-emphasis section. Obviously this is an extreme case, and in an A/D situation the low-pass filters should not exhibit phase shift in the audible band, but when you compare a Lavry with a Digidesign, who knows whether the upsampling ratios are extremely different and things will be a lot more different than just latency!

Good luck on winning the lottery. If you get a spare Lavry Gold, send me one, will you?

BK
Title: Re: Time delay problems, real or not?
Post by: bobkatz on October 17, 2004, 10:49:23 AM
danlavry wrote on Sun, 17 October 2004 00:02




I see, you are correct. We are talking about different issues.





Hi, Dan! No problem. I'd rather that we were both right and simply not communicating than that one of us was right and one was wrong.


Quote:



What is the origin of the 3:1 rule? Is it based on experience?




The origin of the 3:1 rule is a wonderful book by Lou Burroughs called "Microphone Design and Application (if I remember correctly)....." that is woefully out of print. In it, he demonstrates that the frequency response errors when feeding an acoustical source into two spaced microphones which are combined into one channel become insignificant when the relative spacing is at least 3:1.

He actually does frequency response charts of real microphones in real spaces with real acoustics. Clearly, anechoically, 3:1 would not even be close to enough to reduce comb filtering to inaudibility, as there you have only amplitude on your side, but no early reflections to help you. In the real world, the early reflections and reverberant environment permit using "only" a difference of 3:1. Clearly, a greater than 3:1 ratio is even more desirable.

Hope this helps, and I'm really enjoying reading your expertise on this forum!
Title: Re: Time delay problems, real or not?
Post by: Bob Olhsson on October 17, 2004, 12:38:52 PM
danlavry wrote on Sat, 16 October 2004 23:02

 What is the origin of the 3:1 rule? Is it based on experience?

It was made up by Lou Burroughs, a founder and for years the head of professional product sales for Electro-Voice. It was based on talking to many leading recording engineers during the early 1960s. Lou's presentations were famous for him driving a nail into a piece of wood with the microphone he was demonstrating.
Title: Re: Time delay problems, real or not?
Post by: Rick Sutton on October 17, 2004, 01:02:14 PM
Quote:

That's exactly what you had to do. And for a 3-mike setup I'm sure it will be fine, though I think it would be intuitive (and perhaps redundant) to say that the outer pair of mikes (L and R) should go through one converter and the center mike through another


You bet! The stereo pair are indeed going through the Lavry and are used as the main source in mixing. They are U87's with Innertube Audio retrofits through a Focusrite ISA 215. The inner mic goes through a Tab Funkenwerk v72 into the 888.
I really appreciate the feedback and help that the pros on these forums provide. I've been in the recording business for thirty five years and with technology and methods always changing it is extremely helpful to have the collective knowledge of so many people available to all of us.
Many thanks and best regards, Rick