R/E/P Community


Author Topic: A question  (Read 12276 times)

kraster

  • Full Member
  • ***
  • Offline
  • Posts: 199
A question
« on: June 19, 2005, 02:09:45 AM »

I suspect this has probably been covered a million times before, but I'm still confused. I recently read the following excerpt from an Apogee article concerning recording ultrasonics:

"Why Record Ultrasonics?
As is widely recognized, most of us can’t hear much above 18 kHz, but that does not mean that there isn’t anything up there that we need to record – and here's another reason for higher sampling rates. Plenty of acoustic instruments produce usable output up to around the 30 kHz mark – something that would be picked up in some form by a decent 30 in/s half-inch analog recording. A string section, for example, could well produce some significant ultrasonic energy.

Arguably, the ultrasonic content of all those instruments blends together to produce audible beat frequencies which contribute to the overall timbre of the sound. If you record your string section at a distance with a stereo pair, for example, all those interactions will have taken place in the air before your microphones ever capture the sound. You can record such a signal with 44.1 kHz sampling and never worry about losing anything – as long as your filters are of good quality and you have enough bits.

If, however, you recorded a string section with a couple of 48-track digital machines, mic on each instrument feeding its own track so that you can mix it all later, your close-mic technique does not pick up any interactions. The only time they can happen is when you mix – by which time the ultrasonic stuff has all been knocked off by your 48 kHz multitrack recorders, so that will never happen. It would thus seem that high sampling rates allow the flexibility of using different mic techniques with better results."


Now I appreciate that most mics won't be able to record 30 kHz, but for argument's sake let's say we have some mics that do. Are the beat frequencies referred to in the article caused by non-linearities such as the air and the ear, and must we actually hear the original frequencies in order to hear the beat frequencies?
In a sentence: is there any truth in this article, or is it more voodoo?

Thanks,

Karl Odlum
Logged

David Satz

  • Hero Member
  • *****
  • Offline
  • Posts: 661
Re: A question
« Reply #1 on: June 19, 2005, 06:41:51 PM »

Karl, there are some possibly valid arguments in favor of wideband audio electronics and sampling rates higher than the bare minimum, but the statements that you've quoted here aren't among them.

Audible "beat" tones are a phenomenon that occurs in a listener's ears, not in the air of a hall. Thus these tones don't occur at all if either or both of the original "pure" frequencies lie beyond the listener's hearing range. A 50 kHz tone plus a 51 kHz tone, for example, even at very high sound pressure levels, won't produce audible 1 kHz difference energy unless there is non-linear distortion in the playback equipment.

--best regards
Logged

kraster

  • Full Member
  • ***
  • Offline
  • Posts: 199
Re: A question
« Reply #2 on: June 20, 2005, 09:01:23 AM »

Thanks for that, David. So the statement I quoted is a load of baloney? I had always assumed that in order to hear "beat" tones one had to hear the original frequencies that cause the beating, and that the ear subsequently distorted them. Quotations like the one from Apogee just serve to confuse matters.

Karl Odlum
Logged

David Satz

  • Hero Member
  • *****
  • Offline
  • Posts: 661
Re: A question
« Reply #3 on: June 20, 2005, 12:20:51 PM »

Karl, what I said was the simplified version. Air itself can be driven into non-linear behavior--at enormous sound pressure levels which can cause instant, traumatic physical injury or death. But musical performances, as heard by audiences at ordinary listening distances, probably never have anything above 20 kHz that gets within, say, 80 dB of such levels. Admittedly I may not know everything the kids are listening to these days ...

--best regards
Logged

dcollins

  • Hero Member
  • *****
  • Offline
  • Posts: 2815
Re: A question
« Reply #4 on: June 20, 2005, 07:59:35 PM »

kraster wrote on Mon, 20 June 2005 06:01

Quotations like the one from apogee just serve to confuse matters.

Maybe someone from Apogee should come on here and explain what they are talking about, because I think Mr. Satz is 100% correct.

And it's not even controversial...

DC

kraster

  • Full Member
  • ***
  • Offline
  • Posts: 199
Re: A question
« Reply #5 on: June 21, 2005, 05:36:36 AM »

The fact that it states that it can improve your close-miking technique is the biggest red herring there, as most mics won't capture these ultrasonic frequencies and most speakers won't reproduce them. And even if they did, we still wouldn't hear them. So the sample rate is a moot point.
Logged

David Satz

  • Hero Member
  • *****
  • Offline
  • Posts: 661
Re: A question
« Reply #6 on: June 21, 2005, 12:37:26 PM »

kraster, I'd nominate "arguably" as the biggest weasel word in the Apogee quote. Taken literally, it's a boast: "I can talk in a way that seems to make sense, as long as it never has to be tied to any actual reality." Then, unfortunately, they live up to their boast.

The thing is, an idea isn't necessarily wrong just because someone has tried to use a bogus argument in its favor. For example, you're right about the bandwidth limits of most microphones and speakers, but there are exceptions. And the way something is limited to a particular bandwidth can be more important than the bandwidth itself, as far as audible transparency is concerned.

There really are some other possibly valid arguments in favor of audio circuitry with wider (within reason) bandwidth than we can hear, or sampling rates higher (within reason) than 44.1 kHz. I won't go into them here, but they are for strictly practical reasons in particular situations--not because "wider bandwidth sounds better." The latter claim is widely believed by audiophiles, and it's the kind of statement which can't ever be disproved, so they go on believing it. But there hasn't been any proof of it in all these years, either, and one would think that it could rather easily be proved if it were true.

--best regards
Logged

danlavry

  • Hero Member
  • *****
  • Offline
  • Posts: 997
Re: A question
« Reply #7 on: June 21, 2005, 06:04:40 PM »

“Thanks for that David. So the statement I quoted is a load of baloney? I always assumed in order to hear "beat" tones that one had to hear the original frequencies that cause the beating and the ear subsequently distorted them. Quotations like the one from apogee just serve to confuse matters.

Karl Odlum”


Hi Karl,

Yes it is a bunch of baloney.

1. The “beat tones” do NOT occur when adding musical material with LINEAR summation. A proper addition IS A LINEAR process, be it in a circuit or in software.

2. To have “beat tones” one must have NON-LINEAR processing, and the outcome is very non-musical. Say for simplicity's sake you have some instrument A playing 1KHz and its harmonics (2, 3, 4, 5 … 30KHz), and instrument B playing 1.3KHz (nearly a third) with harmonics (2.6, 3.9 … 28.6, 29.9KHz). Do you want to have the difference of, say, 28.6 and 30KHz (it is 1.4KHz) in the audio? And at the same time also have 29.9-30KHz (which is 100Hz)?... In fact, by the time you have the various sums and differences, the beats are all over the place, and your best cure is LINEARITY, thus no beats.

3. If you have beats, you have non-linearity. Is the non-linearity restricted to the high frequency extension? If not, then the beats will occur with low frequencies and real mics and ears, and that is bad news, counter to transparency.

4. Say the non-linearity is restricted only to signals above 20KHz (magic, is it not). Then the Apogee argument advocates taking high frequency harmonics that we do not normally hear in live performance, and throwing combinations of sums and differences of those ultrasonics back into the audio band we do hear…
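Dan's second point can be made concrete with a few lines of arithmetic. This is a hypothetical sketch (second-order difference products only) using his 1 kHz / 1.3 kHz example; it lists the difference tones that the ultrasonic harmonics alone would throw back into the audible band:

```python
# Harmonics of the two notes in the example above.
f1 = [1000 * n for n in range(1, 31)]   # 1 kHz fundamental, harmonics to 30 kHz
f2 = [1300 * n for n in range(1, 24)]   # 1.3 kHz fundamental, harmonics to 29.9 kHz

# Second-order difference products generated only by the ultrasonic
# harmonics (> 20 kHz) that land back inside the audible band.
beats = sorted({abs(a - b) for a in f1 for b in f2
                if a > 20000 and b > 20000 and 20 < abs(a - b) < 20000})
print(beats[:8])   # a scatter of tones unrelated to either note's harmonic series
```

None of these products is musically related to either note, which is the point: the beats are "all over the place".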

When I see such “educational” material, I wonder. I wonder about the caliber of the “educators”… I wonder about the motivation for writing such stuff...

Regards
Dan Lavry
www.lavryengineering.com

Logged

kraster

  • Full Member
  • ***
  • Offline
  • Posts: 199
Re: A question
« Reply #8 on: June 22, 2005, 07:06:35 AM »

Thanks for your reply, Dan. I had already assumed what you said in your post was the case, until I read the Apogee article I quoted, which left me confused. I then assumed what I had learnt before was wrong, because Apogee know what they're talking about and make some decent converters.

This is pretty unfair on the average Joe (myself included). I and many others make purchasing decisions based on this kind of info. Their (Apogee's) argument at first glance seems plausible. But they're twisting the facts to suit their own ends.

Karl
Logged

Max

  • Newbie
  • *
  • Offline
  • Posts: 45
Re: A question
« Reply #9 on: June 22, 2005, 01:09:26 PM »

kraster wrote on Sun, 19 June 2005 07:09

I suspect this has probably been covered a million times before but I'm still confused. I recently read this following excerpt from an article from Apogee concerning recording ultrasonics: [...] In a sentence: Is there any truth in this article or is it more Voodoo?

Karl Odlum


Hi Karl,

Thank you for bringing this to our attention. The Purple Pages and the Apogee Guide to Digital Audio were written many years ago by folks who no longer work for Apogee. While much of the material is useful for beginners, there are some inaccuracies that the current Apogee team found unacceptable, and the decision was made to pull these documents. Unfortunately, they were still on the website as of this morning, an oversight on my part. They have since been removed.

For the record, we agree that this is just nonsense. In re-reading this together, Lucas and I are not even sure what point the writer was trying to make here. Dan, I think this was written around when you still worked at Apogee, perhaps you can shed some light on this? (just kidding! ;))

Seriously, I apologize for the oversight and thanks again for pointing it out.
Logged
Max Gutnik
Apogee Electronics

David Satz

  • Hero Member
  • *****
  • Offline
  • Posts: 661
Re: A question
« Reply #10 on: June 22, 2005, 02:31:39 PM »

Now, that was a classy response; no cheesy denials, no bogus counterattacks. I say, hats off to Apogee for this.
Logged

danlavry

  • Hero Member
  • *****
  • Offline
  • Posts: 997
Re: A question
« Reply #11 on: June 22, 2005, 07:06:12 PM »

Max said:

“Hi Karl,

For the record, we agree that this is just non-sense. In re-reading this together, Lucas and I are not even sure what point the writer was trying to make here. Dan, I think this was written around when you still worked at Apogee, perhaps you can shed some light on this? (just kidding ).

Seriously, I apologize for the oversight and thanks again for pointing it out”



Max

It is refreshing to see such an immediate acknowledgment that there was badly flawed material on your website. Strange that others found it. It did do some damage, steering some people towards 192KHz, which for a while was highly promoted by many companies, including the one you are a salesman for. I was under the impression it was put there as one of the attempts to sell 192KHz, which is a relatively recent development. No feather in your cap.

Regarding the “mysterious unknown writer”, are you suggesting that your company stood behind the statements about digital audio without knowing who the writer was? Also, you, not I, should be in a position to know the date of the publication. Such knowledge, even with an error of 5 years, would set me apart from that “educational material” by many years. My relationship with Apogee ended in 1990, even though my electronic designs continued to be used (some still are).

While you are making changes to your website, you may want to correct another mistake. You say Jerry Goodwin designed what you call UV22. I put Nyquist-band dither in the first A/D for Dorian Recordings in about 1988. Vince did the digital part. I built the unit before partnering with Apogee and before I even met Jerry. Jerry and I improved the statistical properties of the signal, and Apogee called it UV22. The concept, while old, is being marketed as something which it is not. The concept of Nyquist-band dither and the statistical improvements came before “noise shaping”, a newer and very powerful concept.

Noise shaping is the foundation of modern conversion, and is also used by modern dither algorithms, providing a psychoacoustic advantage (shifting the error signal from the audible to a less audible hearing range). The “HR” in the new UV22HR is misleading, because HR is commonly used to indicate high resolution. In fact, the latest improvements were done to fix a compatibility problem between UV22 and data-compressed signals, not to provide high resolution.

Dan Lavry
www.lavryengineering.com
Logged

kraster

  • Full Member
  • ***
  • Offline
  • Posts: 199
Re: A question
« Reply #12 on: June 23, 2005, 12:04:35 AM »

David Satz wrote on Wed, 22 June 2005 19:31

Now, that was a classy response; no cheesy denials, no bogus counterattacks. I say, hats off to Apogee for this.

Yes indeed. You can't say fairer than that. I guess that clears up the confusion!

Karl
Logged

Greg Reierson

  • Sr. Member
  • ****
  • Offline
  • Posts: 425
Re: A question
« Reply #13 on: June 23, 2005, 10:26:30 AM »

Max wrote on Wed, 22 June 2005 12:09

Seriously, I apologize for the oversight and thanks again for pointing it out.

Any chance you might publish that in Mix, EQ, etc., so we don't have to relive this discussion over and over and over...?


GR

Logged

Terry Demol

  • Full Member
  • ***
  • Offline
  • Posts: 103
Re: A question
« Reply #14 on: June 23, 2005, 09:13:31 PM »

David Satz wrote on Sun, 19 June 2005 23:41

Audible "beat" tones are a phenomenon that occurs in a listener's ears, not in the air of a hall. Thus these tones don't occur at all if either or both of the original "pure" frequencies lie beyond the listener's hearing range. [...]


I've been thinking about this over the last few days and maybe there is something we haven't considered here.

We know for a fact that air itself manifests 2nd harmonic distortion on any sound wave travelling through it, due to the density difference between the high and low pressure parts of a wave (compression and rarefaction).

We also know that any medium that imposes 2nd harmonic distortion on a wave will also impose intermodulation distortion.

So it appears to me that there will be some intermodulation occurring in the air "carrier" itself before the sound reaches our ears.

Does this make sense?

Cheers,

Terry
Logged

David Satz

  • Hero Member
  • *****
  • Offline
  • Posts: 661
Re: A question
« Reply #15 on: June 23, 2005, 10:44:32 PM »

Terry, what you say is true at extremely high (i.e. literally deafening) sound pressure levels. But if this "air distortion" were significant at ordinary listening levels, we would never hear any sounds in our lives except those that had been affected by it. Also, the farther away any sound source is--the more air it has traveled through--the stronger would be its harmonic content, even as the strength of the tone fades due to distance. And finally, that effect should hold true for microphones as well as for human listeners, since the medium is still air; thus any recording made at a distance, even under anechoic conditions, would show notable amounts of harmonic distortion.

Is this getting absurd enough for you yet? None of these things occurs in reality, so I think that you can reasonably set the entire fantasy/conjecture aside.

--best regards
Logged

Nika Aldrich

  • Hero Member
  • *****
  • Offline
  • Posts: 832
Re: A question
« Reply #16 on: June 23, 2005, 11:32:43 PM »

Dan,

Just for the record, I don't think I've anywhere, in any context, seen Apogee "promote" 192kS/s sample rates. I have seen them make such rates available to those who require them, to meet market demand from record labels, etc. But I have never seen anything in Apogee's writing regarding supposed benefits of needlessly high sample rate recording.

Nika
Logged
"Digital Audio Explained" now available on sale.


Lucas van der Mee

  • Newbie
  • *
  • Offline
  • Posts: 12
Re: A question
« Reply #17 on: June 24, 2005, 10:42:28 AM »

And since we are setting things straight again:

I am proud to say that Apogee designs are 100% Lavry-free; they have been for over a decade, and …
…business is better than ever!

Lucas van der Mee
Sr Design Engineer
Apogee Electronics
Logged

danlavry

  • Hero Member
  • *****
  • Offline
  • Posts: 997
Re: A question
« Reply #18 on: June 24, 2005, 04:43:37 PM »

Terry Demol wrote on Fri, 24 June 2005 02:13

I've been thinking about this over the last few days and maybe there is something we haven't considered here. [...] So it appears to me that there will be some intermodulation occurring in the air "carrier" itself before the sound reaches our ears. Does this make sense?


Terry,

Let's first agree that the ear can hear a certain bandwidth (for example 22KHz, corresponding to 44.1KHz sampling, or even 48KHz, corresponding to 96K sampling).

Whatever we hear in the live performance space WILL include ALL the signals that we want to record and reproduce. Assuming that the air manifests harmonics, intermod or whatever you wish to assume, if it falls within the hearing range, it is already recorded. The mic (covering the audio range) will pick it up.

Adding high frequency capability that causes the same alterations (harmonics, intermod or whatever) on top of material that already contains the audible outcome, means you are doing it twice.

So assuming that such alterations could take place and have a sonic outcome, one is better off to make sure that we DO NOT include the high frequencies. The inclusion of the high frequencies will “double up” the effect, when comparing with the reference material (original performance).

For example, say we have 29KHz and 30KHz tones, and some mechanism in the air to generate a difference of 1KHz. That 1KHz is audible, will be recorded in the performance space, and heard on playback. But including the 29KHz and 30KHz in the recording will introduce ADDITIONAL 1KHz energy due to a new interaction, on top of the already recorded 1KHz. Of course the proper amount of 1KHz is already in the recorded material, so the additional energy is unwanted.

If what you suggested is correct, it would amount to one more argument AGAINST recording signals outside the hearing range.

Regards
Dan Lavry
www.lavryengineering.com


Logged

kraster

  • Full Member
  • ***
  • Offline
  • Posts: 199
Re: A question
« Reply #19 on: June 24, 2005, 07:20:47 PM »

Can the same argument be used concerning intermodulation distortion resulting from non-linearities in loudspeakers? i.e., the speaker "hears" or passes frequencies above the range of human hearing and causes beat tones in the audible range as a result of non-linearities in the speaker. If somehow these beat tones were restricted to the upper range of human hearing, could that go some way toward explaining people's reports of an exaggerated top end when conducting higher sample rate tests? Is there some kind of device in the output path that could restrict the beat tones to a high range? e.g. the crossover.

I can't hear 20kHz no matter how hard I try, but the intermodulation distortion from the speaker (if restricted to the upper range) could fool me into thinking that I'm hearing extra stuff at the top end.

I've heard a lot of people talking about better transient definition at higher sample rates, but could the speaker distortion just exaggerate the top end, leaving the impression that there is better transient response?

(sorry if this has been covered, I'm just curious)

Karl
Logged

bobkatz

  • Hero Member
  • *****
  • Offline
  • Posts: 2926
Re: A question
« Reply #20 on: June 25, 2005, 06:48:45 AM »

David Satz wrote on Tue, 21 June 2005 12:37

[...] not because "wider bandwidth sounds better." The latter claim is widely believed by audiophiles, and it's the kind of statement which can't ever be disproved, so they go on believing it. But there hasn't been any proof of it in all these years, either, and one would think that it could rather easily be proved if it were true.

It's fortunate that in the digital domain we can construct filters and make listening experiments that put the nail in the coffin of "wide bandwidth is better per se". I'm nearly 100% convinced that "it's the filters, not the bandwidth, that we hear" (when band-limiting above about 20 kHz). Ironically, we need extremely high sample rates in order to conduct the experiments to prove that we don't need them :)
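That experiment can be sketched digitally: band-limit high-rate material to about 20 kHz without changing the sample rate, so an A/B comparison isolates the filter (and the missing ultrasonics) from everything else. A minimal windowed-sinc version follows; the tap count, window, and 96 kHz rate are illustrative choices, not any published test protocol.

```python
import numpy as np

fs = 96000                       # illustrative high sample rate
taps = 511
n = np.arange(taps) - (taps - 1) / 2
fc = 20000 / fs                  # ~20 kHz cutoff, normalized
h = 2 * fc * np.sinc(2 * fc * n) * np.blackman(taps)
h /= h.sum()                     # unity gain at DC

def band_limit(x):
    """Return x band-limited to ~20 kHz; the sample rate is unchanged,
    so any audible difference is due to the filter alone."""
    return np.convolve(x, h, mode='same')
```

Comparing `x` against `band_limit(x)` at the same playback rate is the "it's the filters, not the bandwidth" test in miniature.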
Logged
"There are two kinds of fools,
One says: this is old and therefore good.
The other says: this is new and therefore better."

No trees were killed in the sending of this message. However a large number of
electrons were terribly inconvenienced.

kraster

  • Full Member
  • ***
  • Offline
  • Posts: 199
Re: A question
« Reply #21 on: June 25, 2005, 01:33:14 PM »

I suppose the point I was trying to make above is that in listening tests, both professionally conducted and, in particular, those less rigorously controlled, there are other salient factors, such as loudspeaker intermodulation distortion at higher sampling rates and bad filter design at lower sampling rates, that contribute to the differences perceived between them. I've no doubt that many people perceive differences between them, but ultrasonic perception is pretty far down the list of plausible explanations for the perceived differences.

Karl
Logged

kraster

  • Full Member
  • ***
  • Offline
  • Posts: 199
Re: A question
« Reply #22 on: June 25, 2005, 03:13:41 PM »

Lucas van der Mee wrote on Fri, 24 June 2005 15:42

…business is better than ever!
I think it quite disingenuous to be boasting about your business success in a thread that's debunked some of your misleading (by your own admission) marketing material.
Logged

Lucas van der Mee

  • Newbie
  • *
  • Offline
  • Posts: 12
Re: A question
« Reply #23 on: June 25, 2005, 08:09:18 PM »

Point taken, Karl...
I am just very annoyed by Dan's repetitive false claims about my work.
I think I should have written: "...and our equipment is better than ever."

Lucas van der Mee
Sr. Design engineer
Apogee Electronics
Logged

David Satz

  • Hero Member
  • *****
  • Offline
  • Posts: 661
Re: A question
« Reply #24 on: June 25, 2005, 08:30:54 PM »

Mr. Van der Mee, up to this point the postings in this thread from people at Apogee have been greatly to Apogee's credit. And now for some reason you evidently would like to change that.

Just speaking as an ordinary user of this forum, if you wish to pursue a dispute with Dan Lavry, I hope that you will do so in a new discussion thread specifically devoted to the facts of that situation. This thread has already achieved what it set out to do. And since the rest of us here haven't got much clue as to why you're running into thermal overload, kindly be specific as to your facts. Or maybe just count to 10 (in decimal, please, not binary) and think it all over.

--best regards
Logged

Terry Demol

  • Full Member
  • ***
  • Offline
  • Posts: 103
Re: A question
« Reply #25 on: June 26, 2005, 08:31:00 AM »

danlavry wrote on Fri, 24 June 2005 21:43

Whatever we hear in the live performance space WILL include ALL the signals that we want to record and reproduce. Assuming that the air manifests harmonics, intermod or whatever you wish to assume, if it falls within the hearing range, it is already recorded. The mic (covering the audio range) will pick it up.

Yes, of course.

Maybe there is "air IM" but, as you say, it doesn't matter. As long as the ADC converts what the ear would hear, that's all that counts.

Quote:

Adding high frequency capability that causes the same alterations (harmonics, intermod or whatever) on top of material that already contains the audible outcome, means you are doing it twice. [...]

I see your point here, but I'm not sure I agree 100%.

It would depend on the ADC's inherent IMD at higher frequencies. It should be very low for a well designed unit, and as such should not be a significant issue. Possibly some op-amp based input circuits are more susceptible to higher frequency IM due to their rising distortion vs. frequency characteristics, but there are others that are very good HF performers.

Also, IME, low gain applications such as ADC front ends are the easiest to keep linear at higher frequencies.


Regards,

Terry
Logged

Johnny B

  • Hero Member
  • *****
  • Offline
  • Posts: 1134
Re: A question
« Reply #26 on: June 26, 2005, 12:14:34 PM »

David Satz wrote on Tue, 21 June 2005 12:37

"Wider bandwidth sounds better."

Many "ear people" feel this truly is the case; oddly, it is only with respect to converters that some people argue for a "lesser" or inferior bandwidth.

Logged
"As far as the laws of mathematics refer to reality,
they are not certain; as far as they are certain,
they do not refer to reality."
---Albert Einstein---

I'm also uncertain about everything.

bobkatz

  • Hero Member
  • *****
  • Offline
  • Posts: 2926
Re: A question
« Reply #27 on: June 26, 2005, 12:21:42 PM »

Johnny B wrote on Sun, 26 June 2005 12:14

Many "ear people" feel this truly is the case; oddly, it is only with respect to converters that some people argue for a "lesser" or inferior bandwidth.

To quote Bob Olhsson from my book: "The issues of the audibility of bandwidth and the audibility of artifacts caused by limiting bandwidth must be treated separately. Blurring these issues can only lead to endless arguments."

Remember that it is impossible to create a filter in the analog domain that does not have phase shift, some noise, and some distortion. The more complex the filter in the analog domain, the worse its potential to sound. This is not necessarily the case in the digital domain. Thus, a very good reason why wide-bandwidth analog circuits often sound better... a simple one-pole low-pass filter at 100 kHz is pretty invisible to the ear. But a simple one-pole filter at 20 kHz is not. And constructing a complex, sharp low-pass filter at 20 kHz in the analog domain is almost sure to result in audible artifacts.
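The phase-shift difference described here is easy to quantify: a one-pole low-pass lags by arctan(f/fc). A quick sketch (function name and frequency choices are illustrative):

```python
import math

def one_pole_phase_deg(f, fc):
    """Phase lag of a first-order low-pass at frequency f, corner at fc."""
    return math.degrees(math.atan(f / fc))

print(one_pole_phase_deg(10e3, 100e3))  # ~5.7 degrees: corner at 100 kHz
print(one_pole_phase_deg(10e3, 20e3))   # ~26.6 degrees: corner at 20 kHz
```

A corner at 100 kHz leaves the audio band nearly untouched, while pulling it down to 20 kHz puts tens of degrees of phase shift on clearly audible content.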

I could go on; I could write a book about it. Oh wait, I did write a chapter already Smile. More evidence is in, by the way, on the side of the argument that "it is the filters, not the bandwidth, that we hear", and I plan on putting that in the second edition of "Mastering Audio."
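Bob's one-pole comparison is easy to put numbers on. A minimal sketch (my own illustration, not from the post, using the textbook single-pole formula): the phase lag of a one-pole low-pass is atan(f/fc), so a 100 kHz corner barely touches the audio band while a 20 kHz corner lags a full 45 degrees at 20 kHz.

```python
import math

def one_pole_phase_deg(f, fc):
    """Phase lag (in degrees) of a single-pole RC low-pass with corner fc,
    evaluated at frequency f: atan(f / fc)."""
    return math.degrees(math.atan(f / fc))

# Corner at 100 kHz vs. corner at 20 kHz, probed inside the audio band
for fc in (100e3, 20e3):
    for f in (10e3, 20e3):
        print(f"fc = {fc/1e3:.0f} kHz, f = {f/1e3:.0f} kHz: "
              f"{one_pole_phase_deg(f, fc):5.1f} deg of lag")
# The 100 kHz pole lags ~11 deg at 20 kHz; the 20 kHz pole lags a full 45 deg.
```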
Logged
"There are two kinds of fools:
one says, this is old and therefore good;
the other says, this is new and therefore better."

No trees were killed in the sending of this message. However a large number of
electrons were terribly inconvenienced.

danlavry

  • Hero Member
  • *****
  • Offline Offline
  • Posts: 997
Re: A question
« Reply #28 on: June 26, 2005, 05:35:58 PM »

bobkatz wrote on Sun, 26 June 2005 17:21
Johnny B wrote on Sun, 26 June 2005 12:14

bobkatz wrote on Sat, 25 June 2005 11:48

David Satz wrote on Tue, 21 June 2005 12:37


"Wider bandwidth sounds better."


Many "ear people" feel this truly is the case, oddly, it is only with respect to converters that some people argue for a "lesser" or inferior bandwidth.





Remember that it is impossible to create a filter in the analog domain that does not have phase shift, some noise, and some distortion. The more complex the filter in the analog domain, the worse its potential to sound. This is not necessarily the case in the digital domain. That is a very good reason why wide-bandwidth analog circuits often sound better... a simple one-pole low-pass filter at 100 kHz is pretty invisible to the ear, but a simple one-pole filter at 20 kHz is not. And constructing a complex, sharp low-pass filter at 20 kHz in the analog domain is almost sure to result in audible artifacts.



Hi Bob,

As long as you are at it, lets keep the eye on the ball and see when and if we have to deal with what:

With today's technology, you DO NOT have to deal with a 20KHz analog filter.

On the AD side, the conversion is done at a high oversampling rate, and the filtering (decimation) to 20KHz (for 44.1KHz CD) is done in the DIGITAL domain. As you stated, it can be done without any phase shift.

On the DA side, the data, even for CDs (as low as a 44.1KHz rate), is upsampled to a much higher rate in the DIGITAL DOMAIN, and that too can be done without any phase shift.

So all objections based on a 20KHz analog filter have no leg to stand on. Talking about it is, in fact, raising issues that were settled 15 years ago...
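Dan's point that the digital decimation filter can be free of phase shift comes down to symmetry: a symmetric FIR delays every frequency by the same amount (linear phase), something no analog brick wall can do. A rough sketch with numpy (the rate, tap count, and cutoff are illustrative choices of mine, not any particular converter's design):

```python
import numpy as np

fs = 176400      # 4x oversampled capture rate (illustrative)
decim = 4        # decimate back down to 44.1 kHz
ntaps = 255      # odd tap count -> symmetric, exactly linear-phase FIR

# Windowed-sinc low-pass with a 20 kHz cutoff, applied in the digital domain
fc = 20000 / fs
n = np.arange(ntaps) - (ntaps - 1) / 2
h = 2 * fc * np.sinc(2 * fc * n) * np.hamming(ntaps)
h /= h.sum()

# Symmetric impulse response == linear phase: every frequency is delayed
# by the same (ntaps - 1) / 2 samples, so there is no phase distortion.
assert np.allclose(h, h[::-1])

# Decimation = digital low-pass, then keep every 4th sample
x = np.sin(2 * np.pi * 1000 * np.arange(4096) / fs)  # 1 kHz test tone
y = np.convolve(x, h, mode="same")[::decim]
print(len(x), "->", len(y))  # 4096 -> 1024
```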

Regards
Dan Lavry
www.lavryengineering.com
Logged

danlavry

  • Hero Member
  • *****
  • Offline Offline
  • Posts: 997
Re: A question
« Reply #29 on: June 26, 2005, 05:50:33 PM »

Terry Demol wrote on Sun, 26 June 2005 13:31

I see your point here, but I'm not sure I agree 100%.

It would depend on the ADC's inherent IMD at higher
frequencies. It should be very low for a well designed unit
and as such should not be a significant issue. Possibly some
opamp based IP circuits are more susceptible to higher freq IM
due to their rising distortion Vs rising freq characteristics
but there are others that are very good HF performers.

Also IME low gain apps such as ADC front ends are the easiest
to keep linear at higher frequencies.

Regards,
Terry


Terry,

I believe you were talking about some ultrasonic interaction that happens in the air. David Satz pointed out that such interaction only takes place at very high pressure levels, and to the best of my knowledge he is correct. But I decided to go with the assumption that some degree of what you said may possibly be correct, and the simple logic I used suggests that it is UNDESIRABLE to include the ultrasonics.

Now you are talking about AD non linearity at ultrasonic frequencies, which could completely change the conversation. But "surprisingly", my answer is the same:

Any non linearity at high frequency, be it in a converter, the air itself, or the speakers, is undesirable. The presence of such non linearity will ADD signals that were not present prior to recording. The signals we want are already picked up by the mic.

So anyone who wishes to have their gear extended to higher frequencies had better be sure that the linearity holds up over the range of operation. The argument is often: "well, the device is less linear way up there, but we do not hear that high", and of course such arguments are flawed, because non linearity may create signals at lower frequencies that we do hear.
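Dan's "fold back" mechanism is easy to demonstrate in a few lines. A sketch (the tone pair and the 2% second-order term are my own illustrative numbers, standing in for any misbehaving stage): two tones nobody can hear intermodulate into a difference tone at 5 kHz, squarely inside the audible band.

```python
import numpy as np

fs = 192000
t = np.arange(fs) / fs                      # one second of signal
f1, f2 = 25000.0, 30000.0                   # two tones we cannot hear
x = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# A mildly non-linear stage: 2% second-order term. This stands in for any
# non-linearity in a converter, a speaker, or the air itself.
y = x + 0.02 * x ** 2

# The squared term produces a difference tone at f2 - f1 = 5 kHz
spectrum = np.abs(np.fft.rfft(y)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1 / fs)
i5k = np.argmin(np.abs(freqs - 5000))
print(f"level at 5 kHz: {spectrum[i5k]:.4f}")  # ~0.01, created by the nonlinearity
```

Band-limit the input to 20 kHz before this stage and the 5 kHz product never appears, which is exactly the argument being made.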

I read the Boyk paper a number of times with great interest, and I appreciate his work and his contributions. He measured all sorts of musical instruments at very high frequencies, and I do not dispute his findings at all. What I disagree with are the conclusions regarding what needs to be done in view of his findings. Some people, including some well known ear people, understandably came to the very simplistic conclusion that we need to record it all, and that there is no harm in doing so. I disagree with that conclusion:

We want to record what we hear at the performance venue, and nothing more. Any mechanism AT THE PERFORMANCE by which high frequency energy "folds back" into the audible range will be picked up and recorded. We need to stay true to that recorded material. We can do so by LIMITING the mic to the bandwidth of the ear, and by NOT ALLOWING the material we do not hear into the electronics. If we allow high frequencies that we did not hear at the performance into the electronic audio chain, we in fact become "sitting ducks" to any non linearity we may encounter.

Of course, I am not suggesting that all audio gear be limited to 20KHz. The frequency cutoff is a COMPROMISE between various factors (including "safety margins"). I am suggesting that it is good to eliminate signals above what we can hear, when all considerations allow it. I am saying that the arguments suggesting an advantage in recording higher frequencies than we can hear are "180 degrees out of phase"... or, put another way, the opposite of what they should be.

We are lucky that mic and speaker makers did not push the bandwidth in the manner that the converter makers did. With a 20KHz mic, high frequency non linearity is not a problem, because there are no high frequency "tones" (energy) to "fold back".
Again, high frequency content (beyond our hearing) is either "trouble" or "potential trouble", making one more argument against a 192KHz sampling rate. I have a few more up my sleeve, for another time.

A little off subject:

The MP3's are getting better, because they are trying to be "almost as good as possible". To do so, one needs to have a relatively clear grasp of what is the maximum possible required bandwidth and dynamic range. Only then does one start applying principles of psychoacoustics for data compression.

Meanwhile, the "leadership" of pro audio (many of whom are very lacking in technical knowledge) has pushed the industry into a far from optimal place! Stay tuned for the next installment of this insanity - people with lesser technical knowledge leading the industry towards 384KHz. Are the blind leading the ignorant, or are the ignorant leading the blind?

Regards
Dan Lavry
www.lavryengineering.com

Logged

kraster

  • Full Member
  • ***
  • Offline Offline
  • Posts: 199
Re: A question
« Reply #30 on: June 26, 2005, 11:02:17 PM »

Hi Dan,



I read Richard Black's 1999 AES paper on the effects of this phenomenon (inspired, I believe, by Mr. Bob Katz's listening experiments). He maintains that if there is insufficient attenuation at Fs/2 in a 44.1KHz system, frequencies immediately above Fs/2 will be insufficiently filtered and cause Aliasing Intermodulation Distortion in the audible range.

Richard Black's experiments concluded that even small amounts of spurious frequencies could cause Intermodulation distortion:
"(The tweeter) was found to give audible intermodulation when fed with 9kHz (approx.) at -12dBW and 21kHz at -47dBW".

Does the filtering on current chip designs sufficiently attenuate frequencies in the stop band to minimise Aliasing Intermodulation Distortion?
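For what it's worth, the aliasing half of the mechanism is simple to reproduce. A sketch (illustrative worst case of mine, with no anti-alias filtering at all): a 23 kHz tone sampled at 44.1 kHz produces exactly the same samples as a tone at 44.1 - 23 = 21.1 kHz, so it comes back inside the band.

```python
import numpy as np

fs = 44100
n = np.arange(fs)       # one second of samples at 44.1 kHz
f_in = 23000.0          # input just above Nyquist (22.05 kHz)

# Sampling with NO stop-band attenuation: the samples of a 23 kHz tone are
# identical (up to sign) to those of a tone at fs - 23000 = 21100 Hz.
x = np.sin(2 * np.pi * f_in * n / fs)

spectrum = np.abs(np.fft.rfft(x))
peak = np.fft.rfftfreq(len(n), 1 / fs)[np.argmax(spectrum)]
print(f"apparent frequency after sampling: {peak:.0f} Hz")  # 21100 Hz
```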

Thanks,

Karl
Logged

Terry Demol

  • Full Member
  • ***
  • Offline Offline
  • Posts: 103
Re: A question
« Reply #31 on: June 27, 2005, 07:17:31 AM »

danlavry wrote on Sun, 26 June 2005 22:50


Terry Demol wrote on Sun, 26 June 2005 13:31



I see your point here, but I'm not sure I agree 100%.

It would depend on the ADC's inherent IMD at higher
frequencies. It should be very low for a well designed unit
and as such should not be a significant issue. Possibly some
opamp based IP circuits are more susceptible to higher freq IM
due to their rising distortion Vs rising freq characteristics
but there are others that are very good HF performers.

Also IME low gain apps such as ADC front ends are the easiest
to keep linear at higher frequencies.

Regards,
Terry



Terry,

I believe you were talking about some ultrasonic interaction that happens in the air. David Satz pointed out that such interaction only takes place at very high pressure levels, and to the best of my knowledge he is correct. But I decided to go with the assumption that some degree of what you said may possibly be correct, and the simple logic I used suggests that it is UNDESIRABLE to include the ultrasonics.

Now you are talking about AD non linearity at ultrasonic frequencies, which could completely change the conversation. But "surprisingly", my answer is the same:

Any non linearity at high frequency, be it in a converter, the air itself, or the speakers, is undesirable. The presence of such non linearity will ADD signals that were not present prior to recording. The signals we want are already picked up by the mic.




Dan,

My apologies, I glossed through your post too quickly,
too busy these days.

Yes, I totally understand what you were referring to
WRT the "air IMD" happening in two instances. I thought you
were referring to electronic IMD at the ADC.

It would require speakers of sufficient bandwidth; however, there are plenty of tweeters that go out to at least 40k these days. The popular ring radiator style comes to mind.

It is an interesting subject in its own right.

Regards,

Terry


Logged

danlavry

  • Hero Member
  • *****
  • Offline Offline
  • Posts: 997
Re: A question
« Reply #32 on: June 27, 2005, 11:42:20 AM »

kraster wrote on Mon, 27 June 2005 04:02

Hi Dan,



I read Richard Black's 1999 AES paper on the effects of this phenomenon (inspired, I believe, by Mr. Bob Katz's listening experiments). He maintains that if there is insufficient attenuation at Fs/2 in a 44.1KHz system, frequencies immediately above Fs/2 will be insufficiently filtered and cause Aliasing Intermodulation Distortion in the audible range.

Richard Black's experiments concluded that even small amounts of spurious frequencies could cause Intermodulation distortion:
"(The tweeter) was found to give audible intermodulation when fed with 9kHz (approx.) at -12dBW and 21kHz at -47dBW".

Does the filtering on current chip designs sufficiently attenuate frequencies in the stop band to minimise Aliasing Intermodulation Distortion?

Thanks,

Karl



Karl,

You can go to my web site at www.lavryengineering.com and click on support. Look for my article named: "Sampling, Oversampling, Imaging, Aliasing". It is a PDF file.

In the file you will see some plots showing what happens when you have no oversampling, X2 oversampling, X4 oversampling...
Notice the following:
When one begins with audio data that is limited to, say, 22KHz (CD format) and oversamples by X2 to 88.2KHz, the frequency range between 22KHz and about 66KHz (88.2KHz - 22KHz) is free of activity.
If you upsample by X4 from 44.1KHz to 176.4KHz, there is a "dead zone" between 22KHz and about 154KHz (176.4KHz - 22KHz).
Of course, if you upsample higher, the "dead zone" increases.
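Dan's figures fall straight out of the arithmetic. A sketch (using his ~22KHz band edge and a 20KHz pass band as the assumed numbers): the first image the analog reconstruction filter has to remove sits at factor*fs - band, so the usable transition band grows with the oversampling factor.

```python
FS_BASE = 44100.0    # base (CD) sample rate
BAND = 22000.0       # highest frequency present in the data (~Nyquist)
PASS_EDGE = 20000.0  # audio band the analog filter must leave untouched

def first_image(factor):
    """After digital interpolation by `factor`, the first image energy the
    ANALOG reconstruction filter must remove starts at factor*FS_BASE - BAND."""
    return factor * FS_BASE - BAND

for factor in (1, 2, 4):
    stop = first_image(factor)
    print(f"x{factor}: pass {PASS_EDGE/1e3:.0f} kHz, stop {stop/1e3:.1f} kHz, "
          f"transition band {(stop - PASS_EDGE)/1e3:.1f} kHz")
# x1 leaves only a ~2 kHz transition band; x4 leaves well over 100 kHz.
```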

The analog filter for a DA needs to remove the high frequency image energy. In the case of X1 oversampling, you need to pass 20KHz and block 22.1KHz, and that is a tough job - a 2KHz transition band. But in the case of X4, you need to pass 20KHz and only block above 154KHz. Now you have the ability to pass audio all the way out to, say, even 54KHz and still have a 100KHz filter transition band. Moving the pass band to 54KHz gets you out of a big mess - the phase problems that arise when the filter sits right at 20KHz. Also, obviously, a 100KHz transition band makes for a much simpler filter than a 2KHz transition band!

Most DA's today have oversampling so high that one can set the filter way above the audio band and still reject the image energy by 120dB or more. The upsampling by some factor X pushes the image energy to very high frequencies by digital computation that can be phase linear.

A similar story, ending with a digital computation (that can be linear phase), holds for the AD side.

That is why I am saying: the analog 20KHz filter problem, with its associated phase problems, is history and should be put aside. Your AD and DA, and everyone else's (unless it is very old gear), has oversampling and upsampling in it, therefore the problem of the 20KHz analog filter is long gone.

Improving gear is not about 20KHz analog filters. We are long past this bottleneck. Of course there are still issues to deal with, but we are making some progress.

Regards
Dan Lavry
www.lavryengineering.com
Logged

danlavry

  • Hero Member
  • *****
  • Offline Offline
  • Posts: 997
Re: A question
« Reply #33 on: June 28, 2005, 02:17:28 PM »

kraster wrote on Mon, 27 June 2005 04:02

Richard Black's experiments concluded that even small amounts of spurious frequencies could cause Intermodulation distortion:
"(The tweeter) was found to give audible intermodulation when fed with 9kHz (approx.) at -12dBW and 21kHz at -47dBW".

Does the filtering on current chip designs sufficiently attenuate frequencies in the stop band to minimise Aliasing Intermodulation Distortion?

Thanks,

Karl


Please note that I am not taking issue with Mr. Black's findings. I have not yet read his paper, but I see no reason to question that 9KHz and 21KHz tones fed into some specific tweeter will generate intermodulation. And maybe all tweeters generate some of that distortion.

But my comment was about the audibility of analog filters at 20KHz, and my answer was: we do not need to worry about filters that we no longer use.

The tweeter issue you mentioned seems to be a speaker maker problem. The makers of passive speakers have to work with analog filters for the crossover region between drivers (transducers), and the problem is "very serious" because the crossover frequencies are in the audible range, way below 20KHz. But that is a whole other issue - the issue of making good speakers over the audible range.

We are talking about the pros and cons of extending the sampling rate, and thus the audio bandwidth, assuming that the speakers and mic can cover the true audio hearing range (whatever it may be).

The problem is indeed that while we cannot do a near perfect job to 20KHz, some people are looking for answers based on extending the bandwidth capability to 96KHz (192KHz sampling), about 3-4 times what the best ear can hear.

If you want to clean your house, cleaning the houses of the 3 next door neighbours will not help Smile

Regards
Dan Lavry
www.lavryengineering.com  


Logged

kraster

  • Full Member
  • ***
  • Offline Offline
  • Posts: 199
Re: A question
« Reply #34 on: June 28, 2005, 05:00:45 PM »

Thanks Dan,

Mr. Black's paper can be found here:
http://www.musaeus.co.uk/aespaper.htm


Thanks for your response. I wasn't sure if the technology Richard Black referred to in his paper was outdated (it was written in 1999). He acknowledges that oversampling techniques have improved filter performance, but according to the late Julian Dunn it's the frequencies immediately above 20KHz that cause the problem, even at low levels. This may be a moot point now if filters are more efficient, but it is surprising how little it takes to produce intermodulation effects in speakers.

I am not suggesting that increasing the sample rate will rectify the problem. On the contrary: since the majority of speakers top out at about 20KHz, I believe that higher sample rates might increase ID by allowing ultrasonic frequencies into speakers bandlimited to 20KHz. If we lived in a world where speakers were completely linear up to 40KHz, it would not be a problem.


Karl
Logged

danlavry

  • Hero Member
  • *****
  • Offline Offline
  • Posts: 997
Re: A question
« Reply #35 on: June 29, 2005, 11:58:20 AM »

kraster wrote on Tue, 28 June 2005 22:00

Thanks Dan,

Mr. Black's paper can be found here:
http://www.musaeus.co.uk/aespaper.htm


Thanks for your response. I wasn't sure if the technology Richard Black referred to in his paper was outdated (it was written in 1999). He acknowledges that oversampling techniques have improved filter performance, but according to the late Julian Dunn it's the frequencies immediately above 20KHz that cause the problem, even at low levels. This may be a moot point now if filters are more efficient, but it is surprising how little it takes to produce intermodulation effects in speakers.

I am not suggesting that increasing the sample rate will rectify the problem. On the contrary: since the majority of speakers top out at about 20KHz, I believe that higher sample rates might increase ID by allowing ultrasonic frequencies into speakers bandlimited to 20KHz. If we lived in a world where speakers were completely linear up to 40KHz, it would not be a problem.


Karl



Thank you for the link. I read the paper, and it talks about the inability of a speaker to deal with ultrasonic frequencies, causing intermodulation. The paper even went as far as to suggest a non-real-time DIGITAL "mastering filter" (enabling a lot of computation) to make sure there is no energy over 20KHz, to help the speaker problem.

After reading the paper, I do not see it as advocating going to higher sampling rates for the sake of eliminating 20KHz filters. I do not see the paper advocating capture of ultrasonic frequencies. On the contrary! The paper suggests complete elimination of ultrasonics, because they may cause intermodulation distortion in the speaker. In other words, if I understand it correctly, the author would rather use a sharp 20KHz (or so) decimation filter, even for 96KHz and 192KHz sampling, to protect against intermodulation based on ultrasonics.

In other words, I read what he says as: a 96KHz sampling system with a good, sharp 20KHz decimation filter (like a 44.1KHz system) has the advantage over a 96KHz (or 192KHz) system with a more gradual filter.

I don't disagree. I see some other considerations for a slight increase in sample rate, and after taking all the factors into account we may find the optimum point - the best sampling compromise, which in my view is around 50-70KHz, mics that do not go much above 20KHz, improved speakers and much more...

Regards
Dan Lavry
www.lavryengineering.com

Logged

kraster

  • Full Member
  • ***
  • Offline Offline
  • Posts: 199
Re: A question
« Reply #36 on: June 29, 2005, 08:09:37 PM »

Quote:



Thank you for the link. I read the paper, and it talks about the inability of a speaker to deal with ultrasonic frequencies, causing intermodulation. The paper even went as far as to suggest a non-real-time DIGITAL "mastering filter" (enabling a lot of computation) to make sure there is no energy over 20KHz, to help the speaker problem.

After reading the paper, I do not see it as advocating going to higher sampling rates for the sake of eliminating 20KHz filters. I do not see the paper advocating capture of ultrasonic frequencies. On the contrary! The paper suggests complete elimination of ultrasonics, because they may cause intermodulation distortion in the speaker. In other words, if I understand it correctly, the author would rather use a sharp 20KHz (or so) decimation filter, even for 96KHz and 192KHz sampling, to protect against intermodulation based on ultrasonics.

In other words, I read what he says as: a 96KHz sampling system with a good, sharp 20KHz decimation filter (like a 44.1KHz system) has the advantage over a 96KHz (or 192KHz) system with a more gradual filter.

I don't disagree. I see some other considerations for a slight increase in sample rate, and after taking all the factors into account we may find the optimum point - the best sampling compromise, which in my view is around 50-70KHz, mics that do not go much above 20KHz, improved speakers and much more...

Regards
Dan Lavry
www.lavryengineering.com






It shines a rather dubious light on the validity of listening tests. Perceived differences in listening tests could have a lot more to do with ID than with "ultrasonic perception". As 96k will let more ultrasonic frequencies through (if the mic is capable of capturing them from the source), it increases the chances of non linearity in the speaker.

An interesting point I was considering is that a lot of people perceive differences in inharmonic sources, cymbals, percussion etc. at higher sample rates. The frequency content of these sources can extend way up the frequency spectrum and would be a likely candidate to cause ID in speakers if captured. But because the source is inharmonic, the resultant ID in speakers would appear to be correlated with the source, thus giving the impression of more 'presence' where in fact it's just distortion!


Karl
Logged

danlavry

  • Hero Member
  • *****
  • Offline Offline
  • Posts: 997
Re: A question
« Reply #37 on: June 30, 2005, 12:46:38 PM »

“It shines a rather dubious light on the validity of listening tests. Perceived differences in listening tests could have a lot more to do with ID than with "ultrasonic perception". As 96k will let more ultrasonic frequencies through (if the mic is capable of capturing them from the source), it increases the chances of non linearity in the speaker”

I am glad to see that you are including the mic in the equation. Most mics don't pick up much above 20KHz! Also, while talking about listening tests, I wish we were talking about some double blind ABX tests. The fact is we are talking about "reports", which is a "different thing" altogether.

“An interesting point I was considering is that a lot of people perceive differences in inharmonic sources, cymbals, percussion etc. at higher sample rates. The frequency content of these sources can extend way up the frequency spectrum and would be a likely candidate to cause ID in speakers if captured. But because the source is inharmonic, the resultant ID in speakers would appear to be correlated with the source, thus giving the impression of more 'presence' where in fact it's just distortion!”

Again, I really do not think ultrasonics play much of a role here, because very few mics even work up there. But your comment is very interesting:

You said: “The frequency content of these sources can extend way up the frequency spectrum and would be a likely candidate to cause ID in speakers if captured. But because the source is inharmonic, the resultant ID in speakers would appear to be correlated with the source, thus giving the impression of more 'presence' where in fact it's just distortion!”

This comment may be applied to the audio content without any ultrasonics present in the signal. Inter-modulation due to ultrasonic energy is only one mechanism for distortion…

Regards
Dan Lavry
www.lavryengineering.com
Logged

hiendaudio

  • Newbie
  • *
  • Offline Offline
  • Posts: 10
Re: A question
« Reply #38 on: June 30, 2005, 06:02:17 PM »

Hi:

If all the people who claim to hear beyond-20K audio see a far field impulse response of a concert hall, they
Logged

kraster

  • Full Member
  • ***
  • Offline Offline
  • Posts: 199
Re: A question
« Reply #39 on: July 03, 2005, 07:32:21 AM »

Given the evidence in Dan's 192KHz white paper, and the acknowledgement by many researchers that somewhere around 60KHz is the optimal sampling frequency, whose bright idea was it to jump up to 96 and, in particular, 192k sampling rates? Is there collusion between various hardware manufacturers (i.e. hard-disk makers and DSP makers) and AD/DA makers (with the exception of Mr. Lavry) to keep pushing the sample rate up?

When all is said and done the most compelling reason for utilising high sample rates is that it financially benefits a select few hardware manufacturers.

I know the above statement can be taken as a given, but without any compelling technical argument in its favour, how can they (the disk/dsp makers) still get away with it?

Karl


Logged
Pages: 1 2 3 [All]   Go Up