R/E/P Community


Author Topic: Understanding Dan's 192kHz paper/argument  (Read 24602 times)

danlavry

  • Hero Member
  • Posts: 997
Re: Understanding Dan's 192kHz paper/argument
« Reply #15 on: December 15, 2004, 12:23:40 PM »

bobkatz wrote on Wed, 15 December 2004 16:02

Nika Aldrich wrote on Wed, 15 December 2004 10:15


Quote:

I also notice that when I convert down to 48K on some of my songs (to get faster computer and drive response), the sound of the tracks takes on a filtered, extreme top-end tone. I find this harder tone actually works well for softer pop music and I use it regularly, but it doesn't sound as real.


That is the fault of your downsampling algorithm, not the sample rate...
Nika


Well, probably the SRC algorithm needs some work, but remember that conceivably (and in practice) cumulative filtering and multiple DSP calculations can take things over the line from "inaudible" to "audible"...
I do not want Dan Lavry to remove this post, because it sits very well within the bounds of the science. It is not possible to remove subjective observations entirely from even the most technical forum.


Yes, this comment is technical. In fact, I am glad you learned from my message regarding cumulative effects, posted in the thread "Optimal Sample Rate" dated Nov 17.

Regards
Dan Lavry

stoicmus

  • Newbie
  • Posts: 12
Re: Understanding Dan's 192kHz paper/argument
« Reply #16 on: December 15, 2004, 03:06:58 PM »

Thanks - I'll weigh in on this after I get through the paper.  - Jay

Glenn Bucci

  • Hero Member
  • Posts: 627
Re: Understanding Dan's 192kHz paper/argument
« Reply #17 on: December 21, 2004, 03:02:19 PM »

I read the paper myself. From what I understood, it makes sense.

However, there is a big question I have for Dan and others out there. Why is it that when I sang at 44 with the Yamaha converters of a DM 1000, and then at 96, I heard a little more top end at 96? From the paper, I should hear no difference, since the sample rate is already more than twice the highest frequency in the recording. The only explanation I can think of is that the converters were more consumer-type converters, and that at 96 the quality improved in the 15-20kHz range that the converters lacked at 44.

Dan, you mentioned on page 11 of the PDF file that "the errors near the ends are due to the high frequency content near the ends of the input signal. Keep in mind that the error is a high frequency signal." So this kind of gives me the impression that my observation is correct.

Could you give an explanation of why I (like many others) have heard an improvement at 96 over 44?

danlavry

  • Hero Member
  • Posts: 997
Re: Understanding Dan's 192kHz paper/argument
« Reply #18 on: December 21, 2004, 05:26:55 PM »

Keef wrote on Tue, 21 December 2004 20:02

I read the paper myself. From what I understood, it makes sense....

Could you give an explanation of why I (like many others) have heard an improvement at 96 over 44?


There are many possible causes. The first thing that comes to mind is the cumulative effect of the various gear; I have already talked about it in this forum. Say your mic is a 20KHz device: then you have a loss of 1/2 the power at 20KHz. Add to it the speaker at the other end, with say 20KHz bandwidth, and you have now lost even more at 20KHz, and the loss is noticeable even at 18KHz.
So adding an AD with further attenuation at around 20KHz may just become too much... But getting it (AD or power amplifier or what not) out of the way requires only a few KHz more. Going to 48KHz will make most of the difference. Going to 96KHz is overkill. If you could go to 60KHz, that would solve all your AD high end issues. Of course the mics and speakers are still affecting 20KHz and even below it. Going to 192KHz is way too far!
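Dan's accumulation point can be put into rough numbers. As an illustration (an editor's sketch, not from his paper), model the mic, the AD front end, and the speaker each as a single-pole low-pass with a 20KHz corner; the dB losses of cascaded stages simply add:

```python
import math

def single_pole_loss_db(f_hz, fc_hz):
    """Loss in dB of a first-order low-pass with corner fc, at frequency f."""
    return -10 * math.log10(1 + (f_hz / fc_hz) ** 2)

# Three cascaded 20 kHz devices -- mic, AD front end, speaker.
# (Modeling each as a single pole is an assumption for illustration.)
stages_hz = [20e3, 20e3, 20e3]
for f_hz in (15e3, 18e3, 20e3):
    total_db = sum(single_pole_loss_db(f_hz, fc) for fc in stages_hz)
    print(f"{f_hz / 1e3:4.0f} kHz: {total_db:5.1f} dB combined")
```

Each stage alone is only a 2-3dB loss near the top, but the cascade is about -9dB at 20KHz and roughly -6dB at 15KHz: individually small roll-offs combine into something noticeable.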

Glenn Bucci

  • Hero Member
  • Posts: 627
Re: Understanding Dan's 192kHz paper/argument
« Reply #19 on: December 22, 2004, 01:00:37 PM »

Here's an ex-BBC engineer's opinion:

Why do I hear such a massive difference in the high frequencies when I record drums at 88.2K over 44.1K? I've even tried filtering off everything over 18K while recording, to see if it still sounded better, and it did.

If the waveforms are being accurately reconstructed, why do I hear such a drastic difference?


--------------------



To re-quote the post above: because of cheap parts! You don't say what converters (and other equipment) you are using, but I'd wager we are not talking Prism or dCS. Essentially, what you are hearing are the audible artefacts of duff converters!

The simple fact is that the vast majority of converters don't do what the theory calls on them to do when operating at 44.1 or 48kHz. The anti-alias/reconstruction filters have an audible impact on the pass band when they shouldn't. They introduce amplitude ripples and horrendous phase distortions. And if they are designed to minimise these aspects, the transition band is insufficiently steep and the stop band insufficiently attenuated, resulting in aliasing distortions.

They don't work properly at 88.2 or 96kHz either, but there is so little audio signal energy close to the turnover points that the problems don't manifest.

However, technology continues to move on, and a recent AES paper by M Craven has proposed the application of a technique used in radio astronomy to correct for some of the inherent deficiencies of the digital filter designs used in anti-alias and reconstruction filters. Some manufacturers are already starting to use this new idea, and the results look (and sound) impressive.

Going back to the thought at the start of the thread, there are distinct and audible advantages to operating at 88.2 or 96kHz with current, available converter technologies compared with the results obtained with run of the mill 44.1/48kHz converters. I am very sceptical about any advantages of working at rates higher than 96, but remain open to persuasion -- I know and respect several engineers who claim additional sonic benefits.

Some of the really high end converters (the likes of Prism and dCS, to name but two UK ones) operating at 44.1/48kHz can achieve the same (or better) sonic quality as budget and mid-price converters operating at 96kHz -- suggesting that when engineered properly, the Nyquist theory really does hold up.

As always, in the affordable end of the market, quality is limited by cheap parts, cut corners, and poor design implementations. Nothing new in that -- the same problems affect analogue products too. Why else would a Neve console sound so much better than a Mackie or a Soundcraft?

hugh

--------------------
Technical Editor, Sound On Sound

It is certainly true that some instruments generate ultrasonic energy -- trumpets, some percussion, string sections etc.

It is also true that some of these ultrasonic components can interact in the air to produce audible intermodulation products. I think this is one reason why miking a string section from a reasonable distance always produces a richer, more pleasing sound than close miking each instrument and mixing in a desk!

Also, very few microphones have a response that extends significantly above 25kHz or so, and the same for loudspeakers. So in most cases, the mic is not capturing ultrasonic energy even if it is there, and neither can the speaker reproduce it.

But the high-end roll-off in both cases is gentle -- 6dB/octave, typically -- which means there is little phase distortion.

Contrast that with digital converters with brickwall roll-offs operating at 44.1 or 48kHz sample rates. These inherently cause horrendous amounts of phase distortion around the turnover frequency, and that, I think, is what our ears pick up on as 'the digital sound' that many don't like. Move the sampling rate up to 96kHz, and while the phase distortion still happens, it is now way outside the hearing range and so the sound appears to have improved.

But it's not because higher harmonics are being captured, or conveyed, or reproduced. It's because one of the unpleasant artefacts of digital encoding has been circumvented.
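Hugh's point about brickwall roll-offs can be illustrated numerically. This is a generic sketch of a steep minimum-phase IIR (an 8th-order elliptic chosen for illustration, not the filter used in any actual converter): its group delay varies strongly across the band, so frequencies near the corner arrive later than the rest:

```python
import numpy as np
from scipy import signal

fs = 44_100
# A steep, "brickwall-ish" minimum-phase filter: 8th-order elliptic
# low-pass, 0.1 dB passband ripple, 80 dB stopband, 20 kHz corner.
b, a = signal.ellip(8, 0.1, 80, 20_000, fs=fs)

# Group delay in samples, measured at a few frequencies.
w, gd = signal.group_delay((b, a), w=4096, fs=fs)
for f in (1_000, 10_000, 19_500):
    i = int(np.argmin(np.abs(w - f)))
    print(f"{f / 1e3:5.1f} kHz: group delay ~ {gd[i]:.1f} samples")
```

Whether such filters are representative of modern oversampled converters is exactly what is disputed in the replies that follow.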

Hugh

--------------------
Technical Editor, Sound On Sound

danlavry

  • Hero Member
  • Posts: 997
Re: Understanding Dan's 192kHz paper/argument
« Reply #20 on: January 04, 2005, 05:01:10 PM »

Keef wrote on Wed, 22 December 2004 18:00

Here's a ex BBC engineers opinion

The simple fact is that the vast majority of converters don't do what the theory calls on them to do when operating at 44.1 or 48kHz. The anti-alias/reconstruction filters have an audible impact on the pass band when they shouldn't. They introduce amplitude ripples and horrendous phase distortions....

Hugh

--------------------
Technical Editor, Sound On Sound




"The simple fact is that the vast majority of converters don't do what the theory calls on them to do when operating at 44.1 or 48kHz. The anti-alias/reconstruction filters have an audible impact on the pass band when they shouldn't. They introduce amplitude ripples and horrendous phase distortions. And if they are designed to minimise these aspects, the transition band is insufficiently steep and the stop band insufficiently attenuated, resulting in aliasing distortions."

That was true 15 years ago. Today's AD converters operate at oversampled rates, and the phase problem of the anti-aliasing analog filters is NOT an issue any longer.
DAs also operate at upsampled rates, and the analog reconstruction filter is no longer a problem. Upsampling is done mainly FOR THAT REASON, to overcome phase problems. When is the last time you saw a DA with no upsampling?

When it comes to the digital upsampling filters (for DA), if they are well designed, there is minimal ripple. The FIR interpolators yield linear phase, thus NO phase problems.

When it comes to the digital decimation filters (for AD), if they are well designed, there is minimal ripple. The FIR decimators yield linear phase, thus NO phase problems.
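The linear-phase claim is easy to check numerically. A sketch with a generic 63-tap windowed-sinc FIR (a stand-in; actual converter filters differ): symmetric coefficients give a group delay that is the same constant at every passband frequency, i.e. pure delay and no phase distortion:

```python
import numpy as np
from scipy import signal

# 63-tap windowed-sinc low-pass, cutoff at 0.45x Nyquist -- the kind of
# symmetric FIR used in decimators/interpolators (a generic stand-in).
taps = signal.firwin(63, 0.45)
assert np.allclose(taps, taps[::-1])        # impulse response is symmetric

# Group delay over the passband (rad/sample), avoiding stopband zeros.
w = np.linspace(0, 0.40 * np.pi, 256)
w, gd = signal.group_delay((taps, [1.0]), w=w)
# Linear phase: every frequency is delayed by exactly (63 - 1)/2 = 31
# samples -- a constant latency, with zero phase distortion.
print(gd.min(), gd.max())   # both ~31.0
```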

"They don't work properly at 88.2 or 96kHz either, but there is so little audio signal energy close to the turnover points that the problems don't manifest."

So that is also incorrect. If you want to find phase problems near 20KHz, look at the mics and the speakers…

"As always, in the affordable end of the market, quality is limited by cheap parts, cut corners, and poor design implementations. Nothing new in that -- the same problems affect analogue products too."
hugh
Technical Editor, Sound On Sound


That part is correct.


"It is certainly true that some instruments generate ultrasonic energy -- trumpets, some percussion, string sections etc. It is also true that some of these ultrasonic components can interact in the air to produce audible intermodulation products. I think this is one reason why miking a string section from a reasonable distance always produces a richer, more pleasing sound than close miking each instrument and mixing in a desk!

Also, very few microphones have a response that extends significantly above 25kHz or so, and the same for loudspeakers. So in most cases, the mic is not capturing ultrasonic energy even if it is there, and neither can the speaker reproduce it."


I have been saying that for a long time. I am glad it is being heard. First, let's be very clear: it takes mics AND speakers AND ears to respond to higher frequencies. Second, with say 88.2KHz you have OVER 40KHZ AUDIO! That is more than is needed.

"But, the high end roll off in both cases is gentle -- 6dB/octave, typically -- which means there is little phase distortion."

A musical instrument is not a filter. If it makes, say, a 25KHz harmonic, that harmonic is by definition zero phase! Do we need to capture it? Not if we do not hear it! Having it removed does not impact the sound. So 50KHz trumpet energy is of zero value (for humans, and even for dogs). Again, your comment that "the anti-alias/reconstruction filters have an audible impact on the pass band when they shouldn't" is wrong in the environment of the last 10-15 years of audio. You obviously did not read my papers. Look at "Sampling Theory" and also at "Sampling, Oversampling, Imaging and Aliasing" on my web site under Support.

"Contrast that with digital converters with brickwall roll-offs operating at 44.1 or 48kHz sample rates. These inherently cause horrendous amounts of phase distortion around the turnover frequency, and that, I think, is what our ears pick up on as 'the digital sound' that many don't like. Move the sampling rate up to 96kHz, and while the phase distortion still happens, it is now way outside the hearing range and so the sound appears to have improved."

Once again, the same wrong story is repeated here. A well done, or even a poorly done, digital decimator -- and likewise a digital upsampler -- when based on FIR (not IIR) yields ZERO PHASE SHIFT!
A poorly done filter will have ripple. But instead of saying it has horrendous amounts of phase distortion around the turnover frequency, you should understand that it has ZERO PHASE DISTORTION.

Again, the analog circuits are operating way up there (frequency wise). A modern AD's front end operates at 64-512fs (2.8MHz to way over 22MHz)! Nyquist is so high that the analog filters today are rarely above 3 poles -- no horrendous phase shifts anywhere, not even in the MHz range. A 3 pole filter yields 135 degrees at cutoff, and deviates from linear phase by only a few degrees an octave below cutoff. So if your cutoff is at say 50KHz, your deviation from linear phase is already under control. Disagree? Then set your filter at 100KHz or 500KHz!
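Dan's figures can be verified directly with a normalized 3-pole Butterworth (a sketch; real front-end filters differ in detail): the phase reaches 135 degrees at cutoff, yet one octave below cutoff it deviates from an ideal linear-phase (constant-delay) line by only about 3 degrees:

```python
import numpy as np
from scipy import signal

# 3-pole Butterworth low-pass, analog, cutoff normalized to 1 rad/s.
b, a = signal.butter(3, 1.0, analog=True)

def phase_deg(w_rad_s):
    """Phase of the filter, in degrees, at angular frequency w (rad/s)."""
    _, h = signal.freqs(b, a, worN=[w_rad_s])
    return float(np.degrees(np.angle(h[0])))

print(phase_deg(1.0))                 # -135 degrees at cutoff

# The low-frequency group delay of this filter is 2 s, so ideal linear
# phase would be -2*w radians. Deviation one octave below cutoff (w=0.5):
deviation = phase_deg(0.5) - np.degrees(-2 * 0.5)
print(deviation)                      # about -3 degrees
```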

Most DAs operate at a significant upsampling rate, and the filter corner frequencies are also way above 20KHz. Similar story to the AD -- NO horrendous phase shifts.

Why are you talking about phase shifts resulting from filter corners at such high frequencies when your microphone and speaker have a corner at 20KHz or so?  

I find the spread of this sort of information very disturbing. Plainly wrong facts, mixed with a couple of correct statements and stated with such authority, are the reason even large companies choose to ignore engineering and scientific fundamentals and go for the outrageous 192KHz hype.

Also, while promoting everything British as good (Prism, dCS, Mr. Craven, Neve consoles) and even putting down some non-British makers, let's not forget that it was a dCS paper that is responsible for propagating much of that 192KHz BS.

Regards
Dan Lavry
http://lavryengineering.com

“In a time of deceit, telling the truth is a revolutionary act.”

 


stoicmus

  • Newbie
  • Posts: 12
Re: Understanding Dan's 192kHz paper/argument
« Reply #21 on: January 19, 2005, 12:00:16 AM »

Hey Dan -

Nice paper - finally got a chance to read it.  Some great observations, and I'm inclined to agree with many of the points you've made, esp. with regard to the value of >96kHz rates.  20kHz hearing and Nyquist pretty much define the logical limits of the whole scenario, in my opinion.  I honestly believe that much of the sample rate hype comes directly from those who sell the stuff - and many of us are lining up to dine on the BS.

Jay Craggs

maxdimario

  • Hero Member
  • Posts: 3811
Re: Understanding Dan's 192kHz paper/argument
« Reply #22 on: February 02, 2005, 10:05:58 PM »

danlavry wrote on Tue, 14 December 2004 22:36

Regarding that speed – accuracy tradeoff, that is easier to understand. Analogies can be misleading, but say you take on a task to color a picture with crayons and “stay within the lines”. The picture is intricate. I bet doing it in 10 seconds will be a lot less accurate than if you took 10 minutes. The same statement applies towards so many things. Devices and circuits also have speed limitations (and speed is in fact bandwidth). A given size capacitor takes time to charge, a logic gate takes time to change states and so on. Doing things fast goes against doing things accurately. Devices and circuits can be optimized for maximum speed, power, accuracy and more. They are most often optimized to provide a combination of acceptable tradeoff. When you relax on one requirement, you end up with more “breathing room” for other requirements.



Regards

Dan Lavry




I have noticed that if an audio source is bandwidth-limited with a passive LC filter, even at 15 KHz, this does not bother me.

what bothers me about digital is the loss of feel, rhythm, and realism in the transients, which, to me, is evident even on lower-bandwidth reproduction systems.

a lot of what is disturbing about digital is, in purely intuitive terms, the lack of solidity and 3D quality that high-speed analog excels at.

the sound to me seems to lose resolution of the wavefront starting in the mid-highs (personal intuition) and not solely near the top of the frequency range.

your argument on the importance of stability fits in quite well with what I hear, although I can't say for sure if it is the fundamental problem, because I have no working experience except for modifying the analog I/O, and we are dealing with an 'unnatural' medium.

I will say that in the analog circuits I have built, stability and speed, simplicity of design, and the avoidance of any circuit that might induce ringing or unpredictable phase shift tend to make for a more realistic sound. limiting the bandwidth at 20 K after the fact does not really make a strong impact on my listening, provided the filter is a 'good' one.

the stability probably suffers in DAWs as well, regarding mixing algorithms?

AndreasN

  • Full Member
  • Posts: 247
Re: Understanding Dan's 192kHz paper/argument
« Reply #23 on: February 24, 2005, 07:12:58 AM »

Hi!

There's one thing still puzzling me after reading your paper. Hope you can help explain the effect of amplitude modulation in sampling.

Idea: Sampling audio is like pointing a TV camera at a computer screen. The screen rolls. Audio pulsates, sampling itself pulsates, and the result is interference, manifested as beat-frequency amplitude modulation. I guess that no matter what frequencies are used, either in the medium or the sampler, the effect will always be present to /some/ degree in the bit stream. Example: if you sample at 1 Hz more than the Nyquist rate, the result is a very long 100% amplitude modulation. From there on, the AM speed increases.

I'm unfortunately totally clueless as to whether this is an obnoxious effect at all.

Did a quick and dirty test in Sound Forge when I got the idea. The result is attached in the JPG picture to illustrate the AM. The waveforms show 96/48/44/22kHz; the signal is a 10kHz sine. All sines were generated within Sound Forge, not sampled, so this may have an effect. This beat frequency seems to be everywhere in any sampled signal, although to a lesser degree at lower frequencies. I guess the effect works in harmonics on the beat frequencies, rippling the signal with AM.
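The test can be reproduced in a few lines (a sketch assuming a 22.05kHz rate and a 10kHz sine; the exact Sound Forge settings are not known). The raw sample values of a near-Nyquist sine are exactly an alternating-sign copy of a slow sine at fs/2 - f, so the dots trace a beating envelope even though no sub-Nyquist AM component actually exists in the signal:

```python
import numpy as np

fs, f = 22_050, 10_000                    # sample rate and sine frequency (Hz)
n = np.arange(2048)
x = np.sin(2 * np.pi * f * n / fs)        # the raw sample values

# Identity: for f near Nyquist, sin(2*pi*f*n/fs) = -(-1)^n * sin(2*pi*d*n/fs)
# with d = fs/2 - f (here 1025 Hz). The samples are a slow sine whose sign
# flips on every other sample -- which *looks* like amplitude modulation.
d = fs / 2 - f
envelope = -((-1.0) ** n) * np.sin(2 * np.pi * d * n / fs)
print(np.max(np.abs(x - envelope)))       # ~0: the two are identical
```

Dan's reply below explains why a reconstruction (low-pass) filter removes this apparent modulation.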

Hope you can help explain this further.

I'm terribly sorry if this doesn't make sense and would love to stand corrected. I'm very much humbled in this company!


Cheers,

Andreas Nordenstam (Bergen/Norway)

danlavry

  • Hero Member
  • Posts: 997
Re: Understanding Dan's 192kHz paper/argument
« Reply #24 on: February 24, 2005, 04:19:20 PM »

AndreasN wrote on Thu, 24 February 2005 12:12

Hi!

There's one thing still puzzling me after reading your paper. Hope you can help explain the effect of amplitude modulation in sampling....

Did a quick and dirty test in soundforge when I got the idea. Result is attached in the JPG picture to illustrate the AM....

Hope you can help explain this further.

I'm terribly sorry of this doesn't make sense and would love to stand corrected. I'm very much humbled in this company!

Cheers,

Andreas Nordenstam (Bergen/Norway)


The "test" you did proves that the sampled waveform is not the same as the input analog waveform. They only share the same values at the sample points. Indeed, as you raise the frequency to be near Nyquist, the differences between the input and sampled waveforms become very large. My plot below shows an input wave at 15KHz (red), and below it the sampled wave (in blue), sampled at 44.1KHz. Note that I am holding the sampled wave value constant between samples.

[attached plot: 15KHz input (red), sampled-and-held wave (blue), and their difference (black)]

The black line (below the red and blue) is the difference between the two waves. We simply subtract the blue curve from the red one.

If I could take the blue wave (the sampled signal) and remove from it the difference (the black wave), then I would end up with the original wave, which is the red wave. In other words, if I remove the difference, I end up with the original.

So here is the "trick": The red wave (analog audio) is made out of frequency energy below Nyquist (a requirement for AD conversion).

The blue wave is a sum of two "parts of energy": the first part is all the energy below the Nyquist frequency, and it is IDENTICAL to the analog input (red wave).
The second part is the energy ABOVE the Nyquist frequency, and it happens to be the black wave - the difference. That is the concept that makes the Nyquist theorem work.

In other words, all the difference between the input and sampled waves resides in high frequencies, above Nyquist. Therefore a low pass (analog) filter (one that passes all the energy under Nyquist and blocks all the energy above Nyquist) will in fact remove the difference and "bring back" the original wave. The reconstructed (filtered) wave will have the correct values not only at sample times but at all times (including between the samples).

The filtering process (passing signal below Nyquist and blocking it above Nyquist) is easier to do when we upsample the digital data, thus oversampling DAs. But that higher rate is a localized DA process, not to be confused with 192KHz sampling for audio. Upsampling the data at the DA is not about more data bandwidth or storage; it is a localized process for the DA.
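The reconstruction claim can be demonstrated with ideal (sinc) interpolation standing in for the analog low-pass filter. A sketch using a 15KHz sine sampled at 44.1KHz, as in the plot above (the block length is chosen so the sine is exactly periodic in the block, which makes FFT resampling behave like an ideal filter):

```python
import numpy as np
from scipy import signal

fs, f, N = 44_100, 15_000, 441        # 441 samples = exactly 150 cycles
n = np.arange(N)
samples = np.sin(2 * np.pi * f * n / fs)        # the "blue" sampled values

# Ideal low-pass reconstruction via FFT (sinc) interpolation, evaluated
# on a 16x denser grid -- i.e. *between* the original sample instants.
up = 16
recon = signal.resample(samples, N * up)
t = np.arange(N * up) / (fs * up)
truth = np.sin(2 * np.pi * f * t)               # the "red" analog wave

print(np.max(np.abs(recon - truth)))  # round-off small: original recovered
```

The filtered wave matches the original everywhere, not just at the sample instants, which is exactly Dan's point.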

Regards
Dan Lavry
www.lavryengineering.com    

AndreasN

  • Full Member
  • Posts: 247
Re: Understanding Dan's 192kHz paper/argument
« Reply #25 on: February 27, 2005, 07:22:39 AM »

Hi!

Wish I could be happy with your answer. I'm still intrigued by this 'artifact minus digital values equals original signal' in practice. Not regarding Nyquist++ frequencies -- that's perfectly logical -- but these AM thingies are going on below Nyquist, well into the audio range.


>In other words, all the difference between the input and sampled waves resides in high frequencies, above Nyquist. Therefore a low pass (analog) filter (that passes all the energy under Nyquist and blocks all the energy above Nyquist) will in fact remove the difference and "bring back" the original wave.

If this removal of the artifact had been done by some logic wonder, I could have accepted it. But a low pass filter..? This is where I get lost. Look at my illustration again: the last picture has AM which affects every fourth swing of the 10kHz sine (the modulation is probably at 2.8125kHz, the 22.5kHz SR divided by 8).

I have a hard time visualising a LPF that removes these AM components! So I'm still as intrigued as ever about this phenomenon.


What I'm trying to check with you is this idea of a set of artifacts /below/ Nyquist, working on volume in N integer divisions of the sample rate. I haven't found any mention of this in the sampling theories, except some vague hints about "rippling in the audio band" which may be this effect.


Sorry if this is way too picky, but I'm really puzzled by this AM. I still have a feeling this is a trait inherent in all sampling, below Nyquist. Again, I'm very open to all suggestions and explanations!

Cheers,

Andreas

danlavry

  • Hero Member
  • Posts: 997
Re: Understanding Dan's 192kHz paper/argument
« Reply #26 on: February 27, 2005, 03:23:30 PM »

AndreasN wrote on Sun, 27 February 2005 12:22

Hi!

Wish I could be happy with your answer. Still intrigued by this 'artifact minus digital values equals original signal' in practice....

Have a hard time visualising a LPF that removes these AM components! So I'm still as intrigued as ever about this phenomenon....

Cheers,

Andreas


I did not explain that the difference between the signal and the sampled signal is at higher frequencies, above Nyquist; I just stated it as a fact, which it is. I also did not explain how the filter "fills in" what you call the AM modulation (I view it as an error signal). The fact is, it does. That rippling will disappear when a low pass filter removes the HF energy.

For much more detail, please look at my paper "Sampling Theory" on www.lavryengineering.com.

regards
Dan Lavry
www.lavryengineering.com

danickstr

  • Hero Member
  • Posts: 3641
Re: Understanding Dan's 192kHz paper/argument
« Reply #27 on: June 10, 2005, 09:57:53 AM »

certainly 88.2/96 sounds "different" than 44.1.  is this due to artifacts of reproduction/conversion deficiencies, or to the ability of us trained ear monkeys to hear beyond the human range?  I guess I will believe the experts around here and remain skeptical of the benefits of ultra-dog recording, although Rupert Neve is also an expert, just seemingly not as versed in digital theory as Dan L.  We know that R. Neve believes 192 is necessary, and he is someone I would be inclined to trust, but maybe not in this area.

I personally have never recorded in anything other than 44.1/24 (well, 16, too) and it sounds fine to me, and quite a bit different depending on the converter.  I have never recorded at 60kHz.  is this possible?
Nick Dellos - MCPE  

Food for thought for the future: http://www.kurzweilai.net | www.physorg.com

danlavry

  • Hero Member
  • Posts: 997
Re: Understanding Dan's 192kHz paper/argument
« Reply #28 on: June 10, 2005, 12:45:26 PM »

danickstr wrote on Fri, 10 June 2005 14:57

certainly 88.2/96 sounds "different" than 44.1.... I have never recorded at 60kHz. is this possible?


The standards do not include 60KHz, but 88.2KHz is only a little higher, and it has a practical advantage over 60KHz: you can convert 88.2KHz to 44.1KHz (or 96KHz to 48KHz) with a SYNCHRONOUS sample rate conversion. The ratio is exactly 2 to 1, which makes the job much easier and the results better. Starting with 60KHz would require an ASYNCHRONOUS SRC, which is more audible.
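The synchronous 2:1 case can be sketched in a few lines (an illustration, not any shipping converter's SRC): one fixed low-pass at the new Nyquist, then keep every second sample. A 1KHz tone passes intact, while a 30KHz tone (legal at 88.2KHz, above Nyquist at 44.1KHz) is filtered out rather than aliasing down:

```python
import numpy as np
from scipy import signal

fs_in = 88_200
n = np.arange(8_820)                       # 0.1 s of signal
x = (np.sin(2 * np.pi * 1_000 * n / fs_in)
     + 0.5 * np.sin(2 * np.pi * 30_000 * n / fs_in))

# Synchronous 2:1 conversion: fixed zero-phase FIR low-pass at the new
# Nyquist, then discard every other sample. No ratio tracking needed.
y = signal.decimate(x, 2, ftype='fir', zero_phase=True)
fs_out = fs_in // 2                        # 44,100 Hz

# The surviving signal is (nearly) the pure 1 kHz tone:
ref = np.sin(2 * np.pi * 1_000 * np.arange(y.size) / fs_out)
print(np.max(np.abs(y[200:-200] - ref[200:-200])))   # small residual
```

Because the ratio is exactly 2:1, a single fixed filter does the whole job; an asynchronous ratio like 60KHz to 44.1KHz would instead need continuous interpolation between sample grids.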

But why 88.2KHz and not 44.1KHz? Well, one may argue that 44.1KHz may be a bit tight for audio, however slightly. A 44.1KHz system does not yield 22.05KHz audio bandwidth. Many of the devices in the audio chain may show some attenuation by the time you get to 20KHz. For example, a 20KHz mic (most mics do not go over 20KHz) means a 3dB loss at 20KHz. That audio signal will find at least one more 3dB loss at around 20KHz going through the AD, and there may be other limitations in the signal path before you get to the speakers, which mostly contribute the last of the causes for high frequency limitation.

So while each individual cause may not create a big problem by itself, the combined effect may be noticeable. It is not all about hearing 20KHz. Each of the single causes may have some degree of amplitude loss at 18KHz, and of course some impact at 15KHz… Again each one may be small, but combined they may amount to more than what you desire…  

At this point it is worth mentioning that amplitude loss is often associated with some deviation from linear phase. Such is certainly the case for all analog hardware (including mics and speakers).

Unfortunately, working with an 88.2KHz or 96KHz production only eliminates some of the causes for deviation from frequency flatness and phase linearity. Chances are that your mics and speakers are limited to no more than 20KHz…

The obvious question is of course, why is it that we saw so much marketing hype regarding 192KHz for conversion, while hardly anyone mentioned getting the mics and speakers to “go faster”.

Technology wise, it is very difficult to make a 20Hz-96KHz (or 20Hz-48KHz) mic that works well for audio. It is difficult to make speakers go fast; in fact we already use 3 cones to cover just the basic range (20Hz-20KHz)… The sonic compromises, as well as the price of such devices, are large. It is much easier to speed up a converter with some "reasonable" degradation in performance, and hype the marketplace with "something new".

I am not against extending the high audio frequency range by a few KHz to provide a somewhat wider margin for overcoming the accumulation of various gear limitations. Indeed, I would prefer a slight added margin, such as 25KHz mics and speakers that still work well down to 20Hz (or some good lower-limit goal).
The pro-192KHz hype went for 96KHz audio bandwidth to be used with 20KHz mics and speakers...

And again, the reason for having a few more KHz is not because we hear 25KHz (or 30KHz). A 20KHz device is less flat up to 20KHz than a 30KHz device is. But it is easier to make a 20KHz device work well at 50Hz than a 30KHz device.

As always, engineering calls for a lot of compromises, and we need to look for the OPTIMAL POINT, instead of being sold on "faster is better".

Regards
Dan Lavry
www.lavryengineering.com



danickstr

  • Hero Member
  • Posts: 3641
Re: Understanding Dan's 192kHz paper/argument
« Reply #29 on: June 10, 2005, 10:29:37 PM »

i don't understand how the information to rebuild the wave (black) from the blocky-looking digital sample (blue) can be saved, and then used to make the sine wave (red) sort of magically come back.  it seems that the black wave is complex and changing, but no record of it is actually taken.  sorry to be lost here, I will reread the papers.
Nick Dellos - MCPE  

Food for thought for the future: http://www.kurzweilai.net | www.physorg.com