R/E/P Community


Author Topic: Understanding Dan's 192kHz paper/argument  (Read 24592 times)

Jay Levitt

Understanding Dan's 192kHz paper/argument
« on: December 12, 2004, 02:12:26 AM »

I've read the sampling theory paper, and it both confirmed and cleared up much of my knowledge about sampling, while filling in a lot of the math used (such as the sinc curves).

But even after reading the paper, I don't see where the argument comes from that 192 is *worse* than 96.  I can see why it's no better.  And I can see that, in a sense, it's worse because it stores redundant information, requiring extra CPU power, storage space, etc.  But Dan seems to imply that the result is actually sonically worse in 192, and I don't get where that comes from.

I'm not an electronics whiz, and I aced calculus but forgot most of it.  Could someone point me in the right direction?

Nika Aldrich

Re: Understanding Dan's 192kHz paper/argument
« Reply #1 on: December 12, 2004, 07:56:35 AM »

There is a limited amount of resources available with which to do a task.  Engineering is quite often about balancing some resources against others (heat vs. time vs. amount of RAM vs. money vs. RF vs. shielding vs. layout vs. size, and so on).  Devoting resources to higher sampling rates inherently means consequential tradeoffs in other aspects of the design.

There are some other issues as well - like the ability of speakers et al. to reproduce those higher frequencies, and what happens to them when we try.

Nika

Bob Olhsson

Re: Understanding Dan's 192kHz paper/argument
« Reply #2 on: December 12, 2004, 04:34:17 PM »

His argument, as I understand it, is that higher sample rates must ultimately increase computational requirements beyond what can be accomplished in real time using real-world electronic parts.

This means that there is a point of diminishing returns where a higher sample rate can only reduce the precision of the samples acquired in real time.

danlavry

Re: Understanding Dan's 192kHz paper/argument
« Reply #3 on: December 13, 2004, 08:09:34 PM »

Jay Levitt wrote on Sun, 12 December 2004 07:12

I've read the sampling theory paper, and it both confirmed and cleared up much of my knowledge about sampling, while filling in a lot of the math used (such as the sinc curves).

But even after reading the paper, I don't see where the argument comes from that 192 is *worse* than 96.  I can see why it's no better.  And I can see that, in a sense, it's worse because it stores redundant information, requiring extra CPU power, storage space, etc.  But Dan seems to imply that the result is actually sonically worse in 192, and I don't get where that comes from.

I'm not an electronics whiz, and I aced calculus but forgot most of it.  Could someone point me in the right direction?


In my paper I mentioned three arguments for why 192KHz is worse:
1. The file size increases: the space requirement compared to, say, 96KHz is doubled, and data transfer is slower by a factor of 2.

2. The computational requirement grows, often by more than a factor of 2. That is why people who bought into 192KHz often ended up buying very expensive accelerator cards, and still came up short.

3. This is the big one: there is a tradeoff between speed and accuracy. Clearly, the accuracy of a 10Hz system is great, but it is too slow for audio. The accuracy of a 1GHz system is much poorer, and it is too fast for audio. The question is - what is the optimum rate?

It is not true that faster is better. It is not true that more is always better. A 6-foot person weighing 100lb is too thin, but the same person weighing 500lb is too heavy. There is such a thing as an OPTIMAL RATE. In the case of audio, it is all about what people can hear. That is why most mics and speakers are optimized for about 20Hz-20KHz, not 20Hz-96KHz. The same factors should apply to converters.

The speed-accuracy tradeoff is a general engineering concept, and it manifests itself in many ways. Most of them are practical, such as "you can charge the cap more accurately if you have more time", or "the amplifier will settle to a more accurate value if you give it more time". But with modern converters, mostly based on sigma delta, the tradeoff starts on paper, before we get to "real world" circuits. The basic design parameters for a sigma delta converter are: 1. oversampling ratio, 2. filter order, 3. number of quantizer bits.
Say you have a given set of parameters.
You can design for the best 0-24KHz audio bandwidth.
You can have less precision but more bandwidth, 0-48KHz.
You can have even less precision but more bandwidth, 0-96KHz.

This was the paper design stage of sigma delta. Then you get into the real-world circuitry and face the same tradeoffs again...

There is no escape from speed vs. accuracy tradeoffs, sigma delta or not...
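
To make the paper-design tradeoff in point 3 concrete, here is a minimal Python sketch using the standard textbook estimate for the ideal peak SQNR of an L-th order noise-shaping loop. The modulator clock, loop order and quantizer bits below are illustrative assumptions, not figures from any real converter; the point is only that with those three parameters held fixed, every doubling of the claimed bandwidth halves the oversampling ratio and costs roughly (2L+1) x 3 dB of resolution.

Code:

import math

def peak_sqnr_db(bits: int, order: int, osr: float) -> float:
    """Ideal peak SQNR (dB) of an order-L sigma-delta modulator at a given OSR."""
    # 6.02*bits + 1.76 is the plain quantizer; the remaining terms are the
    # textbook noise-shaping gain of an ideal L-th order loop.
    return (6.02 * bits + 1.76
            + 10 * math.log10((2 * order + 1) / math.pi ** (2 * order))
            + (2 * order + 1) * 10 * math.log10(osr))

f_mod = 6.144e6        # modulator clock, Hz (128 x 48 kHz, an illustrative choice)
bits, order = 1, 3     # quantizer bits and loop-filter order, held fixed

for bw in (24e3, 48e3, 96e3):          # audio bandwidth: 0-24 kHz, 0-48 kHz, 0-96 kHz
    osr = f_mod / (2 * bw)             # oversampling ratio shrinks as bandwidth grows
    print(f"bandwidth 0-{bw / 1e3:.0f} kHz   OSR {osr:4.0f}   "
          f"ideal SQNR {peak_sqnr_db(bits, order, osr):6.1f} dB")

With these numbers the ideal resolution drops by about 21 dB (roughly 3.5 bits) each time the bandwidth doubles, before any real-world circuit limitation has even been considered.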

Regards
Dan Lavry

davidstewart

Re: Understanding Dan's 192kHz paper/argument
« Reply #4 on: December 14, 2004, 02:47:00 PM »

Quote:

The basic design parameters for a sigma delta converter are: 1. oversampling ratio, 2. filter order, 3. number of quantizer bits.
Say you have a given set of parameters.
You can design for the best 0-24KHz audio bandwidth.
You can have less precision but more bandwidth, 0-48KHz.
You can have even less precision but more bandwidth, 0-96KHz.

This was the paper design stage of sigma delta. Then you get into the real-world circuitry and face the same tradeoffs again...

There is no escape from speed vs. accuracy tradeoffs, sigma delta or not...

Dan:

I don't want to assume too much or speak for too many people, but I suspect that it's here (above) where many are not following you. You have mentioned the speed-versus-accuracy trade-off many times; however, this aspect of it is not very intuitive, so it is not well understood and keeps coming up again and again.

In the paper design stage (presumably before we think about how long it takes to charge a cap) it isn't obvious why 96kHz mandates less precision. The way most people tend to think about sampling, the higher rate means MORE precision, in terms of acquiring the signal.

So...in THEORY (before we think about the practical reality of having to build the circuit) do we give up precision to sample faster, or is it that we give it up in practice due to the various limitations of parts and physics we have to work within?

Thanks

David Stewart

danlavry

Re: Understanding Dan's 192kHz paper/argument
« Reply #5 on: December 14, 2004, 04:36:31 PM »

Quote:

Dan:

I don't want to assume too much or speak for too many people, but I suspect that it's here (above) where many are not following you. You have mentioned the speed-versus-accuracy trade-off many times; however, this aspect of it is not very intuitive, so it is not well understood and keeps coming up again and again.

In the paper design stage (presumably before we think about how long it takes to charge a cap) it isn't obvious why 96kHz mandates less precision. The way most people tend to think about sampling, the higher rate means MORE precision, in terms of acquiring the signal.

So...in THEORY (before we think about the practical reality of having to build the circuit) do we give up precision to sample faster, or is it that we give it up in practice due to the various limitations of parts and physics we have to work within?

Thanks

David Stewart



Thank you for your comments. I know that some of the concepts regarding sampling are NOT intuitive. It is difficult to explain that more samples are not better in a world where more pixels are better, but the fact remains: samples are not pixels, and there are issues that are not easy to convey to people who did not choose an EE or math career. I wrote my paper to try to simplify things, but I guess it is still too difficult for many to follow.
So let's just say that Nyquist was right, and we have decades of hands-on experience, including test equipment, the communication industry, digital video, digital audio and much more.
And even without that experience, it is solidly proven mathematically that more samples than needed (as indicated by Nyquist) add ZERO content and are totally redundant.

Regarding that speed-accuracy tradeoff: that one is easier to understand. Analogies can be misleading, but say you take on a task to color an intricate picture with crayons and "stay within the lines". I bet doing it in 10 seconds will be a lot less accurate than if you took 10 minutes. The same statement applies to so many things. Devices and circuits also have speed limitations (and speed is in fact bandwidth). A given size capacitor takes time to charge, a logic gate takes time to change states, and so on. Doing things fast goes against doing things accurately. Devices and circuits can be optimized for maximum speed, power, accuracy and more. They are most often optimized to provide an acceptable combination of tradeoffs. When you relax one requirement, you end up with more "breathing room" for the others.

Regarding the sigma delta design, yes, in theory you give up accuracy for speed. The noise shaping concept is about moving noise from the frequency range you wish to use for the signal to other frequencies. Think of it as digging a hole. You can either dig a deep hole of small diameter, or a very shallow hole of large diameter. It is the same amount of dirt, but a different result. The depth of the hole is analogous to the accuracy; the diameter represents the bandwidth. Do you want great 20KHz or not-so-great 100KHz?
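
The "same amount of dirt" picture can be put into numbers. Below is a small sketch assuming the ideal noise transfer function |NTF(f)| = |2 sin(pi f / f_mod)|^L and a made-up modulator clock: the total shaped noise stays fixed, but the fraction of it that lands inside the band you claim grows steeply with the width of that band.

Code:

import math

def inband_noise_fraction(order: int, f_mod: float, bw: float, steps: int = 100_000) -> float:
    """Fraction of the total shaped quantization noise power that falls in 0..bw Hz.

    Integrates |NTF(f)|^2 for an ideal order-L loop with a simple Riemann sum.
    Purely illustrative; no real converter is this ideal.
    """
    def ntf_sq(f: float) -> float:
        return (2.0 * math.sin(math.pi * f / f_mod)) ** (2 * order)

    def integrate(upper: float) -> float:
        df = upper / steps
        return sum(ntf_sq(i * df) for i in range(steps)) * df

    return integrate(bw) / integrate(f_mod / 2)

f_mod, order = 6.144e6, 3                      # illustrative modulator clock and loop order
for bw in (24e3, 48e3, 96e3):
    frac = inband_noise_fraction(order, f_mod, bw)
    print(f"0-{bw / 1e3:.0f} kHz band holds {frac:.2e} of the shaped noise "
          f"({10 * math.log10(frac):6.1f} dB relative to the total)")

Same dirt, different hole: claiming 0-96KHz instead of 0-24KHz pulls roughly 42 dB more of the shaped noise back into the band you said you cared about.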

That answers your question about paper design. But I am an engineer and therefore equally interested in the real parts and circuits. Speed vs. accuracy is a solid concept. Speed vs. power is another, and there are others. Those concepts are no different from the first law of thermodynamics - never proven, but no one has yet come up with a single example to contradict it.


Regards

Dan Lavry

Nika Aldrich

Re: Understanding Dan's 192kHz paper/argument
« Reply #6 on: December 14, 2004, 08:57:44 PM »

If I understand Dan's point correctly:

A 44.1kS/s sample rate involves sampling at a very high rate and using noise shaping to push the quantization error out of the audible range into the range above Nyquist, where it then gets filtered out by the decimation filter.

A 96kS/s sample rate involves the same thing, except that a tradeoff comes into play: do we push the noise merely out of the audible range, or all the way out of the "legal" range below Nyquist?  At a 96kS/s sample rate the "legal" range is 0Hz to 48kHz and the audible range is 0Hz to 20kHz.  Pushing that quantization error out of the audible range is easy, but if the audible range is all we care about, then why sample at the higher rate?  Further, sampling at the higher rate means keeping a lot of noise-shaped quantization error - an artificial byproduct of the sampling process that we shoved up into that range and that now doesn't get filtered out.  The added noise in that range is very likely to wreak havoc upon later processing of the signal.

So, then, push that error out of the "legal" range as well, i.e. shape the quantization error to above Nyquist.  That, however, becomes more difficult.  We can solve this in one of two ways: increase the converter's speed so it runs twice as fast, or use steeper noise shaping.  The first has its own tradeoffs: faster sampling means more error is likely, because of issues with clocks and much more.  Steeper noise shaping means more math, which engenders tradeoffs of onboard DSP capability, price, and mathematical (rounding) error versus performance/quality.
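
Those two escape routes can be roughly quantified with the same ideal noise-shaping estimate used in the sketch earlier in the thread (illustrative numbers only, not from any shipping converter): to keep the 0-24kHz resolution while claiming a 0-48kHz band, you either double the modulator clock or raise the loop order, and each route costs something.

Code:

import math

def peak_sqnr_db(bits: int, order: int, osr: float) -> float:
    """Ideal peak SQNR (dB) of an order-L sigma-delta modulator at a given OSR."""
    return (6.02 * bits + 1.76
            + 10 * math.log10((2 * order + 1) / math.pi ** (2 * order))
            + (2 * order + 1) * 10 * math.log10(osr))

f_mod, bits = 6.144e6, 1                          # illustrative modulator clock and quantizer
print("0-24 kHz, 3rd order            :", round(peak_sqnr_db(bits, 3, f_mod / (2 * 24e3)), 1), "dB")
print("0-48 kHz, same clock and order :", round(peak_sqnr_db(bits, 3, f_mod / (2 * 48e3)), 1), "dB")
print("0-48 kHz, clock doubled        :", round(peak_sqnr_db(bits, 3, 2 * f_mod / (2 * 48e3)), 1), "dB")
print("0-48 kHz, 4th-order shaping    :", round(peak_sqnr_db(bits, 4, f_mod / (2 * 48e3)), 1), "dB")

On paper either route buys the lost ~21 dB back; in hardware the faster clock brings jitter and settling problems, and the higher-order loop brings stability, decimation and DSP costs, which is exactly the tradeoff being described.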

Either way, using a 96kS/s converter means tradeoffs that drive the price up, create more errors, or both.

There is an argument (I'm not giving it validity, just noting that one exists) that 96kS/s sample rates solve a particular problem with lower sample rates.  Based on that, these rates may actually be necessary - though that is a big "may..."  96kS/s sample rates, however, already far exceed the rate needed to fix the "problem."  192kS/s sample rates therefore don't actually help at all, yet they again double (on top of the 96kS/s doubling) the impact of the various tradeoffs.  192kS/s sampling means sampling twice again as fast or using twice again as much noise shaping, which means double again the processing requirements, mathematical (rounding) error and cost, or letting in MORE than twice again as much out-of-band, artificial noise byproduct.

All of these are consequences - many of which can be overcome with good design.  That good design, however, will be expensive, and to date we haven't seen a reason to go that route.  This is not to say that a 192kS/s converter CAN'T sound as good as a 96kS/s converter - just that it will be more expensive - not just double - to accomplish this, and the gains will be zero - not just minimal - but zero.

Does that help answer the question?  Dan, do you approve of my interpretation and understanding of your points?

Nika

Timeline

Re: Understanding Dan's 192kHz paper/argument
« Reply #7 on: December 15, 2004, 12:31:24 AM »

One time long ago, Dean Jensen showed me a spectral analysis of a cymbal crash extending to 27kHz. He told me that a sample rate of around 270kHz or more, 10 times the highest frequency, would be necessary to find true purity with digital recording. Was he right, or just being excessive?

True, we can't hear the frequencies in these ranges, but we can sense the pressure from the sound being reproduced through some monitors, and we are around these frequencies daily in our environment.

It used to be really obvious on older JBL mains back in the LE85 days, because they produced so much more harmonic distortion and reacted to subsonics.  Back then I could EQ the extreme top end and find a very open sheen which is nonexistent in today's recordings unless older API or certain high-IM-distortion amps are in the chain.

Not so much on monitors today, but higher sample rates do extend this information more naturally, in my opinion.  I don't really know if that's a good or bad thing, but I would just like to say that if what I hear at 96K is better definition and noise, then I want more of it. The clarity is so much better to my ear that I dread using anything else. To me, the noise/error measurement thing is not consequential compared to the overall improvement in top-end clarity.

I also notice that when I convert down to 48K on some of my songs to get faster computer and drive response, the sound of the tracks takes on a filtered, extreme top-end tone.  I find this harder tone actually good for softer pop music and use it regularly, but it doesn't sound as real.

Just thought another, less techie perspective might be of interest.

Happy Holidays,
Gary Brandt
Timeline


bobkatz

Re: Understanding Dan's 192kHz paper/argument
« Reply #8 on: December 15, 2004, 09:59:30 AM »

Timeline wrote on Wed, 15 December 2004 00:31

One time long ago, Dean Jensen showed me a spectral analysis of a cymbal crash extending to 27kHz. He told me that a sample rate of around 270kHz or more, 10 times the highest frequency, would be necessary to find true purity with digital recording. Was he right, or just being excessive?




Dean was using some arguments derived from pure analog-world analysis issues. For example, it is recommended, and for good reason, that an oscilloscope have 10x the bandwidth of the circuit being measured in order to detect any alteration of the waveform being measured. But neither of these arguments has anything to do with Nyquist theory! Basically, if you can create a filter which adequately removes all the material above the Nyquist frequency with no audible consequences, then all you need is a sample rate which is twice the bandwidth of interest. End of discussion. :)
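
For anyone who would rather see that "twice the bandwidth is enough" claim in numbers than take it on faith, here is a small Python sketch (made-up rates and harmonics, nothing from Dan's paper): sample a periodic signal whose highest harmonic sits just below Nyquist, recover the harmonic amplitudes with an FFT, and rebuild the waveform between the sample points.

Code:

import numpy as np

fs = 44_100                   # sample rate, Hz
f0 = 441                      # fundamental of the periodic test signal, Hz
n_per_period = fs // f0       # exactly 100 samples per period
harmonics = {1: 1.0, 7: 0.5, 45: 0.25}    # 45 x 441 Hz = 19.845 kHz, still below fs/2

def signal(t):
    # Band-limited test waveform: every component is below the Nyquist frequency.
    return sum(a * np.cos(2 * np.pi * k * f0 * t + 0.3 * k) for k, a in harmonics.items())

x = signal(np.arange(n_per_period) / fs)        # one period, sampled at fs

# For a band-limited periodic signal sampled over exactly one period, the FFT
# returns the exact complex amplitude of every harmonic below fs/2.
X = np.fft.rfft(x) / n_per_period

# Rebuild the waveform on a 16x finer time grid from those recovered amplitudes.
t_fine = np.arange(16 * n_per_period) / (16 * fs)
rebuilt = np.full_like(t_fine, X[0].real)
for k in range(1, len(X)):
    rebuilt += 2 * np.abs(X[k]) * np.cos(2 * np.pi * k * f0 * t_fine + np.angle(X[k]))

print("max error between original and rebuilt waveform:",
      float(np.max(np.abs(rebuilt - signal(t_fine)))))    # rounding-level, i.e. exact

Once every component of the signal is below half the sample rate, the samples pin the waveform down completely; a higher sample rate would add samples, not information.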

BK

Nika Aldrich

Re: Understanding Dan's 192kHz paper/argument
« Reply #9 on: December 15, 2004, 10:15:12 AM »

Reconfirming what Bob Katz said:

Timeline wrote on Wed, 15 December 2004 00:31

One time long ago, Dean Jensen showed me a spectral analysis of a cymbal crash extending to 27kHz. He told me that a sample rate of around 270kHz or more, 10 times the highest frequency, would be necessary to find true purity with digital recording. Was he right, or just being excessive?


In the analog world you need to have bandwidth far exceeding the required range to ensure that phase shift does not occur at a noticeable level.  In the digital world this is not necessary - limited bandwidth does not affect the phase of in-band material, because the process is done with linear-phase filters as opposed to natural (non-linear-phase) filters.  Having said that, the analog sections of the converters need to have high bandwidth so that the phase doesn't get phunny bephore the conversion, but the conversion itself does not need that bandwidth.
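
A quick way to check the linear-phase claim, for anyone curious, is to build a symmetric FIR lowpass and measure its group delay. The length, rate and cutoff below are arbitrary illustrative choices, not taken from any particular converter.

Code:

import numpy as np

fs = 96_000          # sample rate, Hz
cutoff = 20_000      # lowpass cutoff, Hz
taps = 101           # odd length -> impulse response symmetric about the middle tap

n = np.arange(taps) - (taps - 1) / 2
h = (2 * cutoff / fs) * np.sinc(2 * cutoff / fs * n) * np.hamming(taps)   # windowed-sinc FIR

# Group delay = -d(phase)/d(omega).  A symmetric FIR delays every passband
# frequency by the same (taps - 1) / 2 samples.
nfft = 8192
H = np.fft.rfft(h, nfft)
freqs = np.fft.rfftfreq(nfft, 1 / fs)
phase = np.unwrap(np.angle(H))
group_delay = -np.diff(phase) / np.diff(2 * np.pi * freqs / fs)

passband = freqs[:-1] < 18_000            # stay clear of the transition band
print("passband group delay: "
      f"{group_delay[passband].min():.3f} to {group_delay[passband].max():.3f} samples "
      f"(expected {(taps - 1) / 2:.1f})")

Every in-band frequency comes out delayed by the same 50 samples, so the relative phase of the audio is untouched; a minimum-phase analog filter with the same magnitude response would not behave that way.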

Quote:

True, we can't hear the frequencies in these ranges, but we can sense the pressure from the sound being reproduced through some monitors, and we are around these frequencies daily in our environment.


There is no evidence that we can sense that ultra high frequency "pressure."

Quote:

Not so much on monitors today, but higher sample rates do extend this information more naturally, in my opinion.  I don't really know if that's a good or bad thing, but I would just like to say that if what I hear at 96K is better definition and noise, then I want more of it.


This is the fault of your converters, not the sample rate.  The 44.1kS/s sample rate is enough to reproduce everything in the audible range with complete accuracy.  If you're not getting that, then your converters are failing you.

Quote:

I also notice that when I convert down to 48K on some of my songs to get faster computer and drive response, the sound of the tracks takes on a filtered, extreme top-end tone.  I find this harder tone actually good for softer pop music and use it regularly, but it doesn't sound as real.


That is the fault of your downsampling algorithm, not the sample rate.  Again, the lower rate is enough to reproduce everything in the audible range with complete accuracy.  If you're not getting that, then your downsampling algorithm is failing you.

Nika

bobkatz

Re: Understanding Dan's 192kHz paper/argument
« Reply #10 on: December 15, 2004, 11:02:26 AM »

Nika Aldrich wrote on Wed, 15 December 2004 10:15


Quote:

I also notice that when I convert down to 48K on some of my songs to get faster computer and drive response, the sound of the tracks takes on a filtered, extreme top-end tone.  I find this harder tone actually good for softer pop music and use it regularly, but it doesn't sound as real.


That is the fault of your downsampling algorithm, not the sample rate.  Again, the lower rate is enough to reproduce everything in the audible range with complete accuracy.  If you're not getting that, then your downsampling algorithm is failing you.

Nika


Well, probably the SRC algorithm needs some work, but remember that conceivably (and in practice) cumulative filtering and multiple DSP calculations can take things over the line from "inaudible" to "audible". You have to have some "room", which is why I think 96kS/s makes a good compromise with sufficient "overkill." Individually "transparent" stages, when cascaded, can sound less than transparent. For example, one stage of the excellent Weiss SRC sounds fine, but cascaded you can notice some degradation.

I hope Dan Lavry won't remove this post, because it sits right on the line of the science. It is not possible to totally remove subjective observations from even the most technical forum.

Nika Aldrich

Re: Understanding Dan's 192kHz paper/argument
« Reply #11 on: December 15, 2004, 11:05:41 AM »

Yes, good point, Bob.  Cumulative errors can build up and make an otherwise inaudible process audible.  

The same should be noted of the few processes out there that require downsampling in order to work at the higher rates.

Nika

stoicmus

Re: Understanding Dan's 192kHz paper/argument
« Reply #12 on: December 15, 2004, 11:12:23 AM »

I'm new to the forum, and would like to read the 192kHz paper - where is it located?
Thanks -
Jay

ustompsteve

Re: Understanding Dan's 192kHz paper/argument
« Reply #13 on: December 15, 2004, 11:25:40 AM »

I believe this is the one they are referring to:

http://www.lavryengineering.com/documents/Sampling_Theory.pdf

--steve

Bob Olhsson

Re: Understanding Dan's 192kHz paper/argument
« Reply #14 on: December 15, 2004, 11:27:09 AM »

In this era of lossy coding and single-chip digital consumer electronics, I think cumulative degradation ought to be taken very seriously. It's "analog think" to assume that low-quality digital playback gear will simply mask audio quality, making it less important from a communication standpoint. The facts of life are that at some point downstream, every signal is going to break down. Digital cloning buys us some mileage, but downstream digital processing takes it away.