R/E/P Community


Author Topic: the "high frequency transients" fallacy  (Read 54177 times)

trevord

Re: the "high frequency transients" fallacy
« Reply #135 on: August 18, 2006, 06:44:57 PM »

Jon Hodgson wrote on Fri, 18 August 2006 21:58

intervalkid wrote on Fri, 18 August 2006 05:16

Now correct me if I'm wrong, but isn't the sample rate not simply a widening of the bandwidth but also the number of samples per second? I mean, isn't that what the sample rate is measured in, kHz (thousands per second)?
If you have a 20k wave sampled at 44.1kHz then you are getting approximately 2.205 samples per peak and valley, and if you are sampling at 192kHz you would be getting 9.6 samples per peak and valley. Wouldn't this, though not consciously audible, make for a smoother waveform representation? I mean, the flicker on the computer screen doesn't appear to be flickering at 49 or so times per second; your subconscious creates the illusion of flicker at a much slower strobe. This is because, though you don't consciously see 49 passes per second, your brain subconsciously perceives it. I think this is understood because the eyes are more objective than the ears and make for an obvious and commonly agreed-on phenomenon. Therefore it has been studied and is lucrative to correct.
That aside, considering the bit rate working in conjunction, and the fact that each sound being recorded has not one frequency but tens (maybe hundreds?) of frequencies: wouldn't using a higher sampling rate increase the fidelity of frequencies within the 20Hz-20kHz range regardless?


The one bit of Trevor's rather confused answer I agree with was the bit where he said "In a word, No".

The problem is that you look at a sampled waveform with its steps and think that's what gets fed to your speakers. It's not; it's used to drive a filter, which generates a completely smooth signal.

It's not intuitive, but it is provable that sampling a bandlimited signal at more than twice the highest frequency it contains captures EVERY BIT OF INFORMATION REQUIRED TO REPRODUCE THE SIGNAL EXACTLY (given a perfect quantizer and reconstruction filter), phase, level, frequency, the lot.

Of course the sampling isn't perfect: the clock driving it will have some jitter (even if negligible), and sample resolution isn't infinite, since you have a limited number of bits. However, these factors are known and understood, and increasing the sample rate of the stored PCM doesn't help them.

If this stuff didn't work, then we wouldn't be having this discussion in this way, because Fourier, Nyquist, Shannon etc. were fundamental to communications for decades before people started producing CDs. That's thousands of engineers designing gear that has been used by tens of millions of people every day for decades (probably billions of people these days)... if Fourier or Nyquist were as fundamentally wrong as some people here seem to think, it would have been found out, published, patented, licensed for millions, and the patents would have run out by now.


intervalkid wrote on Fri, 18 August 2006 05:16

If you record a synth with no frequencies produced (including harmonics) above 10 khz and sampled at 22.05 khz and then at 44.1, do you think there would be a difference in the sound?



Well, the problem here is that a number of variables might affect the result: does your synth really have nothing above 10kHz (highly unlikely)? Does your particular ADC do anything different to signals around 10k in the different modes? Does your playback system work equally well at 22.05kHz and 44.1kHz?

So there are a number of things that could change and introduce a difference when you make the switch, even though none of them is inherent in the sample rate change.

So we need to eliminate those differences to make a meaningful test.

Now, with a great deal of confidence I would say that I could do the following.

1) Sample the synth at 44.1kHz
2) Brick wall filter it at 10kHz in the digital domain
3) Downsample to 22.05kHz
4) Upsample back to 44.1kHz

With a proper blind test you wouldn't be able to tell the difference between the outputs of steps 2 and 4, which would answer the question of whether the 22.05kHz stream contained as much information about the 10kHz-bandwidth signal as the 44.1kHz stream.

Of course this is assuming a high enough quality of filtering and sample rate conversion, but this is just a case of correct mathematics.


errr....
Exactly...
Couldn't have said it better myself.
Smile
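Jon's four steps are easy to try at home. Here is a minimal sketch in Python, assuming numpy and scipy are available (the test partials, the filter length and the variable names are made up for illustration, not anything from Jon's actual setup):

Code:

import numpy as np
from scipy import signal

fs = 44100
t = np.arange(fs) / fs                        # one second of audio
# Stand-in "synth": a few partials, all well below 10kHz
x = sum(0.2 * np.sin(2 * np.pi * f * t) for f in (440, 2200, 8800))

# Step 2: steep lowpass at 10kHz in the digital domain
lp = signal.firwin(1023, 10000, fs=fs)
x10k = signal.filtfilt(lp, [1.0], x)

# Step 3: downsample to 22.05kHz; step 4: upsample back to 44.1kHz
# (resample_poly applies its own anti-alias/reconstruction filters)
down = signal.resample_poly(x10k, 1, 2)
up = signal.resample_poly(down, 2, 1)

# If the 22.05kHz stream kept all the information, the residual is tiny
core = slice(2000, -2000)                     # skip filter edge transients
err = np.max(np.abs(x10k[core] - up[core]))
print("peak residual:", 20 * np.log10(err), "dB re full scale")

If the maths holds, the printed residual sits far below audibility; whatever is left reflects the (deliberately cheap) filters, not information lost to the lower sample rate.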

Ronny

Re: the "high frequency transients" fallacy
« Reply #136 on: August 18, 2006, 09:06:05 PM »

intervalkid wrote on Fri, 18 August 2006 16:07

trevord wrote on Fri, 18 August 2006 04:18

In a word - no.
Sampling theory says that as long as you don't violate the 1/2 sample rate rule
AND
keep the signal periodic - no "flat" regions at plus or minus digital max (digital zero)
THEN
the analog signal reproduced from a perfect conversion of the digital data is a perfect representation of the analog signal that was converted.

A higher sample rate would not make a more perfect analog wave, because you didn't violate the lower sample rate's Nyquist limit anyway.

The question is - since A/D converters filter off the frequencies above 1/2 the sample rate - are they filtering important audio information?

Pick the maximum frequency you think is important and set your sample rate to twice that. Theory says you can do perfect conversion back and forth. Going higher than 2x what you think is the highest makes no difference.


This doesn't make any sense. If it did, then the bit depth wouldn't make any difference, since "as long as you don't violate the 1/2 sample rate rule, the analogue signal reproduced from a perfect conversion of the digital data is a perfect representation of the analogue signal that was converted."
If it's perfect sampled at twice the frequency, then where does the bit depth come in?
Also, this completely ignores the fact that there are multiple frequencies being recorded for any given sound.
When do you ever get a frequency with no harmonics? Every instrument would sound exactly the same.



Not true, Adrian. Bit depth directly affects dynamic range: lower the bit depth and you raise the noise floor. It's really apples and oranges compared to sampling rate, since you can sample at any rate with any bit depth. Sampling rate = frequency response, bit depth = dynamic range.
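To put rough numbers on that, the standard rule of thumb from quantization theory (nothing specific to any converter) is about 6dB of dynamic range per bit:

Code:

# Ideal quantizer SNR for a full-scale sine: roughly 6.02*N + 1.76 dB
for bits in (16, 24):
    print(f"{bits} bits -> ~{6.02 * bits + 1.76:.0f} dB dynamic range")
# 16 bits -> ~98 dB, 24 bits -> ~146 dB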

As far as how important high-order harmonics, or "air" frequencies inaudible to humans, really are: you must not forget that the typical studio condenser is only flat to 20k, often plus or minus 3dB; our typical playback systems are only accurate from 18Hz to 22k, often plus or minus 3dB; and our ears highly attenuate signals above 15k relative to the main fundamental frequencies, with most instruments lying in the 31Hz to 4.2kHz range. It matters not if you are recording at 192k and trying to capture the 80kHz harmonic that a muted trumpet has been measured to produce, if the mic is only flat to 20k and everything above 15k gets highly, highly masked by the main energy, which in 99.9999% of all music resides between 31Hz and 4.2kHz.

I've been reading Jon Hodgson's take on the higher sample rate theory for a couple years now and I seldom find a thing that he says that doesn't mirror my own experience. If you want to really know how important the higher sample rates are, just do the blind tests yourself. You may find out that it's not really worth bothering with and that there are many other ways to improve your sound that are more effective than increasing sample rates above 44.1k.  
------Ronny Morris - Digitak Mastering------
---------http://digitakmastering.com---------
----------Powered By Experience-------------
-------------Driven To Perfection---------------

Sin x/x

Re: the "high frequency transients" fallacy
« Reply #137 on: August 19, 2006, 01:47:03 AM »

danlavry wrote on Fri, 18 August 2006 15:00

Sin x/x wrote on Fri, 18 August 2006 18:44

So the 96kHz sounded better. This can only be because of the sample rate, or else the chips are possessed?


You are giving 2 choices, and one of them is not likely. The problem here is that the statement "the 96KHz sounded better" is:
1. Based on listening to specific gear.
2. Does not provide any insight as to why.

The rules of this forum are to stay away from statements about what sounds good, which can be subjective. It is OK to say "so and so sounds good BECAUSE..."



I understand that.
I was being sarcastic.

danlavry

Re: the "high frequency transients" fallacy
« Reply #138 on: August 19, 2006, 05:45:25 PM »

Sin x/x wrote on Sat, 19 August 2006 06:47

danlavry wrote on Fri, 18 August 2006 15:00

Sin x/x wrote on Fri, 18 August 2006 18:44

So the 96kHz sounded better. This can only be because of the sample rate, or else the chips are possessed?


You are giving 2 choices, and one of them is not likely. The problem here is that the statement "the 96KHz sounded better" is:
1. Based on listening to specific gear.
2. Does not provide any insight as to why.

The rules of this forum are to stay away from statements about what sounds good, which can be subjective. It is OK to say "so and so sounds good BECAUSE..."



I understand that.
I was being sarcastic.


Thanks for letting me know. It is not easy to keep up with it all.

Regards
Dan Lavry

danlavry

Re: the "high frequency transients" fallacy
« Reply #139 on: August 19, 2006, 06:18:03 PM »

I see some comments about bit depth here in the conversation.

First, as long as we are talking about theory, sampling rate and bit depth are independent. In engineering and math language, sampling rate and sample accuracy are orthogonal: one is independent of the other, and changing one has no bearing on the other.

Nyquist theory deals with sample rate and bandwidth, so you can assume that each sample represents an ANALOG value with zero error.

In practice, we need to set some accuracy goals. There is no filter that can remove the energy down to, say, -1000dB. There is no perfect sample value, because there are always causes of error, such as inaccuracy in the circuit and the noise that is always there to some degree...

In practice, the sample accuracy (bit depth) and sample rate are not completely orthogonal. I have stated it many times: when one designs for speed, one is compelled to "relax" the demand for bit depth (accuracy), and designing for accuracy works against speed. That is NOT just true of digital; it is true for analog as well.

Anyone who wishes to bring bit depth into the conversation should realize that 44.1KHz is better for bit depth than 96KHz. Of course we cannot slow the rate below a certain point, because that would eliminate some audio. But we should not forget that speeding it up has a downside as well.

There are no simple answers. One cannot say that 96KHz is better, or that 44.1KHz is better. 44.1 is a bit on the low side, 96KHz is a bit on the high side. The best compromise is somewhere between 44 and 96KHz, but there is no standard for 60-70KHz...

Regards
Dan Lavry
http://www.lavryengineering.com

intervalkid

Re: the "high frequency transients" fallacy
« Reply #140 on: August 23, 2006, 12:39:35 AM »

I will admit that I am pretty ignorant and am hoping to learn.
I can only discuss according to my understanding as of now, so here we go.
To my understanding (or so I thought, but am now in doubt),
for each sample the bit depth is applied. For 16/44.1 you would have 16 bits 44.1 thousand times per second. If this is so, then 24 bits 192 thousand times per second seems like it would be a closer-to-perfect representation. Now I notice that a .wav file of 16 bit 44.1kHz is listed as having a bit rate of 1411kbps, which is twice what I would expect from the above understanding.
Nonetheless, whatever the reason for this is, I would assume that a 24 bit 192kHz wav file would have a bit rate of 9216kbps. If this is true, then again I assume that more information means higher fidelity, at least if the file has been band-passed within audible frequencies.
Please tell me where I have erred.

Thanks

Sin x/x

Re: the "high frequency transients" fallacy
« Reply #141 on: August 23, 2006, 02:02:50 AM »

intervalkid wrote on Tue, 22 August 2006 23:39

I will admit that I am pretty ignorant and am hoping to learn.
I can only discuss according to my understanding as of now, so here we go.
To my understanding (or so I thought, but am now in doubt),
for each sample the bit depth is applied. For 16/44.1 you would have 16 bits 44.1 thousand times per second. If this is so, then 24 bits 192 thousand times per second seems like it would be a closer-to-perfect representation. Now I notice that a .wav file of 16 bit 44.1kHz is listed as having a bit rate of 1411kbps, which is twice what I would expect from the above understanding.
Nonetheless, whatever the reason for this is, I would assume that a 24 bit 192kHz wav file would have a bit rate of 9216kbps. If this is true, then again I assume that more information means higher fidelity, at least if the file has been band-passed within audible frequencies.
Please tell me where I have erred.

Thanks



Our senses have limited resolution; we simply don't react to certain things (the higher frequencies).


It takes time to make an accurate measurement.
If you measure for a shorter time, resolution goes down.


If the file has been band-passed, it has the same amount of information in it, and the higher sample rate doesn't add anything.

Andy Peters

Re: the "high frequency transients" fallacy
« Reply #142 on: August 23, 2006, 05:42:38 AM »

intervalkid wrote on Tue, 22 August 2006 21:39

To my understanding (or so I thought, but am now in doubt), for each sample the bit depth is applied.


It's very simple.  Each sample has a specific word width.  (I don't know where "bit depth" originated, but it's just plain wrong.  It's width, as in "the sample is sixteen bits wide."  Say, "the sample is sixteen bits deep" out loud and realize how wrong it sounds.)

And, as Dan points out, word width is independent of sample rate.

Quote:

For 16/44.1 you would have 16 bits 44.1 thousand times per second. If this is so, then 24 bits 192 thousand times per second seems like it would be a closer-to-perfect representation. Now I notice that a .wav file of 16 bit 44.1kHz is listed as having a bit rate of 1411kbps, which is twice what I would expect from the above understanding.


"Bit rate" is a completely uninteresting and silly metric.  For uncompressed audio, it has nothing to do with quality.  All it tells you is that if you're transmitting samples over some serial medium, then you have to ensure that your data rate exceeds "bit rate."

When discussing compressed audio, all the bit rate tells you is that if you're transmitting the audio over some serial medium, you have to ensure that your data rate exceeds your "bit rate."  Oh, yeah, and the bit rate tells you how much damage you've done to your audio.
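For what it's worth, the factor of two that puzzled intervalkid is just the second channel; uncompressed PCM bit rate is plain arithmetic (a throwaway sketch, nothing WAV-specific, and the function name is made up):

Code:

def pcm_bitrate_kbps(sample_rate_hz, bits_per_sample, channels=2):
    """Raw PCM payload rate, ignoring file headers."""
    return sample_rate_hz * bits_per_sample * channels / 1000

print(pcm_bitrate_kbps(44100, 16))   # 1411.2, the "1411kbps" WAV listing
print(pcm_bitrate_kbps(192000, 24))  # 9216.0, matching his second figure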

Quote:

Nonetheless, whatever the reason for this is, I would assume that a 24 bit 192kHz wav file would have a bit rate of 9216kbps. If this is true, then again I assume that more information means higher fidelity, at least if the file has been band-passed within audible frequencies.


Yeah, it does, but so what?  And see Dan's discussion about optimal sample rates.

a-
"On the Internet, nobody can hear you mix a band."

Jon Hodgson

Re: the "high frequency transients" fallacy
« Reply #143 on: August 23, 2006, 07:11:08 AM »

intervalkid wrote on Wed, 23 August 2006 05:39

I will admit that I am pretty ignorant and am hoping to learn.
I can only discuss according to my understanding as of now, so here we go.
To my understanding (or so I thought, but am now in doubt),
for each sample the bit depth is applied. For 16/44.1 you would have 16 bits 44.1 thousand times per second. If this is so, then 24 bits 192 thousand times per second seems like it would be a closer-to-perfect representation. Now I notice that a .wav file of 16 bit 44.1kHz is listed as having a bit rate of 1411kbps, which is twice what I would expect from the above understanding.
Nonetheless, whatever the reason for this is, I would assume that a 24 bit 192kHz wav file would have a bit rate of 9216kbps. If this is true, then again I assume that more information means higher fidelity, at least if the file has been band-passed within audible frequencies.
Please tell me where I have erred.

Thanks


Ok, the first thing to understand is that more samples equals more data, but it does NOT equal more information.

What do I mean by this? Information is something that tells you something you would not otherwise know. How much data is required to represent all the information depends on what you are trying to say, and also on the context in which the data is presented. Take the following two messages:

"Everton 2, Liverpool 1"

and

"Everton football team scored two goals in the match today against liverpool who score 1 goal in that match"

Which one has more data? The second one, obviously. Which one has more information? Well, that depends on what you already know: if the message comes out of the blue then the second one does, but if you know you are listening to today's football scores, then they both contain exactly the same information.

Now let's take something a little different. You have a circle on a piece of paper; that circle is made up of an infinite number of points. How many samples of that circle do you need to take to be able to reproduce it exactly? An infinite number? No, actually it is three. Any three points on that circle allow you to reproduce the complete circle, so long as you know that what you are trying to reproduce is a circle.
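For anyone who wants to see that claim as arithmetic, here is a toy Python sketch (the function name is invented; any three non-collinear points will do):

Code:

import math

def circle_from_3_points(p1, p2, p3):
    # Standard circumcenter formula for three non-collinear points
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    d = 2 * (x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))
    ux = ((x1**2 + y1**2) * (y2 - y3) + (x2**2 + y2**2) * (y3 - y1)
          + (x3**2 + y3**2) * (y1 - y2)) / d
    uy = ((x1**2 + y1**2) * (x3 - x2) + (x2**2 + y2**2) * (x1 - x3)
          + (x3**2 + y3**2) * (x2 - x1)) / d
    return (ux, uy), math.hypot(x1 - ux, y1 - uy)

# Three samples of the unit circle recover centre (0, 0) and radius 1
print(circle_from_3_points((1, 0), (0, 1), (-1, 0)))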

So, hopefully you can understand and accept that more data is not equal to more information.

Right now people are reading this and thinking (or possibly screaming at the screen) "But an audio signal is a hell of a lot more complex than a bloody circle!!". Well that's quite true, but it does have a mathematical representation. If we start with the simple case of a periodic signal, then ANY periodic signal can be composed of a combination of harmonics of the fundamental, with varying level and phase, up to the highest frequency contained in that signal.

With non-periodic signals the maths gets more complicated, but the maths still exists. Not only that, but we can be about as certain as it is possible to be certain about anything that the maths is correct. The formulae have been prodded, twisted and tested by hundreds of mathematicians, thousands of engineers and billions of people over the past century, in many fields including communications.

So unless the space-time continuum somehow gets warped as soon as you pass a Keith Richards riff band-limited to 20kHz or so through a wire (as opposed to passing a signal with a bandwidth of GHz and containing a few TV channels), then I think that Fourier et al can be taken as written.

So back to that mathematical representation of the audio signal.
We can break it down into two elements. The first is a lowpass filter, and for the purposes of simplifying the discussion (and avoiding reams of maths working out the consequences of filter imperfections) we'll assume it is perfect, with signal components below the cutoff passed perfectly and those above it not passed at all.

Then we need a signal to drive that filter. Now, what that signal looks like in its totality does not actually matter, so long as EVERYTHING IN THE PASSBAND IS IDENTICAL TO THE ORIGINAL SIGNAL.

Let's say your original signal was a 0.1v magnitude sine wave at 20 Hz. Now you add a 1v square wave at 100kHz. What you see looks nothing like a sine wave, but put it through a brick wall filter with a 20kHz cutoff and what you will get back is that original sine wave.

So, what signal can we feed into the filter that has the same content below the cutoff point as the original signal? Well, it turns out that a series of pulses at slightly more than twice the cutoff frequency contains all the same components below the cutoff as the original signal. It also contains a load of stuff above the cutoff which is completely different from the original signal (which had absolutely nothing above the cutoff, because our first stage was to bandlimit it to the audible range), so it looks very different, but our filter will remove all those additional components, leaving us with only what we want.

So, we need to generate a series of pulses at slightly more than twice the filter's cutoff frequency. What information do we need to generate that series of pulses? Just the height of each pulse, which luckily happens to be the level of the original signal at a particular point in time, so we can get those heights by sampling the input at the same rate at which we want to generate the pulses.

We now have ALL the INFORMATION required to reproduce the original signal. Capturing more DATA does not give us more information.
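That pulses-into-a-filter picture can be written down directly as the Whittaker-Shannon interpolation formula. A toy model in Python (not real converter code; the toy rates and the 129-sample window are arbitrary):

Code:

import numpy as np

fs = 8                                    # toy sample rate
n = np.arange(-64, 65)                    # sample indices around t = 0
x_n = np.sin(2 * np.pi * n / fs)          # a 1Hz sine sampled at 8Hz

t = np.linspace(-2.0, 2.0, 1001)          # fine-grained "analog" time axis
# Each sample scales a shifted sinc (the ideal filter's impulse response);
# their sum is the completely smooth reconstruction
x_t = np.array([np.sum(x_n * np.sinc(fs * ti - n)) for ti in t])

print("max error vs the true sine:",
      np.max(np.abs(x_t - np.sin(2 * np.pi * t))))

The leftover error comes only from truncating the sinc tails at 129 samples; with more terms it shrinks toward zero, which is the "all the information" point in numerical form.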

Ok, hopefully what I've written so far is understandable. But it is based purely on sample rate; where does sample bit depth come into this? Well, it affects how accurate each sample is.

People tend to think of finite sample resolution as meaning that something is missed out from the original signal, but it is easier to understand if you turn that around and think of it as something being ADDED to the original signal.

Imagine you have a sample-based system which is perfect, except that the sampling has a finite number of quantization steps. What comes out of the end?

Your original signal, PLUS an error signal.

That's not that different from feeding audio through just about anything: put it through a channel on the best analogue desk in existence and what comes out of the end is your original signal plus an error (noise and distortion), though the error may well be so tiny you can't hear it.

And actually that's what that error is: noise and distortion. If the error is correlated to the signal, then you hear it as distortion; if it isn't (i.e. it is random), then you hear it as noise. And in a 24-bit system the error resulting from the quantization steps is not only below audibility (assuming you haven't set full scale such that it drives your eardrums through your brain), but way below the other errors in the system (i.e. the analogue noise).
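Jon's "original signal plus an error signal" picture is easy to reproduce with a toy ideal quantizer (no dither, nothing converter-specific; the numbers are illustrative):

Code:

import numpy as np

fs, f = 44100, 997.0
t = np.arange(fs) / fs
x = 0.5 * np.sin(2 * np.pi * f * t)   # a half-scale sine

q = 2.0 / 2**16                       # 16-bit step size for full scale [-1, 1)
x_q = np.round(x / q) * q             # ideal midtread quantizer
e = x_q - x                           # the added "error signal"

rms = np.sqrt(np.mean(e**2))
print("error RMS:", 20 * np.log10(rms), "dB re full scale")  # about -101

At 24 bits the same experiment lands around -149dB, far below the analogue noise of any real chain.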



kraster

Re: the "high frequency transients" fallacy
« Reply #144 on: August 24, 2006, 08:09:10 PM »

Bravo Jon. This is an excellent description that anyone could understand, but:

Everton 2 Liverpool 1

That's just snake oil.  Twisted Evil