R/E/P Community

Author Topic: Understanding Dan's 192kHz paper/argument  (Read 24609 times)

danlavry

Re: Understanding Dan's 192kHz paper/argument
« Reply #30 on: June 11, 2005, 01:04:45 PM »

danickstr wrote on Sat, 11 June 2005 03:29

I don't understand how the information to rebuild the wave (black) from the blocky-looking digital sample (blue) can be saved, and then used to make the sine wave (red) sort of magically come back. It seems that the black wave is complex and changing, but no record of it is actually taken. Sorry to be lost here, I will reread the papers.


There are many subjects that are not easy to grasp completely by intuition alone. In fact, comprehension of Nyquist is almost counterintuitive, and that is why so many people were so quick to accept an increased sample rate as a good thing. But gut feel does not always lead you in the right direction, and at times it leads you in the wrong one. Nyquist's contribution is so important because it taught us something that is counterintuitive to so many.

People tend to think that the more, the merrier. More pixels may yield a better picture. More computer speed is a good thing. Faster internet is better… But more "dots" and denser samples are not always needed.

For example, if you wish to draw a straight line, all you need are 2 dots. You can later connect the 2 dots with a ruler, and having more dots adds no value. A circle will require 3 dots, and there is no reason for more dots…

Yes, my example of a straight line or a circle is based on imposing a very strict "limitation". I can say that with 3 dots one can precisely define and draw a line or a circle. You will not be impressed, because the restriction is so tight. What about any signal, not just a line or a circle?

Indeed, for any signal you need an infinite number of points, and only analog is good enough for that. But Nyquist found that there is something between the case of infinite points with no restriction and the simple 2 points for a line. He found that when a signal is restricted to contain only frequencies within a given bandwidth, you need to sample just faster than twice that bandwidth to have ALL the information in the sampled waveform.

You do not connect the sampled dots with a ruler. You connect them with a circuit called a low-pass filter. Limiting the sampled signal to Nyquist means that we are setting limits and restrictions on how fast the signal can change. A low-pass filter is "sort of an averaging machine": it will take out (smooth out) the fast, staircase-like sudden steps of a sampled signal and bring back the original shape of the curve.

An alternate view in the frequency domain: take a 1 kHz wave and sample it. Take the difference between the original wave and the sampled one, and it "sort of looks like" a bunch of triangles. That difference between the original and sampled wave is referred to as the error signal. The error signal (the difference between the sampled and non-sampled waves) is made of high-frequency content only. In fact, the error signal is made of energy ONLY at frequencies above Nyquist. So by taking a sampled wave and filtering out the frequencies above Nyquist, we are removing the error signal. We are removing the difference between the sampled and original wave, thus a low-pass filter will exactly fill in the "missing dots" at the time intervals between the samples.
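
A rough numerical sketch of that picture, in Python with numpy/scipy (an illustration, not taken from Dan's post or papers): the "analog" wave is only approximated by a densely oversampled grid, the "blocky" version is a simple sample-and-hold, and the half-sample delay of the hold is compensated for so that only the staircase error itself is measured.

```python
import numpy as np
from scipy import signal

fs = 44100                        # audio sample rate
R = 32                            # oversampling factor for the dense "analog" grid
fa = fs * R
n = int(0.1 * fa)                 # 100 ms of signal
t = np.arange(n) / fa
x = np.sin(2 * np.pi * 1000 * t)  # 1 kHz tone, well below Nyquist

# The "blocky" version: hold each 44.1 kHz sample for one sample period.
stair = np.repeat(x[::R], R)

# The hold delays everything by half a sample period, so compare against a
# half-sample-delayed original; only the staircase "steps" then remain as error.
x_d = np.sin(2 * np.pi * 1000 * (t - 0.5 / fs))
err = stair - x_d

# Where does the error's energy live, relative to Nyquist (fs/2 = 22.05 kHz)?
w = np.hanning(n)
E = np.abs(np.fft.rfft(err * w)) ** 2
f = np.fft.rfftfreq(n, 1 / fa)
print("fraction of error energy above Nyquist: %.4f" % (E[f > fs / 2].sum() / E.sum()))

# Remove everything above Nyquist with a low-pass filter: the staircase turns
# back into the (half-sample-delayed) sine. The small residual left over is
# mostly the sinc/NRZ droop discussed later in the thread, not in-band aliasing.
lp = signal.firwin(2001, fs / 2, fs=fa)
smooth = signal.filtfilt(lp, [1.0], stair)
resid = smooth[n // 4: 3 * n // 4] - x_d[n // 4: 3 * n // 4]
print("peak residual after the low-pass filter: %.1f dBFS"
      % (20 * np.log10(np.max(np.abs(resid)))))
```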

That is an intuitive explanation, for people that do not use math. I hope it helps. I can do better but it will be a long explanation with graphs. I may do it at some point.

Regards
Dan Lavry
www.lavryengineering.com

Greg Reierson

Re: Understanding Dan's 192kHz paper/argument
« Reply #31 on: June 11, 2005, 02:28:19 PM »

danlavry wrote on Sat, 11 June 2005 12:04


That is an intuitive explanation, for people that do not use math. I hope it helps. I can do better but it will be a long explanation with graphs. I may do it at some point.

Regards
Dan Lavry
www.lavryengineering.com



Dan,

That's the best intuitive description I've ever read. Great post!


GR

danickstr

Re: Understanding Dan's 192kHz paper/argument
« Reply #32 on: June 11, 2005, 03:21:08 PM »

In this example, I tried to show what I think many people fear can happen to the signal when it is low-pass filtered back into place. I plotted a digital signal (red) and then two variations of the recreated analog signal (light blue and magenta). I tried to make the signals close to the same, but with just a hint of difference, because people wonder whether this type of thing can happen, and whether that is what makes one converter sound different from another, for example. [attached image: index.php/fa/1202/0/] Both signals pass (more or less) through the same points, but have slightly different curvatures, kind of like a Bezier curve used by graphic artists can go through several points and have a different curve profile, but still fit.
Nick Dellos - MCPE  

Food for thought for the future: http://www.kurzweilai.net/ , www.physorg.com

Joe Crawford

Re: Understanding Dan's 192kHz paper/argument
« Reply #33 on: June 13, 2005, 11:16:59 AM »

Dan – Your explanation of sampling theory and Nyquist makes total sense as long as you consider the samples as “dots” (I think the normal term is “point”, which is defined as infinitely small).  But, as Nick Dellos’s diagram shows, when you include the inaccuracies of both sample rate and bit depth, those sample “dots” become rectangles of noticeable size.  While the effects of sample rate and bit depth become insignificant given a continuous, non-varying sine wave and an infinite number of samples, if you take one cycle of the sine wave as Nick has done the possible combinations of amplitude, phase and frequency that fit the samples can become quite large, as in infinite.  I think this (i.e., the single cycle) is where intuition and reality coincide.  

Intuition would also lead me to suspect that jitter like that shown in Nick's diagram would affect the audio a lot more than would a few nanoseconds of clock jitter in the A/D. It would be interesting to take a single cycle of, for example, a 15 kHz sine wave and plot the possible errors in amplitude, phase and/or frequency when sampled at the common bit depths and sample rates, then do the same for 10 cycles. Again, this is just intuition, but I would think the errors in bit depth and sample rate would be more likely to miss recording actual, real jitter in the input audio signal than to cause added jitter in the output (reproduced) signal.
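
For a rough sense of scale on the clock-jitter part (a back-of-the-envelope sketch, not the plot Joe proposes): the worst-case sample-value error caused by a timing error is bounded by the signal's maximum slope, so for a full-scale sine of frequency f and a timing error dt it is at most 2·pi·f·dt of full scale.

```python
import numpy as np

# Worst-case sample-value error from a clock timing error dt on a full-scale
# sine of frequency f: bounded by (max slope) * dt = 2*pi*f*dt (amplitude = 1).
for f in (1e3, 15e3, 20e3):
    for dt in (1e-9, 100e-12):            # 1 ns and 100 ps of jitter
        e = 2 * np.pi * f * dt
        print("f = %5.0f Hz, jitter = %6.1f ps -> worst-case error %6.1f dBFS"
              % (f, dt * 1e12, 20 * np.log10(e)))
```

At 15 kHz with 1 ns of jitter this works out to roughly -80 dBFS: far too small to see on a time-domain plot, which fits Dan's point below about the eye being only "a 1% instrument" while the ear demands much more.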

Joe Crawford
Stony Mountain Studio
Shanks, WV 26761

danlavry

Re: Understanding Dan's 192kHz paper/argument
« Reply #34 on: June 13, 2005, 12:40:13 PM »

Joe Crawford wrote on Mon, 13 June 2005 16:16

Dan – Your explanation of sampling theory and Nyquist makes total sense as long as you consider the samples as “dots” (I think the normal term is “point”, which is defined as infinitely small).  But, as Nick Dellos’s diagram shows, when you include the inaccuracies of both sample rate and bit depth, those sample “dots” become rectangles of noticeable size....

Joe Crawford
Stony Mountain Studio
Shanks, WV 26761



I am not sure what Nick did. Practical implementation does matter, so bits are important and so is jitter, but a proper simulation will not show you a visual wave difference between the original and the sampled wave. In fact, the outcome at only 12 bits is 4096 levels on a screen (or on a printed page), and a nanosecond of jitter has little visual consequence. The visual presentation is much more forgiving than the audio, which demands much better accuracy. The eye is "a 1% instrument"; the ear is "a 0.001% instrument".

But here are a few comments to keep in mind:

A. When doing a computer simulation, you cannot really reconstruct an analog waveform (like a DA does). A computer is always a digital machine. You can APPROXIMATE an analog outcome by having the simulation run at a very high oversampling rate. I have done many such simulations, and it is "good enough" for papers and presentations, but I always make sure to state clearly that the computer is providing an approximation.

B. One should not do a single-cycle simulation. You can choose to show a single cycle, one out of many simulated cycles, but the simulation should extend to include many cycles both before and after what you observe. A single cycle is really a "gated function". A single sine wave cycle contains energy at frequencies extending all the way to infinity; therefore a single cycle violates Nyquist and will bring about alias distortions. One CAN simulate a single cycle AFTER making sure that all the high-frequency content (above Nyquist) is removed. I often "pre-run" my signals through an anti-alias filter simulation.

C. As I pointed out in my papers "Sampling Theory" and "Sampling, Oversampling, Imaging and Aliasing", the reconstruction of a DA signal without some oversampling will suffer from some sinc curve (sin(x)/x) attenuation – the high frequencies drop off in amplitude. The difference between viewing sampling as "DOTS" and "RECTANGLES" is referred to as RZ and NRZ sampling. RZ describes the dot case – you take a sample value at a dot and Return to Zero (thus RZ). The NRZ case means you take a sample and hold its value – Not Return to Zero (NRZ) – until the next sample changes the value. Practical signals are NRZ, and therefore they suffer from the sinc problem (high-frequency loss). Therefore up-sampling (or some other form of compensation) is required, or one at least has to be able to tell that there is sinc attenuation.
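
As a small worked example of that sinc droop (my own arithmetic sketch, assuming Python/numpy): the NRZ hold response is sin(pi·f/fs)/(pi·f/fs), which at 20 kHz is roughly 3 dB down for fs = 44.1 kHz but nearly flat once the hold runs at an oversampled rate.

```python
import numpy as np

def nrz_droop_db(f, fs):
    """Amplitude loss of an NRZ (sample-and-hold) output at frequency f, in dB."""
    x = np.pi * f / fs
    return 20 * np.log10(np.sin(x) / x)

for fs in (44100, 8 * 44100):
    print("hold running at %6d Hz: droop at 20 kHz = %.2f dB"
          % (fs, nrz_droop_db(20000, fs)))
```
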
One way to check frequency response is to use FFT plots. But again, doing an FFT of, say, a one-and-a-half-cycle (or so) test tone is not going to work, because of the high-frequency content due to the gating. That IS why people apply windows to the signal prior to the FFT process. The window serves to overcome the high-frequency content due to a "sudden" start or stop, such as in the case of a single-cycle simulation. That unwanted high-frequency content at the "sudden" start and stop of the signal is called "leakage". The FFT window filters that leakage.
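
A quick sketch of the leakage point (an illustration, using a tone that simply does not fit the FFT frame rather than Dan's single-cycle case): compare how much energy lands far away from the tone with and without a Hann window.

```python
import numpy as np

fs = 44100
n = 4096
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 1000.0 * t)       # 1 kHz: not an integer number of cycles in the frame

f = np.fft.rfftfreq(n, 1 / fs)
far = f > 5000                           # bins far away from the 1 kHz tone

for name, w in (("no window (rectangular)", np.ones(n)),
                ("Hann window", np.hanning(n))):
    X = np.abs(np.fft.rfft(x * w)) ** 2
    print("%-24s fraction of energy above 5 kHz: %.1e" % (name, X[far].sum() / X.sum()))
```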

So, if you make sure to account for the following:

A. A computer is a digital sampling machine, and a computer simulation of a DA calls for some up-sampling.

B. Make sure to have the computer pre-filter (anti-alias filter) your signal, or find some other way to make sure there is no energy above Nyquist in the AD part of the simulation.

C. Up-sample your data prior to simulating the conversion back to a DA (analog) signal.

then the signal going in and the signal coming out will yield the same plot. I have done it many times, including for the plots in my paper "Sampling Theory".
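
For what it's worth, here is a compact sketch of that pipeline in Python/scipy (an illustration, with scipy's default resampling filters standing in for the anti-alias and reconstruction filters – a serious simulation would use much better ones):

```python
import numpy as np
from scipy import signal

fs = 44100
R = 32                                    # oversampling factor for the "analog" grid
fa = fs * R
t = np.arange(int(0.5 * fa)) / fa         # half a second

# "Analog" input approximated on the dense grid: a few tones, all below 20 kHz.
x = (np.sin(2 * np.pi * 997 * t)
     + 0.5 * np.sin(2 * np.pi * 5000 * t)
     + 0.25 * np.sin(2 * np.pi * 15000 * t))

# AD side: anti-alias filter plus decimation down to 44.1 kHz
# (resample_poly applies its own low-pass filter internally).
x44 = signal.resample_poly(x, up=1, down=R)

# DA side: up-sample back to the dense grid, i.e. the reconstruction filter.
y = signal.resample_poly(x44, up=R, down=1)

# Compare output against input, skipping the filter edge transients.
m = len(t) // 4
err = y[m:-m] - x[m:-m]
print("peak in/out difference: %.1f dB relative to the signal peak"
      % (20 * np.log10(np.max(np.abs(err)) / np.max(np.abs(x)))))
```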

Regards
Dan Lavry
www.lavryengineering.com


danickstr

Re: Understanding Dan's 192kHz paper/argument
« Reply #35 on: June 13, 2005, 05:02:09 PM »

I have to confess I don't know what I did either, since I am a layman engineer trying to understand PhD-level digital signal conversion, and I just tried to make a simulation that expresses my area of being lost conceptually. I try to break it down to small, bite-sized concepts that I can digest. I have always felt that any concept is just a bunch of smaller concepts that add up to a big picture, so I am trying to start with the concept that I have the hardest time understanding, which is the way that the Nyquist theorem can eliminate all possibility of error with 2 samples per cycle on a 20 kHz freeform harmonic wave.

Also weighing in the back of my mind is the "oversampling" concept that some converters use as a sales point. I find myself wondering why oversampling is necessary if the Nyquist theorem eliminates the possibility of error. I apologize for not "getting it", since it seems to be a well-known and documented phenomenon, but I think many folks are in my boat, and we are floating around with a big question mark over our heads. Thanks again for trying to help us make the jump to lightspeed here. Maybe the Empire turned off my motivator. (Empire Strikes Back joke.)
Nick Dellos - MCPE  

Food for thought for the future: http://www.kurzweilai.net/ , www.physorg.com

danlavry

Re: Understanding Dan's 192kHz paper/argument
« Reply #36 on: June 13, 2005, 06:00:46 PM »

danickstr wrote on Mon, 13 June 2005 22:02

I have to confess I don't know what I did either, since I am a layman engineer trying to understand PhD-level digital signal conversion, and I just tried to make a simulation that expresses my area of being lost conceptually. I try to break it down to small, bite-sized concepts that I can digest. I have always felt that any concept is just a bunch of smaller concepts that add up to a big picture, so I am trying to start with the concept that I have the hardest time understanding, which is the way that the Nyquist theorem can eliminate all possibility of error with 2 samples per cycle on a 20 kHz freeform harmonic wave.

Also weighing in the back of my mind is the "oversampling" concept that some converters use as a sales point. I find myself wondering why oversampling is necessary if the Nyquist theorem eliminates the possibility of error. I apologize for not "getting it", since it seems to be a well-known and documented phenomenon, but I think many folks are in my boat, and we are floating around with a big question mark over our heads. Thanks again for trying to help us make the jump to lightspeed here. Maybe the Empire turned off my motivator. (Empire Strikes Back joke.)



I respect the fact that you are trying to grasp the concepts.

2 samples per cycle may be misleading. If I give you only 2 dots on a piece of paper, you would not know anything about the signal. It could be one cycle of a sine wave, or "2 dots on the peak of a sine wave", or a straight line segment…  

2 dots can accommodate almost anything – it can be “2 dots on a straight line”, it could be “2 dots on a 1KHz sine wave”, or “2 dots, the first one on some cycle the second on another cycle”….

What can you do with 3 dots? Perhaps not much, but the 3rd dot does limit the possibilities. If the 3 dots are not on a straight line, you would know it right away. So adding a dot does contribute to what you know about the curve.

So as a rule, you may get to know more about the signal with more dots. I said "may know" because there is a missing ingredient here: one can connect the dots with different curves. On one hand, you may know that the dots are not on a straight line, or on a circle, or a parabola… On the other hand, you still do not know what the curve is UNTIL YOU AGREE TO LIVE WITH THE NYQUIST RULE.

When you agree to restrict the signal to contain only frequencies up to Nyquist, there is only one curve that will fit all the dots. But you need a lot of dots. In theory, you need an infinite number of dots. In practice, one second of sampling is a lot of dots (44100 for a CD), so the "approximation" of infinite dots yields great practical results.
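
A small sketch of that "only one curve" statement (an illustration in Python/numpy, not from Dan's papers): the unique band-limited curve through the dots is given by sinc (Whittaker–Shannon) interpolation. Only a finite number of dots is used here, so the rebuilt values are approximate, exactly as the "in theory you need an infinite number of dots" remark says.

```python
import numpy as np

fs = 44100.0
n = 2000                                  # "a lot of dots" (finite, so only approximate)
ts = np.arange(n) / fs                    # the sampling instants (the dots)

def sig(t):
    # any band-limited test signal (all content well below fs/2 = 22.05 kHz)
    return np.sin(2 * np.pi * 441 * t) + 0.3 * np.sin(2 * np.pi * 3700 * t + 0.7)

dots = sig(ts)

def from_dots(t):
    # Whittaker-Shannon: x(t) = sum_k x[k] * sinc((t - k/fs) * fs)
    return np.sum(dots * np.sinc((t - ts) * fs))

# Evaluate at instants that fall *between* the dots, away from the edges.
for t in (0.0201234, 0.0224567, 0.0250001):
    print("t = %.7f s   true = %+.4f   rebuilt from the dots = %+.4f"
          % (t, sig(t), from_dots(t)))
```

With more dots on each side of the evaluation point the agreement keeps improving, which is the practical meaning of needing "a lot of dots".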

Why oversampling? The good answer is:
Go to my web site at www.lavryengineering.com and look for the paper called "Sampling, Oversampling, Imaging and Aliasing" (under Support).

The short answer is: Nyquist requires removal of all the energy content above half the sampling rate. Without oversampling, that requirement calls for a very large amount of precision circuitry, which is costly and not easy to implement. With, say, a 44.1 kHz sample rate, you want to block everything over 22.05 kHz (say, to 100 dB attenuation) but you want to pass everything below 20 kHz. That is a tough filter – a 100 dB slope over about 2 kHz!

But if you upsample to, say, 1 MHz, then your filter is easy – 100 dB over a 500 kHz range (instead of 2 kHz). That is a "walk in the park", relatively speaking…
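
To put rough numbers on how much easier that is (a sketch of my own, using an analog Butterworth prototype as a stand-in and assuming 0.1 dB of allowed passband droop – the exact figures depend on the filter type and spec, but the scaling is the point):

```python
import numpy as np
from scipy import signal

# Estimate the analog low-pass order needed to stay within 0.1 dB up to the
# passband edge while being 100 dB down at the stopband edge.
cases = [
    ("no oversampling:  pass 20 kHz, stop 22.05 kHz", 20e3, 22.05e3),
    ("oversampled:      pass 20 kHz, stop 500 kHz", 20e3, 500e3),
]
for name, fpass, fstop in cases:
    order, _ = signal.buttord(2 * np.pi * fpass, 2 * np.pi * fstop,
                              gpass=0.1, gstop=100.0, analog=True)
    print("%-48s -> Butterworth order about %d" % (name, order))
```

An order in the hundreds versus a handful of poles is the difference being described.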

There are a couple of other issues. If you want more detail read the paper I mentioned.

Regards
Dan Lavry
www.lavryengineering.com

danickstr

Re: Understanding Dan's 192kHz paper/argument
« Reply #37 on: June 13, 2005, 06:56:34 PM »

I certainly don't have a problem with cutting out the musical energy over 22.05 kHz. If this were to remain the only argument for 88.2 kHz and higher sample rates, then consider me converted. The type of speakers needed to recreate this stuff is getting into the "moonrock needle" area of boutique elite shopping. Most adults have been shown to hear up to about 15 kHz, I think.

Secondly, if the Nyquist thing is a physics "card trick" (by that I mean it seems like magic until you understand it) that guarantees, by scientific law, that the curve is identical when the machine is built properly, then that is the concept that I have to understand more thoroughly, and I can do some due diligence on the topic. I certainly will read the paper you referenced as well.

Thanks again!  Cheers.
Nick Dellos - MCPE  

Food for thought for the future: http://www.kurzweilai.net/ , www.physorg.com

danickstr

Re: Understanding Dan's 192kHz paper/argument
« Reply #38 on: June 13, 2005, 07:46:26 PM »

I read the 6-page paper on Dan's website, and what I was able to conclude was that, for example, 16x oversampling (about 705.6k samples/sec) as a pre-sampling rate would give a basically perfect digital representation of the analog curve, which I am assuming is then electronically averaged down to be stored as a close-to-perfect Nyquist version at 44.1. So the pre-sampling (which is buffered in the converter and then discarded) is the way to get around the problems associated with just sampling an analog signal at 44.1 from the start.

Or to restate: the sampler gets around having to store the info of a higher sample rate by using the advantages of oversampling to make its own high-resolution emulation of the analog wave in a language it can understand (digital), and then throwing that version away after it exploits it to build a close-to-perfect version of the wave (based on Nyquist) in a buffered area, which it can then store at 44.1 without worrying about aliasing and attenuation.

Not sure if this is right, but I understand how this could work.
Nick Dellos - MCPE  

Food for thought for the future: http://www.kurzweilai.net/ , www.physorg.com

danlavry

Re: Understanding Dan's 192kHz paper/argument
« Reply #39 on: June 14, 2005, 12:36:37 PM »

danickstr wrote on Tue, 14 June 2005 00:46


Or to restate: the sampler gets around having to store the info of a higher sample rate by using the advantages of oversampling to make its own high-resolution emulation of the analog wave in a language it can understand (digital), and then throwing that version away after it exploits it to build a close-to-perfect version of the wave (based on Nyquist) in a buffered area, which it can then store at 44.1 without worrying about aliasing and attenuation.

Not sure if this is right, but I understand how this could work.


The concepts of oversampling for the AD and upsampling for the DA came into play in the early 1990's, and it really helped move digital audio forward, enabling us to do away with costly and complex analog filters. At that time, decimation and upsampling ICs also came into the marketplace. The rates of home DAs (upsampling) started at X2 upsampling, then X4 to X16. The AD decimators were similar in range. The ADs and DAs operating at X2 to X16 needed to be as accurate as possible in terms of bits, because doubling or halving the sample rate does not yield a dramatic change in dynamic range. In other words, a decimator input (say at 88.2 kHz or even 384 kHz) needed to have many bits of accuracy.

A few years later, sigma-delta technology arrived in the audio world. The first design was a Bob Adams design (a brilliant designer). Sigma-delta is a feedback-based architecture (built around what is called noise shaping), and the idea is to have an AD generate only a few bits but at a very high speed. The front end (called the modulator) feeds the few high-speed bits to the back end (called the decimator). The decimator converts the data to many bits at low speed.

The point here is: given that the architecture calls for a very high-speed front end, the Nyquist frequency of sigma-delta converters is very high(!). It is somewhere between 1.4 MHz and 22 MHz these days. So the job of passing everything up to and including, say, 100 kHz while blocking frequencies above 1.4 MHz or more became very easy, relative to the "good old days".
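
A toy sketch of the "few bits at very high speed, then decimate" idea (my own, in Python/numpy/scipy, and deliberately only a first-order, 1-bit loop – real audio modulators are higher order, and this is not a model of any particular converter):

```python
import numpy as np
from scipy import signal

osr = 64                                  # oversampling ratio
fs_out = 44100
fs_mod = fs_out * osr                     # modulator rate, ~2.8 MHz
n = fs_mod // 10                          # 100 ms
t = np.arange(n) / fs_mod
x = 0.5 * np.sin(2 * np.pi * 1000 * t)    # 1 kHz input at half scale

# First-order sigma-delta modulator: a 1-bit quantizer inside a feedback loop.
bits = np.empty(n)
v = 0.0                                   # integrator state
for i in range(n):
    y = 1.0 if v >= 0.0 else -1.0         # 1-bit quantizer
    bits[i] = y
    v += x[i] - y                         # integrator accumulates the loop error

# Decimator: low-pass filter the 1-bit stream and drop down to 44.1 kHz.
audio = signal.resample_poly(bits, up=1, down=osr)

# How clean is the 1 kHz tone in the audio band after decimation?
w = np.hanning(len(audio))
X = np.abs(np.fft.rfft(audio * w)) ** 2
f = np.fft.rfftfreq(len(audio), 1 / fs_out)
sig_band = (f > 900) & (f < 1100)
noise_band = (f <= 20000) & ~sig_band
print("in-band SNR of the decimated 1-bit stream: %.1f dB"
      % (10 * np.log10(X[sig_band].sum() / X[noise_band].sum())))
```

Even this crude first-order loop pushes most of its quantization noise well above the audio band, where the decimation filter removes it.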

Needless to say, a lot of people are still talking about AD aliasing issues as if we live in the tube era. AD aliasing was a major challenge 15 years ago. It is no longer the case.

Regards
Dan Lavry
www.lavryengineering.com        




trevord

Re: Understanding Dan's 192kHz paper/argument
« Reply #40 on: June 14, 2005, 12:39:17 PM »

just a quick note to get your mind bent

learn to think in the frequency domain..
the time domain is an illusion :-)

seriously...
IMHO a lot of audio engineers have trained themselves to think like
they are looking at a scope..


remember
a periodic wave has lots of energy all over the place (frequency wise)
what you see in the time domain is really the result of the interaction of these infinite number of frequencies
(infinite but arranged in "bands")

playing with one or more of these "bands" results in weird things in the time domain.

it is hard to explain by just looking at the signals in the time domain


danlavry

Re: Understanding Dan's 192kHz paper/argument
« Reply #41 on: June 14, 2005, 04:24:45 PM »

trevord wrote on Tue, 14 June 2005 17:39

just a quick note to get your mind bent

learn to think in the frequency domain..
the time domain is an illusion :-)

seriously...
IMHO a lot of audio engineers have trained themselves to think like
they are looking at a scope..


remember
a periodic wave has lots of energy all over the place (frequency wise)
what you see in the time domain is really the result of the interaction of these infinite number of frequencies
(infinite but arranged in "bands")

playing with one or more of these "bands" results in weird things in the time domain.

it is hard to explain by just looking at the signals in the time domain





When I play music or listen to it, I try not to think about either time or frequency, and just "go with the sound".

But as a professional, I think both are very important. I think the "complete picture" requires both time and frequency domain, but there are times where one of the two is sufficient.

One of the weaknesses of a time domain plot is the vertical axis. A scope plot is great for about 1% accuracy, but the ear demands much better dynamic range.

One of the weaknesses of frequency plots is that they are based on an "average energy" over a short time interval. They do not yield detail about the attack of a piano key or time varying behaviour.

However, the time domain plot will yield greater detail of time varying signals, and the frequency domain plot can have a log scale to help see the ear's huge dynamic range....

There are more such factors. The specific goal or application is going to dictate which domain is better...

Regards
Dan Lavry
www.lavryengineering.com

danickstr

Re: Understanding Dan's 192kHz paper/argument
« Reply #42 on: June 14, 2005, 07:48:17 PM »

I appreciate the explanations! I agree that it would be helpful to see the frequency domain (I have never thought of it that way). I did study physics enough to do some of the math, and what I remember from the theory is that of course sound is an amalgam of sonic pressurizations that ultimately end up wriggling our ears (or a microphone) back and forth based upon the energies creating them.

So while the freq. domain is a novel way of visualizing sound (if I understand it right), it seems that the ultimate criterion for a sound engineer is the pulsing of air molecules that translates into an additive (and subtractive) time wave. The accurate reproduction of this wave and the frequency of time-based sampling seems to be where we end up.

Knowing that modern AD converters oversample at such high rates makes the application of Nyquist theory (why isn't it a law?) seem like it will work flawlessly at 44.1, if the machine stays out of its own way. Then the worries become plug-ins and summing, but that is another avenue altogether.

Sigma-Delta and Bob Adams. I will check it out.  Thanks.

edit: this paper is interesting: http://www.iar-80.com/page21.html
Nick Dellos - MCPE  

Food for thought for the future: http://www.kurzweilai.net/ , www.physorg.com

trevord

Re: Understanding Dan's 192kHz paper/argument
« Reply #43 on: June 15, 2005, 03:07:45 AM »

danlavry wrote on Tue, 14 June 2005 21:24


When I play music or listen to it, I try not to think about either time or frequency, and just "go with the sound".


Although my response was not to your post
(I think we replied close together),
I will say this:

I agree with you. I think I became comfortable thinking in the frequency domain mostly from doing synthesizer work. After a while you can appreciate what a difference a filter movement here and/or an increase in even harmonics there can make to a sound.

Even though I made the freq/time domain comment in jest (hence the smiley), I guess the way they are used breaks down sort of like the ear vs. math debate – that is, people are different.

IMHO
even the most mundane studio applications sometimes involve freq. domain analysis (although the practised "ear person" may not know the theory).
For example,
every experienced engineer knows turning down the highs too much on the bass removes the "oomph" or transient.
But how many know enough to say you are reducing the higher frequencies which go directly into generating the transient?

Learning to control harmonics is probably one of the most exciting fields in audio today; as an example I would cite the Crane Song stuff. Not new technology by any means (class D amps), but their use for control of harmonic characteristics is a pretty neat engineering trick. This not only involves tricky (and ballsy) engineering for controlling the harmonics of the audio signal, but you can also get screwed by noise problems if you don't understand harmonic issues in the power supply.

IMHO
I think a lot of the broadband theory is trickling down from communication work into the audio field and will produce some interesting stuff. Not super-clear accuracy, but maybe we will get some interesting effects which tickle our senses the way the old analog designs did.

blueboy

Re: Understanding Dan's 192kHz paper/argument
« Reply #44 on: September 29, 2005, 01:28:58 AM »

I may be way out of my league here, but I have struggled for years to grasp exactly what is going on and I would appreciate an educated assessment of my understanding of this topic.

I have struggled with understanding how digital sampling is able to accurately reproduce a continuous analog waveform using discrete intervals. The visual representation of sampling that I see most often is probably what has caused this confusion for me.

For example, let's say someone gave you a piece of graph paper and told you to draw an image of an analog wave with lots of little squiggles across the page. Then they said, "draw the same wave again on top of your analog wave, but this time you can only draw using the graph paper lines".

With the resulting staircase effect superimposed over your nice smooth analog waveform, it just doesn't seem plausible that digitally graphing (sampling) would ever allow you to  reproduce the original signal when all the variations that occurred "between the lines" were left out.

The answer would seem to be that you should use graphing paper with twice as many lines and try it again. The staircase effect will still be there when you try to re-draw the wave using the "higher resolution" graph paper, but it will be much closer (less quantization error) to the original than the low rez version was.

The perception you are left with is that by increasing the amount of "lines" (sampling frequency and bit depth), the better the end result will be. Hence the perception is that a faster sampling rate will always improve the accuracy of the sampled audio.

It wasn't until I saw Dan Lavry's reply to Andreas earlier in this thread that a light went on for me. Dan posted a plot of the error that occurs when sampling at a given frequency, and explained that all you have to do to remove that error is to filter out all the frequencies above Nyquist, because the error is made up of exactly those frequencies. The explanation was logical, but I still didn't "feel" like I understood it entirely.

It then occurred to me to think about sampling in a different way. The analog waveform that is being captured contains frequencies that we can't hear anyway, so that ORIGINAL waveform is NEVER the one that gets reproduced. The waveform that IS produced, only contains the frequencies below Nyquist (meaning only the frequencies that we are able to hear if the sample rate was 44.1khz for example).

The visual representations of the waves will look different because the original wave contains all the frequencies whether we can hear them or not, while the sampled wave will only contain frequencies below Nyquist.

Even if we increase the resolution to more accurately represent the original waveform, we still only hear what we are physically capable of hearing. In other words, we can't "hear between the graph lines" anyway, so there is no point in using higher resolution graph paper (at least on the horizontal axis).

If we now draw a third continuous curved line on our graph paper that runs through the middle of every "staircase step" we end up with a waveform that more accurately represents the audio that we actually hear.

It makes sense to me now that the more simplified or "filtered" wave that gets sampled can then be accurately represented in D/A conversion through additional oversampling and filtering.

Maybe this is obvious, but I had never thought about "visually filtering" the analog wave on a sampling graph knowing that this extra detail was not audible anyway, and therefore could be "graphed" more easily without having an audible impact.

Please let me know if I am out to lunch on this analogy.

I do have to admit though that I have heard what I perceived to be an improvement when recording at 96khz over 44.1 or 48. I think I'll have to go back and re-read a few earlier posts to fully grasp the reason for that.

I also notice an improvement when up sampling 44.1/16 content to 96/24. I'm assuming that the "benefits" I'm hearing are the effect of distortion produced by the SRC and the increased dynamic range from interpolated bit depth.

Anyway, excuse me if this is the wrong place to be asking these questions or if this is a little too "Digital Sampling 101", but I would appreciate any comments or corrections.

Thanks.
"Only he who attempts the absurd can achieve the impossible." ~ Manuel Onamuno