R/E/P Community


Author Topic: Latency of Lavry Blue AD/DA  (Read 10966 times)

mantovibe

Latency of Lavry Blue AD/DA
« on: October 08, 2004, 05:25:26 PM »

Hello Mr. Lavry and all.
I would like to ask how much latency we would have at 44.1K going
through your Lavry Blue for a full conversion chain (analog > AD > DA > analog).
Also, if the latency of the AD differs from that of the DA, I would like to know their respective values.
And, just as a curiosity, is it true that the venerable Lavry Gold AD/DAs introduce higher latency?
Thank you so much for being here (and for manufacturing your gear).

Renzo Mantovani
-Renzo Mantovani-

malice

Re: Latency of Lavry Blue AD/DA
« Reply #1 on: October 09, 2004, 02:37:20 AM »

I guess it should be related to the use (or non-use) of the "crystal clock" function ...

Am I right?

malice

mantovibe

Re: Latency of Lavry Blue AD/DA
« Reply #2 on: October 10, 2004, 02:55:20 PM »

...anxious bump... :)

danlavry

Re: Latency of Lavry Blue AD/DA
« Reply #3 on: October 11, 2004, 08:09:42 PM »

Hello Renzo and George Massenburg,

RENZO - Your question does not often come up, but if I recall, latency is about 1.5msec for the Blue AD and 1.5msec for the Blue DA, at 44.1KHz.

GEORGE - I see the subject of latency on George Massenburg's site with a comment addressed to me, so the rest of this message is also a reply to George: "Sorry Dan, seems as if there is a rationale for high sample rates."

George said:
"In his standard-reference book Blauert "Spatial Hearing" points out that interaural differences as small as 2 microseconds are audible…
For how a delay can affect a delicate percussion performance - more an artistic question than a scientific one - I'll go with a test that I did with a drummer years ago and say 500 microseconds keeps a groove safe…..It was in fact Jeff Porcaro… that we thought 500us was a safe call. Not scientific, but not dumb, either.
George"


George, your comments beg for some EE perspective so here I go:

Uh, sorry George, you are mixing apples and oranges and I am afraid you have come up with a fruit salad. Neither your reference to Blauert nor the one to Wieslaw Woszczyk's paper applies to the topic of latency; both concern interaural differences.

LATENCY IS ONLY AN ISSUE FOR A FEW SPECIAL SITUATIONS NOT ONE THAT SHOULD DRIVE A WHOLE AUDIO INDUSTRY.

Adjusting for latency (relative delay between tracks) in a DAW is technically easy. Even at 44.1KHz, delaying a track by one sample is about 22.7uSec, a more than 20-fold finer resolution than your proposed 500uSec.

For spot monitoring (such as in live performance) one can move the mic (or speaker) by 6 inches to overcome the difference between 1msec and 500usec. Physics says roughly 1msec per foot.
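The arithmetic in the two paragraphs above is easy to check; a minimal sketch, assuming ~343 m/s for the speed of sound (the "1 msec per foot" figure is a slightly rounded rule of thumb):

```python
# One-sample track shift in a DAW at common sample rates.
for fs in (44_100, 96_000, 192_000):
    print(f"{fs:>7} Hz: one sample = {1e6 / fs:.1f} us")  # 44.1k -> ~22.7 us

# Acoustic rule of thumb: sound travels ~343 m/s, i.e. ~1.13 ft per msec,
# so "1 msec per foot" is serviceable, and 500 us is close to 6 inches.
ft_per_sec = 343 / 0.3048
print(f"500 us of delay ~= {ft_per_sec * 500e-6 * 12:.2f} inches of mic distance")
```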

First, why bring the 2uSec interaural differences into the conversation on latency?
It is a different issue!
You call latency more an artistic question than a scientific one. I view it as a scientific question – finding the point where latency is low enough that no one can tell the difference in a BLIND ABX test.

But even after quantifying latency, what is the difference between a 1msec and a 500usec delay? Again, acoustically speaking, IT IS ABOUT 6 INCHES! But let's keep our eyes on the ball. You want it to be 500uSec, so let's carry on.

Latency is not much of an issue for most recording or mastering, certainly not when one uses the same delay on all channels, as is most probably indicated by Peter Poser's comment:

Peter said:
"I have ZERO problems with latency these days."  

So my understanding of latency issues is about live performance (such as spot monitoring). Is there another case where it matters (latency, not interaural differences)?

Doing things in analog is fine. But suggesting 192KHz or 384KHz begs for many comments. The first comment: data captured at say 192 or 384KHz will have to stay at that rate; down sampling to lower rates (say 96KHz or even 44.1KHz) with any linear phase hardware adds delay, thus increasing latency. For example, let's take a modern state of the art sample rate converter chip such as the Analog Devices AD1895 (or AD1896). What is the SRC filter group delay (ignoring the additional delay of 64 bit clocks per frame)?
Here is the formula:
Group delay is 16/fs(in) + 32/fs(out)
Going from 192 to 48KHz, the filter delay is 750uSec.
Going from 192 to 96KHz, the filter delay is 416uSec.
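The formula as quoted appears garbled in transit; one reading that reproduces both figures above is t = 16/fs_in + 32/fs_out. That reading is an assumption here (the AD1895/AD1896 datasheet is the authority), but it checks out numerically:

```python
def src_group_delay(fs_in, fs_out):
    # Assumed reading of the SRC filter group-delay formula quoted above;
    # verify against the AD1895/AD1896 datasheet before relying on it.
    return 16 / fs_in + 32 / fs_out

print(f"192k -> 48k: {src_group_delay(192_000, 48_000) * 1e6:.0f} us")  # 750 us
print(f"192k -> 96k: {src_group_delay(192_000, 96_000) * 1e6:.0f} us")  # ~417 us
```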

So I have just established the fact that even a very efficient hardware SRC design, optimized for low latency (very few taps) will end up with MORE latency unless you keep the data and processing at the high rate, say your desired 384KHz or 192KHz. The minute you down-sample you will have the latency problem back big time.

And of course, at 384KHz, any DSP work, especially at low audio frequencies, takes a lot more DSP power, including more processing bits (wider words). At the same time that your 384KHz (or 192KHz) generates huge data files and demands huge processing power, it also makes the conversion less accurate (see my paper "Sampling Theory" on the Lavry Engineering web site, under Support; I will post additional information about it soon). So all that for what? For moving the mic by a foot?

Here I once again PROTEST against 192kHz and now heaven forbid 384 kHz  from an engineering point of view and my many years of design experience. For me it is about  OPTIMAL SAMPLE RATE for audio devoid of politics.

You suggest going to 384KHz (or even 192KHz) for the sake of decreasing delay. I hope you are not suggesting that all of the audio industry should pay the huge price of an almost nine-fold increase in data size (relative to CD), more than that in processing requirements, and lower conversion accuracy. Moving tracks forward or backward in a DAW by increments of 10-20usec (virtual adjustment for that 1 foot/msec) is a "walk in the park" compared to changing audio over to 192-384KHz, with the increased cost and reduced quality associated with it. Given your proposed 500usec, 22usec (at 44.1KHz) is a very fine adjustment indeed. And yes, the speed-accuracy tradeoff is a fact. It is not arguable.

But let me try my best to go along with your program. There are some special cases where you may wish to have minimum delay. For example, live monitoring. So let’s assume you are proposing to have 384KHz for those special cases. What is wrong with that picture?

A bit of background is in order. Latency was less of a problem some dozen or so years ago. AD and DA architecture was PCM without noise shaping. Even a 44.1KHz, 24 bit R2R AD or DA had a latency of 544usec. At 96KHz the delay is only 250uSec. And a segmented architecture design could yield much less than 50uSec at 44.1KHz!
It was the introduction of sigma delta conversion that caused an increase in delay. Some such AD's and DA's are in excess of 1msec.

SO INSTEAD OF TRYING TO FORCE SIGMA DELTA ARCHITECTURE TO GO FASTER AND FASTER, MAKING BOTH THE NOISE SHAPING TRADEOFF AND THE SPEED-ACCURACY TRADEOFF HARDER AND HARDER, ONE WOULD BE WISER TO ACCEPT THAT LINEAR PHASE SIGMA DELTA CONVERTERS ARE NOT THE SOLUTION FOR CASES WHERE LOW LATENCY IS SO DESIRED.

In fact, Crystal Semiconductor (Cirrus), one of the two top makers of AD IC's, introduced a whole line of IIR based AD's and DA's, where they trade off phase linearity for lower latency.

THERE ARE THREE SOLUTIONS FOR LATENCY:
Use analog
Use a more suitable converter architecture
Use sigma delta with IIR filters

And of course, remember that when you use additional DSP hardware on a channel before the mix, it does accumulate delay…

TO SUMMARIZE: You are trying to fix the delay due to sigma delta architecture by doubling, doubling again, and doubling one more time, stretching that design concept, that architecture, to a ridiculous point, and with it having the whole audio industry move in the wrong direction. I do not want the cart to lead the horse, and I am approaching it from an overall engineering perspective.

Your suggestion to go for 384KHz to reduce latency is just another in the increasing list of incorrect “reasons” for making sample rates go faster and faster.

THESE 6 REASONS LISTED BELOW HAVE BEEN USED AS ARGUMENTS FOR 192 OR HIGHER. I HAVE DEBUNKED THEM ALL:

1. First there was the argument regarding the analog anti-aliasing filter. Of course it fell apart with today's front ends, all operating with some over sampling (AD's) or up sampling (DA's).

2. There was the old “more points is better” argument which simply contradicts Nyquist. I explained it in my paper.

3. There was the "impulse response width" argument, which goes against basic fundamentals (first year EE course). The bandwidth (ear, amps, mic, speaker – whichever is lowest) sets the limit on impulse width. Bandwidth and impulse width are one and the same.

4. There was "I like the sound". I never argue likes and dislikes. I do point out that anything you hear is probably under 20KHz (oh well, George, I conceded to 40KHz just in case), thus we do not need to go above 88.2-96KHz (Nyquist theory).

5. There was that unsubstantiated spin about gradual filters. To my knowledge, no one heard a thing other than some badly designed 44.1KHz decimation filters. Certainly I did not find a single ABX test with a properly designed filter at say 88.2-96KHz.

6. So now you advocate 384KHz to reduce latency…

I don’t know whether to laugh or cry.
Oy

Dan Lavry




blairl

Re: Latency of Lavry Blue AD/DA
« Reply #4 on: October 12, 2004, 12:38:04 AM »

One of the major concerns on the thread in George's forum is that of monitoring during the tracking and overdub process.  Let's take a vocalist for example that is overdubbing to prerecorded tracks.  Since a vocalist can hear their own voice in their head, some complain that the delay of microphone to A/D - D/A conversion and back through the headphones is unnatural and irritating.  Moving the microphone closer still won't compensate fully since the voice is already present in the head.  Some vocalists reportedly have a hard time adapting to hearing their voice delayed.  At 44.1 and 48K the predominant DAW interface has a latency of approximately 2ms A/D - D/A.  

Have you done any studies on how latency affects monitoring in the recording process?  Do you know of any limits before latency starts to affect the performance?

You mentioned alternate converter chips for lower latency.  Would using these chips degrade the sound quality in favor of lower latency?

Immanuel Kuhrt

Re: Latency of Lavry Blue AD/DA
« Reply #5 on: October 12, 2004, 10:34:25 AM »

Thank you for clearing away the irrelevant arguments, Dan. I myself was very skeptical about 2us being a problem, as it corresponds to about 0.068cm in metric scale. So I am glad you could put into words where the problem was: fruit salad.

Why not just split the signal from the preamp? One part goes into the AD, and the other goes into a small mixer. The small mixer is also fed by the DA. The performer gets his/her own voice in time, and all you have to do afterwards is latency compensation. If you use Samplitude, you can even set it to do this automatically.

I am no pro. I just record my own stuff at home. So my personal solution is to use badly insulated cans. That way I do not need to hear anything but the background track, and I can accept the bleed.

Did you people ever get complaints from guitarists hearing their sound too early? If you close-mic a guitar (acoustic or amp - doesn't matter) and then operate purely in analog, the guitarist will get the sound earlier than usual. I have no experience with such problems, and I know it is pretty controversial, but maybe sometimes the question is: how much latency is needed? You tell me :)
Disclaimer - I ain't no pro

danlavry

Re: Latency of Lavry Blue AD/DA
« Reply #6 on: October 12, 2004, 08:11:58 PM »

blairl wrote on Tue, 12 October 2004 05:38

One of the major concerns on the thread in George's forum is that of monitoring during the tracking and overdub process.  Let's take a vocalist for example that is overdubbing to prerecorded tracks.  Since a vocalist can hear their own voice in their head, some complain that the delay of microphone to A/D - D/A conversion and back through the headphones is unnatural and irritating.  Moving the microphone closer still won't compensate fully since the voice is already present in the head.  Some vocalists reportedly have a hard time adapting to hearing their voice delayed.  At 44.1 and 48K the predominant DAW interface has a latency of approximately 2ms A/D - D/A.  

Have you done any studies on how latency affects monitoring in the recording process?  Do you know of any limits before latency starts to affect the performance?

You mentioned alternate converter chips for lower latency.  Would using these chips degrade the sound quality in favor of lower latency?


“Since a vocalist can hear their own voice in their head… some complain that the delay of microphone to A/D - D/A conversion and back through the headphones is unnatural and irritating.”

I understood that, and that is what we call latency.

“Moving the microphone closer still won't compensate fully since the voice is already present in the head.”

What do you mean when you say "won't compensate fully"? There is going to be a delay due to many factors. Some factors contribute little (add little delay) and others add a lot of delay. Let's look at the proposed desired latency of 500usec. Let us assume that the mic itself contributes zero, the speaker (or headphone) contributes zero, and the electronics in series contribute zero. Then there is NO WAY you can get 500usec latency when the distance between mic and vocal cords is much greater than 6 inches!

So I assume the comment regarding 500uSec was for the electrical portion only (AD and DA). But think about it - an electrical path of 500uSec plus 6 inches of acoustic distance is still very restrictive.

I am not saying that one cannot hear tracks moved relative to each other by 500uSec or by 1usec. I am not talking about what one can or cannot hear. What I am suggesting is that we all may have to learn to live within some physical limitations, and "mentally adjust" for more delay. The recording engineer can later slide tracks to make it the way they want it to be.

“Some vocalists reportedly have a hard time adapting to hearing their voice delayed. At 44.1 and 48K the predominant DAW interface has a latency of approximately 2ms A/D - D/A.”

So with 2msec of AD and DA, you add the acoustic delay (1msec for 1 foot, 2msec for 2 feet…) and there you are at some latency. If it is too much, you cut the delay down, but do it wisely, not by trying to convert a car into a jet. Sigma delta technology is a "car" (when talking about latency). Another architecture is a "jet" for latency. Proposing 384KHz to reduce latency is an attempt to convert all the cars to jets. It is the difficult way to do things, and everyone has to buy an expensive jet. No more cars. Jets are difficult to park, they take a lot of fuel…

”Have you done any studies on how latency affects monitoring in the recording process? Do you know of any limits before latency starts to affect the performance?”

My first message on the forum (see comments) states that we will not deal with ear-brain issues. I would rather stay in my domain.

“You mentioned alternate converter chips for lower latency. Would using these chips degrade the sound quality in favor of lower latency?”
Clearly, latency is not first on the list of what is important in audio. If "latency" ruled, the IC makers would not have invested so much in the popular sigma delta architecture, which, in fact, takes time to convert. Most of the delay in an AD is due to a decimation circuit, and in a DA it is due to the up sampling circuit. These digital circuits, FIR type filters (finite impulse response), are used because they provide a property called "linear phase".

Recently, an IC maker (Cirrus, formerly Crystal Semiconductor) has introduced a number of IC's (both AD and DA) that utilize a different digital filter structure called IIR (infinite impulse response). The IIR filter is very fast (small delay), but it does not yield linear phase. It may be a great solution for the specialty type of application we are talking about. Other than the linear phase tradeoff, those IC's are very good performers.

Again, it is up to the ear people to tell us EE's how much deviation from linear phase is acceptable. Knowing parameters such as latency, phase linearity and more, we can make progress. At this point, latency oriented gear is a "specialty market".

Roughly speaking, with a 500usec "budget", analog seems like the wisest solution. Every 100usec is about an inch! With a 3-4msec "budget" you can do fine with what is on the market. If you are going to insist on digital in the range of say 500uSec to 2msec (electric plus acoustic delay), you open up a "specialty market". Such gear will be a bit more costly (less volume). Quality? Again, roughly speaking, you can have better than CD quality today at reasonable cost. Shooting from the hip (but with a lot of converter design experience), I would settle on 88.2-96KHz as the optimum rate, for what I think is a good compromise.

Again, with such a tight requirement for latency, I would consider analog the best solution.

BR
Dan Lavry  




mantovibe

Re: Latency of Lavry Blue AD/DA
« Reply #7 on: October 12, 2004, 10:32:30 PM »

Many thanks for answering and for the further discussion.

Jules

Re: Latency of Lavry Blue AD/DA
« Reply #8 on: October 15, 2004, 09:53:50 PM »

"My first message on the forum (see comments) states that we will not deal with ear brain issues. I rather stay in my domain. "

It's just that in this day and age of digital latency, recording engineers are wary of the effect of latency on recording, especially on OVERDUBBING performers. Some of us recording engineers nickname it "micro timing". We are a paranoid bunch (it comes in handy for the job), and I am in the group that worries about this micro timing area.

But not to labour the point, here are some questions if you please..

Does upsampling cause EXTRA latency?

Say - program recorded at 44.1 but D/A upsampled to 96k or higher

Do you agree with upsampling as a D/A "quality assistance"?

or is there some 'trade off'? (if so what is it?)

I realise my questions might be phrased poorly, I hope the meaning is there.

Thanks in advance

bobkatz

Re: Latency of Lavry Blue AD/DA
« Reply #9 on: October 15, 2004, 11:40:35 PM »

Jules wrote on Fri, 15 October 2004 21:53

But not to labour the point, here are some questions if you please..

Does upsampling cause EXTRA latency?

Absolutely it does, all other things being equal: if you ADD any additional cycles to a given set of instructions, it increases the latency.

Quote:

Say - program recorded at 44.1 but D/A upsampled to 96k or higher

Well, almost every D/A manufactured today includes an "upsampler" or an "oversampler" (the differences are mostly semantic), so you won't find one that doesn't do it. And their total latency is typically less than a couple of milliseconds, often much less in a good, fast internal design. The latency is almost always less at the higher sample rates as well.
"There are two kinds of fools:
One says - this is old and therefore good.
The other says - this is new and therefore better."

No trees were killed in the sending of this message. However a large number of
electrons were terribly inconvenienced.

danlavry

Re: Latency of Lavry Blue AD/DA
« Reply #10 on: October 16, 2004, 11:36:44 PM »

“Does upsampling cause EXTRA latency?”

As Bob answered, Yes, up sampling adds latency.

Here is some additional detail:

We do need to up sample, and the reason is: to make it possible to provide an analog filter that will:
A.   Yield good flat audio band response
B.   Yield good rejection of image energy (error signal)
C.   Have good phase characteristics.

The process of up sampling is in fact about moving the Nyquist frequency higher. The audio band stays where it was, the Nyquist moves up, so it is easier to filter. With more and more up sampling, the audio and image energy get further and further apart, so they are "easier to separate" with a lesser filter.

Back to latency:

1.   Analog filters add a bit of delay. True, by raising the up sampling ratio you can reduce the delay through the analog filter: higher-frequency analog LPF's (low pass filters) introduce less delay.
2.   The computational process of up sampling adds delay to the overall process.

The question is then, what is the net outcome (in terms of latency)?

There are 2 fundamental methods to do the computational process of up sampling:

1. With linear phase filters (FIR’s) and
2. With non linear phase filtering (IIR)

Clearly, the FIR method will add a lot more delay than the IIR method. In both cases, the higher your over sampling, the more delay!
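The FIR half of that statement is easy to quantify: a symmetric (linear-phase) N-tap FIR delays the signal by exactly (N-1)/2 samples, whatever its coefficients. A minimal sketch (the 128-tap count is illustrative, not any particular converter's figure):

```python
def fir_latency_us(num_taps, fs):
    # A linear-phase (symmetric) FIR delays by (num_taps - 1) / 2 samples.
    return (num_taps - 1) / 2 / fs * 1e6

# The same 128-tap filter, clocked at different sample rates:
for fs in (44_100, 96_000, 192_000):
    print(f"{fs:>7} Hz: {fir_latency_us(128, fs):7.1f} us")  # 44.1k -> ~1440 us
```

Note the delay scales with tap count and inversely with clock rate, which is why long linear-phase decimators dominate sigma-delta converter latency.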

There is a potential confusion here: It is true that if you are STARTING with say 96KHz data, the DA will have less delay than if you are STARTING with 44.1KHz data. But YOUR QUESTION IS ABOUT UP SAMPLING.

So from the up sampling point of view, it is NOT true that the faster rate device has lower latency. To get to a faster rate, thus doing up sampling, you need to go through a whole computational process, and that takes time – more latency.
One has to be careful to make the distinction between the 2 cases. Again:

Faster rate devices have less latency WHEN YOU FEED THEM HIGH RATE DATA.

Faster rate devices have equal or higher latency (usually much higher) WHEN YOU FEED THEM LOWER RATE DATA, TO BE UP SAMPLED.

Up sampling with a good FIR takes a lot of time. With an IIR, whatever you gain due to a faster analog filter will be lost to the processing…

“Say - program recorded at 44.1 but D/A upsampled to 96k or higher. Do you agree with upsampling as a D/A "quality assistance"? or is there some 'trade off'? (if so what is it?)”

Up sampling AT the D/A? Yes, it is very important. It is virtually impossible to make an analog filter that will pass the audio and block the unwanted image energy without some up sampling at 44.1KHz. The audio and image energy are almost on top of each other, and separating them (filtering) is extremely tough, costly, and would require sending the signal through so much circuitry that the distortion would become a major factor. Also, there is no way to get sufficiently good phase characteristics with no up sampling.

By how much to up sample? That is a trade off, faster makes for easier and cheaper analog. But too fast makes for some drawbacks both in terms of required DA accuracy and in the digital computation area.

Also:

One should not be confused by marketing hype such as a CD player stating, say, X8 up sampling with a 20 bit D/A (or even 24 bits). The CD data itself is only 16 bits, so can the music be 20 or 24 bits? Of course not. The marketing guys got it backwards: we don't get more bits, we need more bits just to keep the same accuracy.

In order to take advantage of the concept of up sampling 16 bit data, we need a DA that is a lot more accurate than 16 bits. In fact, a DA with an infinite number of bits would be great. Say you want to up sample by X2. You need to insert a computed value between each pair of the original 16 bit data samples. The "new" computed sample values may not fall on the same 16 bit grid as the original samples. The new samples can be "anywhere", and require a very high resolution DA to yield the proper "new" computed sample voltage levels. Yet the original samples are still where they are, at 16 bit accuracy. The original samples are analogous to markers in a geographical survey: if the reference points are off by say 1 foot, nothing else will be more accurate than 1 foot…
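The "new samples fall off the 16-bit grid" point can be seen with a toy X2 interpolation. Linear interpolation stands in here for a real polyphase filter, so this is illustrative only:

```python
import numpy as np

fs = 44_100
n = np.arange(8)
x16 = np.round(32767 * np.sin(2 * np.pi * 1000 * n / fs))  # 16-bit integer samples

# X2 upsampling inserts a computed value between each original pair;
# even crude linear interpolation produces midpoints off the 16-bit grid.
mid = (x16[:-1] + x16[1:]) / 2
print(mid % 1)  # nonzero fractions: these levels need finer-than-16-bit DAC steps
```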

So the answer is: up sampling moves the Nyquist up. It does help with the analog filter issues and even lowers the noise floor a bit (spreading the noise over a wider range, because the zero-to-Nyquist span is extended). Up sampling does not yield more bits of signal accuracy, no matter how many false and misleading claims one reads…

BR
Dan Lavry

Andy Peters

Re: Latency of Lavry Blue AD/DA
« Reply #11 on: October 18, 2004, 03:42:43 PM »

danlavry wrote on Mon, 11 October 2004 17:09

And of course, at 384 KHz, any DSP work, especially at low audio frequencies takes a lot more DSP power


Very true; double your sample rate and you double your storage requirements and you halve the time available to process your filters for each sample time.

Quote:

including more processing bits (wider words).


How is this so?

--a
"On the Internet, nobody can hear you mix a band."

Bob Olhsson

Re: Latency of Lavry Blue AD/DA
« Reply #12 on: October 18, 2004, 06:50:31 PM »

Andy Peters wrote on Mon, 18 October 2004 14:42

danlavry wrote on Mon, 11 October 2004 17:09

..

Quote:

including more processing bits (wider words).


How is this so?


Download and read "Breaking the Sound Barrier: Mastering at 96 kHz and Beyond" at http://www.jamminpower.com/main/articles.jsp

danlavry

Re: Latency of Lavry Blue AD/DA
« Reply #13 on: October 18, 2004, 07:45:35 PM »

Andy Peters wrote on Mon, 18 October 2004 20:42

danlavry wrote on Mon, 11 October 2004 17:09

And of course, at 384 KHz, any DSP work, especially at low audio frequencies takes a lot more DSP power


Very true; double your sample rate and you double your storage requirements and you halve the time available to process your filters for each sample time.

Quote:

including more processing bits (wider words).


How is this so?

--a


It is much worse than just having 1/4 of the time to do the same computation. I am not sure I can explain it all very quickly, but I’ll do my best with a single example:

Say you design a DSP low pass filter, say an FIR type at 1KHz, for a 44.1KHz system. The compute engine "does not know" it is 1KHz; the compute engine "thinks of it" as 1KHz out of 44.1KHz. If you ran the same filter at a sampling rate of 88.2KHz, the filter point would move to 2KHz. At 176.4KHz the bandwidth is 4KHz. The ratio of cutoff to sample rate is 1 to 44.1 at all of the rates.

So if I wanted to design a 1KHz low pass for say 176.4KHz sampling, the coefficients must be different from those used for 1KHz at 44.1KHz. In fact, the coefficients for 1KHz at 176.4KHz are the same as the coefficients for 250Hz at 44.1KHz.

Here is an example of a 201 coefficient LPF with -6dB at 1KHz. The requirement is to get to -100dB attenuation at 1KHz * 1.77 = 1.77KHz. Here is the outcome:
At 0Hz, 0dB
At 500Hz, -0.1dB, a loss of 0.1dB half way to cutoff
At 1KHz, -6dB
At 1.77KHz, -100dB, so 1.77KHz/1KHz = 1.77

Try the same filter with 201 coefficients for 250Hz (same as 1KHz at 176.4KHz sampling). We get:
At 0Hz, -2.88dB – this is already screwed up
At 125Hz, -3.69dB, a loss of 0.81dB half way to cutoff – not very flat
At 250Hz, -6dB
At 1.02KHz, -100dB, so 1.02KHz/250Hz = 4.08 – poor transition range
 
Try the same but with 401 coefficients for 250Hz (same as 1KHz at 176.4KHz sampling). We get:
At 0Hz, 0dB – this is fixed by the additional coefficients
At 125Hz, -1.29dB, a loss of 1.29dB half way to cutoff – worse than before
At 250Hz, -6dB
At 640Hz, -100dB, so 640Hz/250Hz = 2.56 – better but not good enough

Try the same but with 801 coefficients for 250Hz (same as 1KHz at 176.4KHz sampling). We get:
At 0Hz, 0dB – this is fixed by the additional coefficients
At 125Hz, -0.1dB, a loss of 0.1dB – same as 1KHz at 44.1KHz
At 250Hz, -6dB
At 445Hz, -100dB, so 445Hz/250Hz = 1.77 – OK

So in this example, you see that we had to go to 4 times the coefficients to get the same behavior (and believe me, it gets more dramatic when you increase the ratio between sampling rate and cutoff). So you now have 4 times as many coefficients and 1/4 of the time for the computation – a factor of 16 in difficulty! And of course, with 4 times the arithmetic operations (each one limited by some word length), the accumulated error is larger. Alternatively, to keep the same performance, you need a smaller error per computation, thus more word length.
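The trend in this example can be reproduced with a plain windowed-sinc design. This is a sketch, not Dan's actual filter: the exact dB figures depend on the window (a Kaiser window is assumed here), so treat the numbers as illustrating the trend, not matching his table:

```python
import numpy as np

def lowpass(num_taps, fc, fs, beta=10.0):
    """Kaiser-windowed-sinc lowpass; fc is the -6 dB point in Hz."""
    n = np.arange(num_taps) - (num_taps - 1) / 2
    h = 2 * fc / fs * np.sinc(2 * fc / fs * n) * np.kaiser(num_taps, beta)
    return h / h.sum()  # normalize DC gain to 0 dB

def atten_db(h, f, fs):
    """Magnitude response in dB at frequency f, by direct evaluation."""
    n = np.arange(len(h))
    return 20 * np.log10(abs(np.sum(h * np.exp(-2j * np.pi * f * n / fs))))

h_44k = lowpass(201, 1_000, 44_100)      # 201 taps, 1 kHz cutoff at 44.1 kHz
h_176 = lowpass(201, 1_000, 176_400)     # same tap count, 4x the sample rate
h_176_4x = lowpass(801, 1_000, 176_400)  # ~4x the taps to compensate

for h, fs in ((h_44k, 44_100), (h_176, 176_400), (h_176_4x, 176_400)):
    print(f"{len(h)} taps at {fs} Hz: {atten_db(h, 1_770, fs):7.1f} dB at 1.77 kHz")
```

The middle case collapses (the transition band, in Hz, widens by the rate ratio), and quadrupling the taps restores it: the same "4x taps at 4x the rate" conclusion as above.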

And that was only one example. The issues with IIR design are a lot more involved.

BR
Dan Lavry


Andy Peters

Re: Latency of Lavry Blue AD/DA
« Reply #14 on: November 07, 2004, 06:19:26 PM »

danlavry wrote on Mon, 18 October 2004 16:45

Andy Peters wrote on Mon, 18 October 2004 20:42

danlavry wrote on Mon, 11 October 2004 17:09

And of course, at 384 KHz, any DSP work, especially at low audio frequencies takes a lot more DSP power


Very true; double your sample rate and you double your storage requirements and you halve the time available to process your filters for each sample time.

Quote:

including more processing bits (wider words).


How is this so?

--a


It is much worse than just having 1/4 of the time to do the same computation. I am not sure I can explain it all very quickly, but I’ll do my best with a single example:


{snip example}

Quote:

So in this example, you see that we had to go for 4 times the coefficients to get the same behavior


I understand that.

Quote:

So you now have 4 times as many coefficients and 1/4 of the time for the computation – a factor of 16 in difficulty!


Also understood.

Quote:

And of course, with 4 times the arithmetic operations (each one limited by some word length), the accumulated error is larger. Alternatively, to keep the same performance, you need a smaller error per computation, thus more word length.


... but four times the number of operations means one needs an accumulator that has only two more bits.

-a

danlavry

Re: Latency of Lavry Blue AD/DA
« Reply #15 on: November 07, 2004, 08:54:35 PM »

I think I demonstrated that it costs you more in processing power. I did not say that you need twice the word length. Even 2 more bits can be undesirable. A lot of people are arguing about the difference between floating point with a 24 bit mantissa and 8 bit exponent vs. a 26 bit mantissa with a 6 bit exponent.

The point is: going faster costs a lot more processing. Check around and see what happens to a system operating at say 48KHz, 96KHz, 192KHz. Often, even with expensive accelerator cards, you lose as much as 1/2 the channels at 192KHz.

My point was: it is not the same computation done faster. It is a lot more computation to start with; then, on top of that, it needs to be faster, and it also comes with some penalty in word length. There are better examples for word length; mine was just a quick one to demonstrate a point. I believe it did.

BR
Dan Lavry

Andy Peters

Re: Latency of Lavry Blue AD/DA
« Reply #16 on: November 08, 2004, 01:07:45 PM »

danlavry wrote on Sun, 07 November 2004 18:54

I did not say that you need twice the word length. Even 2 more bits can be undesirable. A lot of people are arguing about the difference between floating point with a 24 bit mantissa and 8 bit exponent vs. a 26 bit mantissa with a 6 bit exponent.


Understood, although I think the memory and processing-time requirements are more limiting than accumulator widths, as one can use any arbitrary width.  Of course, exceeding what's available in hardware is doable but ugly in terms of processing time and data access.  I mean, an 8-bit 8051 is perfectly capable of doing 32-bit floating-point arithmetic if you wanna wait that long.

Quote:

The point is: going faster costs a lot more processing.


I agree with you!  That's basically what I said in my 18 Oct post: double the sampling frequency and you double the storage requirements and halve the time available to do the processing for each sample.

But that was of course oversimplifed because as you correctly point out, in order to achieve the same filter response at the higher sampling frequency, you need more taps.

I guess all I was really asking is why you need wider words when moving to higher sampling frequencies.  Now I understand what you were saying.

-a

(Edit: fix typos and add conclusion!)