R/E/P Community


Author Topic: Unfiltered decimation?  (Read 7662 times)

Terry Demol

  • Full Member
  • ***
  • Offline
  • Posts: 103
Unfiltered decimation?
« on: July 14, 2007, 11:37:05 PM »

Bruno Putzeys wrote on Thu, 12 July 2007 23:32

A converter isn't supposed to be musical. It's supposed to convert.

If recordings made through old converters sound good, that might be more because in those days they couldn't mess about with the digital signal afterwards like they can now. It's not only the converters that changed since then. It's the way we're producing, recording, mastering, distributing and consuming music that has changed.


Hi Bruno,

This may deserve a thread split -

What is your take on the Tony Faulkner downsampling method?
(averaging every 4 samples of 176.4 to get 44.1 sans filtering).

Thanks,

Terry

Logged

Rivendell61

  • Newbie
  • *
  • Offline
  • Posts: 45
Re: Do you use an Analog Summing Amplifier type NEVE 8816??
« Reply #1 on: July 15, 2007, 12:55:58 AM »

Terry Demol wrote on Sat, 14 July 2007 23:37


What is your take on the Tony Faulkner downsampling method?
(averaging every 4 samples of 176.4 to get 44.1 sans filtering).



I've wondered about this too.
Link to some background info on Tony Faulkner's method:
http://www.stereophile.com/features/104law/index1.html

Mark
Logged

KSTR

  • Newbie
  • *
  • Offline
  • Posts: 24
Re: Do you use an Analog Summing Amplifier type NEVE 8816??
« Reply #2 on: July 15, 2007, 10:33:04 AM »

I think the article already gives the answer:
"It will never work with source material having energetic high-frequency content, rock cymbals, for example—the aliasing would be unacceptable—but for a range of other musical forms, it could be just the ticket."

Only when there is very little content above fs/2 (of the target fs) will the simple decimation work with good results.

Klaus
Logged

bruno putzeys

  • Hero Member
  • *****
  • Offline
  • Posts: 1078
Re: Do you use an Analog Summing Amplifier type NEVE 8816??
« Reply #3 on: July 16, 2007, 04:44:00 AM »

KSTR wrote on Sun, 15 July 2007 16:33

Only when there is very little content above fs/2 (of the target fs) will the simple decimation work with good results.


And when there is so little content above fs/2, properly filtering it would not appreciably alter the signal.

The beauty of the sampling theorem is that once the Nyquist criterion is met the impulse response of the system is time-invariant. The relative timing of events in the input signal compared to the sampling time simply has no impact.
When you feed a square wave into a correctly filtered AD/DA set (most converters are less than perfect but still quite good) you'll find that the zero-crossings and wave-shape are perfectly maintained, whatever the signal's relative timing to the actual sampling interval may be. If you don't filter the AD at all, the zero crossings will be quantized to the sample times. So for a square wave you get something amounting to jitter equal to the sampling period!!! Ouch...
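A minimal sketch of that zero-crossing point (NumPy assumed, frequencies and offsets picked arbitrarily; illustrative only):

import numpy as np

fs = 48000.0                 # sample rate
f0 = 997.0                   # square wave, not synchronised to fs
n = np.arange(200)

def edge_index(delay_in_samples):
    # the "analogue" square wave, delayed by d samples (possibly fractional),
    # sampled with no filtering at all
    t = (n - delay_in_samples) / fs
    x = np.sign(np.sin(2 * np.pi * f0 * t))
    # index of the first falling edge found in the sampled data
    return int(np.where(np.diff(x) < 0)[0][0])

for d in np.arange(0.0, 2.01, 0.25):
    print(f"input delay {d:4.2f} samples -> edge at sample {edge_index(d)}")

With no filtering, the printed edge position sits still and then jumps by a whole sample as the delay sweeps: the edge timing is quantised to the sample grid, which is the "jitter equal to the sampling period" described above.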

Even the tiny little bit of aliasing left by standard half-band filters (20.0kHz to 24.1kHz) that you can only just tease out visibly using the square-wave test is enough to make sibilants and breathing in A/B miked material spread out across the whole stereo image (an effect we've almost come to attribute to the miking method).

Logged
Warp Drive. Tractor Beam. Room Correction. Whatever.

Affiliations: Hypex, Grimm Audio.

Jon Hodgson

  • Hero Member
  • *****
  • Offline
  • Posts: 1854
Re: Do you use an Analog Summing Amplifier type NEVE 8816??
« Reply #4 on: July 19, 2007, 04:20:17 AM »

This is a really horrible technique: the effect of the difference between two samples will depend on whether it occurs within a group of four samples or between two groups. In other words, delay your input by one sample and what comes out of the downconversion could sound notably different in places... that's just nuts.

Could be an interesting effect to play with though, along with bit crushers and distortion units; nothing wrong with a bit of unpredictability when you're shaping sounds... but as a mastering tool?

No thanks.
Logged

Graham Jordan

  • Jr. Member
  • **
  • Offline
  • Posts: 63
Re: Do you use an Analog Summing Amplifier type NEVE 8816??
« Reply #5 on: July 19, 2007, 03:44:28 PM »

Jon Hodgson wrote on Thu, 19 July 2007 01:20

This is a really horrible technique: the effect of the difference between two samples will depend on whether it occurs within a group of four samples or between two groups. In other words, delay your input by one sample and what comes out of the downconversion could sound notably different in places... that's just nuts.


I'm in agreement that it's not a great technique, but, as has been pointed out, if the spectral content above fs/8 is already minimal, then great filtering is not needed.
As for the filtering used, it is perfectly valid (technically), and well known to those of us doing DSP. It's simply a moving average (pretty much the simplest low-pass filter), where the window size is 4 samples. Then when you decimate by an integer amount, 4 in this case, you throw away the unused samples. Since this is a very simple case of an FIR filter, you only use input samples to calculate output samples, hence you never even need to calculate the values you're going to throw away. So taking every group of four samples and averaging them to produce a decimated output stream is a perfect implementation of the given FIR low-pass filter: y(n) = 0.25*x(n) + 0.25*x(n-1) + 0.25*x(n-2) + 0.25*x(n-3).
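A minimal sketch of that averaging decimator (NumPy assumed; illustrative only, not Tony Faulkner's actual code):

import numpy as np

def average_decimate_by_4(x):
    # the 4-tap moving average above, evaluated only at every fourth input
    # sample, i.e. the mean of non-overlapping groups of four samples
    x = np.asarray(x, dtype=float)
    x = x[: len(x) - len(x) % 4]           # trim to a whole number of groups
    return x.reshape(-1, 4).mean(axis=1)   # y[k] = (x[4k] + ... + x[4k+3]) / 4

# hypothetical usage: y_44k1 = average_decimate_by_4(x_176k4)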
In direct relation to your comments, how can an input sample be 'in between' groups of four? Every input sample is in a group of four.
As for delaying the input stream by one sample, yes the output samples will be different, BUT the reconstructed waveform should be the same (I think). The point is, even with 'PERFECT' decimation, delaying the input stream by one sample ALWAYS creates different output sample values. It has to! It's like delaying the input to your A/D by a fraction of a sample period: you get different sample values, but the same reconstructed waveform.

The interesting point about this method, as talked about on the mentioned website, is that it raises the question of the trade-off between increased noise/bad sound due to aliasing and increased 'bad sound' due to the 'pre-ringing' of steep linear-phase filters. I think there are articles somewhere about other, even better ways of filtering, using minimum-phase designs, where phase is better preserved than with simple filters but which do not have so much of the pre-ringing.

I'm not sure whether this is a novel way of doing down-sampling. However, you'll not see it around in general, as it makes assumptions about the spectral content of the audio (which the website also notes - e.g. don't do cymbals!), assumptions which you generally cannot make.

Graham
Logged

bruno putzeys

  • Hero Member
  • *****
  • Offline
  • Posts: 1078
Re: Do you use an Analog Summing Amplifier type NEVE 8816??
« Reply #6 on: July 19, 2007, 04:08:02 PM »

Graham Jordan wrote on Thu, 19 July 2007 21:44

As for delaying the input stream by one sample, yes the output samples will be different, BUT the reconstructed waveform should be the same (I think). The point is, even with 'PERFECT' decimation, delaying the input stream by one sample ALWAYS creates different output sample values. It has to! It's like delaying the input to your A/D by a fraction of a sample period: you get different sample values, but the same reconstructed waveform.

Jon's point is that fractionally delaying the input signal would produce a different reconstructed signal. Only that matters. With ideal decimation & upsampling, the reconstructed signal is not affected by fractional delays (apart from being fractionally delayed as well). Less than perfect filtering on either end will cause the reconstructed signal to become dependent on the relative timing between the sampling and the signal.

Graham Jordan wrote on Thu, 19 July 2007 21:44

if the spectral content above fs/8 is already minimal, then great filtering is not needed.


If you have no >fsout/2 content, applying a super-ultra-mega-sharp lowpass filter would not do anything, either for the bad or for the good. It simply would not affect the signal at all. When you do have >fsout/2 content, the minimalist filter isn't good, as attested by those commenting on cymbals. So under what conditions is eschewing sharp filtering actually supposed to improve matters?

Graham Jordan wrote on Thu, 19 July 2007 21:44

The interesting point about this method, as talked about on the mentioned website, is that it raises the question of the trade-off between increased noise/bad sound due to aliasing and increased 'bad sound' due to the 'pre-ringing' of steep linear-phase filters. I think there are articles somewhere about other, even better ways of filtering, using minimum-phase designs, where phase is better preserved than with simple filters but which do not have so much of the pre-ringing.


All this pre-ringing stuff is highly conjectural. As pointed out in another thread, pre-ringing is audible only when the cut-off frequency is actually in the audible band or possibly right at the edge (and even then). Historically, audibility of pre-ringing is simply an ad-hoc hypothesis formulated to link the most salient feature of digital audio (the sinc function) to audible deficiencies present in many digital chains.

The most unfortunate intellectual deadlock anyone can get stuck in is becoming more attached to one's explanation of an observed phenomenon than to the observation itself. It is also the most common one. When an alternative hypothesis is proposed and shown to have better predictive and explanatory power, people will feel their observation is being denied, even though it is actually being confirmed, explained and a solution implied. Stubbornly holding on to an unproven or disproven hypothesis guarantees that the problem will stay. A particularly painful case is the ongoing insistence by some parents of autistic children that the condition is triggered by the MMR jab. In spite of overwhelming evidence to the contrary they keep fighting on that front without even considering alternative explanations. If certain types of autism are indeed acquired during infancy, this distraction of attention will only ensure it keeps happening.

That particular case highlights the mechanism behind this type of deadlock. Usually the proffered explanation is simple and clear-cut whereas the more realistic scenario involves a large number of factors, none of which even remotely classifies as "the" cause. Animal brains (and by extension ours) have been evolutionarily honed to associate single salient causes with observed phenomena. The proverbial rabbit in the green had better believe the rustling in the grass to be caused by something bigger and hairier, or it will be eliminated from the gene pool straight away (read more: http://www.csicop.org/si/9505/belief.html ).

People who latch on to pre-ringing as the only explanation for "digital sound" are simply trying to avoid predation.
Logged
Warp Drive. Tractor Beam. Room Correction. Whatever.

Affiliations: Hypex, Grimm Audio.

Graham Jordan

  • Jr. Member
  • **
  • Offline
  • Posts: 63
Re: Unfiltered decimation?
« Reply #7 on: July 19, 2007, 05:51:22 PM »

There seem to be two issues here:
1. Is the algorithm a correct implementation of a low-pass filter (albeit not a good one - a very simple FIR) followed by decimation? Followed by: does the output sampling phase of the filtered signal affect the fully reconstructed signal?
I think this is a purely technical issue, and the one I am interested in primarily.
2. How does it 'compare'? Highly subjective, and I'm not going to get into that.

Bruno Putzeys wrote on Thu, 19 July 2007 13:08

Nobody cares about the sample values. With perfect decimation & upsampling, the reconstructed signal is not affected by fractional delays (apart from being fractionally delayed as well).

No arguments there. [In fact I wish more people in general would quit looking at sample values - especially in digital editors without proper reconstruction Smile]

Bruno Putzeys wrote on Thu, 19 July 2007 13:08

 Jon's point is that fractionally delaying the input signal would produce a different reconstructed signal.
Quote:

Less than perfect filtering on either end will cause the reconstructed signal to become dependent on the relative timing between the sampling and the signal.

I have to question this. I've not seen this assertion before. Why does a single-sample delay of the input signal cause a different reconstructed signal, even if the filtering is not perfect? What it seems to me you're saying is that the phase of the sampling signal relative to the input signal affects the reconstructed signal. Note that neither of us is saying that the reconstructed signal matches the input signal (given we are assuming out-of-band signals are present). Note that the only filtering we're talking about here is the 'averaging' filter before decimation - the reconstruction filter/DA conversion doesn't come into it (although all this could apply to it also). My technical reasoning says it shouldn't (but I could be wrong)...

A simple time delay implies no effect on the frequency domain (except a simple linear phase change - pure time delay).
The filter is LTI - so only magnitudes of frequency are adjusted (although for this argument there could be no filtering). If this (not-yet decimated) signal were reconstructed, it would be the same for an input delay, but with matching output delay.
Now when you resample the signal at a lower rate, any >fs/2 content wraps back from fs/2 down to 0, then back up to fs/2, etc. as the input frequency increases. As we all know.
I contend that the phase of the resampling has no effect on the frequency domain, and hence no effect on the time domain, hence no difference in reconstructed waveform (full of wrapped-back >fs/2 content).

Or looking at it from another direction... 'out-of-band' signals are no different from 'in-band' signals in regard to reconstruction, except that they get 'reconstructed' in the wrong band (and maybe 'flipped' in frequency). Take note of under-sampling as a method of A/D. Remember Nyquist talks about the bandwidth of a signal. In audio, for 48kHz sampling, we use (in the extreme) 0-24kHz. But we could perfectly well sample and reconstruct 48-72kHz (with sharp bandpass filters), or even shift it into the audio band by using 0-24kHz reconstruction - which is exactly what happens when you let 48-72kHz signals into the audio band (i.e. poor or no >24kHz 'input' filter rejection). This unwanted stuff adds linearly with the desired signal. The same happens with the 24-48kHz range, except that the frequency domain is flipped (i.e. 24-48kHz maps to 24-0kHz). Again this is all irrespective of the phase of the sampling signal, so how does its phase change the signal?
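A tiny sketch of that band mapping (plain Python, illustrative only); note that the sampling phase doesn't enter into it at all:

def aliased_frequency(f_in, fs):
    # sampling at fs cannot distinguish f_in from f_in shifted by multiples of fs
    f = f_in % fs
    return f if f <= fs / 2 else fs - f

fs = 48000
for f in (30000, 50000, 62000, 70000):
    print(f, "->", aliased_frequency(f, fs))
# 30 kHz (in the 24-48 kHz range) folds, flipped, to 18 kHz; 50, 62 and 70 kHz
# (in the 48-72 kHz range) land unflipped at 2, 14 and 22 kHz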

Or am I wildly missing something you and Jon are saying?

Bruno Putzeys wrote on Thu, 19 July 2007 13:08


If you have no >fsout/2 content, applying a super-ultra-mega-sharp lowpass filter would not do anything, either for the bad or for the good. It simply would not affect the signal at all. When you do have >fsout/2 content, the minimalist filter isn't good, as attested by anyone who heard it on material with significant HF.

Right. So maybe, given the material this method is being used with (very limited HF), there is little difference to 'proper' down-sampling? This guy says he's released lots of music that he's mastered using this method (only applicable to HF-deficient material). Maybe it's simply subjective 'desire' that makes it sound different? (That's a rhetorical question - I don't want to get into that myself)

Bruno Putzeys wrote on Thu, 19 July 2007 13:08


So under what conditions is eschewing sharp filtering actually supposed to improve matters?

I think that's the core of this thread perhaps? But again, not one for me.

Bruno Putzeys wrote on Thu, 19 July 2007 13:08


All this pre-ringing stuff is highly conjectural.

Understood. I'll leave that alone, to avoid muddying things here.

(I also changed the title of my post, so I don't get confused, again, into thinking I posted to the wrong thread)
Logged

bruno putzeys

  • Hero Member
  • *****
  • Offline
  • Posts: 1078
Re: Unfiltered decimation?
« Reply #8 on: July 19, 2007, 06:31:46 PM »

The reason the reconstructed waveform becomes sample-time dependent as soon as the decimation/reconstruction filter doesn't fully reject above-Nyquist components is that the phase of the mirrored/aliased signal components is not only determined by the phase of the original signal but also by the phase of the sampling clock. If it weren't, the reconstructed signal wouldn't change with sub-sample delays even if the decimation and upsampling filters were unity (i.e. no filtering at all).
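A small numerical sketch of that claim (NumPy assumed; the numbers are only chosen to give exact FFT bins): decimate a 30 kHz tone from 176.4 kHz to 44.1 kHz with no filtering at all, and watch the phase of its 14.1 kHz alias when the input is delayed by one input sample.

import numpy as np

fs_in, M = 176400, 4                    # input rate and decimation factor
fs_out = fs_in // M                     # 44100
f_tone = 30000.0                        # above the output Nyquist of 22050 Hz
N_out = 4410                            # 0.1 s of output -> exact 10 Hz bins
n_in = np.arange(N_out * M)

def alias_phase(delay_in_input_samples):
    t = (n_in - delay_in_input_samples) / fs_in   # delay the "analogue" tone
    x = np.cos(2 * np.pi * f_tone * t)
    y = x[::M]                                    # unfiltered decimation
    k = int(round((fs_out - f_tone) / 10.0))      # alias at 14100 Hz -> bin 1410
    return np.angle(np.fft.rfft(y)[k])

d_phi = np.degrees(alias_phase(1.0) - alias_phase(0.0))
print(d_phi)   # about +61 degrees: the alias phase follows the 30 kHz tone and the
               # sampling clock, whereas a genuine 14.1 kHz component delayed by the
               # same amount would move by only about -29 degrees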

The effect can be visualised using a function generator (set to square wave, not synchronised to the sampling rate) and any modern AD/DA box. You'd think the filters in those boxes would be competent enough, but it turns out that for input frequencies that have a harmonic in the transition band from 0.4535fs to 0.5465fs (half-band filters don't strictly adhere to the Nyquist criterion) the ringing on the reproduced square wave wobbles visibly.

Graham Jordan wrote on Thu, 19 July 2007 23:51

(I also changed the title of my post, so I don't get confused, again, into thinking I posted to the wrong thread)


Lazy moderator failed to rename the posts after a thread split Razz
Logged
Warp Drive. Tractor Beam. Room Correction. Whatever.

Affiliations: Hypex, Grimm Audio.

Graham Jordan

  • Jr. Member
  • **
  • Offline
  • Posts: 63
Re: Unfiltered decimation?
« Reply #9 on: July 19, 2007, 10:10:35 PM »

Interesting... looks like I have something to really look into: sampling theory for out-of-band signals (effectively sub-sampling theory), and sampling phase.

I am aware that there are issues with reconstruction of signals very close to fs/2, particularly with 1/2-band filters (the closer you get to fs/2, the closer the signal and its first alias become in level - but I'm not sure what the phases are doing).

However, I was also thinking more generally, and 'wider band'. Once you're out of the 'transition band', and at a fairly 'central' frequency (between fs/2 and fs, say), what happens to the reconstructed aliased signal's phase then, and how does it relate to the sampling phase? What about the details of sub-sampling (I haven't looked into this in any detail)? I know that's not audio (more a general communications application). I don't expect anyone to answer all this directly, but any pointers to technical references I could go to would be very welcome.

Thanks.
Logged

bruno putzeys

  • Hero Member
  • *****
  • Offline
  • Posts: 1078
Re: Unfiltered decimation?
« Reply #10 on: July 20, 2007, 02:56:02 AM »

The transition band is only a function of the filters you're using. Without the filter, all frequencies mirror like mad.

Sub-sampling works by extending the basic Nyquist/Shannon theorem. A sampling rate of fs suffices to capture and fully reconstruct any signal with a bandwidth from 0.5*n*fs to 0.5*(n+1)*fs. So there you need filtering too to constrain the signal to an fs/2 wide band, albeit one not starting at DC. Likewise, for reconstruction you need a similar bandpass filter to reconstruct the signal.
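A tiny sketch of that statement (NumPy assumed): a 60 kHz tone sits in the 48-72 kHz band (n = 2 at fs = 48 kHz) and, once sampled, gives exactly the same sample values as its 12 kHz image, so only the band-pass filters on either side distinguish the two.

import numpy as np

fs = 48000.0
n = np.arange(64)
hi = np.cos(2 * np.pi * 60000.0 * n / fs)   # tone in the 48-72 kHz band
lo = np.cos(2 * np.pi * 12000.0 * n / fs)   # its image in the 0-24 kHz band
print(np.allclose(hi, lo))                  # True: identical sample values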

Subsampling is used in audio, for instance, in subband coders.
Logged
Warp Drive. Tractor Beam. Room Correction. Whatever.

Affiliations: Hypex, Grimm Audio.

Jon Hodgson

  • Hero Member
  • *****
  • Offline
  • Posts: 1854
Re: Do you use an Analog Summing Amplifier type NEVE 8816??
« Reply #11 on: July 20, 2007, 05:32:08 AM »

Graham Jordan wrote on Thu, 19 July 2007 20:44

Jon Hodgson wrote on Thu, 19 July 2007 01:20

This is a really horrible technique: the effect of the difference between two samples will depend on whether it occurs within a group of four samples or between two groups. In other words, delay your input by one sample and what comes out of the downconversion could sound notably different in places... that's just nuts.


I'm in agreement that it's not a great technique, but, as has been pointed out, if the spectral content above fs/8 is already minimal, then great filtering is not needed.
As for the filtering used, it is perfectly valid (technically), and well known to those of us doing DSP. It's simply a moving average (pretty much the simplest low-pass filter), where the window size is 4 samples.


[Removed stuff I typed without quite thinking it through]

Ooops, talking bollocks for a moment there, should let the coffee work before I post in the mornings, hence the need to edit my post.

Yes, it is equivalent to a moving average followed by taking every fourth sample.

Still really crap though, for the reasons explained by Mr Putzeys
Logged

bruno putzeys

  • Hero Member
  • *****
  • Offline
  • Posts: 1078
Re: Do you use an Analog Summing Amplifier type NEVE 8816??
« Reply #12 on: July 20, 2007, 07:29:20 AM »

Jon Hodgson wrote on Fri, 20 July 2007 11:32

[Removed stuff I typed without quite thinking it through]

Good I didn't see that Twisted Evil
Logged
Warp Drive. Tractor Beam. Room Correction. Whatever.

Affiliations: Hypex, Grimm Audio.

Schallfeldnebel

  • Hero Member
  • *****
  • Offline
  • Posts: 816
Re: Unfiltered decimation?
« Reply #13 on: August 12, 2007, 04:20:01 AM »

With all respect, I would not like to try Mr. Faulkner's method for organ recordings. I understand from his interview that he uses this technique mainly on sources with low amounts of high-frequency content.

My digital experience goes back to 1983, when we started working digitally on location using the Sony PCMF1 consumer units, and in the studio the Sony 1610s. These units had horrible brickwall filters, not phase-linear, so organ recordings with a lot of HF content sounded harsh, not from aliasing, but from the terrible phase shift. But darker sources like piano sounded quite good on these units, and orchestra recordings with a controlled high-frequency balance (preferably no Neumann KM83s and M50s, but Schoeps MK2s) were rather OK too.

Point is, a darker source is less sensitive to phase shift and aliasing effects; phase shift even brightens it up. Maybe Mr. Faulkner's method sounds better, but I also get the impression it is used a bit as a tool to brighten up the recordings in mastering.

I still remember Dan Lavry's writings about higher sample rates: you win and at the same time you lose. I am still producing only for the 44.1 CD format, and I have always found recording at 44.1k, without any DSP step in between like SRC, the best way to record for CD. I do record in 24-bit mode, to have more headroom safety, and use POW-R to go back to 16 bit.

Schallfeldwebel
Logged
Bill Mueller:"Only very recently, has the availability of cheap consumer based gear popularized the concept of a rank amateur as an audio engineer. Unfortunately, this has also degraded the reputation of the audio engineer to the lowest level in its history. A sad thing indeed for those of us professionals."