djui5 wrote on Fri, 25 June 2004 22:36 |
Where did you get this from anyways?....digital filters like the ear does?
|
I'm not sure exactly the nature of your question - there are lots of ways to take it, I suppose.
The ear is a filter just like any other natural device. There are several factors at work in the ear that cause it to be a filter. The eardrum itself is a transducer with a specific frequency response, just like any microphone. The ossicles are mechanical devices that filter more like a speaker's magnet/coil combination. The oval window filters again like a transducer - a microphone or speaker.

Inside the cochlea we have the basilar membrane - essentially a taut piece of material stretched along the length of the cochlea. When fluid flows over it, it resonates. As with any transducer, however, it has a fixed and specific frequency response. The fascinating thing about the basilar membrane is that it is long and shaped such that different frequencies resonate best at different places. Higher frequencies resonate the membrane closer to the entrance to the cochlea, where the width of the membrane is very small - less than a millimeter. Lower frequencies resonate best at the opposite end (the helicotrema), where the membrane is wider - around 1.7mm. This means that each spot on the basilar membrane is actually a filter unto itself, resonating only at particular frequencies. As a whole, though, the basilar membrane is also a filter in its own right, in that it simply can't resonate at frequencies higher than its mechanical structure, elasticity, and tautness allow, nor lower than its loosest point allows.

Then there is the individual filtering of the hair cells and their respective neurons. Each hair cell can only respond to specific frequencies. The lowest ones can respond to around 10Hz; the highest ones respond to around 1kHz. The combination of various of those firing asynchronously provides all audibility between 1kHz and around 4kHz. Anything above that we hear based only on the physical location on the basilar membrane at which the hair cell neurons fire.
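As an aside, that place-to-frequency mapping along the basilar membrane is commonly modeled with the Greenwood function. Here's a minimal sketch in Python using Greenwood's published human parameters (the function and its constants come from the literature, not from anything above - treat it as an illustration, not gospel):

```python
def greenwood_place_to_freq(x, A=165.4, a=2.1, k=0.88):
    """Characteristic frequency (Hz) at fractional distance x along the
    basilar membrane, from the apex/helicotrema (x=0) to the base near
    the oval window (x=1). Human parameters per Greenwood."""
    return A * (10 ** (a * x) - k)

# Low frequencies resonate near the apex, high frequencies near the base:
print(round(greenwood_place_to_freq(0.0), 1))   # apex: 19.8 Hz
print(round(greenwood_place_to_freq(1.0)))      # base: 20677 Hz
```

Note how the mapping is exponential: each equal step along the membrane multiplies the resonant frequency, which matches our roughly logarithmic perception of pitch.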
Finally, there is the filtering caused by the impedance of the round window as the waves of fluid pressure in the inner ear meet the middle ear again.
The combination of all of these filters causes our ears to function as one big filter with several individual components. This filtering gives us an upper limit of typically a little less than 20kHz, one that changes with age and as inner hair cells disintegrate. The complete shape of the ear's filtering is not as simple as a few-order IIR filter, but it is safe to say it is a near-minimum-phase IIR filter with a window of roughly .25-.4ms. The bottom line is that the waveform that reaches the auditory canal is heavily filtered by the time it reaches the VIIIth auditory nerve, and any frequency content outside of certain boundaries is attenuated entirely. Other things are filtered out as well, as a result of masking, etc. We can understand the difference between what goes into the ear and what gets to the brain by mocking up a simple filter that rolls off at 20kHz and looking at what happens when we pass various signals through it - an impulse, a sine wave at 25kHz, a square wave at 5kHz, and so on. We notice that what the brain gets with which to decipher the waveform is nothing at all like what enters the auditory canal.
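That mock-up is easy to do in software. A rough sketch with NumPy/SciPy - the sample rate, filter order, and test tones are my own arbitrary choices, and a Butterworth low-pass is just a stand-in for "a filter that rolls off at 20kHz":

```python
import numpy as np
from scipy.signal import butter, sosfilt

fs = 96_000                                     # high enough to represent a 25 kHz tone
sos = butter(10, 20_000, fs=fs, output='sos')   # 10th-order low-pass at 20 kHz

t = np.arange(0, 0.05, 1 / fs)                  # 50 ms of signal
inband = np.sin(2 * np.pi * 5_000 * t)          # 5 kHz: inside the passband
outband = np.sin(2 * np.pi * 25_000 * t)        # 25 kHz: beyond the cutoff

def rms(x):
    # steady-state RMS, discarding the filter's startup transient
    return np.sqrt(np.mean(x[1000:] ** 2))

print(rms(sosfilt(sos, inband)))    # about 0.707: passes essentially unchanged
print(rms(sosfilt(sos, outband)))   # small: attenuated by roughly 20 dB
```

The 5kHz tone comes through at essentially the RMS of a unit sine, while the 25kHz tone is knocked down by roughly 20dB - and a sharper filter (or one placed further above 20kHz) would remove it more completely, which is the situation at the auditory nerve.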
Digital systems also have to filter the audio - in this case to remove frequency content that would cause aliasing. The filtering used in these digital systems is specifically designed to exceed the boundaries of the human ear, such that any waveform that enters a digital system and then enters the ear sounds no different than if that waveform had entered the ear directly.
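To see why that anti-alias filtering is necessary, here is a small NumPy sketch (the tone and sample-rate choices are mine): a 25kHz tone sampled at 44.1kHz with no filtering produces exactly the same samples as a 19.1kHz tone, so after sampling the two are indistinguishable:

```python
import numpy as np

fs = 44_100                         # CD sample rate
n = np.arange(512)                  # sample indices

# A 25 kHz tone sampled at 44.1 kHz with no anti-alias filter...
above_nyquist = np.sin(2 * np.pi * 25_000 * n / fs)

# ...yields the same samples as a phase-inverted 19.1 kHz tone,
# because 25 000 Hz folds back to |25 000 - 44 100| = 19 100 Hz.
alias = -np.sin(2 * np.pi * 19_100 * n / fs)

print(np.max(np.abs(above_nyquist - alias)))  # ~0: the two are identical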
When I say that digital systems filter audio "like the ear" I do not mean that we use minimum-phase IIR filters with the same window and so on; I mean that digital systems filter out all material above 20kHz, just as the ear does before the signal gets sent to the brain. In fact, it is good that the filters in digital systems are not exactly "like the ear," as the non-linearities in the ear, if compounded by more than one instantiation, would cause audible distortion. The filters used in digital systems, therefore, allow no phase shift, so that the only phase shifting is that done by the ear itself. So long as the filters in the digital systems exceed the boundaries of the ear, the audio can be presented to the ear as accurately as though the digital systems were not in the circuit.
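The "no phase shift" property comes from using linear-phase FIR filters, whose symmetric impulse responses delay every frequency by exactly the same amount. A small SciPy sketch (the tap count and sample rate are arbitrary choices of mine, not anything standardized):

```python
import numpy as np
from scipy.signal import firwin

fs = 96_000
taps = firwin(101, 20_000, fs=fs)   # 101-tap low-pass FIR, cutoff 20 kHz

# Linear phase comes from the impulse response being symmetric:
print(np.allclose(taps, taps[::-1]))   # True

# Every frequency is delayed by exactly (ntaps - 1) / 2 samples, so the
# waveform's shape in the passband is preserved - a pure delay, no
# relative phase shifting between frequencies.
print((len(taps) - 1) / 2)             # constant group delay: 50.0 samples
```

The whole signal simply arrives 50 samples late, which is inaudible; contrast that with a minimum-phase filter (like the ear's), which delays different frequencies by different amounts.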
Of course, many methods of cutting corners on these filters have been implemented over the years, including letting slight amounts of aliasing through, phase-shifting the audio at various frequencies, and rolling off some of the viable audible range. The fact that this has been done in no way implies that it has to be, however.
One last point - we have already accepted, have we not (?), that any digital system inherently functions as a filter in that it has strict boundaries, and anything outside of those boundaries is incapable of being transmitted. The ear is a digital device. As such, it is a filter and has strict boundaries very much akin to the digital systems designed for its benefit.
I hope this answers your question?
Nika.