R/E/P Community


Author Topic: 24-bit is '256 times more accurate' than 16-bit - Roger Nichols debunking?

blueintheface


Hi - this is my first post here, though I've been a voyeur for some time.  Cool

That mammoth thread that was here somewhere on the virtues - or otherwise - of high sample-rates was as informative and interesting as anything I've ever read anywhere - in the audio field. Anyone bookmark a link?

Anyway, Roger Nichols is not my favourite person at the moment - Elemental Audio and price hikes and all - but this isn't about that. This is about Roger's article in Sound On Sound May 2006.

Either I'm not understanding something, or the science is a bit dubious - like MOTU's demos of the superiority of high sample rates!

Roger's main assertion is that the reason 24-bit audio sounds better - 'particularly at the bass end' - is because:

Quote:

The 256 times higher resolution is in effect everywhere in the waveform, from the lowest levels to the highest peaks. A sample point nearing 0dB full scale is 256 times more accurate than the same sample recorded at 16-bit.


Hmmm.

Isn't it more accurate to say that the 24-bit sample is digitally described with greater resolution? And even so, does that mean what you get post-D/A is 256 times more accurate?

Quote:

Let's cut down the confusion with bit sizes, let's use the smallest bit in the 24-bit scale as a reference and call it a step. The difference between Sample A and Sample B in the 24-Bit recording is 16 steps. The difference between the same samples in the 16-Bit recording is 112 steps. That is 96 steps away from where it should have been - a 700% error in low-frequency signal.


Again, I'm not disputing the superiority of 24-bit resolution; I'm just skeptical of the 'science' behind these explanations.
 
Anyone?

Edit: spelling

Barry Hufker


I don't know the answer to your question, but am dying to have someone provide it.  This isn't my area of expertise.

But related to your question is this: many people assume a 24-bit A/D converter can actually deliver a correspondingly greater dynamic range. Looking at the specs of many systems with 24-bit converters, one finds a signal-to-noise ratio of only 108dB. That works out to roughly 18-bit conversion.
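A quick back-of-the-envelope sketch of that arithmetic, using the ~6dB-per-bit rule of thumb (the figures are illustrative, not the specs of any particular converter):

Code:

# Rough "effective bits" implied by a published signal-to-noise spec,
# using the ~6.02 dB-per-bit rule of thumb.
def effective_bits(snr_db):
    return snr_db / 6.02

for snr_db in (96, 108, 120, 144):
    print(f"{snr_db} dB SNR ~ {effective_bits(snr_db):.1f} bits")
# 96 dB  ~ 15.9 bits (16-bit theory)
# 108 dB ~ 17.9 bits (the "18-bit" converter mentioned above)
# 120 dB ~ 19.9 bits (roughly the best real-world converters manage)
# 144 dB ~ 23.9 bits (24-bit theory, never reached in practice)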

Barry

AndreasN


Quote:

Let's cut down the confusion with bit sizes, let's use the smallest bit in the 24-bit scale as a reference and call it a step. The difference between Sample A and Sample B in the 24-Bit recording is 16 steps. The difference between the same samples in the 16-Bit recording is 112 steps. That is 96 steps away from where it should have been - a 700% error in low-frequency signal.


The difference in bit depth equals a difference in low-level information, not a low-frequency signal.

The lowest level is noise. More bits, less error, less noise. The extra accuracy does not do anything magic to the upper bits; they still hold the same information.
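As a minimal illustration of the upper bits staying put (the sample value below is made up):

Code:

# A 24-bit sample reduced to 16 bits: the top 16 bits are untouched;
# only the bottom 8 bits - the low-level detail near the noise floor - are lost.
sample_24 = 0b101101001110001011010111   # an arbitrary 24-bit value
sample_16 = sample_24 >> 8               # keep the 16 most significant bits

print(f"{sample_24:024b}")               # 101101001110001011010111
print(f"{sample_16:016b}" + "........")  # 1011010011100010........ (same upper bits)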


danlavry


blueintheface wrote on Mon, 05 June 2006 12:28



Are you quoting the statements accurately?

The first statement (as posted) was about having 256 times more accuracy with 24 bits (than 16 bits). That one is correct IN THEORY. Each additional bit is a factor of 2 improvement, so with 8 bits you have 2*2*2*2*2*2*2*2 = 256. From an ear standpoint, each bit is a 6dB additional improvement, so 8 more bits will improve the dynamic range by 48dB.

But first, even in theory, note that the improvement is about fine detail BELOW the 96dB range offered by a 16-bit format. In other words, a perfect 16 bits yields 0.001526% accuracy, so the additional bits will improve on that.
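The theoretical arithmetic, just for reference (theory only - as noted below, no real converter delivers anything close to 24 bits):

Code:

# The theoretical numbers behind the "256 times" claim: 8 extra bits.
extra_bits = 24 - 16
print(2 ** extra_bits)       # 256   -> the step size is 256 times finer
print(extra_bits * 6.02)     # ~48   -> dB of additional theoretical dynamic range
print(100 / 2 ** 16)         # ~0.0015%   -> one 16-bit step relative to full scale
print(100 / 2 ** 24)         # ~0.000006% -> one 24-bit step relative to full scale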

Second, we can talk about 24 bits all day long, but there is no converter that will yield real 24 bits. The lowest bits are noise. In fact, take a mic, any mic. Take a mic pre, any mic pre. Set the mic pre gain to say 30-40dB. You now have enough noise to bury the bottom 5-6 bits, making them useless. Your real-world statement becomes: my 20-bit AD is receiving enough noise to make it function as an 18-bit AD (or much less), so I have a 4 times improvement over a 16-bit machine, that is 12dB more accuracy.

Regarding the second statement: it is completely flawed. Using the lowest bit as a reference is off, and that is what causes the very misleading conclusion about a 700% error.
Say we have a million-dollar deal, and I call a million 100%.

Say I got short-changed by a dollar. What is the percent “error”? It is only 0.0001%.

Say I use a dollar as a “reference”, making it the “100% point”. Then a missing dollar is a 100% error. Such an “approach” is of course ridiculous! It is 100% out of 100,000,000%, when the maximum starting point (when talking percentages) should be 100%.
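The same analogy in two lines of arithmetic (purely illustrative):

Code:

# The million-dollar analogy: the size of the "error" depends entirely on
# what you decide to call 100%.
deal = 1_000_000    # the full deal (full scale)
error = 1           # short-changed by a dollar (one lowest step)

print(100 * error / deal)    # 0.0001  -> error as a percentage of the full deal
print(100 * error / error)   # 100.0   -> "error" as a percentage of one dollar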

Not to mention that the lowest step is buried in a huge amount of noise to start with.
Not to mention that you do not need 24 bits – a 144dB dynamic range. Having 120dB is a fantastic range from an ear standpoint.

The rest of the comment, about sample A vs. B having a 112-step error, is weird. Why 112?

But the weirdest statement was the one about a “700% error in low frequency signal”. It is totally and completely out to lunch. What does any of it have to do with frequency? Nothing! In theory, one can have 256 times more accuracy BELOW the 0.0015%. In practice, nowhere near it. And that is true for any signal and ANY FREQUENCY.
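A quick numerical check of that point (a sketch using NumPy; plain 16-bit quantization with no dither, and the exact figures depend on the test signal):

Code:

import numpy as np

# Quantize a low-frequency and a high-frequency full-scale sine to 16 bits and
# compare the worst-case error relative to full scale: it is the same either
# way, i.e. the quantization error has nothing to do with frequency.
fs = 48000
t = np.arange(fs) / fs
for freq in (50, 10000):
    x = np.sin(2 * np.pi * freq * t)      # full-scale signal in the range -1..1
    q = np.round(x * 32767) / 32767       # 16-bit quantization (no dither)
    err = np.max(np.abs(x - q))
    print(f"{freq} Hz: max error = {100 * err:.5f}% of full scale")
# Both print roughly 0.0015% - half a 16-bit step - regardless of frequency.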

Regards
Dan Lavry
http://www.lavryengineering.com


 

Reuben Ghose


I saw that article too and thought that it was full of inaccuracies.  I have a lot of respect for Sound On Sound, so I was really surprised that they would publish an article that was so misleading.  Maybe they just went with it because it was by Roger Nichols?

Reuben Ghose


blueintheface


Yes, I did accurately transcribe those paragraphs from Sound On Sound - if anyone wants to point me towards some easy and effective OCR software - y'know, some that actually works - I'll take some more, in the interests of healthy discussion.  Mad

Thanks for your comments thus far.

danlavry


blueintheface wrote on Tue, 06 June 2006 22:29



Can you just point at the article?

Regards
Dan Lavry
www.lavryengineering.com

blueintheface


Hi Dan - well I'm pointing at the article now but I don't have a webcam  Mad

It was in the SOS magazine, May 2006 - real paper - but possibly available online to subscribers. I don't have a subscription (too poor or too cheap - or both Wink)




danlavry


Malcolm Boyce wrote on Wed, 07 June 2006 01:35

http://www.soundonsound.com/sos/may06/articles/rogernichols_0506.htm


Oh? That article is for sale? Am I expected to pay for an article containing such fundamental errors regarding the very basics?

Regards
Dan Lavry
www.lavryengineering.com  

blueintheface


Attached is a (digital) quote from Sound On Sound magazine May 2006:

SOS Roger Nichols Part 1

blueintheface


Attached is a digital quote of the rest of the article:


danlavry


blueintheface wrote on Wed, 07 June 2006 02:51

Attached is a digital quote of the rest of the article:




Thanks. It is very difficult to read on my machine, but I saw enough there that is way off. For example, he says that current DA's have fewer linearity issues because modern DA's are based on 1 bit at 256fs.
That is totally wrong. We did have 1-bit at 64fs converters about 10 or so years ago, but the DA's have since gone multibit.

Sadly, there are many interesting things one could say about the methods used to reduce integral non-linearity in multibit AD's and DA's. But it requires knowledge of the technology.

I just do not get it. Some guys got to record or master some "stars", and some actually did a fine job. No one is going to take that away from them; they get full credit for what they know and what they did. But that does not make them into technology gurus. Why don't they realize that themselves?

And then there are some audio magazines. In a more ideal world, they would make it their business to know who is competent enough to "talk technology" to their readers. Don't they care enough? Are they incompetent to the point of being unable to choose a heavy-duty technology guy to talk about the technology?

Everyone accepts the fact that being a good EE does not automatically make one into a recording engineer.
Is it not long overdue to understand that being a good recording engineer does not short-circuit the long process of becoming a knowledgeable and experienced EE?

Regards
Dan Lavry
www.lavryengineering.com

Graham Jordan


Wow. If I'd bought that article, I'd want my money back! The section 'The Bits' is just so wrong, in so many ways.

As Dan said, one of the big mistakes is showing error signal levels compared to the 'sample step' size. Totally wrong.

He is suggesting that a same-size 'error' on a high-frequency waveform isn't heard, but there's no difference between that and the same error on a low-level signal, e.g. a bass waveform! And I'm sure you'd want to hear that.

Some of the problems with this section...

1. Error compared to step size is plain wrong. Error signal size compared to desired signal size is what we hear. We don't hear sample steps, we hear audio and frequencies.
2. Diagrams show straight lines between sample points. NO! This is not the waveform.
3. 16-bit sample 'step sizes' are integer multiples of 256 of the 24-bit single steps. The high-frequency waveform shows 1280 'steps', = 5*256. But the low-frequency one is 112?? Shouldn't this be 256, so even more 'error' as he defines it? (See the sketch after this list.)
4. The 'errors' in the 16-bit waveform are at the 16-bit noise floor level, so if it has been dithered properly, then it is just noise (unlike the bad signal-correlated noise from non-dithered truncation).
5. Why is the 24-bit '16 steps' the 'right' answer? This is A/D-level noise (even on a good A/D). The 'right' answer could be 32 steps, 0 steps, or other similar numbers.
6. What are the 16-bit and 24-bit 'waveforms' showing? Is the 16-bit one a dithered version of the 24-bit? Dithered and truncated? Just truncated? Simultaneous recording on 16-bit and 24-bit A/Ds?
7. The bass waveform has clear higher-frequency components to it, but by his logic, as the lower-frequency 'carrier' waveform increases in frequency, this high-frequency component is going to 'disappear' (as the step size becomes larger).
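To make point 3 concrete, here's a small sketch (NumPy, plain truncation with no dither - a real 16-bit capture would be dithered, as point 4 says; the test signal is arbitrary):

Code:

import numpy as np

# Reducing a 24-bit signal to 16 bits snaps every sample onto a grid of 256
# "24-bit steps", so any sample-to-sample difference in the 16-bit version is
# an integer multiple of 256 (1280 = 5*256 fits the scheme; 112 cannot occur).
t = np.arange(48000) / 48000
x24 = np.round(0.9 * np.sin(2 * np.pi * 50 * t) * (2**23 - 1)).astype(np.int64)

x16 = (x24 >> 8) << 8            # 16-bit version, expressed in 24-bit steps
diffs = np.diff(x16)

print(np.all(x16 % 256 == 0))    # True
print(np.all(diffs % 256 == 0))  # True: a difference of 112 steps can't happen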

Ah, enough.

It makes me sick to see this in a respected magazine.

Barry Hufker


OK, so does anyone tell Roger Nichols?  Or write a letter to the Editor?

Barry