The "Recommendations for Surround Sound Production" generated by Phil Ramone & co. is here:
http://www.grammy.com/Recording_Academy/Producers_and_Engineers/Guidelines/
it's an interesting, albeit not very helpful, read. The calibration scheme lacks enough technical merit to be credible, IMO.
cheflaco:
""If I calibrate each monitor to 85dB SPL, wouldn't the sum of all main monitors add up to 97dB SPL (85dB+3dB*4) ? Wouldn't this be a little too "hot"?""
It would be very close to a 7dB increase in SPL. You can figure it out for yourself too:
10*log10(P1/P2) gives you the delta in dB between two power levels, and uncorrelated sources sum on a power basis, so N identical monitors give an increase of 10*log10(N) over one. For five, that's about 7dB, not 12. In the real world, if you manage to get above 6dB with all 5 running, you're doing well.
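The arithmetic, sketched out (this assumes five equal, uncorrelated sources; nothing here is specific to any calibration standard):

```python
import math

def spl_sum_incoherent(levels_db):
    """Sum uncorrelated sources: convert each SPL to relative power,
    add the powers, convert back to dB."""
    total_power = sum(10 ** (level / 10) for level in levels_db)
    return 10 * math.log10(total_power)

five = spl_sum_incoherent([85.0] * 5)
delta = five - 85.0
print(f"5 x 85dB SPL -> {five:.2f} dB SPL (+{delta:.2f} dB)")
# -> 91.99 dB SPL, i.e. +6.99 dB, not +12
```

Only if the five channels carried the identical, phase-coherent signal would they sum on a voltage basis (20*log10(5), about +14dB), which is why the measured increase in a real room lands near 7dB.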
I presently use Mr. Katz's method of calibration (thank you, Bob, for giving us a credible alternative to SMPTE RP 200 for music), but I'll switch over to RP 200 if I'm producing a stem for film.
These days I'm a sound designer, but I spent 12 years of my career as an acoustician, and being anal is a job requirement/prerequisite in that field. As such, I'm somewhat disappointed that there isn't better definition with regard to how we translate analog concepts into the digital world, and that the methods for monitor calibration, as offered, lack sufficient hard data to allow for true compatibility. No offense, Mr. Katz, you've done a great job. In my opinion, however, there are still too many variables in the process.
I mean, let's face it, the telephony world has it all over us in this respect. One thing that would help is a little clearer definition of the tools. For example:
-Agree on a standard test signal. There are several methods available for generating pink noise and none really come close to the ideal, but that wouldn't matter if we just specified the method of generation and the amplitude distribution (Gaussian), or even just said "use white noise generated by method X, filtered at 10dB/decade." I don't care, just agree on something. I'm inclined to think the signal should be clamped to provide a constant crest factor, say 12dB or so. That would help alleviate variables on the electrical and acoustic measurement side of things, e.g. SLM integration times and impulse effects, i.e. you wouldn't be stuck having to buy a B&K meter.
-Metering... yeah, there's a can o' worms. Another good argument for a fixed-crest-factor test signal. Can we even do "true" RMS measurements in the digital domain? The best Agilent true-RMS digital meter does an analog conversion first to figure this out. That's fine if you're just using sine waves, but any complex waveshape.....
-Are we using the 0dB SPL = 20 µPa reference?
-Why measure C-weighted? The measurement should be linear (unweighted) if you want to correctly calibrate the LFE. C-weighting is about 3dB down at 31.5Hz and more than 6dB down at 20Hz. That's a lot of room for error, especially if test signals are inconsistent.
-Microphone incidence correction?
-SLM integration time? (to the standards, not the Radio Shack variety).
-RTA or TDS? Both? Hanning, Hamming?
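For what it's worth, here's a toy sketch of a couple of those points, with all the arbitrary choices mine and none of them a standard: pink noise made by 1/sqrt(f) FFT shaping of white noise, a hard clamp toward a 12dB crest factor, sample-domain RMS, and the IEC 61672 C-weighting formula evaluated at the low end.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2 ** 16

# Pink noise via FFT shaping: scale white noise by 1/sqrt(f) so the
# power spectrum falls 10dB/decade (3dB/octave).
spectrum = np.fft.rfft(rng.standard_normal(n))
freqs = np.fft.rfftfreq(n)
freqs[0] = 1.0                 # placeholder; the DC bin is zeroed below
shaped = spectrum / np.sqrt(freqs)
shaped[0] = 0.0                # kill DC so the signal stays zero-mean
pink = np.fft.irfft(shaped, n)

def rms(x):
    # "True" RMS of a stored digital block is just the root of the mean
    # square of the samples -- no analog detour required.
    return np.sqrt(np.mean(x ** 2))

def crest_db(x):
    return 20 * np.log10(np.max(np.abs(x)) / rms(x))

# Clamp the peaks so the crest factor can't run much past 12dB.
limit = rms(pink) * 10 ** (12 / 20)
clamped = np.clip(pink, -limit, limit)
print(f"crest: {crest_db(pink):.1f} dB raw, {crest_db(clamped):.1f} dB clamped")

def c_weight_db(f):
    # IEC 61672 C-weighting magnitude, normalized to 0dB at 1kHz.
    f1, f4 = 20.598997, 12194.217
    r = (f4 ** 2 * f ** 2) / ((f ** 2 + f1 ** 2) * (f ** 2 + f4 ** 2))
    return 20 * np.log10(r) + 0.0619

for f in (31.5, 20.0):
    print(f"C-weighting at {f:>4} Hz: {c_weight_db(f):+.1f} dB")
```

Running the C-weighting formula gives roughly -3dB at 31.5Hz and -6.2dB at 20Hz, which is exactly why a linear measurement makes more sense for calibrating the LFE.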
Maybe I am being too anal. Poor cheflaco just wanted some help and I give him a rant. Ah, there's some single-malt in the cabinet. I'd better have one now. Anyway, follow Mr. Katz's procedure and your stuff will more than likely sound great on other people's systems. So, maybe all the tech stuff really doesn't matter.
Randall