Ken,
Sorry for the very delayed reply.
Your observations are correct. I’ll try to address them one at a time:
You said, “In this case it seems a dry recording is best- so you avoid the room-in-a-room effect during playback.” Your conclusion is correct: very dry recordings would have reflections lower in level than those of even the driest control room. The problem is that not all music is listenable when it is very dry. Experimental recordings have been made in anechoic rooms, but they are mainly used for evaluation and research purposes. I can think of some Lyle Lovett songs that are very dry, but even they have a signature of early reflections that will reveal itself in a properly treated room. You are right, though: such recordings will be influenced by the listening room, and that influence becomes part of the sonic signature of the music.

At that point it is actually not a “control room” any more but a “music reproduction room,” and this is often the case in “hi-fi” listening, where speakers with dipole or even 360-degree radiation patterns are used in an often relatively live room. In these instances the listening room will overpower the information in the recording and create its own spaciousness, perhaps with the exception of large classical pieces. This works for a lot of people, but it is not an accurate representation of what was recorded, and in my opinion that should never be the situation in a control room.
If you’re asking whether I would rather see recordings made in anechoic chambers and then played back in live listening rooms with 360-degree-dispersion loudspeakers, the answer is no. The creative use of space and placement in recordings is, in my opinion, a very important part of the musical whole. Leaving the reverb up to the listener’s room would not work, IMO; it certainly wouldn’t work with headphones.
In most instances with very dry recordings, reverb (with or without early reflections) will be added during mixing. The issue I have with this is that whatever “early reflections” and reverb are added, they are never comparable to the real thing. It can sometimes create a pretty good illusion (though I can’t think of more than a handful of CDs in my collection that I would include in this group), but I haven’t heard any recording where it actually gives the same sonic image as an actual stereo recording. Just move your head a little off axis and the whole soundstage collapses. One reason for this is that the relationship between early reflections and reverb isn’t like that in a real room, and that there are so many more correlated and uncorrelated reflections present in the real thing than in a DSP box.
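The correlated-versus-uncorrelated point can be made concrete with a toy sketch (the decaying-noise "tail" model and all numbers here are my own illustration, not real room data): a mono reverb sent equally to both channels is perfectly correlated between left and right, while fully independent tails have essentially zero correlation. A real room sits somewhere in between, with a correlation that also varies with frequency, which a single shared DSP tail can't reproduce.

```python
import math, random

random.seed(0)

def correlation(x, y):
    """Pearson correlation between two equal-length channels."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = math.sqrt(sum((a - mx) ** 2 for a in x))
    vy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (vx * vy)

def tail():
    """Exponentially decaying noise as a crude stand-in for a reverb tail."""
    return [random.gauss(0, 1) * math.exp(-n / 2000) for n in range(8000)]

mono_tail = tail()
same_both_sides = correlation(mono_tail, mono_tail)  # mono reverb on L and R: r = 1
independent = correlation(tail(), tail())            # independent tails: r near 0
```

The two extremes bracket what a listener's ears receive: identical tails collapse to a point between the speakers, fully independent ones smear into vagueness, and the in-between, partially correlated reflection pattern of a real space is what gives a stereo recording its stable, believable image.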
It doesn’t always take much to enhance the listening experience a lot. You say, “Seems to me in practice the best we can do is pick 1-2 key sounds to try and track properly in stereo,” and I agree: some stereo is a lot better than none at all. One engineer who has done this with great results is Bruce Swedien.
The problem with bleed is that there are too many mics.
Even when individual instruments are recorded, the mic technique seems to be the same as on a stage or in a studio full of loud instruments. I’m sure comfort, and the fact that decisions about sounds can be delayed, has something to do with this. If you’re to record in stereo, a decision has to be made about what that recording will be in the final mix, and this requires some experience and also, I’m sure, a very comfortable relationship between everyone involved (producers, musicians and engineers). What makes me sad is that all of this seems to be possible in jazz and classical music, while the music I prefer to listen to most of the time “suffers” from microphone techniques that might be necessary on a stage but shouldn't really be needed in a studio, where the situation is so much more controllable.
Now, I do understand that there are creative reasons to use close-mic techniques in a studio as well, but it seems to me that these techniques are chosen more as a default and a habit than as an effort to capture tone and an instrument's sonic qualities *so that they fit into the musical whole that makes a song/final mix*. In classical music there are notations for how loud a sound should be in the “mix”; in pop/rock now it seemingly all has to be as loud as it can get. On too many CDs now everything sits directly on top of everything else, and it becomes an unlistenable mush of distortion (because of signals pushed past 0 dBFS). Everything seems to be recorded at 1” and the only “soundstage” is left to right. Not being an engineer, I can’t explain what engineers do differently now, or where in the production chain the sound reaches these levels of horrible quality, but I know from what I hear (no rule without exception) that pretty much all of the good- or excellent-sounding CDs I own are more than five years old. So what is done differently? How can far superior recording equipment mean worse sound?
Next: one of the reasons layers of stereo sound mushy is that “too many rooms” are recorded when different stereo mic setups are used. Psychoacoustically, the brain simply can’t make sense of it, because it would never occur in any natural setting.
My suggested solution would be to record in stereo as few times as possible, and to record more instruments at once into the stereo mics. It doesn’t need to be live: re-amp all the guitar parts and/or synth parts, for example. Move amps around, play with levels and effects until you get the desired placement and room interaction.
...and don’t compress everything to the top of the digital scale. It really kills everything that is enjoyable about sound when seemingly every single track is compressed and limited to death before and during the mix, and then one more time during mastering. It's not as if resolution is an issue down at -20dBFS, but _severe_ distortion artifacts certainly will be an issue when you approach and exceed 0dBFS.
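To put rough numbers on that last point, here is a toy sketch (the figures come from textbook quantisation math and an idealised hard clipper, not from measurements of any real converter or limiter): a 24-bit system's quantisation noise floor sits near -146 dBFS, so a mix peaking at -20 dBFS still has well over 120 dB of clean range, while a sine pushed 6 dB past full scale and hard-clipped picks up strong odd-harmonic distortion.

```python
import math

N = 4800      # samples analysed (an exact whole number of cycles)
CYCLES = 10   # the fundamental completes 10 cycles in N samples

def bin_magnitude(x, k):
    """Naive single-bin DFT: amplitude of the component at bin k."""
    re = sum(x[n] * math.cos(2 * math.pi * k * n / N) for n in range(N))
    im = sum(x[n] * math.sin(2 * math.pi * k * n / N) for n in range(N))
    return 2 * math.hypot(re, im) / N

# A sine that "wants" to peak at +6 dBFS, hard-clipped at full scale --
# a crude model of a mix driven past the top of the digital scale.
clipped = [max(-1.0, min(1.0, 2.0 * math.sin(2 * math.pi * CYCLES * n / N)))
           for n in range(N)]

fundamental = bin_magnitude(clipped, CYCLES)       # level of the original tone
third = bin_magnitude(clipped, 3 * CYCLES)         # distortion product
ratio_db = 20 * math.log10(third / fundamental)    # roughly -13 dB: clearly audible

# Meanwhile, "resolution" at -20 dBFS is a non-issue in a 24-bit system:
# ideal SNR = 6.02 * bits + 1.76 dB, minus the 20 dB of headroom.
snr_at_minus20 = 6.02 * 24 + 1.76 - 20
```

The clipped tone's third harmonic lands only about 13 dB below the fundamental, which is gross distortion by any standard, while the signal recorded 20 dB down still has roughly 126 dB between itself and the quantisation noise.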
Comments are always welcome!
Regards
Lars Tofastrud