For example, do you apply noise reduction first and listen to the results before moving forward, or do you feel that noise reduction and EQ settings need to be done simultaneously for the best result?
I always configure the noise reduction first. In my experience, if NR is done later in the chain, it will mess with everything you have done to tune the mix into sounding better. Especially NR at more extreme settings, if you can get away with them.
NR can change the spectral balance, the dynamics, and the phase relationships between the different parts of the spectrum in a non-productive way. When I isolate it as the first thing in the chain, I can then correct later whatever it pushed out of bounds (artefacts).
Is it possible to master with headphones only? Advantages and disadvantages.
Possible? Yes.
Recommended? Definitely NOT.
Headphones are a valuable tool in mastering, but to do everything with headphones as monitors brings disaster.
Headphones are used to help zero in on very narrow problem frequencies with surgical precision.
They are helpful in detecting artefacts from digital processing especially.
Also, briefly, as a last check of the final cut before pressing.
Sometimes there are details that can only be revealed by a pair of phones, but they will not give you any natural feel for the lower end of the spectrum, and your overall frequency balance will suffer if you make too many of your decisions with only phones.
Would a BBE or Exciter help in this situation?
Sometimes, if used sparingly, they can help with some phase issues. But not always. BBE and Aphex are very different types of exciters. Briefly put, BBE is fooling around with the phase relationships in 3 discrete bands. Aphex is extending the highs with harmonics derived from the original signal.
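The Aphex approach described above, deriving new upper harmonics from the original signal and blending a little back in, can be sketched generically. This is not the actual Aphex circuit; the filter, the `drive` and `mix` values, and the function names are all my own illustrative choices.

```python
import math

def highpass(x, alpha=0.95):
    # one-pole highpass: isolates the treble band to be excited
    y, prev_x, prev_y = [], 0.0, 0.0
    for s in x:
        hp = alpha * (prev_y + s - prev_x)
        y.append(hp)
        prev_x, prev_y = s, hp
    return y

def excite(x, drive=4.0, mix=0.1):
    """Generic exciter sketch: generate harmonics from the signal's
    own treble content, then blend a small amount under the dry path."""
    treble = highpass(x)
    # a soft nonlinearity creates harmonics of whatever passes the filter
    harmonics = [math.tanh(drive * t) for t in treble]
    return [s + mix * h for s, h in zip(x, harmonics)]

# roughly a 1 kHz sine at 48 kHz sample rate
sr = 48000
sig = [math.sin(2 * math.pi * 1000 * n / sr) for n in range(480)]
out = excite(sig)
```

Note how small the `mix` value is: as with the hardware, the effect lives or dies on restraint.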
A lot of practice is needed with these devices before you can decide right away whether your material can benefit from their treatment.
Sometimes it can make a HUGE difference with what seems to be impossibly small differences in settings. But, the same could be said for EQ, compression, ...
This is why I treat the older, phasier EQ boxes, the Pultec for instance, as more of an excitation process than pure EQ.
If I want pure frequency domain adjustments I'll reach for any of the dozen linear-phase EQs out there. With a box like the Pultec, I'm not really tuning in on frequencies (well yes, I am, but for a different reason); I'm choosing the best place in the frequency domain to change the phase in relation to the time domain in desirable ways, which often has the nice side effect of boosting and/or cutting certain frequencies in an aesthetically pleasing way that could be described as "psycho-acoustic".
Now, the Pultec takes much longer to learn in this way, but it can be used in many more situations because it can be tuned more delicately than any of these new-wave exciters.
I do own an SPL Tube Vitalizer, and when this is used in the right situation the results can be stunning.
Could a combination of M/S processing and delay help in repositioning the instruments in the mix?
Yes. Unfortunately the point of diminishing returns is not far from where you start with it. If you're lucky it will not even get you halfway there. Better to do a remix than to pay for long hours of digging around with M/S EQ trying to remix it for you.
At its best, it may still sound artificial.
Remix it.
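The M/S encode/decode at the heart of this technique is simple to sketch. The mid channel carries what left and right share; the side channel carries what differs. Scaling the side before decoding widens or narrows the image, which is the basic "repositioning" move being discussed. Function names and the `side_gain` value are my own.

```python
def to_ms(left, right):
    # mid = content common to both channels; side = content that differs
    mid = [(l + r) / 2 for l, r in zip(left, right)]
    side = [(l - r) / 2 for l, r in zip(left, right)]
    return mid, side

def from_ms(mid, side):
    # exact inverse of to_ms
    left = [m + s for m, s in zip(mid, side)]
    right = [m - s for m, s in zip(mid, side)]
    return left, right

def widen(left, right, side_gain=1.5):
    """Boost the side channel to push off-center material outward
    (gain > 1 widens, gain < 1 narrows). Illustrative sketch only."""
    mid, side = to_ms(left, right)
    side = [side_gain * s for s in side]
    return from_ms(mid, side)

left, right = [1.0, 0.5], [0.25, -0.5]
wl, wr = widen(left, right, side_gain=2.0)
```

In practice the EQ or delay would be applied to `mid` and `side` separately before decoding; the round trip itself is lossless, which is why the artificiality the answer warns about comes from the processing, not the matrix.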
As MEs we are told to "honor the mixer's intentions"; are there conditions where this is not the case?
No.
Communication is essential.
Once things have been explained to whoever is in charge of making these types of decisions (it may be the band or the executive producer rather than the mixing engineer), it will be up to them to decide where they want to go with THEIR sound.
You are only the pilot; they should know where they want to go. Once they've heard some options from you, you take them where they ultimately want to go.
Another general rule of thumb for mastering is to maintain as transparent a path as possible. Why, then, would we intentionally color a mix, and where in the processing chain should this be done, if at all? At the start, or after we've heard the "naked" mix?
Coloration is mainly done with different balances of several types of harmonic distortion and/or types of phase shifts. These distortions can be pleasing when used in proper amounts with the right kind of material. They add a nuance that can only be heard when you are very concentrated on listening. Sometimes they help bring out details to the surface that may have been overlooked otherwise. It is a type of "Hi-Lighting".
I prefer to have it as one of the last steps in mastering. If I put it first, everything I do subsequently will affect that balance and may steer the sound somewhere undesirable.
By saving it for last, the ultimate control of the final color of the sound is left to me. I usually achieve this with the SPL box and sometimes the Cranesong HEDD converter, although it is often not needed because of the other gear that I use.
Coloration also comes naturally from outboard devices such as compressors and EQs, which distort sound in very controlled ways relative to the settings used. This is why "coloration" can often help a digital mix feel more "analog".
Again, total discretion is needed in these decisions. It is obviously undesirable to have a mix turn completely into distortion (and it is easier for this to happen than you may believe; the ears quickly acclimate to the added distortion and will ask for more). Try to think of the added tones as a gentle brushing-over with watercolors rather than the thick paint you would use on a wall, which covers everything underneath it. The real trickery can be observed when you realize, after a minute's listening, that if the settings are right it is easier to detect when the unit is switched out of the path than when it is switched in.
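That "watercolor, not wall paint" balance can be sketched as a low-mix soft-saturation blend, a common stand-in for analog-style harmonic coloration. This is a generic illustration, not any particular engineer's unit; the `drive` and `mix` values are hypothetical.

```python
import math

def saturate(x, drive=1.5, mix=0.15):
    """Blend a touch of tanh soft-clipping under the dry signal.
    A low mix 'colors' the sound; a high mix would 'cover' it."""
    return [(1 - mix) * s + mix * math.tanh(drive * s) for s in x]

# one cycle of a full-scale sine as test material
sig = [math.sin(2 * math.pi * n / 64) for n in range(64)]
colored = saturate(sig)
```

Because `tanh` compresses the peaks while leaving small values nearly linear, the added harmonics ride under the original rather than replacing it, which is the whole point of keeping `mix` small.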
"Color", don't "Cover".