...using a microphone with an artificially boosted top end, or using EQ ahead of the tape machine, would have been a strange way to address the problem of a poorly maintained machine.
Even with perfectly aligned machines (which was the normal situation!), boosting the treble before going to tape was very common and necessary in the analog era.
Either you had a machine without Dolby, in which case you had to watch the signal-to-noise ratio,
or you had Dolby units, which had a negative influence on the attacks and therefore on the subjective treble dynamics.
Usually you recorded very hot, which further compressed the treble through saturation.
When I switched to digital (the 32-track PD format), this was the biggest change for me: I could now define the amount of treble in the mix.
I know, I'm responsible for this bunny trail off the original thread.
I also had the experience of the sound coming off tape "relaxing" after a day or so on the reel. I have no scientific evidence to support it, but I often noticed that what I'd recorded on one day typically sounded a bit "gauzier" the next.
Is there any documented evidence from the microphone manufacturers themselves that they designed mics with a built-in treble boost to compensate for tape demagnetisation? I've never read anything like that.
...All this had an influence on the way microphones were built and tuned, as it was and is the end result that counts.
Again, do you have any evidence for that?
But the idea that studio microphone parameters were altered on the basis of such factors further downstream in the recording chain, or even just downstream of the microphone, is something I don't remember ever encountering before.