As I read this quote, he is indeed saying that the 3x voltage gain of the transformer automatically translates into a 3x higher slew rate as seen after the transformer. Or, put differently, that reduced signal level in an amplifier automatically reduces the effects of slew-rate-related distortion.
The second half of the quote is a bit vague. As I interpret it, he then applies the same logic to any system where gain can be varied in several places, such as an amplifier followed by an ADC. Dropping the gain before the converter and making it up in the digital domain indeed reduces slew-rate (and other) distortion in the amplifier.
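To make the arithmetic concrete, here's a quick sketch (with assumed numbers, not from the quote: a 20 kHz sine at 10 V peak at the output). The peak slew rate of a sine wave is 2*pi*f*Vpk, so a 3x step-up transformer after the amplifier means the amplifier itself only has to slew a third as fast for the same output level:

```python
import math

def required_slew_rate(freq_hz: float, v_peak: float) -> float:
    """Peak slew rate (in V/us) demanded by a sine wave: SR = 2*pi*f*Vpk."""
    return 2 * math.pi * freq_hz * v_peak / 1e6

# Amplifier drives 10 V peak at 20 kHz directly:
sr_direct = required_slew_rate(20_000, 10.0)

# Same output level, but a 3x step-up transformer follows the amp,
# so the amp only swings 10/3 V peak:
sr_with_stepup = required_slew_rate(20_000, 10.0 / 3)

print(f"direct: {sr_direct:.3f} V/us, with 3x step-up: {sr_with_stepup:.3f} V/us")
# direct: 1.257 V/us, with 3x step-up: 0.419 V/us
```

Same reasoning applies to the amplifier-plus-ADC case: drop the analog level by some factor, and the slew rate demanded of the amplifier drops by the same factor.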
Actually, reducing the level in normal (class A) amplifier circuits always translates into reduced distortion, be it slew rate related or otherwise.
One rarely sees slew-rate-induced problems in modern op amp implementations (at least those aimed at audio). So, as long as the slew rate spec is adequate, there's no need to be too concerned about it.
As a sideline I should note that slew rate does not correlate well with distortion performance. Some op amps have fantastic slew rate specs, yet slew-rate-induced distortion becomes noticeable well below the rated figure. Others have a relatively modest slew rate but don't distort much until you actually run into it.
Much like one device can have 24 dBu of headroom but THD starts to climb at +14 dBu already, whereas another has only 18 dBu of headroom yet still doesn't distort discernibly at +17.9 dBu.
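For reference, those dBu figures map to actual voltages the op amp has to swing (0 dBu is defined as 0.775 V rms). A quick conversion sketch:

```python
def dbu_to_vrms(dbu: float) -> float:
    """Convert a level in dBu to volts rms (0 dBu = 0.7746 V rms)."""
    return 0.774597 * 10 ** (dbu / 20)

print(f"+24 dBu = {dbu_to_vrms(24):.2f} V rms")   # ~12.28 V rms
print(f"+14 dBu = {dbu_to_vrms(14):.2f} V rms")   # ~3.88 V rms
```

So the hypothetical part that starts distorting at +14 dBu is already misbehaving below 4 V rms, well inside what its headroom spec would suggest is comfortable territory.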
An op amp with a degenerated BJT input pair will be much better behaved, slew-rate-wise, than a JFET-input op amp with the same transconductance (but no degeneration).