volki wrote on Mon, 08 August 2005 05:07 |
Brian, indeed, my MOTU2408 only has internal jumpers for +4 / -10 dB reference level...
I wonder if there are AD-converters around which can take a higher level from a balanced source? That would be one reason to go balanced, actually. The thought comes to mind because e.g. ADAT's accept up to +19 dBu when driven balanced, but considerably less when driven unbalanced.
|
The internal jumpers you mentioned for +4 or -10 dB set your Nominal Operating Level. Generally your headroom (or clip level, or full scale level) will be 14dB (or more) higher than the nominal operating level you might select by using a particular connector, setting a jumper or going to a menu. You can think of Nominal Operating Level as your "0VU", an average operating level above which there will be higher level peaks.
For example, on a Masterlink you have the same -10/+4 choice but you get there by selecting which connectors you use. From the Masterlink book... "the XLR jacks are +4dBu nominal input level, while the RCA jacks are –10dBV nominal input level. Both sets of inputs have 15dB of headroom from nominal input to clipping, resulting in a maximum of +19dBu at the XLR inputs, and +5dBV at the RCA inputs."
The maximum level an A to D with a balanced input can deal with is not going to be affected by the source being balanced or unbalanced. Masterlink's balanced inputs will accept up to +19dBu and this is independent of whether the source is balanced or unbalanced - see more on this farther below...
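As a quick sanity check of that nominal-plus-headroom arithmetic, here's a small Python sketch (the function name and the 2.214dB offset between dBV and dBu are mine, not from the Masterlink manual):

```python
def dbv_to_dbu(dbv):
    # 0 dBV = +2.214 dBu (reference voltages: 1.0 Vrms vs 0.775 Vrms)
    return dbv + 2.214

# Masterlink figures: clip point = nominal level + 15 dB of headroom
xlr_clip_dbu = 4 + 15          # +4 dBu nominal -> +19 dBu maximum
rca_clip_dbv = -10 + 15        # -10 dBV nominal -> +5 dBV maximum

print(xlr_clip_dbu)                        # 19
print(round(dbv_to_dbu(rca_clip_dbv), 1))  # 7.2 - the RCA path clips ~12dB lower
```

Put in the same unit, the two input paths clip about 12dB apart even though each has "15dB of headroom".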
as I write this Mark Donahue has brought up a point...
Mark Donahue wrote on Mon, 08 August 2005 12:06 |
Dear All, There's actually one more thing to think about. Some converters use a true balanced 2-converter topology to get the noise figures they need for the marketing people. One of these is the Benchmark AD-2004. I'm sure there were others. If either of these were driven single ended, the signal would peak at -6dBFS and would distort above that.
|
That hopefully is an exception to the rule - that would be the equivalent of a grounded center-tap transformer input and would be, in my opinion, stupid - good for specs and marketing and bad for use in a real world environment. One can design a differential input amplifier that has a differential input and a differential output feeding two converters differentially that does not look like a center-tapped input. If Benchmark really did design their input circuit so that you lose 6dB of headroom in the input amplifier, or so that it loses the input to one of the two converters used differentially when the source is unbalanced, I have to say I am disappointed.
back to reference levels
Other devices have settings referenced to Full Scale. Full Scale (dBFS or dBfs) is your clip point, and that figure can be expressed in a number of ways that can add to one's confusion. I like it when FS is expressed as the actual level where a sine wave will clip or go full scale at the A to D input, and the actual level that comes out of the D to A output when reproducing a full scale sine wave. Sometimes it is expressed as a negative number, i.e. nominal operating level is some number of dB below FS - but then, what is nominal operating level... 0dBu? -10dBV? 0VU (whatever that is)?
I had an early Benchmark A to D converter with no input adjustment at all and it turned out that it was set internally, with fixed resistors, for a full scale input level of +22dBu... which can be thought of as FS = +18 over 0dBVU (+4dBu). This made it less desirable to use as a primary converter when mixing to both analog and digital in one pass. The VU meters on the console and the analog machine always stayed on the pegs and were useless. The tape machine had to be aligned to record quite cool on the tape in order to compensate for this fixed input level. If the FS on the converter was set too low I could just pad the input a little but since it was set too high I had to send it back and have the reference changed.
Prepare for a Tangential Launch! Content contains some editorializing, opinionating and some handy and possibly relevant technical info-izing!
A lot of gear can't output signal cleanly at +22dBu. Most "professional" gear (as that term is used today) can handle +18dBu okay, unless it's some sort of cheap wall-wart powered junk or is otherwise designed around power sourced from Firewire and USB interfaces - which is a LOT of gear these days. I am finding that "nominal" operating level (as defined by some reasonable headroom) is drifting toward 0dBu in newer gear because it is less costly to manufacture. Lower power supply voltages and lower current consumption mean smaller power supplies. Smaller power transformers (if the supplies are linear and not switching) mean less weight, which means lower shipping cost, lighter chassis build and so on. It's the same reason audio transformers in manufactured equipment first got crappy and then were replaced with "superior" chip-based active transformerless inputs and outputs.
As we move deeper and deeper into mass market driven commodity product I expect that we will continue on down to "-10dBV" Home HiFi interface levels as the norm. Price seems to be the bottom line driving almost everything. This in itself is not a bad and evil thing, but it certainly does present problems when you are trying to cleanly interface a variety of equipment spanning perhaps forty years of changing technology and target markets.
So...
Gear with unbalanced outputs that has a bipolar audio power supply running +/-15Vdc can "swing" an output signal almost "rail to rail"... meaning, at the output of the amplifier driving the output, the most positive peak would be, maybe, +14.5Vdc and the most negative peak would be -14.5Vdc. The signal goes out through a capacitor in most cases, so if you want to measure the actual swing you could use an O'Scope and measure 29 volts from the most positive peak to the most negative peak... or you could hang on an AC voltmeter that can give you a true RMS measurement of the level at clip. I find a 'scope and a distortion analyzer used together to be the most handy because I can measure and see the distortion as level is increased.
At any rate - here's how it goes. With the unbalanced +/-15Vdc powered output described above, swinging 29Vpp (Volts Peak to Peak) you can figure the maximum RMS sine wave level (which is the signal we use to set pretty much all levels in audio) as follows.
Vpp = 2.828 x Vrms
or
Vrms = 0.3535 x Vpp
so
0.3535 x 29Vpp = 10.25Vrms
and to get Vrms into dB
dB = 20 x log10(Vrms / Vref)
Vref for dBu units is 0.775Vrms
Vref for dBV units is 1.0Vrms
10.25Vrms = +22.43dBu
Subtract 2.214dB from a dBu unit to convert to dBV
Add 2.214dB to a dBV unit to convert to dBu
so 0dBu = -2.214dBV
and 0dBV = +2.214dBu
okay...
unbalanced output running on bipolar +/- 15Vdc supplies will clip at around +22dBu
and unbalanced output running on bipolar +/- 12Vdc supplies will clip at around +20dBu
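If you want to check those clip figures yourself, here's a minimal Python sketch of the same arithmetic (the half-volt of swing lost to each rail is my assumption, matching the +/-14.5Vdc figure above):

```python
import math

DBU_REF = 0.775  # Vrms reference for dBu

def vpp_to_vrms(vpp):
    """Sine wave: Vrms = Vpp / (2 * sqrt(2)), i.e. 0.3535 x Vpp."""
    return vpp / (2 * math.sqrt(2))

def vrms_to_dbu(vrms):
    return 20 * math.log10(vrms / DBU_REF)

def supply_clip_dbu(rail_v, loss_to_rail=0.5):
    """Max unbalanced sine level for bipolar +/-rail_v supplies,
    assuming the amp swings to within ~0.5 V of each rail."""
    vpp = 2 * (rail_v - loss_to_rail)
    return vrms_to_dbu(vpp_to_vrms(vpp))

print(round(supply_clip_dbu(15), 1))  # 22.4 -> about +22dBu
print(round(supply_clip_dbu(12), 1))  # 20.4 -> about +20dBu
```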
that's not a huge difference really - when you jump power supplies UP to 18 volts you only get about 2dB more headroom, and distortion generally rises in chips optimized for 15 volt rails... going from 18 to 20 volt rails gets you 1 more dB, and chips fail faster as you exceed the 'maximum' operating ratings... this is why I suggest that people with Trident chip-based consoles like the TSM and Series 80 drop the supplies from +/-20 to +/-18 or lower
Of course voltage swing is not everything - looking at voltage swing alone, with no consideration of driving a wire and a real load at the end of the wire, ignores the fact that most things you are trying to drive into will present an impedance, and in most cases that impedance will not be linear with frequency like a pure resistance might be. While a 5534 chip can drive a 600 ohm resistive load, it really can't drive it up to +22dBu (given +/-15 volt supplies) at all frequencies from 10Hz to 30kHz - CLEANLY... If you change the load to something more like you would really see - a cable that has some capacitance and a load that has various capacitive and inductive components (more so in digital gear, in order to keep high frequency radiation down and comply with FCC requirements) - you see some problems. Try this with a TL071 series chip or its various cousins that are cheaper and consume less power than 5534's (and than more modern, more costly, more power hungry chips) and it gets worse - so that's why I say most gear can't do much over +18dBu cleanly...
okay - so two amplifiers instead of one... balanced outputs
Gear with bipolar +/- 15 volt power supplies and active balanced outputs can develop nearly +28dBu differentially into no load. That is 6dB more than an unbalanced output on the same supply voltages. There are really two output amplifiers driving a single output, each driving a signal pin on an XLR or TRS jack, with one amplifier being 180 degrees phase reversed from the other to provide a complementary or "differential" output. The peak to peak figure is going to be double because while the "pin 2" amplifier output is sitting at +14.5Vdc, the other "pin 3" amplifier will be sitting at -14.5Vdc... so that is a differential "positive" peak voltage of 29 volts and, in the next half-cycle, the polarity has reversed so the differential "negative" peak voltage is -29 volts... the differential peak to peak is now double - skip the math - it's 6dB more... UNLESS you are trying to drive INTO an unbalanced input.
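For anyone who doesn't want to skip the math, running the same sine-wave arithmetic on the doubled swing confirms the 6dB figure (a quick Python sketch; the function name is mine):

```python
import math

def vpp_to_dbu(vpp):
    """Sine wave: Vrms = Vpp / (2*sqrt(2)); dBu is referenced to 0.775 Vrms."""
    vrms = vpp / (2 * math.sqrt(2))
    return 20 * math.log10(vrms / 0.775)

single_ended = vpp_to_dbu(29)  # one amp swinging +/-14.5 V
differential = vpp_to_dbu(58)  # two amps in opposite polarity: double the swing

print(round(single_ended, 1))                 # 22.4
print(round(differential, 1))                 # 28.5
print(round(differential - single_ended, 2))  # 6.02 -> the extra "6dB"
```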
When you drive a differential balanced output into an unbalanced input, the driving amplifier sees one of its amplifiers shorted to ground. These days the output circuits are generally designed to sense this and correct the level at the surviving un-shorted output so there is no 6dB loss in level (from half the output being shorted)... but we are back to one amplifier driving off a set of 15 volt (or whatever) supplies, and the max level drops back down to what it would have been with an unbalanced output. In addition, it's not as though that low side amplifier shorted to ground has actually shut itself off. It may be current-limited, but in order for a differential balanced output to "sense" that one half has been shorted to ground, that amplifier has to keep trying to output signal to GROUND... and this can cause problems with crosstalk. Crosstalk can make itself apparent both as signal bleed or leakage and as an increase in overall distortion... because the amplifier driving ground is often distorting pretty badly, so the leakage signal is distorted and mixes back in with the rest of the audio you are trying to keep clean. There are some differential outputs that are better than others in this regard (dumping audio to ground when unbalanced) but by and large those circuits cost a few cents more to produce, so many manufacturers opt for saving the money.
This is why - in a topic running in this forum, "Anyone Else Constructing Their Own Console?" - a solution to a crosstalk problem was to deal with the low side pin of the differential balanced output that was driving into an unbalanced line input: drop the wire from the low side pin and tie it to ground. If you can deal with losing 6dB in maximum level from the source then this can work. It's not the very best way to deal with it but it can solve the problem.
Other fixes would include
-> as above (drop the wire from the low side pin) but jumper the low side pin to ground at the source device instead, and hope that the audio current driving into ground there does not find it desirable to go to ground through the low side wire to your console - a good chassis earth for the source equipment can help. The amplifier will sense the unbalance and maintain level - not dropping by 6dB... and sometimes a 10 ohm decoupling resistor inserted in series with the low side of the common wire between source and destination can help, by making the path to the console higher impedance than going to ground another way... but this can be problematic for other reasons - a 10 to 100 ohm earth decoupling resistor can work great in closed designs like the insides of consoles and such, but in the less controlled and chaotic environment outside the gear, where connection and ground paths change... it can be problematic. I am already out on a big tangent so I won't explain too much more about that here, except to say that this is one reason among many that I design and view studios as complete systems - like one big piece of equipment - instead of just a bunch of disparate gear piled in a room and wired together.
more fixes, continued
-> balancing every input on every unbalanced-input device that could be driven from a differential output - you can do this with transformers or by adding little circuit boards with differential input chips on them.
-> transformer isolating all differential output devices so they drive a transformer primary differentially and stay happy, while the transformer secondary can be unbalanced and not care - oh yes - transformered outputs really don't care if they are driving a balanced load or an unbalanced load, as long as both ends of the transformer winding are connected to the destination (one end to ground and the other to hot in the case of an unbalanced input). Transformers don't drive audio current to ground or lose headroom when connected to an unbalanced load.
-> you can wire some 1:1 transformers into a balanced patchbay and patch from differential balanced outputs through the transformer to unbalanced inputs on a case by case basis...
and...
where did this start? oh yes...
matrix wrote on Sat, 23 April 2005 04:11 |
Hi all. I want to connect the unbalanced out from my consoles inserts to my A/D converter, that uses balanced inputs. ...
|
I did post a response to that but here is a little more
Connecting from an unbalanced source to balanced or unbalanced destinations is not in itself problematic. Unbalanced to unbalanced is fine. You have to be mindful of multiple ground connections - you really can't avoid them, but as long as your ground is clean and there are no stray currents trying to equalize various noise potentials across your studio, you should be fine. Unbalanced to differential balanced (except, perhaps, as noted by Mark Donahue) is fine. I view differential inputs as I would a voltage measuring device. It's going to see the difference between the hi and low terminals, so it will see the difference between your signal output and ground, and that is fine. Differential inputs connected to unbalanced sources have just as much headroom as they do when connected to balanced sources. Transformer balanced inputs - same thing - they can be driven from unbalanced sources with no loss in headroom. Both ends of the transformer winding have to be terminated to the unbalanced source - one to ground and one to signal.
You must connect both ends of a transformer winding or it will be low level, very thin and unhappy sounding.
Hanging step-up output transformers across unbalanced -10dBV outputs to shift level and balance the signal at the output is often problematic because the average chip output can't drive an output transformer.
Outputs are generally low impedance and inputs are generally high impedance - output transformers, in order to be low impedance and consequently able to drive a wire and a real world load, need to be driven by a low impedance source that can properly drive the transformer and whatever is hanging off the secondary. The output transformers I use are anywhere from 50 Ohms to 150 Ohms at the primary and 600 Ohms at the secondary WHEN TERMINATED - connected to a 600 to 1000 Ohm load.
Transformer impedance figures that are quoted in spec sheets are dependent on how the thing is terminated - secondary loading reflects through the transformer back to the primary. In a step-up transformer (most line level output transformers in solid state gear are either 1:1 or step-up) the load seen by the amplifier driving the primary will be some factor lower than the load seen by the secondary and that factor is dependent on the turns ratio. In a step-down transformer (most line level input transformers in solid state gear are either 1:1 or step-down) the load seen by the amplifier driving the primary will be some factor higher than the load seen by the secondary and that is dependent on the turns ratio.
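The reflection rule is just the square of the turns ratio. A small Python sketch (the ratios and loads here are hypothetical examples, not from any particular transformer):

```python
def primary_load(secondary_load_ohms, turns_ratio):
    """Load the driving amp sees, reflected through the transformer.
    turns_ratio = N_secondary / N_primary; impedance reflects by the
    SQUARE of the turns ratio, so a step-up (ratio > 1) makes the amp
    see a lower load and a step-down (ratio < 1) a higher one."""
    return secondary_load_ohms / turns_ratio**2

# step-up 1:2 output transformer terminated in 600 ohms:
print(primary_load(600, 2.0))    # 150.0 -> the amp must drive 150 ohms
# step-down 2:1 input transformer bridging a 10k load:
print(primary_load(10000, 0.5))  # 40000.0 -> the source sees an easy 40k
```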
The amplifiers I use to drive step-up output transformers have to be able to drive a 50 to 150 ohm load... and, yes, the vast majority of devices made today are ideally high impedance and non-loading, on paper, at mid-band, but I want the output of the transformer to present a low impedance source to the world in order to overcome cable and real world load effects, and also I want to be able to drive an 1176 or any number of old pieces of junk I refuse to dispose of.
A lot of the old gear that I like, by the way, can take and deliver +22dBu to +30dBu all day, so line amplifiers and attenuators are handy to have on the patch when you want to operate old gear in conjunction with the more modern, new and improved gear being presented to us today.