by erichayes » Mon Aug 21, 2006 7:59 pm
Hi All,
I agree completely with Don about making sure a load is connected at turn-on. Taking this paranoia one step further, turn all gain and volume controls to zero if possible (difficult to do post-sound check).
(Global) negative feedback--"global" was tacked onto the term in the last few years by folks who wanted everyone to think they'd invented something new--seems to be one of the most misunderstood and, therefore, vilified tools available in amplifier design. A well designed, stable amp doesn't "need" NFB, but will nevertheless benefit from a judicious application of it. On the other hand, a poorly designed amp will require NFB just to keep it from breaking out in glorious ultrasonic song, leaving its owner wondering why he's always replacing speakers. My philosophy with amp design, regardless of application, is to make the amp as clean and stable as possible to begin with, then go back and apply muck and filth as needed.
The aspect of NFB that seems to be the most misunderstood is the notion that it decreases the maximum power output of an amp. This is simply not true. Maximum power output is determined by the power supply. What NFB does affect is gain structure: you might need an extra stage of amplification to get your 100 watt amp back up to 100 watts at full volume, but that can be done in the preamp section of the amp.
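If it helps to see the arithmetic, here's the classic feedback equation worked in a few lines of Python. The numbers are made up for illustration, not taken from any particular amp:
[code]
# Minimal sketch of how NFB trades away gain, not output power.
# All numbers are illustrative, not from any particular amp.

def closed_loop_gain(a_open: float, beta: float) -> float:
    """Classic feedback equation: A_cl = A / (1 + A*beta)."""
    return a_open / (1 + a_open * beta)

A = 100.0     # open-loop voltage gain of the power section
BETA = 0.05   # fraction of the output fed back to the input

a_cl = closed_loop_gain(A, BETA)
print(f"gain drops from {A:.0f} to {a_cl:.1f} with feedback")

# The supply rails still permit the same maximum output swing; you
# just need (A / A_cl) times more drive voltage to reach it, which is
# why an extra preamp stage restores full power at full volume.
print(f"extra drive needed: x{A / a_cl:.1f}")
[/code]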
One interesting thing I did for a guy with a Princeton Reverb several years ago was to add a "Magic" control (his term, not mine, and preceded by "f'ing"; the control was labelled, simply, "FM"). It consisted of nothing more than a 50K pot in series with the feedback resistor coming off the output transformer. Thus, the feedback could be reduced (and the gain and distortion correspondingly increased) from the factory value to virtually none with a twist of a knob. The only downside was the strain of adding one more control for the guy's already overtaxed and marginally functional mind to deal with.
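For anyone who wants to picture what that pot is doing to the feedback fraction, here's a rough Python sketch. The resistor values are stand-ins for illustration, not the actual Princeton Reverb parts:
[code]
# Rough sketch of the "FM" control: a pot in series with the NFB
# resistor varies the feedback fraction (beta). Component values
# are placeholders, not the actual Princeton Reverb parts.

R_FB = 2_700.0     # assumed stock feedback resistor, ohms
R_TAIL = 100.0     # assumed resistor from feedback node to ground
R_POT = 50_000.0   # the added 50K pot

def beta(setting: float) -> float:
    """Feedback fraction for a pot setting from 0.0 (stock) to 1.0."""
    return R_TAIL / (R_FB + setting * R_POT + R_TAIL)

for setting in (0.0, 0.25, 0.5, 1.0):
    print(f"pot at {setting:.0%}: beta = {beta(setting):.4f}")

# Fully clockwise, beta shrinks by more than an order of magnitude,
# so gain (and distortion) climb toward the no-feedback case.
[/code]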
Several bassists I've been consulting with lately, two of whom are classically trained, point out that the newer generation of bass amp/speaker systems is significantly wider in frequency response and lower in distortion than the older amps. This is primarily due to the style of bass playing that has evolved in the last few years (slap, funk, etc.) from the old stodgy thumbless keep-the-beat background mantra. With all the new harmonics being generated, bass players have moved into the forefront and want the audience to hear all the sounds they're creating now. There's a semi-myth that even though bass guitars are tuned the same as upright basses, the short string length of the guitar prevents the fundamental frequencies from being produced in any useful quantity, so all you're hearing is the second harmonic. When we plugged them into the 100 watt acoustic instrument amp prototype (running flat into a matched speaker system), all of their jaws dropped to the floor. The fundamentals have been there all along; the amps and speakers just weren't capable of reproducing them.
Damping factor is an indication of how high or low the amp's internal output impedance is relative to the speaker load. Back in the 1950s, when speaker construction techniques were primitive, low damping factors were actually desirable, as the speakers themselves were highly damped (cone excursions of less than ¼" were not uncommon). In order to get any meaningful bass response out of a system, the amp makers would deliberately mess with the damping by putting small resistances in series between the output and the speaker while simultaneously changing the cathode resistance on the output tubes. The net result was variable damping from "unity" (actually 1/∞) to a maximum of around 9 or 10. As speaker manufacturing techniques improved and evolved, the need for higher electrical damping increased, and amps were built to provide it. The highest damping factor I ever saw in a tube amp was 30, which is about 4 or 5 higher than the calculated upper limit needed for present speaker designs. Solid state amps with damping factors of over 100 (one bragged of a DF of >1600 back in the '70s) sound particularly awful on older speaker systems and cabs because they won't let the speaker do anything below 100~ or so.
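To put a number on it, damping factor is just the ratio of the speaker load to the amp's effective source impedance. A quick Python illustration, with made-up impedances:
[code]
# DF = Z_load / Z_source. All values below are illustrative.

def damping_factor(z_load: float, z_source: float) -> float:
    return z_load / z_source

# A typical tube amp: 8 ohm tap, roughly 1 ohm effective source
# impedance, for a DF of 8.
print(damping_factor(8.0, 1.0))

# The 1950s variable-damping trick: series resistance between the
# output and the speaker raises the effective source impedance and
# drags the DF down toward zero.
print(damping_factor(8.0, 1.0 + 4.0))  # 4 ohms in series -> DF 1.6
[/code]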
Determining the damping factor of an amp is ridiculously simple but, for some reason, has a shroud of secrecy surrounding it. Ironically, it involves the step that got this whole subthread started: removing the load with signal present on the input. Although any output voltage will work, I prefer 1.00 VAC as it is easy to compute with and low enough to prevent harm to the amp.
The first step is to hook up a non-inductive load that matches the output impedance of the amp. Then pump in a sine wave signal (I prefer 40~, but anything up to 1kc or so will work if it's a good amp; when in doubt, do both and compare notes) that gives 1.00 VAC across the load resistor. Disconnect the load resistor and measure the unloaded output of the amp. Reconnect the load or turn off the amp. Subtract the loaded voltage (1.00) from the unloaded (1.XX) and take the inverse of the difference; that inverse works because DF = V(loaded) / (V(unloaded) - V(loaded)), and the loaded voltage was set to exactly 1.00. Thus, if your unloaded voltage went up to 1.18 VAC, the difference would be 0.18; 1/0.18 ≈ 5.56.
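And here's that arithmetic as a few lines of Python, with my 1.18 VAC example plugged in, for anyone who wants to sanity-check their meter readings:
[code]
def damping_factor(v_loaded: float, v_unloaded: float) -> float:
    """DF = V_loaded / (V_unloaded - V_loaded). With v_loaded set to
    exactly 1.00 VAC this is just the inverse of the difference,
    which is why that level is so handy."""
    return v_loaded / (v_unloaded - v_loaded)

print(f"DF = {damping_factor(1.00, 1.18):.2f}")  # -> DF = 5.56
[/code]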
Eric in the Jefferson State