In my album reviews, I often complain about audio compression on compact discs. Here's an explanation of what I am talking about and why I think it's bad.
First of all, let me point out that I am an audio engineer/producer as much as a music critic/reviewer. I have been recording music since high school in the mid-1960s, and I produce and engineer a lot of music -- almost an album's worth each week for the Homegrown Music series (probably well over 700 so far) -- and have also produced or engineered numerous CDs and LPs. In my capacity as a Public Radio producer, programmer and host, I get the opportunity both to evaluate and program music for radio audiences, and to produce it in the studio. As you can see from my album reviews and playlists, I get to audition a lot of records, and I think I have formed a pretty good idea of what makes a record sound good and why.
There are many instances where the quality of a performance will outweigh sonic deficiencies. But with the state of technology what it is, especially compared to what it was 30 years ago when I was starting out, I don't think there is any excuse for bad sound on a major-label big-budget CD, and even less justification for intentionally making something sound "lo-fi" just to be cool.
I grew up in the analogue days, when signal-to-noise ratio was something vigorously pursued -- with improved tape formulations, the retirement of noisy, unreliable tube equipment (which seems to have made a comeback in the professional audio world like some prehistoric monster from the lagoon), tape noise-reduction systems, and lots of "tweaking." People like me were always striving for quieter recordings with the maximum possible dynamic range -- the difference in loudness between the loudest peak and the "noise floor" of the equipment.
Back in the 1970s, digital audio held remarkable promise. Though the early digital equipment had its share of problems -- and there were vigorous arguments over the way the conversion between analogue and digital would color the sound -- there was little argument about the glorious 90 dB or more of dynamic range that digital could provide. A really good analogue tape machine with noise reduction could give you a figure in the high 60s. But once the music got onto an LP for distribution to consumers, only the very finest vinyl formulations and a virtually virgin pressing could give a signal-to-noise ratio approaching 60 dB. Most commercial pressings, especially those played a few times, averaged in the low 50s, and intermittent pops and scratches could actually be louder than the music itself (a negative signal-to-noise ratio). So even with the best quality manufacturing, the recording that music lovers bought in the store never sounded as good as the master tape.
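The 90-plus dB figure isn't arbitrary: for linear PCM it follows directly from the word length, at roughly 6 dB per bit. A quick back-of-the-envelope calculation (the helper function name is mine, used only for illustration):

```python
def dynamic_range_db(bits):
    """Approximate theoretical dynamic range of linear PCM:
    about 6.02 dB per bit, plus ~1.76 dB for a full-scale sine wave."""
    return 6.02 * bits + 1.76

# The CD's 16-bit words yield roughly 98 dB; longer words buy more.
for bits in (16, 20, 24):
    print(f"{bits}-bit PCM: ~{dynamic_range_db(bits):.0f} dB")
```

Real-world converters of the era fell somewhat short of the theoretical number, which is why "90 dB or more" was the working figure.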
In 1982-83, when compact discs were introduced, it was like an epiphany for us audio folks. For the first time, consumers could purchase a recording in a medium whose dynamic range exceeded that of $20,000 professional tape machines. Now I know that there are vinyl-philes who still swear that LPs sound better than CDs. But right now I'm talking about signal-to-noise ratio and dynamic range. Putting aside the arguments about the analogue-to-digital conversion process, I don't think anyone can make a convincing case that an LP (or a cassette, for that matter) has a dynamic range that comes within 20 dB of that available on a CD.
With the advent of the CD, record buyers could finally experience the full dynamic range of the music. And for the first few years CDs did provide appreciably better dynamic range than LPs or cassettes. But since then something has gone seriously wrong.
First, let's backtrack a bit. Vinyl LPs have a number of physical limitations, owing to the actual mechanics of making records. An LP is literally cut from a master disk, usually made of lacquer, or sometimes of metal. Cutting an LP is a constant compromise between level (volume), playing time, trackability and residual surface noise. Too much level and grooves will overlap and cause a skip. The more playing time per side, the lower the maximum available level. A really good mastering engineer, whose job it actually was to cut the master disk, would develop a reputation for the ability to take a master tape and translate it into the best compromise the physics of an LP would permit. Sometimes that would mean using a "compressor" to reduce the dynamic range, bringing down the loudest peaks and raising the volume of softer passages. Mastering engineers also would prepare the master tape for cassette duplication, usually adjusting the level and equalization of the master, and often also adding some compression to make up for a cassette's limited dynamic range.
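The compressor described above can be sketched as a simple transfer curve: signals above a threshold are turned down by a ratio, so loud peaks shrink while quiet passages pass untouched. The threshold and ratio values here are illustrative, not taken from any actual mastering session:

```python
def compress_sample(level_db, threshold_db=-20.0, ratio=4.0):
    """Static compressor curve: levels above the threshold are reduced
    so that every `ratio` dB of input yields only 1 dB more output."""
    if level_db <= threshold_db:
        return level_db          # below threshold: untouched
    return threshold_db + (level_db - threshold_db) / ratio

# A loud peak at -2 dB is pulled down; a quiet passage is left alone.
print(compress_sample(-2.0))   # -> -15.5
print(compress_sample(-40.0))  # -> -40.0
```

After gain reduction, the engineer turns the whole signal back up ("makeup gain"), which is what raises the softer passages and narrows the overall dynamic range.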
With the pop music business having become very competitive, a myth arose among producers and record companies that the way to get one's record to stand out (in the competition for airplay) was to make it as loud as possible. Since the loudness of an LP or single is subject to the physical limitations of the medium, the way to make a record, especially a 45 rpm single, louder was to compress it heavily, much as commercial radio stations do to sound loud on the air. Some rock may not sound bad heavily compressed -- the Beatles and their producer George Martin, for example, were able to use compression artistically.
Now, with the CD, we have more than 90 dB of dynamic range to utilize, and no surface noise or risk of groove skip to worry about. So what has become of all that wonderful dynamic range? The loudness wars have come back. While most classical CDs still make use of the CD's dynamic range potential, once again the fallacious belief that "louder is better" has permeated the record industry.
One would think that with the ability of the CD to deliver an accurate representation of the master tape, mastering engineers would become an endangered species. However, the skills of mastering engineers can be invaluable in taking a master that might consist of different songs recorded in different studios and putting them together into a sonically consistent continuum. Also, experienced mastering engineers can "tweak" master tapes to improve the sound so it plays well in a variety of listening situations.
Where the current pressure is coming from is unclear, but several prominent mastering engineers have complained that they are being pushed to make the CDs they work on as loud as possible. The digital audio medium also has its maximum upper limit in level, in this case all digital "ones." So making the music sound louder more of the time means adding compression, just like in the bad old days of 45s.
My CD player has a digital level display, and I am also able to take the digital output of a CD and run it into a computer editing system that allows statistical study of audio levels. I am constantly appalled at how many CDs spend most of their time in the top 3-4 dB of the 90 dB available, with absolute digital maximum level being reached very frequently -- sometimes on every beat. Sophisticated digital compressors alleviate all the horrible distortion that would normally happen from hitting the digital "brick wall," but the nuances and "airy" quality of the recording are murdered.
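The kind of statistical level study described above boils down to asking what fraction of the samples sit within a few dB of digital full scale. A minimal sketch -- the function name, the 4 dB window, and the sample values are all hypothetical, chosen only to illustrate the measurement:

```python
def fraction_near_full_scale(samples, window_db=4.0, full_scale=32767):
    """Given signed 16-bit PCM sample values, return the fraction whose
    level is within `window_db` of digital full scale (0 dBFS)."""
    cutoff = full_scale * 10 ** (-window_db / 20)  # -4 dBFS as a linear value
    hot = sum(1 for s in samples if abs(s) >= cutoff)
    return hot / len(samples)

# A heavily limited track rides the digital ceiling most of the time:
squashed = [30000, -31500, 32767, 5000, -29000, 32000, -8000, -32767]
print(f"{fraction_near_full_scale(squashed):.0%} of samples in the top 4 dB")
# -> 75% of samples in the top 4 dB
```

On an uncompressed recording with real dynamics, that figure would be a few percent at most; on many pop CDs it dominates the running time.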
In the audio business, there is something of a chasm between broadcast audio engineers and recording engineers. Folks from one camp don't seem to know a lot about the practices and mindset of the other. I guess I'm lucky to work on both sides of the fence -- making music recordings for broadcast and then hearing just how they sound on the air. Every broadcast station already uses compression on the air. There is a legal limit, regulated and monitored by the FCC, to the loudness of sound on the air. So to keep the signal loud enough not to be lost in fading and static, compression -- which varies by station and format -- is inevitably used.
The fallacy that seems to have become pervasive among many people in the pop music recording field, especially among record companies, is that if a CD is pushing the absolute digital max it will somehow be louder or better on the air and presumably win more airplay, and thus sell more copies to the public. This is not true at all. Compressing a CD will contribute to on-air loudness almost unnoticeably. Radio people have the brains to turn up a CD that's recorded at a normal level, and broadcast stations' existing compressors will even everything out anyway. The only thing that is accomplished is messing up the dynamic range for those who pay their good money for CDs, "squashing" the life out of any acoustic instruments in the mix, and increasing listener fatigue.
Lately, this has been made worse by the increasing availability of "desktop audio," which puts powerful compression tools in the realm of the home studio, by using a computer to perform the mastering function. Increasing numbers of CDs are being released that have come from home and "project" studios, with generally less-experienced people doing the mixing and mastering in these settings. So some serious damage is being done by people impressed by how much louder they can make their recording sound by crushing the dynamic range with relatively inexpensive software.
Further, there is the phenomenon of "cascaded compression." When an already-compressed signal (e.g. a CD) is itself compressed (e.g. when played on a radio station), the compressors can actually "fight" each other, one bringing down the signal, followed by another one with different characteristics that might want to bring it back up at a slightly different rate. The result can border on distortion, and gives an especially annoying "pumping" sound that ruins what is left of the dynamics of the music and can leave the artist's and producer's sonic intent in shambles. And this is exactly the situation when a compressed CD is run on a radio station with its own compression.
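Even ignoring the time-varying "pumping," the static effect of two stages in series is easy to see: each stage re-squashes the other's output. The thresholds and ratios below are invented for illustration, not real mastering or broadcast-processor settings:

```python
def compress(level_db, threshold_db, ratio):
    """One stage of static gain reduction above a threshold."""
    if level_db <= threshold_db:
        return level_db
    return threshold_db + (level_db - threshold_db) / ratio

def cascaded(level_db):
    """A hypothetical mastering limiter followed by a hypothetical
    broadcast compressor, each with different settings."""
    stage1 = compress(level_db, threshold_db=-10.0, ratio=8.0)  # CD mastering
    return compress(stage1, threshold_db=-18.0, ratio=3.0)      # radio chain

# 20 dB of musical dynamics collapses to about 7 dB after both stages:
for level in (-22.0, -12.0, -2.0):
    print(f"in {level:6.1f} dB -> out {cascaded(level):6.1f} dB")
```

In a real chain each stage also has its own attack and release times, which is where the audible "fighting" and pumping come from; the static curves alone already show how little dynamic range survives.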
Twenty-five years of recording music for broadcast has led me to what seems like a heretical opinion these days: relatively uncompressed music recordings sound better on the air, and no less loud.
The CDs I mix try to preserve as much dynamic range as their genre calls for. And experience has shown that they will stand up to anything else, in terms of loudness on the air.
So in my own small way, I'll add my voice to those in the professional audio business who are starting to complain about this sonic cheapening of music. With 20-bit bit-mapping technologies and, ultimately, the 24-bit potential of the DVD medium, the future dynamic-range potential of the CD is very bright. Why, then, is the record business throwing away 95% of the potential of even today's 16-bit technology in the loudness fallacy?
So in the hope someone takes notice, I'll continue to complain whenever good music on CD is degraded by excessive compression.
(c) Copyright 1997, 1999 George D. Graham. All rights reserved.
This article may not be copied to another Web site without written permission.