Re: I went to the symphony today
Posted: Wed Oct 17, 2012 2:30 pm
Hello all,
Just noticed this thread now, and since it touches my profession, I have a thing or two to say.
The discussion of digital versus analog, and which is best, is as old as digital audio itself, and is hardly ever held in a sensible way, because usually the people arguing about which is best don't even mention WHAT digital system they are comparing to WHAT analog one! And that's pretty obviously important. When we compare a 78rpm record (analog) to a CD (digital), the CD will obviously be better, and any discussion about it would be absurd. Likewise, if we compare a top quality studio tape recorder of the kind used in the 1980s (analog) to cellphone audio (digital), the result would be equally obvious, but opposite.
When we compare a low quality analog system to a low quality digital one, the sound of each will be poor, but in different ways. The analog system will have lots of noise (hiss, and possibly pops and ticks), with the music sounding reasonably natural through it, as long as it doesn't get loud, because the louder it gets, the more distortion there will be. And there will likely be some wow and flutter. The digital system instead will probably have no noise at all, but will have MORE distortion at low levels than at high ones, and will often also have artifacts, that is, alien nonharmonic sounds triggered by the real signal.
In such a comparison, the preference for digital or analog depends largely on whether a person is more sensitive to noise or to artifacts, to distortion at low or at high levels, and it also depends on what music is being used for the test. A plucked solo instrument like a guitar can tolerate a lot of distortion but little noise, while a complex sound like a choir is tolerant to noise but very sensitive to distortion!
One more difference is that the frequency response for a digital system is usually flat, with a well marked upper limit that's often high enough to be above the audible range, while analog systems usually do not have a very flat frequency response, often suffering at the low frequency end in addition to the high one.
But as the quality improves, both analog and digital systems get closer to the real sound, which necessarily brings them closer to each other too. At some point the differences are so small that most people can't hear them.
At present, analog audio systems are inexpensive up to a certain quality level, and from there up the price rises far faster than the quality. Digital systems are far more complex, but then it's cheap to add more quality. So, when high quality is required, it's far less expensive to implement it digitally than in analog ways.
As to audibility: When the total amount of non-original sound (noise, artifacts, and distortions of all sorts) is pretty large, it's easy to hear, and I can immediately tell whether the recording is analog or digital. Personally, the imperfections of an analog system bother me less than those of a digital one. But when the total non-original sound gets small enough to become inaudible, of course there is no way anymore to tell by ear which is the digital and which is the analog version.
The catch is just this: The human ear (at least for some humans!) is quite sensitive to sound defects. If just 0.01% of the total audio power is imperfections, that can sound REALLY nasty! That's about the quality level of the cheapest portable cassette tape recorders. Many people stop hearing the imperfections when they are down to about 0.0001%. To achieve this level with analog recording technology, a studio tape machine is required. Some people claim that the very best vinyl LPs can reach it, but that's only true when they are REALLY good, absolutely scratch-free, and not yet worn, a very difficult thing to find. CDs instead far exceed this level of quality, getting down to about 0.00000004%. That's good enough to fully satisfy almost all people, but some people claim that under certain conditions they can still hear the defects of CDs.
For the techies among you, please note that I have been talking in power percentages. Distortion in music equipment is usually stated as a voltage percentage. On that scale, a cassette tape might have roughly 1% distortion, a good analog system 0.1%, and a CD 0.002%.
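Since mixing up the two scales is a common source of confusion, here is a small sketch (plain Python, using the figures from above) of how they relate: the voltage ratio is the square root of the power ratio, and the dB level is 10 times the base-10 logarithm of the power ratio.

```python
import math

def power_pct_to_voltage_pct(p):
    """Distortion stated as a power percentage -> the usual voltage percentage."""
    return math.sqrt(p / 100.0) * 100.0

def power_pct_to_db(p):
    """Distortion stated as a power percentage -> level in dB relative to the signal."""
    return 10.0 * math.log10(p / 100.0)

# Cheap portable cassette recorder: 0.01% of the power is imperfections.
print(power_pct_to_voltage_pct(0.01))    # -> 1.0 (the familiar "1% distortion")
# The level where most people stop hearing the defects:
print(power_pct_to_voltage_pct(0.0001))  # -> 0.1
# CD-quality imperfection level:
print(power_pct_to_voltage_pct(4e-8))    # -> 0.002
print(power_pct_to_db(4e-8))             # roughly -94 dB, near the 16-bit limit
```

Note how the 0.01% power figure and the "1% distortion" spec are the same thing seen through two different scales.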
Of course there is a large difference between the intrinsic system limitations, and what is actually available on commercial CDs, LPs, or old fashioned tapes! Companies produce recordings down to a price, and most customers don't even care to pay attention to any defects, let alone complain. So the quality of commercial recordings usually is far below the technical possibilities of the medium.
In my collection of CDs there is a large proportion, over 90% (!), with terrible technical quality. There is gross distortion, lots of AC hum, traffic noise complete with honks, intense low frequency noise from condenser microphones, problems with frequency response, all sorts of trouble with poor phasing and placement of multiple microphones, and many others. Most of these defects are totally independent of whether the final product is digital or analog. But some are not, such as the wow and flutter of tapes and LPs, which is simply absent in digital systems, or the hum picked up by long microphone cables, which doesn't happen in all-digital studios where the signals are digitized right at the microphones and then transmitted digitally.
So, in the end, digital wins, but only if ALL stages of the production chain are properly implemented. Often they aren't. If anything is poorly done, it will destroy the quality of the recording, regardless of whether it's digital or analog. Just to give you an example: 15 years ago I recorded concerts with a PC and a soundcard of those days. It was OK, but not great, as that soundcard had a nasty high noise level, two orders of magnitude worse than a 16 bit 44.1kHz system should be. Then came the day when I upgraded to a Sound Blaster Audigy card, praised as the best card one could buy locally at a reasonable cost. Indeed it had much lower noise. But it sounded nasty! Loopback testing of the card produced consistently great results. I was puzzled, until I noticed that the testing was taking place at 48kHz, the card's highest sampling rate, while of course I was recording at 44.1kHz, because I had to burn my recordings on CDs. After finding a program to do loopback testing at 44.1kHz, surprise! At that sampling frequency, the Audigy is rock bottom dirty awful bad! In the higher audio range, the distortion can reach 20%! That's like the inner tracks of a badly pressed LP, played through a cheap record changer.
Apparently the Audigy always works at 48kHz, and then converts the output to the sampling rate wanted by the user. And this conversion has a bug!
The solution was simple: record at 48kHz, and convert to 44.1kHz in software. That way I got good quality.
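For those wondering why such a conversion is even possible without loss: 48kHz and 44.1kHz share a common divisor, so a resampler conceptually upsamples by one integer factor, filters, and keeps every Nth sample. A small sketch (plain Python; the actual filtering is left to whatever resampling program one uses) of finding that ratio:

```python
import math

def resample_ratio(rate_in, rate_out):
    """Reduce rate_out/rate_in to the smallest integer up/down factors."""
    g = math.gcd(rate_in, rate_out)
    return rate_out // g, rate_in // g  # (upsample by, then downsample by)

up, down = resample_ratio(48000, 44100)
print(up, down)  # -> 147 160: upsample by 147, filter, keep every 160th sample
print(48000 * up // down)  # -> 44100
```

A good software resampler does exactly this rational conversion with a proper low-pass filter, which is why converting 48kHz recordings to 44.1kHz on the PC sidestepped the card's buggy built-in conversion.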
This little story shows how easy it is to mess up a recording, and produce results far worse than the theoretical limits of a given system.
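For the curious, the core of such a loopback measurement is simple: correlate the captured signal with the test tone, and call everything that is left over distortion plus noise. A minimal sketch, assuming a single sine test tone and a whole number of cycles in the buffer:

```python
import math

def thd_n_percent(samples, freq, rate):
    """Estimate distortion+noise (as a voltage percentage) of a captured sine.

    Projects the signal onto the fundamental; the residual power is
    everything the device under test added. Assumes the buffer holds
    a whole number of cycles of the test tone.
    """
    n = len(samples)
    w = 2.0 * math.pi * freq / rate
    s = sum(x * math.sin(w * i) for i, x in enumerate(samples))
    c = sum(x * math.cos(w * i) for i, x in enumerate(samples))
    fund_power = 2.0 * (s * s + c * c) / (n * n)   # A^2/2 of the fundamental
    total_power = sum(x * x for x in samples) / n
    residual = max(total_power - fund_power, 0.0)  # clamp float round-off
    return 100.0 * math.sqrt(residual / fund_power)

rate, freq, n = 48000, 1000, 480  # exactly 10 cycles of a 1kHz tone
clean = [math.sin(2 * math.pi * freq * i / rate) for i in range(n)]
clipped = [max(-0.5, min(0.5, x)) for x in clean]  # hard clipping = distortion
print(thd_n_percent(clean, freq, rate))    # essentially 0
print(thd_n_percent(clipped, freq, rate))  # tens of percent
```

A real loopback tester does the same thing per frequency and sample rate, which is exactly how the Audigy's 44.1kHz problem finally showed up.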
The recording, playing and amplifying chain these days is pretty close to "perfect", as measured by the human ear. Unfortunately microphones are not, and speakers are a bad joke! So, when everything else is reasonably correct, the speakers are the limiting factor. And there are no digital speakers yet, regardless of the stupid advertising of some audio companies...
As for compression, not all MP3 encoders were born alike, and of course the user has a wide range of choice regarding the trade-offs between quality, file size and encoding speed. Too many people use very basic encoders at 128kb/s fixed bitrate, joint stereo, all settings at default, and then whine that MP3 is poor. Not so! MP3 has the ability to produce outputs ranging from VERY poor, but very small too, to good enough to be indistinguishable from the original. It just takes a good encoder, configuring it properly, and not striving for the absolutely smallest file sizes.
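As an illustration of "configuring it properly": with the widely used LAME encoder (my example; the choice of encoder is an assumption, not something the comparison above depends on), a quality-targeted VBR setting already sounds far better than the bare fixed-bitrate default many people judge MP3 by:

```shell
# Quality-targeted VBR instead of a fixed 128 kb/s bitrate.
# -V 2 aims at roughly 190 kb/s average and is transparent for most
# material; lower -V numbers mean higher quality and bigger files.
lame -V 2 input.wav output.mp3

# For comparison, the kind of setting MP3 usually gets blamed for:
lame -b 128 input.wav output_128cbr.mp3
```

The file names here are placeholders; the point is that the encoder, not the format, decides where on the quality/size trade-off the result lands.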
Well, for a forum aimed at the great subject of cutting off some things, this technical post is already long enough! I will stop it here.
Il Musico