iopboat.blogg.se

The house that dirt built the heavy flac

Re: "Everything between sample points is lost" (spectacularly refined chap)

The key point about Nyquist's theorem is that it starts with the assumption that the signal you are interested in is strictly limited in bandwidth. If that initial assumption is true, for example that you only want/need 20Hz to 20kHz, then by sampling above twice the highest frequency (say at 40.0001kHz) you are NOT losing any information by sampling. What is important is that 20kHz is an arbitrary value (though a realistic limit for most younger humans; us old buggers are lucky to get 15kHz), and to avoid the very unpleasant business of aliasing you MUST strictly limit the signal to that value. Since a near brick-wall filter is highly impractical as an analogue filter, what is normally done is to sample higher than that: either a little more on sample rate (like 44.1kHz) with good analogue filters, or a much, much higher sample rate that pushes the band-limiting problem into the digital domain, where it is practical to implement good filters (with some time delay, which is not a problem for recording), and then to re-sample at a chosen lower rate.

Re: "Everything between sample points is lost"

Messrs Nyquist and Shannon might have a bit to say about this. I'm sure you feel such a big boy quoting those names. Pity that it doesn't automatically make you right or knowledgeable; indeed, it simply shows that you missed their central tenet. Encode a 100kHz signal at 44.1kHz and then regenerate the wave from the sampled data: that 100kHz signal is not present in the output. If it hasn't been lost, then where has it gone? From there you get to the claim that events which occur faster than the sampling frequency can't be captured, which remains true however those samples are captured. I see another poster is bringing in whether the samples are instantaneous readouts or integrations, which is an utter irrelevance; the principle holds regardless of the sampling methodology. That is the whole point of Shannon-Nyquist: the sampling frequency determines the maximum frequency that can be sampled.

The article states that events that happen faster than the sampling frequency can't be represented. So again, precisely what is wrong with that quoted text? Sampling includes filtering to get rid of the aliased copies; if it didn't, it would sound really, really horrible.

As other posters have pointed out, the filtering is never perfect, so sampling is never Nyquist-perfect either. Sampling at higher frequencies and higher bit depths should have fewer imperfections, although if the hardware is a bit pants anyway, high-rate sampling won't make a huge difference. (And if it's really pants it can make the sound worse.)

But the killer problem for digital is clock jitter. If the sampling clock isn't rock solid to nanosecond precision, you can forget Nyquist, because Nyquist assumes perfect sample timing. A lot of the smeary-splashy-nasty sound digital used to be famous for was caused by cheap, jittery clock sources. If your DAC can accept an external clock, hooking up a studio-grade clock source will do the sound many favours. It will also make the differences between FLAC and MP3 more obvious.

IME I can hear the difference very clearly, and the MP3 sound is seriously fucking annoying, even in a car. But my gf, who is a classical musician and can pick out the notes in chords by ear, is fine with MP3s. She hears music as pitch lines and very fine timing details, and most timbres as a placeholder. In fact everyone hears differently anyway, because everyone's ears are a slightly different shape, so we all have different acoustic filters stuck to the sides of our heads.

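The first commenter's claim, that sampling a strictly band-limited signal above twice its highest frequency loses nothing, can be checked numerically with Whittaker-Shannon (sinc) interpolation, which recovers the continuous signal exactly from its samples. A minimal sketch in Python/NumPy; the tone frequency, window size, and evaluation point are arbitrary choices for illustration:

```python
import numpy as np

fs = 48_000                      # sample rate in Hz
f = 1_000                        # band-limited test tone, well under fs/2
n = np.arange(-20_000, 20_000)   # a (truncated) window of sample indices
samples = np.sin(2 * np.pi * f * n / fs)

def sinc_reconstruct(t, samples, n, fs):
    """Whittaker-Shannon interpolation: x(t) = sum_n x[n] * sinc(fs*t - n)."""
    return np.sum(samples * np.sinc(fs * t - n))

# Evaluate the continuous-time signal *between* two sample points.
t = 100.5 / fs                   # halfway between samples 100 and 101
reconstructed = sinc_reconstruct(t, samples, n, fs)
true_value = np.sin(2 * np.pi * f * t)

# The value "between the samples" was never lost; it is fully
# determined by the samples (up to truncation error from the
# finite window).
print(abs(reconstructed - true_value) < 1e-3)
```

In practice nobody reconstructs with an infinite sinc sum; DAC reconstruction filters approximate it, which is exactly the "never Nyquist-perfect" filtering point made further down.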

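The 100kHz thought experiment is also easy to reproduce: sampled at 44.1kHz, a 100kHz tone is not merely lost, it folds back into the audio band as an alias (at |100000 - 2*44100| = 11800 Hz), which is why the band-limiting filter before the sampler is mandatory. A short sketch:

```python
import numpy as np

fs = 44_100          # sample rate in Hz
f_in = 100_000       # input tone, far above the 22.05 kHz Nyquist limit
n = np.arange(fs)    # one second of samples

# Sample the 100 kHz tone at 44.1 kHz with no anti-aliasing filter.
x = np.sin(2 * np.pi * f_in * n / fs)

# Find the dominant frequency in the sampled data via the FFT.
# With a 1-second window the bin width is exactly 1 Hz.
spectrum = np.abs(np.fft.rfft(x))
peak_hz = int(np.argmax(spectrum))

# The 100 kHz tone is gone from the output; its energy reappears
# as an alias at |100000 - 2*44100| = 11800 Hz.
print(peak_hz)  # 11800
```

So the answer to "where has it gone?" is: into the audio band as a spurious 11.8kHz tone, unless it was filtered out before sampling.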

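The clock-jitter point can be quantified too: timing error on the sampling instants turns into amplitude error, with a theoretical SNR of about -20*log10(2*pi*f*sigma_t) for a sine at frequency f and RMS jitter sigma_t. A sketch simulating 1 ns RMS of jitter on a 10kHz tone (the rates and jitter figure are illustrative assumptions, not measurements of any real DAC):

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 96_000                      # sample rate in Hz
f = 10_000                       # test tone in Hz
t_ideal = np.arange(4096) / fs   # perfectly regular sample times

# Perturb each sampling instant by Gaussian jitter, 1 ns RMS.
jitter = rng.normal(0.0, 1e-9, size=t_ideal.size)
clean = np.sin(2 * np.pi * f * t_ideal)
jittered = np.sin(2 * np.pi * f * (t_ideal + jitter))

# Signal-to-error ratio; theory predicts roughly
# -20*log10(2*pi*10e3*1e-9) ~ 84 dB for these numbers.
err = jittered - clean
snr_db = 10 * np.log10(np.mean(clean**2) / np.mean(err**2))
print(f"SNR with 1 ns RMS jitter: {snr_db:.1f} dB")
```

Note the error grows with signal frequency: the same jitter that gives ~84 dB at 10kHz would cost another 6 dB for every doubling of the tone frequency, which is why jitter matters more for high-resolution, wide-bandwidth playback.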



