In an environment I work in, there are multichannel audio recordings that are archived. The archival recordings all had a perfect 4kHz tone appearing, seemingly out of nowhere. This was happening on every channel, across every room, but only in one building. Nowhere else. Absolutely nothing of the sort showed up on live monitoring. The systems were all identical, and yet this behaviour was consistent across all of them at only one location.
The full system was reviewed: processing, recording, signal distribution, audio capture, and the rooms themselves. Maybe a test generator had accidentally been deployed? Nope. Some odd bug in an echo canceller? Also no. Something weird with interference from lighting or power? A slim chance, but also no. Complete mystery.
While looking for acoustic sources, there was an odd little blip on the RTA at 20kHz. This was traced back to a test tone emitted by the fire safety system (an ultrasonic signal used for continuous monitoring). It's inaudible to most people and gets filtered out before any voice-to-text processing, so no cause for concern. But 20kHz is nowhere near 4kHz, so the search continued.
The dissimilarity of 20kHz and 4kHz holds only until you consider what happens to a signal that isn't bandwidth-limited before resampling. The initial capture took place at a 48kHz sampling rate. It turned out the archival stage was downsampling to 24kHz without applying an anti-aliasing filter. Without filtering, any frequency content above the Nyquist frequency 'folds' back into the representable range. So in this case, a clean 24kHz-bandwidth signal carrying a little inaudible ultrasonic background noise was folded about the new 12kHz Nyquist frequency, turning the 20kHz tone into a very audible 4kHz one.
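A minimal sketch of the failure mode in Python/NumPy (the sample rates come from the story; record length and everything else are illustrative): a 20 kHz tone captured at 48 kHz, then decimated by dropping every other sample with no anti-aliasing filter.

```python
import numpy as np

fs = 48_000          # original capture rate (Hz)
f_tone = 20_000      # ultrasonic monitoring tone (Hz)
t = np.arange(fs) / fs                 # one second of audio
x = np.sin(2 * np.pi * f_tone * t)

# Naive decimation to 24 kHz: drop every other sample, no filtering.
y = x[::2]
fs_out = fs // 2

# Locate the dominant frequency in the decimated signal.
spectrum = np.abs(np.fft.rfft(y))
peak_hz = np.fft.rfftfreq(len(y), d=1 / fs_out)[np.argmax(spectrum)]
print(peak_hz)  # ~4000 Hz: 20 kHz folds about the new 12 kHz Nyquist to 4 kHz
```

Low-pass filtering before decimating (e.g. scipy.signal.decimate, which applies an anti-aliasing filter by default) makes the 4 kHz ghost disappear.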
It was essentially a capture the flag for signals nerds and a whole lot of fun to trace.
But... why?
Your sampling rate really only needs to be twice the bandwidth, not twice the highest frequency.
e.g. your bandwidth is 100 MHz centered at 1 GHz (it needs to actually be bandlimited to 100 MHz). You do not need to sample at 2.2 GHz. You sample at 200 MSPS (really, you should sample a little more than that, say 210 MSPS, so that the band of interest doesn't butt up against the Nyquist zone edges).
You can sample 100MHz of bandwidth centered at 1GHz, just as you describe, at 210MSPS. You'll get everything in the 950-1050MHz band.
Trouble is, without an antialiasing filter, you’ll get every other band that’s a multiple of that sampling rate. The Nyquist criterion works at every multiple of the sampling frequency.
Bandpass filter your analog input appropriately from 950-1050MHz and you’re golden.
This is the way nearly every commodity Wi-Fi chip downsamples 2.4/5GHz raw RF. Sigma-delta ADCs are cheap, fast, and space efficient for die area using this method.
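A numeric sketch of the zone arithmetic above (the tone frequency and record length are illustrative, not from the thread): a 1000 MHz tone ideally sampled at 210 MSPS shows up at 50 MHz in the first Nyquist zone, with no analog mixer involved.

```python
import numpy as np

fs = 210e6           # sample rate: 210 MSPS
f_rf = 1.00e9        # tone inside the 950-1050 MHz band of interest
n = np.arange(4096)

# Idealized sampling: evaluate the RF tone at the ADC's sample instants.
# (A real ADC also needs enough analog input bandwidth to see the carrier.)
x = np.cos(2 * np.pi * f_rf * n / fs)

spectrum = np.abs(np.fft.rfft(x * np.hanning(len(x))))
peak_hz = np.fft.rfftfreq(len(x), d=1 / fs)[np.argmax(spectrum)]
print(peak_hz / 1e6)   # ~50: |1000 MHz - 5 * 210 MHz| = 50 MHz
```

Because 950-1050 MHz lands in an even Nyquist zone at this rate, the digitized band also comes out frequency-reversed (1050 MHz maps to DC, 950 MHz to 100 MHz), which is trivial to flip back in the digital domain.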
Details here:
https://www.dsprelated.com/thread/7758/understanding-the-con...
https://s3.amazonaws.com/embeddedrelated/user/124841/fbmc_bo...
https://s3.amazonaws.com/embeddedrelated/user/124841/fbmc_ch...
If you are sampling an RF-modulated signal with a center frequency of 1GHz and 100MHz of baseband bandwidth, then yes, you do need to sample at 2.2GHz+. And some applications do exactly that.
If you're taking the RF signal, mixing it down to baseband, and filtering it to bandlimit, then you have a signal with maximum frequency component of 100MHz, and in that case, yes, your sampling rate can be 200MHz+
Another way of looking at it is that sampling inherently does the mixing down to baseband. Although it may not be exactly the baseband you want if the spectrum isn't cleanly symmetric about a multiple of the sample frequency.
"In signal processing, undersampling or bandpass sampling is a technique where one samples a bandpass-filtered signal at a sample rate below its Nyquist rate (twice the upper cutoff frequency), but is still able to reconstruct the signal.
When one undersamples a bandpass signal, the samples are indistinguishable from the samples of a low-frequency alias of the high-frequency signal. Such sampling is also known as bandpass sampling, harmonic sampling, IF sampling, and direct IF-to-digital conversion."
If you sample such that your folding frequencies are in an appropriate place, you can fold your desired region into the first Nyquist zone without needing to mix it down. This is especially desirable if you can avoid having to build an IQ mixer, because they're hard to keep balanced.
The worst case doing this is that your signal spectrum comes out reversed in frequency, but you can correct that easily in the digital domain.
For transient signals, though, you still need at least the full Nyquist rate.
Granted, we might say that from the perspective of the complex Fourier transform using signed frequencies, the frequencies of this signal actually range over [-4 Hz, -2 Hz] U [+2 Hz, +4 Hz]. But I'm not sure that's the interpretation you had in mind.
Let me know if I've screwed anything up here!
When the lowest frequency is zero, this is the familiar rule that the sample rate has to be at least twice the highest frequency in the signal. But more generally, it's more complicated.
Anyway, the details of that example don't matter; the Wikipedia graph and article make things clearer.
You can think of it as multiplying the original signal by a comb of delta functions (in the time domain), which folds everything (in the frequency domain) back into the first Nyquist zone of your ADC. Each delta function corresponds to one sample. If your original signal was truly band-limited to 100MHz, then what comes out is a replica of the band-limited signal.
One catch (which is actually fairly easy to meet in practice) is that the sampling aperture needs to be short, on the order of 1/f of the carrier frequency. This is what YakBizzaro is talking about (ADC analog bandwidth) in their sibling post.
In a way, you’re relying on aliasing / frequency folding to do it for you.
https://ars.els-cdn.com/content/image/3-s2.0-B97801241589310...
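The comb picture is the standard Dirac-comb identity (not spelled out in the comment itself): multiplying by an impulse train in time convolves the spectrum with a comb in frequency, replicating it at every multiple of the sample rate.

```latex
x_s(t) = x(t)\sum_{n=-\infty}^{\infty}\delta(t - nT)
\quad\Longleftrightarrow\quad
X_s(f) = \frac{1}{T}\sum_{k=-\infty}^{\infty} X\!\left(f - \frac{k}{T}\right),
\qquad f_s = \frac{1}{T}
```

If X(f) is zero outside a band narrower than f_s, those replicas don't overlap, and one of them is the baseband copy you keep.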
You can even improve information transfer in these scenarios by using a synchronizer, which allows you to phase shift your sampling to be at the ideal transition point in your information stream.
One of the more popular series is the Journal2Matlab blog, which translates academic journal papers into easy-to-read MATLAB.
When I took the course, it made no sense to me that you could sample at twice the frequency of the signal and reconstruct it. Consider a sine wave at 1 Hz. If you sample at 2 Hz, you might get readings of 1, -1, 1, -1, etc. (or, with unlucky phase, all zeros). If you graph that, it's a perfect triangle wave, not a sine wave! That's what I couldn't get past. I thought you'd need an infinite sampling rate to accurately capture the sine wave.
As I type this out, I'm realizing that a critical component of this that I wasn't taught (or didn't grasp) is the need for the signal to be bandlimited. Returning to my sine example from above, what bothered me was: if I don't sample more points, how do I know that it's only a sine wave, and nothing more? That only works if you assume there are no higher frequencies (or filter them out, though an ideal filter is impossible in practice). If there aren't higher frequencies, there can't be anything you "can't capture" by sampling at the Nyquist rate.
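That realization can be checked numerically. A hedged sketch (all numbers illustrative): a 1 Hz sine sampled at 2.5 Hz, a bit above twice its frequency, rebuilt between the samples with a truncated Whittaker-Shannon sinc interpolation.

```python
import numpy as np

f_sig, fs = 1.0, 2.5        # 1 Hz sine, sampled a bit above 2 Hz
T = 1 / fs
n = np.arange(-200, 200)    # long finite record standing in for "all time"
samples = np.sin(2 * np.pi * f_sig * n * T)

def reconstruct(t):
    # x(t) = sum_n x[n] * sinc((t - nT)/T); np.sinc(u) = sin(pi*u)/(pi*u)
    return float(np.sum(samples * np.sinc((t - n * T) / T)))

t_test = 0.123              # a point that falls between samples
x_hat = reconstruct(t_test)
print(x_hat, np.sin(2 * np.pi * f_sig * t_test))  # nearly identical values
```

Sampling at exactly 2 Hz is the degenerate edge case: the strict inequality f_s > 2B matters, which is why real systems leave some margin.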
Their suggested solutions were to 1) get a wide-angle lens to reduce detail beamed into the sensor 2) use a larger image sensor or 3) remove the object causing moire artifacts.
A wide-angle lens would change the effective bandwidth of the system, as would a larger sensor: all either would do is change the apparent size of the moire pattern (possibly so it's less annoying).
What you really want is something that acts as a spatial low-pass filter in front of the sensor: something like a very slightly frosted piece of glass that prevents any feature smaller than two sensor pixels from being resolved on the far side. In fact, you can buy exactly this: optical low-pass (anti-aliasing) filters are a common component mounted in front of digital camera sensors.
When a grid’s misaligned
with another behind
That’s a moiré…
When the spacing is tight
And the difference is slight
That’s a moiré

E.g. sampling a 1 Hz signal at 2 Hz still doesn't tell you if the signal was a 1 Hz sine or a 1 Hz sawtooth (depending on how lucky or unlucky you are).
So in your case: a 1 Hz sine doesn't contain any higher frequencies, and will be reconstructed perfectly. A 1 Hz sawtooth contains higher frequencies, and so will not be.
I think what you are really getting at is that a signal with a periodicity of, say, 1 Hz does not necessarily have a bandwidth of 1 Hz, so its Nyquist rate can be far higher than 2 Hz. Square waves and sawtooths are particularly obvious examples of this, because the sharp edges cannot be produced without (very many) high-frequency contributions.
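A quick numerical check of the sawtooth claim (the record length is illustrative): the amplitude spectrum of an ideal sawtooth has energy at every harmonic, decaying only as 1/n, which is why no finite sample rate captures it exactly.

```python
import numpy as np

N, f0 = 1024, 1.0
t = np.arange(N) / N                 # one full period of a 1 Hz sawtooth
saw = 2 * (f0 * t % 1.0) - 1         # ramp from -1 to +1

# Single-sided amplitude spectrum; the Fourier series of this sawtooth
# predicts an amplitude of 2/(pi*n) for the n-th harmonic.
mags = np.abs(np.fft.rfft(saw)) / (N / 2)
for n in (1, 2, 3, 10):
    print(n, round(mags[n], 4), round(2 / (np.pi * n), 4))
```

The measured and predicted amplitudes agree to several decimal places, and the 1/n tail never reaches zero: truncate it anywhere and the sharp edge rounds off.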
Now you can avoid this by choosing a different set of component functions and a different sense of "frequency", but that just pushes the problem around. Also, since you'd be doing something non-standard, you'd need to explain it, especially if what you are using doesn't form a proper basis.
Finally, of course, this is all in the ideal mathematical setting; in the real world, noise etc. also has to be taken into account.
It actually has frequency components that go out to infinity, so it's impossible to perfectly reconstruct a sawtooth without knowing beforehand that it's a sawtooth.
This is true for any signal with discontinuities (i.e. not "band-limited").
1) It is entirely possible to define a sense of "frequency" in which a sawtooth wave contains only a single frequency. However, you could also consider the wave to be an (infinite) sum of sinusoids at different frequencies. Both views are "correct", and which is more appropriate depends on the context.
2) Related to (1): natural (acoustic) sounds are almost always best considered as a sine series. While there are such sounds which are most easily described as a sawtooth, when you consider the physical/mechanical process by which they are formed, the sine series is a more obvious approach.
3) A digital 1Hz sinusoid can trivially contain no harmonics at all. However, the moment you attempt to convert this into an acoustic pressure wave, the nature of the physical world essentially guarantees that the acoustic pressure wave will have a series of harmonics going out far beyond the base frequency. Once you start actually moving things (like magnetic coils, speaker cones and air), it's more or less impossible to avoid generating harmonics. But since the original signal was genuinely a pure sine tone, it becomes a little tricky to decide what the correct way to describe this is.
Yes, it does. The Nyquist criterion gives exactly the (minimum) sampling frequency you need for perfect reconstruction of a bandlimited signal.
> E.g. sampling a 1 Hz signal at 2 Hz still doesn't tell you if the signal was a 1 Hz sine or a 1 Hz sawtooth (depending on how lucky or unlucky you are).
A 1 Hz sawtooth is not a bandlimited signal, so the Nyquist theorem does not apply.
Formally, the Shannon-Nyquist theorem states that if you sample a band limited signal at twice its bandwidth, an ideal reconstruction filter can be used to perfectly reconstruct the input signal. There's some wiggle room over ideal sampling/filtering, but the point is that it tells you exactly what the input was, provided it was band limited.
The misconception I think you're having is confusing bandwidth with the period of a signal: the bandwidth is not the reciprocal of the period.
Nyquist more or less says "If you know nothing about the signal, by sampling at X Hz you can determine what the signal looks like over a bandwidth of 0 Hz to X/2 Hz". If you have additional knowledge about the signal (e.g. band limited, periodic, or other) you can exceed those limits.
It can also be looked at from an information viewpoint. Nyquist says "if you sample a signal at a certain rate you will get a certain amount of new information about it". You might "spend" this information by saying something about the signal over the band DC-f/2, or you might choose to say something about the signal over a different band of frequencies. In the example above we chose to say something about a set of discrete harmonic frequencies over a very wide bandwidth, ignoring the frequencies in between the harmonics as the 1Hz constraint told us they will be zero.
The conception of the theorem is that if the signal being sampled is sufficiently integrable AND bandlimited AND the signal is uniformly sampled at at least the Nyquist rate over all time/space, THEN reconstruction of the bandlimited signal is exactly possible using the sinc interpolator. The proof is covered in "Shannon's original proof" in the Wikipedia article and in most books on signal analysis, such as Gaskill's Linear Systems book. Most EE students will do the proof as an intro assignment in the first month of a DSP class.
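For reference, the sinc-interpolator reconstruction the theorem guarantees is the standard Whittaker-Shannon formula (standard statement, not quoted from the comment):

```latex
x(t) = \sum_{n=-\infty}^{\infty} x(nT)\,
\operatorname{sinc}\!\left(\frac{t - nT}{T}\right),
\qquad \operatorname{sinc}(u) = \frac{\sin(\pi u)}{\pi u},
\quad T = \frac{1}{f_s} \le \frac{1}{2B}
```

Each sample contributes one shifted sinc; at every sample instant all the other sincs are zero, and between samples they interpolate the unique bandlimited signal through the points.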
OTOH, if you are not able to sample the function over all space or time AND the function happens to be periodic outside the interval you did sample, THEN reconstruction of the bandlimited periodic signal is possible using the Dirichlet kernel.
If you are not able to sample the function over all space or time (as above) AND the function is not periodic, you have small problems which occasionally become big problems if you are not careful. Most DSP books have a chapter about windowing discrete data and dealing with this conundrum. Basically, exact reconstruction is not guaranteed, and context-specific techniques need to be employed to ensure acceptable fidelity.
To accurately sample a 1hz sawtooth waveform, you'd have to filter/sample at a much higher frequency.