See, the collection of non-pathological functions is a vector space. We add elements by adding the functions pointwise, we multiply by a constant in the obvious way, and the other requirements can be checked. Functions form a vector space.
And vector spaces have bases. One basis for the vector space of functions is the collection of sin and cos functions. Thus we can see that finding the Fourier Transform is just finding how much of each basis vector we need to make the function.
And as we know, the amount of basis vector u needed in the representation of a vector v is v·u, the dot product (assuming the basis vectors are orthonormal).
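To make that concrete, here's a small numeric sketch (my own illustration; the sample count and the particular function are arbitrary choices): treat sampled sines and cosines as vectors, and recover each coefficient with a dot product.

```python
import numpy as np

# Sampled sines/cosines behave like (orthogonal) vectors; the amount of basis
# vector u in v is the projection v.u / u.u.
N = 1024
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
v = 3 * np.sin(x) + 0.5 * np.cos(2 * x)   # the "vector" we want to decompose

def amount_of(u, v):
    return np.dot(v, u) / np.dot(u, u)    # projection of v onto u

a1 = amount_of(np.sin(x), v)       # recovers the coefficient 3
a2 = amount_of(np.cos(2 * x), v)   # recovers the coefficient 0.5
```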
Thinking of it this way starts to make connections between all sorts of ideas.
Added in edit: I see the same sort of point made by dropdownmenu in http://news.ycombinator.com/item?id=4862228
When I started explaining it to some computer science students, it helped by giving a particular example of its usefulness:
Sound is composed of waves, so when you want to send some music to a friend it's all a bunch of values like [0, 1, 2, 1, 0, -1, -2, -1, 0, ...]. If you know they're going to look like waves (sinusoidal functions), why not just send your friend how much they look like sin or cos? The values back there were just 2·sin(x), so why not just send the value [2]?
You could save a lot of bandwidth. You just need to "correlate" sounds with a bunch of sin or cos functions everybody agrees on :)
Bonus: you can add the phase values, 2 sin(x + phase), to get the beats just right.
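A toy version of this in code (my sketch; the sample count and the handful of frequencies everybody "agrees on" are arbitrary choices):

```python
import numpy as np

# Instead of transmitting all N samples, "correlate" the sound with a few
# agreed-on sin/cos functions and transmit only those coefficients.
N = 64
x = 2 * np.pi * np.arange(N) / N
signal = 2 * np.sin(x)               # the [0, 1, 2, 1, 0, -1, ...]-style samples

# Dot products against the agreed-on basis, scaled so coefficients come out
# in the familiar a*sin(kx) + b*cos(kx) form.
coeffs = {k: (np.dot(signal, np.sin(k * x)) * 2 / N,
              np.dot(signal, np.cos(k * x)) * 2 / N) for k in range(1, 4)}

# The receiver rebuilds the sound from just those few numbers.
rebuilt = sum(a * np.sin(k * x) + b * np.cos(k * x) for k, (a, b) in coeffs.items())
```

Here the correlation finds that the signal is just "2 of sin(x)", so only that one number really needs to be sent.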
Fortunately, you don't need to do so; if the complex number z = A + iB has magnitude r and argument theta, then Acos(t) + Bsin(t) is the same as r*cos(t - theta). That is, combining cosines and sines of the same frequency already accounts for the phase shift.
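A quick numeric check of that identity (A and B here are arbitrary example values):

```python
import cmath
import math

# With z = A + iB, r = |z|, theta = arg(z), the claim is that
# A*cos(t) + B*sin(t) equals r*cos(t - theta) for every t.
A, B = 3.0, 4.0
z = complex(A, B)
r, theta = abs(z), cmath.phase(z)

for t in [0.0, 0.7, 1.9, 3.5]:
    lhs = A * math.cos(t) + B * math.sin(t)
    rhs = r * math.cos(t - theta)
    assert math.isclose(lhs, rhs)
```

It works because r·cos(theta) = A and r·sin(theta) = B, so expanding r·cos(t - theta) with the angle-difference formula gives back A·cos(t) + B·sin(t).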
You have indicated quite explicitly that you're being informal (by using non-technical terms like 'non-pathological functions'), but I think it's worth making the small observation here that sine and cosine functions do not form a basis (implicitly: "a Hamel basis") for, say, the space of continuous functions on R/Z in the usual sense of linear algebra, since not every function can be written as a finite linear combination of them.
Rather, they are a topological basis, in the sense that every function can be written as an infinite linear combination of them. Why is it worth making this point, which probably seemed too obvious to say? The reason is that, unlike finite sums, which are unambiguous algebraic constructs, infinite sums require topology to compute; and being a topological basis in, say, the L^2 topology is quite different to being a topological basis in, say, the topology of pointwise convergence. The study of the different kinds of summability of Fourier series is the subject of some very, very deep work.
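One concrete way to see why the choice of topology matters (my illustration, not the parent's): partial Fourier sums of a square wave converge in L^2, but the Gibbs overshoot near the jump never shrinks, so pointwise behavior is genuinely different.

```python
import numpy as np

# Square-wave partial Fourier sums: the L^2 error shrinks as terms are added,
# but the peak near the jump (the Gibbs overshoot, ~1.179 for a +/-1 square
# wave) does not go away, so L^2 convergence is not uniform convergence.
N = 20000
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
square = np.sign(np.sin(x))

def partial_sum(terms):
    # Fourier series of the square wave: (4/pi) * sum over odd k of sin(kx)/k
    return sum(4 / (np.pi * k) * np.sin(k * x) for k in range(1, 2 * terms, 2))

l2_err = {t: np.sqrt(np.mean((partial_sum(t) - square) ** 2)) for t in (10, 100)}
peak = {t: partial_sum(t).max() for t in (10, 100)}
```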
But you're right, there is interesting and deep material here.
The only thing that I still can't get my head around or visualize is how it is that we know that the set of all cos and sin functions is sufficient to span this set of "non-pathological" functions.
So assuming some vector v is reachable by a linear combination of vectors u1,u2,u3... we know that each component should be v.u1, v.u2 etc. But how do you know that v is in fact "reachable" by some combination of vectors in the basis u?
That is, suppose I describe some (infinite) set of functions. How do I determine the set of functions that it spans? Is there an intuitive way to picture why the set of sin and cos functions forms a suitable basis?
So start with a periodic wave and look at how much sin(x) is in it. Do this by integrating:
Integral from 0 to 2.pi of f(x)*sin(x) dx
That's a dot product of your function f(x) with the sine wave, and it tells you how much of the first frequency you need. Subtract off the result, and then go again with sin(2x). You find the residuals get smaller and smaller. Moreover, for something nice like a square wave or triangular wave, the coefficients you get form a predictable sequence.

Interesting note:
Integral from 0 to 2·pi of sin(k·x)*sin(m·x) dx

is 0 if k != m, and pi otherwise. So the basis elements have dot product 0 with each other, and hence are thought of as being at right angles. If you scale the dot product by 1/pi they also have "length" 1, since then the dot product of each with itself is 1. So they are an orthonormal basis.

So the question is: what functions can be reached by adding and subtracting multiples of sine (and cosine) functions of different frequencies? In Linear Algebra terms, what is the space of functions spanned by these basis functions?
That's harder, but it turns out to be "everything non-pathological".
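That recipe, run numerically on a square wave (my own sketch, with Riemann-sum integrals and an arbitrary discretization), produces the predictable sequence mentioned above: 4/(pi·k) for odd k and 0 for even k.

```python
import numpy as np

# Fourier sine coefficients of a square wave, computed as dot products:
# c_k = (1/pi) * integral from 0 to 2*pi of f(x)*sin(kx) dx.
# Expected: 4/(pi*k) for odd k, 0 for even k.
N = 100000
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
f = np.sign(np.sin(x))
dx = 2 * np.pi / N

coeffs = []
for k in range(1, 8):
    c = np.sum(f * np.sin(k * x)) * dx / np.pi   # Riemann-sum integral
    coeffs.append(c)
```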
http://en.wikipedia.org/wiki/Fourier_series#Hilbert_space_in...
http://en.wikipedia.org/wiki/Hilbert_space#Fourier_analysis
Edited for typos
There are infinite series expansions in terms of trig functions, which resolves the apparent finite-vs-infinite mismatch, so don't worry about that.

An intuitive way to look at it: on a 2-D graph there's no point that can't be hit by some combination of trig functions, so it doesn't particularly matter which point you pick; the collection can always hit that one point because it can hit all points.
Comp sci analogy would be something like x=x+1 starting at x=0 will hit all the positive integers eventually...
http://altdevblogaday.com/wp-content/uploads/2011/05/Derived...
Define normal.
I have some difficulty differentiating blues and greens sometimes but I found those shades to be quite distinct even on a low quality screen.
Perhaps your colour differentiation is worse than you think? It could be down to monitor settings though, for example.
There's an Ishihara test here - http://colorvisiontesting.com/ishihara.htm FWIW but I'd imagine you'd need to see an optician to do it correctly.
A particularly helpful tangent is to play with convolution and see the many different applications that is useful for (did you know that the shadow cast by an object is its cross section convolved with the cross section of the light source?). Once you really get convolution, and really get its relationship to the Fourier transform, you will be well along the path to enlightenment.
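Here's that relationship in its discrete form (my sketch; circular convolution, with an arbitrary signal length): convolving two signals is the same as multiplying their Fourier transforms pointwise.

```python
import numpy as np

# The convolution theorem: circular convolution in the time/space domain
# equals pointwise multiplication in the Fourier domain.
rng = np.random.default_rng(0)
a = rng.standard_normal(256)
b = rng.standard_normal(256)

# Direct circular convolution, O(N^2):
direct = np.array([np.sum(a * np.roll(b[::-1], n + 1)) for n in range(len(a))])

# Via the FFT, O(N log N):
via_fft = np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real
```

This is also why FFT-based filtering beats sliding a kernel across an image once the kernel gets large.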
However, I believe the continuous Fourier Transform works exactly like that.
Here are some other items about the Fourier transform:
http://news.ycombinator.com/item?id=2555562
For example, the Fourier transform is behind quantum uncertainty (Δp·Δx ≥ ħ/2). Think of it this way: the inverse Fourier transform of a frequency impulse (zero extent) is a sine wave of infinite duration. Truncate the infinite sine wave and its spectrum ceases being an impulse, broadening into the shape of the windowing function used to truncate the sine wave. That is, an attempt to constrain/define time leads to a broadening in frequency, and vice versa. The uncertainty principle naturally arises from using the Fourier Transform in an environment where "you can't have infinities".
This is true of any two variables which are related by a Fourier Transform. Yes, position and momentum are related by a Fourier transform (as are energy and time).
The thinking also works for the other extreme: consider how a time impulse (zero extent) relates to its Fourier transform, a flat spectrum of infinite extent on the frequency axis.
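The tradeoff is easy to check numerically (my sketch; the Gaussian pulse widths are arbitrary choices):

```python
import numpy as np

# The narrower a pulse is in time, the wider its spectrum. For Gaussian
# pulses the RMS spectral width scales as 1/sigma_t exactly.
def spectral_width(sigma_t, N=4096):
    t = np.arange(N) - N / 2
    pulse = np.exp(-t**2 / (2 * sigma_t**2))
    spectrum = np.abs(np.fft.fft(pulse))
    freqs = np.fft.fftfreq(N)                 # in cycles per sample
    power = spectrum**2 / np.sum(spectrum**2)
    return np.sqrt(np.sum(power * freqs**2))  # RMS width in frequency

narrow = spectral_width(sigma_t=5)    # tight in time
wide = spectral_width(sigma_t=50)     # loose in time
```

A pulse ten times tighter in time comes out ten times broader in frequency; the time-bandwidth product stays fixed, which is the uncertainty principle in signal-processing clothes.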
♫♫♫
Uncertainty is not so odd as long as we're aware
position and momentum are a Fourier transform pair
So anything that tightens our precision on the one
means certainty about the other value gets undone
♫♫♫
Still, I'm not sure I approve of your suggestion to use this physical property of the universe as a basis for intuition. The reasons why quantum mechanics "works" are far, far more difficult to understand, internalize, and accept than the concepts behind the DFT, which only really require an understanding of first-semester linear algebra. In other words, I expect that the set of people who understand QM at this level but do not understand the DFT is nearly empty.
If you actually meant to go the other way (use the properties of the FT/DFT to gain an understanding of QM) then I completely agree with everything you said, of course.
That is also a really well-phrased one-line point of commonality for talking to a telecom / RF / EE type person about communications bandwidth theory. If you just wedge in signal-to-noise ratio / bit error rate, and phrase what you call "time definition" and "frequency broadening" in the right way, then you pretty much have Shannon's famous paper.
The FT shows up all over the place in science and anywhere you find it, you can analogize it into a totally different field of study. One "design pattern" hundreds of "implementations".
A major barrier to understanding any new field is being able to strip away the jargon and recast the ideas into a familiar form. One can envisage a website where you tick the field(s) "A" that you want to learn about, tick the field(s) "B" that you already know about, and it recasts field(s) "A" in terms of the terminology of field(s) "B".
Make it have a "plug-in" structure, so supporting a new field is a matter of writing a mapping of field specific concepts to a set of "design patterns". Make it open source, so experts can jump in and contribute mappings for a wide variety of fields. Getting fancy, mappings could be written in terms of any two fields (so the expert does not have to learn a set of web-site specific design patterns), and the software could construct a graph to allow conversions between arbitrary fields.
The fancy e^i stuff is just for mathematical beauty and compactness and should be avoided when explaining the Fourier transform, imo. There's absolutely no need for it as far as the idea goes; just do the sins and coses separately.
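In that spirit, here's a DFT written with separate sine and cosine sums and no complex exponentials anywhere (plain Python, illustrative only):

```python
import math

# DFT of a real signal as separate cosine (a) and sine (b) correlations,
# one pair per frequency k, with no complex arithmetic.
def real_dft(samples):
    N = len(samples)
    coeffs = []
    for k in range(N // 2 + 1):
        a = sum(s * math.cos(2 * math.pi * k * n / N) for n, s in enumerate(samples))
        b = sum(s * math.sin(2 * math.pi * k * n / N) for n, s in enumerate(samples))
        coeffs.append((a, b))
    return coeffs
```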
For limited numbers of bands it will use less battery power and be faster.
Another way to see the FT is as the basis where the convolution operators are diagonal - this is used in image processing, where computing the FFT of a filter + entry-wise multiplication can be much faster than running the convolution at each pixel of the input image.
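Concretely (my sketch, size 8 is arbitrary): a circulant matrix, which is one convolution operator written out as a matrix, has the DFT vectors as eigenvectors and the FFT of its first column as eigenvalues.

```python
import numpy as np

# A circulant matrix C, with C[i, j] = c[(i - j) % N], is diagonal in the
# Fourier basis: eigenvectors are the DFT vectors, eigenvalues are fft(c).
rng = np.random.default_rng(1)
c = rng.standard_normal(8)                          # first column
C = np.array([np.roll(c, k) for k in range(8)]).T   # C[i, j] = c[(i - j) % 8]

eigs = np.fft.fft(c)                  # claimed eigenvalues
F = np.fft.ifft(np.eye(8), axis=0)    # column k is the k-th DFT vector (scaled)
```

Checking C @ F[:, k] against eigs[k] * F[:, k] for each k confirms the diagonalization.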
1. http://homepages.abdn.ac.uk/mth192/pages/html/maths-music.ht... (free pdf)
2. https://ccrma.stanford.edu/~jos/mdft/ (skip down to applications and the digital audio number systems for a preview)
Here is a really terse "just the essence" ruby implementation of the FFT:
http://home.gregfjohnson.com/fft
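The Ruby at that link isn't reproduced here, but a similarly terse recursive radix-2 sketch looks like this in Python (power-of-two lengths only):

```python
import cmath

# Recursive Cooley-Tukey FFT: split into even/odd halves, transform each,
# then combine with twiddle factors. Input length must be a power of two.
def fft(a):
    n = len(a)
    if n == 1:
        return a
    even, odd = fft(a[0::2]), fft(a[1::2])
    twiddle = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
    return [even[k] + twiddle[k] for k in range(n // 2)] + \
           [even[k] - twiddle[k] for k in range(n // 2)]
```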
http://www.youtube.com/watch?v=M0Sa8fLOajA
(Gilbert Strang - my first time watching him teach. Such a well-paced lecturer.)