* If you invest $1 at 100% interest for 1 year, you get $2 at the end
* Compounded 2 times in a year, you get 100/2 = 50% interest every 1/2 year, which amounts to $2.25
* Compounded 4 times in a year, you get 100/4 = 25% interest every 1/4 year, which amounts to $2.44
* Compounded n times in a year, you get 100/n percent interest every 1/n year, which amounts to (1+1/n)^n dollars
* So continuous compound interest is the limit as n approaches infinity, which amounts to $2.71828 at the end of the year
(This is a great problem to give to pre-calc students to see if they can figure out the calculation for themselves.)
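The convergence is easy to watch for yourself; a minimal Python sketch of the sequence in the bullets above:

```python
# Compounding $1 at 100% annual interest, n times a year:
# each period pays 100/n percent, so the year ends at (1 + 1/n)^n dollars.
for n in [1, 2, 4, 12, 365, 1_000_000]:
    print(n, (1 + 1 / n) ** n)   # climbs toward e = 2.71828...
```

Plugging in ever bigger n (monthly, daily, and beyond) shows the values settling down rather than growing without bound, which is the surprise that makes the limit worth naming.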
A few months later I talked to a super advanced math genius kid who had a signature that said e^(i*pi)+1=0 and I asked him if that was Euler's number. He was a super quiet skittish guy that rarely talked. His eyes lit up and he spent the next 2 hours teaching me about Taylor series and showed me how to prove it.
It remains the most fascinating math equation I have ever seen.
He had a lot of issues and dropped out because he couldn't pass a history class. I found that guy on Facebook 20 years later and thanked him. He didn't remember that but he was so happy that he made such a big impact on me.
e arises when you ask the question: is there a function that is its own derivative? And it turns out the answer is yes. It is this infinite series:
1 + x + x^2/2! + x^3/3! + ... + x^n/n! + ...
which you can easily verify is its own derivative simply by differentiating it term-by-term. When you evaluate this function at x=1, the result is e. In general, when you evaluate this function at any real value of x, the result is e^x.
But the Euler equation e^iπ = -1 has nothing to do with exponentiating e: the notation e^x is just a convention, defined to mean the series above. When you evaluate that series at x = iπ, the result is -1. In general, the value of the series at any x = iy is cos(y) + i*sin(y).
It's that simple.
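A quick numerical sketch of both claims, using partial sums of the series at x = 1 and at x = iπ in Python:

```python
import math

def exp_series(x, terms=30):
    # Partial sum of 1 + x + x^2/2! + x^3/3! + ... + x^n/n!
    total, term = 0, 1
    for k in range(terms):
        total += term
        term = term * x / (k + 1)   # next term: multiply by x, divide by (k+1)
    return total

print(exp_series(1))               # converges to e = 2.718281828...
print(exp_series(1j * math.pi))    # converges to -1 (plus rounding noise)
```

The same loop handles real and complex x because each term only needs multiplication and division, which complex numbers support directly.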
I've seen (1+1/n)^n before, but never seen an explanation of why I might ever want to use something of that form. I've used the e^ix notation extensively, but again I've never really cared, because to me it was just a compact representation of sin and cos together. Likewise, all the proofs in that article are still a bit "so at this point on this carefully chosen graph, the gradient is e" and I think "Who cares? You carefully chose the graph to prove a point, I'll never see it in the real world."
And then the example with compounding interest - immediately I can see the application. It's definitely a good way of explaining it, although maybe it'd be even more grounded if it had n=12 and n=365 as examples. When you notice that the actual values seem to be converging, then you can try plugging in ever bigger and bigger numbers. This way you can discover the value of e for yourself and that process of discovery leads to a better understanding than rote learning of an abstract thing you haven't mentally visualised yet. All the other explanations are useful later, and they allow you to see it in different situations, but having at least one "this is why it's tangibly useful" hook at the start is definitely a massive help in understanding something.
Can't the same thing be said about using fractions in the exponent? Exponentiation is actually just repeated multiplication (a^n = a*a*...*a, repeated n times), but you can't do that when n is a fraction or irrational any more than you can do it when it's imaginary.
We have to define what it means for an exponent to be non-integer: for fractions we might define a^(b/c) as the root of the equation x^c=a^b, and to allow irrationals I think you need some real analysis (it's been a while, but I think the usual way is to first define exp and log, and then say that a^b=exp(b*log(a)), which is kind of cheating because we have to define exp first!).
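A quick numeric cross-check of these definitions in Python; the values a = 2, b = 3, c = 2 are just an arbitrary example:

```python
import math

a, b, c = 2, 3, 2
direct = a ** (b / c)                          # 2^(3/2) as Python computes it
via_root = (a ** b) ** (1 / c)                 # positive solution of x^c = a^b
via_exp_log = math.exp((b / c) * math.log(a))  # the exp/log definition

print(direct, via_root, via_exp_log)           # all three agree
```

All three routes land on the same number, which is the point: the exp/log definition extends the root-based one to irrational exponents without changing any values already defined.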
There's a very intuitive way to "see" that e^ix=cos(x)+i*sin(x): all you have to do is to treat complex numbers like you would any other number, and "believe" the derivative rule for complex numbers (so (e^(ix))'=ie^(ix)). Then you can just graph f(x)=e^(ix) for real x by starting at x=0 (when clearly f(x)=1) and from there take small steps in the x axis and use the derivative to find the value of the next step with the usual formula f(x+dx)=f(x)+f'(x)*dx.
Doing that you realize the image of e^(ix) just traces a circle in the complex plane because every small step in the x direction makes e^(ix0) walk a small step perpendicular to the line going from 0 to e^(ix0), simply because multiplying by i means rotating 90 degrees.
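That stepping argument can be run directly; a minimal Python sketch using forward Euler with a small step dx (the slight drift off the unit circle is an artifact of the finite step):

```python
import math

# Step f(x+dx) = f(x) + f'(x)*dx with f'(x) = i*f(x), starting at f(0) = 1.
f, x, dx = 1 + 0j, 0.0, 1e-5
while x < math.pi:
    f += 1j * f * dx   # multiplying by i rotates the step 90 degrees
    x += dx

print(f)        # close to -1+0j: half a turn around the unit circle
print(abs(f))   # stays near 1 throughout
```

Stopping at x = π lands near -1, which is Euler's identity recovered from nothing but the derivative rule and small steps.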
- The solution of the ODE you just stated.
- Compound interest.
- The defining property of exponential functions is f(x+y) = f(x)f(y), with some normalization.
- Moving on the unit circle is given by an exponential function because rotation is a group, i.e. a^(i(x+y)) = a^(ix) * a^(iy). Now choose the base a such that you move with unit speed.
- ...
The nice thing is that all of these very different motivations lead to the same thing.
The "has nothing to do with exponentiating e" I would strongly disagree with. It has everything to do with exponentiating and is exactly the only way exponentiation can work. So afterwards you can pretend you didn't know that and define exponentiation by using e. Same for matrix exponentials, semigroups etc.
It has; that's the beauty of it. You can define as usual the function x -> e^x on the real line. Now, complex analysis tells us that if this function can be extended to a holomorphic function on the whole complex plane, then the extension is unique. And in fact this function does admit such an extension, so you can compute e^z for any complex number z, and in this way one gets e^iπ = -1.
Yes, but it's not a priori clear that the series actually even converges for all x, and the fact that power series are differentiable term-by-term within their radius of convergence also requires proof.
Or, to put it another way, you will blunder into this somewhere around month two of calculus 1, unavoidably.
Of course, that doesn't show how it will show up in all sorts of other places; "the one and only function that is its own derivative" strikes me as more likely to be something we encounter everywhere.
I'd argue it's a bit more: it's a natural way to extend the domain of exponentiation, as we do before this with naturals -> integers -> reals.
If not I think you might like it.
And a tangent from your username: I quite like how complex numbers and functions like CIS made it into Common Lisp.
e^(iπ) is (e^i)^π.
There is a concrete complex number e^i:
[1]> (exp 1)
2.7182817
[2]> (expt (exp 1) #c(0 1))
#C(0.54030234 0.84147096)
See, it's around 0.54 + 0.84i. It's on the unit circle. When you raise this number to pi, you get -1.
[3]> (expt * pi)
#C(-1.0 1.2776314E-7)
This means it's the pi-th root of -1; let's try it:
[4]> (expt -1 (/ pi))
#C(0.5403023058681397174L0 0.84147098480789650666L0)
and e must be the i-th root of this:
[5]> (expt * (/ #c(0 1)))
#C(2.7182818284590452354L0 -4.6847612413106414363L-20)
Yes; it is all literally exponentiation, which we can approximate with concrete floating-point numbers that know nothing about the formula we are exploring.
The other constants fundamental to science, like the gravitational constant or the speed of light, can only be measured, not discovered from nothing. We aren't even sure how constant they actually are; there might be extremely tiny variations in either time or space that our instruments just can't measure yet. In theory, other universes could exist where these constants are "set" a little bit differently; whether we could live in such universes is another matter entirely.
e, on the other hand, comes from pure mathematics. As long as fractions, addition and exponentiation work the same way in another hypothetical universe, this strange number e is going to have the same strange value.
Neither of those things is empirically measured, and in any formula where they show up you could theoretically absorb them into other constants -- and in fact the Einstein gravitational constant does exactly that: it's defined as (8*pi*G)/c^4, absorbing pi into Newton's gravitational constant. (For historical reasons, when using Planck units, G and c are set to 1, so it ends up just being 8*pi; _reduced_ Planck units set the whole constant to 1.) It's just frequently easier to keep e and pi separate for the purposes of actually working out the math.
One of my favorites is https://en.wikipedia.org/wiki/Feigenbaum_constants . Pops up in many situations, and as I understand it, it is not even well-understood why it pops up in as many places as it does. Intuitively one would expect that many of the places it pops up ought to have their own local constant of some sort, but instead this one keeps popping up. 7-year-old Numberphile video on it: https://www.youtube.com/watch?v=ETrYE4MdoLQ
Relationship to pi; Euler's formula:

e^(ix) = cos x + i*sin x

derivative is itself:

d/dx e^x = e^x

Ah, that explains why you didn't see this obvious one coming.
This is perhaps the most unnatural equation (well, identity) in maths. It doesn't fall out anywhere, and you would never write it down and solve for e; it's a special case of a more general result you would get first, and it's symbol soup precisely because the identity itself confers no understanding.
exp/log are natural because you almost can't help but discover them as they appear in so many different seemingly unrelated places.
That's certainly one way, but you can also define exp via its power series (which is easily proven to be convergent everywhere). Then, all the properties of exp, as well as Euler's formula, are actual theorems, not just definitions.
We want a^(i pi) + 1 = 0. Now,
a^(i pi) = e^(ln(a) i pi) = e^(i ln(a) pi) = cos( ln(a) pi) + i sin( ln(a) pi),
so we want cos( ln(a) pi) = -1, sin( ln(a) pi) = 0,
so ln(a) = 1, 3, 5, ..., so a = e, e^3, e^5, ...
Thus indeed e^(i pi) + 1 = 0.
But also ln(a) = -1, -3, -5, ... work, so for example for
a = 1/e = 0.3678794412... > 0, we have
a^(i pi) + 1 = 0;
and of course for a = e^-99 = 1.0112214926104485... × 10^-43 etc.
(Off-by-one errors, they're not just for programmers!)
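Both branches check out numerically; a quick sketch using Python's built-in complex power (which evaluates a^(iπ) as exp(iπ·ln a)):

```python
import math

# ln(a) can be any odd integer, positive or negative.
for a in [math.e, math.e ** 3, 1 / math.e, math.e ** -3]:
    z = a ** (1j * math.pi)   # complex power of a positive real base
    print(a, z)               # each result is -1 up to rounding error
```

Bases with even-integer logarithms (say a = e^2) land on +1 instead, which is a quick way to see why only the odd ones satisfy a^(iπ) + 1 = 0.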
1 - x <= e^{-x} for all x, so e^x <= 1/(1 - x) for x < 1 (and, replacing x with -x, 1 + x <= e^x).
Applying these with x/n in place of x:
(1 + x/n)^n <= e^x <= (1 - x/n)^{-n} for |x| < n
Letting n go to infinity gives e^x = \sum_{k=0}^\infty x^k/k! using Newton's binomial formula.
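The sandwich is easy to check numerically; a small Python sketch for a fixed x and growing n:

```python
import math

x = 0.5
for n in [1, 10, 100, 10_000]:
    lower = (1 + x / n) ** n       # underestimates e^x
    upper = (1 - x / n) ** (-n)    # overestimates e^x (valid while |x| < n)
    print(n, lower, math.exp(x), upper)
```

Both bounds close in on e^x as n grows, which is the squeeze that forces the limit.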
The number is what it is because its "increment of increment" is the same as its "increment" when you are using exponentiation.
Why do we measure angles in radians? Because then d/dx (sin x) = 1 at x = 0, and sin x ≈ x for small x.
In my opinion drilling down too much on conventions misses the point of math.
A more practical way to measure angles would be in rotations. 0° = 0, 360° = 1, 90° = 0.25, etc. It would remove a ton of 2π and 4π² factors from a lot of equations in physics.
For example, try to work out the Taylor series for sin(x) using degrees (or rotations). It's awful.
Fourier transform would have 4π² instead of 2π under the exponent, no big deal.
The Euler's formula gets a factor of 2π under the exponent though. Given its wide application, it adds plenty of noise, of course.
e pops up quite often when taking limits on a surprising number of varied phenomena. It is much more than a mere convention, unless you subscribe to the nihilistic, anti-epistemological notion that all of mathematics is merely convention. It seems to be the center of the conceptual space particularly around questions of relative and absolute scale.
It's true you can use any base for computing things but some are more natural than others in that specific parametrizations have natural interpretations especially when it comes to physics (timescales, information-theoretic optimality, etc.).
90 degrees is one fourth of a circle; it would be so much more intuitive if we'd use "1/4th of something" rather than "π/2 radians" to express this
What is "the point of math?"
So I'm curious to know about this other... non-conventional point.
Something of a trinity
(Wink from Deity)