e arises when you ask the question: is there a function that is its own derivative? And it turns out the answer is yes. It is this infinite series:
1 + x + x^2/2 + x^3/6 + ... + x^n/n! + ...
which you can easily verify is its own derivative simply by differentiating it term-by-term. When you evaluate this function at x=1, the result is e. In general, when you evaluate this function at any real value of x, the result is e^x.
But the Euler equation e^iπ = -1 has nothing to do with exponentiating e; the e^x notation is just a convention for the series above. When you evaluate that series at x=iπ, the result is -1. In general, the value of the series at any x=iy is cos(y) + i*sin(y).
It's that simple.
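Both claims are easy to check numerically. Here is a small Python sketch; `series` truncates the infinite sum at 50 terms, which is plenty for these inputs:

```python
from math import factorial, pi

def series(x, terms=50):
    """Partial sum of 1 + x + x^2/2! + x^3/3! + ...; works for complex x too."""
    return sum(x**n / factorial(n) for n in range(terms))

print(series(1))         # ~2.71828..., i.e. e
print(series(1j * pi))   # ~-1+0j, i.e. e^ipi = -1

# "its own derivative": a finite-difference slope matches the function's value
h = 1e-7
print((series(1 + h) - series(1)) / h)   # ~series(1)
```

The last two lines are the "is its own derivative" property checked by brute force rather than by term-by-term differentiation.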
I've seen (1+1/n)^n before, but never seen an explanation of why I might ever want to use something of that form. I've used the e^ix notation extensively, but again I've never really cared, because to me it was just a compact representation of sin and cos together. Likewise, all the proofs in that article are still a bit "so at this point on this carefully chosen graph, the gradient is e" and I think "Who cares? You carefully chose the graph to prove a point, I'll never see it in the real world."
And then the example with compounding interest - immediately I can see the application. It's definitely a good way of explaining it, although maybe it'd be even more grounded if it had n=12 and n=365 as examples. When you notice that the actual values seem to be converging, then you can try plugging in ever bigger and bigger numbers. This way you can discover the value of e for yourself and that process of discovery leads to a better understanding than rote learning of an abstract thing you haven't mentally visualised yet. All the other explanations are useful later, and they allow you to see it in different situations, but having at least one "this is why it's tangibly useful" hook at the start is definitely a massive help in understanding something.
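The n=12 and n=365 versions are a one-liner to try yourself (a Python sketch of the compound-interest example: $1 at 100% annual interest, compounded n times a year):

```python
# (1 + 1/n)^n for increasingly frequent compounding
for n in [1, 12, 365, 10_000, 1_000_000]:
    print(f"n = {n:>9}: {(1 + 1/n) ** n:.6f}")
```

Watching the printed values settle down is exactly the discovery process described above: the limit they converge to is e.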
https://betterexplained.com/articles/an-intuitive-guide-to-e...
I don't think that's a very good explanation, because all the heavy lifting is being done by the phrases "base rate" and "continually growing", neither of which is well defined. "Continually growing" could reasonably be interpreted to mean "monotonically increasing", in which case f(x)=x qualifies and the whole explanation falls apart.
I think the idea of finding a function that is its own derivative, discovering that there is a polynomial series that meets that requirement, and evaluating that series at x=1 is a much more natural explanation.
Can't the same thing be said about using fractions in the exponent? Exponentiation is actually just repeated multiplication (a^n = a*a*...*a, repeated n times), but you can't do that when n is a fraction or irrational any more than you can do it when it's imaginary.
We have to define what it means for an exponent to be non-integer: for fractions we might define a^(b/c) as the root of the equation x^c=a^b, and to allow irrationals I think you need some real analysis (it's been a while, but I think the usual way is to first define exp and log, and then say that a^b=exp(b*log(a)), which is kind of cheating because we have to define exp first!).
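That root-based definition for fractions can be made concrete with a bisection search (a sketch; `frac_pow` is a hypothetical name, and it assumes a > 0 and positive integers b, c):

```python
def frac_pow(a, b, c, tol=1e-12):
    """a^(b/c) for a > 0 and positive integers b, c, defined as the
    positive root x of x^c = a^b, found by bisection."""
    target = a ** b                    # integer exponent: plain repeated multiplication
    lo, hi = 0.0, max(1.0, float(target))
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mid ** c < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(frac_pow(8, 2, 3))   # cube root of 8^2 = 64, i.e. 4.0
print(frac_pow(2, 1, 2))   # square root of 2
```

Note that only integer exponents ever appear inside the search, which is the point of the definition.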
There's a very intuitive way to "see" that e^ix=cos(x)+i*sin(x): all you have to do is to treat complex numbers like you would any other number, and "believe" the derivative rule for complex numbers (so (e^(ix))'=ie^(ix)). Then you can just graph f(x)=e^(ix) for real x by starting at x=0 (when clearly f(x)=1) and from there take small steps in the x axis and use the derivative to find the value of the next step with the usual formula f(x+dx)=f(x)+f'(x)*dx.
Doing that you realize the image of e^(ix) just traces a circle in the complex plane because every small step in the x direction makes e^(ix0) walk a small step perpendicular to the line going from 0 to e^(ix0), simply because multiplying by i means rotating 90 degrees.
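That stepping procedure is easy to carry out numerically (a Python sketch; the step size dx = 1e-5 is an arbitrary choice, and no trig or exp functions appear anywhere):

```python
from math import pi

# f'(x) = i*f(x), f(0) = 1; step with f(x+dx) ~ f(x) + f'(x)*dx
f = 1 + 0j
dx = 1e-5
for _ in range(int(pi / dx)):   # walk x from 0 to (roughly) pi
    f += 1j * f * dx            # each step is perpendicular to the radius

print(f)   # close to -1: after x = pi we are halfway around the unit circle
```

Euler stepping drifts slightly off the circle (the magnitude grows a little each step), but with a small enough dx you land essentially on -1, which is e^iπ discovered by walking.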
a^b, for positive a and irrational b, can also be defined as lim (x -> b, x ∈ Q) a^x, which is possible because Q is dense in R. This is a pretty natural way of extending a function to the reals.
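Concretely, here is a sketch of that limit for 2^sqrt(2), using the continued-fraction convergents of sqrt(2); each exponent p/q is rational, so in principle only an integer power and an integer root are needed (the q-th root is taken with a float power here for brevity):

```python
# Continued-fraction convergents p/q of sqrt(2) = 1.41421356...
convergents = [(1, 1), (3, 2), (7, 5), (17, 12), (41, 29), (99, 70), (239, 169), (577, 408)]
for p, q in convergents:
    # 2^(p/q): an integer power followed by a q-th root
    print(f"2^({p}/{q}) = {(2 ** p) ** (1 / q):.6f}")
```

The printed values converge toward 2^sqrt(2) ≈ 2.665144.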
The way we extend exponentiation to complex exponents is IMHO much less straightforward.
> The way we extend exponentiation to complex exponents is IMHO much less straightforward.
I think it depends on how much you're used to dealing with complex numbers. In college, I was always taught to prove the Euler formula by substituting ix for x in that series, and then noting that the alternating signs and the presence/absence of i in the terms let you separate it into the two series for cosine and sine. That always felt awkward, like there was no way anyone could just come up with that naturally.
Many years later I found that construction with graphing e^(ix) by taking small steps using f(x+dx)=f(x)+f'(x)*dx, and everything clicked: how exponentials work in the real axis is pretty different from the imaginary axis, but both are completely intuitive and unavoidable once you understand that.
Even later I "discovered" the connection with group theory[1]; that one still blows my mind.
[1] This is a really nice explanation: https://www.youtube.com/watch?v=mvmuCPvRoWQ
That depends on your point of view. You can also view exponentiation as "really" being e's infinite series, and it so happens that that matches what you get in the case of repeated multiplication. The advantage of that is now you can start exponentiating a lot more than just numbers. Here's 3blue1brown on raising e to the power of a matrix: https://www.youtube.com/watch?v=O85OWBJ2ayo
In general there is fruit in viewing infinite series as the fundamental building block of a lot of math and non-infinite series as the special case. I won't claim which is "real" or "correct", though, just point out that there is value in viewing "repeated multiplication" as the special case rather than the "real" thing. Of course you can always view exponentiation as the generalization too.
Yes, it can. In fact, it is really useful to think of the "usual" definition of exponentiation as repeated multiplication as nothing more than a special case of a much more general concept, which is evaluating a function whose defining property is that it is its own derivative.
Sure you can. You know what 2^n is, and you want 2^(1/3) * 2^(1/3) * 2^(1/3) = 2^1 = 2. That uniquely defines the exponential function on the rationals. For the real numbers you need some amount of continuity or measurability, but then it is also uniquely determined.
> but I think the usual way is to first define exp and log, and then say that a^b=exp(b*log(a)), which is kind of cheating because we have to define exp first!).
No, you don't just "say" that. You prove it. Big difference.
Whether a^b=exp(b*log(a)) is a definition or a proof really depends on how exactly you define certain terms (e.g. exp). What's certainly a theorem that requires a proof is that the definition of a^b (for irrational b) via limits of rational exponents and the one via exp are equivalent.
- The solution of the ODE you just stated.
- Compound interest.
- The defining property of exponential functions is f(x+y)=f(x)f(y), with some normalization.
- Moving on the unit circle is given by an exponential function because rotation is a group, i.e. a^(i(x+y)). Now choose the base a such that you move with unit speed.
- ...
The nice thing is that all of these very different motivations lead to the same thing.
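In fact, the first two motivations collapse into the same computation: solving y' = y with y(0) = 1 by Euler steps of size 1/n is literally compounding interest n times (a small sketch):

```python
# Solve y' = y, y(0) = 1 up to t = 1 with n Euler steps of size 1/n;
# each step multiplies y by (1 + 1/n), so y(1) is exactly the compound-interest expression.
n = 1_000_000
y = 1.0
for _ in range(n):
    y += y / n              # Euler step: y(t + 1/n) ~ y(t) + y'(t)/n

print(y)                    # ~e
print((1 + 1 / n) ** n)     # the compound-interest limit: essentially the same value
```

Each Euler step is y ← y·(1 + 1/n), so after n steps you have (1 + 1/n)^n, up to floating-point rounding.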
The "has nothing to do with exponentiating e" I would strongly disagree with. It has everything to do with exponentiating and is exactly the only way exponentiation can work. So afterwards you can pretend you didn't know that and define exponentiation by using e. Same for matrix exponentials, semigroups etc.
That was a bad way of phrasing it. I should have said something more along the lines of "... has nothing to do with multiplying e by itself iπ times. Multiplying something by itself n times is actually a special case of the more general concept of finding a function that is its own derivative."
It has, that's the beauty of it. You can define as usual the function x -> e^x on the real line. Now, complex analysis tells us that if this function can be extended to a holomorphic function on the whole complex plane, then the extension is unique. And, in fact, this function does admit such an extension, so you can compute e^z for any complex number z, and in this way one gets e^iπ = -1.
Yes, but it's not a priori clear that the series actually even converges for all x, and the fact that power series are differentiable term-by-term within their radius of convergence also requires proof.
Or, to put it another way, you will blunder into this somewhere around month two of calculus 1, unavoidably.
Of course, that doesn't explain why it shows up in all sorts of other places; "the one and only function that is its own derivative" strikes me as more likely to be something we encounter everywhere.
That's true, but it's begging the question because you can't define ln without already knowing about e.
You have to go back to first principles:
d(b^x) := lim(∂->0) (b^(x+∂) - b^x)/∂ = lim(∂->0) ((b^x)(b^∂) - b^x)/∂ = (b^x) * lim(∂->0) (b^∂ - 1)/∂
So d(b^x) is b^x itself multiplied by lim(∂->0) (b^∂ - 1)/∂. But now what? How do you evaluate that limit? How do you show that e is the magic value of b that makes that limit turn out to be 1? And in particular, how do you show that to someone whose only background knowledge is how to differentiate polynomials?
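A brute-force numerical probe of that limit is at least easy (a sketch; it proves nothing, but it shows the slope crossing 1 somewhere between b = 2 and b = 3):

```python
# Probe lim (b^h - 1)/h for a few bases b: this is the slope of b^x at x = 0
h = 1e-8
for b in [2.0, 2.5, 2.718281828459045, 3.0]:
    print(b, (b ** h - 1) / h)   # equals 1 only near b ~ 2.71828
```

The printed slopes are below 1 for b = 2, above 1 for b = 3, and essentially exactly 1 at b = e.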
IMHO it's a lot easier to see that e is the value to which the polynomial series that is its own derivative converges at x=1.
Writing b^x as exp(ln(b) * x), its derivative then is (by the chain rule)
ln(b) * exp(ln(b) * x) = ln(b) * b^x.
I'd argue it's a bit more: it's a natural way to extend the domain of exponentiation, as we do before this with naturals -> integers -> reals.
If not, I think you might like it.
And a tangent from your username: I quite like how complex numbers and functions like CIS made it into Common Lisp.
e^(iπ) is (e^i)^π.
There is a concrete complex number e^i:
[1]> (exp 1)
2.7182817
[2]> (expt (exp 1) #c(0 1))
#C(0.54030234 0.84147096)
See, it's around 0.54 + 0.84i. It's on the unit circle. When you raise this number to pi, you get -1.
[3]> (expt * pi)
#C(-1.0 1.2776314E-7)
This means it's the pi-th root of -1; let's try it:
[4]> (expt -1 (/ pi))
#C(0.5403023058681397174L0 0.84147098480789650666L0)
and e must be the i-th root of this:
[5]> (expt * (/ #c(0 1)))
#C(2.7182818284590452354L0 -4.6847612413106414363L-20)
Yes; it is all literally exponentiation, which we can approximate with concrete floating-point numbers that know nothing about the formula we are exploring.

Yes, that is true, of course. But consulting a Lisp REPL merely demonstrates that it is true. It does not explain why it is true.
You'd get further, pedagogically speaking, by pointing out that (e^i)^i = e^(i*i) = e^(-1) = 1/e, and so e^i is a number that is in some sense "half way" between e and 1/e when you are exponentiating. As an analogy, consider:
e^(1/2) * e^(1/2) = e^(1/2 + 1/2) = e^1 = e
as a demonstration that e^(1/2) is a number that is in some sense "half-way" between 1 and e when you are multiplying, a.k.a. the square root of e. But that still leaves unanswered the question of what "half-way" means when exponentiating rather than multiplying.
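The arithmetic itself is easy to check in Python's `cmath`, for what that's worth (a sketch; note that `**` on complex numbers uses the principal branch of the logarithm):

```python
import cmath

e = cmath.e
e_half = cmath.exp(0.5)
print(e_half * e_half)   # e^(1/2) * e^(1/2) = e: "half-way" under multiplication

e_i = cmath.exp(1j)      # ~0.540 + 0.841i
print(e_i ** 1j)         # (e^i)^i = e^(i*i) = e^(-1) = 1/e (principal branch)
print(1 / e)
```

But as with the Lisp REPL, this only confirms the identities; it doesn't say what "half-way when exponentiating" means.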
Exponents give us exponential decay/growth along the real number line, but periodicity in the imaginary direction, which is strange.
In physics, this lets us analyze decaying or amplifying oscillations in a unified way. (Laplace transform and all that.)
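For example (a sketch with hypothetical damping and frequency parameters σ and ω): in e^((σ+iω)t), the real part σ alone controls the magnitude, while ω just spins the phase.

```python
import cmath

sigma, omega = -0.5, 3.0                 # hypothetical damping rate and frequency
for t in [0.0, 1.0, 2.0, 3.0]:
    z = cmath.exp((sigma + 1j * omega) * t)
    print(t, abs(z), z)                  # |z| = e^(sigma*t), decaying; z oscillates
```

The magnitude shrinks like e^(σt) regardless of ω, which is why one complex exponent captures a damped oscillation in a single expression.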