I should also take this chance to mention that I work as a private tutor and I have openings for students! Much more info here: https://nicf.net/tutoring/
It feels very rare that someone with his level of intellectual depth is this interested in teaching others.
(Hi Nic! \o/)
If you have a Patreon or similar account, let us know!
PS: The Riemann zeta function page seems to be missing a few enclosing tags, leading to non-typeset LaTeX formulae after "This gives us a nice way to pick out terms from a Dirichlet series..." and also after the von Mangoldt function.
I appreciate the heads up about the typesetting --- it looks like it's due to some macros that the web-based TeX typesetter I'm using hasn't implemented. I usually go over the PDF versions more carefully than the automatically generated web versions, and I guess this is the price I pay.
EDIT: Should be fixed now!
https://archive.org/details/physics-for-mathematicians-mecha...
It's a very interesting take on classical mechanics.
For what it's worth, from all the various nonrigorous explanations in physics texts, the one that worked the best for me was the one in "Quantum Field Theory Lectures of Sidney Coleman".
All Quantum Field Theories are effective theories. Effective means that they work up to a certain energy range; they do not claim to be fundamental.
For example, the Fermi theory of beta decay is an effective theory that works only up to the energy of W and Z bosons. Quantum Electrodynamics (QED), the theory of electromagnetism and photons, is an effective theory which is valid up to the electroweak scale (~250 GeV). So on and so forth. All of them are effective, and therefore break at some point at higher energies or, equivalently, shorter distances.
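To make the Fermi example a bit more concrete (my own back-of-envelope sketch, with g the weak coupling and M_W the W mass; I believe these are the standard factors):

    % low-energy limit of W exchange, q^2 << M_W^2
    \frac{g^2}{q^2 - M_W^2} \;\longrightarrow\; -\frac{g^2}{M_W^2},
    \qquad
    \frac{G_F}{\sqrt{2}} = \frac{g^2}{8 M_W^2}

All the heavy W physics collapses into the single measured number G_F, which is the effective-theory story in miniature.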
Renormalizable theories are those where we can abstract away the physics beyond the breaking point where the theory stops making sense, and capture it in a redefinition of a few fundamental constants that we take from experiment. To rehash the basic idea: the theory doesn't work beyond certain energies. In principle the physics beyond those energies impacts our predictions, because in QM you must account for all processes. But renormalizable theories are nice enough that we can put the physics beyond that energy scale behind a black box and just redefine a few fundamental constants, like the mass of the electron.
Comp. science audience analogy: it's as if renormalizable theories gave us a neat API that does not leak the ugly internals of what happens at super high energies (short distances). Like you don't need to know machine code or assembly to import PyTorch and build a neural network in a few lines of code.
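(To belabor the analogy with an actual few lines; this is my toy example, not the parent's:)

    import torch
    import torch.nn as nn

    # a tiny classifier; autograd, BLAS and CUDA kernels all stay behind the API
    model = nn.Sequential(
        nn.Linear(784, 128),
        nn.ReLU(),
        nn.Linear(128, 10),
    )
    logits = model(torch.randn(32, 784))  # forward pass on a random batch
    print(logits.shape)                   # torch.Size([32, 10])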
For gravity this doesn't work. Gravity is mediated by a massless spin-2 field (Einstein's theory). If you try to quantize this theory you will find that it explodes at second order. We can calculate the first quantum corrections to Einstein's gravity, but then it explodes. And when we attempt our usual tricks to hide all the short-distance/high-energy stuff inside a black box (renormalization), it just doesn't work. It's as if the API were leaky: it leaks the internals to the high level.
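The quick power-counting way to see it (my gloss, in natural units): Newton's constant is dimensionful, so the expansion parameter grows with energy instead of staying small.

    % heuristic power counting, hbar = c = 1
    G_N \sim \frac{1}{M_{\mathrm{Pl}}^2},
    \qquad \text{corrections} \sim \left(\frac{E}{M_{\mathrm{Pl}}}\right)^{2}\ \text{per loop}
    \;\Rightarrow\; \text{new divergences, and new counterterms, at every order}

Contrast with QED, where the coupling is dimensionless and the same handful of counterterms works to all orders.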
To give you a couple of examples of this leakiness. Dark energy, the energy of the vacuum, drives the expansion of our universe at the largest scales possible. So it's a phenomenon of super long distances, and yet it's dominated by the super-short-distance (high-energy) interactions that contribute to this vacuum energy. Another example: black holes are typically hyper-massive, huge beasts, and yet... they are inherently quantum-gravitational objects that we need the full theory of quantum gravity to understand.
This is a blessing and a curse. The curse is that it makes our job of getting a theory of quantum gravity so much harder. The blessing is that it gives us a chance of peeking at a more fundamental theory of physics. If we could have worked out everything with effective theories, we might never have known what's beyond those black boxes that hide the internals. Gravity gives us the chance to peek through and understand something deeper.
Random comments:
>when the states evolve in time and the observables don’t we are using Liouville’s picture; when the observables evolve in time and the states don’t we are using Hamilton’s picture.
I have never heard this terminology, I have only heard Schrodinger's picture vs. Heisenberg's picture.
>This means that, very unlike on a Riemannian manifold, a symplectic manifold has no local geometry, so there’s no symplectic analogue of anything like curvature.
Perhaps the only enlightening comment I have ever heard about the tautological 1-form/symplectic approach to Hamiltonian mechanics.
I wrote the QM article a very long time ago at this point, and I actually can't reconstruct at the moment why I used those two names! I've also heard Schrodinger and Heisenberg much more frequently. Might be worth an edit.
I know the difference between mathematicians and theoretical physicists can be small, but I think that categorization is valid.
To verify my intuition, I checked Wikipedia. It calls
- Liouville a mathematician and engineer (https://en.wikipedia.org/wiki/Joseph_Liouville)
- Hamilton a mathematician, astronomer and physicist (https://en.wikipedia.org/wiki/William_Rowan_Hamilton)
- Schrödinger a physicist (https://en.wikipedia.org/wiki/Erwin_Schrödinger)
- Heisenberg a theoretical physicist (https://en.wikipedia.org/wiki/Werner_Heisenberg)
Anyone interested in coming at physics from a mathematics perspective should read Arnold's mechanics book.
A mathematical formalization of dimensional analysis (2012) - https://news.ycombinator.com/item?id=37517118 - Sept 2023 (54 comments)
A mathematical formalisation of dimensional analysis - https://news.ycombinator.com/item?id=5018357 - Jan 2013 (19 comments)
Since things need to be conserved in physics, one has to account for this issue, and doing so is harder than it may seem, as AC is part of the "fabric" of most mathematics, which, at large, chooses to ignore the problem.
Speaking as a physicist, we don't care at all about stuff like Banach-Tarski, and there is essentially zero expectation that stuff like ZF vs. ZFC will have any impact on physics.
Lie groups and geometric algebra remove a lot of problems.
It also applies to differential calculus and ML methods like backpropagation and gradient descent.
Gibbs-style vectors and the cross product are convenient, as they tend to match our visual intuitions.
But a lot of the 'physics isn't real math' claims just come from not understanding how the algebra arises from the system.
In physics the length of the basis vector is set to 1 where possible, which is called 'natural units'.
But the SI system is the domain of Metrology, not physics.
Examples of discrete quantities are the amount of substance and the electric charge.
The discrete quantities are just counted, so their values are integer numbers. They have a natural unit. Nevertheless, for those that are expressed in very large numbers it may be convenient to choose a conventional unit that is a big multiple of the natural unit, for instance the mole and the coulomb in the SI system of units.
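For concreteness, here are those two units as plain numbers (the 2019 SI redefinition makes both constants exact; this is just arithmetic on top of the parent's point):

    # conventional units as large multiples of the natural unit (2019 SI exact values)
    N_A = 6.02214076e23       # elementary entities per mole
    e   = 1.602176634e-19     # coulombs per elementary charge
    print(1 / e)              # ~6.24e18 elementary charges in one coulomb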
All the continuous physical quantities are derived in some way from the measures of space and time, which is the reason for their continuity. For instance the electric charge is discrete, but the electric current is continuous, because it is the ratio between charge and time and time is continuous.
In order to measure a continuous physical quantity, a unit must be chosen. The unit may be chosen arbitrarily or it may be chosen in such a way as to eliminate universal constants from the formulae that express the relationships between physical quantities.
In either case, the value of a measurement is the result of dividing the measured value by the chosen unit. That ratio is a real number, though it is normally approximated by a rational number.
In order to define a division operation on the set of values of a physical quantity whose result is a scalar, the minimum algebraic structure for that set of values is an Archimedean group.
That means it must be possible to add, subtract, and compare values of the physical quantity, and that given two values it is always possible to add one of them to itself enough times that some multiple exceeds the second value (which pins the second value between two consecutive multiples of the first).
Based on the axioms of Archimedean groups it is possible to devise an algorithm that multiplies a value by a rational number and determines that a second value lies between two rational multiples that are as close together as desired, producing, in the limit, a real scalar. Thus any value can be divided by another value chosen as the unit.
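A toy version of that procedure, using nothing but addition and comparison on the quantity itself (the names and the float stand-ins are mine, purely to illustrate the bisection between rational multiples):

    from fractions import Fraction

    def times(x, n):
        """n * x for a positive integer n, using only addition (repeated doubling)."""
        result = None
        addend = x
        while n:
            if n & 1:
                result = addend if result is None else result + addend
            addend = addend + addend
            n >>= 1
        return result

    def measure(value, unit, bits=20):
        """Approximate value/unit by a rational with denominator 2**bits,
        using only + and < on the quantity itself."""
        scaled = times(value, 2 ** bits)          # 2**bits * value
        lo, hi = 0, 1
        while not (times(unit, hi) > scaled):     # Archimedean property: some multiple exceeds it
            lo, hi = hi, 2 * hi
        while hi - lo > 1:                        # binary search for the enclosing multiples of unit
            mid = (lo + hi) // 2
            if times(unit, mid) > scaled:
                hi = mid
            else:
                lo = mid
        return Fraction(lo, 2 ** bits)

    print(float(measure(3.3, 1.0)))               # ~3.3, i.e. 3.3 "volts" against a 1 "volt" unit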
In practice, all the continuous physical quantities have richer algebraic structures, they are vector spaces over the real numbers, so the division of two collinear vectors is the scalar that multiplies one to give the other.
Nevertheless, the fact that the continuous physical quantities form vector spaces over the real numbers can be demonstrated based only on the supposition that they are Archimedean groups.
So the units of continuous physical quantities are just arbitrarily chosen values of those physical quantities, which normally form vector spaces of one or more dimensions, while the measured values are just rational approximations of the scalars obtained by division.
This division process is very obvious in the structure of the analog-digital converters used to measure voltages. These ADCs have two inputs, the voltage to be measured and the reference voltage, which is the arbitrarily chosen unit. The ADCs produce a rational number that is the approximate result of the division of the measured voltage by the reference voltage. If the reference voltage is not equal to the conventional unit, i.e. 1 V, the measurement result will be converted by multiplying with an appropriate conversion factor. The division operation can be done in the ADC for example by successive approximation, i.e. by binary search of the two multiples of a fraction of the reference value between which the measured value lies. The fraction of the reference voltage may be generated by a resistive or capacitive divider, while its multiples can be generated by a multiplying DAC.
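Here is that successive-approximation loop as a few lines of pseudo-hardware (my sketch; a real converter does the comparison with a comparator and a DAC rather than with floats):

    def sar_adc(v_in, v_ref, bits=12):
        """Successive-approximation ADC: binary search for v_in / v_ref.
        Returns floor((v_in / v_ref) * 2**bits) for 0 <= v_in < v_ref."""
        code = 0
        for i in reversed(range(bits)):
            trial = code | (1 << i)                   # tentatively set the next bit
            if trial * v_ref / (1 << bits) <= v_in:   # compare the trial DAC output to the input
                code = trial                          # keep the bit if it still fits under v_in
        return code

    code = sar_adc(v_in=1.234, v_ref=2.5)
    print(code, code / 2**12 * 2.5)                   # 2021, ~1.2335 V reconstructed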
This is where the limits of my brain were reached. Is there a translation of this into category theory terms? Is this where category theory could help formalize units in physics?
However, his paragraph after that is pretty interesting; I read it as sort of treating units as variables, since you can't combine them, and he only has length, mass, and time in these examples. But then there's an exponent piece? Okay, now I'm lost again.
(Also, this quote is from the Terry Tao blog post that dang links below, not the OP, right?)
It's essentially the same as the relation between covariance and contravariance in category theory.
The thing that irks me most is using higher-level concepts, like the existence of an atom, to illustrate the lower-level concepts that led to the discovery of the atom in the first place.
How have the mathematical contributions of quantum physics affected mathematics? Have they??
Maybe the field that's really lagging in recognizing the implications of the "recent" scientific revolution (QM) is philosophy?
Finally, I wonder how the schism in mathematics that is IUT (Mochizuki's theory) will finally pan out. Apparently Euler also left stuff behind that took over 70 years to be understood, so I ain't holding my breath.