How do you learn to read this stuff? I'm frequently stumped by an academic paper or book that I just can't understand due to mathematical notation that I simply cannot read.
It's difficult to google mathematical stuff, especially since each symbol has many different meanings depending on which branch of mathematics you're dealing with. This book solves that problem nicely by letting you look up a symbol by the Roman letter it resembles, by mathematical discipline, and so on.
Many papers, especially in engineering, use a lot of mathematical notation that doesn't benefit the reader; it's just there to show off. Often there are mistakes in there too, because no reviewer typically goes to the trouble of checking every single equation. When reading a paper, don't get bogged down by all the equations. Read it once or a few times before getting down to that level. Often it's helpful to read other descriptions of a particular algorithm, for example in a student's thesis, which contain more detail and contextualize some of the math.
While you may not find this comment particularly helpful, as I’m not pointing to a guide or something, you could take away from it that it takes practice and that one shouldn’t be discouraged when you don’t understand the math in a paper, as I guarantee you there are maths professors that couldn’t make sense of it either.
That is true, unfortunately, of much of the mathematical literature. Far too often authors use formalized language where plain language would suffice. This kind of abuse is extremely widespread. It is one thing when a formula represents, say, a complicated integral; it is another when a formula expresses something just as easily said in a couple of words.
I'm in a somewhat similar position to OP; it's extremely rare that I'd find a whole paper accessible. The best way I know is to get gradually more specific: start from a broad undergraduate textbook on 'logic' or whatever other similarly broad area, and hone in on the more specific area of interest. Notation will be introduced along the way, and by the time the last 5% appears, 80% will be second nature and the remaining 15% will at least be familiar: 'let's see.. I think it was used in this book..'
1. People are typically _really_ bad at writing mathematics -- notational brevity is not a virtue when attempting to communicate ideas. You may find value reading through notes on Knuth[0] and what he taught regarding writing mathematics.
2. There is a certain level of field/conceptual awareness needed when translating the coded concepts in an academic paper/textbook. However, consistent with (1), people are bad at encoding. Using several topical texts at your disposal can help. For example, in econometrics your standard masters year 1 texts are Greene, and Wooldridge. Wooldridge is expansive and simpler to read, Greene is more fundamental and uses horrendous notation. I found reading the same topics in Wooldridge helped me decode Greene, from which I was able to deepen my understanding of Wooldridge.
3. Very few people can read a paper or text once and know it immediately. The most studied professors I know will take months to fully digest a seminal article that incorporates new ideas -- if you're new to a field, _every_ article is seminal to you.
Don't give up. It takes practice to decode, to put the concepts into a mentally straightforward order, and so on. The fact you're asking about it shows that you care, and if you let that grow you'll get to the point you want to be.
[0] http://jmlr.csail.mit.edu/reviewing-papers/knuth_mathematica...
You need to find out exactly what type of math is used in the paper and learn that. Learning notation will come along with that. If you're interested in machine learning then you should probably start with:
* Proofs and logic (a general requirement for understanding math)
* Linear algebra
* Statistics
* Some multivariable calculus
I still don't know how to read things like curly braces or matrices or a dozen other things that I don't even know the name for, and figuring it out is usually more work than it's worth. As a result, I don't understand Wikipedia articles that are supposed to be introductory and understandable to the masses, but because the article's writers are being fancy, it's only understandable for a privileged few. Often the concepts, if I do bother to understand them or look them up elsewhere, are rather simple.
It also doesn't help that people use things incorrectly or ambiguously. Sometimes a line over some math will mean the average (or "mean", to use the ambiguous lingo) and sometimes it's an infinite repetition.
Or the lingo: a friend of my girlfriend was baffled that I didn't have X in school and didn't know what it was (I think it's derivatives). She kind of explained it to me and I kind of understood, without recognising it. Later that evening I used it, unknowingly, in programming. When I showed my girlfriend what I'd made, she pointed it out to me.
Heck, I recently used logarithms fairly intuitively where they were never properly explained to me. I just knew the formula for entropy calculation (log(possibilities)/log(2)) by heart and managed to make something that converts a number from any base into any base (e.g. base 13 to base 19). I can do math, just nobody bothered to tell me about the fancy symbols. To me, the symbols mainly seek to obfuscate, look smart, and ensure job security.
(I'm not saying this is the case with every math text, just the ones I've read that are intended to teach the reader the subject.)
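For what it's worth, the base-conversion trick described above can be sketched in a few lines. The logarithm only estimates how many digits you'll need (the entropy idea); repeated divmod does the exact work. This is a hypothetical sketch of the approach, not the commenter's actual code:

```python
def to_base(n, base):
    """Re-express a non-negative integer as a digit list in the given base."""
    # log(n)/log(base) + 1 estimates the digit count (the entropy formula),
    # but floating point makes that unreliable, so divmod until zero instead.
    digits = []
    while n:
        n, d = divmod(n, base)
        digits.append(d)
    return digits[::-1] or [0]

def convert(digits, src_base, dst_base):
    """Convert a digit list from src_base to dst_base, e.g. base 13 to base 19."""
    n = 0
    for d in digits:
        n = n * src_base + d   # Horner's rule: read the digits back into an int
    return to_base(n, dst_base)
```

For example, `convert([1, 2], 13, 10)` takes "12" in base 13 (fifteen) to `[1, 5]` in base 10.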
I see all these comments about reading papers with confusion, and I get the impression that the commenters skipped the prerequisite textbooks aimed at readers at their level, which is generally how you pick up notation.
Learning the notation itself can be useful as a way to point you towards relevant concepts, e.g., “Where did this symbol come from and why did it become the convention? What concept/problem/approach motivated it?”
But if your goal is to memorize a bunch of symbols you’re gonna have a bad time.
These are bad academic habits. If you're using a formula with a dozen variables - at least state what they are, as well as any more unusual or overloaded notation, subscripts, etc. If you're using a formula from elsewhere, it's better to restate what these symbols mean in your paper, rather than send the reader off on a hunt. Be clear, be explicit, leave no doubt. You don't have to explain all of probability theory, but it takes only a line or two to vastly improve the readability of your paper.
1) Logical quantifiers: ∃ ("there exists") and ∀ ("for all"). Quantifiers can get confusing when they get strung together. I like to think of them as challenge-response games. For example, your real analysis textbook asserts that a function f is continuous at x if ∀ e>0, ∃ d>0 such that if |x0-x|<d, then |f(x0)-f(x)|<e. I think of two players, Ella and Daniel, with the ∀ player (Ella) trying to disprove and the ∃ player (Daniel) trying to prove the statement. Since the definition asserts "∀ e>0", all Ella needs is a single counterexample e' such that no matter what d'>0 Daniel chooses, the condition is false. Or mathematically written, Ella's objective is to prove "∃ e'>0 such that ∀ d'>0, the condition doesn't hold." Notice how taking the negation flips the quantifiers.
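The challenge-response reading maps directly onto code: ∀ becomes a loop over all challenges, ∃ becomes any() over candidate replies, and the nesting order of the quantifiers becomes the nesting of the code. Here's a finite, sampled caricature of the e-d game (a sanity check over sample points, not a proof):

```python
def daniel_wins(f, x, epsilons, deltas, probes):
    """Sampled check of: for all e>0 there is d>0 with |x0-x|<d => |f(x0)-f(x)|<e."""
    for eps in epsilons:                       # ∀ e: Ella's challenges
        if not any(                            # ∃ d: Daniel's replies
            all(abs(f(x0) - f(x)) < eps
                for x0 in probes if abs(x0 - x) < d)
            for d in deltas
        ):
            return False                       # Ella found her counterexample e'
    return True
```

For f(x) = 2x at x = 0, Daniel can always answer with d = e/2, so he wins; for a step function at its jump, Ella wins.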
2) Set notation: S = { x∈R | 0<x<1 } reads as "S is the set of real numbers that are between 0 and 1". You can also think of this as {x ∈ domain | filter_condition(x,y,z,...)}. You'll see many mathematical objects represented as sets, so it's pretty important to know how sets are defined.
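If you read code more fluently than set-builder notation, it may help that Python comprehensions were modeled on it, with a finite domain standing in for R (since code can't enumerate the reals):

```python
# S = { x ∈ R | 0 < x < 1 }, with a finite grid standing in for R
domain = [i / 10 for i in range(-20, 21)]   # -2.0, -1.9, ..., 2.0
S = {x for x in domain if 0 < x < 1}        # {x ∈ domain | filter_condition(x)}
```

The `for x in domain` part is the "x ∈ domain" clause and the `if` part is the filter condition, in the same order as the mathematical notation.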
This is standardly called "game-theoretic semantics" in the literature. Enthusiasts can find more info here (or just by Googling around): https://plato.stanford.edu/entries/logic-games/#SemGam
Realistically, find a paper that interests you, hopefully they will list some expository text in the references (or a paper that they cite does). Get it and read it. There's not really any alternative if you want to understand the notation, it represents complex ideas compactly, so you need at least a basic grounding in those ideas.
https://en.wikipedia.org/wiki/Table_of_mathematical_symbols
These might help a bit.
But as someone with similar problems, I'm beginning to think there's no real solution other than thousands of hours of studying.
I've spent time studying number theory (Ph.D. Berkeley, wrote 30+ papers and 3 books), and it really is very deep. If understanding some notation or mathematics doesn't come easily to you, that's normal. It often takes Ph.D. students years of full-time study just to understand a single research paper. This is because mathematics is a very deep subject, certainly much deeper than everything else I've encountered in academia. The good part is that pretty much all mathematics does make sense, and can be truly 100% understood if you're willing to invest enough time, unlike the case with many other things in life! An added bonus is that much of mathematics is also incredibly beautiful, when you understand it.
Listening to lectures by excellent speakers (many are on youtube now) helps a lot.
Inter-universal Teichmüller theory seems to be a counter-example.
Outline_of_mathematics#Mathematical_notation https://en.wikipedia.org/wiki/Outline_of_mathematics#Mathema...
List_of_mathematical_symbols https://en.wikipedia.org/wiki/List_of_mathematical_symbols
List_of_mathematical_symbols_by_subject https://en.wikipedia.org/wiki/List_of_mathematical_symbols_b...
Greek_letters_used_in_mathematics,_science,_and_engineering https://en.wikipedia.org/wiki/Greek_letters_used_in_mathemat...
Latin_letters_used_in_mathematics https://en.wikipedia.org/wiki/Latin_letters_used_in_mathemat...
For learning the names of symbols (and maybe also their meaning as conventionally utilized in a particular field at a particular time in history), spaced repetition with flashcards in a tool like Anki may be helpful.
For typesetting, e.g. Jupyter Notebook uses MathJax to render LaTeX with JS.
latex2sympy may also be helpful for learning notation.
… data-science#mathematical-notation https://wrdrd.github.io/docs/consulting/data-science#mathema...
The biggest challenge was reading the mathematical notation. To wit: the four biggest papers in the field I was studying used four different notations, so you'll end up writing your own translation guide:
https://elfsternberg.com/2018/10/27/so-you-want-to-get-into-...
But be aware that mathematical notation is not, without context, unambiguous. So you might need to provide some context for what you're doing, or where you get the examples from. Some people think this is a weakness of mathematical notation to be solved, but I've found it tremendously powerful. There is no doubt, however, that it is an accidental barrier to entry.
So post a picture of a piece of notation, and maybe someone will either explain it, or point you at a book/website that you can read.
I guess I just need more practice.
Not entirely ... but partially. That's one way you can start to learn. Usually formulas are presented with a broad gloss in English (or other) to let you know roughly what it is saying, with the formula just being a precise way of saying it that can subsequently be used in algebraic manipulations.
Example:
The strength of Earth's gravity is g at the surface, and it falls off as the inverse square of the distance d from the centre. Thus:
F_d = g(R/d)^2
where R is the radius of the Earth.
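The same statement as code, which makes the inverse-square reading concrete (the values for g and R are assumptions taken from standard references, roughly 9.81 m/s^2 and 6371 km):

```python
def gravity_at(d, R=6.371e6, g_surface=9.81):
    """F_d = g * (R/d)^2: surface gravity scaled by the inverse square of
    the distance d (in metres) from Earth's centre."""
    return g_surface * (R / d) ** 2
```

At d = R this returns g itself, and at d = 2R it returns g/4, which is the "inverse square" in action.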
So how do you get mathematical maturity? A bachelor’s in math, or equivalent. Meaning, approximately 4 years of focused, ideally guided, practice. I don’t think there are any faster ways.
Edit: Also important to know: most papers are poorly written, so that definitely doesn’t help. It’s especially important early on to identify who the leaders are in a field, and focus on their writings. In statistical Machine Learning, I recommend anything by Trevor Hastie, Rob Tibshirani, Bradley Efron, and Martin Wainwright.
[1] https://www.coursera.org/learn/machine-learning?authMode=log...
Without having read it, I'd recommend the "preliminary maths" part of Bengio et al's Deep Learning book; it teaches both the letters and the language, so to speak. If the language isn't for you, you'd be better off setting the papers aside and concentrating on reading and understanding the implementations that are out there, using the implementation first and foremost and the paper only as a backup to provide explanations when the implementation does something mysterious or unexplained.
You can do deep learning productively without having a PhD, but you won't be able (nor obliged) to read and understand PhD-level academic papers unless you have a solid (i.e., maths or physics or math-rich CS BSc) maths background.
Honestly it's weird getting this far without knowing the symbols well. I went to a significantly poorer and less well-staffed school than most of my fellow students, and you're kind of between a rock and a hard place sometimes: no one sees it as their job to catch you up, and the people who do know the notation learned it long ago, so they don't exactly have an easy time teaching you if you ask.
I did the course earlier this year, and I can confirm that you don't need much more than high-school level maths knowledge. If you understand concepts like functions and summation then you're most of the way there, and if you've got some calculus then it should be easy. I came out of it with better mathematical comprehension than when I went in.
I found that I spent a lot of time on the course converting mathematical notation into code (Octave/MATLAB), which is a great way to validate your understanding of the maths. If your understanding is wrong, it either throws an error or runs slowly because you failed to map summations etc. onto the appropriate parallelised operations.
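As an illustration of that loop-vs-vectorized gap (here in plain Python rather than Octave, with a generic least-squares cost as the example, not any specific course exercise):

```python
# J(θ) = (1/2m) Σ_{i=1}^{m} (θ·x_i − y_i)²  written index by index,
# the way the formula reads...
def cost_indexed(X, y, theta):
    m = len(y)
    total = 0.0
    for i in range(m):
        h = sum(theta[j] * X[i][j] for j in range(len(theta)))  # θ·x_i
        total += (h - y[i]) ** 2
    return total / (2 * m)

# ...and whole-vector at once, the shape a vectorized (Octave/NumPy)
# version takes: build all residuals, then one sum of squares
def cost_whole(X, y, theta):
    m = len(y)
    residuals = [sum(t * xj for t, xj in zip(theta, row)) - yi
                 for row, yi in zip(X, y)]
    return sum(r * r for r in residuals) / (2 * m)
```

Checking that the two agree on the same inputs is exactly the kind of validation the comment describes; in Octave or NumPy the second form collapses to a single matrix expression.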
ML has moved on a bit since the course was designed, but it's still a good way to get familiar with the basics.
The secret to reading papers in an unfamiliar field is to not get discouraged just because you don't get it immediately. Instead, I will research each concept I encounter that I don't understand well.
Let's assume I want to understand something in a field I am not familiar with. It is important to understand that this typically means I have to get at least somewhat familiar with the field. I don't expect to read a paper and get it just from that single read!
I will open the paper and first just read what I can from start to end without stopping to research anything. This gives me some idea of what the paper is about, which concepts are important, and how they are used later. I call this scanning.
I will go back to the start and slowly go researching EVERYTHING, ABSOLUTELY EVERYTHING that I don't understand. This is most of the work. I will keep the paper open and have something to mark where I currently am. I will research meaning of every symbol and every piece of notation, every concept being introduced, until I am confident I understand it.
This may take multiple days to advance a single line, as I will frequently spend time on tangents or just getting up to speed with the basics.
Once I've gone through the entire paper I will go back to the start and read it again. Now that I know the language of the paper I can focus on the matter at hand. I may do this a few times until I am satisfied I got everything I could.
Again, I want to stress it: don't expect to understand a paper in an unfamiliar field on the first reading. It is completely unrealistic. Use materials, ask people to explain, research on the Internet, rinse and repeat.
this might be a start for you
For example, the lowercase sigma symbol is used for a half-dozen different things depending on the field you're working in.
In machine learning, if you use a logistic function as your activation function, you'll use the lowercase sigma to denote the sigmoid function itself. If you're working in physics, however, the same lowercase sigma denotes the Stefan-Boltzmann constant. Any good text will explicate what its symbols represent.
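For the ML convention specifically, σ names the logistic function itself, which is one line of code:

```python
import math

def sigmoid(z):
    """σ(z) = 1 / (1 + e^(-z)), the logistic activation function."""
    return 1.0 / (1.0 + math.exp(-z))
```

It maps any real number into (0, 1), with σ(0) = 0.5 and the symmetry σ(z) + σ(-z) = 1.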
Some algebraic/calculus notation, such as sums over sets, divisors, set operations, etc., should be the same across fields, but I haven't been able to find a way of learning all of those general operations that isn't "go back and get a mathematics degree."
[1] https://en.wikipedia.org/wiki/Activation_function
[2] https://en.wikipedia.org/wiki/Stefan%E2%80%93Boltzmann_const...
The short version is you have to ask the right questions. Naturally for every theorem or equation, there are 3 big questions:
1) What does the theorem/equation say? What's the intuition behind it?
2) Why is it true?
3) How does one come up with it?
One must ask these questions in the exact order. To understand what the equation really means, you should break it down further to smaller components. What is this variable? What does it represent? What is the intuition behind what it represents? What's the implication when the variable increases, decreases, etc? Do that for every single component in the equation/theorem. One should fully understand the intuition and clearly describe all quantities before trying to look at the equation/theorem as a whole.
To understand why an equation/theorem is true you need to build up a repertoire of theorems related to the quantities of interest. The bigger your repertoire, the easier you can prove or disprove something. The more advanced way is to build up intuition around the quantities of interest then come up with intuitive hypotheses. The hypotheses are often easier to prove/disprove. The process repeats.
edit: patience, self-forgiveness, and a willingness to accept frustration are important traits. You might spend a whole week banging your head against the wall, feeling like you're making no progress, and then one day everything falls beautifully into place. That doesn't mean you did something correctly on the final day - it means you did everything correctly for the whole week before. Don't view a difficult and unrewarding day as wasted time. You're building something very difficult and that takes a bewildering amount of time.
Are you sure it's a problem of notation, and not just that you're not used to reading slowly? Reading academic papers is very different from reading prose, and I find that even though you might struggle understanding it at first read, going through it line by line very slowly does help a lot.
Do you have an example paper you're struggling with?
3blue1brown's animated maths channel is pretty good. - https://www.3blue1brown.com/
Khan Academy's maths section covers a lot of stuff - https://www.khanacademy.org/math
math.stackexchange is excellent for specific questions - https://math.stackexchange.com/
One thing that works for me is to try and find something stated both in code and in maths notation, then you can work out one from the other.
While math notations can be interpreted in different ways, code is going to give an unambiguous result.
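A small case in point: an overline x̄ can mean an average or an infinitely repeating digit depending on context, but the code form commits to exactly one reading:

```python
def x_bar(xs):
    # x̄ as "the arithmetic mean of the x's"; in code there is no second reading
    return sum(xs) / len(xs)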
Why don't mathematicians, especially those writing in machine learning or computer science domains, do this? Is it just a problem of agreeing on a common language?
I don't think "math notations can be interpreted in different ways" is usually right. That would defeat the purpose of using them, if the meaning wasn't exact. And maths notation is a lot more compact, even assuming a translation between the two is always possible or makes sense.
"Links to resources talking about how to understand mathematics, mathematical language and mathematical notation."
Okay, if you're reading a physics paper and you encounter 'c', that's probably the speed of light, 'm' is likely mass, and so on. There's context and culture involved, and that's usually fine.
But if you're wading through something less mainstream, say some denotational semantics or queuing theory, and the author starts dragging in undefined alphabet soup, what's the reader supposed to do? Sometimes the answer is to get more culture in the subject area, or read some of the same author's previous papers, or just forge ahead in the paper and hope that understanding will come anyway.
Don't get me started on formulas that use single-character variables nearly everywhere, but have zingers like 'hz' -- which is not 'h' times 'z', but Hertz. If math is supposed to be so rigorous, how come there's no agreed-upon grammar, with reasonable ways to extend it to specialized domains?
While I would recommend either as a primer into discrete math - do you have a specific subset that you're interested in? That might make suggestions more pertinent to your interests.
[0]: https://lamport.azurewebsites.net/tla/book.html
[1]: I think, in retrospect, that Specifying Systems actually stands on its own pretty well and doesn't require supplementary material when paired with the video course.
[2]: It is subtitled "A Gentle Introduction to Discrete Math Featuring Python" and it is _very_ gentle, but I'm also glad to have read it, it did wonders for giving me a kind of reference into math terms from programming (which I already knew).
1. Picked up a High School Algebra Book. Read from beginning to end and did all exercises.
2. Repeat #1 for Algebra 2, Statistics, Geometry and Calculus. (Really helpful for learning those topics fast was Khan Academy).
3. Did MIT Opencourseware's Calculus and Linear Algebra Courses w/the books and exercises.
Now, this took me about 2 years (maybe you can get it done quicker), but afterwards you're at a level where you can pick up pretty much any book (I picked up Elements of Statistical Learning) and actually start parsing and understanding what the formulas mean.
One thing I always do is tear apart formulas and equations and play with little parts of them to see how the parts interact with one another and behave under specific conditions, this has really helped my understanding of all kinds of concepts from Pearson's R to Softmax.
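Softmax rewards exactly that kind of tinkering: playing with the parts shows that shifting every input by a constant leaves the output unchanged, which is why implementations subtract the max before exponentiating (a sketch of the standard trick, not from any particular text):

```python
import math

def softmax(zs):
    m = max(zs)                          # shift by the max: same output, no overflow
    exps = [math.exp(z - m) for z in zs]
    total = sum(exps)
    return [e / total for e in exps]     # non-negative and sums to 1
```

Feeding in `[1, 2, 3]` and `[101, 102, 103]` gives identical outputs, which is the kind of behavior-under-specific-conditions insight the comment is describing.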
https://github.com/Jam3/math-as-code
You are welcome.
Math is super expressive compared to actual code. It's like SQL compared to an ORM-style notation.
There's different notation per field and subfield but I would argue that the process of figuring out notation per example or per paper is generalisable and not too difficult:
- identify the field of the paper; keywords, general categorisation, etc.
- take an example and break the notation into parts
- Google each of the parts independently alongside field keywords (results of this search should also give you some contextual info alongside each component which should help your knowledge)
- compile the aggregate of your research
- reread the paper and see if your understanding has improved
- repeat
- try another paper in the same field
During my time as a PhD student in philosophy, I had the luck of having a mathematician and physicist turned philosopher as my supervisor. He helped me a lot in understanding notation and improving my own mathematical writing style (~ getting silly notation ideas playing around with LaTeX out of my head).
1. Notation relies heavily on conventions, and these differ from field to field. Mathematicians are usually very clear and define everything. They have to be easy to understand, because what they are talking about gets very complicated very fast. Unfortunately, in other areas like logic-oriented theoretical CS there is an unhealthy focus on 'notational precision' (plus shortcuts, making things worse) rather than on being understood, which can make papers rather incomprehensible at first.
2. Look for textbooks with good mathematical prose.
Get some recommendations. Some good introductory books are relatively short, self-contained, define everything, and avoid symbols in favor of English in many places. Try to get one of those in the area you're interested in.
3. Check your reading style. (kind of obvious advice that is often given, but maybe still helpful)
You may be reading the texts in the wrong way. I did that very often and occasionally still do. Reading a mathematical text takes way more time than any other text and requires active participation. Do the exercises and stop every so often to check your understanding. Constantly use pen and paper and check everything. Draw diagrams.
Therefore I would be cautious about jumping to the conclusion that your problem is as simple as "simply cannot read" (unfamiliar notation). Maybe that's true in some cases, but it's also likely that the notation stands in as a kind of shorthand for elaborate, maybe even arcane, concepts that you don't understand very well (unfamiliar mathematics). Working out which concepts/theories you need to study may be a better place to start than worrying about the notation per se. Eventually you'll drill down to some level where you do understand the notation, or it is explained in a way that you can understand. Also consider that learning to read and write go hand in hand -- reading a notation gets much easier once you start writing your own sentences in the same notation (e.g. do exercises, learn to write proofs.)
With few exceptions, don't expect the notation in one area to be useful, or to mean the same thing, in another area. However there are a couple of generally useful things to know: (1) Greek letter names (so you can recognize, write and pronounce them without being confused) -- just learn the ones that show up in your reading, and (2) set theory notation plus some basic set theory (read Halmos' Naive Set Theory book up to the point that it becomes confusing).
Others have mentioned it, but be aware that it is common for one field to have multiple notation conventions. Often the notations originate from seminal papers or widely used books. Different authors frequently use slightly different notation, even within the same field, and sometimes a single paper may contain multiple contradictory notations. I've even had lecturers who switch notation half way through a lecture. If you're studying multiple texts you may want to translate all of the key theory into one consistent notation -- but at a minimum you need to be able to keep track of the correspondence between the different notations as you're reading.
Since mathematics is abstract, anything used necessarily has to be precisely defined. Thus, the best way to learn mathematical notation is to get some beginner-level rigorous mathematics book for mathematicians, e.g. Rudin's real analysis or Lang's Linear Algebra, and read through chapters 0 and 1. Books like these tend to start entirely from scratch: What is a number? What does '+' mean? What is a sum symbol?
While each field tends to have some further notation, that is often explained within the paper or in standard references and you should be able to read mostly everything after reading up on basic notation sections in analysis, linear algebra and maybe category theory.
"Mathematical Notation: A Guide for Engineers and Scientists"
https://www.amazon.com/Mathematical-Notation-Guide-Engineers...
* Read formulae aloud, using natural languages for symbol names (it is not "sigma of k from i to j", it is a "summation of terms with k from i to j").
* Look at the proofs/construction of these formula and understand them.
* Look at translations of the formulae (applied physics textbooks, more often program code); sometimes it helps to work your way back to the formula from a starting point you are more comfortable with.
* Understand that making up symbol meanings ad hoc is one of the perks of mathematicians - you can find fights within a single math department on how notation should be.
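To make the first two bullets concrete: a dot product ⟨a,b⟩ = Σ_i a_i b_i reads aloud as "the sum over i of a_i times b_i", and its code translation says exactly the same thing:

```python
def dot(a, b):
    # Σ_i a_i·b_i -- "the sum over i of a_i times b_i"
    assert len(a) == len(b)
    return sum(ai * bi for ai, bi in zip(a, b))
```

Reading the code and the formula side by side, each piece of notation (the Σ, the index i, the product) has a visible counterpart.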
If you don't have the foundations it's wasteful to spend time trying to work out academic papers and such. Bite the bullet and start from the beginning, it'll pay off over time. It will also allow you to go much farther. Even if you manage to 'understand' what a paper is saying, without the background you won't be able to do much other than replicate it.
As a rule of thumb, I would advise to start with elementary linear algebra, statistics, and probability. Edit: As pointed out by a reply comment, these require a good grasp of calculus.
Notation is only syntactic; most importantly you want to understand the semantics (i.e. the significance of the construction you are studying). To achieve this, you need to do the proper background work and the rest will follow naturally.
So you need to understand the work and the math behind it and you will get an idea about the notations used. Not the other way round.
In the article, the author recommends that you have a very strong knowledge base in math, statistics, probability, and linear algebra in order to read those things.
Haskell helped me think of math as a game of Pac-Man: I understood arity and the concept of "well defined" via ADTs and folds, and tail recursion helps in understanding series.
Learn You a Haskell is a better math book than it is a Haskell book.
I had extreme difficulty with math, mainly because my temporal lobe is more or less mush due to MS.
Then all you have to do is substitute symbols.
Oh, and the #haskell channel on Freenode is huge and VERY helpful.
You learn while studying math. There is no way around it.
What I don't see, however, is to ask a friend. Find someone who is familiar with the notation, if not the material, and ask questions. You'll likely get a more clear answer.
> What I don't see, however, is to ask a friend.
From https://news.ycombinator.com/item?id=18510564 :
> ... get a few examples, and ask some friends, colleagues, or on-line forums ...
Just pick some small part of it and start tracing back until you find definitions.
One difficulty of notation is that the hierarchy of abstraction builds dizzyingly quickly, and soon you're manipulating symbols that generalise a whole classes of structures that were themselves originally defined in terms of lower-level abstractions. When this becomes overwhelming, it usually means that I didn't give my understanding of the lower levels long enough to settle and mature.
Concise notation and terminology is only useful if the underlying ideas are organised neatly in your mind, and the best way I've found of achieving this is to study a subject obsessively for some time, then put it away for a few weeks, and then go back and try to see the big picture and find out where it doesn't fit together by trying to derive the main results from scratch. Then I start again and fill in the blanks. After a few years things begin to make sense, but this process takes time and it's difficult and tiring (or at least that has been my experience of it).
In order to read research papers fruitfully it's crucial that you understand the basics well, and the best way to do that is to work through books aimed at undergraduates or young graduates. People don't read foreign literature by jumping straight in and looking up every word and every grammatical construction as they go. They become familiar enough with the language by reading easier texts until the language is no longer an obstruction - then they're free to appreciate what's happening at a higher level. The same for driving: you wait until you're comfortable operating the car mechanically before you drive on busy roads. The same, also, for mathematics.
It is not at all unusual to find notation and technical terminology tiring. Everyone does to some extent. I hate it. But it's necessary.
Some resources I found useful:
Naive Set Theory by Paul Halmos. One of the great mathematical expositors, Paul Halmos here describes the fundamental language of mathematics: set theory. This is a book for people who want to understand enough set theory to do other parts of mathematics without obstacle.
How to Prove It by Daniel Vellemen. A nice introduction to logical notation and common proof structures, aimed at helping incoming maths students to become comfortable with the basics of formal language and notation.
Anything by John Stillwell. Stillwell is an inspiring teacher who insists on including the practical and historical motivations for the abstractions we use (this is, sadly, rare for modern teachers of mathematics). If you find yourself wondering why people cared about a problem enough to solve it, Stillwell might be able to help.
I suspect you'll also need resources on linear algebra (Halmos has 'Finite Dimensional Vector Spaces') and analysis but I'm not sure as I don't work in machine learning. I just sort of learnt linear algebra as I went and I avoid analysis as much as a supposed mathematician can. (context: my undergraduate degree was in economics and didn't carry much mathematical content other than some basic linear algebra - now I'm a graduate student in mathematics and computer science who uses a lot of category theory and abstract algebra. The transition was painful. Really painful.)
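To give a flavour of the kind of quantifier-by-quantifier reading practice books like Velleman's teach (this example is mine, not taken from any of the books above), here is the standard ε–δ definition of continuity unpacked:

```latex
% "f is continuous at a", read one quantifier at a time:
\forall \varepsilon > 0 \;\; \exists \delta > 0 \;\; \forall x \;
\bigl( |x - a| < \delta \implies |f(x) - f(a)| < \varepsilon \bigr)
% Read aloud: "for every epsilon greater than zero, there exists a delta
% greater than zero such that, for all x, if x is within delta of a,
% then f(x) is within epsilon of f(a)."
```

Once you can translate a nested statement like this into plain language without thinking, whole families of definitions in analysis become readable at a glance.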
Sometimes this place just cracks me up.
I honestly think the answer is pretty simple: go to college. It doesn't have to be expensive. Take a community college course in calculus or undergraduate-level probability. Skip the gen eds and don't worry about the degree if you want to learn something narrow like this.
In any case, just find a mentor. On-the-job if you can, otherwise pay for a class.
What you shouldn't do is try to self-study by reading a book. You can perhaps do this but only if you're smarter than average and more motivated than most. Since you probably aren't, just take a class. Night school, maybe a MOOC. Preferably something heavy on analysis or proofs.
But you should do it with others. Math is a very social discipline, it's good to be able to discuss and have partners to work through things when you get stuck. And if you're like me, you WILL get stuck on things. This stuff is hard.
Another thought: this whole "college is great"/"college is terrible" dichotomy seems to occur because people don't think enough about quality. I think bad colleges are terrible and great ones are fantastic. I don't know any way I'd have learned all the complex topics in math, stats, probability, etc., I did without attending a Big Ten engineering school (UIUC in my case).
This is something the MOOC crowd often overlook. It's based on a misunderstanding of what a university gives you: university entry doesn't grant you access to some members-only club where they hide away the knowledge to keep it from the plebs (ignoring the open access movement, at least). Instead it grants you access to an environment in which you can learn effectively.
The idea that MOOCs will threaten the existence of universities is absurd. It's already possible to get access to the sum of human knowledge without being enrolled at a university. It's always been possible. They're called books.
But this stuff is hard, and attending a high-quality institution gives you a huge advantage in learning the material, compared to being self-taught. MOOCs may grant everyone access to lectures, but just as important is learning with high-quality peers. A MOOC is not the equivalent of being in a class with other smart and well-motivated students.
Again, it's always been possible to learn this stuff independently, but your odds of succeeding are far lower that way. It's great that we have lots of freely available learning material online, but I really don't see that MOOCs are going to turn the world on its head the way some people seem to think.
I'm ignoring accreditation here, of course. If you want to be a surgeon, you obviously do not have the option of teaching yourself, but in terms of learning maths/technology (and assuming knowledge - and not a certificate - is your goal), books have always been there. MOOCs are books++, not universities--. (With apologies to the pedants who might prefer the pre-positional operators.)
My experience has been that this is exactly how college math works; you pay for self-study. The professor reads directly out of a book (in poor English), or off of pre-made slides provided with the book, for 2-4 hours per week, and then you are left to do the problems from the book on your own time.
Class populations are so large that if everyone had asked clarifying questions, we wouldn't have completed the readings.
If your college experience was different I'm envious of that.
Edit: as a fun side-note, our calc professor was well known for taking up the entire class to draw out a single proof on the chalkboard. As the chalkboard got full, he would incidentally erase his previous writing with his giant belly as he putzed across the room.
Tests and deadlines provide motivation to do the actual work.
Having a curriculum means that the content is laid out in a logical order that the professor believes should be achievable.
There is a stupid amount of information out there. Breaking it down into a progression that students can follow in order to learn and understand it is incredibly important.
If you're motivated enough then sure maybe you can just buy and read the textbook, although sometimes professors deviate from that when it's wrong.
> Class populations are so large that if everyone had asked clarifying questions, we wouldn't have completed the readings.
Good thing not everybody asks, and those that do ask are generally asking questions shared by a good chunk of the class.
The format was like this: the classes were 2h long, and the professor would begin with an exposition: first a short recap of the previous class, then introducing the new material for the day, new concepts, definitions, the starting point for the day's class. This could take anywhere from 5 minutes to up to 30 or 40.
After that, we were handed a worksheet. It contained the definitions/summary of the concepts that were just introduced, then a series of exercises. Now this is the core of it: the main part of the class was working through these in order. The exercises were structured so that the rest of the material was learned by doing, by working through the exercises. They would e.g. ask us to prove interesting consequences, or important theorems that followed from the definitions at the start.

The professor would point out an exercise, read it aloud, comment on the "meaning" of the problem or what it was meant to demonstrate, then give us some time to figure it out. After a bit, he would ask someone who had completed it to present his/her reasoning. Note that this part required real effort on the part of the professor. I tremendously admire him for this, because it required him to listen carefully and think through the proof presented by the student, something which is more difficult than, say, presenting and explaining his own proof on the blackboard. Anyway, he would listen to it, comment ("you could have simplified here", "you didn't consider this case there", etc.), perhaps ask some other students for alternative approaches, or give an alternative approach himself if necessary.

By doing this we would learn the rest of the material by working it out from those principles, in a sort of "narrative" that had a thematic "chapter" in each class and an overarching "story" for the whole course.
In summary, only about 10 minutes at the start would be real exposition. This usually amounted to stating definitions or axioms to work with for the rest of the class. The rest of the material -- any theorems, conclusions, etc. -- was worked out.
I have some difficulty concentrating even on 1h or 1h30 lectures. These were 2h and I would be effortlessly engaged for the whole duration. It kind of pains me that maybe the best class I've taken in university wasn't even one in my degree :^)
I hope you can develop a more empowered perspective of yourself and others through reading it.
I look at the history of art (e.g. Impressionists), YC, winning athletics programs, Silicon Valley, or just about any well-regarded academic department as evidence. These places don't do well just because they select the best, they also MAKE the best by creating an environment that fosters it.
In all the proof-heavy courses I've taken, whenever new notation was introduced the professor would explain how to interpret it. And if they didn't, someone would raise their hand and ask what it meant. Which brings me to another point: was OP an active learner? Did he ask questions when he didn't understand?
Also I wish I studied it in college, but didn't want to go back.
I say do try self-study first and see if it works. You have absolutely nothing to lose by trying. It doesn't work for everyone, but you will quickly figure out if it works for you. Some people can absorb knowledge from books much more easily than from oral instruction, and for others it's the other way around.
https://github.com/Jam3/math-as-code was helpful...
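In the spirit of that repo (the examples below are mine, not taken from it), a minimal sketch of how two of the most common symbols, capital sigma and capital pi, map directly onto code:

```python
import math

def sum_of_squares(n):
    """Sigma notation: sum_{i=1}^{n} i^2 reads as 'add up i squared
    for i running from 1 to n' -- i.e. a loop with an accumulator."""
    return sum(i ** 2 for i in range(1, n + 1))

def product_up_to(n):
    """Capital pi is the same idea with multiplication:
    prod_{i=1}^{n} i, the product of 1..n (here, n factorial)."""
    return math.prod(range(1, n + 1))
```

Seeing the index variable, the bounds, and the body of the expression as loop variable, range, and loop body respectively makes a lot of dense-looking formulas suddenly feel like ordinary code.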
I feel so behind because I absolutely did not.
Most days I appreciate its uniqueness, but I find myself rolling my eyes at least once/week at some of the comments here.
Here are some of what I'd call the "HN tropes":
"Oh that hard thing? It's not that hard. I have never worked on that problem before, but can surely replicate that entire system in a week, because everyone working on it is an idiot and doesn't know what they're doing."
"That traditional human behavior (e.g. marriage, buying a house, going to college)? I can squeeze 5% more efficiency out of life by completely ignoring the reality of human behavior, because my superhuman, almost robotic levels of conscientiousness and diligence that make it a horrible idea for 99% of the population, somehow don't apply to me. I beat the averages in everything I do." (I see that a bit in this thread. See also: playing games with credit card balance transfers, renting vs. buying "because I'll save the difference", trading options and thinking you won't fall into well-known behavioral traps)
"Hard problem from another discipline (finance, accounting, medical science)? I can reinvent a clever algorithm ten times smarter than that, despite my complete lack of domain knowledge. No problem."
"Incumbents are stupid. Anyone can beat them if they're clever."
"I don't need to read history. Humans are irrelevant, if we just apply the right combination of game theory, economics, and some clever code, we can fix any problem. Politics and governance are stupid, avoidable problems if we just used the right system." (I see this a lot in Bitcoin discussions)
I see these with alarming frequency here. You can certainly say I'm painting with too broad a brush but I've been here since 2009 and it comes up over and over. More than anything, I just laugh at it these days (not get mad).
/j #math
but in their defense, i was a difficult student.
After enough slow progress (“one page per day” can easily be speed reading), parts of what you are reading become what mathematicians call ‘trivial’, and your reading speed of similar texts increases.
I think there’s an analogy with ‘reading’ a chess position. If you watch the ongoing Carlsen-Caruana match on https://youtube.com/watch?v=DgvqBjrusIA, you’ll notice that the commentators can easily go through three or four variants in a minute, and call one position an obvious draw, another clearly winning, etc. The reason they can do that is that they have looked at thousands of similar positions, and remember the essential parts of them.
It's not about understanding the notation, as others have said, it's about understanding the principles expressed by the notation. You may learn the grammar of a language, but that is a far journey from understanding its poetry, which is full of norms and views beyond what is captured by its grammar.