The beginners who come in feeling excited that this will be a great learning resource are probably missing the point. Learning happens when you force yourself to create notes by finding structure in the raw text. Notes are extremely personal, and reading someone else's does not have the same emotional connection.
Am I suggesting you stop reading notes made by others? Absolutely not! I am suggesting you rather double down on that, except _always_ make your own notes if the objective is learning. Use the excellent public notes to build your own mental models of what makes for good notes.
I would probably agree with the "biggest", but disagree with the "only":

* The readers might use a note as an extended abstract when selecting a paper to read. This is like a short conference talk, which exists to advertise the paper and invite people to the poster session.
* The authors get feedback about their research, and some of them engage in a discussion as well.
Having said that, I agree that taking your own notes is better for you.
The same is true of most textbooks as well; most people write textbooks for themselves (and then publish them so as not to have nothing to show for a year of work). I saw that somewhere and it's changed the way I approach reading textbooks (no longer do I take it for granted that one presentation is /the/ presentation).
I find it funny that someone would go through the pain of undertaking an endeavor as large as writing a textbook just for themselves. For that, they already have their notes. If you are hinting that writing textbooks (good or bad) has professional consequences, sure. Are they wrong in doing so? I don't see why they shouldn't reap the rewards of good exposition.
Stretching the argument further, you might as well explain almost every action as "people do X for themselves". Kevin Simler explores this theme in detail [1].
[1]: The Elephant in the Brain: Hidden Motives in Everyday Life (https://www.librarything.com/work/19982533/book/195649617)
I wonder how long you can keep this up. I was once motivated to read a lot too (though not strictly one paper a day), but once I had more and more deadlines (paper submissions etc.) and approached the end of my PhD, I gave up. Now that this is (mostly) over, I want to read more again.
Also, I can recommend keeping a balance between papers close to your own research area (these are a must anyway, if you are serious about it) and papers from further away. If you can manage to adopt techniques from other areas/fields, this usually results in great things.
I'll probably slow down at some point, but I think at the moment reading gives me more ideas and a general understanding of what I do and don't want to work on. There are drawbacks as well, since some papers take a lot of time to read in depth, and a day is definitely not enough to get a proper understanding.
Re your advice, that's a great point! How do you select papers outside of your comfort zone?
So already most pure DL papers are outside this zone, but I read many of them anyway, when I find them interesting. Although I tend to find it a bit boring when you just adapt the next great model (e.g. Transformer, or whatever comes next) to ASR, most improvements in ASR are just due to that. You know, I'm also interested in all these things like the Neural Turing Machine, although I never really got a chance to apply them to anything I work on. But maybe on language modeling. Language modeling is great anyway, as it is conceptually simple, you can directly apply most models to it, and (big) improvements would usually carry over directly to WER.
Attention-based encoder-decoder models started in machine translation (MT). And this was anyway something part of our team did (although our team was mostly divided into an ASR team and an MT team). And once that came up, it was clear that this should in principle also work for ASR. It was very helpful to get a good baseline from the MT team to work with, and then to reimplement it in my own framework (importing the model parameters in the end, and dumping the hidden states during beam search, to make sure it was 100% correct). And then take the most recent techniques from MT and adapt them to ASR. Others did that as well, but I had the chance to use some more recent methods, and also things like subword units (BPE), which were not standard in ASR at the time. Just adopting this got me some very nice results (and a nice paper in the end). So I try to follow up on MT from time to time to see what I can use for ASR.
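The verification trick mentioned here (import the parameters, dump hidden states, compare step by step) can be sketched roughly like this. A minimal sketch, assuming both frameworks dump one NumPy array per decoder step; the function name and tolerance are illustrative, not from the original post:

```python
import numpy as np

def compare_activations(ref_states, new_states, atol=1e-5):
    """Compare per-step hidden states dumped from two implementations.

    ref_states / new_states: lists of np.ndarray, one per decoder step,
    dumped during beam search from the reference model and the
    reimplementation (after importing the same parameters).
    Raises AssertionError at the first step that diverges, so you can
    bisect exactly where the two implementations disagree.
    """
    assert len(ref_states) == len(new_states), "step count mismatch"
    for t, (a, b) in enumerate(zip(ref_states, new_states)):
        if not np.allclose(a, b, atol=atol):
            max_err = np.abs(a - b).max()
            raise AssertionError(f"step {t}: max abs diff {max_err:.2e}")
    return True
```

Pinpointing the first diverging step is the useful part: numerical noise aside, a mismatch at step t with a match at step t-1 localizes the bug to one operation.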
Then out of personal interest, I'm also into RL. And there are some ideas you can carry over to ASR (and some have been already), although this is somewhat limited. Minimum expected WER training (like policy gradient) was developed independently in the ASR field, but it's interesting to see the relations, and to adopt RL ideas. E.g. actor-critic might be useful (it has already been done, but only to a limited extent so far).
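The connection between minimum expected WER training and policy gradient is just the score-function (REINFORCE) estimator applied to an expected-risk objective. A toy sketch over a softmax distribution on a fixed hypothesis set (all names and the mean-risk baseline are illustrative, not from the discussion above):

```python
import numpy as np

def expected_risk_grad(logits, risks, rng, n_samples=100):
    """Score-function (REINFORCE) estimate of d/dlogits E_y[risk(y)].

    logits: unnormalized scores over hypotheses (toy stand-in for a model).
    risks:  per-hypothesis risk, e.g. WER against the reference transcript.
    Subtracting a baseline (here the exact expected risk; in practice a
    learned critic, as in actor-critic) reduces estimator variance.
    """
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    grad = np.zeros_like(logits)
    baseline = float(np.dot(probs, risks))  # expected risk as baseline
    samples = rng.choice(len(logits), size=n_samples, p=probs)
    for y in samples:
        # For a softmax, d/dlogits log p(y) = onehot(y) - probs
        score = -probs.copy()
        score[y] += 1.0
        grad += (risks[y] - baseline) * score
    return grad / n_samples
```

Stepping against this gradient shifts probability mass toward low-risk (low-WER) hypotheses, which is the whole idea behind sequence-level risk training.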
Another field, even further away, is computational neuroscience. I have taken a Coursera course on this, and I regularly read papers, although I don't really understand them in depth. But this is something which really interests me. I'm closely following all the work by Randall O'Reilly (https://psychology.ucdavis.edu/people/oreilly). E.g. see his most recent lecture (https://compcogneuro.org/).
This already keeps me quite busy. Although I think all of these areas can really help me advance things (well, maybe ASR, although in principle I would also like to work on more generic A(G)I stuff).
If I would have infinite time, I would probably also study some more math, physics and biology...
I haven't done a PhD (yet), but I get the sense that in the early years you spend a lot of time underwater trying to swim.
For example: https://arxiv.wiki/abs/2101.06861
https://github.com/arxivwiki/kurin-paper-scraper https://github.com/arxivwiki/arxivwiki/blob/main/.github/wor... https://github.com/arxivwiki/arxivwiki/blob/main/.github/wor...
It is also mentioned on today's changelog: https://www.notion.so/What-s-New-157765353f2c4705bd45474e5ba...
So that could mean two things:
- SQLite is being used in-memory, but things are still being flushed to IndexedDB for persistence? That shouldn't help with faster page navigation here.
- SQLite is not being used at all, in which case it can't explain the performance increase.
I think the answer is more in the second link (changelog):
> - *Your workspace is now more reliable after Apr 16, 2021's scheduled maintenance — we upgraded from a single database instance to a sharded deployment, which means Notion is now capable of serving 3x as much traffic as before*