Normally, Deep RL courses teach a lot of mathematically involved theory. You get the practical applications near the end (if at all).
I have tried to turn that on its head. In the top-down approach, you learn practical skills first, then go deeper later. This is much more fun.
This course (the first in a planned multi-part series) shows how to use the Deep Reinforcement Learning framework RLlib to solve OpenAI Gym environments. I provide a big-picture overview of RL and show how to use the tools to get the job done. This approach is similar to learning Deep Learning by building and training various deep networks with a high-level framework such as Keras.
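To give a flavour of what "solving a Gym environment" means, here is a minimal sketch of the reset()/step() interaction loop that Gym environments expose and that frameworks like RLlib drive for you. The toy CountdownEnv and run_episode helper below are my own illustration, not code from the course or from Gym itself; they just mimic the classic Gym interface.

```python
import random

class CountdownEnv:
    """Toy Gym-style environment: the episode ends after `horizon` steps,
    and action 1 earns a reward of 1.0 per step."""
    def __init__(self, horizon=10):
        self.horizon = horizon
        self.t = 0

    def reset(self):
        # Gym convention: reset() returns the initial observation.
        self.t = 0
        return self.t

    def step(self, action):
        # Gym convention: step(action) returns (obs, reward, done, info).
        self.t += 1
        reward = 1.0 if action == 1 else 0.0
        done = self.t >= self.horizon
        return self.t, reward, done, {}

def run_episode(env, policy):
    """Roll out one episode with the given policy; return the total reward."""
    obs = env.reset()
    total, done = 0.0, False
    while not done:
        obs, reward, done, _ = env.step(policy(obs))
        total += reward
    return total

# A random policy, as a stand-in for the policy an RL agent would learn.
random.seed(0)
print(run_episode(CountdownEnv(), lambda obs: random.choice([0, 1])))
```

An RL library's job is essentially to replace the random policy above with one that is trained to maximise the episode's total reward.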
In the next course in the series (open for pre-enrollment), we move on to solving real-world Deep RL problems using custom environments and various tricks that make the algorithms work better [1].
The main advantage of this sequence is that these practical skills can be picked up fast and used in real life immediately. The involved mathematical bits can be picked up later. RLlib is the industry standard, so you won't need to change tools as you progress.
This is the first time I have made a course on my own. I learned flip-chart drawing to illustrate the slides and notebooks. That was fun, considering how much I suck at drawing. I am using Teachable as the LMS, LaTeX (Beamer) for the slides, Sketchbook for illustrations, a Blue Yeti for audio recording, OBS Studio for screencasting, and Filmora for video editing. The captions are first auto-generated on YouTube and then hand-edited to fix errors and improve formatting. I do the majority of the production on Linux and then switch to Windows for video editing.
I released the course last month, and the makers of RLlib got in touch to express their approval. That's the best thing to happen so far.
Please feel free to try it and ask any questions. I am around and will do my best to answer them.
[0] https://www.datacamp.com/courses/unit-testing-for-data-scien...

[1] https://courses.dibya.online/p/realdeeprl