Machine Learning Resources
I am currently working on audio/speech processing, TTS etc., so…
Deep Learning in Speech resources:
- Understanding sound/audio:
  - @radekosmulski - A great starting point, also usable in Google Colab.
  - A very good intro to audio.
- Progress tracker in NLP by @seb_ruder
- Everything you should know about sound
Generation/synthesis of new sounds based on a training set (copied from):
- Jake Fiala: “Deep Learning and Sound” http://fiala.uk/notes/deep-learning-and-sound-01-intro
- GRUV: https://github.com/MattVitelli/GRUV. The author found that LSTM worked better than GRU.
- John Glover: http://www.johnglover.net/blog/generating-sound-with-rnns.html Glover used an LSTM fed by a phase vocoder (essentially an STFT).
- Google Magenta for MIDI: https://magenta.tensorflow.org/welcome-to-magenta
- Google WaveNet for audio: https://deepmind.com/blog/wavenet-generative-model-raw-audio/
- WaveNet is slow. “Fast Wavenet”: https://github.com/tomlepaine/fast-wavenet
- WaveNet in Keras: https://github.com/basveeling/wavenet
- Neural voice cloning with few samples implementations (to be added):
- Text-to-speech implementations (to be added):
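Glover's phase-vocoder front end above is, at its core, a short-time Fourier transform. As a minimal sketch of that idea in plain NumPy (not Glover's actual code; the frame size and hop length here are illustrative defaults):

```python
import numpy as np

def stft(signal, frame_size=1024, hop=256):
    """Short-time Fourier transform: windowed frames -> complex spectra."""
    window = np.hanning(frame_size)
    n_frames = 1 + (len(signal) - frame_size) // hop
    frames = np.stack([
        signal[i * hop : i * hop + frame_size] * window
        for i in range(n_frames)
    ])
    # rfft yields frame_size // 2 + 1 frequency bins per frame
    return np.fft.rfft(frames, axis=1)

# One second of a 440 Hz sine at 16 kHz as a toy input
sig = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000.0)
spec = stft(sig)
# A phase vocoder then works with magnitude and phase per bin,
# which is the representation the RNN is trained on in Glover's setup.
magnitude, phase = np.abs(spec), np.angle(spec)
```

Resynthesis inverts the process (inverse FFT per frame, then overlap-add), which is why the model can predict in the spectral domain and still produce a waveform.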
Deep Learning cheatsheet by @rusty1s.
Resources by @thom_wolf.
My self-educational approach is usually to get a few fairly exhaustive books and read them from cover to cover.
- Here is my reading list to join the NLP/AI/ML field.
- The “Deep Learning” book by Ian Goodfellow, Yoshua Bengio and Aaron Courville is a good resource to get a quick overview of the current tools.
- “Artificial Intelligence: A Modern Approach” by Stuart Russell and Peter Norvig is a great resource for all pre-neural-network tools and methods.
- “Machine Learning: A Probabilistic Perspective” by Kevin P. Murphy is a great resource to go deeper into the probabilistic approach and get a good exposure to Bayesian tools.
- “Information Theory, Inference and Learning Algorithms” by David MacKay is a little gem that explains probabilities and information theory so clearly it’s almost unbelievable.
- “The Book of Why: The New Science of Cause and Effect” by Judea Pearl is a good introduction to causality (more accessible than the big “Causality: Models, Reasoning and Inference”).
- “Reinforcement Learning: An Introduction” by Richard S. Sutton and Andrew G. Barto is a great resource to get an introductory exposure to reinforcement learning.
- Natural Language Processing: three great resources I’ve read with interest:
- Kyunghyun Cho’s lecture notes on “Natural Language Processing with Representation Learning” are great.
- Yoav Goldberg’s book on “Neural Network Methods in Natural Language Processing” is nice too (see also an older free version here).
- Jacob Eisenstein’s textbook on “Natural Language Processing” is also a very exhaustive read.
- It’s also good to complement this with a few online courses depending on what field you feel you should be diving deeper into.
I took the following classes:
- Computational Probability and Inference (6.008.1x) from edX.
- Probabilistic Graphical Models Specialization from Coursera.