Deep Learning for Coders / Chapter-1 / Week-1
Published: June 09, 2021
The following notes are from Week-1 of the FastAI/fastbook reading session hosted by Aman Arora (Weights & Biases)
- Important links
- Before we get on to week 2 we need to set up a GPU on which we can train our models. Google Colab would be the easiest to use, or we can go for a paid service like Amazon AWS, GCP, JarvisLab, Azure & DataCrunch
- KeyPoints - Chapter 1(Your Deep Learning Journey)
- Deep Learning (DL) is for everyone, and it can be learned without starting with the math.
- DL is good at a lot of tasks, such as natural language processing, computer vision, etc.
- It’s a technique to extract and transform data, and it uses multiple layers of neural networks.
- Each of these layers takes input from the previous layer and refines it further. Over time, the layers improve accuracy & minimize error, and the network learns to perform a specific task.
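The "each layer refines the previous layer's output" idea can be sketched in plain Python (no frameworks). The weight values below are hypothetical, fixed numbers chosen only to show the data flow, not learned values:

```python
# A minimal sketch of stacked layers: each layer takes the previous
# layer's output as its input and transforms it further.

def relu(xs):
    # A common non-linearity applied between layers.
    return [max(0.0, v) for v in xs]

def layer(inputs, weights, biases):
    # One linear layer: each output is a weighted sum of all inputs plus a bias.
    return [sum(w * x for w, x in zip(row, inputs)) + b
            for row, b in zip(weights, biases)]

x = [1.0, 2.0]                                              # raw input
h = relu(layer(x, [[0.5, -0.2], [0.3, 0.8]], [0.1, 0.0]))   # first layer refines the input
y = layer(h, [[1.0, -1.0]], [0.0])                          # second layer refines h further
```

In a real network the weights start random and are adjusted during training; here they are frozen so the layer-by-layer flow is easy to trace.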
- Libraries like fastai or PyTorch may become outdated, so it’s better to always learn the underlying low-level concepts & algorithms.
- There are two versions of the notebooks: the regular one, which contains all the notes from the book along with the code, and a clean version, where we can reproduce what we have learned.
- Machine Learning - Training of programs, developed by allowing a computer to learn from its experience, rather than manually coding the individual steps.
- Deep Learning - It is a specialty within the more general discipline of machine learning, based on neural networks with multiple layers.
- A general program takes inputs & outputs results.
- In machine learning the program is called a model, since it takes both inputs & weights.
- A machine learning program outputs results based on inputs & weights. The model can produce different outputs for the same inputs with a different set of weights.
- A Model is called trained when the weight assignment is final.
- Important Note - A Trained Model can be treated like a regular computer program.
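The points above can be illustrated with a toy model (the weights here are hypothetical, purely for illustration): the same input gives different outputs under different weights, and once the weight assignment is final the model is just an ordinary function you can call like any program.

```python
# Hedged sketch: a "model" is a program whose behaviour depends on both
# its inputs and its weights.

def model(x, weights):
    # A one-weight-plus-bias linear model: output = w * x + b.
    w, b = weights
    return w * x + b

untrained = (0.0, 0.0)   # arbitrary starting weights
trained = (2.0, 1.0)     # a hypothetical final weight assignment

print(model(3.0, untrained))  # same input, different weights ...
print(model(3.0, trained))    # ... different output
```

After training, `weights` never change again, so `model(x, trained)` behaves exactly like a regular computer program.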
- Neural Network - A flexible mathematical function that, in principle, can be used to solve any problem to any level of accuracy.
- In neural networks, the general way to automatically update the weights for any given task is called “Stochastic Gradient Descent” (SGD). The process of going back through the network to work out how to update the weights is called back-propagation.
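The core idea of SGD can be shown on a single weight. This is a minimal sketch: the gradient of the squared error is worked out by hand for the hypothetical model `y = w * x`, which is exactly the step that back-propagation automates for deep networks with many layers:

```python
# Gradient descent on one weight: fit y = w * x to one example (x=2, y=6).

x_val, y_target = 2.0, 6.0
w = 0.0           # initial (untrained) weight
lr = 0.1          # learning rate: how big each update step is

for _ in range(50):
    pred = w * x_val                       # forward pass: compute prediction
    grad = 2 * (pred - y_target) * x_val   # gradient of (pred - target)^2 w.r.t. w
    w -= lr * grad                         # update the weight against the gradient

print(round(w, 3))  # w approaches 3.0, since 3 * 2 == 6
```

Real SGD does the same thing for millions of weights at once, using a random mini-batch of examples per step (the "stochastic" part), with back-propagation supplying the gradients.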