<aside> ⚠️ This note serves as a reminder of the book's content, including additional research on the mentioned topics. It is not a substitute for the book. Most images are sourced from the book or referenced.
</aside>
<aside> 🚨 I've noticed that taking notes on this site while reading significantly extends the time it takes to finish the book. I've stopped noting everything as in previous chapters and instead keep reading while highlighting and hand-writing notes. I plan to return to the detailed style when I have more time.
</aside>
<aside> ✊ This book contains 1007 pages of readable content. If you read at a pace of 10 pages per day, it will take you approximately 3.3 months (without missing a day) to finish it. If you aim to complete it in 2 months, you'll need to read at least 17 pages per day.
</aside>
The Perceptron: one of the simplest ANN architectures (ANN = Artificial Neural Networks)
Figure 10-4. TLU (threshold logic unit): an artificial neuron that computes a weighted sum of its inputs $w^Tx$, plus a bias term b, then applies a step function
The most common step function is the Heaviside step function; sometimes the sign function is used instead.
$$ \text { heaviside }(z)= \begin{cases}0 & \text { if } z<0 \\ 1 & \text { if } z \geq 0\end{cases} $$
$$ \operatorname{sgn}(z)= \begin{cases}-1 & \text { if } z<0 \\ 0 & \text { if } z=0 \\ +1 & \text { if } z>0\end{cases} $$
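A minimal sketch of the TLU from Figure 10-4 with both step functions; the AND weights at the end are an illustrative hand-picked choice, not from the book:

```python
import numpy as np

def heaviside(z):
    """Heaviside step: 0 if z < 0, 1 if z >= 0."""
    return np.where(z < 0, 0, 1)

def sgn(z):
    """Sign step: -1, 0, or +1 depending on the sign of z."""
    return np.sign(z).astype(int)

def tlu(x, w, b, step=heaviside):
    """Threshold logic unit: step(w^T x + b)."""
    return step(np.dot(w, x) + b)

# A TLU with weights (1, 1) and bias -1.5 implements logical AND:
w, b = np.array([1.0, 1.0]), -1.5
print(tlu(np.array([1, 1]), w, b))  # → 1
print(tlu(np.array([1, 0]), w, b))  # → 0
```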
How is a perceptron trained? → It follows a variant of Hebb’s rule: “Cells that fire together, wire together” (the connection weight between two neurons tends to increase when they fire simultaneously).
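The resulting perceptron learning rule, $w \leftarrow w + \eta\,(y - \hat{y})\,x$, can be sketched as follows; the dataset (logical AND) and hyperparameters are illustrative choices:

```python
import numpy as np

# Perceptron learning rule: w ← w + η (y − ŷ) x, applied per instance.
# Trained here on logical AND, which is linearly separable, so the
# perceptron convergence theorem guarantees it eventually fits.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1])

rng = np.random.default_rng(42)
w = rng.normal(size=2)  # small random initial weights
b = 0.0
eta = 0.1               # learning rate

for epoch in range(100):
    for x_i, y_i in zip(X, y):
        y_hat = 1 if x_i @ w + b >= 0 else 0
        w += eta * (y_i - y_hat) * x_i
        b += eta * (y_i - y_hat)

preds = [(1 if x @ w + b >= 0 else 0) for x in X]
print(preds)  # → [0, 0, 0, 1]
```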
Perceptrons have limitations (e.g., they cannot solve the XOR problem) → use a multilayer perceptron (MLP).
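One way to see why a hidden layer fixes XOR: with hand-picked weights (an illustrative choice, not trained), two hidden TLUs compute OR and AND, and the output TLU fires for "OR but not AND", which is exactly XOR:

```python
import numpy as np

def step(z):
    """Heaviside step applied elementwise."""
    return (z >= 0).astype(int)

# Hand-picked weights: hidden unit 1 computes OR, hidden unit 2 computes AND;
# the output unit fires when OR is on but AND is off, i.e. XOR.
W_hidden = np.array([[1.0, 1.0],    # OR unit
                     [1.0, 1.0]])   # AND unit
b_hidden = np.array([-0.5, -1.5])
w_out = np.array([1.0, -1.0])
b_out = -0.5

def xor_mlp(x):
    h = step(W_hidden @ x + b_hidden)   # hidden layer
    return int(step(w_out @ h + b_out)) # output layer

for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
    print(x, "→", xor_mlp(np.array(x)))  # → 0, 1, 1, 0
```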
Perceptrons do not output a class probability → use logistic regression instead.
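A sketch of the contrast: logistic regression replaces the hard step with the sigmoid $\sigma(z) = 1/(1+e^{-z})$, so the same linear score becomes a probability in $(0, 1)$ (the weights below are made up for illustration):

```python
import numpy as np

def sigmoid(z):
    """Logistic function: squashes any real score into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

# Same linear score w^T x + b as a perceptron, but interpreted as a probability:
w, b = np.array([2.0, -1.0]), 0.5
x = np.array([1.0, 1.0])
p = sigmoid(w @ x + b)  # score = 2 - 1 + 0.5 = 1.5
print(round(p, 3))      # → 0.818
```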
When an ANN contains a deep stack of hidden layers → deep neural network (DNN)
In the early days, computers were not powerful enough → training MLPs was a problem, even with gradient descent.
→ Backpropagation: an algorithm that efficiently computes the gradient of the cost function with respect to every parameter of an MLP, so gradient descent can minimize it.
→ Read this note.
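A minimal numerical sketch of backpropagation, assuming a tiny 2-1-1 network with sigmoid activations and squared-error loss on a single example (all values here are illustrative, not from the book):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, -0.2])  # one training example
y = 1.0                    # its target

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(1, 2)), np.zeros(1)  # hidden layer (1 unit)
W2, b2 = rng.normal(size=(1, 1)), np.zeros(1)  # output layer
eta = 0.5

for it in range(200):
    # Forward pass: compute activations layer by layer.
    h = sigmoid(W1 @ x + b1)
    y_hat = sigmoid(W2 @ h + b2)
    loss = 0.5 * (y_hat - y) ** 2

    # Backward pass: chain rule, output layer first.
    d_out = (y_hat - y) * y_hat * (1 - y_hat)  # dL/d(output pre-activation)
    d_hid = (W2.T @ d_out) * h * (1 - h)       # dL/d(hidden pre-activation)

    # Gradient descent step on every parameter.
    W2 -= eta * np.outer(d_out, h)
    b2 -= eta * d_out
    W1 -= eta * np.outer(d_hid, x)
    b1 -= eta * d_hid

print(loss.item())  # loss shrinks toward 0 as training proceeds
```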
<aside> ☝ From this, I've decided to browse additional materials to deepen my understanding of Deep Learning. I found that the book has become more generalized than I expected, so I'll explore other resources before returning to finish it.
</aside>