The title is misleading. In general, I like an applications-first approach, but I have a hard time liking a tutorial with a title like that which proceeds to say things like
>Neural networks consist of linear layers alternating with non-linear layers.
without providing a proper definition of, or intuition for, what is linear and what is not. In the next step they construct NN layers out of ReLUs without saying what a rectified linear unit is, only barely hinting at what it is supposed to do ("non-linearity").
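For the record: a map f is linear iff f(a·x + b·y) = a·f(x) + b·f(y), and ReLU(x) = max(0, x) is about the simplest function that breaks that property, which is exactly why it's useful between layers. A quick NumPy sketch of the missing intuition (my own toy code, not from the article):

    import numpy as np

    # A linear map: matrix multiplication satisfies f(a*x + b*y) == a*f(x) + b*f(y).
    W = np.array([[2.0, -1.0], [0.5, 3.0]])
    linear = lambda v: W @ v

    # ReLU: the rectified linear unit, zero for negative inputs, identity otherwise.
    relu = lambda v: np.maximum(0.0, v)

    x = np.array([1.0, -2.0])
    y = np.array([-3.0, 4.0])
    a, b = 2.0, -1.0

    # Linearity holds for the matrix map...
    print(np.allclose(linear(a * x + b * y), a * linear(x) + b * linear(y)))  # True

    # ...but fails for ReLU, which is what makes it a "non-linearity".
    print(np.allclose(relu(a * x + b * y), a * relu(x) + b * relu(y)))        # False

    # Without a non-linearity in between, stacked linear layers collapse into one:
    W1, W2 = np.random.randn(4, 2), np.random.randn(3, 4)
    z = np.random.randn(2)
    print(np.allclose(W2 @ (W1 @ z), (W2 @ W1) @ z))  # True: two layers act as one

That last line is the whole reason the alternation matters: a deep stack of purely linear layers is no more expressive than a single matrix.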
The article is not worthless; it's a nice tutorial on building a NN classifier for MNIST, but don't expect full mastery of the mathematics relevant to understanding NNs after reading it.
Chapter 2 of the book by Goodfellow et al. is a serviceable concise summary of the relevant concepts, but I don't think that chapter alone is good primary learning material if you are not already familiar with the subject. For that I'd recommend reading a proper undergrad-level textbook (and doing at least some of the problem sets [1]), one example being http://math.mit.edu/~gs/linearalgebra/ , and then continuing with e.g. the rest of Goodfellow's book.
[1] There's no royal road to learning mathematics. Or, for that matter, to learning anything.
These are the notes for a 40-minute talk Rachel will give at the O'Reilly AI conference. It was originally meant to be a longer tutorial, so the scope had to be cut down significantly, whilst the title remained. There are lots of links in the notebook to additional resources with more background info.
Having said that, there really isn't much more linear algebra you need to implement neural networks from scratch. You'll need convolutions of course, although that's not too different from what's shown here.
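For concreteness, a convolution is still just a sliding dot product over patches, so it's the same linear algebra. A minimal sketch of a naive "valid" 2D convolution (my own toy code; real frameworks use im2col or FFT tricks instead of these loops):

    import numpy as np

    def conv2d_valid(image, kernel):
        # Naive "valid" 2D cross-correlation (what DL frameworks call convolution).
        kh, kw = kernel.shape
        oh = image.shape[0] - kh + 1
        ow = image.shape[1] - kw + 1
        out = np.empty((oh, ow))
        for i in range(oh):
            for j in range(ow):
                # Each output pixel is a dot product of the kernel with a patch.
                out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
        return out

    img = np.random.randn(28, 28)        # an MNIST-sized image
    edge = np.array([[1.0, -1.0]])       # a tiny horizontal-difference kernel
    print(conv2d_valid(img, edge).shape)  # (28, 27)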
For those interested in much more detail, Rachel has a full computational linear algebra course online http://www.fast.ai/2017/07/17/num-lin-alg/ . Most of that isn't needed for most deep learning, however.