I may be biased because I have a background in mathematics, but after working with TensorFlow for about 6 months now, I don't think you really need to understand linear algebra or multivariate calculus to work with neural nets, unless you're trying to implement your own engine.
Edit: I don't mean to discourage anyone from learning. An understanding of the relatively simple mathematics behind NNs may give you additional intuition about their behavior, but it appears that one can treat neural nets almost like black boxes, given a suitable engine to work with like TF.
As an aside, TF is a pretty magnificent library. It pretty much works out of the box, and in addition to the Python and C++ bindings, there appears to be an unofficial port to C#, although I haven't tried it yet. I strongly recommend the tutorials on the TensorFlow site for anyone interested in experimenting.