Data Science Learning Sequence
I get a lot of questions about what someone needs to learn to become a data scientist. "Data scientist" can mean many different things, so for what it's worth, here is a list of topics, with free references for each, for anyone who wants to really understand the latest machine learning models and implement them.
From calculus, you need knowledge of derivatives, gradients, the chain rule, and vectors for direct use in machine learning algorithms. The ideas of integration and series are necessary to understand probability and statistics at a useful level.
Calculus, Volume 1 by Strang and Herman.
Differential and integral calculus (most important: derivatives, the chain rule, optimization, integrals)
Calculus, Volume 2 by Strang and Herman.
Integral calculus (most important: power series)
Calculus, Volume 3 by Strang and Herman.
Multivariate calculus (most important: vectors, partial derivatives, gradients)
The Essence of Calculus video playlist by Grant Sanderson
A tremendous animated series on single-variable calculus, and a great supplement to the books above.
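To see why derivatives and the chain rule matter so directly in machine learning, here is a minimal Python sketch (the function and values are made up for illustration): it differentiates a composite function by the chain rule and checks the result numerically, which is exactly the kind of sanity check used when implementing gradients by hand.

```python
# f(x) = (3x + 1)^2 is a composition g(h(x)) with g(u) = u^2 and h(x) = 3x + 1,
# so the chain rule gives f'(x) = g'(h(x)) * h'(x) = 2(3x + 1) * 3.

def f(x):
    return (3 * x + 1) ** 2

def f_prime(x):
    return 2 * (3 * x + 1) * 3  # chain rule: g'(h(x)) * h'(x)

def numerical_derivative(func, x, eps=1e-6):
    # Central finite difference: (f(x+eps) - f(x-eps)) / (2*eps)
    return (func(x + eps) - func(x - eps)) / (2 * eps)

x = 2.0
print(f_prime(x))                  # analytic derivative: 42.0
print(numerical_derivative(f, x))  # finite difference agrees closely
```

Gradient-based learning is this same idea applied to functions of many variables: the chain rule propagates derivatives through compositions of simple pieces.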
Learn one programming language very well. Learning other languages is quite easy once you know one.
I recommend Python as the best starting point. I advise against using R. Additional languages can be very helpful, especially C++, but it's very specific to what you want to do, so I will not make specific recommendations.
Many popular frameworks supercharge what you can accomplish. If you want to implement neural networks quickly, learn Keras or PyTorch.
Probability and Statistics
From probability and statistics, it is important to understand random variables, common distributions, expectation and variance, conditional probability, and the basics of estimation; these ideas underpin nearly every machine learning method.
Probability and Statistics: The Science of Uncertainty by Evans and Rosenthal.
Pretty much all the topics are good here.
Lectures from my intro course (single-variable calculus prerequisite)
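One habit worth building alongside the reading: check probability facts by simulation. Here is a small Python sketch (the example, two dice, is my own choice) that estimates an expectation and variance by Monte Carlo and compares them to the exact values.

```python
import random

# Estimate E[X] and Var(X) by simulation, where X = sum of two fair dice.
# Exact values: E[X] = 7, Var(X) = 35/6 ≈ 5.83.
random.seed(0)  # seed for reproducibility
samples = [random.randint(1, 6) + random.randint(1, 6) for _ in range(100_000)]

mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)

print(mean)  # close to 7
print(var)   # close to 35/6
```

When a textbook derivation feels abstract, ten lines of simulation like this usually make it concrete.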
Data Mining and Machine Learning: Fundamental Concepts and Algorithms by Mohammed J. Zaki and Wagner Meira, Jr. (2020) -- free to read online
Math background for machine learning and a huge array of methods for pattern mining, clustering, classification, and regression. (A better starting point than Hastie if your math background is not amazing.)
The Elements of Statistical Learning by Trevor Hastie, Robert Tibshirani, and Jerome Friedman (2009)
Linear methods, kernel methods, regularization, model selection, tree methods, ensembling, support vector machines, unsupervised learning.
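As a taste of the regularization material in Hastie et al., here is a hedged sketch of ridge regression in its very simplest form, one feature and no intercept, where the penalized least-squares problem has a one-line closed form (the data are made up):

```python
# Ridge regression with one feature and no intercept:
# minimize sum((w*x - y)^2) + lam * w^2 over w.
# Setting the derivative to zero gives w = sum(x*y) / (sum(x^2) + lam).

def ridge_coefficient(xs, ys, lam):
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    return sxy / (sxx + lam)

xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]  # y = 2x exactly

print(ridge_coefficient(xs, ys, 0.0))   # 2.0: no penalty recovers least squares
print(ridge_coefficient(xs, ys, 14.0))  # 1.0: the penalty shrinks w toward zero
```

The shrinkage effect visible here, larger penalty means a smaller coefficient, is the essence of what the book develops in full generality.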
Machine Learning lectures by Andrew Ng
Ng's lectures from Stanford. They cover a wide array of basics: linear models, gradient descent, intro neural nets, regularization, support vector machines, principal component analysis, and anomaly detection.
(It is based on Octave, but focus on the ideas, not the programming syntax.)
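The first algorithm in those lectures, batch gradient descent for linear regression, fits comfortably in a few lines of Python. This is a minimal sketch with made-up data, not Ng's own code:

```python
# Batch gradient descent for simple linear regression y ≈ w*x + b,
# minimizing mean squared error over a tiny made-up dataset.

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]  # generated by y = 2x + 1
w, b = 0.0, 0.0
lr = 0.05  # learning rate

for _ in range(2000):
    n = len(xs)
    # Gradients of (1/n) * sum((w*x + b - y)^2) with respect to w and b
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
    w -= lr * grad_w
    b -= lr * grad_b

print(w, b)  # approaches (2, 1)
```

Every fancier optimizer in deep learning is a variation on this loop: compute gradients of a loss, step the parameters downhill.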
Neural Networks video playlist by Grant Sanderson
Beautiful animated videos on feedforward neural nets, stochastic gradient descent, and backpropagation, corresponding to Nielsen's book. The best place to start for neural nets, in my view.
Neural Networks and Deep Learning by Michael Nielsen
Introduction to feedforward neural networks, stochastic gradient descent, and backpropagation. The book makes these ideas easy to understand.
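To make backpropagation concrete, here is a sketch of one forward and backward pass through the smallest possible deep network, one sigmoid hidden neuron feeding one sigmoid output neuron, with quadratic cost as in Nielsen's setup. The weights and the training example are made up; a real implementation would vectorize this over layers and batches.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# One input, one sigmoid hidden neuron, one sigmoid output neuron.
x, y = 1.0, 0.0    # training example (input, target)
w1, b1 = 0.5, 0.0  # hidden layer weight and bias
w2, b2 = -0.3, 0.1 # output layer weight and bias

# Forward pass
z1 = w1 * x + b1; a1 = sigmoid(z1)
z2 = w2 * a1 + b2; a2 = sigmoid(z2)
cost = 0.5 * (a2 - y) ** 2

# Backward pass: apply the chain rule layer by layer, output to input
delta2 = (a2 - y) * a2 * (1 - a2)     # dC/dz2  (sigmoid'(z) = a*(1-a))
grad_w2 = delta2 * a1                 # dC/dw2
delta1 = delta2 * w2 * a1 * (1 - a1)  # dC/dz1, error propagated backward
grad_w1 = delta1 * x                  # dC/dw1

print(grad_w1, grad_w2)
```

The whole algorithm is in those last four lines: each layer's error is the next layer's error pushed back through the weights and the activation's derivative.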
Deep Learning. Ian Goodfellow, Yoshua Bengio, and Aaron Courville (2016)
Relatively mathematical treatment of deep learning. Its first five chapters are a useful set of prerequisites (minimal coverage of probability, linear algebra, numerical analysis, and machine learning basics). It then covers MLPs, CNNs, and RNNs before moving on to more modern techniques.
CS231n: Convolutional Neural Networks for Visual Recognition course by Fei-Fei Li, Ranjay Krishna, Danfei Xu and their army of TAs!
Notes, lectures, slides, assignments for a Stanford course, mostly on image recognition using CNNs and other models, image preprocessing, and computer vision.
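The operation at the heart of that course, convolution, is simple to write down even though frameworks vectorize it heavily. Here is a plain-Python sketch of a "valid" 2-D cross-correlation (the example image and kernel are made up):

```python
# Naive "valid" 2-D cross-correlation: slide the kernel over every position
# where it fits fully inside the image, and take the elementwise-product sum.
# The nested loops are for clarity only; real libraries vectorize this.

def conv2d_valid(image, kernel):
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

# A horizontal-gradient kernel detects the vertical edge in this tiny image
# (left half zeros, right half ones).
image = [[0, 0, 1, 1] for _ in range(4)]
kernel = [[-1, 1]]
print(conv2d_valid(image, kernel))  # each output row is [0, 1, 0]
```

A CNN learns kernels like this one from data rather than hand-designing them; everything else in the course builds on that idea.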