This course introduces basic knowledge of machine learning and deep learning.
- Understand basic concepts (e.g., classification, convex optimization) and methods (e.g., stochastic gradient descent, backpropagation) for discriminative machine learning models.
- Implement machine learning methods with toolkits and programming.
[Theme] The first half of this lecture covers the basic concepts of machine learning with linear models and optimization. The second half presents the fundamentals and practice of deep learning.
Machine learning, regression, classification, optimization, linear model, neural network, deep learning
|Intercultural skills||Communication skills||✔ Specialist skills||Critical thinking skills||Practical and/or problem-solving skills|
This lecture includes explanations and exercises of machine learning toolkits.
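As an illustration of the kind of toolkit exercise involved, the following is a minimal scikit-learn sketch (the dataset, model, and settings are illustrative examples, not the course's actual assignments):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Load a small benchmark dataset and hold out a test split
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Fit a logistic regression classifier and evaluate held-out accuracy
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(clf.score(X_test, y_test))
```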
|Course schedule||Required learning|
|Class 1||Introduction||Basic concepts of machine learning|
|Class 2||Linear Model 1||Loss functions, empirical loss minimization, overfitting, regularization, bias and variance, linear model (linear regression)|
|Class 3||Optimization 1||Concept of optimization, gradient methods, constrained optimization|
|Class 4||Optimization 2||Convex optimization, duality|
|Class 5||Linear Model 2||Linear model (classification), logistic regression, linear and kernel support vector machines|
|Class 6||Linear Model 3||L1 regularization, sparse learning, Lasso|
|Class 7||Scalable Learning||Stochastic gradient descent, accelerated gradients, momentum, mini-batch, distributed parallel training|
|Class 8||Introduction to Deep Learning||Real-world applications|
|Class 9||Feedforward Neural Network (I)||binary classification, Threshold Logic Units (TLUs), Single-layer Perceptron (SLP), Perceptron algorithm, sigmoid function, Stochastic Gradient Descent (SGD), Multi-layer Perceptron (MLP), Backpropagation, Computation Graph, Automatic Differentiation, Universal Approximation Theorem|
|Class 10||Feedforward Neural Network (II)||multi-class classification, linear multi-class classifier, softmax function, Stochastic Gradient Descent (SGD), mini-batch training, loss functions, activation functions, dropout|
|Class 11||Word embeddings||word embeddings, distributed representation, distributional hypothesis, pointwise mutual information, singular value decomposition, word2vec, word analogy, GloVe, fastText|
|Class 12||DNN for structural data||Recurrent Neural Networks (RNNs), Gradient vanishing and exploding, Long Short-Term Memory (LSTM), Gated Recurrent Units (GRUs), Recursive Neural Network, Tree-structured LSTM, Convolutional Neural Networks (CNNs)|
|Class 13||Encoder-decoder models (I)||language modeling, Recurrent Neural Network Language Model (RNNLM), encoder-decoder models, sequence-to-sequence models, attention mechanism, reading comprehension, question answering, headline generation, multi-task learning|
|Class 14||Encoder-decoder models (II)||character-based RNN, byte-pair encoding, Convolutional Sequence to Sequence (ConvS2S), Transformer, ELMo, BERT|
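To give a concrete flavor of the linear-model and optimization material in Classes 2–4, here is a minimal sketch of linear regression trained by gradient descent on the mean squared error (the data and hyperparameters are illustrative, not taken from the course materials):

```python
import numpy as np

# Toy data generated from y = 2x + 1 plus Gaussian noise
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(100, 1))
y = 2.0 * X[:, 0] + 1.0 + 0.1 * rng.standard_normal(100)

# Design matrix with a bias column; w = [slope, intercept]
Xb = np.hstack([X, np.ones((X.shape[0], 1))])
w = np.zeros(2)

lr = 0.1
for _ in range(500):
    # Gradient of the mean squared error: (2/n) * X^T (Xw - y)
    grad = 2.0 * Xb.T @ (Xb @ w - y) / len(y)
    w -= lr * grad

print(w)  # close to [2.0, 1.0]
```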
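Likewise, for the perceptron algorithm covered in Class 9, a minimal sketch of the classic mistake-driven update rule on linearly separable toy data (again, an illustrative example rather than course code):

```python
import numpy as np

# Linearly separable toy data: label is the sign of x1 + x2
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 2))
t = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)

w = np.zeros(2)
b = 0.0
for epoch in range(20):
    for x_i, t_i in zip(X, t):
        # Update weights only when the example is misclassified
        if t_i * (w @ x_i + b) <= 0:
            w += t_i * x_i
            b += t_i

pred = np.where(X @ w + b > 0, 1, -1)
print((pred == t).mean())  # training accuracy
```

On separable data the perceptron convergence theorem guarantees this loop eventually makes no mistakes, which is why a small fixed number of epochs suffices here.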
Handouts will be given when necessary.
- Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. MIT Press. 2016.
- Christopher M. Bishop. Pattern Recognition and Machine Learning (Information Science and Statistics). Springer. 2006.
Course marks are based on assignments (70%) and exercises (30%).