2021 Machine Learning (ICT)


Academic unit or major
Undergraduate major in Information and Communications Engineering
Instructor(s)
Kumazawa Itsuo, Nakahara Hiroki
Course component(s)
Lecture (ZOOM)
Day/Period(Room No.)
Mon 7-8 (W242), Thu 7-8 (W242)

Course description and aims

Various machine learning techniques, the mathematics needed to understand them, and the programming techniques for putting them into practice are studied. The techniques covered are (1) multi-layer neural networks, (2) convolutional neural networks (CNN), and (3) other popular non-neural machine learning techniques. The mathematics behind these techniques, such as differentiation, gradient descent, the chain rule, backpropagation, and the nonlinear functions used as activation functions, is studied. Programming techniques and the libraries of standard functions for efficient programming and implementation are practiced.
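The gradient-descent update mentioned above can be sketched in a few lines of Python. This is an illustrative example, not part of the course materials; the loss function and learning rate are chosen only for demonstration:

```python
# Gradient descent on a one-parameter loss L(w) = (w - 3)^2.
# Its gradient dL/dw = 2(w - 3) follows directly from the chain rule.
def gradient_descent(w0, lr=0.1, steps=100):
    w = w0
    for _ in range(steps):
        grad = 2.0 * (w - 3.0)   # analytic gradient of the loss
        w -= lr * grad           # step against the gradient direction
    return w

w_final = gradient_descent(w0=0.0)   # converges toward the minimizer w = 3
```

The same update rule, applied to every weight of a network with gradients delivered by backpropagation, is the core training loop studied in the first half of the course.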

Student learning outcomes

Students study typical machine learning techniques, such as (1) multi-layer neural networks, (2) convolutional neural networks (CNN), and (3) other popular non-neural machine learning techniques, together with their mathematical backgrounds, and learn to implement them by programming.

Keywords

Neural networks, deep learning, convolutional neural network (CNN), backpropagation, gradient descent, minimization techniques for loss functions, Bayes estimation, principal component analysis, boosting, support vector machine, k-means method.

Competencies that will be developed

Specialist skills, intercultural skills, communication skills, critical thinking skills, practical and/or problem-solving skills

Class flow

In the first half of the course (seven classes), the basic principles of multi-layer neural networks and machine learning are studied, together with the basic mathematics needed to understand them. CNN (Convolutional Neural Network), the core technique of deep neural networks, and its learning mechanism are explained theoretically. Programming techniques for deep learning are practiced. Exercises and an examination are held in the eighth class.
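To give a flavor of what the first half covers, the forward pass, backpropagation, and gradient-descent updates of a small two-layer network can be written out directly with NumPy. This is a minimal sketch on made-up toy data, not code from the lectures; the network size and learning rate are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: learn y = x1 + x2 from random inputs.
X = rng.uniform(-1.0, 1.0, size=(200, 2))
y = X.sum(axis=1, keepdims=True)

# Two-layer network: 2 inputs -> 8 sigmoid units -> 1 linear output.
W1 = rng.normal(0.0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.3
for _ in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = h @ W2 + b2
    # Backward pass: the chain rule applied layer by layer
    # (gradient of the squared-error loss, averaged over samples).
    d_out = (out - y) / len(X)
    dW2 = h.T @ d_out
    db2 = d_out.sum(axis=0)
    d_h = (d_out @ W2.T) * h * (1.0 - h)   # sigmoid'(z) = h * (1 - h)
    dW1 = X.T @ d_h
    db1 = d_h.sum(axis=0)
    # Gradient-descent updates.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

pred = sigmoid(X @ W1 + b1) @ W2 + b2
mse = float(np.mean((pred - y) ** 2))   # small after training
```

Classes 2-4 derive exactly these backward-pass formulas from the chain rule and then generalize them to deeper networks.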

Course schedule/Required learning

Class 1: Background and summary of the first half of the course; biological neural networks and their engineering models; computation and programming of the models.
  Required learning: Background and basic knowledge.
Class 2: Basic mathematics needed for computation, learning, and programming of multi-layer neural networks, part 1 (activation functions, softmax, logistic regression, gradient descent, the chain rule).
  Required learning: Study basic mathematical techniques for analyzing multi-layer neural networks.
Class 3: Basic mathematics needed for computation, learning, and programming of multi-layer neural networks, part 2 (backpropagation and its recursive computation).
  Required learning: Study basic mathematical techniques for backpropagation.
Class 4: Programming techniques for multi-layer neural networks and their learning.
  Required learning: Study programming techniques for multi-layer neural networks and their learning.
Class 5: Computation in convolutional neural networks (convolution, pooling, softmax and their roles) and techniques for improving performance (generalization capability and avoiding overfitting).
  Required learning: Study computation techniques of convolutional neural networks.
Class 6: Mathematics for learning of convolutional neural networks (gradient descent and backpropagation).
  Required learning: Study basic mathematics for convolutional neural networks.
Class 7: Tips for programming convolutional neural networks.
  Required learning: Study programming techniques of convolutional neural networks.
Class 8: Introduction to machine learning, and Python programming for machine learning.
Class 9: Least squares method, overfitting, sparse learning, and robust learning.
Class 10: Classification programming using the scikit-learn library: logistic regression, support vector machine (SVM), and decision tree.
Class 11: Clustering, including the k-means method. Pre- and post-processing of a dataset: L1 regularization, feature scaling, and missing values.
Class 12: Maximum likelihood estimation, EM algorithm, Bayesian inference, and confidence values.
Class 13: Data compression: principal component analysis (PCA), linear discriminant analysis (LDA), and kernel PCA.
Class 14: Ensemble learning: majority voting, random forest, bagging, the bootstrap method, and AdaBoost.
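Of the second-half topics, the k-means method of Class 11 is compact enough to sketch directly in NumPy. This is an illustrative toy implementation on made-up data (two well-separated point clouds), not course code; in practice the scikit-learn library used in Classes 10-11 provides a production-quality version:

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain k-means: alternate nearest-center assignment and center update."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]  # random initial centers
    for _ in range(iters):
        # Assignment step: label each point with its nearest center.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Update step: move each center to the mean of its assigned points.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

# Two clusters of identical points, at (0, 0) and (10, 10).
X = np.vstack([np.zeros((5, 2)), np.full((5, 2), 10.0)])
centers, labels = kmeans(X, k=2)   # recovers the two cluster centers
```

The alternation of assignment and update steps is the same coordinate-wise minimization idea that reappears in the EM algorithm of Class 12.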

Out-of-Class Study Time (Preparation and Review)

To enhance effective learning, students are encouraged to spend approximately 100 minutes preparing for class and another 100 minutes reviewing class content afterward (including assignments) for each class.
They should do so by referring to textbooks and other course material.

Textbook(s)

A textbook for the mathematical bases of deep learning: Itsuo Kumazawa, "Learning and Neural Networks," Morikita Publishing Co.

Reference books, course materials, etc.

Books for practicing deep learning programming:
  • Takashi Kanamal, "Machine Learning by Raspberry Pi," Kodansha Publishing Co.
  • S. Raschka and V. Mirjalili, "Python Machine Learning (Second Ed.)," Packt Publishing.
  • C. M. Bishop, "Pattern Recognition and Machine Learning," Springer.

Assessment criteria and methods

Final Examination and Programming Exercise.

Related courses

  • LAS.M102 : Linear Algebra I / Recitation
  • LAS.M101 : Calculus I / Recitation
  • LAS.M105 : Calculus II

Prerequisites (i.e., required knowledge, skills, courses, etc.)

Basic knowledge about Differential and Integral Calculus.
