- 1. 2022 - Machine Learning Course Rules
- 2. 2021 - Machine Learning Course Rules
- 3. Section 1: 2021 - (Part 1) Introduction to Basic Machine Learning Concepts
- 4. 2021 - (Part 2) Introduction to Basic Deep Learning Concepts
- 5. 2022 - Colab Tutorial
- 6. 2022 - PyTorch Tutorial 1
- 7. 2022 - PyTorch Tutorial 2
- 8. 2021 - Google Colab Tutorial
- 9. 2021 - PyTorch Tutorial Part 1
- 10. 2021 - PyTorch Tutorial Part 2 (English with subtitles)
- 11. 2022 - Homework HW1
- 12. 2021 - Homework HW1
- 13. (Optional) To Learn More - Introduction to Deep Learning
- 14. (Optional) To Learn More - Backpropagation
- 15. (Optional) To Learn More - Predicting Pokemon
- 16. (Optional) To Learn More - Classifying Pokemon
- 17. (Optional) To Learn More - Logistic Regression
- 18. Section 2: 2021 - Strategies for Machine Learning Tasks
- 19. 2021 - What to Do When Your Neural Network Won't Train (1): Local Minima and Saddle Points
- 20. 2021 - What to Do When Your Neural Network Won't Train (2): Batch and Momentum
- 21. 2021 - What to Do When Your Neural Network Won't Train (3): Automatically Adjusting the Learning Rate
- 22. 2021 - What to Do When Your Neural Network Won't Train (4): The Loss Function Can Matter Too
- 23. 2022 - Revisiting the Pokemon and Digimon Classifier: A Brief Look at Machine Learning Theory
- 24. (Optional) To Learn More - Gradient Descent Example 1
- 25. (Optional) To Learn More - Gradient Descent Example 2
- 26. (Optional) To Learn More - Optimization for Deep Learning (1_2)
- 27. (Optional) To Learn More - Optimization for Deep Learning (2_2)
- 28. 2022 - Homework HW2
- 29. 2021 - HW2 Walkthrough (Chinese, low resolution)
- 30. 2021 - HW2 Walkthrough (English, HD with subtitles)
- 31. Section 3: 2021 - Convolutional Neural Networks (CNN)
- 32. 2022 - Why Do We Still Overfit Even with a Validation Set?
- 33. 2022 - Machine Learning Where You Can Have Your Cake and Eat It Too
- 34. (Optional) To Learn More - Spatial Transformer Layer
- 35. 2022 - HW3 Walkthrough
- 36. 2021 - HW3 Walkthrough (Chinese, low resolution)
- 37. 2021 - HW3 Walkthrough (English, HD with subtitles)
- 38. Section 4: 2021 - Self-attention (Part 1)
- 39. 2021 - Self-attention (Part 2)
- 40. (Optional) To Learn More - Recurrent Neural Network (Part I)
- 41. (Optional) To Learn More - Recurrent Neural Network (Part II)
- 42. (Optional) To Learn More - Graph Neural Network (1_2)
- 43. (Optional) To Learn More - Graph Neural Network (2_2)
- 44. (Optional) To Learn More - Unsupervised Learning - Word Embedding
- 45. 2022 - HW4 Walkthrough
- 46. 2021 - HW4 Walkthrough (Chinese, low resolution)
- 47. 2021 - HW4 Walkthrough (English, HD, no subtitles)
- 48. Section 5: 2021 - What to Do When Your Neural Network Won't Train (5): Batch Normalization
- 49. 2021 - Transformer (Part 1)
- 50. 2021 - Transformer (Part 2)
- 51. 2022 - All Kinds of Amazing Self-attention Variants
- 52. (Optional) To Learn More - Non-Autoregressive Sequence Generation
- 53. (Optional) To Learn More - Pointer Network
- 54. 2022 - HW5 Walkthrough
- 55. 2021 - HW5 Walkthrough (Chinese) + Judgeboi Explanation
- 56. 2021 - HW5 Slides Tutorial (machine-translated English)
- 57. 2021 - HW5 Code Tutorial (machine-translated English)
- 58. Section 6: 2021 - Generative Adversarial Networks (GAN) (1): Basic Concepts
- 59. 2021 - Generative Adversarial Networks (GAN) (2): Theory and WGAN
- 60. 2021 - Generative Adversarial Networks (GAN) (3): Evaluating Generators and Conditional Generation
- 61. 2021 - Generative Adversarial Networks (GAN) (4): Cycle GAN
- 62. (Optional) To Learn More - GAN Basic Theory
- 63. (Optional) To Learn More - General Framework
- 64. (Optional) To Learn More - WGAN EBGAN
- 65. (Optional) To Learn More - Unsupervised Learning - Deep Generative Model (Part I)
- 66. (Optional) To Learn More - Unsupervised Learning - Deep Generative Model (Part II)
- 67. (Optional) To Learn More - Flow-based Generative Model
- 68. 2021 - HW6 Walkthrough (Chinese, low resolution)
- 69. 2021 - HW6 Walkthrough (English, HD with subtitles)
- 70. 2022 - HW6 Walkthrough
- 71. Section 7: 2021 - Self-supervised Learning (1): Sesame Street and Attack on Titan
- 72. 2021 - Self-supervised Learning (2): Introduction to BERT
- 73. 2021 - Self-supervised Learning (3): BERT Anecdotes
- 74. 2021 - Self-supervised Learning (4): GPT's Ambition
- 75. 2022 - Using Self-supervised Models Effectively: Data-Efficient & Parameter-Efficient Tuning
- 76. 2022 - Amazing Self-supervised Models for Speech and Vision
- 77. (Optional) To Learn More - BERT and its family - Introduction and Fine-tune
- 78. (Optional) To Learn More - ELMo BERT GPT XLNet MASS BART UniLM ELECTRA and others
- 79. (Optional) To Learn More - Multilingual BERT
- 80. (Optional) To Learn More - GPT-3: The Model from the Dark Continent in Hunter x Hunter
- 81. 2021 - HW7 Walkthrough (Chinese, low resolution)
- 82. 2022 - Homework 7
- 83. Section 8: 2021 - Auto-encoder (Part 1): Basic Concepts
- 84. 2021 - Auto-encoder (Part 2): The Bowtie Voice Changer and More Applications
- 85. 2021 - Anomaly Detection (1_7)
- 86. 2021 - Anomaly Detection (2_7)
- 87. 2021 - Anomaly Detection (3_7)
- 88. 2021 - Anomaly Detection (4_7)
- 89. 2021 - Anomaly Detection (5_7)
- 90. 2021 - Anomaly Detection (6_7)
- 91. 2021 - Anomaly Detection (7_7)
- 92. (Optional) To Learn More - Unsupervised Learning - Linear Methods
- 93. (Optional) To Learn More - Unsupervised Learning - Neighbor Embedding
- 94. 2021 - HW8 Walkthrough (Chinese, low resolution)
- 95. 2022 - Homework 8 Walkthrough
- 96. Section 9: 2021 - Explainable Machine Learning (Part 1): Why Can a Neural Network Correctly Tell Pokemon from Digimon?
- 97. 2021 - Explainable Machine Learning (Part 2): What Does a Cat Look Like in the Machine's Mind?
- 98. 2022 - Adversarial Attacks in Natural Language Processing (taught by TA Cheng-Han Chiang) - Part 1
- 99. 2021 - HW9 Walkthrough (Chinese, low resolution)
- 100. 2022 - Homework 9
- 101. Section 10: 2021 - Adversarial Attack (Part 1): Basic Concepts
- 102. 2021 - Adversarial Attack (Part 2): Can Neural Networks Escape the Bottomless Malice of Humans?
- 103. 2022 - Adversarial Attacks in Natural Language Processing (taught by TA Cheng-Han Chiang) - Part 2
- 104. 2022 - Adversarial Attacks in Natural Language Processing (taught by TA Cheng-Han Chiang) - Part 3
- 105. 2022 - Imitation Attacks and Backdoor Attacks in Natural Language Processing (taught by TA Cheng-Han Chiang)
- 106. (Optional) To Learn More - More about Adversarial Attack (1_2)
- 107. (Optional) To Learn More - More about Adversarial Attack (2_2)
- 108. 2021 - HW10 Walkthrough (Chinese, low resolution)
- 109. 2022 - Homework 10
- 110. Section 11: 2021 - Overview of Domain Adaptation
- 111. 2022 - Messing with Self-supervised Models: Three Stories about BERT
- 112. 2021 - HW11 Walkthrough: Domain Adaptation
- 113. 2022 - Homework 11 Walkthrough
- 114. Section 12: 2021 - Overview of Reinforcement Learning (1): RL, Like Machine Learning, Is Also Three Steps
- 115. 2021 - Overview of Reinforcement Learning (2): Policy Gradient and Thoughts on Taking the Course
- 116. 2021 - Overview of Reinforcement Learning (3): Actor-Critic
- 117. 2021 - Overview of Reinforcement Learning (4): What to Do When Rewards Are Extremely Sparse? Machines Quenching Thirst with Imagined Plums
- 118. 2021 - Overview of Reinforcement Learning (5): Learning from Demonstrations: Inverse Reinforcement Learning (Inverse RL)
- 119. 2021 - HW12 Walkthrough (Chinese, HD)
- 120. 2022 - HW12 Walkthrough (English) - 1_2
- 121. (Optional) To Learn More - Deep Reinforcement Learning
- 122. Section 13: 2021 - Network Compression (1): Network Pruning and the Lottery Ticket Hypothesis
- 123. 2021 - Network Compression (2): Compressing Neural Networks from Various Angles
- 124. (Optional) To Learn More - Proximal Policy Optimization (PPO)
- 125. (Optional) To Learn More - Q-learning (Basic Idea)
- 126. (Optional) To Learn More - Q-learning (Advanced Tips)
- 127. (Optional) To Learn More - Q-learning (Continuous Action)
- 128. (Optional) To Learn More - Geometry of Loss Surfaces (Conjecture)
- 129. 2021 - HW13 Walkthrough (Chinese, HD)
- 130. 2022 - HW13 Walkthrough
- 131. Section 14: 2021 - Lifelong Learning (1): Why Can't Today's AI Become Skynet? Catastrophic Forgetting
- 132. 2021 - Lifelong Learning (2): Catastrophic Forgetting
- 133. 2021 - HW14 Walkthrough (Chinese, HD)
- 134. 2022 - HW14 Walkthrough
- 135. Section 15: 2021 - Meta Learning (1): Meta Learning, Like Machine Learning, Is Also Three Steps
- 136. 2021 - Meta Learning (2): Everything Can Be Meta
- 137. 2022 - All Kinds of Unusual Uses of Meta Learning
- 138. (Optional) To Learn More - Meta Learning – MAML (1)
- 139. (Optional) To Learn More - Meta Learning – MAML (2)
- 140. (Optional) To Learn More - Meta Learning – MAML (3)
- 141. (Optional) To Learn More - Meta Learning – MAML (4)
- 142. (Optional) To Learn More - Meta Learning – MAML (5)
- 143. (Optional) To Learn More - Meta Learning – MAML (6)
- 144. (Optional) To Learn More - Meta Learning – MAML (7)
- 145. (Optional) To Learn More - Meta Learning – MAML (8)
- 146. (Optional) To Learn More - Meta Learning – MAML (9)
- 147. (Optional) To Learn More - Gradient Descent as LSTM (1_3)
- 148. (Optional) To Learn More - Gradient Descent as LSTM (2_3)
- 149. (Optional) To Learn More - Gradient Descent as LSTM (3_3)
- 150. (Optional) To Learn More - Meta Learning – Metric-based (1)
- 151. (Optional) To Learn More - Meta Learning – Metric-based (2)
- 152. (Optional) To Learn More - Meta Learning – Metric-based (3)
- 153. (Optional) To Learn More - Meta Learning - Train+Test as RNN
- 154. 2022 - HW15 Walkthrough
- 155. [Machine Learning] Course Wrap-up: The End!
Lecture 0 (2019/02/19): Course Logistics [slides]
Registration: [Google Form]
Lecture 1 (2019/02/26): Introduction [slides] (video)
Guest Lecture (R103): [PyTorch Tutorial]
Lecture 2 (2019/03/05): Neural Network Basics [slides] (video)
Suggested Readings:
[Linear Algebra]
[Linear Algebra Slides]
[Linear Algebra Quick Review]
A1 (2019/03/05): Dialogue Response Selection [A1 pages]
Lecture 3 (2019/03/12): Backpropagation [slides] (video)
Word Representation [slides] (video)
Suggested Readings:
[Learning Representations]
[Vector Space Models of Semantics]
[RNNLM: Recurrent Neural Network Language Model]
[Extensions of RNNLM]
[Optimization]
Lecture 4 (2019/03/19): Recurrent Neural Network [slides] (video)
Basic Attention [slides] (video)
Suggested Readings:
[RNN for Language Understanding]
[RNN for Joint Language Understanding]
[Sequence-to-Sequence Learning]
[Neural Conversational Model]
[Neural Machine Translation with Attention]
[Summarization with Attention]
[Normalization]
A2 (2019/03/19): Contextual Embeddings [A2 pages]
Lecture 5 (2019/03/26): Word Embeddings [slides] (video)
Contextual Embeddings - ELMo [slides] (video)
Suggested Readings:
[Estimation of Word Representations in Vector Space]
[GloVe: Global Vectors for Word Representation]
[Sequence Tagging with BiLM]
[Learned in Translation: Contextualized Word Vectors]
[ELMo: Embeddings from Language Models]
[More Embeddings]
2019/04/02: Spring Break; A1 Due
Lecture 6 (2019/04/09): Transformer [slides] (video)
Contextual Embeddings - BERT [slides] (video)
Gating Mechanism [slides] (video)
Suggested Readings:
[Contextual Word Representations Introduction]
[Attention is all you need]
[BERT: Pre-training of Bidirectional Transformers]
[GPT: Improving Understanding by Unsupervised Learning]
[Long Short-Term Memory]
[Gated Recurrent Unit]
[More Transformer]
Lecture 7 (2019/04/16): Reinforcement Learning Intro [slides] (video)
Basic Q-Learning [slides] (video)
Suggested Readings:
[Reinforcement Learning Intro]
[Stephane Ross' thesis]
[Playing Atari with Deep Reinforcement Learning]
[Deep Reinforcement Learning with Double Q-learning]
[Dueling Network Architectures for Deep Reinforcement Learning]
A3 (2019/04/16): RL for Game Playing [A3 pages]
Lecture 8 (2019/04/23): Policy Gradient [slides] (video)
Actor-Critic (video)
More about RL [slides] (video)
Suggested Readings:
[Asynchronous Methods for Deep Reinforcement Learning]
[Deterministic Policy Gradient Algorithms]
[Continuous Control with Deep Reinforcement Learning]
A2 Due
Lecture 9 (2019/04/30): Generative Adversarial Networks [slides] (video)
(Lectured by Prof. Hung-Yi Lee)
Lecture 10 (2019/05/07): Convolutional Neural Networks [slides]
A4 (2019/05/07): Drawing [A4 pages]
2019/05/14: Break; A3 Due
Lecture 11 (2019/05/21): Unsupervised Learning [slides]
NLP Examples [slides]
Project Plan [slides]
Special (2019/05/28): Company Workshop; Registration: [Google Form]
2019/06/04: Break; A4 Due
Lecture 12 (2019/06/11): Project Progress Presentation
Course and Career Discussion
Special (2019/06/18): Company Workshop; Registration: [Google Form]
Lecture 13 (2019/06/25): Final Presentation