- 02 Computer Vision: History and Introduction (2/3)
- 03 Computer Vision: History and Introduction (3/3)
- 04 Data-driven Image Classification: k-Nearest Neighbor and Linear Classifiers (1/2)
- 05 Data-driven Image Classification: k-Nearest Neighbor and Linear Classifiers (2/2)
- 06 Loss Functions and Optimization for Linear Classifiers (1/2)
- 07 Loss Functions and Optimization for Linear Classifiers (2/2)
- 08 Backpropagation and Introduction to Neural Networks (1/2)
- 09 Backpropagation and Introduction to Neural Networks (2/2)
- 10 Neural Network Training Details, Part 1 (1/2)
- 11 Neural Network Training Details, Part 1 (2/2)
- 12 Neural Network Training Details, Part 2 (1/2)
- 13 Neural Network Training Details, Part 2 (2/2)
- 14 Convolutional Neural Networks in Detail (1/2)
- 15 Convolutional Neural Networks in Detail (2/2)
- 16 Transfer Learning: Object Localization and Detection (1/2)
- 17 Transfer Learning: Object Localization and Detection (2/2)
- 18 Visualizing and Further Understanding Convolutional Neural Networks (1/2)
- 19 Visualizing and Further Understanding Convolutional Neural Networks (2/2)
Slides, English videos, subtitles, etc., compiled by 爱可可-爱生活
Link: https://pan.baidu.com/s/1pKsTivp#list
Official site
Link: CS231n: Convolutional Neural Networks for Visual Recognition
Deep Learning and Computer Vision: a classic artificial intelligence course
These notes accompany the Stanford CS class CS231n: Convolutional Neural Networks for Visual Recognition.
For questions/concerns/bug reports contact Justin Johnson regarding the assignments, or contact Andrej Karpathy regarding the course notes. You can also submit a pull request directly to our git repo.
We encourage the use of the hypothes.is extension to annotate comments and discuss these notes inline.
Spring 2019 Assignments
Assignment #1: Image Classification, kNN, SVM, Softmax, Neural Network
Assignment #2: Fully-Connected Nets, Batch Normalization, Dropout, Convolutional Nets
Assignment #3: Image Captioning with Vanilla RNNs, Image Captioning with LSTMs, Network Visualization, Style Transfer, Generative Adversarial Networks
Module 0: Preparation
Setup Instructions
Python / Numpy Tutorial
IPython Notebook Tutorial
Google Cloud Tutorial
AWS Tutorial
Module 1: Neural Networks
Image Classification: Data-driven Approach, k-Nearest Neighbor, train/val/test splits
L1/L2 distances, hyperparameter search, cross-validation
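To give a concrete feel for the data-driven approach in this note, here is a minimal k-Nearest Neighbor sketch in NumPy. The function name and the Xtr/ytr/Xte names are illustrative placeholders, not the assignment's vectorized implementation:

```python
import numpy as np

def knn_predict(Xtr, ytr, Xte, k=1):
    """Label each test row by majority vote among its k nearest training
    rows under the L2 distance. ytr is assumed to hold non-negative
    integer class labels."""
    preds = np.empty(Xte.shape[0], dtype=ytr.dtype)
    for i, x in enumerate(Xte):
        dists = np.sqrt(((Xtr - x) ** 2).sum(axis=1))  # L2 distance to all training points
        nearest = np.argsort(dists)[:k]                # indices of the k closest
        preds[i] = np.bincount(ytr[nearest]).argmax()  # majority vote
    return preds
```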
Linear classification: Support Vector Machine, Softmax
parametric approach, bias trick, hinge loss, cross-entropy loss, L2 regularization, web demo
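The two losses named above each fit in a few lines. The sketch below computes them for a single example's score vector; the function names are mine, not from the notes' code:

```python
import numpy as np

def svm_loss(scores, y, delta=1.0):
    # Multiclass SVM (hinge) loss for one example: penalize every wrong
    # class whose score comes within delta of the correct class's score.
    margins = np.maximum(0, scores - scores[y] + delta)
    margins[y] = 0
    return margins.sum()

def softmax_loss(scores, y):
    # Cross-entropy loss for one example; shift scores by the max for
    # numerical stability before exponentiating.
    shifted = scores - scores.max()
    log_probs = shifted - np.log(np.exp(shifted).sum())
    return -log_probs[y]
```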
Optimization: Stochastic Gradient Descent
optimization landscapes, local search, learning rate, analytic/numerical gradient
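A centered-difference numerical gradient is the usual way to sanity-check an analytic gradient before trusting it in SGD. This is a sketch, assuming f maps a float NumPy array to a scalar loss:

```python
import numpy as np

def numerical_gradient(f, w, h=1e-5):
    """Centered-difference estimate of the gradient of f at w.
    Slow (two evaluations of f per coordinate) but handy for checks."""
    grad = np.zeros_like(w)
    for i in range(w.size):
        old = w.flat[i]
        w.flat[i] = old + h
        f_plus = f(w)
        w.flat[i] = old - h
        f_minus = f(w)
        w.flat[i] = old                       # restore the coordinate
        grad.flat[i] = (f_plus - f_minus) / (2 * h)
    return grad

# A vanilla gradient descent step then uses the (analytic) gradient:
#   w -= learning_rate * grad
```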
Backpropagation, Intuitions
chain rule interpretation, real-valued circuits, patterns in gradient flow
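The notes' running example of a real-valued circuit, f(x, y, z) = (x + y) * z, makes the chain-rule interpretation concrete; the numbers below follow that worked example:

```python
# Forward and backward pass through the circuit f(x, y, z) = (x + y) * z.
x, y, z = -2.0, 5.0, -4.0

# Forward pass
q = x + y            # q = 3
f = q * z            # f = -12

# Backward pass (chain rule)
df_dq = z            # df/dq = z = -4
df_dz = q            # df/dz = q = 3
df_dx = df_dq * 1.0  # dq/dx = 1, so df/dx = -4
df_dy = df_dq * 1.0  # dq/dy = 1, so df/dy = -4
```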
Neural Networks Part 1: Setting up the Architecture
model of a biological neuron, activation functions, neural net architecture, representational power
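As a sketch of the architectures this note sets up, a two-layer fully-connected net with ReLU activations reduces to two matrix multiplies; names and shapes here are illustrative:

```python
import numpy as np

def two_layer_net(x, W1, b1, W2, b2):
    # One ReLU hidden layer, then a linear output layer:
    # scores = W2 * max(0, W1 * x + b1) + b2
    h = np.maximum(0, x.dot(W1) + b1)  # hidden-layer activations
    return h.dot(W2) + b2              # class scores
```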
Neural Networks Part 2: Setting up the Data and the Loss
preprocessing, weight initialization, batch normalization, regularization (L2/dropout), loss functions
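A compressed sketch of three of the ingredients listed above: mean subtraction, fan-in-scaled initialization (the sqrt(2/n) calibration recommended for ReLU units), and inverted dropout. The toy data and dimensions are placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 32))       # placeholder data matrix
n_in, n_out = 32, 64

# Preprocessing: zero-center every feature by subtracting its mean.
X -= X.mean(axis=0)

# Weight initialization scaled by fan-in, sqrt(2/n) for ReLU units.
W = rng.standard_normal((n_in, n_out)) * np.sqrt(2.0 / n_in)
b = np.zeros(n_out)

# Inverted dropout at train time: drop units with probability 1 - p and
# divide by p now, so test-time code needs no change.
p = 0.5
H = np.maximum(0, X.dot(W) + b)
mask = (rng.random(H.shape) < p) / p
H *= mask
```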
Neural Networks Part 3: Learning and Evaluation
gradient checks, sanity checks, babysitting the learning process, momentum (+nesterov), second-order methods, Adagrad/RMSprop, hyperparameter optimization, model ensembles
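The momentum and RMSprop update rules discussed in this note look as follows on a toy quadratic objective; the hyperparameter values are arbitrary illustrations, not recommendations:

```python
import numpy as np

# Toy objective f(x) = 0.5 * ||x||^2, whose gradient is simply x.
x = np.array([1.0, -2.0])
v = np.zeros_like(x)                      # momentum buffer
mu, learning_rate = 0.9, 0.1

for _ in range(10):
    dx = x                                # analytic gradient of the toy objective
    v = mu * v - learning_rate * dx       # momentum update
    x = x + v

# One RMSprop step on the same gradient, for comparison: a per-parameter
# learning rate from a moving average of squared gradients.
cache = np.zeros_like(x)
decay_rate, eps = 0.99, 1e-8
dx = x
cache = decay_rate * cache + (1 - decay_rate) * dx**2
x = x - learning_rate * dx / (np.sqrt(cache) + eps)
```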
Putting it together: Minimal Neural Network Case Study
minimal 2D toy data example
Module 2: Convolutional Neural Networks
Convolutional Neural Networks: Architectures, Convolution / Pooling Layers
layers, spatial arrangement, layer patterns, layer sizing patterns, AlexNet/ZFNet/VGGNet case studies, computational considerations
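The layer-sizing arithmetic in this note boils down to one formula, (W - F + 2P)/S + 1; a small helper makes the AlexNet first-layer example checkable:

```python
def conv_output_size(W, F, S, P):
    """Spatial output size of a conv layer with input width W, filter
    size F, stride S, and zero-padding P: (W - F + 2P)/S + 1."""
    assert (W - F + 2 * P) % S == 0, "hyperparameters must tile the input evenly"
    return (W - F + 2 * P) // S + 1

# AlexNet's first layer: 227x227 input, 11x11 filters, stride 4, no padding.
print(conv_output_size(227, 11, 4, 0))  # 55
```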
Understanding and Visualizing Convolutional Neural Networks
tSNE embeddings, deconvnets, data gradients, fooling ConvNets, human comparisons
Transfer Learning and Fine-tuning Convolutional Neural Networks
Chinese subtitles