
MIT 6.S191 Introduction to Deep Learning Notes

何武凡 · 2023-03-09

1.Introduction to Deep Learning

  • Impressive: the very first lecture opens with a big reveal, introducing the course with a video of the instructor synthesized together with Obama.
  • Whatever the instructor covers in class, keep asking yourself why each step is important and necessary; it is exactly this kind of thinking that leads to genuinely surprising breakthroughs.

2.Deep Sequence Model

Three ways to address the vanishing gradient problem

  • Gated Cells
    • LSTM
      • Forget
      • Store
      • Update
      • Output
  • Attention [[Transformer]] (a minimal attention sketch follows this list)
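
As a concrete reference for the attention bullet above, here is a minimal NumPy sketch of scaled dot-product attention, the core operation behind the Transformer. It is a sketch only; the function and variable names are illustrative and not taken from the lecture code.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Q, K: (seq_len, d_k); V: (seq_len, d_v). Returns (seq_len, d_v)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # pairwise query-key similarities
    weights = softmax(scores, axis=-1)   # attention weights for each query
    return weights @ V                   # weighted sum of values

# toy self-attention over 4 tokens with 8-dimensional embeddings
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # (4, 8)
```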

3.Deep Computer Vision

  • Introduces the convolution operation, a feature-extraction method that produces feature maps (are there other methods that also work well?); a minimal convolution sketch follows these notes;
    • Advantages compared with fully connected layers;
  • Fast R-CNN for object detection: how are the candidate image regions proposed?
  • Medical image segmentation
  • Summary:
    • Principles
    • CNN architectures
    • Applications
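
To make the convolution bullet concrete, here is a minimal NumPy sketch of the sliding-window operation that produces a feature map (technically cross-correlation, as used in CNNs). The names and the toy edge-detection kernel are illustrative assumptions, not material from the lecture.

```python
import numpy as np

def conv2d(image, kernel):
    """'Valid' cross-correlation of a 2D image with a 2D kernel -> feature map."""
    H, W = image.shape
    kH, kW = kernel.shape
    out = np.zeros((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # element-wise multiply the patch by the kernel and sum
            out[i, j] = np.sum(image[i:i + kH, j:j + kW] * kernel)
    return out

# toy usage: a vertical-edge-detector kernel on a random 6x6 "image"
image = np.random.rand(6, 6)
kernel = np.array([[1.0, 0.0, -1.0],
                   [1.0, 0.0, -1.0],
                   [1.0, 0.0, -1.0]])
print(conv2d(image, kernel).shape)  # (4, 4)
```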

4.Deep Generative Models

  • What (the goal): given training samples drawn from some distribution, learn a model that represents that distribution;
  • How: density estimation; neural networks are well suited to representing high-dimensional distributions;
  • why
    • Debiasing: Capable of uncovering underlying features in a dataset
    • Outlier detection: how can we detect when we encounter something new or rare?
  • Latent variable representation:
    • Analogy: the projection of an object; we only see the shadow (the observation), while the lit object itself is unseen (the latent variable); the task is to model the object from the observed projections
  • Autoencoder: reconstruction loss
    • entirely deterministic
  • VAEs: normal prior + regularization
    • loss = reconstruction loss + regularization term
    • encoder: q_\phi(z|x); decoder: p_\theta(x|z)
    • KL-divergence: D(q_\phi(z|x)||p(z))
    • (a minimal loss sketch follows this section's notes)
  • GANs
    • make a generative model by having two neural networks compete with each other
    • ⭐️CycleGAN: domain transformations; the synthesized video at the start of the course was made with this
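
As a rough companion to the VAE bullet, the sketch below combines the reconstruction and KL regularization terms under a standard normal prior and uses the reparameterization trick. The encoder/decoder are stubbed with fake outputs; all names and the squared-error reconstruction choice are assumptions for illustration.

```python
import numpy as np

def kl_to_standard_normal(mu, log_var):
    """KL( N(mu, diag(exp(log_var))) || N(0, I) ), summed over latent dims."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

def reparameterize(mu, log_var, rng):
    """z = mu + sigma * eps with eps ~ N(0, I); keeps sampling differentiable."""
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def vae_loss(x, x_reconstructed, mu, log_var, beta=1.0):
    reconstruction = np.sum((x - x_reconstructed) ** 2)   # e.g. squared error
    regularization = kl_to_standard_normal(mu, log_var)   # D(q(z|x) || p(z))
    return reconstruction + beta * regularization

# toy usage with stand-in encoder/decoder outputs
rng = np.random.default_rng(0)
x = rng.normal(size=(784,))
mu, log_var = np.zeros(16), np.zeros(16)   # pretend these came from the encoder
z = reparameterize(mu, log_var, rng)       # latent sample the decoder would use
x_hat = rng.normal(size=(784,))            # stand-in for decoder(z)
print(vae_loss(x, x_hat, mu, log_var))
```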

5.Deep Reinforcement Learning

  • Reward:
R_t = r_t + \gamma r_{t+1} + \gamma^2 r_{t+2} + …
  • Q-function: expected total future reward
Q(s_t, a_t) = E[R_t|s_t, a_t]
  • Policy: to infer the best action to take at its state, choose an action that maximizes future reward
\pi^*(s)=\mathop{\arg\max}\limits_{a}Q(s, a)
  • Value Learning: find Q(s, a), then take a = \mathop{\arg\max}\limits_{a}Q(s, a) (a tabular Q-learning sketch follows this list)
  • Policy Learning: find \pi(s), then sample a\sim\pi(s)
  • Deep Q Network(DQN)
  • Policy Gradient
  • AlphaGo
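
To ground the Q-function and greedy-policy formulas above, here is a minimal tabular Q-learning sketch (not the DQN from the lecture): it chooses actions epsilon-greedily and moves Q(s, a) toward the bootstrapped target r + gamma * max_a' Q(s', a'). The environment transition is made up for illustration.

```python
import numpy as np

n_states, n_actions = 5, 2
gamma, alpha, epsilon = 0.9, 0.1, 0.1      # discount, learning rate, exploration
Q = np.zeros((n_states, n_actions))        # tabular Q(s, a)
rng = np.random.default_rng(0)

def choose_action(s):
    """Epsilon-greedy policy: usually argmax_a Q(s, a), occasionally explore."""
    if rng.random() < epsilon:
        return int(rng.integers(n_actions))
    return int(np.argmax(Q[s]))

def q_update(s, a, r, s_next):
    """Move Q(s, a) toward the target r + gamma * max_a' Q(s_next, a')."""
    target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (target - Q[s, a])

# toy usage with a made-up transition (state 0, reward 1.0, next state 1)
s = 0
a = choose_action(s)
q_update(s, a, r=1.0, s_next=1)
print(Q[s, a])  # 0.1 after one update from zero
```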

6.DL Limitations and New Frontiers

  • limitations
    • Generalization
      • data is important
    • Uncertainty in Deep learning
    • Adversarial attacks
    • Algorithmic Bias
  • Frontiers
    • encoder
      • many real world data cannot be captured by standard encodings
      • GCN (Graph Convolutional Networks); see the graph-convolution sketch after this list
    • Automated AI
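
Since GCNs come up under new frontiers, here is a minimal single-layer graph convolution sketch in the style of Kipf and Welling: add self-loops, symmetrically normalize the adjacency matrix, then aggregate neighbor features and apply a ReLU. The sizes and the toy chain graph are illustrative assumptions.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W).

    A: (n, n) adjacency matrix, H: (n, d_in) node features, W: (d_in, d_out).
    """
    A_hat = A + np.eye(A.shape[0])                    # add self-loops
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt          # symmetric normalization
    return np.maximum(A_norm @ H @ W, 0.0)            # aggregate neighbors + ReLU

# toy usage: 4 nodes in a chain, 3 input features, 2 output features
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = np.random.rand(4, 3)
W = np.random.rand(3, 2)
print(gcn_layer(A, H, W).shape)  # (4, 2)
```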

7. LiDAR for Autonomous Driving

@INNOVIZ

  • Camera vs. LiDAR
    • Complementary: LiDAR helps when visibility is poor
    • Redundancy helps guarantee accuracy
  • Safety and Comfort

8. Automatic Speech Recognition

@Rev

  • Conformer
  • CTC

9. AI for Science

Principled AI Algorithms for challenging domains @Caltech

10. Uncertainty in Deep Learning

Longer version: NeurIPS 2020 Tutorial, @Google AI Brain Team

  • Return a distribution over predictions rather than a single prediction
  • Out-of-Distribution Robustness
    • covariate shift: distribution of features changes
    • open-set recognition: new classes may appear at test time
    • label shift: distribution of label changes
  • sources of uncertainty
    • Model uncertainty
      • epistemic uncertainty
    • Data uncertainty
      • label noise from human disagreement
      • measurement noise
      • missing data
  • how to compute
    • BNN (Bayesian Neural Networks)
    • GP
    • Deep Ensembles (a prediction-averaging sketch follows this list)
    • MCMC
  • Multi-input multi-output (MIMO)
  • How to communicate uncertainty?
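
As a small illustration of returning a distribution over predictions rather than a single prediction, the sketch below averages class probabilities from a deep ensemble and scores uncertainty with predictive entropy. The `models` list here is a stand-in of fake predictors, purely for illustration.

```python
import numpy as np

def ensemble_predict(models, x):
    """Each model maps x to class probabilities; return the averaged distribution."""
    probs = np.stack([m(x) for m in models])   # (n_models, n_classes)
    return probs.mean(axis=0)

def predictive_entropy(p, eps=1e-12):
    """Entropy of the averaged prediction: higher means more uncertain."""
    return -np.sum(p * np.log(p + eps))

# toy usage with three fake "models" that just return fixed class probabilities
models = [lambda x: np.array([0.7, 0.2, 0.1]),
          lambda x: np.array([0.5, 0.3, 0.2]),
          lambda x: np.array([0.6, 0.1, 0.3])]
p_mean = ensemble_predict(models, x=None)
print(p_mean, predictive_entropy(p_mean))
```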

Lectures 7-10 are fairly unremarkable: these are complex topics whose background needs to be explained clearly, and the company presentations offer little in the way of concrete detail.
