
Yann LeCun: 157 Slides on the Limitations of Deep Learning

Author: 用户1737318
Published: 2018-06-05 11:41:52
Column: 人工智能头条

Yann LeCun, director of Facebook's AI research lab and a leading deep learning expert, gave a talk at CVPR 2015 titled "What's Wrong With Deep Learning?", discussing at length the limitations of deep learning as well as a number of theoretical and technical questions that have not yet been studied in depth.

Across 157 slides, LeCun laid out several important future directions for the field of deep learning, including:

  1. Internal mechanisms
  2. Short-term memory (from a computer-vision standpoint, this means deep learning should also capture features across multiple or consecutive frames)
  3. Unsupervised learning (drawing on humans' own capacity for insight)

As for the slides: if you are able to access foreign websites, you can view them on Google Docs. Alternatively, you can download them directly from the CSDN download channel: http://download.csdn.net/detail/qyqyeve/8806977

Discussion on Hacker News: https://news.ycombinator.com/item?id=9714199

Talk abstract:

Deep learning methods have had a profound impact on a number of areas in recent years, including natural image understanding and speech recognition. Other areas seem on the verge of being similarly impacted, notably natural language processing, biomedical image analysis, and the analysis of sequential signals in a variety of application domains. But deep learning systems, as they exist today, have many limitations.

First, they lack mechanisms for reasoning, search, and inference. Complex and/or ambiguous inputs require deliberate reasoning to arrive at a consistent interpretation. Producing structured outputs, such as a long text, or a label map for image segmentation, requires sophisticated search and inference algorithms to satisfy complex sets of constraints. One approach to this problem is to marry deep learning with structured prediction (an idea first presented at CVPR 1997). While several deep learning systems augmented with structured prediction modules trained end to end have been proposed for OCR, body pose estimation, and semantic segmentation, new concepts are needed for tasks that require more complex reasoning.

Second, they lack short-term memory. Many tasks in natural language understanding, such as question-answering, require a way to temporarily store isolated facts. Correctly interpreting events in a video and being able to answer questions about it requires remembering abstract representations of what happens in the video. Deep learning systems, including recurrent nets, are notoriously inefficient at storing temporary memories. This has led researchers to propose neural net systems augmented with separate memory modules, such as LSTM, Memory Networks, Neural Turing Machines, and Stack-Augmented RNN. While these proposals are interesting, new ideas are needed.

Lastly, they lack the ability to perform unsupervised learning. Animals and humans learn most of the structure of the perceptual world in an unsupervised manner. While the interest of the ML community in neural nets was revived in the mid-2000s by progress in unsupervised learning, the vast majority of practical applications of deep learning have used purely supervised learning. There is little doubt that future progress in computer vision will require breakthroughs in unsupervised learning, particularly for video understanding. But what principles should unsupervised learning be based on?

Preliminary works in each of these areas pave the way for future progress in image and video understanding.
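To make the "short-term memory" point concrete: the LSTM mentioned in the abstract stores temporary state in a gated cell vector that is partially overwritten at every step. The minimal NumPy sketch below implements one standard LSTM step from scratch; the dimensions, random weights, and variable names are illustration choices of ours, not anything specified in the talk.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, b):
    """One step of a standard LSTM cell.

    The cell state c is the 'temporary memory': the forget gate f
    decides what to erase, the input gate i what to write, and the
    output gate o what to expose in the hidden state h.
    """
    H = h_prev.size
    z = W @ np.concatenate([x, h_prev]) + b   # all four gate pre-activations at once
    f = sigmoid(z[0:H])          # forget gate
    i = sigmoid(z[H:2*H])        # input gate
    o = sigmoid(z[2*H:3*H])      # output gate
    g = np.tanh(z[3*H:4*H])      # candidate cell update
    c = f * c_prev + i * g       # new cell state (memory)
    h = o * np.tanh(c)           # new hidden state
    return h, c

rng = np.random.default_rng(0)
X, H = 3, 4                              # input size, hidden size (arbitrary)
W = rng.normal(0.0, 0.1, (4 * H, X + H)) # one stacked weight matrix for all gates
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for t in range(5):                       # run a short random input sequence
    h, c = lstm_step(rng.normal(size=X), h, c, W, b)
```

Because every step forces old contents through the multiplicative forget gate, isolated facts tend to be diluted over long sequences, which is exactly the inefficiency that motivates the separate memory modules (Memory Networks, Neural Turing Machines) the abstract lists.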

Originally published 2015-06-15. Shared from the WeChat public account 人工智能头条.
