1. An analysis of PyTorch's internal architecture
PyTorch – Internal Architecture Tour
Link: http://blog.christianperone.com/2018/03/pytorch-internal-architecture-tour/
2. Reptile, OpenAI's new meta-learning algorithm, which uses a shortest-descent update to speed up learning-to-learn
Reptile: A Scalable Meta-Learning Algorithm
Link: https://blog.openai.com/reptile/
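The core of Reptile is simple: sample a task, run a few steps of SGD on it, then move the initialization part of the way toward the task-adapted weights. A minimal sketch on a toy problem (the sine-fitting task and the feature expansion are assumptions for illustration, not the blog post's exact setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task family (an assumption for illustration): each "task" is fitting
# a sine wave with a random phase; the model is linear in a fixed random
# feature expansion so the inner-loop SGD stays simple.
n_features = 20
features = rng.normal(size=(n_features, 1))

def task_batch(phase, n=10):
    """Sample a small batch from one task."""
    x = rng.uniform(-np.pi, np.pi, size=n)
    y = np.sin(x + phase)
    phi = np.tanh(features * x).T        # (n, n_features) feature matrix
    return phi, y

def sgd_on_task(theta, phase, steps=30, lr=0.02):
    """Inner loop: plain SGD on one task, starting from theta."""
    w = theta.copy()
    for _ in range(steps):
        phi, y = task_batch(phase)
        grad = phi.T @ (phi @ w - y) / len(y)   # MSE gradient
        w -= lr * grad
    return w

# Reptile outer loop: theta <- theta + epsilon * (w_task - theta)
theta = np.zeros(n_features)
epsilon = 0.1                            # outer step size
for _ in range(200):
    phase = rng.uniform(0, np.pi)
    w_task = sgd_on_task(theta, phase)
    theta += epsilon * (w_task - theta)  # the Reptile update
```

The point of the meta-training loop is that `theta` ends up as an initialization from which a few inner SGD steps adapt quickly to a new phase.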
3. Berkeley proposes a parallelization framework for RL that greatly speeds up training; Atari agents finish training in just minutes
Accelerated Methods for Deep Reinforcement Learning
Link: https://arxiv.org/abs/1803.02811
4. CNNs once again outperform RNNs on sequence modeling
An Empirical Evaluation of Generic Convolutional and Recurrent Networks for Sequence Modeling
Link: https://arxiv.org/pdf/1803.01271.pdf
5. BAIR blog post on font style transfer with GANs
Transfer Your Font Style with GANs
Link: http://bair.berkeley.edu/blog/2018/03/13/mcgan/
6. Using neural networks for computer-memory prefetching
Learning Memory Access Patterns
Link: https://arxiv.org/pdf/1803.02329.pdf
7. Airbnb learns listing embeddings
Listing Embeddings for Similar Listing Recommendations and Real-time Personalization in Search
Link: https://medium.com/airbnb-engineering/listing-embeddings-for-similar-listing-recommendations-and-real-time-personalization-in-search-601172f7603e
8. An analysis of how depth affects training speed; overparameterization appears to help after all
Can increasing depth serve to accelerate optimization?
Link: http://www.offconvex.org/2018/03/02/acceleration-overparameterization/?utm_campaign=buffer&utm_content=buffere8a58&utm_medium=social&utm_source=twitter.com
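The idea being tested can be reproduced in a few lines: take a linear model, replace its weight vector `w` with a product of parameters (here `a * v`, a hypothetical minimal depth-2 factorization, not the post's exact experiment), and run gradient descent on both. The model class is identical, but the gradient dynamics change:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear regression problem (illustrative assumption).
X = rng.normal(size=(100, 5))
w_true = rng.normal(size=5)
y = X @ w_true

def loss(w):
    return 0.5 * np.mean((X @ w - y) ** 2)

lr = 0.05

# Depth 1: standard gradient descent directly on w.
w = np.zeros(5)
for _ in range(100):
    grad = X.T @ (X @ w - y) / len(y)
    w -= lr * grad

# Depth 2: overparameterize w = a * v and descend on (a, v) jointly.
# The end-to-end model is still linear, but the effective update on
# the product a*v is preconditioned by the other factor.
a, v = 1.0, np.zeros(5)
for _ in range(100):
    r = X @ (a * v) - y
    grad_v = a * (X.T @ r) / len(y)   # chain rule through the product
    grad_a = v @ (X.T @ r) / len(y)
    v -= lr * grad_v
    a -= lr * grad_a

final_plain = loss(w)
final_deep = loss(a * v)
```

Whether the depth-2 version converges faster depends on the problem and step size; the post's argument is that the induced update resembles momentum with adaptive learning rates, which no fixed regularizer on the original parameterization can reproduce.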
9. (A slightly older article) Wolfram's neural-network backend uses MXNet, which is a fun read
Apache MXNet in the Wolfram Language
Link: https://www.oreilly.com/ideas/apache-mxnet-in-the-wolfram-language
10. An ML simulation and visualization tool covering LR, MLPs, RNNs, and Q-learning; beautifully made
Link: https://www.mladdict.com/
This article is shared from the WeChat public account 机器学习人工学weekly.