# A Roundup of Interpretability Methods for Deep Neural Networks, with TensorFlow Code

##### [Editor's note from 新智元] Understanding neural networks: deep learning has long been considered hard to interpret, yet research on understanding neural networks has never stopped. This article surveys several interpretability methods for neural networks, with links to code that can be run in Jupyter notebooks.

## Activation Maximization

### 1.1 Activation Maximization (AM)

http://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/1.1%20Activation%20Maximization.ipynb
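
The notebook above implements AM in TensorFlow, but the core idea fits in a few lines: search for an input x* that maximizes a chosen unit's activation by gradient ascent, usually with an ℓ2 penalty to keep x* bounded. A minimal plain-Python sketch on a toy linear "class score" (not the notebook's model; the penalty makes the optimum analytic, so convergence is easy to check):

```python
# Activation Maximization on a toy linear class score:
# find x* = argmax_x  w·x - lam * ||x||^2  by gradient ascent.
# With lam = 0.5 the analytic optimum is x* = w / (2*lam) = w.

def activation_maximization(w, lam=0.5, lr=0.1, steps=500):
    x = [0.0] * len(w)                          # start from a zero "image"
    for _ in range(steps):
        # gradient of (w·x - lam*||x||^2) w.r.t. x is  w - 2*lam*x
        grad = [wi - 2 * lam * xi for wi, xi in zip(w, x)]
        x = [xi + lr * gi for xi, gi in zip(x, grad)]
    return x

w = [1.0, -2.0, 3.0]
x_star = activation_maximization(w)             # converges to w
```

For a real network the score would be a logit and the gradient would come from autodiff (e.g. `tf.GradientTape`), but the update loop is the same.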

### 1.3 Performing AM in Code Space

http://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/1.3%20Performing%20AM%20in%20Code%20Space.ipynb

## Layer-wise Relevance Propagation

### 2.1 Sensitivity Analysis

http://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/2.1%20Sensitivity%20Analysis.ipynb
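
Sensitivity analysis scores each input dimension by the squared partial derivative of the output, R_i = (∂f/∂x_i)². A minimal sketch on a toy differentiable function, using central finite differences in place of the notebook's TensorFlow gradients:

```python
# Sensitivity analysis: relevance of input i is the squared partial
# derivative R_i = (df/dx_i)^2, estimated with central finite differences.

def sensitivity(f, x, eps=1e-5):
    scores = []
    for i in range(len(x)):
        xp = list(x); xp[i] += eps
        xm = list(x); xm[i] -= eps
        g = (f(xp) - f(xm)) / (2 * eps)         # numerical df/dx_i
        scores.append(g * g)
    return scores

f = lambda x: x[0] ** 2 + 3 * x[1]              # toy "network" output
R = sensitivity(f, [1.0, 2.0])                  # gradient [2, 3] -> R = [4, 9]
```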

### 2.2 Simple Taylor Decomposition

http://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/2.2%20Simple%20Taylor%20Decomposition.ipynb
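
Simple Taylor decomposition expands f around a root point x̃ with f(x̃) = 0, giving relevances R_i = (∂f/∂x_i)|_x̃ · (x_i − x̃_i) that sum to f(x). For a ReLU-linear detector, x̃ = 0 is a root and the gradient on the active side equals w, so R_i = w_i x_i. A sketch of that special case (a toy detector, not the notebook's network):

```python
# Simple Taylor decomposition of a ReLU-linear detector f(x) = relu(w·x):
# with root point x~ = 0, relevances are R_i = w_i * x_i and sum to f(x).

def taylor_relevance(w, x):
    fx = max(0.0, sum(wi * xi for wi, xi in zip(w, x)))
    if fx == 0.0:
        return [0.0] * len(x)               # inactive unit: nothing to explain
    return [wi * xi for wi, xi in zip(w, x)]

w, x = [1.0, -1.0, 2.0], [2.0, 1.0, 0.5]
R = taylor_relevance(w, x)                  # [2.0, -1.0, 1.0], sums to f(x) = 2
```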

### 2.3 Layer-wise Relevance Propagation

http://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/2.3%20Layer-wise%20Relevance%20Propagation%20%281%29.ipynb

http://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/2.3%20Layer-wise%20Relevance%20Propagation%20%282%29.ipynb
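
LRP redistributes the output relevance backwards layer by layer, conserving its total. The ε-rule sends relevance R_k of an output unit to each input j in proportion to its contribution z_jk = a_j w_jk. A one-layer plain-Python sketch (toy activations and weights, not the notebook's model):

```python
# LRP epsilon rule through one dense ReLU layer: relevance R_k at each
# output unit k is redistributed to inputs j in proportion to z_jk = a_j*w[j][k].

def lrp_epsilon(a, w, R_out, eps=1e-9):
    n_in, n_out = len(a), len(R_out)
    R_in = [0.0] * n_in
    for k in range(n_out):
        z = [a[j] * w[j][k] for j in range(n_in)]
        denom = sum(z)
        denom += eps if denom >= 0 else -eps    # epsilon stabiliser
        for j in range(n_in):
            R_in[j] += z[j] / denom * R_out[k]
    return R_in

a = [1.0, 2.0]                                  # input activations
w = [[0.5, -1.0], [0.25, 1.0]]                  # weights w[j][k]
z_out = [max(0.0, a[0] * w[0][k] + a[1] * w[1][k]) for k in range(2)]
R_in = lrp_epsilon(a, w, z_out)                 # conservation: sum(R_in) ~ sum(z_out)
```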

### 2.4 Deep Taylor Decomposition

http://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/2.4%20Deep%20Taylor%20Decomposition%20%281%29.ipynb

http://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/2.4%20Deep%20Taylor%20Decomposition%20%282%29.ipynb
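
Deep Taylor decomposition applies the Taylor expansion layer by layer, choosing a root point per neuron. For ReLU layers with non-negative input activations this yields the z⁺-rule, where only positive contributions carry relevance down. A sketch of that rule on toy values:

```python
# Deep Taylor z+ rule: with activations a_j >= 0, relevance flows down only
# through positive contributions z+_jk = a_j * max(0, w[j][k]).

def zplus_rule(a, w, R_out):
    n_in, n_out = len(a), len(R_out)
    R_in = [0.0] * n_in
    for k in range(n_out):
        zp = [a[j] * max(0.0, w[j][k]) for j in range(n_in)]
        s = sum(zp) or 1.0                      # guard against division by zero
        for j in range(n_in):
            R_in[j] += zp[j] / s * R_out[k]
    return R_in

a = [1.0, 2.0]
w = [[0.5, -1.0], [0.25, 1.0]]
R = zplus_rule(a, w, [1.0, 1.0])                # [0.5, 1.5]; total exactly conserved
```

Unlike the ε-rule above, conservation here is exact: each R_k is split into non-negative shares that sum to one.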

### 2.5 DeepLIFT

http://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/2.5%20DeepLIFT.ipynb
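
DeepLIFT attributes the *difference* of the output from a reference input, rather than the local gradient: each nonlinearity gets the multiplier Δoutput/Δinput (the "rescale rule"). A single-neuron sketch (toy weights and reference, not the notebook's model):

```python
# DeepLIFT rescale rule for one neuron y = relu(w·x + b): contributions are
# measured relative to a reference input, and the ReLU uses the multiplier
# delta_y / delta_z in place of its gradient.

def deeplift_rescale(w, b, x, x_ref):
    relu = lambda z: max(0.0, z)
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    z_ref = sum(wi * xi for wi, xi in zip(w, x_ref)) + b
    dz, dy = z - z_ref, relu(z) - relu(z_ref)
    m = dy / dz if dz != 0 else 0.0             # rescale multiplier
    return [m * wi * (xi - xr) for wi, xi, xr in zip(w, x, x_ref)]

w, b = [1.0, 2.0], -1.0
C = deeplift_rescale(w, b, x=[1.0, 1.0], x_ref=[0.0, 0.0])
# contributions sum to relu(z) - relu(z_ref) = 2 - 0 = 2
```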

## Gradient-Based Methods

### 3.1 Deconvolution

http://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/3.1%20Deconvolution.ipynb

### 3.2 Backpropagation

http://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/3.2%20Backpropagation.ipynb

### 3.3 Guided Backpropagation

http://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/3.3%20Guided%20Backpropagation.ipynb
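
The three methods in 3.1–3.3 differ only in how the signal is passed backward through a ReLU: plain backpropagation masks by the forward activation, the deconvnet masks by the sign of the backward signal, and guided backpropagation applies both masks. A side-by-side sketch of the three backward rules on toy values:

```python
# Backward rules at a ReLU, for forward pre-activations z and incoming
# backward signal g:
#   backprop:   pass g where z > 0
#   deconvnet:  pass g where g > 0
#   guided:     pass g where z > 0 AND g > 0

def relu_backward(z, g, mode):
    if mode == "backprop":
        return [gi if zi > 0 else 0.0 for zi, gi in zip(z, g)]
    if mode == "deconvnet":
        return [gi if gi > 0 else 0.0 for zi, gi in zip(z, g)]
    if mode == "guided":
        return [gi if (zi > 0 and gi > 0) else 0.0 for zi, gi in zip(z, g)]
    raise ValueError(mode)

z = [1.5, -0.5, 2.0, -1.0]          # pre-ReLU forward values
g = [0.3, 0.8, -0.2, -0.9]          # backward signal arriving at the ReLU

bp = relu_backward(z, g, "backprop")    # [0.3, 0.0, -0.2, 0.0]
dc = relu_backward(z, g, "deconvnet")   # [0.3, 0.8, 0.0, 0.0]
gb = relu_backward(z, g, "guided")      # [0.3, 0.0, 0.0, 0.0]
```

Guided backpropagation keeps only paths that both fired on the forward pass and carry positive evidence backward, which is why its saliency maps look the cleanest.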

## Class Activation Map

Cluttered MNIST dataset used for the CAM experiments:

https://github.com/deepmind/mnist-cluttered

### 4.1 Class Activation Map

http://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/4.1%20CAM.ipynb
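
CAM applies to networks ending in global average pooling followed by a linear classifier: the heatmap for class c is M_c(x, y) = Σ_k w_k^c A_k(x, y), a weighted sum of the last convolutional feature maps. A plain-Python sketch with tiny hand-made feature maps (illustrative values, not the notebook's network):

```python
# Class Activation Mapping: with global average pooling + linear classifier,
# the class-c heatmap is M_c = sum_k w[c][k] * A_k over feature maps A_k.

def cam(feature_maps, class_weights):
    h, wdt = len(feature_maps[0]), len(feature_maps[0][0])
    heat = [[0.0] * wdt for _ in range(h)]
    for wk, A in zip(class_weights, feature_maps):
        for y in range(h):
            for x in range(wdt):
                heat[y][x] += wk * A[y][x]
    return heat

A = [  # two 2x2 feature maps from the last conv layer
    [[1.0, 0.0], [0.0, 0.0]],
    [[0.0, 0.0], [0.0, 1.0]],
]
w_c = [2.0, -1.0]                   # classifier weights for class c
M = cam(A, w_c)                     # [[2.0, 0.0], [0.0, -1.0]]
```

In practice the heatmap is then upsampled to the input resolution and overlaid on the image.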

## Quantifying Explanation Quality

### 5.1 Explanation Continuity

http://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/5.1%20Explanation%20Continuity.ipynb
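
Explanation continuity asks that nearby inputs with nearly identical predictions receive nearly identical explanations; one simple score is the largest ratio ‖R(x) − R(x′)‖ / ‖x − x′‖ over sampled pairs. A 1-D sketch (toy explanation function, not the notebook's setup):

```python
# Explanation continuity score: worst-case change in the explanation R per
# unit change in the input, over consecutive sampled points.

def continuity_score(explain, xs):
    worst = 0.0
    for i in range(len(xs) - 1):
        dr = abs(explain(xs[i]) - explain(xs[i + 1]))
        dx = abs(xs[i] - xs[i + 1])
        worst = max(worst, dr / dx)
    return worst

# The gradient explanation of the smooth function f(x) = x^2 is R(x) = 2x,
# so the score is 2 on any sample; a discontinuous R would blow it up.
xs = [0.0, 0.1, 0.2, 0.3]
score = continuity_score(lambda x: 2 * x, xs)
```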

### 5.2 Explanation Selectivity

http://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/5.2%20Explanation%20Selectivity.ipynb
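
Explanation selectivity is typically measured by "pixel-flipping": delete features in order of decreasing relevance and record how fast the prediction drops; a more selective explanation yields a steeper curve. A minimal sketch with a toy additive score where the true contributions are known:

```python
# Pixel-flipping: zero out features from most to least relevant and record
# the model score after each deletion. Steeper drop = more selective explanation.

def pixel_flipping_curve(f, x, relevance):
    order = sorted(range(len(x)), key=lambda i: -relevance[i])
    x = list(x)
    curve = [f(x)]
    for i in order:
        x[i] = 0.0                  # "flip" the most relevant remaining feature
        curve.append(f(x))
    return curve

f = lambda x: sum(x)                # toy score: sum of features
x = [3.0, 1.0, 2.0]
R = [3.0, 1.0, 2.0]                 # perfect relevance = true contribution
curve = pixel_flipping_curve(f, x, R)   # [6.0, 3.0, 1.0, 0.0]
```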

## References

Sections 1.1 ~ 2.2 and 5.1 ~ 5.2

[1] Montavon, G., Samek, W., Müller, K.-R. Methods for interpreting and understanding deep neural networks. arXiv preprint arXiv:1706.07979, 2017.

Section 1.3

[2] Nguyen, A., Dosovitskiy, A., Yosinski, J., Brox, T., Clune, J., 2016. Synthesizing the preferred inputs for neurons in neural networks via deep generator networks. In: Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain. pp. 3387-3395.

[3] A. Dosovitskiy and T. Brox. Generating images with perceptual similarity metrics based on deep networks. In NIPS, 2016.

Section 2.3

[4] Bach, S., Binder, A., Montavon, G., Klauschen, F., Müller, K.-R., Samek, W. On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PLOS ONE 10 (7), 1-46, 2015.

Section 2.4

[5] Montavon, G., Lapuschkin, S., Binder, A., Samek, W., Müller, K.-R. Explaining nonlinear classification decisions with deep Taylor decomposition. Pattern Recognition 65, 211-222, 2017.

Section 2.5

[6] Avanti Shrikumar, Peyton Greenside, and Anshul Kundaje. Learning Important Features Through Propagating Activation Differences. arXiv preprint arXiv:1704.02685, 2017.

Section 3.1

[7] Zeiler, M. D., Fergus, R., 2014. Visualizing and understanding convolutional networks. In: Computer Vision - ECCV 2014 - 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part I. pp. 818-833.

Section 3.2

[8] K. Simonyan, A. Vedaldi, and A. Zisserman. Deep inside convolutional networks: Visualising image classification models and saliency maps. In Workshop at International Conference on Learning Representations, 2014.

Section 3.3

[9] Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, and Martin Riedmiller. Striving for simplicity: The all convolutional net. arXiv preprint arXiv:1412.6806, 2014.

Section 3.4

[10] Mukund Sundararajan, Ankur Taly, and Qiqi Yan. Axiomatic attribution for deep networks. arXiv preprint arXiv:1703.01365, 2017.

Section 3.5

[11] Daniel Smilkov, Nikhil Thorat, Been Kim, Fernanda Viégas, and Martin Wattenberg. SmoothGrad: removing noise by adding noise. arXiv preprint arXiv:1706.03825, 2017.

Section 4.1

[12] Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. Learning deep features for discriminative localization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2921–2929, 2016.

Section 4.2

[13] Selvaraju, R. R., Das, A., Vedantam, R., Cogswell, M., Parikh, D., Batra, D. Grad-CAM: Why did you say that? Visual explanations from deep networks via gradient-based localization. arXiv:1611.01646, 2016.

Section 4.3

[14] A. Chattopadhyay, A. Sarkar, P. Howlader, and V. N. Balasubramanian. Grad-cam++: Generalized gradient-based visual explanations for deep convolutional networks. CoRR, abs/1710.11063, 2017.
