
A Summary of English Expressions from GCN Papers

张凝可 · 2020-04-08

Reading and writing papers at home with the cat: such a pleasant life!

I often joke that my English writing is like an old granny's foot-binding cloth, long and smelly.

This post mainly records some English expressions from GCN papers, collected so I can study them slowly.

Each entry gives the paper title, plus some small notes.

------------------------------------------------------- a dividing line where the serious face begins -------------------------------------------------------

Aspect-based Sentiment Classification with Aspect-specific Graph Convolutional Networks

1. To tackle this problem, we propose to build a Graph Convolutional Network (GCN) over the dependency tree of a sentence to exploit syntactical information and word dependencies.

Note the use of "over" and "exploit".

2. GCN has a multi-layer architecture, with each layer encoding and updating the representation of nodes in the graph using features of immediate neighbors.

Note the use of "multi-layer", and the use of "with".

This sentence pattern is often needed to describe a multi-layer GCN; see the sketch below.
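As a concrete instance of what the sentence describes, the per-layer update can be written as follows (this is one common normalized form, not necessarily the exact variant of any one paper):

$$h_i^{(l+1)} = \sigma\Big(\frac{1}{d_i}\sum_{j \in \mathcal{N}(i)} W^{(l)} h_j^{(l)} + b^{(l)}\Big)$$

where $\mathcal{N}(i)$ is the set of immediate neighbors of node $i$ (the "immediate neighbors" in the quoted sentence) and $d_i$ is a degree normalization term.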

3. Furthermore, following the idea of self-looping in Kipf and Welling (2017), each word is manually set adjacent to itself, i.e. the diagonal values of A are all ones.

"Following the idea of ..."

"the diagonal values of A are all ones": a matrix A whose diagonal entries are all 1.

"set adjacent to itself": giving each node a self-loop.
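In matrix form, this self-looping is simply adding the identity matrix to the adjacency matrix, as in Kipf and Welling (2017):

$$\tilde{A} = A + I_N$$

so that every diagonal entry of $\tilde{A}$ is 1.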

4. Experimental results have indicated that GCN brings benefit to the overall performance by leveraging both syntactical information and long-range word dependencies.

"Brings benefit to".

"Leverage" can be translated as "to make use of".

5. While attention-based models are promising, they are insufficient to capture syntactical dependencies between context words and the aspect within a sentence.

This describes a limitation of attention-based models: they cannot adequately capture the syntactic dependencies between the context words and the aspect within a sentence. This is largely caused by the long distance between words. To be fair, that is not the whole story: self-attention considers the attention between all pairs of words in a sentence, which may alleviate some of the information loss over long distances.

"While" here means "although".

SEMI-SUPERVISED CLASSIFICATION WITH GRAPH CONVOLUTIONAL NETWORKS

1. Our contributions are two-fold. Firstly, we introduce a simple and well-behaved layer-wise propagation rule for neural network models which operate directly on graphs and show how it can be motivated from a first-order approximation of spectral graph convolutions (Hammond et al., 2011). Secondly, we demonstrate how this form of a graph-based neural network model can be used for fast and scalable semi-supervised classification of nodes in a graph. Experiments on a number of datasets demonstrate that our model compares favorably both in classification accuracy and efficiency (measured in wall-clock time) against state-of-the-art methods for semi-supervised learning.

This is how the classic GCN is described.

In essence, GCN is a localized first-order approximation of spectral graph convolution. Another characteristic of GCN is that its model size grows linearly with the number of edges in the graph. In short, GCN can be used to encode both local graph structure and node features.
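For reference, the layer-wise propagation rule introduced in this paper is:

$$H^{(l+1)} = \sigma\big(\tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}} H^{(l)} W^{(l)}\big), \qquad \tilde{A} = A + I_N, \qquad \tilde{D}_{ii} = \textstyle\sum_j \tilde{A}_{ij}$$

where $H^{(l)}$ is the matrix of node representations at layer $l$ and $W^{(l)}$ is a layer-specific trainable weight matrix.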

2. Semantic role labeling (SRL) can be informally described as the task of discovering who did what to whom.

Previously, when defining or formalizing a task, we would often write "is formalized as ..." or "is defined as a ... problem".

In fact, "is described as the task of ..." also works: described as the task of doing such-and-such.

GRAPH ATTENTION NETWORKS

1. In its most general formulation, the model allows every node to attend on every other node, dropping all structural information. We inject the graph structure into the mechanism by performing masked attention: we only compute e_ij for nodes j ∈ N_i, where N_i is some neighborhood of node i in the graph.

This presents the two attention regimes in GAT. In one, every node considers the influence of every node in the graph: the extreme case, which ignores all structural information.

In the other, only nodes within the neighborhood of node i are considered; a sketch follows this list.

Note the expressions:

"every node to attend on every other node" conveys the sense of nodes attending to one another.

"Drop all structural information": note especially the use of "drop", where plainer words such as "ignore" or "lose" are more common.

"inject sth into sth by sth": injecting a mechanism or a structure into something by some means.

The phrase "masked attention".

"Neighborhood of node i in the graph".
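A minimal PyTorch sketch of the masked-attention idea (the function name and the dense-matrix setup are my own illustration, not from the paper):

```python
import torch.nn.functional as F

def masked_attention(e, adj):
    """Restrict attention logits e_ij to nodes j in the neighborhood N_i.

    e:   (N, N) raw attention logits for every pair of nodes
    adj: (N, N) 0/1 adjacency; assumed to include self-loops so that
         every row has at least one neighbor (otherwise softmax yields NaN)
    """
    # Non-neighbors get -inf, so softmax assigns them exactly zero weight.
    e = e.masked_fill(adj == 0, float("-inf"))
    return F.softmax(e, dim=-1)
```

Setting the logits of non-neighbors to minus infinity before the softmax is what "injects the graph structure into the mechanism".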

2. By stacking layers in which nodes are able to attend over their neighborhoods’ features, we enable (implicitly) specifying different weights to different nodes in a neighborhood, without requiring any kind of costly matrix operation (such as inversion).

"which" here refers to the stacked layers.

"nodes are able to attend over their neighborhoods' features".

"specifying different weights to different nodes".

Note the use of "without".

3. However, many interesting tasks involve data that can not be represented in a grid-like structure and that instead lies in an irregular domain.

4. This is the case of 3D meshes, social networks, telecommunication networks, biological networks or brain connectomes. Such data can usually be represented in the form of graphs.

Note the expressions.

This passage is commonly used to contrast two kinds of structure:

one is the grid-like structure, which CNNs can handle;

the other is the irregular domain, non-grid data such as social networks and telecommunication networks.

5. The idea is to compute the hidden representations of each node in the graph, by attending over its neighbors, following a self-attention strategy.

Note the expressions:

"By attending over its neighbors".

"Following a self-attention strategy".
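Concretely, the strategy these phrases describe is the GAT update from this paper: score each neighbor, normalize, then aggregate:

$$e_{ij} = \text{LeakyReLU}\big(\mathbf{a}^{\top}[W h_i \,\Vert\, W h_j]\big), \qquad \alpha_{ij} = \frac{\exp(e_{ij})}{\sum_{k \in \mathcal{N}_i} \exp(e_{ik})}, \qquad h_i' = \sigma\Big(\sum_{j \in \mathcal{N}_i} \alpha_{ij} W h_j\Big)$$

where $\Vert$ denotes concatenation and $\mathbf{a}$, $W$ are learned parameters.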

Attention Guided Graph Convolutional Networks for Relation Extraction

1. However, how to effectively make use of relevant information while ignoring irrelevant information from the dependency trees remains a challenging research question.

Note the expressions:

"how to do sth" used as the subject.

The use of "while": here it means "at the same time".

Translation: however, how to effectively make use of relevant information, while ignoring irrelevant information from the dependency trees, remains a challenging research question.

"remains a challenging research question": "remains" works better than "is" here, conveying that this is not merely a problem but a long-standing, still-open one.

2. Intuitively, we develop a "soft pruning" strategy that transforms the original dependency tree into a fully connected edge-weighted graph.

Note the expressions:

"Intuitively".

"develop a strategy that ...".

3. With the help of dense connections, we are able to train the AGGCN model with a large depth, allowing rich local and non-local dependency information to be captured.

This passage describes what dense connections do for the network. Everyone says the same thing, that dense connections make it possible to train deeper networks and lower the risk of overfitting, but "with the help of" is used really well here.

"With the help of".

"train the model with a large depth" sounds far classier than "deeper network".

"local and non-local dependency information".

The subject of "allow" is the model, which also reads as more objective.

"model allows sth to be done".

"allowing rich local and non-local dependency information to be captured": many rewrites can be derived from this.

GCNs are neural networks that operate directly on graph structures

Note the use of "operate".

A general way to describe GCNs.

4. Opposite direction of a dependency arc is also included, which means A_ij = 1 and A_ji = 1 if there is an edge going from node i to node j; otherwise A_ij = 0 and A_ji = 0.
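A minimal NumPy sketch of this adjacency construction (the function name and the edge-list input format are my own illustration):

```python
import numpy as np

def build_adjacency(edges, n):
    """Build the matrix A described above from dependency arcs.

    edges: list of (i, j) pairs, one per dependency arc from node i to node j
    n:     number of words (nodes) in the sentence
    """
    A = np.zeros((n, n))
    for i, j in edges:
        A[i, j] = 1  # the dependency arc itself
        A[j, i] = 1  # the opposite direction is also included
    np.fill_diagonal(A, 1)  # self-loops: the diagonal values of A are all ones
    return A
```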

5. Our model can be understood as a soft-pruning approach that automatically learns how to selectively attend to the relevant sub-structures useful for the relation extraction task.

"Sth can be understood as a ... approach that ...".

"how to selectively attend to the relevant sub-structures useful for ...": note "attend" here.

6. Instead of using rule-based pruning, we develop a "soft pruning" strategy in the attention guided layer, which assigns weights to all edges. These weights can be learned by the model in an end-to-end fashion.

I simply find this written plainly and clearly; sometimes I feel my own writing is like an old granny's foot-binding cloth, long and smelly.
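The AGGCN paper implements this attention-guided layer with self-attention; here is a minimal single-head sketch of the idea (parameter names are mine):

```python
import torch.nn.functional as F

def soft_pruned_adjacency(h, w_q, w_k):
    """Turn node states into a fully connected edge-weighted graph.

    h: (N, d) node representations; w_q, w_k: (d, d) learned projections.
    Returns an (N, N) soft adjacency in which every edge has a weight.
    """
    d = h.size(-1)
    q, k = h @ w_q, h @ w_k
    # No edge is hard-pruned to zero; the weights are learned end-to-end.
    return F.softmax(q @ k.t() / d ** 0.5, dim=-1)
```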

Syntax-Aware Aspect Level Sentiment Classification with Graph Attention Networks

1. When an aspect term is separated away from its sentiment phrase, it is hard to find the associated sentiment words in a sequence.

This explains why we want to introduce syntax into many NLP tasks, here aspect-level sentiment classification.

Our models are usually built over sequential input. In a sequence, some text may sit very far from the key information, yet once the sentence is converted into a syntax tree, the two can turn out to be directly related. This is why syntactic dependencies are introduced: they reduce, to some extent, the information loss caused by long-distance dependencies.

2. Unlike these previous methods, our approach represents a sentence as a dependency graph instead of a word sequence.

Note the expression.

Essentially this converts the text from a word-sequence structure into a dependency graph.

"our approach represents A as a B instead of C":

we represent A as B rather than as C.

3. We employ a multi-layer graph attention network to propagate sentiment features from important syntax neighbourhood words to the aspect target.

Note the expressions.

I really like this use of "propagate".

"employ sb/sth to do sth".

"Propagate the ... from the important syntax neighbourhood words to the aspect target".

This sentence vividly captures how information propagates through a graph: the information of neighboring nodes is aggregated along the edges of the graph.

Graph Convolution over Pruned Dependency Trees Improves Relation Extraction

1. To resolve these issues, we further apply a Contextualized GCN (C-GCN) model, where the input word vectors are first fed into a bi-directional long short-term memory (LSTM) network to generate contextualized representations, which are then used as h(0) in the original model.

This explains the C-GCN. It is easy to understand: a Bi-LSTM layer (sometimes called a contextualized layer) is inserted between the word embeddings and the GCN layers. The word embeddings are first run through the Bi-LSTM, and the result is fed into the GCN, which propagates the contextualized features.
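A minimal PyTorch sketch of this layout (the class name and dimension choices are illustrative, not the paper's exact configuration):

```python
import torch
import torch.nn as nn

class ContextualizedGCN(nn.Module):
    """C-GCN sketch: a Bi-LSTM contextualizes embeddings before the GCN."""

    def __init__(self, emb_dim, hidden_dim, num_layers):
        super().__init__()
        # The contextualized layer: h(0) comes from a bi-directional LSTM.
        # hidden_dim is assumed even so the two directions sum to hidden_dim.
        self.lstm = nn.LSTM(emb_dim, hidden_dim // 2,
                            batch_first=True, bidirectional=True)
        self.gcn_layers = nn.ModuleList(
            [nn.Linear(hidden_dim, hidden_dim) for _ in range(num_layers)])

    def forward(self, emb, adj):
        # emb: (B, N, emb_dim) word embeddings; adj: (B, N, N) adjacency.
        h, _ = self.lstm(emb)  # contextualized representations, used as h(0)
        denom = adj.sum(dim=-1, keepdim=True) + 1e-6  # degree normalization
        for layer in self.gcn_layers:
            h = torch.relu(adj @ layer(h) / denom)  # propagate over the graph
        return h
```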
