
Blog | A Roundup of Papers on SLU (Intent Detection, Slot Filling, Contextual LU, Structural LU) and NLG

This article was originally published on the WeChat public account AI部落联盟 (AI_Tribe) and is reposted by AI研习社 with permission. Links to the papers listed below are available through the original post.

Quite a few readers have asked me for links to these papers on Zhihu or WeChat, so I am posting them all in one place. Follow-up posts will cover DST, DPL, transfer learning for dialogue systems, reinforcement learning for dialogue systems, memory networks for dialogue systems, GANs for dialogue systems, and more; stay tuned if you are interested.

SLU

1. SLU - Domain/Intent Classification

1.1 SVM and MaxEnt

1.2 Deep belief nets (DBN)

Deep belief nets for natural language call-routing, Sarikaya et al., 2011

1.3 Deep convex networks (DCN)

Towards deeper understanding: Deep convex networks for semantic utterance classification, Tur et al., 2012

1.4 Extension to kernel-DCN

Use of kernel deep convex networks and end-to-end learning for spoken language understanding, Deng et al., 2012

1.5 RNN and LSTMs

Recurrent Neural Network and LSTM Models for Lexical Utterance Classification, Ravuri et al., 2015

1.6 RNN and CNNs

Sequential Short-Text Classification with Recurrent and Convolutional Neural Networks, Lee et al., NAACL 2016

2. SLU – Slot Filling

2.1 RNN for Slot Tagging

Bi-LSTMs and an input sliding window of n-grams (a minimal Bi-LSTM tagging sketch follows this subsection)

2.1.1 Recurrent neural networks for language understanding, Yao et al., Interspeech 2013

2.1.2 Using recurrent neural networks for slot filling in spoken language understanding, Mesnil et al., 2015

2.1.3 Encoder-decoder networks

Leveraging Sentence-level Information with Encoder LSTM for Semantic Slot Filling, Kurata et al., EMNLP 2016

Attention-based encoder-decoder

Exploring the use of attention-based recurrent neural networks for spoken language understanding, Simonnet et al., 2015
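
To make the Bi-LSTM slot-tagging idea above concrete, here is a minimal sketch in PyTorch. It is not the exact model of any cited paper; the vocabulary size, tag set, and dimensions are invented for illustration.

```python
import torch
import torch.nn as nn

class BiLSTMSlotTagger(nn.Module):
    """Minimal Bi-LSTM slot tagger: predicts one BIO-style slot tag per input token."""
    def __init__(self, vocab_size, num_tags, emb_dim=100, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden_dim, num_tags)

    def forward(self, token_ids):                      # token_ids: (batch, seq_len)
        states, _ = self.lstm(self.embed(token_ids))   # (batch, seq_len, 2*hidden_dim)
        return self.out(states)                        # (batch, seq_len, num_tags)

# Toy usage, e.g. "book a flight to boston" -> O O O O B-toloc
model = BiLSTMSlotTagger(vocab_size=1000, num_tags=10)
logits = model(torch.randint(0, 1000, (1, 5)))
loss = nn.CrossEntropyLoss()(logits.view(-1, 10), torch.zeros(5, dtype=torch.long))
loss.backward()
```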

2.2 Multi-task learning

2.2.1 Domain adaptation

Domain Adaptation of Recurrent Neural Networks for Natural Language Understanding, Jaech et al., Interspeech 2016

2.2.2 Joint segmentation and slot tagging

Neural Models for Sequence Chunking, Zhai et al., AAAI 2017

2.2.3 Joint semantic frame parsing: slot filling and intent prediction in the same output sequence

Multi-Domain Joint Semantic Frame Parsing using Bi-directional RNN-LSTM, Hakkani-Tur et al., Interspeech 2016

2.2.4 Joint semantic frame parsing: intent prediction and slot filling in two branches (a minimal two-branch sketch follows this subsection)

Attention-Based Recurrent Neural Network Models for Joint Intent Detection and Slot Filling, Liu and Lane, Interspeech 2016
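
A minimal sketch of the two-branch joint setup in 2.2.4: a shared encoder whose pooled states predict the intent while the per-token states predict slot tags. This is a simplification, not the cited architectures; all names and sizes are illustrative.

```python
import torch
import torch.nn as nn

class JointIntentSlotModel(nn.Module):
    """Shared Bi-LSTM encoder with two heads: intent (per utterance) and slots (per token)."""
    def __init__(self, vocab_size, num_intents, num_slots, emb_dim=100, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.intent_head = nn.Linear(2 * hidden_dim, num_intents)
        self.slot_head = nn.Linear(2 * hidden_dim, num_slots)

    def forward(self, token_ids):
        states, _ = self.encoder(self.embed(token_ids))        # (B, T, 2H)
        # Mean-pool token states as a simple utterance representation for intent.
        intent_logits = self.intent_head(states.mean(dim=1))   # (B, num_intents)
        slot_logits = self.slot_head(states)                   # (B, T, num_slots)
        return intent_logits, slot_logits

model = JointIntentSlotModel(vocab_size=1000, num_intents=5, num_slots=10)
intent_logits, slot_logits = model(torch.randint(0, 1000, (2, 7)))
# The joint training loss is simply the sum of the intent and slot cross-entropy terms.
```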

3. Contextual LU

3.1 Context Sensitive Spoken Language Understanding using Role Dependent LSTM layers, Hori et al., 2015

3.2 E2E MemNN for Contextual LU (a minimal sketch of the memory-carryover idea follows this section)

End-to-End Memory Networks with Knowledge Carryover for Multi-Turn Spoken Language Understanding, Chen et al., 2016

3.3 Sequential Dialogue Encoder Network

Sequential Dialogue Context Modeling for Spoken Language Understanding, Bapna et al., SIGDIAL 2017
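
A rough sketch of the memory-carryover idea in 3.2: encodings of previous turns serve as memory, the current utterance attends over them, and the retrieved summary is fused back before tagging. This is a strong simplification of the cited end-to-end memory network; names and sizes are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContextualEncoder(nn.Module):
    """Attend over encodings of previous turns and mix them into the current utterance."""
    def __init__(self, dim=128):
        super().__init__()
        self.proj = nn.Linear(2 * dim, dim)

    def forward(self, current, history):
        # current: (B, D) encoding of the current utterance
        # history: (B, K, D) encodings of the K previous turns (the "memory")
        scores = torch.bmm(history, current.unsqueeze(2)).squeeze(2)   # (B, K)
        attn = F.softmax(scores, dim=1)
        carried = torch.bmm(attn.unsqueeze(1), history).squeeze(1)     # (B, D)
        # Fuse the carried-over context with the current turn before slot tagging.
        return torch.tanh(self.proj(torch.cat([current, carried], dim=1)))

enc = ContextualEncoder(dim=128)
fused = enc(torch.randn(2, 128), torch.randn(2, 4, 128))  # feed `fused` to a slot tagger
```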

4. Structural LU

4.1 K-SAN: prior knowledge as a teacher; sentence structural knowledge stored as memory

Knowledge as a Teacher: Knowledge-Guided Structural Attention Networks, Chen et al., 2016

5. SLU - Supervised Learning (SL)

CRF (Wang and Acero 2006; Raymond and Riccardi 2007; a minimal CRF tagging sketch follows these references):

Discriminative Models for Spoken Language Understanding, Wang and Acero, Interspeech 2006

Generative and discriminative algorithms for spoken language understanding, Raymond and Riccardi, Interspeech 2007

Puyang Xu and Ruhi Sarikaya. Convolutional neural network based triangular CRF for joint intent detection and slot filling, ASRU 2013
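
For the CRF line of work, a minimal sketch of slot filling as BIO sequence labelling with hand-crafted features, using sklearn-crfsuite purely as an illustration (the toy sentence and features are made up):

```python
# pip install sklearn-crfsuite
import sklearn_crfsuite

def token_features(tokens, i):
    """Simple per-token features: the word, its neighbours, and casing."""
    feats = {"word": tokens[i].lower(), "is_title": tokens[i].istitle()}
    feats["prev"] = tokens[i - 1].lower() if i > 0 else "<bos>"
    feats["next"] = tokens[i + 1].lower() if i < len(tokens) - 1 else "<eos>"
    return feats

# Toy training data in BIO format (a single utterance).
sentences = [["show", "flights", "from", "Boston", "to", "Denver"]]
labels = [["O", "O", "O", "B-fromloc", "O", "B-toloc"]]

X = [[token_features(s, i) for i in range(len(s))] for s in sentences]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X, labels)
print(crf.predict(X))
```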

RNN (Yao et al. 2013; Mesnil et al. 2013, 2015; Liu and Lane 2015):

Recurrent neural networks for language understanding, Yao et al., Interspeech 2013

Using recurrent neural networks for slot filling in spoken language understanding, Mesnil et al., 2015

Investigation of recurrent-neural-network architectures and learning methods for spoken language understanding, Mesnil et al., Interspeech 2013

Recurrent Neural Network Structured Output Prediction for Spoken Language Understanding, Liu and Lane, NIPS, 2015

LSTM (Yao et al. 2014)

Spoken language understanding using long short-term memory neural networks, Yao et al., 2014

6. SLU - Supervised + Transfer Learning (SL + TL)

Instance based transfer for SLU (Tur 2006);

Gokhan Tur. Multitask learning for spoken language understanding. In ICASSP, 2006. IEEE.

Model adaptation for SLU (Tür 2005);

Gökhan Tür. Model adaptation for spoken language understanding. In ICASSP (1), pages 41-44, 2005.

Parameter transfer (Yazdani and Henderson 2015)

A Model of Zero-Shot Learning of Spoken Language Understanding, Yazdani and Henderson, EMNLP 2015

_____________________________________________________________________

NLG

Overview: from traditional to neural NLG

Template-Based NLG

Plan-Based NLG (Walker et al., 2002)

Class-Based LM NLG

Stochastic language generation for spoken dialogue systems, Oh and Rudnicky, NAACL 2000

Phrase-Based NLG

Phrase-based statistical language generation using graphical models and active learning, Mairesse et al., 2010

RNN-Based LM NLG

Stochastic Language Generation in Dialogue using Recurrent Neural Networks with Convolutional Sentence Reranking, Wen et al., SIGDIAL 2015

Semantic Conditioned LSTM

Semantically Conditioned LSTM-based Natural Language Generation for Spoken Dialogue Systems, Wen et al., EMNLP 2015

Structural NLG

Sequence-to-Sequence Generation for Spoken Dialogue via Deep Syntax Trees and Strings, Dušek and Jurčíček, ACL 2016

Contextual NLG

A Context-aware Natural Language Generator for Dialogue Systems, Dušek and Jurčíček, 2016

Controlled Text Generation

Toward Controlled Generation of Text, Hu et al., 2017

1. NLG - Traditional

Marilyn A. Walker, Owen C. Rambow, and Monica Rogati. Training a sentence planner for spoken dialogue using boosting, 2002.
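
Template-based NLG maps a dialogue act directly onto a hand-written surface form with slot values substituted in. A minimal sketch; the templates and dialogue act below are invented for illustration:

```python
# Hypothetical templates keyed by dialogue-act type; {slot} markers are filled from the act.
TEMPLATES = {
    "inform": "{name} serves {food} food in the {area} part of town.",
    "request": "What {slot} are you looking for?",
}

def generate(dialogue_act: dict) -> str:
    """Pick the template for the act type and substitute its slot values."""
    template = TEMPLATES[dialogue_act["type"]]
    return template.format(**dialogue_act["slots"])

print(generate({"type": "inform",
                "slots": {"name": "Nirala", "food": "Indian", "area": "north"}}))
# -> "Nirala serves Indian food in the north part of town."
```

The obvious limitation is that every act type and phrasing must be authored by hand, which is what the statistical and neural approaches below try to avoid.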

2. NLG - Corpus-based

Alice H. Oh and Alexander I. Rudnicky. Stochastic language generation for spoken dialogue systems, 2000.

François Mairesse and Steve Young. Stochastic language generation in dialogue using factored language models, 2014.

3. NLG - Neural Network

Recurrent neural network based language model, Mikolov et al., Interspeech 2010

Extensions of recurrent neural network language model, Mikolov et al., ICASSP 2011

Stochastic language generation in dialogue using recurrent neural networks with convolutional sentence reranking, Wen et al., SIGDIAL 2015

Semantically conditioned LSTM-based natural language generation for spoken dialogue systems, Wen et al., EMNLP 2015
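
The RNN-based generators above produce (typically delexicalised) surface forms token by token, conditioned on the dialogue act. A heavily simplified sketch that concatenates a dialogue-act vector to every input step, rather than the SC-LSTM gating of Wen et al.; all names and sizes are illustrative:

```python
import torch
import torch.nn as nn

class ConditionedGenerator(nn.Module):
    """LSTM language model whose input at each step is [word embedding; dialogue-act vector]."""
    def __init__(self, vocab_size, da_dim, emb_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim + da_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, word_ids, da_vector):
        # word_ids: (B, T) previous tokens; da_vector: (B, da_dim) encoding of the dialogue act.
        da = da_vector.unsqueeze(1).expand(-1, word_ids.size(1), -1)
        states, _ = self.lstm(torch.cat([self.embed(word_ids), da], dim=2))
        return self.out(states)   # next-token logits at every position

gen = ConditionedGenerator(vocab_size=500, da_dim=20)
logits = gen(torch.randint(0, 500, (1, 6)), torch.rand(1, 20))
# Training maximises the likelihood of delexicalised references such as
# "<name> is a <food> restaurant in <area>" given the dialogue act.
```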

4. Transfer learning for NLG

Recurrent neural network based language model personalization by social network crowdsourcing, Wen et al., Interspeech 2013

Recurrent neural network language model adaptation with curriculum learning, Shi et al., 2015

Multi-domain neural network language generation for spoken dialogue systems. Wen et al., NAACL 2016

Shared from the WeChat public account AI研习社 (okweiwu). Originally published on 2018-11-30.
