Most deep learning frameworks require users to pool their local data or model updates on a trusted server in order to train or maintain a global model. In many applications, the assumption that a trusted server has access to user information is inappropriate. To tackle this problem, the paper develops a new deep learning framework for the untrusted-server setting, which comprises three modules: (1) an embedding module, (2) a randomization module, and (3) a classifier module. For the randomization module, the paper proposes a novel locally differentially private (LDP) protocol that reduces the impact of the privacy parameter ϵ on accuracy and provides greater flexibility in choosing the randomization probabilities for LDP. Experiments show that the framework delivers comparable or even better performance than non-private frameworks and existing LDP protocols, demonstrating the advantages of the proposed LDP protocol.
Original title: Towards Differentially Private Text Representations
Original abstract: Most deep learning frameworks require users to pool their local data or model updates to a trusted server to train or maintain a global model. The assumption of a trusted server who has access to user information is ill-suited in many applications. To tackle this problem, we develop a new deep learning framework under an untrusted server setting, which includes three modules: (1) embedding module, (2) randomization module, and (3) classifier module. For the randomization module, we propose a novel local differentially private (LDP) protocol to reduce the impact of privacy parameter ϵ on accuracy, and provide enhanced flexibility in choosing randomization probabilities for LDP. Analysis and experiments show that our framework delivers comparable or even better performance than the non-private framework and existing LDP protocols, demonstrating the advantages of our LDP protocol.
Original authors: Lingjuan Lyu, Yitong Li, Xuanli He, Tong Xiao
Original link: https://arxiv.org/abs/2006.14170
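As a rough illustration of the pipeline the abstract describes, here is a minimal Python sketch: the output of the embedding module is binarized, then perturbed on the user side with a randomized-response-style mechanism before anything is sent to the untrusted server. The function names, the thresholding rule, and the (p, q) parameterization below are illustrative assumptions, not the authors' exact protocol; the paper's contribution is precisely a more flexible way to choose these probabilities under a given ϵ.

```python
import numpy as np

def binarize(representation, threshold=0.0):
    """Map a real-valued embedding to a bit vector.
    The thresholding rule here is a hypothetical choice."""
    return (representation > threshold).astype(np.int8)

def randomized_response(bits, p, q, eps, rng=None):
    """Perturb each bit independently: output 1 with probability p
    if the input bit is 1, and with probability q if it is 0.

    Each bit satisfies eps-LDP iff
        max(p/q, q/p, (1-p)/(1-q), (1-q)/(1-p)) <= exp(eps).
    Classical binary randomized response is the special case q = 1 - p;
    the paper's protocol chooses (p, q) more flexibly.
    """
    ratios = (p / q, q / p, (1 - p) / (1 - q), (1 - q) / (1 - p))
    if max(ratios) > np.exp(eps) + 1e-9:
        raise ValueError("(p, q) violate eps-LDP for eps=%.3f" % eps)
    rng = rng if rng is not None else np.random.default_rng()
    prob_one = np.where(bits == 1, p, q)  # P(output bit = 1)
    return (rng.random(bits.shape) < prob_one).astype(np.int8)

# Example: perturb a 16-dimensional representation on the user side.
eps = 2.0
p = np.exp(eps) / (1.0 + np.exp(eps))  # classical RR setting: q = 1 - p
q = 1.0 - p
embedding = np.random.default_rng(0).normal(size=16)  # stand-in for the embedding module's output
noisy_bits = randomized_response(binarize(embedding), p, q, eps)
# Only noisy_bits leave the device; the classifier module is trained on them.
```

Note that perturbing each of the d bits with a per-bit budget ϵ composes to a d·ϵ guarantee for the whole vector; how the paper's protocol accounts for the overall budget is specified in the paper itself.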