How to Build AI That Helps, Not Hurts, Humans

TED Overview

Speaker: Margaret Mitchell
Key words: artificial intelligence
Abstract: Data used to train AI should be comprehensive and free of biases.
@TED: As a research scientist at Google, Margaret Mitchell helps develop computers that can communicate about what they see and understand. She tells a cautionary tale about the gaps, blind spots and biases we subconsciously encode into AI -- and asks us to consider what the technology we create today will mean for tomorrow. "All that we see now is a snapshot in the evolution of artificial intelligence," Mitchell says. "If we want AI to evolve in a way that helps humans, then we need to define the goals and strategies that enable that path now."
Rating: ⭐️⭐️⭐️

Learning Notes

The speaker:

working on helping computers to generate human-like stories from sequences of images

found that when the computers saw a horrible, life-changing and life-destroying event, they thought it was something positive.

Why:

the computers were trained on images, most of which were positive.

This is data bias.

Biases that reflect a limited viewpoint, limited to a single dataset -- biases that can reflect human biases found in the data, such as prejudice and stereotyping.
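To make the data-bias point concrete, here is a minimal, hypothetical Python sketch (not from the talk): when a caption dataset is overwhelmingly labeled positive, the degenerate strategy the data rewards is to call everything positive, even a disaster scene.

```python
# A minimal, hypothetical sketch of data bias: with overwhelmingly
# "positive" training labels, a naive model learns to answer
# "positive" for everything -- even a house burning down.
from collections import Counter

# Made-up training labels mirroring the skew described in the talk:
# 950 positive image captions, only 50 negative ones.
labels = ["positive"] * 950 + ["negative"] * 50
counts = Counter(labels)

# The degenerate "majority class" strategy such a dataset rewards:
# always predict the most frequent label seen in training.
majority_label = counts.most_common(1)[0][0]

print("Training label distribution:", dict(counts))
print("Prediction for a photo of a house burning down:", majority_label)
```

The remedy Mitchell points to is not a cleverer model but more comprehensive data: auditing and balancing what the system is trained on before training begins.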

References

Link: How we can build AI to help humans, not hurt us | Margaret Mitchell

Come join the #1000 TED Learning Plan# and share the best and most practical TED study notes on the "journey of exploring one thousand TED videos".

▼ Click the original article to watch this TED video at the Link above
