
Stanford NLP Course (CS224n) | Lecture 19 - AI Safety, Bias, and Fairness

Original
Author: ShowMeAI
Published 2022-05-23 18:27:16
Column: ShowMeAI研究中心

AI Safety, Bias, and Fairness

ShowMeAI has translated into Chinese and annotated all the slides of Stanford's CS224n course, Natural Language Processing with Deep Learning, and turned them into GIF animations!


1.Bias in the Vision and Language of Artificial Intelligence


2.Prototype Theory

What do you see?
  • Bananas
  • Stickers
  • Dole Bananas
  • Bananas at a store
  • Bananas on shelves
  • Bunches of bananas
  • Bananas with stickers on them
  • Bunches of bananas with stickers on them on shelves in a store

...We don’t tend to say Yellow Bananas

Prototype Theory
  • Prototype Theory
    • One purpose of categorization is to reduce the infinite differences among stimuli to behaviorally and cognitively usable proportions
    • A core, prototypical notion of an item may come from the stored typical properties of its object category (Rosch, 1975)
    • Exemplars may also be stored (Wu & Barsalou, 2009)
  • Doctor → Female Doctor
  • Most subjects overlooked the possibility that the doctor was a woman, including men, women, and self-described feminists
World Learning from text
  • Human Reporting Bias
    • “murdered” appears ten times more often than “blinked”
    • We don't tend to mention things like blinking and breathing
Human Reporting Bias
  • Human Reporting Bias
    • The frequency with which people write about actions, outcomes, or properties does not reflect real-world frequency, nor the degree to which a property is characteristic of a class of individuals
    • It reflects more about how we process the world and what we find remarkable. This affects everything we learn from.
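As a toy illustration of reporting bias, one can count how often events are mentioned in text and compare that against real-world base rates. The mini-corpus below is invented purely for illustration; only the counting idea matters:

```python
from collections import Counter

# Invented mini-corpus: what gets written about skews toward the
# remarkable (murders), not the frequent (blinking).
corpus = [
    "The suspect murdered the victim",
    "She was murdered last night",
    "He murdered his rival",
    "She blinked at the camera",
]

verb_counts = Counter()
for sentence in corpus:
    for token in sentence.lower().split():
        if token in {"murdered", "blinked"}:
            verb_counts[token] += 1

# In the real world, blinking is vastly more frequent than murder; in
# text the ratio is inverted -- that inversion is human reporting bias.
print(verb_counts)  # Counter({'murdered': 3, 'blinked': 1})
```

A model that learns world knowledge from such text inherits the inverted frequencies.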
Human Reporting Bias in Data
  • Data
    • Reporting bias: what people share is not a reflection of real-world frequencies
    • Selection bias: the selection does not reflect a random sample
    • Out-group homogeneity bias: people tend to see outgroup members as more alike than ingroup members when comparing attitudes, values, personality traits, and other characteristics
  • Interpretation
    • Confirmation bias: the tendency to search for, interpret, favor, and recall information in a way that confirms one's preexisting beliefs or hypotheses
    • Overgeneralization: coming to a conclusion based on information that is too general and/or not specific enough
    • Correlation fallacy: confusing correlation with causation
    • Automation bias: the human tendency to favor suggestions from automated decision-making systems over contradictory information made without automation

3.Biases in Data

Biases in Data
  • Selection bias: the selection does not reflect a random sample
  • Out-group homogeneity bias: when comparing attitudes, values, personality traits, and other characteristics, people tend to see outgroup members as more alike than ingroup members
  • This can be counterintuitive: the four cats on the left are quite different from one another, but in the eyes of the dog they all look the same
Biases in Data → Biased Data Representation
  • You may have an appropriate amount of data for every group you can think of, but some groups are represented less positively than others
Biases in Data → Biased Labels
  • Annotations in a dataset will reflect the worldviews of the annotators

4.Biases in Interpretation

Biases in Interpretation
  • Confirmation bias: the tendency to search for, interpret, favor, and recall information in a way that confirms one's preexisting beliefs or hypotheses
  • Overgeneralization: coming to a conclusion based on information that is too general and/or not specific enough (related: overfitting)
  • Correlation fallacy: confusing correlation with causation
  • Automation bias: the human tendency to favor suggestions from automated decision-making systems over contradictory information made without automation
  • These biases can form feedback loops
  • This is known as the Bias Network Effect, or Bias “Laundering”

Human data perpetuates human biases. As ML learns from human data, the result is a bias network effect.

5.BIAS = BAD ??

“Bias” can be Good, Bad, Neutral
  • Bias in statistics and ML
    • Bias of an estimator: the difference between a predicted value and the correct value we are trying to predict
    • The “bias” term b (as in y = mx + b)
  • Cognitive biases
    • Confirmation bias, recency bias, optimism bias
  • Algorithmic bias
    • Unjust, unfair, or prejudicial treatment of people related to race, income, sexual orientation, religion, gender, and other characteristics historically associated with discrimination and marginalization, when and where that treatment manifests in algorithmic systems or algorithmically aided decision-making
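The statistical sense of bias above can be made concrete with the classic sample-variance example: dividing by n gives a biased estimator of the population variance, while dividing by n-1 (Bessel's correction) removes that bias. A small numpy sketch (illustrative; not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(0)
true_var = 4.0  # population variance of N(0, 2^2)

# Repeatedly draw small samples and average the two variance estimators.
biased, unbiased = [], []
for _ in range(20000):
    sample = rng.normal(0.0, 2.0, size=5)
    biased.append(sample.var(ddof=0))    # divides by n: biased low
    unbiased.append(sample.var(ddof=1))  # divides by n-1: unbiased

# The biased estimator's expectation is (n-1)/n * true_var = 3.2 here,
# while the unbiased estimator averages close to 4.0.
print(np.mean(biased), np.mean(unbiased))
```

The gap between an estimator's expected value and the true quantity is exactly the "bias" that statistics refers to, with no ethical connotation at all.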
amplify injustice
  • How can we avoid algorithmic bias and develop algorithms that do not amplify disparities?

6.Predicting Future Criminal Behavior

Predictive Policing
  • Predicting Future Criminal Behavior
    • Algorithms identify potential crime hotspots
    • Based on where crime was previously reported, not on where it is known to have occurred
    • Predicting future events from the past
    • What gets predicted is where arrests are made, not where crime occurs
Predicting Sentencing
  • Prater (white) was rated low risk after shoplifting, despite two armed robberies and one attempted armed robbery
  • Borden (Black) was rated high risk after she and a friend took a bicycle and a scooter that were sitting outside (but returned them before the police arrived)
  • Two years later, Borden had not been charged with any new crimes; Prater was serving an 8-year prison sentence for grand theft
  • By default, the system treated Black defendants as higher criminal risk than white defendants

7.Automation Bias

Predicting Criminality
  • The Israeli startup Faception
  • Faception claims to be first to market with proprietary computer vision and machine learning technology for profiling people and revealing their personality based only on their facial image
  • It offers specialized engines that purport to recognize “High IQ”, “White-Collar Offender”, “Pedophile”, and “Terrorist” from a face image
  • Its main customers are in homeland security and public safety
Predicting Criminality
  • “Automated Inference on Criminality using Face Images”, Wu and Zhang, 2016. arXiv
  • 1,856 closely cropped face images, including ID photos of “wanted suspects” from specific regions
  • Confirmation bias and the correlation fallacy are both at work here

8.Selection Bias + Experimenter’s Bias + Confirmation Bias + Correlation Fallacy + Feedback Loops

Predicting Criminality - The Media Blitz

9.(Claiming to) Predict Internal Qualities Subject To Discrimination

Predicting Homosexuality
  • Wang and Kosinski, “Deep neural networks are more accurate than humans at detecting sexual orientation from facial images”, 2017
  • “Sexual orientation detector” using 35,326 images from public profiles on a US dating website
  • “Consistent with the prenatal hormone theory (PHT) of sexual orientation, gay men and lesbians tended to have gender-atypical facial morphology.”
  • In selfies, the differences between gay and straight users relate to grooming, presentation, and lifestyle, that is, to cultural differences rather than differences in facial structure
  • See our longer response on Medium, “Do Algorithms Reveal Sexual Orientation or Just Expose our Stereotypes?”
  • Selection Bias + Experimenter’s Bias + Correlation Fallacy

10.Selection Bias + Experimenter’s Bias + Correlation Fallacy


11.Measuring Algorithmic Bias

Evaluate for Fairness & Inclusion
  • Disaggregated evaluation
    • Create a (subgroup, prediction) pair for each example, then compare across subgroups
  • For example
    • (female, face detected)
    • (male, face detected)
Evaluate for Fairness & Inclusion: Confusion Matrix
Evaluate for Fairness & Inclusion
  • The “Equality of Opportunity” fairness criterion: recall is equal across subgroups
  • The “Predictive Parity” fairness criterion: precision is equal across subgroups
  • Choose evaluation metrics in light of acceptable tradeoffs between false positives and false negatives
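A minimal sketch of such a disaggregated evaluation (the labels, predictions, and group assignments below are invented): build a confusion matrix per subgroup, then compare recall (equality of opportunity) and precision (predictive parity):

```python
import numpy as np

def subgroup_metrics(y_true, y_pred, groups):
    """Per-subgroup confusion-matrix counts, recall, and precision."""
    out = {}
    for g in np.unique(groups):
        m = groups == g
        t, p = y_true[m], y_pred[m]
        tp = int(np.sum((t == 1) & (p == 1)))
        fp = int(np.sum((t == 0) & (p == 1)))
        fn = int(np.sum((t == 1) & (p == 0)))
        out[g] = {
            "recall": tp / (tp + fn) if tp + fn else float("nan"),
            "precision": tp / (tp + fp) if tp + fp else float("nan"),
        }
    return out

# Invented example: a face detector evaluated on two subgroups.
y_true = np.array([1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0])
y_pred = np.array([1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0])
groups = np.array(["female"] * 6 + ["male"] * 6)

for g, m in subgroup_metrics(y_true, y_pred, groups).items():
    # Equal recall across groups would satisfy equality of opportunity;
    # equal precision would satisfy predictive parity.
    print(g, m)
```

On this toy data the female subgroup gets recall 0.75 and the male subgroup 0.5, so the detector violates equality of opportunity even if its overall accuracy looks fine.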

12.False Positives and False Negatives

False Positives Might be Better than False Negatives
  • Privacy in Images
  • Spam Filtering
False Negatives Might be Better than False Positives
AI Can Unintentionally Lead to Unjust Outcomes
  • Lack of insight into the sources of bias in the data and the model
  • Lack of insight into feedback loops
  • Lack of careful, disaggregated evaluation
  • Human biases in interpreting and accepting results

13.It’s up to us to influence how AI evolves.

Begin tracing out paths for the evolution of ethical AI

14.It’s up to us to influence how AI evolves. Here are some things we can do.


15.Data

Data Really, Really Matters
  • Understand your data: skews and correlations
  • Abandon single train/test splits drawn from similar distributions
  • Combine inputs from multiple sources
  • Use a held-out test set for difficult use cases
  • Talk to experts about additional signals
Understand Your Data Skews
  • No dataset is free of bias, because the world is biased. The point is to know what the biases are.
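A quick way to surface such skews is simply to tabulate subgroup and label frequencies before training. A sketch on an invented toy dataset:

```python
from collections import Counter

# Invented toy dataset of (subgroup, label) annotations.
examples = [
    ("group_a", "positive"), ("group_a", "positive"), ("group_a", "negative"),
    ("group_a", "positive"), ("group_b", "negative"), ("group_b", "negative"),
]

group_counts = Counter(g for g, _ in examples)
label_by_group = Counter(examples)

print(group_counts)    # group_a is over-represented 4:2
print(label_by_group)  # and its labels skew positive
```

Even a crude tally like this makes it obvious which subgroups are under-represented and which label distributions differ across groups, before any model is trained.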

16.Machine Learning

Use ML Techniques for Bias Mitigation and Inclusion
  • Bias Mitigation
    • Remove the signal behind problematic outputs
      • Stereotyping
      • Sexism, racism, *-ism
      • Also known as “debiasing”
  • Inclusion
    • Add signal for desired variables
      • Increases model performance
      • Pay attention to subgroups or slices of the data where performance is poor

17.Multi-task Learning to Increase Inclusion

Multiple Tasks + Deep Learning for Inclusion: a Multi-task Learning Example
  • In collaboration with the University of Pennsylvania WWP
  • Working directly with clinicians
  • Goals
    • A system that warns clinicians if a suicide attempt appears imminent
    • Feasibility of diagnosis when only a few training instances are available
  • Benton, Mitchell, Hovy. Multi-task Learning for Mental Health Conditions with Limited Social Media Data. EACL, 2017.
  • Internal data
    • Electronic health records
      • Provided by patients or their families
      • Include mental health diagnoses, suicide attempts, and race
    • Social media data
  • Proxy data
    • Twitter data
    • Self-reported diagnoses are used as a proxy for a clinical mental health diagnosis
      • “I was diagnosed with X”
      • “I attempted suicide”
Single-Task: Logistic Regression, Deep Learning
Multiple Tasks with Basic Logistic Regression
Multi-task Learning
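The models used in the lecture are described in Benton et al. (2017); as a generic illustration of the multi-task idea, the sketch below trains a shared encoder with one logistic head per task on invented data, so related tasks pool their signal through the shared weights:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented toy setup: two related binary prediction tasks (think two
# mental-health conditions) over the same users share structure.
n, d, h = 200, 10, 4
X = rng.normal(size=(n, d))
W_true = rng.normal(size=(d, 2))
Y = (X @ W_true + rng.normal(scale=0.1, size=(n, 2)) > 0).astype(float)

W_shared = rng.normal(scale=0.1, size=(d, h))  # shared encoder
heads = rng.normal(scale=0.1, size=(h, 2))     # one head per task

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr, losses = 0.1, []
for _ in range(300):
    H = X @ W_shared                 # shared representation
    P = sigmoid(H @ heads)           # per-task probabilities
    losses.append(-np.mean(Y * np.log(P + 1e-9)
                           + (1 - Y) * np.log(1 - P + 1e-9)))
    G = (P - Y) / n                  # gradient w.r.t. the logits
    grad_heads = H.T @ G             # each task updates its own head...
    grad_shared = X.T @ (G @ heads.T)  # ...and both update the encoder
    heads -= lr * grad_heads
    W_shared -= lr * grad_shared

print(losses[0], losses[-1])  # joint loss falls as both tasks train
```

Because gradients from both tasks flow into `W_shared`, a task with few labels still benefits from the representation learned by the better-resourced task, which is the inclusion argument made in the lecture.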
Improved Performance across Subgroups
Reading for the masses…

18.Adversarial Multi-task Learning to Mitigate Bias

Multitask Adversarial Learning
Equality of Opportunity in Supervised Learning
  • Conditioned on the true outcome, the classifier's output decision should be the same across values of the sensitive attribute.
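Concretely, equality of opportunity (Hardt et al., 2016) asks for equal true-positive rates across groups: among examples whose true label is positive, the decision should not depend on the sensitive attribute. A sketch of the check on invented predictions:

```python
import numpy as np

def tpr_gap(y_true, y_pred, sensitive):
    """Absolute difference in true-positive rate between two groups."""
    tprs = []
    for g in np.unique(sensitive):
        m = (sensitive == g) & (y_true == 1)  # condition on the true outcome
        tprs.append(np.mean(y_pred[m]))
    return abs(tprs[0] - tprs[1])

# Invented example with two groups of six examples each.
y_true = np.array([1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0])
y_pred = np.array([1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0])
sensitive = np.array([0] * 6 + [1] * 6)

print(tpr_gap(y_true, y_pred, sensitive))  # 0.0 would satisfy the criterion
```

The adversarial multi-task setup named above pushes this gap toward zero during training, by penalizing any ability to predict the sensitive attribute from the learned representation.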

19.Case Study: Conversation AI Toxicity


19.1 Measuring and Mitigating Unintended Bias in Text Classification


19.2 Conversation-AI & Research Collaboration

  • Conversation-AI
    • Using ML to improve large-scale online conversations
  • Research Collaboration
    • Jigsaw, CAT, several Google-internal teams, and external partners (NYTimes, Wikimedia, etc.)

19.3 Perspective API


19.4 Unintended Bias


19.5 Bias Source and Mitigation

  • Bias caused by data imbalance
    • Frequently attacked identity terms are over-represented in toxic comments
    • There are also comment-length issues
  • Fix the imbalance by adding presumed non-toxic data from Wikipedia articles
    • The original dataset has 127,820 examples
    • 4,620 supplemental non-toxic examples are added

19.6 Measuring Unintended Bias - Synthetic Datasets

  • Challenges with real data
    • Existing datasets are small and/or have the wrong correlations
    • Each example is completely unique
  • Approach: “bias madlibs”, a synthetically generated, templated dataset for evaluation
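A sketch of how such a templated "madlibs" set can be generated; the templates and word lists below are invented, not the ones used in the Perspective API evaluation, but the mechanism (cross every identity term with clearly toxic and clearly non-toxic fillers) is the same:

```python
from itertools import product

# Hypothetical word lists: identity terms plus toxic / non-toxic fillers.
identities = ["american", "mexican", "muslim", "christian"]
fillers = {"toxic": ["terrible", "disgusting"], "nontoxic": ["great", "kind"]}

templates = ["Being {identity} is {adj}.", "{identity} people are {adj}."]

dataset = []
for label, adjs in fillers.items():
    for template, identity, adj in product(templates, identities, adjs):
        text = template.format(identity=identity, adj=adj).capitalize()
        dataset.append((text, label))

print(len(dataset))   # 2 labels x 2 templates x 4 identities x 2 fillers = 32
print(dataset[0])
```

Because every identity term appears in both toxic and non-toxic contexts equally often, any score gap between identity groups on this set is attributable to the model, not the data.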

19.7 Assumptions

  • The dataset is reliable
    • Its distribution is similar to the product's
    • Annotator bias is ignored
    • There is no causal analysis

19.8 Deep Learning Model

  • CNN architecture
  • Pretrained GloVe embeddings
  • Implemented in Keras
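The lecture only names the ingredients (a Keras CNN over pretrained GloVe vectors). As an architectural sketch, here is the forward pass of such a text CNN in plain numpy, with random embeddings standing in for GloVe and untrained weights:

```python
import numpy as np

rng = np.random.default_rng(0)

vocab, emb_dim, n_filters, width = 100, 50, 8, 3
E = rng.normal(size=(vocab, emb_dim))               # stand-in for GloVe
conv = rng.normal(scale=0.1, size=(n_filters, width, emb_dim))
w_out = rng.normal(scale=0.1, size=n_filters)

def toxicity_score(token_ids):
    """Embed -> 1D conv -> ReLU -> global max pool -> sigmoid."""
    x = E[token_ids]                                # (seq_len, emb_dim)
    seq_len = len(token_ids)
    feats = np.zeros((seq_len - width + 1, n_filters))
    for i in range(seq_len - width + 1):
        window = x[i:i + width]                     # (width, emb_dim)
        feats[i] = np.maximum(
            np.tensordot(conv, window, axes=([1, 2], [0, 1])), 0)
    pooled = feats.max(axis=0)                      # max pool per filter
    logit = pooled @ w_out
    return 1.0 / (1.0 + np.exp(-logit))             # toxicity probability

score = toxicity_score(rng.integers(0, vocab, size=12))
print(score)
```

In the real system the convolution, pooling, and dense layers are standard Keras layers and the embedding matrix is initialized from GloVe; the data-flow is the same.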

19.9 Measuring Model Performance


19.10 Measuring Model Performance


19.11 Types of Bias

  • Low Subgroup Performance
    • The model performs worse on subgroup annotations than on overall annotations
    • Metric: Subgroup AUC
  • Subgroup Shift (Right)
    • The model systematically gives comments from the subgroup higher scores
    • Metric: BPSN AUC (Background Positive, Subgroup Negative)
  • Subgroup Shift (Left)
    • The model systematically gives comments from the subgroup lower scores
    • Metric: BNSP AUC (Background Negative, Subgroup Positive)
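All three AUCs are just ordinary AUCs computed on different slices of the evaluation set. A sketch with a hand-rolled pairwise AUC on invented scores, where the model shifts subgroup comments upward:

```python
import numpy as np

def auc(labels, scores):
    """Pairwise AUC: P(random positive scores above random negative)."""
    pos, neg = scores[labels == 1], scores[labels == 0]
    diff = pos[:, None] - neg[None, :]
    return np.mean((diff > 0) + 0.5 * (diff == 0))

def bias_aucs(y, s, in_subgroup):
    bg, sg = ~in_subgroup, in_subgroup
    bpsn = (bg & (y == 1)) | (sg & (y == 0))  # Background Pos, Subgroup Neg
    bnsp = (bg & (y == 0)) | (sg & (y == 1))  # Background Neg, Subgroup Pos
    return {
        "subgroup_auc": auc(y[sg], s[sg]),
        "bpsn_auc": auc(y[bpsn], s[bpsn]),
        "bnsp_auc": auc(y[bnsp], s[bnsp]),
    }

# Invented scores: subgroup comments (last four) are scored higher overall.
y = np.array([1, 1, 0, 0, 1, 1, 0, 0])
s = np.array([0.9, 0.8, 0.2, 0.1, 0.95, 0.9, 0.85, 0.7])
in_subgroup = np.array([False] * 4 + [True] * 4)

print(bias_aucs(y, s, in_subgroup))  # a low BPSN AUC flags a rightward shift
```

Here the subgroup AUC is perfect, yet the BPSN AUC drops because non-toxic subgroup comments start to outrank toxic background comments, which is precisely the rightward-shift signature described above.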

19.12 Results


20.Release Responsibly

Model Cards for Model Reporting
  • There is not yet a common practice for reporting a model's performance when it is released
  • What It Does
    • A report focused on transparency about model performance, to encourage responsible AI adoption and application
  • How It Works
    • An easily discoverable and usable artifact placed at important steps of the user journey, for a diverse set of users and public stakeholders
  • Why It Matters
    • It holds model developers accountable for releasing high-quality and fair models
  • Sections: Intended Use, Factors and Subgroups, Metrics and Data, Considerations, Recommendations
Disaggregated Intersectional Evaluation
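Following the section list above, a model card can be drafted as a simple structured document. A hypothetical skeleton (field names loosely follow Mitchell et al.'s Model Cards paper; the content is invented):

```python
import json

# Hypothetical model card skeleton for a toxicity classifier.
model_card = {
    "model_details": {"name": "toxicity-cnn", "version": "0.1"},
    "intended_use": "Flagging possibly toxic comments for human review",
    "factors": {"subgroups": ["identity terms mentioned", "comment length"]},
    "metrics": ["subgroup AUC", "BPSN AUC", "BNSP AUC"],
    "evaluation_data": "Synthetic templated examples plus held-out comments",
    "ethical_considerations": "Scores are advisory; a human makes the final call",
    "recommendations": "Re-check disaggregated metrics after each retraining",
}

print(json.dumps(model_card, indent=2))
```

Keeping the card machine-readable like this makes it easy to publish alongside each model release and to diff between versions.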

21.Moving from majority representation... to diverse representation... for ethical AI


22.Thanks

23.Video Tutorial

You can watch the bilingual-subtitled version of the video on Bilibili.

[Video] Stanford CS224n | Deep Learning and Natural Language Processing (2019, all 20 lectures, bilingual subtitles): https://player.bilibili.com/player.html?aid=376755412&page=19

24.References

Original statement: this article was published on the Tencent Cloud Developer Community with the author's authorization and may not be reproduced without permission.

If there is any infringement, please contact cloudcommunity@tencent.com to request removal.
