
Developmental Psychology: what exactly is a "developmental psychologist"?

DAY 09

This reading passage is excerpted from the Science magazine website (sciencemag.org), May 24, 2018.

The way an artificial intelligence learns looks a lot like the way a baby learns.

A recent article in Science, "How researchers are teaching AI to learn like a child", explores exactly this topic.

After you put in today's bit of effort,

you will have picked up the following MBA English exam points:

6 key phrases/words;

the use of parenthetical insertions;

the use of object clauses and compound sentences.

In addition, the passage hides a little Easter egg:

“developmental psychology”

in English, what kind of expert is a "developmental psychologist"?

STEP 1

Vocabulary Prep

disagree (v.): to hold a different opinion

developmental psychologist: a psychologist who studies how and why people change over the course of their lives

sb. argues that ...: somebody maintains that ...

ability (n.): capability, skill

within days: within a few days (i.e., within a certain period of time)

machine learning algorithms

STEP 2

Key Example Sentence

LeCun, disagreeing with many developmental psychologists, argued that babies might be learning such abilities within days, and if so, machine learning algorithms could, too.

STEP 3

Tackling the Difficult Points

First, this sentence uses a parenthetical structure: "disagreeing with many developmental psychologists" is inserted between the subject LeCun and the verb argued. Parenthetical insertion is a common grammatical device and is not hard for Chinese readers to follow. What does need attention is this: when translating, the parenthetical has to be moved either to the front or to the back of the clause.

Once the parenthetical is identified, the backbone of the sentence becomes clear: LeCun argued that ... Everything after "argued that" is the object of the whole sentence, that is, an object clause.

Now look at the structure of the object clause after argue: clause 1 + and if so + clause 2, meaning "if clause 1 holds, then clause 2 follows." The stretch "babies might be learning such abilities within days, and if so, machine learning algorithms could, too" is itself a compound sentence, and within the full sentence it serves as the object clause.

English reading has one notorious difficulty: "translated correctly, yet still not understood." Take "developmental psychologist": if your vocabulary is solid, the literal translation comes easily, but what exactly is a developmental psychologist? Without the relevant background knowledge you will be left baffled. What to do? 播哥's advice: be bold, step beyond the literal wording, and guess from the context. For reference, here is Wikipedia's explanation of developmental psychology: "Developmental psychology is the scientific study of how and why human beings change over the course of their life." If you meet this situation in the exam, the advice is the same: step beyond the literal wording and guess from the context.

STEP 4

Full-Sentence Translation

Disagreeing with many developmental psychologists, LeCun argued that babies might learn such abilities within a few days, and if so, machine learning algorithms could do the same.

STEP 5

Patterns for Your Writing

Li Ming, disagreeing with many other experts, argued that ..., and if so, ...

That is: unlike many other experts, Li Ming argues that ..., and if that is the case, then ...

STEP 6

Read the Original

How researchers are teaching AI to learn like a child


In the past few years, AI has shown that it can translate speech, diagnose cancer, and beat humans at poker. But for every win, there is a blunder. Image recognition algorithms can now distinguish dog breeds better than you can, yet they sometimes mistake a chihuahua for a blueberry muffin. AIs can play classic Atari video games such as Space Invaders with superhuman skill, but when you remove all the aliens but one, the AI falters inexplicably.

Machine learning—one type of AI—is responsible for those successes and failures. Broadly, AI has moved from software that relies on many programmed rules (also known as Good Old-Fashioned AI, or GOFAI) to systems that learn through trial and error. Machine learning has taken off thanks to powerful computers, big data, and advances in algorithms called neural networks. Those networks are collections of simple computing elements, loosely modeled on neurons in the brain, that create stronger or weaker links as they ingest training data.
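The "stronger or weaker links" the passage mentions are just numeric weights that get nudged every time a training example passes through the network. As a minimal, hypothetical sketch (plain numpy, my own illustration rather than anything from the article), here is a two-layer network whose links are strengthened or weakened by gradient descent while it ingests a tiny XOR dataset:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: learn XOR, a mapping no single linear rule can capture.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# The "links" between simple computing elements: weights and biases.
W1, b1 = rng.normal(scale=0.5, size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(scale=0.5, size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(5000):
    # Forward pass through two layers of simple units.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: the error decides which links grow stronger or weaker.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out, 2))  # typically close to [[0], [1], [1], [0]] after training
```

The point of the sketch is only the mechanism: no rule about XOR is ever written down; the weight matrices simply drift toward values that reproduce the training data.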

With its Alpha programs, Google's DeepMind has pushed deep learning to its apotheosis. Each time rules were subtracted, the software seemed to improve. In 2016, AlphaGo beat a human champion at Go, a classic Chinese strategy game. The next year, AlphaGo Zero easily beat AlphaGo with far fewer guidelines. Months later, an even simpler system called AlphaZero beat AlphaGo Zero—and also mastered chess. In 1997, a classic, rule-based AI, IBM's Deep Blue, had defeated chess champion Garry Kasparov. But it turns out that true chess virtuosity lies in knowing the exceptions to the exceptions to the exceptions—information best gleaned through experience. AlphaZero, which learns by playing itself over and over, can beat Deep Blue, today's best chess programs, and every human champion.

Yet systems such as Alpha clearly are not extracting the lessons that lead to common sense. To play Go on a 21-by-21 board instead of the standard 19-by-19 board, the AI would have to learn the game anew. In the late 1990s, Marcus trained a network to take an input number and spit it back out—about the simplest task imaginable. But he trained it only on even numbers. When tested with odd numbers, the network floundered. It couldn't apply learning from one domain to another, the way Chloe had when she began to build her Lego sideways.
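Marcus's even-number experiment is easy to reproduce in spirit. The sketch below is a hypothetical reconstruction, not his original setup: numbers are encoded in binary, the network is asked to copy its input, and training only ever shows it even numbers, so the lowest bit is always 0. Function and parameter names here are my own.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def to_bits(n, width=8):
    """Encode an integer as a fixed-width binary vector (most significant bit first)."""
    return np.array([(n >> i) & 1 for i in reversed(range(width))], dtype=float)

# Identity task: input a binary number, output the same binary number,
# but the training set contains only even numbers (lowest bit always 0).
evens = np.arange(0, 256, 2)
X_train = np.stack([to_bits(n) for n in evens])
y_train = X_train.copy()

net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
net.fit(X_train, y_train)

# Test on odd numbers the network has never seen.
odds = [3, 7, 51, 255]
X_test = np.stack([to_bits(n) for n in odds])
pred = np.round(net.predict(X_test))
print(pred[:, -1])  # the lowest bit tends to come back as 0: odd in, "even" out
```

Because the training data never requires the last output bit to be 1, the network has no reason to learn to produce it, which is the sense in which it "couldn't apply learning from one domain to another."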

The answer is not to go back to rule-based GOFAIs. A child does not recognize a dog with explicit rules such as "if number of legs=4, and tail=true, and size>cat." Recognition is more nuanced—a chihuahua with three legs won't slip past a 3-year-old. Humans are not blank slates, nor are we hardwired. Instead, the evidence suggests we have predispositions that help us learn and reason about the world. Nature doesn't give us a library of skills, just the scaffolding to build one.
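The quoted rule can be written out literally, which makes its brittleness easy to see. This is a hypothetical snippet of my own (names such as Animal and is_dog_rule are not from the article):

```python
from dataclasses import dataclass

@dataclass
class Animal:
    legs: int
    has_tail: bool
    size_cm: float

CAT_SIZE_CM = 45.0  # an arbitrary reference size standing in for "size > cat"

def is_dog_rule(a: Animal) -> bool:
    """GOFAI-style classifier: the explicit rule quoted in the passage."""
    return a.legs == 4 and a.has_tail and a.size_cm > CAT_SIZE_CM

# A three-legged chihuahua defeats the hand-coded rule, but not a 3-year-old.
print(is_dog_rule(Animal(legs=3, has_tail=True, size_cm=20.0)))  # False
```

Nothing in a learned recognizer hard-codes "legs == 4", so it can tolerate a missing leg; that tolerance is what the passage means by "more nuanced."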

Harvard University psychologist Elizabeth Spelke has argued that we have at least four "core knowledge" systems giving us a head start on understanding objects, actions, numbers, and space. We are intuitive physicists, for example, quick to understand objects and their interactions. According to one study, infants just 3 days old interpret the two ends of a partially hidden rod as parts of one entity—a sign that our brains might be predisposed to perceive cohesive objects. We're also intuitive psychologists. In a 2017 Science study, Shari Liu, a graduate student in Spelke's lab, found that 10-month-old infants could infer that when an animated character climbs a bigger hill to reach one shape than to reach another, the character must prefer the former. Marcus has shown that 7-month-old infants can learn rules; they show surprise when three-word sentences ("wo fe fe") break the grammatical pattern of previously heard sentences ("ga ti ga"). According to later research, day-old newborns showed similar behavior.

Marcus has composed a minimum list of 10 human instincts that he believes should be baked into AIs, including notions of causality, cost-benefit analysis, and types versus instances (dog versus my dog). Last October at NYU, he argued for his list in a debate on whether AI needs "more innate machinery," facing Yann LeCun, an NYU computer scientist and Facebook's chief AI scientist. To demonstrate his case for instinct, Marcus showed a slide of baby ibexes descending a cliff. "They don't get to do million-trial learning," he said. "If they make a mistake, it's a problem."

LeCun, disagreeing with many developmental psychologists, argued that babies might be learning such abilities within days, and if so, machine learning algorithms could, too. His faith comes from experience. He works on image recognition, and in the 1980s he began arguing that hand-coded algorithms to identify features in pictures would become unnecessary. Thirty years later, he was proved right. Critics asked him: "Why learn it when you can build it?" His reply: Building is hard, and if you don't fully understand how something works, the rules you devise are likely to be wrong.

-END-

Phase 2 of the 播得 public-welfare study program,

"Read foreign periodicals, learn the essentials," is under way.

Every evening at 21:30,

a current article from a foreign publication is selected,

and one typical example sentence is read closely each day.

播哥 will see you through MBA English reading.

Feel free to check in at the end of the post,

and to leave comments or questions.
