
Scientists Press AI Researchers for Transparency

By 柴艺
Published 2021-01-15 12:31:20
This article is included in the column: News Translations



An international group of scientists is demanding that scientific journals require more transparency from researchers in computer-related fields when accepting their reports for publication.

They also want computational researchers to include information about their code, models and computational environments in published reports.

Their call, published in Nature Magazine in October, was in response to the results of research conducted by Google Health that was published in Nature last January.

The research claimed an artificial intelligence system was faster and more accurate at screening for breast cancer than human radiologists.

Google funded the study, which was led by Google Scholar Scott McKinney and other Google employees.

Criticisms of the Google Study

"In their study, McKinney et al. showed the high potential of artificial intelligence for breast cancer screening," the international group of scientists, led by Benjamin Haibe-Kains, of the University of Toronto, stated.

"However, the lack of detailed methods and computer code undermines its scientific value. This shortcoming limits the evidence required for others to prospectively validate and clinically implement such technologies."

Scientific progress depends on the ability of independent researchers to scrutinize the results of a research study, reproduce its main results using its materials, and build upon them in future studies, the scientists said, citing Nature Magazine's policies.

McKinney and his co-authors stated that it was not feasible to release the code used for training the models because it has a large number of dependencies on internal tooling, infrastructure and hardware, Haibe-Kains' group noted.

However, many frameworks and platforms are available to make AI research more transparent and reproducible, the group said. These include code repositories such as Bitbucket and GitHub; package managers such as Conda; and container and virtualization systems such as Code Ocean and Gigantum.
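As a minimal illustration of the kind of environment disclosure the group is calling for, a short script can capture interpreter, platform, and package-version details for publication alongside results. This sketch is our own illustration, not a format mandated by Nature or proposed by Haibe-Kains' group; the specific fields reported are an assumption:

```python
import json
import platform
import sys
from importlib import metadata


def environment_report(packages):
    """Collect interpreter, OS, and package-version info for a reproducibility appendix."""
    report = {
        "python": sys.version.split()[0],   # e.g. "3.11.4"
        "platform": platform.platform(),    # OS and architecture string
        "packages": {},
    }
    for name in packages:
        try:
            report["packages"][name] = metadata.version(name)
        except metadata.PackageNotFoundError:
            report["packages"][name] = "not installed"
    return report


if __name__ == "__main__":
    # Print a JSON snapshot of the environment used to produce the results.
    print(json.dumps(environment_report(["numpy", "pip"]), indent=2))
```

Publishing such a snapshot with each paper costs a few lines of code, yet it addresses exactly the "computational environments" gap the scientists describe.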

AI shows great promise for use in the field of medicine, but "Unfortunately, the biomedical literature is littered with studies that have failed the test of reproducibility, and many of these can be tied to methodologies and experimental practices that could not be investigated due to failure to fully disclose software and data," Haibe-Kains' group said.

Google did not respond to our request to provide comment for this story.

Patents Pending?

There might be good business reasons for companies not to disclose full details about their AI research studies.

"This research is also considered confidential in the development of technology," Jim McGregor, a principal analyst at Tirias Research, told TechNewsWorld. "Should technology companies be forced to give away technology they've spent billions of dollars in developing?"

What researchers are doing with AI "is phenomenal and is leading to technological breakthroughs, some of which are going to be covered by patent protection," McGregor said. "So not all of the information is going to be available for testing, but just because you can't test it doesn't mean it isn't correct or true."

Haibe-Kains' group recommended that, if data cannot be shared with the entire scientific community because of licensing or other insurmountable issues, "at a minimum a mechanism should be set so that some highly-trained, independent investigators can access the data and verify the analyses."

Driven by Hype

Problems with verifiability and reproducibility plague AI research results as a whole. Only 15 percent of AI research papers publish their code, according to the State of AI Report 2020, produced by AI investors Nathan Benaich and Ian Hogarth.

They particularly single out Google's AI subsidiary and laboratory DeepMind and AI research and development company OpenAI as culprits.

"Many of the problems in scientific research are driven by the rising hype about it, [which] is needed to generate funding," Dr. Jeffrey Funk, a technology economics and business consultant based in Singapore, told TechNewsWorld.

"This hype, and its exaggerated claims, fuel a need for results that match those claims, and thus a tolerance for research that is not reproducible."

Scientists and funding agencies will have to "dial back on the hype" to achieve more reproducibility, Funk observed. However, that "may reduce the amount of funding for AI and other technologies, funding that has exploded because lawmakers have been convinced that AI will generate $15 trillion in economic gains by 2030."

This article is a translation of a foreign-language original. In case of infringement, please contact cloudcommunity@tencent.com for removal.
