Speaker: Max Tegmark
Key words: future, artificial intelligence
Abstract: MIT physicist and AI researcher Max Tegmark discusses AI's power, direction and destination, introduces the Asilomar AI Principles, and explains the steps we should take today to ensure that AI ultimately brings out the best for humanity -- rather than a bad ending.
@TED: "Many artificial intelligence researchers expect AI to outsmart humans at all tasks and jobs within decades, enabling a future where we're restricted only by the laws of physics, not the limits of our intelligence. MIT physicist and AI researcher Max Tegmark separates the real opportunities and threats from the myths, describing the concrete steps we should take today to ensure that AI ends up being the best -- rather than worst -- thing to ever happen to humanity."
Rating: ⭐️⭐️⭐️⭐️⭐️

Study Notes

Life 1.0 to Life 3.0
I think of the earliest life as "Life 1.0" because it was really dumb, like bacteria, unable to learn anything during its lifetime.
I think of us humans as "Life 2.0" because we can learn, which we, in nerdy geek speak, might think of as installing new software into our brains, like languages and job skills.
"Life 3.0," which can design not only its software but also its hardware of course doesn't exist yet. But perhaps our technology has already made us "Life 2.1," with our artificial knees, pacemakers and cochlear implants.
So let's talk about all three for artificial intelligence: the power, the steering and the destination.

Power

The boundary of AI's power: how far will it go? Will it eventually rise to flood everything, matching human intelligence at all tasks? This is the definition of artificial general intelligence -- AGI -- which has been the holy grail of AI research since its inception.

Ask yourself two questions:
Are we going to get AGI any time soon?
Then what? What do we want the role of humans to be if machines can do everything better and cheaper than us?
What should we do:
One option is to be complacent. We can say, "Oh, let's just build machines that can do everything we can do and not worry about the consequences."
Another option is to envision a truly inspiring high-tech future and try to steer towards it.
We're making AI more powerful, but how can we steer towards a future where AI helps humanity flourish rather than flounder?
To help with this, I cofounded the Future of Life Institute:
It's a small nonprofit promoting beneficial technology use, and our goal is simply for the future of life to exist and to be as inspiring as possible.
How to deal with more powerful technology like nuclear weapons and AGI:
Asilomar AI Principles

So in this spirit, we've organized conferences, bringing together leading AI researchers and other thinkers to discuss how to grow this wisdom we need to keep AI beneficial. Our last conference was in Asilomar, California last year and produced this list of 23 principles, which have since been signed by over 1,000 AI researchers and key industry leaders, and I want to tell you about three of these principles.
1. We should avoid an arms race in lethal autonomous weapons.
2. We should mitigate AI-fueled income inequality. I think that if we can grow the economic pie dramatically with AI and we still can't figure out how to divide this pie so that everyone is better off, then shame on us.
3. We should invest much more in AI safety research, because as we put AI in charge of even more decisions and infrastructure, we need to figure out how to transform today's buggy and hackable computers into robust AI systems that we can really trust. Otherwise, all this awesome new technology can malfunction and harm us, or get hacked and be turned against us. And this AI safety work has to include work on AI value alignment, because the real threat from AGI isn't malice, like in silly Hollywood movies, but competence -- AGI accomplishing goals that just aren't aligned with ours.
But AGI is by definition smarter than us, so to make sure that we don't put ourselves in the position of those rhinos if we create AGI, we need to figure out how to make machines understand our goals, adopt our goals and retain our goals. And whose goals should these be, anyway? Which goals should they be?

Destination

Let's take a closer look at possible futures that we might choose to steer toward, alright?
So one option that some of my AI colleagues like is to build superintelligence and keep it under human control, like an enslaved god, disconnected from the internet and used to create unimaginable technology and wealth for whoever controls it.
I also have colleagues who are fine with AI taking over and even causing human extinction, as long as we feel the AIs are our worthy descendants, like our children.
How about having AGI that's not enslaved but treats us well because its values are aligned with ours? This is the gist of what Eliezer Yudkowsky has called "friendly AI."
We can be ambitious -- thinking hard about how to steer our technology and where we want to go with it to create the age of amazement. We're all here to celebrate the age of amazement, and I feel that its essence should lie in becoming not overpowered but empowered by our technology.
Link: Max Tegmark: How to get empowered, not overpowered, by AI