How to get empowered, not overpowered, by AI – Max Tegmark




About the talk

Many AI researchers expect that within decades, AI will outperform us at most jobs, leading to a future in which we are limited only by the laws of physics, not by our intelligence. MIT physicist and AI researcher Max Tegmark separates the real opportunities and threats from the misconceptions, describing the steps we should take today to ensure that AI ends up being the best -- rather than the worst -- thing to happen to humanity.

00:00
After 13.8 billion years of cosmic history, our universe has woken up and become aware of itself. From a small blue planet, tiny, conscious parts of our universe have begun gazing out into the cosmos with telescopes, discovering something humbling. We've discovered that our universe is vastly grander than our ancestors imagined and that life seems to be an almost imperceptibly small perturbation on an otherwise dead universe. But we've also discovered something inspiring, which is that the technology we're developing has the potential to help life flourish like never before, not just for centuries but for billions of years, and not just on earth but throughout much of this amazing cosmos.
在 138 亿年的宇宙历史之后, 我们的宇宙终于觉醒了, 并开始有了自我意识。 从一颗蓝色的小星球, 宇宙中那些有了微小意识的部分, 开始用望远镜窥视整个宇宙, 从而有了令人谦卑的发现: 宇宙比我们祖先所想象的要大得多, 而生命似乎只是一个几乎难以察觉的微小扰动, 存在于一个原本死寂的宇宙之中。 不过我们也发现了一些振奋人心的事, 那就是我们所开发的技术, 有着前所未有的潜能去促使生命变得更加繁盛, 不仅仅是几个世纪, 而是长达数十亿年; 也不仅仅是在地球上, 甚至是在整个浩瀚的宇宙之中。

00:47
I think of the earliest life as "Life 1.0" because it was really dumb, like bacteria, unable to learn anything during its lifetime. I think of us humans as "Life 2.0" because we can learn, which we, in nerdy geek speak, might think of as installing new software into our brains, like languages and job skills. "Life 3.0," which can design not only its software but also its hardware, of course doesn't exist yet. But perhaps our technology has already made us "Life 2.1," with our artificial knees, pacemakers and cochlear implants.
我把最早的生命称为 “生命 1.0”, 因为它实在愚钝, 就像细菌, 一生中学不会任何东西。 我把我们人类称为 “生命 2.0”, 因为我们能够学习, 用技术宅的话来说, 就像是往我们的大脑里安装新的软件, 比如语言和工作技能。 而 “生命 3.0” 不仅能设计自己的软件, 还能设计自己的硬件, 当然,它目前还不存在。 但也许我们的科技已经让我们成为了 “生命 2.1”, 毕竟我们有了人工膝盖、 心脏起搏器和人工耳蜗。

01:21
So let's take a closer look at our relationship with technology, OK? As an example, the Apollo 11 moon mission was both successful and inspiring, showing that when we humans use technology wisely, we can accomplish things that our ancestors could only dream of. But there's an even more inspiring journey propelled by something more powerful than rocket engines, where the passengers aren't just three astronauts but all of humanity. Let's talk about our collective journey into the future with artificial intelligence.
我们一起来聊聊人类和科技的关系吧! 举个例子, 阿波罗 11 号登月任务既成功又鼓舞人心, 它展示出当我们人类明智地使用科技时, 就能实现祖先们只能梦想的事情。 但还有一段更加鼓舞人心的旅程, 由比火箭引擎更强大的东西推动, 乘客也不只是三名宇航员, 而是我们全人类。 让我们来聊聊与人工智能一起走向未来的这段集体旅程。

01:56
My friend Jaan Tallinn likes to point out that just as with rocketry, it's not enough to make our technology powerful. We also have to figure out, if we're going to be really ambitious, how to steer it and where we want to go with it. So let's talk about all three for artificial intelligence: the power, the steering and the destination.
我的朋友扬·塔林(Jaan Tallinn)常常指出, 这就像火箭技术一样, 仅仅让我们的科技拥有强大的力量是不够的。 如果我们真有雄心壮志, 还得想清楚该如何操控它, 以及想要它驶向何方。 那么对于人工智能, 我们就来谈谈这三点: 力量、操控和目的地。

02:19
Let's start with the power. I define intelligence very inclusively -- simply as our ability to accomplish complex goals, because I want to include both biological and artificial intelligence. And I want to avoid the silly carbon-chauvinism idea that you can only be smart if you're made of meat. It's really amazing how the power of AI has grown recently. Just think about it. Not long ago, robots couldn't walk. Now, they can do backflips. Not long ago, we didn't have self-driving cars. Now, we have self-flying rockets. Not long ago, AI couldn't do face recognition. Now, AI can generate fake faces and simulate your face saying stuff that you never said. Not long ago, AI couldn't beat us at the game of Go. Then, Google DeepMind's AlphaZero AI took 3,000 years of human Go games and Go wisdom, ignored it all and became the world's best player by just playing against itself. And the most impressive feat here wasn't that it crushed human gamers, but that it crushed human AI researchers who had spent decades handcrafting game-playing software. And AlphaZero crushed human AI researchers not just in Go but even at chess, which we have been working on since 1950.
我们先来说力量。 我对智能的定义非常宽泛—— 就是完成复杂目标的能力, 因为我想把生物智能和人工智能都包含进去。 我还想避免愚蠢的碳沙文主义观点, 即认为只有血肉之躯才可能拥有智能。 人工智能的力量在近期的发展十分惊人。 试想一下。 不久以前, 机器人还不能走路。 现在,它们居然会后空翻了。 不久以前, 我们还没有自动驾驶汽车。 现在,我们都有自动飞行的火箭了。 不久以前, 人工智能还不能进行人脸识别。 现在,人工智能已经能生成以假乱真的人脸, 并模拟你的面孔, 说出你从未说过的话。 不久以前, 人工智能还不能在围棋上战胜人类。 随后,谷歌 DeepMind 的 AlphaZero 将人类三千年的围棋对局与围棋智慧尽数抛开, 仅凭与自己对弈, 就成了世界上最强的棋手。 这里最让人印象深刻的, 不是它击垮了人类棋手, 而是它击垮了那些花费数十年手工打造下棋软件的人类人工智能研究者。 而且 AlphaZero 不仅在围棋上, 甚至在国际象棋上也碾压了人类的人工智能研究者, 而人类从 1950 年起就在研究国际象棋程序了。

03:50
So all this amazing recent progress in AI really begs the question: How far will it go? I like to think about this question in terms of this abstract landscape of tasks, where the elevation represents how hard it is for AI to do each task at human level, and the sea level represents what AI can do today. The sea level is rising as AI improves, so there's a kind of global warming going on here in the task landscape. And the obvious takeaway is to avoid careers at the waterfront --
所以近来人工智能这些惊人的进步, 让大家不禁想问: 它到底能走多远? 我在思考这个问题时, 喜欢借助一幅抽象的任务地形图: 图中的海拔高度表示 人工智能要把每一项任务做到人类水平的难度, 而海平面表示现今人工智能所能达到的水平。 随着人工智能的进步, 海平面会不断上升, 所以在这幅任务地形图上, 正在发生着类似全球变暖的现象。 显而易见的结论是, 我们要避免选择那些位于海滨的职业——

04:22
which will soon be automated and disrupted. But there's a much bigger question as well. How high will the water end up rising? Will it eventually rise to flood everything, matching human intelligence at all tasks? This is the definition of artificial general intelligence -- AGI, which has been the holy grail of AI research since its inception. By this definition, people who say, "Ah, there will always be jobs that humans can do better than machines," are simply saying that we'll never get AGI. Sure, we might still choose to have some human jobs or to give humans income and purpose with our jobs, but AGI will in any case transform life as we know it, with humans no longer being the most intelligent. Now, if the water level does reach AGI, then further AI progress will be driven mainly not by humans but by AI, which means that there's a possibility that further AI progress could be way faster than the typical human research and development timescale of years, raising the controversial possibility of an intelligence explosion where recursively self-improving AI rapidly leaves human intelligence far behind, creating what's known as superintelligence.
这些工作很快就会被自动化所颠覆。 然而,还有一个更大的问题: 水面最终会升到多高? 它是否终将淹没一切, 让人工智能在所有任务上都与人类智能比肩? 这正是通用人工智能 (Artificial General Intelligence, 缩写 AGI)的定义, 它从一开始就是人工智能研究的圣杯。 按照这个定义, 那些说 “总会有些工作, 人类做得比机器好” 的人, 其实就是在说我们永远不会实现 AGI。 当然,我们仍然可以选择保留一些人类的工作, 或者通过工作带给人类收入和生活目标, 但无论如何, AGI 都将彻底改变我们所熟悉的生活, 人类将不再是最有智慧的存在。 如果水面真的升到了 AGI 的高度, 那么人工智能的进一步发展 将主要由人工智能而非人类来推动, 这就意味着, 人工智能的进一步发展有可能 远快于以年计的典型人类研发周期, 从而引出一个极具争议的可能性: 智能爆炸, 即能够递归自我改进的人工智能 很快就把人类智慧远远甩在身后, 创造出所谓的超级人工智能。
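
To make "recursive self-improvement" concrete, here is a minimal numerical sketch. It is illustrative only, not from the talk: the proportional-growth assumption (dC/dt = k * C) and both rate constants are invented. It contrasts capability added at a fixed human R&D rate with capability whose growth rate scales with its current level:

```python
# Toy model of an "intelligence explosion" (illustrative numbers only).
# Assumption: a self-improving AI's capability grows in proportion to its
# current capability (dC/dt = k * C), while human-driven R&D adds capability
# at a roughly constant rate. Both constants below are invented.

HUMAN_RD_RATE = 1.0    # capability added per year by human R&D (made up)
SELF_IMPROVE_K = 1.0   # per-year self-improvement constant (made up)

def human_driven(years: int, start: float = 1.0) -> float:
    """Linear progress: humans add a fixed amount of capability each year."""
    return start + HUMAN_RD_RATE * years

def self_improving(years: int, start: float = 1.0, steps_per_year: int = 100) -> float:
    """Exponential progress: each small improvement scales with current capability."""
    c = start
    dt = 1.0 / steps_per_year
    for _ in range(years * steps_per_year):
        c += SELF_IMPROVE_K * c * dt   # discretized dC/dt = k * C
    return c

for year in (1, 5, 10, 20):
    print(f"year {year:>2}: human-driven {human_driven(year):>6.1f}   "
          f"self-improving {self_improving(year):>12.1f}")
```

Under these invented constants the self-improving curve overtakes the linear one within a couple of years and is thousands of times higher by year ten; whether real AI development actually has this self-referential structure is precisely the controversial question.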

05:39
Alright, reality check: Are we going to get AGI any time soon? Some famous AI researchers, like Rodney Brooks, think it won't happen for hundreds of years. But others, like Google DeepMind founder Demis Hassabis, are more optimistic and are working to try to make it happen much sooner. And recent surveys have shown that most AI researchers actually share Demis's optimism, expecting that we will get AGI within decades, so within the lifetime of many of us, which begs the question -- and then what? What do we want the role of humans to be if machines can do everything better and cheaper than us?
好了,回归现实: 我们很快就会有 AGI 吗? 一些著名的人工智能研究者, 如罗德尼 · 布鲁克斯(Rodney Brooks), 认为几百年内都不会实现。 但是其他人,如谷歌 DeepMind 的创始人 德米斯 · 哈萨比斯(Demis Hassabis) 就比较乐观, 并且正在努力让它尽早实现。 近期的调查显示, 大部分的人工智能研究者 其实都和德米斯一样持乐观态度, 预期我们几十年内就会有 AGI, 也就是说,我们中许多人 在有生之年就能看到, 这就让人不禁想问—— 那么接下来呢? 如果什么事情机器都能做得比人好, 成本也更低, 那么人类又该扮演怎样的角色?

06:23
The way I see it, we face a choice. One option is to be complacent. We can say, "Oh, let's just build machines that can do everything we can do and not worry about the consequences. Come on, if we build technology that makes all humans obsolete, what could possibly go wrong?"
依我所见,我们面临一个选择。 其中一个选项是安于现状。 我们可以说:“哦,我们就造出 能做一切我们能做之事的机器, 不去担心后果。 拜托,如果我们造出了 让全人类都被淘汰的科技, 还能出什么问题呢?”

06:40
But I think that would be embarrassingly lame. I think we should be more ambitious -- in the spirit of TED. Let's envision a truly inspiring high-tech future and try to steer towards it. This brings us to the second part of our rocket metaphor: the steering. We're making AI more powerful, but how can we steer towards a future where AI helps humanity flourish rather than flounder? To help with this, I cofounded the Future of Life Institute. It's a small nonprofit promoting beneficial technology use, and our goal is simply for the future of life to exist and to be as inspiring as possible. You know, I love technology. Technology is why today is better than the Stone Age. And I'm optimistic that we can create a really inspiring high-tech future ... if -- and this is a big if -- if we win the wisdom race -- the race between the growing power of our technology and the growing wisdom with which we manage it. But this is going to require a change of strategy because our old strategy has been learning from mistakes. We invented fire, screwed up a bunch of times -- invented the fire extinguisher.
但我觉得那样真是差劲到悲哀。 我认为我们应该更有雄心—— 带着 TED 的精神。 让我们来想象一个真正鼓舞人心的高科技未来, 并试着朝它前进。 这就把我们带到了火箭比喻的第二部分:操控。 我们正在让人工智能的力量变得更强大, 但要如何才能驶向这样一个未来: 人工智能帮助人类繁荣昌盛, 而非举步维艰? 为了协助实现它, 我联合创办了 “未来生命研究所” (Future of Life Institute)。 它是个小型的非营利机构, 旨在促进对科技的有益使用, 我们的目标很简单, 就是希望生命的未来能够存在, 并且尽可能鼓舞人心。 你们知道的,我爱科技。 现今之所以比石器时代更好, 正是因为科技。 我很乐观地认为, 我们能创造出一个非常鼓舞人心的高科技未来…… 如果——这个 “如果” 很重要—— 如果我们能赢得这场智慧的赛跑—— 也就是不断增长的科技力量 与我们用以管理科技的不断增长的智慧之间的赛跑。 但这需要策略上的改变, 因为我们以往的策略一直是从错误中学习。 我们发明了火, 搞砸了很多次—— 于是发明了灭火器。

07:51
We invented the car, screwed up a bunch of times -- invented the traffic light, the seat belt and the airbag, but with more powerful technology like nuclear weapons and AGI, learning from mistakes is a lousy strategy, don't you think?
我们发明了汽车, 又搞砸了很多次—— 于是发明了红绿灯、安全带和安全气囊, 但对于像核武器和 AGI 这样更强大的科技, 从错误中学习就是个糟糕的策略了, 你们说是吧?

08:06
It's much better to be proactive rather than reactive; plan ahead and get things right the first time because that might be the only time we'll get. But it is funny because sometimes people tell me, "Max, shhh, don't talk like that. That's Luddite scaremongering." But it's not scaremongering. It's what we at MIT call safety engineering. Think about it: before NASA launched the Apollo 11 mission, they systematically thought through everything that could go wrong when you put people on top of explosive fuel tanks and launch them somewhere where no one could help them. And there was a lot that could go wrong. Was that scaremongering? No. That was precisely the safety engineering that ensured the success of the mission, and that is precisely the strategy I think we should take with AGI. Think through what can go wrong to make sure it goes right.
未雨绸缪远胜于亡羊补牢; 提早做计划,争取一次成功, 因为那可能是我们唯一的机会。 但有趣的是, 有时候有人对我说: “麦克斯,嘘——别那样说话。 那是勒德分子(注:持反机械化、 反自动化观点的人)在制造恐慌。” 但这并不是制造恐慌。 在麻省理工学院, 我们称之为安全工程。 想想看: 在美国航天局(NASA)发射阿波罗 11 号任务之前, 他们系统性地设想过所有可能出错的状况, 毕竟是要把人放在易爆的燃料箱上, 再把他们发射到无人能够救援的地方。 可能出错的情况非常多, 那是在制造恐慌吗? 不是。 那正是确保任务成功的安全工程, 也正是我认为处理 AGI 时应该采取的策略: 想清楚什么可能出错, 以确保它顺利进行。

08:56
So in this spirit, we've organized conferences, bringing together leading AI researchers and other thinkers to discuss how to grow this wisdom we need to keep AI beneficial. Our last conference was in Asilomar, California last year and produced this list of 23 principles which have since been signed by over 1,000 AI researchers and key industry leaders, and I want to tell you about three of these principles.
基于这样的精神, 我们组织了几场大会, 邀请了世界顶尖的人工智能研究者 和其他有想法的专业人士, 来探讨如何发展这样的智慧, 从而确保人工智能对人类有益。 我们最近的一次大会 去年在加州的阿西洛玛举行, 我们得出了 23 条原则, 自此已经有超过 1000 位 人工智能研究者,以及核心企业的 领导人参与签署。 我想要和各位分享 其中的三项原则。

09:19
One is that we should avoid an arms race and lethal autonomous weapons. The idea here is that any science can be used for new ways of helping people or new ways of harming people. For example, biology and chemistry are much more likely to be used for new medicines or new cures than for new ways of killing people, because biologists and chemists pushed hard -- and successfully -- for bans on biological and chemical weapons. And in the same spirit, most AI researchers want to stigmatize and ban lethal autonomous weapons. Another Asilomar AI principle is that we should mitigate AI-fueled income inequality. I think that if we can grow the economic pie dramatically with AI and we still can't figure out how to divide this pie so that everyone is better off, then shame on us.
其一是我们应当避免军备竞赛 以及致命自主武器的出现。 这里的想法是, 任何科学都既可以用新的方式帮助人们, 也可以用新的方式伤害人们。 例如,生物学和化学更可能被用来 研发新的药物和疗法, 而非新的杀人方法, 因为生物学家和化学家很努力—— 也很成功地——推动了对生化武器的禁令。 基于同样的精神, 大部分的人工智能研究者也希望 让致命自主武器蒙上污名并被禁止。 另一条阿西洛玛人工智能原则是, 我们应当缓解由人工智能加剧的收入不平等。 我认为,如果我们能用人工智能 把经济蛋糕做得大得多, 却仍然想不出如何分配这块蛋糕, 好让每个人都过得更好, 那我们可真该感到羞愧。

10:11
Alright, now raise your hand if your computer has ever crashed.
那么问一个问题,如果 你的电脑有死机过的,请举手。

10:16
Wow, that's a lot of hands. Well, then you'll appreciate this principle that we should invest much more in AI safety research, because as we put AI in charge of even more decisions and infrastructure, we need to figure out how to transform today's buggy and hackable computers into robust AI systems that we can really trust, because otherwise, all this awesome new technology can malfunction and harm us, or get hacked and be turned against us. And this AI safety work has to include work on AI value alignment, because the real threat from AGI isn't malice, like in silly Hollywood movies, but competence -- AGI accomplishing goals that just aren't aligned with ours. For example, when we humans drove the West African black rhino extinct, we didn't do it because we were a bunch of evil rhinoceros haters, did we? We did it because we were smarter than them and our goals weren't aligned with theirs. But AGI is by definition smarter than us, so to make sure that we don't put ourselves in the position of those rhinos if we create AGI, we need to figure out how to make machines understand our goals, adopt our goals and retain our goals.
哇,好多人举手。 那么你们就会认同这条原则: 我们应当在人工智能安全研究上投入多得多的资源, 因为随着我们让人工智能掌管更多的决策和基础设施, 我们必须弄清楚, 如何把如今漏洞百出、易被黑客入侵的计算机, 变成我们真正可以信赖的强健人工智能系统, 否则的话, 这些了不起的新技术就可能出现故障而伤害到我们, 或被黑客入侵后转而对抗我们。 这项人工智能安全工作必须包含 对人工智能价值观校准的研究, 因为来自 AGI 的真正威胁并非恶意—— 不像愚蠢的好莱坞电影中表现的那样—— 而是能力: AGI 达成的目标与我们的目标不一致。 例如,当我们人类使西非黑犀牛灭绝时, 并不是因为我们是一群痛恨犀牛的恶棍,对吧? 只是因为我们比它们聪明, 而我们的目标和它们的目标不一致。 但 AGI 按定义就比我们聪明, 所以如果我们创造出 AGI, 为了确保自己不落入那些犀牛的境地, 我们就得弄清楚如何让机器理解我们的目标、 采纳我们的目标, 并保持我们的目标。

11:25
And whose goals should these be, anyway? Which goals should they be?
不过,这些目标到底是谁的目标? 这些目标到底是什么目标?

11:30
This brings us to the third part of our rocket metaphor: the destination. We're making AI more powerful, trying to figure out how to steer it, but where do we want to go with it? This is the elephant in the room that almost nobody talks about -- not even here at TED -- because we're so fixated on short-term AI challenges. Look, our species is trying to build AGI, motivated by curiosity and economics, but what sort of future society are we hoping for if we succeed? We did an opinion poll on this recently, and I was struck to see that most people actually want us to build superintelligence: AI that's vastly smarter than us in all ways. What there was the greatest agreement on was that we should be ambitious and help life spread into the cosmos, but there was much less agreement about who or what should be in charge. And I was actually quite amused to see that there are some people who want it to be just machines.
这就引出了火箭比喻的第三部分:目的地。 我们正在让人工智能变得更强大, 并试图想办法操控它, 但我们到底想让它带我们去何方呢? 这就是那头房间里的大象, 显而易见却几乎无人谈论—— 甚至在 TED 也没人谈论—— 因为我们都把目光锁定在短期的人工智能挑战上。 你们看,我们这个物种正在试图建造 AGI, 驱动力是好奇心和经济利益, 但如果成功了, 我们希望迎来一个怎样的未来社会呢? 最近我们对此做了一次观点调查, 令我吃惊的是, 大部分人其实希望我们打造出超级人工智能: 在各个方面都远比我们聪明的人工智能。 大家共识最大的一点是, 我们应该有雄心壮志, 并帮助生命扩展到宇宙之中, 但对于应该由谁、或者由什么来主导, 大家就各持己见了。 而让我觉得挺有意思的是, 有些人居然希望由机器来主导。

12:32
And there was total disagreement about what the role of humans should be, even at the most basic level, so let's take a closer look at possible futures that we might choose to steer toward, alright?
至于人类该扮演怎样的角色, 大家的意见则是大相径庭, 即便在最基础的层面上也是如此, 那么,让我们进一步看看 我们可能选择驶向的那些未来, 好吗?

12:43
So don't get me wrong here. I'm not talking about space travel, merely about humanity's metaphorical journey into the future. So one option that some of my AI colleagues like is to build superintelligence and keep it under human control, like an enslaved god, disconnected from the internet and used to create unimaginable technology and wealth for whoever controls it. But Lord Acton warned us that power corrupts, and absolute power corrupts absolutely, so you might worry that maybe we humans just aren't smart enough, or wise enough rather, to handle this much power. Also, aside from any moral qualms you might have about enslaving superior minds, you might worry that maybe the superintelligence could outsmart us, break out and take over. But I also have colleagues who are fine with AI taking over and even causing human extinction, as long as we feel the AIs are our worthy descendants, like our children. But how would we know that the AIs have adopted our best values and aren't just unconscious zombies tricking us into anthropomorphizing them? Also, shouldn't those people who don't want human extinction have a say in the matter, too? Now, if you didn't like either of those two high-tech options, it's important to remember that low-tech is suicide from a cosmic perspective, because if we don't go far beyond today's technology, the question isn't whether humanity is going to go extinct, merely whether we're going to get taken out by the next killer asteroid, supervolcano or some other problem that better technology could have solved.
别误会我的意思, 我说的不是太空旅行, 只是比喻人类迈向未来的旅程。 我的一些人工智能领域的同事 很喜欢的一个选项是: 打造超级人工智能, 并将其置于人类的控制之下, 就像一个被奴役的神, 与互联网隔绝, 被用来为控制它的人创造出 难以想象的科技和财富。 但是艾克顿勋爵(Lord Acton)警告过我们: 权力导致腐败, 绝对的权力导致绝对的腐败, 所以你也许会担心, 我们人类就是不够聪明, 或者说不够智慧, 无法妥善掌控这么大的权力。 还有,撇开奴役更高等智慧 在道德上让你产生的不安, 你也许还会担心超级人工智能会比我们更聪明, 挣脱束缚并接管一切。 但我也有一些同事觉得, 让人工智能接管一切也无妨, 甚至导致人类灭绝也可以接受, 只要我们觉得人工智能 配得上做我们的后代, 就像我们的孩子一样。 但我们如何才能知道, 这些人工智能采纳了我们最好的价值观, 而不只是一群没有意识的僵尸, 诱使我们把它们拟人化? 此外,那些不希望人类灭绝的人, 难道不也应该对此有发言权吗? 如果这两个高科技选项都不合你意, 请记住,从宇宙的角度来看, 低科技就等于自杀, 因为如果我们不远远超越今天的科技, 问题就不再是人类是否会灭绝, 而只是我们会被下一颗杀手级小行星、 超级火山, 还是某个本可以用更好的科技解决的 其他问题所消灭。

14:18
So, how about having our cake and eating it ... with AGI that's not enslaved but treats us well because its values are aligned with ours? This is the gist of what Eliezer Yudkowsky has called "friendly AI," and if we can do this, it could be awesome. It could not only eliminate negative experiences like disease, poverty, crime and other suffering, but it could also give us the freedom to choose from a fantastic new diversity of positive experiences -- basically making us the masters of our own destiny.
那么,何不鱼与熊掌兼得…… 拥有一个并非被奴役, 而是因为价值观与我们一致 而善待我们的 AGI? 这就是尤多科斯基(Eliezer Yudkowsky) 所说的 “友善的人工智能” 的要旨, 若我们能做到这点,那简直太棒了。 它不仅可以消除疾病、贫穷、 犯罪等负面体验和其他苦难, 还能给予我们自由, 让我们从美妙而多样的 全新正面体验中进行选择—— 基本上让我们成为自己命运的主人。

14:54
So in summary, our situation with technology is complicated, but the big picture is rather simple. Most AI researchers expect AGI within decades, and if we just bumble into this unprepared, it will probably be the biggest mistake in human history -- let's face it. It could enable brutal, global dictatorship with unprecedented inequality, surveillance and suffering, and maybe even human extinction. But if we steer carefully, we could end up in a fantastic future where everybody's better off: the poor are richer, the rich are richer, everybody is healthy and free to live out their dreams.
总的来说, 我们在科技方面的处境很复杂, 但大局却相当简单。 多数人工智能研究者预期 AGI 会在未来几十年内实现, 如果我们毫无准备地一头撞上去, 那可能会成为人类历史上最大的错误—— 让我们面对现实吧。 它可能让残酷的全球独裁成为现实, 带来前所未有的不平等、监控和苦难, 甚至可能导致人类灭绝。 但如果我们小心操控, 我们就可能迎来一个美好的未来, 一个人人都过得更好的未来: 穷人更富有,富人也更富有, 每个人都健康, 能自由地去实现他们的梦想。

15:35
Now, hang on. Do you folks want the future that's politically right or left? Do you want the pious society with strict moral rules, or do you want a hedonistic free-for-all, more like Burning Man 24/7? Do you want beautiful beaches, forests and lakes, or would you prefer to rearrange some of those atoms with the computers, enabling virtual experiences? With friendly AI, we could simply build all of these societies and give people the freedom to choose which one they want to live in because we would no longer be limited by our intelligence, merely by the laws of physics. So the resources and space for this would be astronomical -- literally.
不过先别急。 你们希望未来在政治上是右派还是左派? 你们想要一个有着严格道德准则的虔诚社会, 还是一个享乐主义的自由放任社会, 更像是全年无休的火人节? 你们想要美丽的海滩、森林和湖泊, 还是更愿意用计算机重新排列其中一些原子, 实现虚拟的体验? 有了友善的人工智能, 我们可以把这些社会统统建造出来, 让人们自由选择想生活在哪一个里, 因为我们将不再受自身智慧的限制, 唯一的限制只有物理定律。 所以,可供使用的资源和空间将是天文数字—— 这是字面意义上的 “天文”。

16:13
So here's our choice. We can either be complacent about our future, taking as an article of blind faith that any new technology is guaranteed to be beneficial, and just repeat that to ourselves as a mantra over and over and over again as we drift like a rudderless ship towards our own obsolescence. Or we can be ambitious -- thinking hard about how to steer our technology and where we want to go with it to create the age of amazement. We're all here to celebrate the age of amazement, and I feel that its essence should lie in becoming not overpowered but empowered by our technology.
我们的选择如下: 我们可以对自己的未来感到自满, 盲目地相信任何新科技都必定有益, 并把这句话当作咒语一遍又一遍地念给自己听, 同时像一艘没有舵的船, 漂向自身被淘汰的结局。 或者,我们可以拥有雄心壮志—— 努力思考如何操控我们的科技, 以及我们想让它驶向何方, 去创造一个令人惊奇的时代。 我们相聚在这里, 共同赞颂这个令人惊奇的时代, 我觉得,它的精髓应当在于, 让科技赋予我们力量, 而非受控于它。
