
Rest easy: robots aren't going to turn on humanity

Published: 2024-07-02 06:57

Fear of robots, computers, and automation may be at an all-time high since B movies of the 1950s. Not only is there concern about jobs — even white-collar occupations are vulnerable — but big names in technology have weighed in with their worries.

Philanthropist and Microsoft co-founder Bill Gates said, "[I] don't understand why some people aren't concerned" about artificial super intelligence that could exceed human control. Physicist Stephen Hawking thinks that "development of full artificial intelligence could spell the end of the human race," as machines could redesign themselves at a rate that would leave biological evolution in the dust. Tesla Motors CEO and technology investor Elon Musk said research in the area could be like "summoning the demon" that is beyond control. He donated $10 million to the Future of Life Institute, which sponsors research into how humanity can navigate the waters of change in the face of technology.

That's one camp.

Then there's another that says doomsday concerns are overblown and that, like a new age FDR, the only thing to fear is fear itself. These people — technologists, economists, and others — say that the combination of artificial intelligence, automation, and robotics will usher in new, better solutions to world problems.

They argue that the fear of technology is old, and that past experience has proven that while new developments can kill off jobs, they create even more to replace them. Machines could, in theory, replace humans in a wide variety of occupations, but their shortcomings in creativity, adaptability, and even common sense are vast, making them unable to do so in the foreseeable future.

Instead, these people suggest, robots and computers will work side by side with humans, enhancing productivity and opening new vistas of freedom for people to move beyond the drudgery of current life. In short, the coming years will look like all the ones that came before, and society will sort itself out. In fact, a new film, "Chappie," due out March 6, depicts an anti-Terminator view, a world in which robots hold the solutions and humans are the bad guys. "You would have something that has 1,000 times the intelligence that we have, looking at the same problems that we look at," the director Neill Blomkamp told NBC News. "I think the level of benefit would be immeasurable."

The swings of show biz reflect a deep concern and disagreement over whether technology holds promise or peril. The question comes down to whether the past necessarily predicts the future or if humankind could be in for a nasty shock. Hopefully the optimists will be able to say, "We told you so." Here are five voices that say worries are overblown and leaps in technology will bring the human race along with them.

David Autor
Professor of Economics and Associate Department Head, Department of Economics, Massachusetts Institute of Technology

In 1966, the philosopher Michael Polanyi observed, "We can know more than we can tell... The skill of a driver cannot be replaced by a thorough schooling in the theory of the motorcar; the knowledge I have of my own body differs altogether from the knowledge of its physiology." Polanyi's observation largely predates the computer era, but the paradox he identified — that our tacit knowledge of how the world works often exceeds our explicit understanding — foretells much of the history of computerization over the past five decades. ...[J]ournalists and expert commentators overstate the extent of machine substitution for human labor and ignore the strong complementarities. The challenges to substituting machines for workers in tasks requiring adaptability, common sense, and creativity remain immense.

Jeff Hawkins
Executive director and chairman of the cognitive theory research organization Redwood Neuroscience Institute, co-founder of Palm Computing, and co-founder of machine intelligence company Numenta

The machine-intelligence technology we are creating today, based on neocortical principles, will not lead to self-replicating robots with uncontrollable intentions. There won't be an intelligence explosion. There is no existential threat. This is the reality for the coming decades, and we can easily change direction should new existential threats appear.

Eric Horvitz
Distinguished Scientist and Managing Director, Microsoft Research

There have been concerns about the long-term prospect that we lose control of certain kinds of intelligences. I fundamentally don't think that's going to happen. I think that we will be very proactive in terms of how we field AI systems, and that in the end we'll be able to get incredible benefits from machine intelligence in all realms of life, from science to education to economics to daily life.

Deborah Johnson
Anne Shirley Carter Olsson Professor of Applied Ethics in the Science, Technology, and Society Program in the School of Engineering and Applied Sciences at the University of Virginia

Presumably in fully autonomous machines all the tasks are delegated to machines. This, then, poses the responsibility challenge. Imagine a drone circulating in the sky, identifying a combat area, determining which of the humans in the area are enemy combatants and which are noncombatants, and then deciding to fire on enemy targets. Although drones of this kind are possible, the description is somewhat misleading. In order for systems of this kind to operate, humans must be involved. Humans make the decisions to delegate to machines; the humans who design the system make decisions about how the machine tasks are performed or, at least, they set the parameters in which the machine decisions will be made; and humans decide whether the machines are reliable enough to be delegated tasks in real-world situations.

Michael Littman
Professor of Computer Science, Brown University

To be clear, there are indeed concerns about the near-term future of AI — algorithmic traders crashing the economy, or sensitive power grids overreacting to fluctuations and shutting down electricity for large swaths of the population. There's also a concern that systemic biases within academia and industry prevent underrepresented minorities from participating and helping to steer the growth of information technology. These worries should play a central role in the development and deployment of new ideas. But dread predictions of computers suddenly waking up and turning on us are simply not realistic.

