
How to make artificial intelligence benefit humanity

  • Published: 2024-04-10
Summary: The technology industry is facing up to the world-shaking ramifications of artificial intelligence. There is now a recognition that AI will disrupt how societies operate, from education and employment to how data will be collected about people.

The technology industry is facing up to the world-shaking ramifications of artificial intelligence. There is now a recognition that AI will disrupt how societies operate, from education and employment to how data will be collected about people.

Machine learning, a form of advanced pattern recognition that enables machines to make judgments by analysing large volumes of data, could greatly supplement human thought. But such soaring capabilities have stirred almost Frankenstein-like fears about whether developers can control their creations.
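To make that definition concrete, here is a minimal sketch of the pattern-recognition loop in Python: fit a model on labelled examples, then let it pass judgment on unseen ones. The synthetic data, the scikit-learn model choice and every name below are illustrative assumptions, not anything from the article.

```python
# A minimal sketch of machine learning as pattern recognition: fit a model
# on labelled examples, then let it judge unseen cases. The data here is
# synthetic; a real system would learn from far larger volumes.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                 # 1,000 examples, 5 features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # a hidden pattern to recover

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# The "judgment": a probability for each unseen case.
print(model.predict_proba(X_test[:3]))
print("held-out accuracy:", model.score(X_test, y_test))
```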

Failures of autonomous systems — like the death last year of a US motorist in a partially self-driving car from Tesla Motors — have led to a focus on safety, says Stuart Russell, a professor of computer science and AI expert at the University of California, Berkeley. “That kind of event can set back the industry a long way, so there is a very straightforward economic self-interest here,” he says.

Alongside immigration and globalisation, fears of AI-driven automation are fuelling public anxiety about inequality and job security. The election of Donald Trump as US president and the UK’s vote to leave the EU were partly driven by such concerns. While some politicians claim protectionist policies will help workers, many industry experts say most job losses are caused by technological change, largely automation.

Global elites — those with high income and educational levels, who live in capital cities — are considerably more enthusiastic about innovation than the general population, the FT/Qualcomm Essential Future survey found. This gap, unless addressed, will continue to cause political friction.

Vivek Wadhwa, a US-based entrepreneur and academic who writes about ethics and technology, thinks the new wave of automation has geopolitical implications: “Tech companies must accept responsibility for what they’re creating and work with users and policymakers to mitigate the risks and negative impacts. They must have their people spend as much time thinking about what could go wrong as they do hyping products.”

The industry is bracing itself for a backlash. Advances in AI and robotics have brought automation to areas of white-collar work, such as legal paperwork and analysing financial data. Some 45 per cent of US employees’ work time is spent on tasks that could be automated with existing technologies, a study by McKinsey says.

Industry and academic initiatives have been set up to ensure AI works to help people. These include the Partnership on AI to Benefit People and Society, established by companies including IBM, and a $27m effort involving Harvard and the Massachusetts Institute of Technology. Groups like OpenAI, backed by Elon Musk and Google, have made progress, says Prof Russell: “We’ve seen papers … that address the technical problem of safety.”

There are echoes of past efforts to deal with the complications of a new technology. Satya Nadella, chief executive of Microsoft, compares it to 15 years ago, when Bill Gates rallied his company’s developers to combat computer malware. His “trustworthy computing” initiative was a watershed moment. In an interview with the FT, Mr Nadella said he hoped to do something similar to ensure AI works to benefit humans.

AI presents some thorny problems, however. Machine learning systems derive insights from large amounts of data. Eric Horvitz, a Microsoft executive, told a US Senate hearing late last year that these data sets may themselves be skewed. “Many of our data sets have been collected … with assumptions we may not deeply understand, and we don’t want our machine-learned applications … to be amplifying cultural biases,” he said.

Last year, an investigation by news organisation ProPublica found that an algorithm used by the US justice system to determine whether criminal defendants were likely to reoffend had a racial bias. Black defendants with a low risk of reoffending were more likely than white ones to be labelled as high risk.
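A hedged sketch of the kind of disparity check behind such findings: compare, for each group, how often people who did not reoffend were nevertheless labelled high risk (the false-positive rate). The toy records and group labels below are invented for illustration; ProPublica’s analysis used real court data.

```python
# Compare false-positive rates across groups: the share of people who did
# NOT reoffend but were still flagged high risk. Hypothetical records only.
records = [
    # (group, predicted_high_risk, actually_reoffended)
    ("a", True, False), ("a", False, False), ("a", True, True),
    ("b", False, False), ("b", False, True), ("b", True, True),
]

def false_positive_rate(rows):
    """Share of non-reoffenders the algorithm labelled high risk."""
    negatives = [r for r in rows if not r[2]]
    flagged = [r for r in negatives if r[1]]
    return len(flagged) / len(negatives) if negatives else float("nan")

for group in ("a", "b"):
    rows = [r for r in records if r[0] == group]
    print(group, round(false_positive_rate(rows), 2))
```

A gap between the two printed rates is the shape of the bias the investigation reported: equal overall accuracy can still hide unequal error rates.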

Greater transparency is one way forward, for example making it clear what information AI systems have used. But the “thought processes” of deep learning systems are not easy to audit. Mr Horvitz says such systems are hard for humans to understand. “We need to understand how to justify [their] decisions and how the thinking is done.”
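One way to see why such audits are easier for some models than others: with a linear model, the information used for a decision can be read off as per-feature contributions, exactly the readout a deep network does not offer. This sketch assumes a scikit-learn logistic regression on synthetic data; the feature names are made up.

```python
# For a linear model, each feature's contribution to a single decision is
# just coefficient * feature value, so "what information was used" is
# directly inspectable. Deep networks admit no such direct readout.
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.random.default_rng(1).normal(size=(500, 3))
y = (X[:, 0] - X[:, 2] > 0).astype(int)
model = LogisticRegression().fit(X, y)

case = X[0]
contributions = model.coef_[0] * case   # each feature's pull on the decision
for name, c in zip(["feature_0", "feature_1", "feature_2"], contributions):
    print(f"{name}: {c:+.2f}")
print("intercept:", model.intercept_[0])
```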

As AI comes to influence more government and business decisions, the ramifications will be widespread. “How do we make sure the machines we ‘train’ don’t perpetuate and amplify the same human biases that plague society?” asks Joi Ito, director of MIT’s Media Lab.

Executives like Mr Nadella believe a mixture of government oversight — including, by implication, the regulation of algorithms — and industry action will be the answer. He plans to create an ethics board at Microsoft to deal with any difficult questions thrown up by AI.

He says: “I want … an ethics board that says, ‘If we are going to use AI in the context of anything that is doing prediction, that can actually have societal impact … that it doesn’t come with some bias that’s built in.’”

Making sure AI systems benefit humans without unintended consequences is difficult. Human society is incapable of defining what it wants, says Prof Russell, so programming machines to maximise the happiness of the greatest number of people is problematic.

This is AI’s so-called “control problem”: the risk that smart machines will single-mindedly pursue arbitrary goals even when they are undesirable. “The machine has to allow for uncertainty about what it is the human really wants,” says Prof Russell.
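A toy illustration of that point: rather than optimising one assumed goal, a machine that “allows for uncertainty” keeps several hypotheses about what the human wants and weighs actions by expected utility, deferring to the human when the hypotheses disagree sharply. All probabilities and utilities below are invented for illustration.

```python
# The machine holds competing hypotheses about the human's goal and picks
# the action with the best expected utility across them. Numbers invented.
reward_hypotheses = [
    # (probability, utility of each action under this reading of the goal)
    (0.6, {"proceed": 1.0, "ask_human": 0.6, "stop": 0.0}),
    (0.4, {"proceed": -5.0, "ask_human": 0.6, "stop": 0.0}),
]

def expected_utility(action):
    return sum(p * utils[action] for p, utils in reward_hypotheses)

actions = ["proceed", "ask_human", "stop"]
for a in actions:
    print(a, round(expected_utility(a), 2))
print("chosen:", max(actions, key=expected_utility))
# Chooses "ask_human": proceeding risks a large loss if the minority
# hypothesis about the human's goal turns out to be the right one.
```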

Ethics committees will not resolve concerns about AI taking jobs, however. Fears of a backlash were apparent at this year’s World Economic Forum in Davos as executives agonised over how to present AI. The common response was to say machines will make many jobs more fulfilling, though other jobs could be replaced.

The profits from productivity gains for tech companies and their customers could be huge. How those should be distributed will become part of the AI debate. “Whenever someone cuts cost, that means, hopefully, a surplus is being created,” says Mr Nadella. “You can always tax surplus — you can always make sure that surplus gets distributed differently.”

