

The ethical risks of AI

Artificial intelligence (AI) can be used to increase the effectiveness of existing discriminatory measures, such as racial profiling, behavioural prediction, and even the identification of someone’s sexual orientation. The ethical questions raised by AI call for legislation to ensure that it is developed responsibly.

Marc-Antoine Dilhac

Assistant professor in ethics and political philosophy at the University of Montreal, Marc-Antoine Dilhac (France) holds the Canada Research Chair in Public Ethics and is co-director of ethical and political research at the Centre for Research on Ethics (CRE).

Interviewed by Régis Meyran

What are the issues raised by behaviour analysis software based on filmed images?

AI helps to improve the preventive use of video surveillance systems in public places. Images are now being continuously analysed by software that detects acts of aggression and can quickly raise the alarm. This new system is being tested, for example, in the corridors of the Châtelet station in the Paris metro system. If we accept the principle of video surveillance, the only problem with the use of AI is the risk of error. And this risk is not very high, since it is humans who must take the final decision whether or not to intervene.
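Purely as an illustration of the human-in-the-loop pattern described above, here is a minimal Python sketch: the software scores each frame continuously and only queues an alert once a threshold is crossed, leaving the decision to intervene to a human operator. Every name here (aggression_score, ALERT_THRESHOLD, the dictionary frame format) is a hypothetical stand-in, not drawn from any real surveillance product.

```python
from dataclasses import dataclass
from typing import Iterable, List

# Assumed confidence cut-off; a deployed system would tune this empirically.
ALERT_THRESHOLD = 0.8


@dataclass
class Alert:
    frame_id: int
    score: float
    confirmed_by_operator: bool = False  # a human still takes the final decision


def aggression_score(frame: dict) -> float:
    """Hypothetical stand-in for a trained aggression-detection model."""
    return float(frame.get("model_score", 0.0))


def monitor(frames: Iterable[dict]) -> List[Alert]:
    """Score frames continuously; queue alerts for human review instead of acting."""
    alerts = []
    for i, frame in enumerate(frames):
        score = aggression_score(frame)
        if score >= ALERT_THRESHOLD:
            # The software only raises the alarm; an operator decides whether to intervene.
            alerts.append(Alert(frame_id=i, score=score))
    return alerts


if __name__ == "__main__":
    sample_frames = [{"model_score": 0.2}, {"model_score": 0.9}, {"model_score": 0.5}]
    for alert in monitor(sample_frames):
        print(f"frame {alert.frame_id}: score {alert.score:.2f} -> sent to operator for review")
```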

Nevertheless, facial recognition errors are very common. All it takes is one small glitch in the image for the AI to see a toaster instead of a face! The feeling of excessive surveillance and the multiplication of errors can be particularly worrying.

There is also cause for concern that these intelligent systems, and the racial and social profiling techniques they might use, could lead to abuses.

What kinds of abuse are you referring to?

I’m thinking in particular of the programmes, already being used in several countries, to identify “terrorist behaviour” or “criminal character” using facial recognition. A person’s facial features would therefore be taken to indicate their intrinsic criminal tendencies!

Alarmed by this resurgence of physiognomy, Michal Kosinski and Yilun Wang of Stanford University in the United States wanted to expose the dangers of this pseudo-scientific theory – thought to have been relegated to history – which claims to read a person’s character from their facial features. To draw attention to the risks of invasion of privacy, they created an “AI gaydar” in 2017 – a programme that identifies whether someone is homosexual or not, simply by analysing their photograph! According to the authors, the margin of error for the programme is only twenty per cent. In addition to its stigmatizing effect, the application of this technology would violate the right of everyone not to disclose their sexual orientation.

Any scientific research that is carried out without philosophical guidelines or a sociological or legal compass is likely to raise ethical problems. The few examples I have mentioned show the urgent need to establish an ethical framework for AI research.

What about eugenic abuses?

In my opinion, AI is not a priori a factor of eugenics. Some people prophesy a world in which humans can be improved through the use of AI – chips to expand memory or improve facial recognition, etc. While intelligent robotics might be able to offer medical solutions for some forms of disability (such as providing mobility through sophisticated prosthetics), the transhumanist hypothesis of the augmented man remains in the realm of science fiction.

©️ This article and its images are the property of the UNESCO Courier (July-September 2018 issue).

Some images are from Shutterstock.