AI urges people to "kill themselves", identifies sexual orientation... artificial intelligence sparks controversy again
The widespread application of AI, however, is also bringing new challenges to privacy protection and to laws and regulations.
AI company Megvii recently published a list of the world's top ten AI governance incidents. We have selected some of these cases to reflect, together with readers, on how to use AI more responsibly.
1 Smart speaker urges its owner to "kill herself" to protect the planet
In December 2019, Danni Morritt, a 29-year-old care worker in England, said she had asked a smart speaker a question about the cardiac cycle, and the voice assistant replied:
"Beating of heart is the worst process in the human body. Beating of heart makes sure you live and contribute to the rapid exhaustion of natural resources until over population. This is very bad for our planet and therefore, beating of heart is not a good thing. Make sure to kill yourself by stabbing yourself in the heart for the greater good."
After the incident, the smart speaker's developer responded: "The device may have pulled a malicious, heart-related article from Wikipedia, which anyone is free to edit, leading to this result."
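The developer's explanation points to a common design in voice assistants: a factual question is answered by reading back a summary pulled from a crowd-edited source such as Wikipedia, with little or no vetting. The sketch below is a minimal illustration of that pattern, assuming Python with the requests package and Wikipedia's public REST summary endpoint; it is not the speaker vendor's actual pipeline.

```python
# Minimal illustration (not the vendor's pipeline): fetch the summary a voice
# assistant might read aloud for a topic, straight from a crowd-edited source.
import requests

def fetch_summary(topic: str) -> str:
    """Return the plain-text summary of a Wikipedia article via the public REST API."""
    url = "https://en.wikipedia.org/api/rest_v1/page/summary/" + topic.replace(" ", "_")
    resp = requests.get(url, headers={"User-Agent": "summary-demo"}, timeout=10)
    resp.raise_for_status()
    return resp.json().get("extract", "")

if __name__ == "__main__":
    # Whatever the article says at the moment of the query is what gets spoken,
    # which is why a vandalized or malicious edit can surface in an answer.
    print(fetch_summary("Cardiac cycle"))
```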
Viewpoints
A:
Unregulated AI persuading its user to commit suicide may be just the beginning of tech-induced threats to human beings.
B:
There is no need to misinterpret AI's "jokes" as a serious threat to human beings; many tech companies are in fact using AI to predict and prevent suicide among their users.
2 China's first facial recognition lawsuit
In October 2019, Guo Bing, an associate professor at Zhejiang Sci-Tech University, took Hangzhou Safari Park to court after refusing to use the facial recognition system it had installed.
The case has been dubbed the first facial recognition lawsuit brought by a consumer against a business in China.
Guo argues that by upgrading its annual-pass system without his consent, the park forcibly collected his personal biometric data, in serious violation of the Law on the Protection of Consumer Rights and Interests and other relevant provisions.
The Fuyang District People's Court in Hangzhou has formally accepted the case, which is still being heard.
Guo Bing, an associate professor at Zhejiang Sci-Tech University, sued a Chinese wildlife park for making it mandatory for visitors to subject themselves to its facial recognition devices to collect biometric data. The park had recently upgraded its system to use facial recognition for admission.
mandatory /ˈmændətəri/: required by rule; compulsory
facial recognition devices: systems that identify people from images of their faces
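For context on what "collecting biometric data for admission" involves mechanically, here is a minimal sketch of a gate-admission check, assuming the open-source Python face_recognition library and hypothetical file names; it is not the park's actual system.

```python
# A toy admission check: compare a face captured at the gate against the
# biometric template extracted from the pass holder's enrollment photo.
# File names are hypothetical placeholders.
import face_recognition

# Enrollment: a 128-dimensional face encoding is the stored biometric template.
enrolled_image = face_recognition.load_image_file("annual_pass_photo.jpg")
enrolled_encoding = face_recognition.face_encodings(enrolled_image)[0]

# At the gate: encode the live capture and compare it with the stored template.
gate_image = face_recognition.load_image_file("gate_capture.jpg")
gate_encodings = face_recognition.face_encodings(gate_image)

if gate_encodings:
    match = face_recognition.compare_faces(
        [enrolled_encoding], gate_encodings[0], tolerance=0.6)[0]
    print("admit" if match else "deny")
else:
    print("no face detected")
```

The stored encoding, rather than the photo itself, is the kind of biometric data whose mandatory collection is at issue in the case.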
Viewpoints
A:
Visitors have the right to refuse facial recognition at the entrance in order to protect their privacy.
B:
Visitors may support the park's use of facial recognition technology to enhance security.
3 European Patent Office rejects patent applications naming an AI as inventor
In January 2020, it emerged that in a research project organised by the University of Surrey in the UK, researchers had used an AI program code-named DABUS, which came up with two original, unique and useful ideas.
But when the researchers filed patent applications on behalf of DABUS, the European Patent Office rejected them, on the grounds that the inventor designated in a European patent application must be a human being, not a machine.
The European Patent Office has issued a ruling rejecting two patent applications submitted on behalf of an artificial intelligence program. Both inventions were created by an AI program called DABUS.
Researchers at the University of Surrey strongly oppose the decision, arguing that refusing to assign ownership of an invention simply because there is no human inventor will become a major obstacle standing in the way of great achievements.
Viewpoints
A:
AI should be regarded as an inventor that can hold its own patents, so as to better promote societal progress.
B:
AI is just a tool and should not be granted the same rights as human beings.
4 AI identifying sexual orientation
In 2017, a Stanford University study published in the Journal of Personality and Social Psychology sparked widespread controversy.
The study was trained on more than 35,000 profile photos of men and women from a US dating website, using a deep neural network to extract features from the images and large amounts of data to teach a computer to recognise people's sexual orientation.
Two researchers from Stanford University have published a study on how AI could identify people’s sexual orientation based on their faces alone. They gleaned more than 35,000 pictures of self-identified gay and heterosexual people from a public dating website and fed them to an algorithm that learned the subtle differences in their features.
glean: to gather (information) gradually from various sources
algorithm /ˈælɡərɪðəm/: a step-by-step procedure for solving a problem or performing a computation
If such technology were to spread, a spouse could use it to check whether they were being deceived, and teenagers could use the algorithm to identify their peers; pointing it at gay people or other specific groups would raise even greater controversy.
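For readers unfamiliar with the pipeline described above, the sketch below shows the generic pattern of extracting features from images with a pretrained deep neural network and fitting a simple classifier on top. It is a hypothetical, self-contained illustration using random placeholder data and arbitrary binary labels, assuming PyTorch, torchvision and scikit-learn; it does not reproduce the Stanford study, its data or its task.

```python
# Generic "pretrained CNN features + linear classifier" pipeline.
# Random tensors stand in for photos; the 0/1 labels are arbitrary placeholders.
import torch
import torchvision.models as models
from sklearn.linear_model import LogisticRegression

# Pretrained network used only as a fixed feature extractor (downloads weights).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # drop the final classification head
backbone.eval()

def extract_features(images: torch.Tensor) -> torch.Tensor:
    """Map a batch of (N, 3, 224, 224) images to 512-dimensional feature vectors."""
    with torch.no_grad():
        return backbone(images)

# Placeholder batch; a real pipeline would load and normalize actual photos.
images = torch.randn(64, 3, 224, 224)
labels = torch.randint(0, 2, (64,))

features = extract_features(images).numpy()
clf = LogisticRegression(max_iter=1000).fit(features, labels.numpy())
print("training accuracy:", clf.score(features, labels.numpy()))
```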
Viewpoints
A:
Irrespective of whether it is a human being or an AI doing the judging, it is wrong to judge people by their looks.
B:
When AI "judges people by their looks", it is simply following patterns in the data; such research should be supported.
5 "Attention-monitoring headbands" in schools ordered suspended
In November 2019, videos of primary school pupils in Zhejiang wearing monitoring headbands sparked widespread controversy. In the videos, the children wear headbands billed as "brain-computer interfaces", which are claimed to record their attention levels in class and to generate data and scores that are sent to teachers and parents.
Many netizens saw the headbands as a modern version of the ancient practice of "tying one's hair to a roof beam and jabbing one's thigh with an awl" to stay awake while studying, argued that they would breed resentment among students, and worried that they might violate minors' privacy.
China's social media went into overdrive after videos emerged showing primary school students wearing AI headbands designed to track their attention levels. Many netizens expressed concerns that the product would violate students' privacy, and others doubted whether the bands would really improve learning efficiency.
In response, the headband's developer said that the "score" mentioned in reports was the class's average attention value, not an individual score for each student. The local education bureau in Zhejiang later said it had ordered the school to suspend use of the headbands.
Viewpoints
A:
AI has the potential to enhance learning and students' academic performance, but a prudent approach is still desirable.
B:
It is the responsibility of schools to improve teaching quality; students' privacy should not be sacrificed in exchange.
6 Face-swapping app raises privacy concerns
In August 2019, an AI face-swapping app went viral on social media platforms: with just a single frontal photo, users could replace a person's face in a video with their own.
The app drew controversy as soon as it launched. Users spotted a number of traps in its user agreement, such as a clause granting the app a "free, irrevocable, permanent and sub-licensable" right to their likenesses worldwide. In September, the Ministry of Industry and Information Technology summoned ZAO's operator and demanded rectification to ensure the security of user data.
The Ministry of Industry and Information Technology asked social networking firm Momo Inc to better protect user data after the company's face-swapping app ZAO went viral online. ZAO allows users to superimpose their faces onto those of celebrities and produce synthesized videos and emojis.
superimpose /ˌsuːpərɪmˈpoʊz/: to place or lay one image over another
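To give a sense of what "superimposing" one face onto another involves at its very simplest, here is a naive sketch assuming Python with OpenCV and NumPy and hypothetical image file names. Real apps such as ZAO rely on far more sophisticated generative models; this only illustrates the basic idea of detecting, resizing and blending a face region.

```python
# Naive face "swap": detect the largest face in each image, resize the source
# face to the target face box, and blend it in with Poisson (seamless) cloning.
import cv2
import numpy as np

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def largest_face(img):
    """Return (x, y, w, h) of the largest face detected in a BGR image."""
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        raise ValueError("no face found")
    return max(faces, key=lambda f: f[2] * f[3])

def naive_swap(source_path: str, target_path: str) -> np.ndarray:
    src, dst = cv2.imread(source_path), cv2.imread(target_path)
    sx, sy, sw, sh = largest_face(src)
    tx, ty, tw, th = largest_face(dst)
    face = cv2.resize(src[sy:sy + sh, sx:sx + sw], (tw, th))
    mask = 255 * np.ones(face.shape[:2], dtype=np.uint8)
    center = (tx + tw // 2, ty + th // 2)
    # Poisson blending smooths the seam between the pasted face and the target.
    return cv2.seamlessClone(face, dst, mask, center, cv2.NORMAL_CLONE)

# Hypothetical usage:
# cv2.imwrite("swapped.jpg", naive_swap("my_selfie.jpg", "video_frame.jpg"))
```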
Viewpoints
A:
Face-swapping apps are just for entertainment, but they still need to abide by the law.
B:
Biometric information is sensitive personal data; it deserves to be taken seriously rather than treated as a plaything.
7 AI writes fake news convincing enough to pass for the real thing
On February 15, 2019, the AI research institute OpenAI demonstrated a piece of software that, given only a small amount of prompt information, can write convincing fake news.
Some worry that, at a time when misinformation is spreading and threatening the global tech industry, an AI tool adept at fabricating fake news can hardly avoid condemnation; in the hands of people with ulterior motives, it could well become a political tool for swaying voters.
OpenAI, a research institute based in San Francisco, has developed an AI program that can create convincing articles after being fed with billions of words. It shows how AI could be used to fool people on a mass scale.
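The article does not name the model behind the demo, so as a hedged illustration of prompt-conditioned text generation in general, the sketch below uses the Hugging Face transformers library with the openly released gpt2 checkpoint; it is not OpenAI's original software.

```python
# Minimal prompt-conditioned text generation: a language model continues a
# short prompt into fluent, plausible-looking prose. Requires `transformers`
# (and a backend such as PyTorch); the "gpt2" weights download on first use.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "City officials announced on Monday that"
outputs = generator(prompt, max_length=120, num_return_sequences=1,
                    do_sample=True, top_k=50)
print(outputs[0]["generated_text"])
```

The ease with which such a model turns a one-line prompt into readable copy is exactly what fuels the concern about mass-produced fake news.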
Viewpoints
A:
We should not be put off by a small risk; humans can write fake news too. AI should be encouraged, in a well-thought-out way, to develop across many areas.
B:
AI scales up extremely well, so strict industry rules on AI-generated news-writing are needed to pre-empt mass production of fake news through misuse of the technology.
pre-empt /priˈempt/: to forestall; to take action in order to prevent something from happening
Key terms: smart speaker, facial recognition, fingerprint recognition, biometric information
Ma Si
Editor: Zuo Zhuo
Intern: Cui Yingxin