11 Artificial Intelligence Principles

Tags: python  java  machine learning  artificial intelligence


If not, how do we teach values to an autonomous intelligence? Can we codify them or simply enter it somewhere in the system? Is it more of an iterative process where we will correct parameters on the fly as systems learn on their own and potentially behave unexpectedly?


It does not seem practical, ideal, or even risk-free to teach values to an AI simply to preserve ourselves and avoid unwanted situations. Situations come to mind where an AI's behavior was observed but its actions were unpredictable and there was no way to correct the course. As we face these new complexities, behaviors, and potential uses, it makes sense to reflect on and explore what rules are needed. It is a pressing concern as we expand into this new field, especially as AI usage grows and takes over critical applications.


We already have some frameworks and rules

The AI space has benefited from high-level foundational principles. An organization, The Partnership on AI, has published high-level tenets to preserve AI as a positive and promising force. That is a first step forward but it does not address the day-to-day needs on the ground, especially as we go from experimenting to releasing AIs in the wild.


On the technology side, perhaps the best starting point, and the main gap today, is to define design principles: first for the technologists building the AIs, and second for the teams managing those advanced intelligence systems.


There are many shades of AI

Of course, not all AI is alike. Systems are not all created equal, nor for the same purposes:


  • They have various levels of independence: from following a script under human supervision to independently allocating resources to robots in a factory
  • They have a wide range of responsibilities: from tweeting comments to managing armed drones
  • They operate in different environments: from a lab not connected to the internet to a live trading environment
Photo by Austin Distel on Unsplash

A checklist for the pioneers

There are many considerations when designing AI systems to keep the risk to society manageable, especially for scenarios involving high independence, key responsibilities, and sensitive environments:


  1. No black box: It has to be possible to check inside the program and review the code, logs, and timelines to understand how a system made a decision and which sources were checked. It should not be all machine code: users should be able to visualize and quickly understand the steps followed. It would avoid those situations where programs are shut down because nobody can fix bad behaviors or unintended actions.

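One way to approach the no-black-box principle is to record every decision alongside its inputs, the sources consulted, and the rule applied, so a reviewer can replay the reasoning later. A minimal sketch (the class and field names here are illustrative, not from the article):

```python
import json
import time

class AuditedDecision:
    """Records each decision with its inputs and sources for later review."""

    def __init__(self):
        self.log = []  # append-only decision trail

    def decide(self, inputs, sources, rule, verdict):
        # Every decision leaves a complete, timestamped trace
        entry = {
            "timestamp": time.time(),
            "inputs": inputs,
            "sources_checked": sources,
            "rule_applied": rule,
            "verdict": verdict,
        }
        self.log.append(entry)
        return verdict

    def explain(self, index):
        # Human-readable trace of one decision, no machine code required
        return json.dumps(self.log[index], indent=2)

trail = AuditedDecision()
trail.decide({"loan_amount": 5000}, ["credit_bureau"], "score>=650", "approved")
print(trail.explain(0))
```

A real system would persist this trail to durable storage, but even an in-memory log makes the decision path reviewable.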

  2. Debug mode: Artificial intelligence systems should have a debug mode, which could be turned on when the system makes mistakes, delivers unexpected results, or acts erratically. That would allow system administrators and support teams to quickly find root causes and track more parameters, at the cost of temporarily slowing down processing.


  3. Fail-safe: For higher-risk cases, systems should have a fail-safe switch to reduce or turn off any capability creating issues that cannot be fixed on the fly or explained quickly, to prevent potential damage. It is similar to the quality-control process in a factory, where an employee can stop the assembly line if they perceive an issue.


  4. Circuit breaker: For extreme cases, it must be possible to shut down the entire system. Some systems cannot be troubleshot in real time and could do more harm than good if left active. Stock exchanges have automated circuit breakers to manage volatility and avoid crashes. Automated trading systems using AI should have the same mechanisms in place, even if they have never had issues. That would prevent black-swan events, bugs, hacks, or any one-time situation leading to erratic trading and massive losses.

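The fail-safe and circuit-breaker points above map to a well-known software pattern: count consecutive failures and trip into an open state that blocks all further actions until a human resets it. A minimal sketch (thresholds and reset policy are assumptions, not prescriptions):

```python
class CircuitBreaker:
    """Trips after `threshold` consecutive failures and blocks further actions."""

    def __init__(self, threshold=3):
        self.threshold = threshold
        self.failures = 0
        self.open = False  # open = tripped, all actions blocked

    def execute(self, action):
        if self.open:
            raise RuntimeError("circuit open: human intervention required")
        try:
            result = action()
            self.failures = 0  # a success resets the failure count
            return result
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.open = True  # shut the whole capability down
            raise

    def reset(self):
        # Only a human operator should call this, after a root-cause review
        self.failures = 0
        self.open = False
```

Exchange circuit breakers are more elaborate (tiered price bands, timed halts), but the core idea is the same: a tripped breaker fails closed rather than letting an erratic system keep trading.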

  5. Approval matrices: At some point in the future, systems will fully mimic human reasoning and follow complex decision trees, applying judgment and making decisions. Humans should be in the chain of command and approve key decisions, especially when those are not repetitive and require independent thinking. It can be useful to keep the RACI framework in mind. If an autonomous bus sometimes takes a slight detour to avoid traffic, it should notify a human. If it decides to use a new road for the first time, the route should be approved by a human to avoid accidents. Giving systems control over resources such as electric power, security, and internet bandwidth can prove problematic, especially if bugs, security flaws, or other issues are discovered.

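One way to encode such an approval matrix is to classify each decision as routine (proceed), notable (proceed but notify a human), or novel (block until a human signs off). A toy sketch using the bus example; the decision types are hypothetical:

```python
from enum import Enum

class Action(Enum):
    PROCEED = "proceed"            # routine, fully delegated
    NOTIFY = "notify"              # proceed, but tell a human
    AWAIT_APPROVAL = "await"       # block until a human signs off

def approval_policy(decision_type, seen_before):
    """Toy approval matrix: novel, higher-impact decisions need a human."""
    if decision_type == "minor_detour":
        return Action.NOTIFY  # slight detour to avoid traffic: notify only
    if decision_type == "new_route":
        # A road used for the first time needs explicit human approval
        return Action.PROCEED if seen_before else Action.AWAIT_APPROVAL
    return Action.AWAIT_APPROVAL  # unknown decision types default to the safe side

print(approval_policy("minor_detour", True))   # routine: notify a human
print(approval_policy("new_route", False))     # first time: needs approval
```

Defaulting unknown decision types to `AWAIT_APPROVAL` is the RACI-style conservative choice: delegation must be granted explicitly, never assumed.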

  6. Keeping track of assets, delegation, and autonomy: Humans get substantial leverage by transferring work to machines, especially when tasks become too complex, fast, expensive, or time-consuming. Algorithmic trading and real-time optimization solutions are good examples. However, users should never delegate decision-making completely, stay on the sidelines until issues arise, or lose track of which processes are automated or delegated to an AI. This is particularly relevant, for example, with the advances of Robotic Process Automation (RPA). As RPA expands (it is currently the fastest-growing software category for enterprises), employees will start setting up their own routines, which could run in the cloud indefinitely without anybody's direct involvement. Companies should centrally track which routines are running and what AI agents are doing and creating. They should also implement policies preventing employees from running their own RPA from a USB drive or from the cloud to outsource tasks that should be controlled and owned by the company. Companies and users should also ensure they have a back door to access any bots or AI processes running in the background, in case the main account is disabled and users are locked out, or the regular account stops working in an emergency.

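The central tracking described above amounts to an inventory: every routine gets registered with an accountable owner and a stated purpose before it is allowed to run. A minimal sketch (the registry fields are illustrative assumptions):

```python
import uuid
from datetime import datetime, timezone

class BotRegistry:
    """Central inventory of automated routines, so none run untracked."""

    def __init__(self):
        self.bots = {}

    def register(self, owner, purpose):
        # No bot runs without an accountable human owner on record
        bot_id = str(uuid.uuid4())
        self.bots[bot_id] = {
            "owner": owner,
            "purpose": purpose,
            "registered": datetime.now(timezone.utc).isoformat(),
            "active": True,
        }
        return bot_id

    def deactivate(self, bot_id):
        self.bots[bot_id]["active"] = False

    def audit(self):
        # Everything still running, with its accountable owner attached
        return [(b["owner"], b["purpose"]) for b in self.bots.values() if b["active"]]
```

In practice this would back onto a shared database and be enforced at deployment time, so a routine launched from a USB drive or a personal cloud account simply has nowhere sanctioned to run.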

  7. No completely virtual or decentralized environments: A while back, sites such as Kazaa, Skype, and other peer-to-peer networks touted the idea of fully decentralized systems which would not reside in one location but instead would be hosted fractionally on a multitude of computers, with the ability to replicate content and repair themselves as hosts drop from the network. It is also one of the foundations of blockchain. That could obviously become a major threat if an autonomous AI system had this ability, went haywire, and became indestructible.


  8. Feedback with discernment: The ability to receive and process feedback can be a great differentiator. It already allows voice-recognition AI to understand and translate more languages than any human could ever learn. It can also enable machines to understand any accent or local dialect. However, in some applications, such as social media bots or a newsroom, consuming all the feedback and using it can prove problematic. Between fake news, trolls, and users testing systems' limits, processing feedback properly can be challenging for most AIs. In those areas, AIs need filters and tools to use feedback optimally and remain useful. Tay, the social bot from Microsoft, quickly fell off the deep end after ingesting raw feedback and taunts, prompting it to release offensive content to its followers because it could not tell right inputs from wrong ones, leading to unwanted outputs.

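The simplest version of such a filter screens raw feedback on both content and source trust before it is allowed to influence the model. This is a deliberately naive sketch; the blocklist terms and trust scores are placeholders, and a production filter would use far richer classifiers:

```python
BLOCKLIST = {"slur", "troll"}  # placeholder terms; real filters go much further

def filter_feedback(messages, min_trust=0.5):
    """Keep only feedback that passes basic content and trust checks
    before it is allowed to influence the model."""
    accepted = []
    for text, trust_score in messages:
        words = set(text.lower().split())
        if words & BLOCKLIST:
            continue  # discard abusive content outright
        if trust_score < min_trust:
            continue  # discard feedback from low-reputation sources
        accepted.append(text)
    return accepted

sample = [("great answer", 0.9), ("you troll", 0.9), ("spam", 0.1)]
print(filter_feedback(sample))  # → ['great answer']
```

Even this crude gate would have blunted a Tay-style failure: hostile feedback never reaches the learning loop.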

  9. Annotated and editable code: In the event where machines write, edit, and update code, all code should automatically have embedded comments, to explain the system’s logic behind the change. Humans or another system should be able to review and change the code if needed, with the proper context and understanding of prior revisions.


  10. Plan C: As with all systems, AIs in live environments have backups. Unlike typical IT systems, we are reaching a point where we cannot fully explore, understand, or test the AI systems we are building. If an AI system failed, went blank, or had major issues, we might revert to a backup that contains the same issues and ends up reproducing the problematic behaviors. In those cases, there should always be a plan C: switch back to human operations and use an alternative technology. As an example, a call center could handle thousands of automated AI-based voice interactions a day and dispatch users based on keywords. As volumes grow or peak, performance could degrade, calls could drop, and the system could eventually crash. The backup could be restored but still contain the same flaw. Without a plan C, the only option would be to turn everything off and decline all calls; with one, incoming calls could be redirected to humans or to an alternative system.

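The call-center example can be sketched as an escalation chain: try the primary AI, then the backup (which may share the same flaw), and fall back to humans as the last resort. The handler functions below are hypothetical stand-ins for real routing systems:

```python
def ai_router(call):
    raise RuntimeError("model crashed under peak load")  # simulated failure

def backup_router(call):
    raise RuntimeError("backup restored with the same flaw")  # same bug, same crash

def human_queue(call):
    return f"routed '{call}' to a human operator"  # plan C: an alternative channel

def handle_call(call, handlers=(ai_router, backup_router, human_queue)):
    """Try each tier in order; plan C (humans) is the last resort."""
    for handler in handlers:
        try:
            return handler(call)
        except RuntimeError:
            continue  # this tier failed, escalate to the next one
    raise RuntimeError("no handler available")

print(handle_call("billing question"))  # → routed 'billing question' to a human operator
```

The point of the pattern is that plan C is wired in before the failure, not improvised after it: no call is declined just because both the AI and its backup share a flaw.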

What could happen long-term?

Photo by Arseny Togulev on Unsplash

The worst case is a dystopian scenario: we end up with sprawling systems that we do not control very well and have trouble fixing or managing, leading to catastrophes. Skynet and HAL 9000 come to mind. Many additional dark scenarios can be found in Black Mirror on Netflix. Great innovation can lead to collisions. The quest for growth, efficiency, and profit can open the door to unsustainable risks.


In the best-case scenario, we manage to strike a balance between using intelligent machines for efficiency and ensuring prosperity for our civilization. It translates into better jobs and a higher quality of life for all.


What do you think? Are there reasons to fear unchecked autonomous intelligences? Are we doing it well today? What other principles can you think of?


Max Dufour is a Partner with Harmeda and leads strategic engagements for Financial Services, Technology, and Strategy Consulting clients. He can be reached directly at [email protected] or on LinkedIn.


Translated from: https://towardsdatascience.com/11-artificial-intelligence-principles-554fd8adb36a


Copyright notice: This post is published under the CC 4.0 BY-SA license. When reposting, please include the original source link and this notice.
Original post: https://blog.csdn.net/weixin_26712095/article/details/109123055
