THE ASILOMAR AI PRINCIPLES | FUTURE OF LIFE INSTITUTE

These principles were developed in conjunction with the 2017 Asilomar conference (videos here), through the process described here.

Artificial intelligence has already provided beneficial tools that are used every day by people around the world. Its continued development, guided by the following principles, will offer amazing opportunities to help and empower people in the decades and centuries ahead.


|| I | RESEARCH ISSUES ||

01. RESEARCH GOAL: The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.

02. RESEARCH FUNDING: Investments in AI should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies, such as:

* How can we make future AI systems highly robust, so that they do what we want without malfunctioning or getting hacked?

* How can we grow our prosperity through automation while maintaining people's resources and purpose?

* How can we update our legal systems to be more fair and efficient, to keep pace with AI, and to manage the risks associated with AI?

* What set of values should AI be aligned with, and what legal and ethical status should it have?

03. SCIENCE-POLICY LINK: There should be constructive and healthy exchange between AI researchers and policy-makers.

04. RESEARCH CULTURE: A culture of cooperation, trust, and transparency should be fostered among researchers and developers of AI.

05. RACE AVOIDANCE: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards.


|| II | ETHICS & VALUES ||

06. SAFETY: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.

07. FAILURE TRANSPARENCY: If an AI system causes harm, it should be possible to ascertain why.

08. JUDICIAL TRANSPARENCY: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.

09. RESPONSIBILITY: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.

10. VALUE ALIGNMENT: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.

11. HUMAN VALUES: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.

12. PERSONAL PRIVACY: People should have the right to access, manage, and control the data they generate, given AI systems' power to analyze and utilize that data.

13. LIBERTY & PRIVACY: The application of AI to personal data must not unreasonably curtail people's real or perceived liberty.

14. SHARED BENEFIT: AI technologies should benefit and empower as many people as possible.

15. SHARED PROSPERITY: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.

16. HUMAN CONTROL: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.

17. NON-SUBVERSION: The power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends.

18. AI ARMS RACE: An arms race in lethal autonomous weapons should be avoided.


|| III | LONGER-TERM ISSUES ||

19. CAPABILITY CAUTION: There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities.

20. IMPORTANCE: Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.

21. RISKS: Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.

22. RECURSIVE SELF-IMPROVEMENT: AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures.

23. COMMON GOOD: Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.



DISCUSSION ABOUT THE ASILOMAR AI PRINCIPLES

|| RULES FOR DISCUSSION ||

We want to encourage discussion of all principles, but please abide by the following rules to keep the discussion focused and friendly:

Users should adhere to a code of conduct in which the discussion remains civil, helpful, and constructive. Users should only post comments or questions that they would also make in person, in a professional public setting, such as a classroom or seminar hall.

Posts that do not contribute to constructive discussion will not be approved or will be removed. In particular, posts must be appropriate for an academic workplace; that is, comments should not contain language or content that is:

1. Rude or disrespectful;

2. Combative or overly aggressive;

3. Unpleasant or offensive by common workplace standards;

4. Targeted at individuals, rather than at arguments or ideas.

To enable useful discussions, posts should also not contain language or content that is:

1. Outside the scope of the forum topic;

2. Overly repetitive, including comments repeated in multiple forums;

3. Incomprehensible or extremely lengthy;

4. Commercial in nature.

We will not accept comments that include promotions or links to personal blogs. We will not accept comments that do not address the topic of the post in a scientific and rational manner. We value a variety of opinions, so please feel free to disagree; just do so politely.

