An AI
code of ethics, also called an AI value platform, is a policy
statement that formally defines the role of artificial intelligence as it
applies to the continued development of the human race. The purpose of an AI
code of ethics is to provide stakeholders with guidance when faced with an
ethical decision regarding the use of artificial intelligence.
Isaac
Asimov, the science fiction writer, foresaw the potential dangers of
autonomous AI agents long before their development and created The Three Laws
of Robotics as a means of limiting those risks. In Asimov's code of ethics,
the first law forbids robots from actively harming humans or from allowing harm to
come to humans through inaction. The second law orders robots to obey
humans unless doing so would conflict with the first law. The third
law orders robots to protect themselves, insofar as doing so does not conflict
with the first two laws.
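Asimov's hierarchy can be read as a strict priority ordering over candidate actions. The sketch below is a hypothetical illustration of that ordering only; the Action fields and the evaluate function are invented for this example and are not drawn from Asimov's text or from any real robotics system.

```
from dataclasses import dataclass

@dataclass
class Action:
    """Hypothetical description of a candidate robot action (illustrative only)."""
    harms_human: bool    # action injures a human, or inaction lets harm occur
    obeys_order: bool    # action follows the human order that was given
    protects_self: bool  # action preserves the robot's own existence

def evaluate(candidates: list[Action]) -> Action | None:
    """Pick an action by applying the three laws in strict priority order."""
    # First Law takes absolute precedence: discard anything that harms a human.
    safe = [a for a in candidates if not a.harms_human]
    # Second Law: among safe actions, prefer those that obey human orders.
    obedient = [a for a in safe if a.obeys_order] or safe
    # Third Law: among what remains, prefer actions that protect the robot itself.
    self_preserving = [a for a in obedient if a.protects_self] or obedient
    return self_preserving[0] if self_preserving else None

# Example: obeying the order is safe but sacrifices the robot; because the
# Second Law outranks the Third, it is still chosen over self-preservation.
choice = evaluate([
    Action(harms_human=False, obeys_order=True, protects_self=False),
    Action(harms_human=False, obeys_order=False, protects_self=True),
])
```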
Although developers are still in the early
stages of AI adoption, it's important for enterprises to take ethical and
responsible approaches when creating AI systems. To that end, the Future of Life
Institute, a non-profit founded by MIT cosmologist Max Tegmark, Skype co-founder Jaan
Tallinn and DeepMind research scientist Viktoriya Krakovna, has worked with
AI researchers and developers to establish a set of guiding principles
now known as the Asilomar AI Principles. This AI code of ethics
mandates that:
The goal of AI research should be to create
not undirected intelligence, but beneficial intelligence.
Investments in AI should be accompanied by
funding for research on ensuring its beneficial use.
If an AI system causes harm, it should be
possible to ascertain why.
Any involvement by an autonomous system in
judicial decision-making should provide a satisfactory explanation auditable
by a competent human authority.
There should be a constructive and healthy
exchange between AI researchers and policy-makers.
A culture of cooperation, trust, and
transparency should be fostered among researchers and developers of AI.
Teams developing AI systems should actively
cooperate to avoid corner-cutting on safety standards.
AI systems should be safe and secure
throughout their operational lifetime, and verifiably so where applicable and
feasible.
Designers and builders of advanced AI
systems are stakeholders in the moral implications of their use, misuse, and
actions, with a responsibility and opportunity to shape those implications.
Highly autonomous AI systems should be
designed so that their goals and behaviors can be assured to align with human
values throughout their operation.
AI systems should be designed and operated
so as to be compatible with ideals of human dignity, rights, freedoms, and
cultural diversity.
People should have the right to access,
manage and control the data they generate, given AI systems' power to analyze
and utilize that data.
The application of AI to personal data must
not unreasonably curtail people's real or perceived liberty.
AI technologies should benefit and empower
as many people as possible.
The economic prosperity created by AI
should be shared broadly, to benefit all of humanity.
Humans should choose how and whether to
delegate decisions to AI systems, to accomplish human-chosen objectives.
The power conferred by control of highly
advanced AI systems should respect and improve, rather than subvert, the
social and civic processes on which the health of society depends.
Risks posed by AI systems, especially
catastrophic or existential risks, must be subject to planning and mitigation
efforts commensurate with their expected impact.
An arms race in lethal autonomous weapons
should be avoided.