For AI technology to fulfill its promise, we must design effective
mechanisms into AI systems to support responsible AI behavior and curtail
potentially irresponsible use, e.g., in areas such as privacy protection, human
autonomy, robustness, and the prevention of bias and discrimination in automated
decision making. In this paper, we present a framework that provides
computational facilities for parties in a social ecosystem to produce the
desired responsible AI behaviors. To achieve this goal, we analyze AI systems
at the architecture level and propose two decentralized cryptographic
mechanisms for an AI system architecture: (1) using Autonomous Identity to
empower human users, and (2) automating rules and adopting conventions within
social institutions. We then propose a decentralized approach and outline the
key concepts and mechanisms based on Decentralized Identifier (DID) and
Verifiable Credentials (VC) for a general-purpose computational infrastructure
to realize these mechanisms. We argue that a decentralized approach is
the most promising path towards Responsible AI from both the computer science
and social science perspectives.
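The proposed infrastructure builds on the W3C Decentralized Identifier (DID) and Verifiable Credentials (VC) data models. As a rough illustration of those two building blocks (this sketch is not from the paper; every identifier, key value, and claim below is an invented placeholder), a DID document and a credential referring to it might be structured as:

```python
import json

# Minimal sketch of a DID document: it binds a user-controlled identifier
# to public keys used for authentication. "did:example:alice" and the key
# value are illustrative placeholders, not real identifiers.
did_document = {
    "@context": "https://www.w3.org/ns/did/v1",
    "id": "did:example:alice",
    "verificationMethod": [{
        "id": "did:example:alice#key-1",
        "type": "Ed25519VerificationKey2020",
        "controller": "did:example:alice",
        "publicKeyMultibase": "zPLACEHOLDER",  # stand-in for a real key
    }],
    "authentication": ["did:example:alice#key-1"],
}

# Minimal sketch of a Verifiable Credential: a claim about a subject,
# issued by a party identified by its own DID. A verifier resolves the
# DIDs to obtain keys and check the attached proof.
credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential"],
    "issuer": "did:example:university",
    "credentialSubject": {
        "id": did_document["id"],   # the subject is Alice's DID
        "degree": "BSc",            # illustrative claim
    },
    # In practice a cryptographic proof (e.g., a data-integrity
    # signature over the credential) is attached here; omitted in
    # this sketch.
}

print(json.dumps(credential["credentialSubject"], indent=2))
```

The decentralization argument hinges on this separation of roles: the subject controls the identifier, the issuer signs claims about it, and any verifier can check both without a central identity provider.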

Author: Wenjing Chu