Responsible AI is a guide for business leaders on developing and implementing a robust, responsible AI strategy for their organizations.
Responsible AI has rapidly become a strategic priority for leaders and organizations worldwide. Responsible AI guides readers step by step through establishing robust yet manageable ethical AI initiatives in organizations of any size, outlining the three core pillars of a responsible AI strategy: people, process and technology. It provides the insight and guidance leaders need to understand the technical and commercial potential of ethics in AI, while also covering the operations and strategy required to support implementation.
Responsible AI breaks down what it means to use ethics and values as a modern-day decision-making tool in the design and development of AI. It covers both how ethics can be used to identify risks and establish safeguards in AI development, and how ethics-by-design methods can stimulate AI innovation. It also addresses the differing considerations for large enterprises and SMEs and discusses the role of the AI ethicist. The book is supported by practical case studies from organizations such as IKEA, Nvidia, Rolls-Royce and NatWest Group.