Providing a high level of autonomy for a human-machine team requires assumptions that address behavior and mutual trust. The performance of a human-machine team is maximized when the partnership provides mutual benefits that satisfy design rationales, the balance of control, and the nature of autonomy. The distinctly different characteristics of humans and machines are likely why they have the potential to work well together, overcoming each other's weaknesses through cooperation, synergy, and interdependence to form a "collective intelligence." Trust is bidirectional: humans need to trust AI technology, but future AI technology may also need to trust humans.

Putting AI in the Critical Loop: Assured Trust and Autonomy in Human-Machine Teams focuses on human-machine trust and "assured" performance and operation in order to realize the potential of autonomy. This book takes on the primary challenges of bidirectional trust and performance in autonomous systems, providing readers with a review of the latest literature, the science of autonomy, and a clear path toward the autonomy of human-machine teams and systems. Throughout the book, the intersecting themes of collective intelligence, bidirectional trust, and continual assurance lay the groundwork for readers not only to bridge knowledge gaps but also to advance the science toward better solutions.
- Assesses the latest research advances, engineering challenges, and the theoretical gaps surrounding the question of autonomy
- Reviews the challenges of autonomy (e.g., trust, ethics, legalities), including gaps in scientific knowledge
- Offers a path forward to solutions
- Investigates the value of human trust in human-machine teams (HMTs), as well as the bidirectionality of trust, exploring how machines learn to trust their human teammates