How to Avoid AI Disasters
Why NIST’s AI Risk Management Framework is Your Lifeboat
When you get up in the morning and commute to work, you potentially put yourself in a life-threatening situation. On the way to the office, you could be robbed or killed in a car accident.
Why would you expose yourself to such a risk?
Most likely because you internally ran a risk-benefit analysis and concluded that the benefit outweighs the limited risk many times over. While in many areas of our lives this risk-benefit analysis is a simple task, it becomes more challenging when we explore new territory, such as Artificial Intelligence (AI).
As AI continues to evolve and integrate into various facets of our lives, the need for effective risk management frameworks becomes increasingly apparent. While the potential benefits are vast, the deployment of AI technologies introduces unique risks that can have profound implications for individuals, communities, organizations and the broader society. Recognizing the need for a comprehensive approach to managing these risks, the National Institute of Standards and Technology (NIST) has introduced the AI Risk Management Framework (AI RMF). This framework aims to guide organizations in navigating the complexities of AI risk and fostering responsible AI development and deployment.
Defining AI Systems and Risks
The NIST AI Risk Management Framework defines AI systems as engineered or machine-based systems capable of generating outputs such as predictions, recommendations, or decisions that influence real or virtual environments. These systems operate with varying levels of autonomy, reflecting the dynamic nature of AI technologies. Unlike traditional software or information-based systems, AI systems present unique challenges due to their ability to adapt and learn from data, the complexity of their deployment contexts, and their socio-technical nature.
AI risks can manifest in diverse ways, spanning long- or short-term, high- or low-probability, and systemic or localized categories. The interconnectedness of technical aspects and societal factors makes AI risk management a complex and challenging endeavor. Without proper controls, AI systems can amplify inequities, perpetuate biases, and lead to undesirable outcomes. However, with responsible AI practices and effective risk management, organizations can mitigate these risks and contribute to the development of equitable and accountable AI technologies.
Responsible AI Practices and Core Concepts
The NIST AI Risk Management Framework aligns with responsible AI practices, emphasizing human centricity, social responsibility, and sustainability. Human centricity ensures that AI development and deployment prioritize the well-being and rights of individuals. Social responsibility requires organizations to consider the broader impact of their AI decisions on society and the environment. Sustainability, in the context of AI, emphasizes the need to meet present needs without compromising the ability of future generations to meet their own needs.
The framework encourages organizations to adhere to “professional responsibility,” defined by ISO as an approach that recognizes the unique influence professionals have on people, society, and the future of AI. By incorporating these core concepts, responsible AI practices aim to create technology that is not only advanced but also equitable, transparent, and accountable.
The National Artificial Intelligence Initiative Act of 2020 and the AI RMF
Enacted in response to the growing influence of AI, the National Artificial Intelligence Initiative Act of 2020 directed the development of the AI Risk Management Framework. The framework serves as a resource for organizations involved in designing, developing, deploying, or using AI systems, offering guidance to manage the myriad risks associated with AI technologies. Voluntary, rights-preserving, and non-sector-specific, the framework provides flexibility for organizations of all sizes and sectors to implement its approaches.
The AI RMF is designed to equip organizations and individuals, referred to as AI actors, with approaches that enhance the trustworthiness of AI systems. It emphasizes responsible design, development, deployment, and use of AI systems to foster societal benefits while safeguarding against potential harms. The framework is intended to be practical, adaptable to the evolving AI landscape, and operationalized by organizations of varying capacities.
Two-Part Framework Structure
The AI RMF consists of two parts, each serving a distinct purpose in guiding organizations through the process of managing AI risks.
Part 1: Framing Risks and Intended Audience
Part 1 provides an overview of how organizations can frame AI-related risks and defines the intended audience for the framework. It emphasizes the importance of understanding AI risks and trustworthiness and analyzes the characteristics of trustworthy AI systems: validity and reliability, safety, security and resilience, accountability and transparency, explainability and interpretability, privacy enhancement, and fairness with harmful bias managed.
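Purely as a mnemonic (the AI RMF itself defines no code artifacts or data model), the seven characteristics could be captured as a simple checklist; the enum and the assessment structure below are hypothetical illustrations, not part of the framework:

```python
from enum import Enum

class Trustworthiness(Enum):
    """The seven characteristics of trustworthy AI named in Part 1 of the AI RMF."""
    VALID_AND_RELIABLE = "valid and reliable"
    SAFE = "safe"
    SECURE_AND_RESILIENT = "secure and resilient"
    ACCOUNTABLE_AND_TRANSPARENT = "accountable and transparent"
    EXPLAINABLE_AND_INTERPRETABLE = "explainable and interpretable"
    PRIVACY_ENHANCED = "privacy-enhanced"
    FAIR_WITH_BIAS_MANAGED = "fair, with harmful bias managed"

# Hypothetical assessment: which characteristics a given system has evidence for.
assessment = {c: False for c in Trustworthiness}
assessment[Trustworthiness.SAFE] = True  # e.g. after a completed safety review

gaps = [c.value for c, ok in assessment.items() if not ok]
print(f"{len(gaps)} characteristics still lack evidence")  # → 6 characteristics still lack evidence
```

A checklist like this makes gaps visible early, which is the spirit of Part 1: trustworthiness is assessed across all seven dimensions, not just the ones that are easy to measure.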
Part 2: Core Functions — GOVERN, MAP, MEASURE, and MANAGE
Part 2 constitutes the core of the framework, presenting four specific functions — GOVERN, MAP, MEASURE, and MANAGE — to help organizations address AI risks in practice. While GOVERN applies across all stages of AI risk management, the other three functions are tailored to AI system-specific contexts and specific stages of the AI lifecycle.
- GOVERN: This function applies to all stages of the AI risk management process and involves establishing governance structures and policies.
- MAP: This function focuses on mapping AI risks, including identifying and assessing risks, and creating risk maps to inform decision-making.
- MEASURE: This function involves developing metrics to quantify and monitor AI risks and assessing the effectiveness of risk mitigation measures.
- MANAGE: The MANAGE function addresses the implementation of risk mitigation measures, including planning, executing, and continually monitoring and adjusting these measures.
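To make the interplay of the four functions concrete, here is an illustrative sketch of a minimal risk register. The AI RMF prescribes no data model, scoring formula, or code, so every class, field, and threshold below is a hypothetical assumption; the likelihood-times-impact score is simply one common risk metric:

```python
from dataclasses import dataclass, field

@dataclass
class AIRisk:
    """A single identified AI risk (hypothetical structure, not defined by NIST)."""
    description: str
    likelihood: float   # 0.0 (rare) .. 1.0 (near-certain)
    impact: float       # 0.0 (negligible) .. 1.0 (severe)
    mitigation: str = ""

    @property
    def score(self) -> float:
        # A simple, widely used risk metric: likelihood x impact.
        return self.likelihood * self.impact

@dataclass
class RiskRegister:
    """GOVERN sets policy; MAP identifies risks; MEASURE scores them; MANAGE mitigates."""
    tolerance: float = 0.25                         # GOVERN: organizational risk tolerance
    risks: list[AIRisk] = field(default_factory=list)

    def map_risk(self, risk: AIRisk) -> None:       # MAP: record an identified risk
        self.risks.append(risk)

    def measure(self) -> list[AIRisk]:              # MEASURE: which risks exceed tolerance?
        return [r for r in self.risks if r.score > self.tolerance]

    def manage(self, risk: AIRisk, mitigation: str, new_likelihood: float) -> None:
        # MANAGE: apply a mitigation and record its effect on likelihood.
        risk.mitigation = mitigation
        risk.likelihood = new_likelihood

register = RiskRegister(tolerance=0.25)
bias = AIRisk("Model amplifies hiring bias", likelihood=0.6, impact=0.9)
register.map_risk(bias)                             # MAP
print(len(register.measure()))                      # MEASURE: score 0.54 exceeds tolerance → 1
register.manage(bias, "Add fairness audit and retraining", new_likelihood=0.2)
print(len(register.measure()))                      # score now 0.18, within tolerance → 0
```

The point of the sketch is the loop, not the arithmetic: GOVERN fixes the policy once, while MAP, MEASURE, and MANAGE repeat continually over the AI system's lifecycle.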
The framework is designed to be practical, encouraging organizations to operationalize its principles and adapt them to the dynamic landscape of AI technologies. By applying the AI RMF, organizations can enhance the trustworthiness of their AI systems, contribute to responsible AI practices, and build public trust.
Future Development
As AI technologies continue to advance and integrate into various aspects of our lives, effective risk management becomes imperative. The NIST AI Risk Management Framework provides a comprehensive and flexible guide for organizations to navigate the unique challenges posed by AI. By embracing responsible AI practices and integrating risk management principles, organizations can not only harness the transformative power of AI but also contribute to a future where AI technologies are equitable, accountable, and trusted by society.
Recognizing the dynamic nature of AI technologies and the evolving standards landscape, the NIST AI Risk Management Framework is designed for continuous improvement. The framework, along with supporting resources, will be updated, expanded, and improved based on emerging technology trends, international standards, and feedback from the AI community. NIST remains committed to aligning the AI RMF with international standards, guidelines, and practices, ensuring its relevance and effectiveness in the rapidly advancing field of AI.
About Tobias Faiss
Tobias is a Senior Engineering Manager focusing on applied leadership, analytics, and cyber resilience. He has a track record of 18+ years managing software projects, services, and teams in the United States, EMEA, and Asia-Pacific, and currently leads several multinational teams in Germany, India, Singapore, and Vietnam. He is also the founder of the delta2 edventures platform, whose mission is to educate students and IT professionals transitioning into IT management roles.
Tobias’ latest book is ‘The Art of IT-Management: How to Successfully Lead Your Company Into the Digital Future’. You can also contact him via his personal website, tobiasfaiss.com.