As artificial intelligence (AI) becomes more ubiquitous across civilian, defense and intelligence agencies, the need to mitigate risk while continuing to find new and innovative use cases that serve citizens and warfighters is critical. In fact, White House memorandum M-25-21, “Accelerating Federal Use of AI through Innovation, Governance, and Public Trust,” clearly directs agencies “to provide improved services to the public, while maintaining strong safeguards for civil rights, civil liberties, and privacy.” Even with implementation guidance, many organizations may be struggling with where to start.
Risk can be managed in a multitude of ways. Fortunately, NIST, in conjunction with both private and public organizations, has helped take the guesswork out of where to start. The NIST AI Risk Management Framework (RMF) and associated playbook lay out a comprehensive methodology to reduce AI risks. In the AI RMF core (arguably the most important aspect of the framework) sit four functions that should work in a continuous cycle of advancement and improvement: Govern, Map, Measure and Manage.
Govern, the centerpiece of the RMF, encourages agencies to build a culture of risk management and ensure that it is truly a cross-agency effort. Map brings context to AI risks and opens lines of communication between stakeholder groups to understand and identify potential risks. Measure has the agency actively assessing and analyzing risks and their impacts to ensure the proper data is collected for a complete picture. Finally, Manage takes the output of the three prior functions to act on risks that have become, or are likely to become, too great and must be mitigated or eliminated.
Collibra AI Governance and the NIST RMF
Collibra AI Governance has been deployed by a variety of agencies to help quickly and systematically catalog and govern both existing and new AI use cases, helping to ensure safe and reliable AI. And to get agencies up and running even faster, Collibra now offers a complete NIST RMF assessment out of the box. This is an exact 1:1 mapping of the NIST RMF that agencies can start using immediately or, if desired, customize to their specific needs. The NIST RMF is a valuable resource for any organization that is developing, deploying or using AI systems.
However, AI doesn’t start and stop with a risk assessment — it starts with data. Data feeds AI, and no matter how good your AI framework is, without high-quality data feeding that AI, you’ll never be able to deliver innovative or reliable AI use cases. AI governance should be a natural extension of your data governance efforts: the same rigorous standards and controls put in place for data should be applied to AI. Collibra AI Governance was built with that in mind and works seamlessly with the functions of the NIST RMF to help ensure reliable and trustworthy AI.
- Govern: Collibra AI Governance was built with the sole purpose of providing organizations a systematic approach to not only mitigate the risks of AI, but also use governance as an enabler of innovation. And Collibra helps you govern not just your AI, but also data from across your organization.
- Map: AI requires broad stakeholder input for success. Collibra AI Governance was developed with both technical and business users in mind, helping both sets of users easily provide context to use cases and their known or potential risks. Collibra AI Governance also comes with full lineage capabilities to visualize the data being fed into AI as well as the AI outputs.
- Measure: Measuring critical metrics, including risks, should be a continuous, ongoing process. Even after AI models are deemed safe to deploy, it is imperative that they continue to be measured, as data may change over time, increasing the risk of unexpected model behavior and negative outcomes. With Collibra AI Governance, agencies can create scheduled AI use case reviews to ensure everything from outputs to legal requirements is regularly checked.
- Manage: Collibra AI Governance provides a single location to document and review all aspects of an AI use case, so agencies can make informed decisions as to whether the risks of internally developed and third-party AI are too great or do not meet agency standards. Remediation plans can also be crafted and documented when risks need to be addressed.
Collibra AI Governance, combined with the NIST AI RMF, provides agencies with the confidence needed to innovate safely with AI. To learn more about Collibra AI Governance, read this factsheet. Want to get hands-on? Take a product tour here.