Collibra powers UC Davis Health’s responsible AI adoption


Healthcare is, quite literally, a matter of life and death. At a time when the world is abuzz with AI hype and promise in (almost) equal measure, those who can most effectively master AI-based innovations will be the real industry change makers.

UC Davis Health (UCDH) adopted Collibra and built out AI Governance to improve its patient care experience. Knowing how unpredictable AI can be in its current form, the organization was careful to apply proper AI governance to mitigate risk while driving sustainable growth.

At Data Citizens 2024, the medical center’s Data Curation Manager Chris Hilscher and Data Governance Supervisor Bhupesh Sovani explained how UCDH operationalized its AI objectives via an analytics oversight committee (AOC). Combining S.M.A.R.T. and S.A.F.E. frameworks with Collibra’s model registry ensured safe, responsible AI deployment.

The UCDH journey to AI adoption

As part of the University of California (UC) system and the Sacramento region’s only academic health center, UCDH serves the broader northern California area for both tertiary and quaternary care across one hospital and approximately 15 outpatient clinics. 

In early 2019, the UC Office of the President decreed that all university health systems needed to define their data management and sharing processes — along with their artificial intelligence and machine learning approaches. Each UC center was tasked with creating a health data oversight committee (HDOC), ultimately consisting of numerous subcommittees — which, in March 2020, is exactly what UCDH did.

A year later, UCDH realized just how revolutionary AI would be to workflows and business processes. “Every single tool is going to have it; it’s going to be in every contract; it’s going to seep into all of the things we do,” Chris recalls thinking. “I think that was a correct assumption.” Out of that assumption, the health center’s AOC was born, headed up by its Chief Nursing Information Officer and Chief Research Information Officer.

Fast forward to July 2023, and the AOC published its S.M.A.R.T. and S.A.F.E. frameworks, designed to help with clinical AI evaluation and analytics oversight. This involved all the advanced analytic models that UCDH planned to deploy in clinical or clinical research environments — including AI, ML, and natural language processing.

The challenges of AI adoption

Organizations adopting AI need to do so for the right reasons. Even then, it can be a tough balancing act. 

Here’s what UCDH did to ensure it was implementing AI models effectively.

AI for the greater good

For UCDH, AI is a way of better serving the community. “UC has one of the largest patient population sets in the nation, and leveraging that for AI and ML ultimately benefits the public good,” Chris explains.

To meet its mission, UCDH must define the scope of health data. With so many types of data contained in electronic medical records (EMRs) and varied, competing interests, the task can seem daunting. 

Equally, to avoid negative ethical, regulatory, compliance-based, and reputational implications, UCDH also needed to establish stringent criteria to evaluate projects and transactions. Just because an organization can do something doesn’t mean that it should.

Ultimately, a patient-informed, justice-based model founded in a health data office can balance multiple interests while ethically and compliantly serving people.

Digging down on data to holistically govern AI 

AI governance isn’t sufficient as a standalone concept: It comes with industry- and sometimes committee-specific considerations.

Within UCDH’s HDOC is a data sharing committee, which partnered with the AOC to determine how the medical center would share data with outside parties. EMR data is very valuable — especially for training large language models (LLMs) — so organizations must ask in advance:

  • What does the contract with the third party look like? 
  • Is the third party going to use data for commercial purposes or attempt to sell it? 
  • What are the potential security vulnerabilities and exploits?

The AOC also provided the medical center’s request prioritization committee with complete data source access to improve on-premises and cloud data management best practices.

Together, these three committees carry out UCDH’s holistic approach to AI governance.

Three pathways to implementation

Whenever the AOC analyzes and evaluates AI models, it must assess requirements based on the case in question:

  • Vendors without FDA clearance require a lower level of supporting documents and evidence, as there may be no published studies or research 
  • Vendors with FDA clearance — including approvals based on 510(k) or De Novo — require a higher level of supporting documents and evidence
  • Homegrown, custom, and other academic models help developers get a better look at the inevitable black box of AI: “We know what the feature set is, how we coded it, and what the pre-training data looks like,” Chris says

The S.M.A.R.T. and S.A.F.E. frameworks

The AOC designed novel frameworks for clinical AI evaluation at UCDH to determine whether models are business-aligned. Each model must first pass the S.M.A.R.T. criteria:

  • Specific: Have the proposed AI use and implementation plan been specifically defined in relation to clinical, research, strategic, and financial objectives? The how of implementation is sometimes more important than the what in hitting these objectives.
  • Measurable: How will the impact of the proposed solution — including direct and indirect benefits and potential consequences — be measured? Is there a way to differentiate whether post-implementation outcomes are attributable to the AI solution, other associated changes in business workflows, or unrelated secular trends?
  • Aligned: Is the proposed use of AI aligned with a defined, organizational strategic objective — for example, an enterprise clinical strategic plan or Institute for Healthcare Improvement Quintuple Aim? Who else may be affected by its implementation, and has it received support from organizational stakeholders required for successful implementation? It takes a village for an AI model to succeed.
  • Realistic: What are the chances that the proposed AI solution will work as promised, and will its implementation alter clinical or operational practices? Not all training sets on which specific models are built match patient populations — how might cross-implementation affect different regions?
  • Transformative: Will the proposed use of AI have an incremental or transformative effect on how the organization or those outside the organization manage themselves, deliver care, and conduct research?

Should the model pass the S.M.A.R.T. criteria, the AOC then reviews the model based on S.A.F.E. criteria:

  • Safety and risk: including potential harm identification and mitigation, International Medical Device Regulators Forum (IMDRF) safety categorization, on- or off-label model usage, potential maintenance or improvement of the current standard of care, and acceptable level of safe implementation
  • Accuracy: including patient population deployment, training and testing criteria, model assessment metrics, calibration assessment and acceptability, performance versus existing models, and accuracy relative to risk degree
  • Fairness and bias: including evaluation in vulnerable subgroups, use in model accuracy assessment and calibration, and reasonable mitigation of discovered unfair performance 
  • Evidence: including level of peer-reviewed study model performance evaluation, FDA clearance (via what mechanism, e.g. 510(k) or De Novo), whether post-marketing real-world studies substantiate or refute initial claims to the FDA, and if overall evidence assessment supports institutional use of the model

If the model passes the S.A.F.E. criteria, the AOC then renders an SBAR decision — based on the summary, benefits, assessment, and recommendations of the model.
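The gated flow described above — S.M.A.R.T. first, then S.A.F.E., then an SBAR decision — can be sketched in code. This is a minimal illustration only: the criterion names come from the article, but the `ModelReview` structure, `aoc_review` function, and pass/fail booleans are invented placeholders, not anything UCDH or Collibra actually runs.

```python
from dataclasses import dataclass

# Criterion names from the article's S.M.A.R.T. and S.A.F.E. frameworks.
SMART = ["specific", "measurable", "aligned", "realistic", "transformative"]
SAFE = ["safety_and_risk", "accuracy", "fairness_and_bias", "evidence"]

@dataclass
class ModelReview:
    name: str
    criteria: dict  # criterion name -> bool, filled in by reviewers (hypothetical)

    def passes(self, checklist):
        # A gate is cleared only when every criterion in it is satisfied.
        return all(self.criteria.get(c, False) for c in checklist)

def aoc_review(review: ModelReview) -> str:
    # S.A.F.E. review begins only after all S.M.A.R.T. criteria pass;
    # only a model that clears both gates proceeds to an SBAR decision.
    if not review.passes(SMART):
        return "rejected at S.M.A.R.T. stage"
    if not review.passes(SAFE):
        return "rejected at S.A.F.E. stage"
    return "proceed to SBAR decision"
```

For example, a model that satisfies every S.M.A.R.T. criterion but lacks `evidence` would stop at the S.A.F.E. stage and never reach an SBAR decision.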

This stringent process is what enables UCDH’s AOC to balance safe and responsible AI adoption with innovation.

Collibra helps mitigate risk and drive growth with AI governance 

To follow its S.M.A.R.T. and S.A.F.E. frameworks, the AOC needs an AI governance platform, and that is what Collibra provides.

UCDH deploys clinical models tracked in its Health Analytics Core (HAC) Model Registry, built with Collibra. Collibra created a custom asset type — called advanced analytics model — with attributes aligned to UCDH’s S.M.A.R.T. and S.A.F.E. frameworks. Collibra also created relations to track model versions and custom roles, such as clinical champion and model developer, ultimately cataloging over 60 different models. Additional features include:

  • AI model statuses, such as evaluation and deployment
  • Access control for more granular domain control across communities
  • Specific, customized views for asset navigation
  • Model search for related, captured terms 
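To make the registry design concrete, here is an illustrative sketch of the kind of metadata the custom advanced analytics model asset type captures: a lifecycle status, the clinical champion and model developer roles, and a relation linking a model to its previous version. This is not the Collibra API; every class, field, and example name below is hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AdvancedAnalyticsModel:
    # Hypothetical stand-in for the custom asset type described above.
    name: str
    status: str                # lifecycle status, e.g. "evaluation" or "deployment"
    clinical_champion: str     # custom role noted in the article
    model_developer: str       # custom role noted in the article
    previous_version: Optional["AdvancedAnalyticsModel"] = None  # version relation

    def version_history(self):
        """Walk the version relation back to the first registered entry."""
        entry, names = self, []
        while entry is not None:
            names.append(entry.name)
            entry = entry.previous_version
        return names
```

Modeling the version link as a relation between assets, rather than a free-text attribute, is what lets a catalog answer questions like "which versions of this model have ever been deployed?" directly from the registry.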

Don’t get left behind: Adopt AI safely and responsibly

“Embracing AI is incredibly important today,” Chris emphasizes. “It’s the future, and if you’re not going to be involved in it, then you’re probably going to get left behind.”

But healthcare organizations can’t just take a leap of faith: There are inherent risks involved, and the right frameworks and processes are needed to assess and effectively prepare for safe, responsible AI integration.

Choosing AI-driven tools that pass rigorous standards like S.M.A.R.T. and S.A.F.E. increases the chance of successful AI journeys — even if a totally seamless implementation is near impossible. 

It’s also important to remember that AI is not just artificial intelligence — it’s also augmented intelligence. AI tools support clinical decisions and reduce administrative burdens. Used correctly, AI ultimately supports and promotes better patient care.

This article is based on Collibra and UC Davis Health’s discussion at the Data Citizens 2024 conference in Orlando, FL, bringing together the world’s most innovative community of data leaders to experience breakthrough solutions. Collibra puts reliable, high-quality data in the hands of healthcare data citizens.

Want to hear more from UC Davis Health?

Check out their session on-demand


