Unleashing incredible Data Quality & Observability innovations with Kirk Haslbeck

This week at Data Citizens ’22, we are unleashing incredible innovations that make it easier for customers to get better quality data – and to do it at scale. I talk to customers every day; I know their pain points, and I know how excited they are about the future because of our new Collibra solutions.

I was on the mainstage yesterday to help unveil breakthrough products – and I’m finding that everyone wants to stop and talk more about them to learn as much as they can.

Collibra Data Quality & Observability 

We just announced that Collibra Data Quality & Observability is now fully cloud-enabled. You will be able to bring scalability, agility and security to your data quality operations across clouds.

Now you can reduce your infrastructure costs, get to value faster with automatic feature upgrades, and take advantage of native integration with Collibra Data Intelligence Cloud and external cloud applications.

Data Quality Pushdown for Snowflake 

We also just announced Data Quality Pushdown for Snowflake, which is in beta. This is another cloud option for running your data quality jobs. 

In pushdown mode, the job runs entirely in the Snowflake data warehouse. We’ll also be announcing pushdown support for other cloud databases in the near future, so stay tuned.

I’m really excited about this because by running a Pushdown operation, you can lower your costs by eliminating egress charges, get to value faster by removing the dependency on Spark compute, and improve agility by scaling on demand.
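To make the pushdown idea concrete: instead of extracting rows into an external Spark cluster, each data quality rule compiles to SQL that runs inside the warehouse, so only small aggregate results ever leave it. Collibra’s actual pushdown implementation is proprietary; the sketch below is a generic illustration of that pattern, using Python’s built-in sqlite3 as a stand-in for the warehouse (in practice this would be a Snowflake connection, and the table and rule names here are hypothetical).

```python
import sqlite3

# Stand-in "warehouse" table; in a real pushdown setup this would be
# a Snowflake connection, not an in-memory SQLite database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, amount REAL, email TEXT)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, 19.99, "a@example.com"),
     (2, None, "b@example.com"),
     (3, 42.50, None),
     (3, 42.50, None)],  # duplicate id on purpose
)

# Pushdown-style checks: each rule is expressed as SQL and executed
# inside the database, so only tiny aggregates cross the network.
checks = {
    "null_amounts": "SELECT COUNT(*) FROM orders WHERE amount IS NULL",
    "null_emails":  "SELECT COUNT(*) FROM orders WHERE email IS NULL",
    "duplicate_ids": (
        "SELECT COUNT(*) FROM "
        "(SELECT id FROM orders GROUP BY id HAVING COUNT(*) > 1)"
    ),
}

results = {name: conn.execute(sql).fetchone()[0] for name, sql in checks.items()}
print(results)  # {'null_amounts': 1, 'null_emails': 2, 'duplicate_ids': 1}
```

Because the checks return only counts, there are no bulk egress charges and no separate compute cluster to keep running between jobs.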

The Stakes are High

We know that most companies lose about $15 million a year due to poor-quality data, according to Gartner. And many customers still rely on manual inputs instead of proactive, automated, self-service data quality tools. They are definitely at a disadvantage.

At Collibra, we have been focused on automatic, adaptive rules in our core data quality product. We know that writing rules is expensive, and maintaining them can be overwhelming.

We support customers with built-in data observability and machine learning-generated rules. These adaptive rules learn and evolve over time so your teams can easily identify anomalies. Of course, we also support traditional rule writing: you can import existing rules or create new ones from scratch.
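The appeal of adaptive rules is that thresholds are learned from the data rather than hand-maintained. Collibra’s machine-learning models are not public, but the general technique can be sketched with a deliberately simple stand-in: learn acceptance bounds from recent observations (mean ± k·stddev) and flag values that fall outside them. Everything below (function name, the row-count feed) is a hypothetical illustration, not Collibra’s algorithm.

```python
from statistics import mean, stdev

def adaptive_bounds(history, k=3.0):
    """Learn acceptance bounds from recent observations.

    A toy stand-in for an adaptive rule: the bounds are mean +/- k
    standard deviations of the history, so they shift automatically
    as new "normal" values arrive - no hand-tuned thresholds.
    """
    mu, sigma = mean(history), stdev(history)
    return mu - k * sigma, mu + k * sigma

# Daily row counts for a (hypothetical) feed; the rule adapts as volume drifts.
row_counts = [1000, 1020, 980, 1010, 995, 1005]
low, high = adaptive_bounds(row_counts)

today = 450  # a sudden drop in volume
print(low <= today <= high)  # False -> flag as an anomaly
```

A production system would use far richer models and seasonality handling, but the shape is the same: the rule is re-derived from the data, so it evolves as the data does.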

We also see companies loading so much data that they often end up analyzing data that hasn’t been updated and isn’t fresh – old data. That’s why our team has been hard at work making the great on-premises data quality and observability capabilities available to customers more easily and at scale.

I’m so proud of this outstanding work to empower users with a simple yet effective UI for setting up data quality jobs. We enhanced the usability of our step-by-step wizard so that basic users can get going quickly, while advanced users still have granular configuration options.

You can learn more about our new wave of products and even ask questions. Join us for our deep-dive webinar on Nov. 16!

Related resources

- E-book: Predictive data quality and observability
- Video/Webinar: Measuring Data Quality return on investment
- Blog: The 6 dimensions of data quality
