April 2021 marked a significant milestone in the history of AI: the European Commission published the first-ever proposal for regulating AI. Given the rapid proliferation of AI in technology used by businesses and consumers, this is a prominent but not unexpected move.
Status of the current proposal
After several years of preparation, the European Commission published its proposal for an AI regulation, laying down potential rules on artificial intelligence. The proposal is part of the European Commission’s strategy for bringing greater transparency and control to data, and its timing follows the Data Governance Act proposal presented last year.
The proposed AI regulation applies to:
- Providers that place AI systems on the market or put them into service in the EU (European Union), regardless of whether those providers are established within or outside the EU
- Users of the AI systems in the EU
- Providers and users of AI systems located in a non-EU country, where the output produced by the system is used in the EU
Discussions and debates are expected across the world before the proposal is adopted as a law, perhaps in the next two years.
Key compliance obligations
The proposed AI regulation focuses on compliance obligations for high-risk AI systems, namely those that:
1) Are intended to be used as safety components of products subject to third-party ex-ante conformity assessment
2) Have fundamental rights implications
The primary objective of the proposal is to consider the risks of using AI and allow humans to understand and control it. Some of the key compliance requirements are:
- Data and data governance (Article 10): data used to train AI systems should follow data governance best practices that include considerations in design choices, data collection, data preparation, dataset biases, and data quality.
- Record-keeping (Article 12): AI systems need to automatically record events, or logs, to enable an audit trail of AI system operations. Logging capabilities must include, for example, the ability to record the time of each operation and a reference to the database used to review data inputs.
- Transparency (Article 13): AI systems need to be transparent about their intended purpose, their level of accuracy as validated through testing, and their expected performance, enabling users to interpret and use the systems appropriately.
- Human oversight (Article 14): AI systems must include an interface mechanism for users to monitor the AI system’s operations, interpret and override its output, and shut down the system if necessary.
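To make the record-keeping and human-oversight requirements above more concrete, the sketch below shows one way an engineering team might wrap a model with an audit trail: each prediction is logged with a timestamp and a reference ID pointing back to a review database, and a human reviewer can log an override of the system's output. This is a hypothetical Python illustration, not a design prescribed by the regulation; the class, field, and parameter names are all assumptions.

```python
import json
import logging
from datetime import datetime, timezone

class AuditedModel:
    """Hypothetical wrapper sketching Article 12-style logging and
    Article 14-style human override. All names are illustrative."""

    def __init__(self, model, log_path="ai_audit.log"):
        self.model = model
        self.logger = logging.getLogger("ai_audit")
        handler = logging.FileHandler(log_path)
        handler.setFormatter(logging.Formatter("%(message)s"))
        self.logger.addHandler(handler)
        self.logger.setLevel(logging.INFO)

    def predict(self, input_record, reference_id=None):
        output = self.model(input_record)
        # Record the time of operation and a reference to the input data,
        # so the decision can later be traced back to a review database.
        self.logger.info(json.dumps({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "reference_id": reference_id,
            "input": input_record,
            "output": output,
            "overridden": False,
        }))
        return output

    def override(self, reference_id, corrected_output, reviewer):
        # Human oversight: record a manual correction of the system's output.
        self.logger.info(json.dumps({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "reference_id": reference_id,
            "output": corrected_output,
            "overridden": True,
            "reviewer": reviewer,
        }))
        return corrected_output
```

In practice the log would go to tamper-evident storage rather than a local file, but the shape of the record — time, input reference, output, and whether a human intervened — is what enables the audit trail the proposal asks for.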
Non-compliance can result in administrative fines of between 2% and 6% of the organization’s total annual worldwide turnover.
Implications for governance and privacy teams
For organizations leveraging AI and ML technologies to maximize the value of data, the proposed AI regulation has broader implications. They need to evaluate their current processes for managing data accuracy, bias, transparency, and AI governance. Complying with the proposal requires a strategic and thoughtful approach to managing these obligations.
The principles for building a data intelligence foundation to support governance, data quality, and data privacy initiatives align with the proposed AI regulation.
A data intelligence foundation provides:
- Understanding of what data exists, which helps with transparency
- Context on how data is being used, which enables transparency and human oversight
- Trust in the completeness and meaning of data, which ensures transparency
- Clear ownership of data, which helps with human oversight
- A clear audit trail of how data is used and shared
- Data quality processes to identify anomalies in the data
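As a minimal sketch of the last point, a data quality check might flag anomalies such as missing values or out-of-range entries before data reaches a training pipeline, supporting the data governance considerations in Article 10. The function below is purely illustrative; the field names and bounds are assumptions, not requirements drawn from the proposal.

```python
def find_anomalies(records, required_fields, bounds):
    """Flag records with missing required fields or out-of-range values.

    records: list of dicts; required_fields: field names that must be
    present; bounds: {field: (low, high)} inclusive ranges. All names
    are illustrative assumptions for this sketch.
    """
    anomalies = []
    for i, rec in enumerate(records):
        # A missing field makes the record unusable for training.
        missing = [f for f in required_fields if rec.get(f) is None]
        # An out-of-range value suggests a data entry or pipeline error.
        out_of_range = [
            f for f, (low, high) in bounds.items()
            if rec.get(f) is not None and not (low <= rec[f] <= high)
        ]
        if missing or out_of_range:
            anomalies.append({"index": i, "missing": missing,
                              "out_of_range": out_of_range})
    return anomalies
```

A report like this gives governance teams a concrete artifact: which records were rejected, and why, before the data influenced a model.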
The proposed AI regulation will have a profound impact on the development and use of AI, similar to the impact GDPR had on the use of personal data. A robust data intelligence platform can help organizations prepare for future requirements regarding the AI systems they will increasingly depend on.