Last week I attended the launch of the AI For Belgium Coalition, a community-led initiative to help Belgian people and organizations capture the opportunities of Artificial Intelligence (AI) while managing the ongoing transition responsibly. The event was also attended by Andrus Ansip, Vice-President of the European Commission; Alexander De Croo, Deputy Prime Minister; and Caroline Pauwels, Rector of the Vrije Universiteit Brussel, along with a wide representation of the press, academics, professionals, and investors.
AI For Belgium has the ambition to position Belgium in the European AI landscape. The coalition consists of prominent Belgian figures in the data and AI field, including Pattie Maes of MIT, Marc Raisière, CEO of Belfius, Thierry Geerts, CEO of Google Belgium, and many others. I’m honored to be part of the coalition as Collibra’s Chief Science Officer and co-founder. Just a year ago we were invited to offer our advice to a similar initiative with the Norwegian government, organized by Columbia University. I think all Collibrians can be proud that our voice is heard at the policy level. We have seen similar initiatives emerging in many other countries. On February 11th, 2019, President Donald J. Trump signed an executive order titled “Maintaining American Leadership in Artificial Intelligence.”
This order stresses the paramount importance of AI for the economic and national security of the United States.
The Coalition has published a document with five key recommendations:
- Set up a new learning deal – Technology and AI are transforming society and our job market. We currently lack both the capacity and the tools to support this transition, and our schools are not preparing future generations for the 21st century. The Coalition therefore proposes a new learning deal: a universal skills-building program for adults, and more digital – as well as human – skills for our youth.
- Develop a responsible data strategy – Trust is the cornerstone of any transformation. We believe in the need for a robust and up-to-date legal framework, ethical principles, and more transparency. Data is the energy that will fuel the fourth industrial revolution, but data often remains inaccessible. We need to build a data ecosystem that facilitates more responsible data-sharing through reinforced open-data policies, more collaboration, and a platform with well-structured tools and approaches.
- Support private sector AI adoption – It can be hard for companies, particularly SMEs, to start working with AI. It can be perceived as complex; companies might lack the internal resources and the iterative approach can be too costly. The Coalition proposes to demystify AI through a lighthouse approach (training programs, large-scale events, and social-impact projects). We also believe in more collaboration and accessibility to AI through a national AI hub, and the need to facilitate experimentation.
- Innovate and radiate – Belgium has world-class researchers, but our research is not at scale. Also, we have yet to develop, attract, and retain enough AI talent. It is also hard for innovative start-up companies to grow beyond the early stages the way, for example, Collibra did. We propose to position Belgium as Europe’s AI lab through sandboxes and large-scale collaboration within academia, leveraging Belgian transposition of the GDPR. Next, we recommend creating more AI-related training programs, more focus on practical applications and more selective migration. We also suggest supporting the growth of our AI companies through an investment fund and by differentiating our expertise.
- Improve public service and boost the ecosystem – Too few public organizations are currently experimenting with AI. The Coalition recommends that public institutions rethink their own roles and evolve toward a platform approach. We need to give public institutions the tools to experiment, such as a rolling fund and more innovation-friendly procurement. We also recommend creating a national Chief Data/Digital Officer role to organize internal transformations and launch large-scale transversal projects. The Coalition also lists a few principles to ensure a sustainable implementation: ensuring continued trust from the public, a European approach, collaboration between all stakeholders, a grass-roots/community-led approach, focus on specific areas (such as healthcare/life sciences) and, lastly, daring to be ambitious and audacious.
During the event, Caroline Pauwels and I unveiled our plans for a joint Collibra-VUB multidisciplinary AI research center. Collibra was founded in 2008 out of the Vrije Universiteit Brussel, which has a long track record in AI research: in 1983, Luc Steels founded there the first interdisciplinary AI research team on the European continent. The team, now led by Professor Ann Nowé, has always been at the forefront of the debate on the ethical implications of science and technology.
The collaborative research center will focus on both long-term and applied research in the domain of multi-agent AI with human-like computing as the ultimate goal, and particular emphasis on:
(a) user-centric and responsible AI,
(b) hybrid AI and collaborative cognitive robotics, and
(c) conscience, safety, reflection, and anticipatory reasoning.
This multidisciplinary research group will also aim to establish a broader policy think tank that analyzes the societal and ethical implications of these technologies.
Big data and AI have pushed companies further into digital transformation – creating entirely new classes of goods and services, disrupting go-to-market strategies, and leading to more sustained customer relationships. Yet digital transformations carry understated risks, such as data spills, data-exploration costs, and blind trust in unregulated, incontestable, and opaque AI. Where data records human behavior, it has been perceived as a threat to fundamental values, including autonomy, equality, democracy, and, most importantly, privacy.

In a study on Public Perceptions of Privacy and Security in the Post-Snowden Era, 91% of the adults surveyed “agree” or “strongly agree” that consumers have lost control over how personal information is collected and used by companies. Some 70% of social media users say that they are at least somewhat concerned about the government accessing some of the information they share on social networking sites without their knowledge. In another survey of US consumers, when presented with a list of popular AI services (e.g., home assistance, financial planning, medical diagnosis, and hiring), 41.5% of respondents said they didn’t trust any of these services. And when yet another study asked US consumers which feelings best describe their emotions when thinking about AI, the most common responses (each chosen by more than 40%) were “concerned,” “skeptical,” and “unsure.”
As organizations work to unlock competitive advantage and maximize value from big data, it is vital that our research guides data leaders toward the right balance between value creation and risk exposure. These responsible innovators need to consider not only the cataloging and capture of data, but also how data is used and how it is expected to be used – whether in traditional data warehousing or in Jupyter notebooks. They must rethink assumptions, processes, and approaches to governing and stewarding that data. And to succeed, they must deliver credible, coherent, and trustworthy algorithms and data-access clearing mechanisms for everyone who can use them. As data becomes the most valuable resource, data governance delivers a certification that is imperative for businesses to trust one another, and it increasingly sets a precondition for citizens to engage in a trustworthy and enduring relationship with a company or government.
This research should ultimately find its way into so-called smart data catalogs that self-organize based on usage and other inherent properties. In a similar way, Google devised PageRank and AdWords to index and rank pages on the Web by exploiting inherent properties of those pages. Furthermore, research in responsible AI will enable data sets and algorithms to explain themselves when consumers exercise their privacy rights, such as the right to explanation stipulated by the GDPR and the CCPA.
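To make the PageRank analogy concrete, here is a minimal, hypothetical sketch of how a catalog might rank data sets by usage. All data set names and the usage graph below are invented for illustration; an edge from a consumer (a report or model) to a data set means “reads from,” so heavily reused upstream data sets accumulate rank, just as heavily linked web pages do.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Simple power-iteration PageRank over a dict {node: [outgoing links]}."""
    nodes = set(links) | {t for targets in links.values() for t in targets}
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iterations):
        # Every node starts each round with the "random jump" base rank.
        new = {v: (1.0 - damping) / n for v in nodes}
        for src, targets in links.items():
            if targets:
                # A node passes its damped rank equally to everything it links to.
                share = damping * rank[src] / len(targets)
                for t in targets:
                    new[t] += share
            else:
                # Dangling node: spread its rank evenly over all nodes.
                for v in nodes:
                    new[v] += damping * rank[src] / n
        rank = new
    return rank


# Invented usage graph: consumers point at the data sets they read from.
usage = {
    "churn_report": ["customers_master", "transactions"],
    "marketing_dashboard": ["customers_master"],
    "fraud_model": ["transactions", "customers_master"],
    "customers_master": [],
    "transactions": [],
}

ranks = pagerank(usage)
# "customers_master" ends up ranked highest, since three assets depend on it.
```

This is only a sketch of the idea: a real smart catalog would mine such a graph from query logs and lineage metadata rather than declare it by hand.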
I’m very excited to be a part of this coalition and look forward to working in close partnership with VUB to advance these research initiatives and AI For Belgium’s recommendations.