The EU has a chance to shape how the world approaches AI and data governance. AI News spoke with Resham Kotecha, Global Head of Policy at the Open Data Institute (ODI), who said that opportunity lies in proving that protecting people’s rights and supporting innovation can go hand in hand.
The ODI’s European Data and AI Policy Manifesto sets out six principles for policymakers, calling for strong governance, inclusive ecosystems, and public participation to guide AI development.
Setting standards in AI and data
“The EU has a unique opportunity to shape a global benchmark for digital governance that puts people first,” Kotecha said. The manifesto’s first principle makes clear that innovation and competitiveness must be built on regulation that safeguards people and strengthens trust.
Common European Data Spaces and Gaia-X are early examples of how the EU is building the foundations for AI development while protecting rights. The initiatives aim to create shared infrastructure that lets governments, businesses, and researchers pool data without giving up control. If they succeed, Europe could combine large-scale data use with strong protections for privacy and security.
Privacy-enhancing technologies (PETs) are another piece of the puzzle. The tools allow organisations to analyse or share insights from sensitive datasets without exposing the raw data itself. Horizon Europe and Digital Europe already support research and deployment of PETs. What is needed now, Kotecha argued, is consistency: “Making sure PETs move out of pilots and into mainstream use.” That shift would allow firms to use more data responsibly and show citizens their rights are taken seriously.
Trust will also depend on oversight. Independent organisations, Kotecha said, provide the checks and balances needed for trustworthy AI. “They offer impartial scrutiny, build public confidence, and hold both governments and industry accountable.” The ODI’s own Data Institutions Programme offers guidance on how these bodies can be structured and supported.
Open data as the EU’s foundation for AI
The manifesto calls open data a foundation for responsible AI, but many businesses remain wary of sharing. Concerns range from commercial risks and legal uncertainty to worries about quality and format. Even when data is published, it is often unstructured or inconsistent, making it hard to use.
Kotecha argued the EU should reduce the costs organisations face in collecting, using, and sharing data for AI. “The EU should explore a range of interventions, including combining legislative frameworks, financial incentives, capacity building, and data infrastructure development,” she said. By lowering barriers, Europe could encourage private organisations to share more data responsibly, creating both public and economic benefits.
The ODI’s research shows that clear communication matters. Senior decision-makers need to see tangible business benefits of data sharing, not just broad ‘public good’ arguments. At the same time, sensitivities around commercial data need to be addressed.
Useful structures already exist – the Data Spaces Support Centre (DSSC) and the International Data Spaces Association (IDSA) are building governance and technical frameworks that make sharing safer and easier. Updates to the Data Governance Act (DGA) and GDPR are also clarifying permissions for responsible reuse.
Regulatory sandboxes can build on this foundation. By letting firms test new approaches in a controlled environment, sandboxes can demonstrate that public benefit and commercial value are not in conflict. Privacy-enhancing technologies add another layer of safety by enabling the sharing of sensitive data without exposing individuals to risk.
Building EU-wide trust and cross-border AI ecosystems
One of the biggest hurdles for Europe is making data work across member states. Legal uncertainty, diverging national standards, and inconsistent governance fragment the system.
The Data Governance Act is central to the EU’s plan to create trusted, cross-border AI ecosystems. But laws on their own will not solve the problem. “The real test will be in how consistently member states implement [the Data Governance Act], and how much support is given to organisations that want to participate,” Kotecha said. If Europe can align on standards and execution, it could strengthen its AI ecosystem and set the global standard for trustworthy cross-border data flows.
That will require more than technical fixes – building trust between governments, businesses, and civil society is just as important. For Kotecha, the solution lies in creating “an open and trustworthy data ecosystem, where collaboration helps to maximise data value while managing risks connected with cross-border sharing.”
Independence through funding and governance
Oversight of AI systems requires sustainable structures. Without long-term funding, independent organisations risk becoming project-based consultancies rather than consistent watchdogs. “Civil society and independent organisations need commitments for long-term, strategic funding streams to carry out oversight, not just project-based support,” Kotecha said.
The ODI’s Data Institutions Programme has explored governance models that keep organisations independent while enabling them to steward data responsibly. “Independence relies on more than money. It requires transparency, ethical oversight, inclusion in political decision-making, and accountability structures that keep organisations anchored in the public interest,” Kotecha said.
Embedding such principles into EU funding models would help ensure oversight bodies remain independent and effective. Strong governance should include ethical oversight, risk management, transparency, and clear roles, handled by board sub-committees on ethics, audit, and remuneration.
Making data work for startups
Access to valuable datasets is often limited to major tech firms. Smaller players struggle with the cost and complexity of acquiring high-value data. This is where initiatives like AI Factories and Data Labs come in. Designed to lower barriers, they give startups curated datasets, tools, and expertise that would otherwise be out of reach.
The model has worked before. Data Pitch, a project that paired SMEs and startups with data from large organisations, helped unlock previously closed datasets. Over three years, it supported 47 startups from 13 countries, helped create more than 100 new jobs, and generated €18 million in sales and investments.
The ODI’s OpenActive initiative showed a similar impact in the fitness and health sector, using open standards to power dozens of SME-built apps. At a European level, DSSC pilots and new sector-specific data spaces in areas like mobility and health are starting to create similar opportunities. For Kotecha, the challenge now is ensuring these schemes “genuinely lower barriers for smaller players, so they can build innovative products or services based on high-value data.”
Bringing communities into the conversation
The manifesto also stresses that the EU’s AI ecosystem will only succeed if public understanding and participation are built in. Kotecha argued that engagement cannot be top-down or tokenistic. “Participatory data initiatives empower people to play an active role in the data ecosystem,” she said.
The ODI’s 2024 report What makes participatory data initiatives successful? maps out how communities can be involved directly in data collection, sharing, and governance. It found that local participation strengthens ownership and gives under-represented groups influence.
In practice, this could mean community-led health data projects, like those supported by the ODI, or open standards that are embedded in everyday tools like activity finders and social prescribing platforms. These approaches raise awareness and give people agency.
Effective participation requires training and resources so communities can understand and shape how data is used. Representation must also reflect the diversity of the community itself, using trusted local champions and culturally relevant methods. Technology should be accessible, whether low-tech or offline, and communication should be clear about how data is protected.
“If the EU wants to reach under-represented groups, it should back participatory approaches that start from local priorities, use trusted intermediaries, and build in transparency from the outset,” Kotecha said. “That’s how we turn data literacy into real influence.”
Why trust could be the EU’s competitive advantage in AI
The manifesto argues that Europe has an opportunity. “The EU has a unique chance to prove that trust is a competitive advantage in AI,” Kotecha said. By showing that open data, independent oversight, inclusive ecosystems, and data skills development are central to AI economies, Europe can prove that protecting rights and fostering innovation are not opposites.
This position would stand in contrast with other digital powers. In the US, regulation remains fragmented. In China, state-driven models raise concerns about surveillance and human rights. By setting clear and principled rules for responsible AI, the EU could turn regulation into soft power, exporting a governance model that others might adopt.
For Kotecha, this is not just about rules but about shaping the future: “Europe can position itself not just as a rule-maker, but as a global standard-setter for trustworthy AI.”