The release of the “First Draft General-Purpose AI Code of Practice” marks the EU’s effort to create comprehensive regulatory guidance for general-purpose AI models.
The development of this draft has been a collaborative effort, involving input from diverse sectors including industry, academia, and civil society. The initiative was led by four specialised Working Groups, each addressing specific aspects of AI governance and risk mitigation:
- Working Group 1: Transparency and copyright-related rules
- Working Group 2: Risk identification and assessment for systemic risk
- Working Group 3: Technical risk mitigation for systemic risk
- Working Group 4: Governance risk mitigation for systemic risk
The draft aligns with existing laws such as the Charter of Fundamental Rights of the European Union. It takes international approaches into account, strives for proportionality to risks, and aims to be future-proof by accommodating rapid technological change.
Key objectives outlined in the draft include:
- Clarifying compliance methods for providers of general-purpose AI models
- Facilitating understanding across the AI value chain, ensuring seamless integration of AI models into downstream products
- Ensuring compliance with Union law on copyright, especially concerning the use of copyrighted material for model training
- Continuously assessing and mitigating systemic risks associated with AI models
Recognising and mitigating systemic risks
A core feature of the draft is its taxonomy of systemic risks, covering their types, nature, and sources. The document outlines threats such as cyber offences, biological risks, loss of control over autonomous AI models, and large-scale disinformation. Acknowledging the continuously evolving nature of AI technology, the draft recognises that this taxonomy will need updating to remain relevant.
As AI models with systemic risk become more common, the draft emphasises the need for robust safety and security frameworks (SSFs). It proposes a hierarchy of measures, sub-measures, and key performance indicators (KPIs) to ensure appropriate risk identification, analysis, and mitigation throughout a model’s lifecycle.
The draft suggests that providers establish processes to identify and report serious incidents associated with their AI models, offering detailed assessments and corrections as needed. It also encourages collaboration with independent experts for risk assessment, especially for models posing significant systemic risks.
Taking a proactive stance on AI regulatory guidance
The EU AI Act, which came into force on 1 August 2024, mandates that the final version of this Code be ready by 1 May 2025. This initiative underscores the EU’s proactive stance towards AI regulation, emphasising the need for AI safety, transparency, and accountability.
As the draft continues to evolve, the working groups invite stakeholders to participate actively in refining the document. Their collaborative input will shape a regulatory framework aimed at safeguarding innovation while protecting society from the potential pitfalls of AI technology.
While still in draft form, the EU’s Code of Practice for general-purpose AI models could set a benchmark for responsible AI development and deployment globally. By addressing key issues such as transparency, risk management, and copyright compliance, the Code aims to create a regulatory environment that fosters innovation, upholds fundamental rights, and ensures a high level of consumer protection.
This draft is open for written feedback until 28 November 2024.