EU AI Act: Draft guidance for general-purpose AIs shows first steps for Big AI to comply
A first draft of a Code of Practice that will apply to providers of general-purpose AI models under the European Union’s AI Act has been published, alongside an invitation for feedback that is open until November 28. The drafting process continues into next year, ahead of formal compliance deadlines that kick in over the coming years.
The pan-EU law, which came into force this summer, regulates applications of artificial intelligence under a risk-based framework. But it also targets some measures at more powerful foundational, or general-purpose, AI models (GPAIs). That is where the Code of Practice comes in.
Among those likely to be in the frame are OpenAI, maker of the GPT models (which underpin the AI chatbot ChatGPT), Google with its Gemini GPAIs, Meta with Llama, Anthropic with Claude, and others, like France’s Mistral. They will be expected to abide by the General-Purpose AI Code of Practice if they want to make sure they are complying with the AI Act and thus avoid the risk of enforcement for non-compliance.
To be clear, the Code is intended to provide guidance for meeting the EU AI Act’s obligations. GPAI providers may choose to deviate from the best practice suggestions if they believe they can demonstrate compliance via other measures.
This first draft of the Code runs to 36 pages but is likely to get longer, perhaps considerably so, as the drafters warn that it is light on detail: it is ‘a high-level drafting plan that outlines our guiding principles and objectives for the Code.’
The draft is peppered with box-outs posing ‘open questions’ that the working groups tasked with producing the Code have yet to resolve. The feedback sought from industry and civil society will clearly play a key role in shaping the substance of the specific Sub-Measures and Key Performance Indicators (KPIs) that are yet to be included.
But the document gives a sense of what’s coming down the pipe (in terms of expectations) for GPAI makers, once the relevant compliance deadlines apply.
Transparency requirements for makers of GPAIs are set to apply from August 1, 2025.
But for the most powerful GPAIs, those the law defines as having ‘systemic risk’, the expectation is that providers must comply with risk assessment and mitigation requirements 36 months after entry into force (or August 1, 2027).
There’s a further caveat in that the draft Code has been devised on the assumption that there will only be ‘a small number’ of GPAI makers and GPAIs with systemic risk. ‘Should that assumption prove wrong, future drafts may need to be changed significantly, for instance, by introducing a more detailed tiered system of measures aiming to focus primarily on those models that provide the largest systemic risks,’ the drafters warn.
Copyright
On the transparency front, the Code will set out how GPAI makers must comply with information provisions, including in the area of copyrighted material.
One example here is ‘Sub-Measure 5.2’, which currently commits signatories to providing the names of all web crawlers used to develop the GPAI and their relevant robots.txt features, ‘including at the time of crawling.’
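For context, robots.txt is the plain-text file a site owner publishes to tell named crawlers which paths they may fetch, and that mechanism is what the ‘robots.txt features’ language refers to. The sketch below shows how such a policy is read in practice using Python’s standard urllib.robotparser; the crawler name ‘ExampleAIBot’ and the sample policy are hypothetical illustrations, not anything specified in the draft Code.

```python
# Minimal sketch: reading a robots.txt policy for a named crawler.
# "ExampleAIBot" and the sample policy are hypothetical, for illustration only;
# the draft Code does not prescribe any particular crawler name or policy.
from urllib.robotparser import RobotFileParser

SAMPLE_ROBOTS_TXT = """\
User-agent: ExampleAIBot
Disallow: /

User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(SAMPLE_ROBOTS_TXT.splitlines())

# A crawler honouring the policy skips disallowed URLs.
print(parser.can_fetch("ExampleAIBot", "https://example.com/articles/1"))  # False: fully blocked
print(parser.can_fetch("SomeOtherBot", "https://example.com/articles/1"))  # True
print(parser.can_fetch("SomeOtherBot", "https://example.com/private/x"))   # False
```

Based on the current wording, a disclosure under Sub-Measure 5.2 would cover which crawler names a provider used and which robots.txt directives those crawlers respected at the time of crawling.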
GPAI model makers continue to face questions over how they acquired data to train their models, with multiple lawsuits filed by rights holders alleging AI firms unlawfully processed copyrighted information.
Another commitment set out in the draft Code requires GPAI providers to have a single point of contact and complaint handling to make it easier for rights holders to communicate grievances ‘directly and rapidly.’
Other proposed measures related to copyright cover documentation that GPAIs will be expected to provide about the data sources used for ‘training, testing and validation and about authorisations to access and use protected content for the development of a general-purpose AI.’
Systemic risk
The most powerful GPAIs are also subject to rules in the EU AI Act that aim to mitigate so-called ‘systemic risk.’ These AI systems are currently defined as models that have been trained using a total computing power of more than 10^25 FLOPs.
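For a sense of what that threshold means in practice, a widely used rule of thumb estimates dense-transformer training compute as roughly six times the parameter count times the number of training tokens. The sketch below applies that heuristic to two hypothetical training runs; both the approximation and the example figures are assumptions for illustration and are not drawn from the Act or the draft Code.

```python
# Rough back-of-the-envelope check against the AI Act's 10^25 FLOP threshold.
# Uses the common ~6 * parameters * training-tokens approximation for dense
# transformer training; the heuristic and the example figures are illustrative
# assumptions, not anything prescribed by the Act or the Code.

THRESHOLD_FLOPS = 1e25

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Approximate total training compute for a dense transformer."""
    return 6 * parameters * training_tokens

# Hypothetical training runs: (parameter count, training tokens).
runs = {
    "mid-size model": (7e9, 2e12),    # 7B parameters, 2T tokens
    "frontier-scale": (1e12, 15e12),  # 1T parameters, 15T tokens
}

for name, (params, tokens) in runs.items():
    flops = estimated_training_flops(params, tokens)
    verdict = "above" if flops > THRESHOLD_FLOPS else "below"
    print(f"{name}: ~{flops:.1e} FLOPs ({verdict} the 1e25 threshold)")
```

On those assumptions, the first run lands around 8×10^22 FLOPs, well under the threshold, while the second lands around 9×10^25 FLOPs and would fall into the systemic-risk tier.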
The Code contains a list of risk types that signatories will be expected to treat as systemic risks. They include:
- Offensive cybersecurity risks (such as vulnerability discovery).
- Chemical, biological, radiological, and nuclear risk.
- ‘Loss of control’ (here meaning the inability to control a ‘powerful autonomous general-purpose AI’) and automated use of models for AI R&D.
- Persuasion and manipulation, including large-scale disinformation/misinformation which could pose risks to democratic processes or lead to a loss of trust in media.
- Large-scale discrimination.
This version of the Code also suggests GPAI makers could identify other types of systemic risk that are not explicitly listed, such as ‘large-scale’ privacy infringements and surveillance, or uses that might pose risks to public health. One of the open questions the document poses here asks which risks should be prioritized for addition to the main taxonomy. Another asks how the taxonomy of systemic risks should address deepfake risks (related to AI-generated child sexual abuse material and non-consensual intimate imagery).
The Code also seeks to provide guidance around identifying key attributes that could lead to models creating systemic risks, such as ‘dangerous model capabilities’ (e.g. cyber offensive or ‘weapon acquisition or proliferation capabilities’), and ‘dangerous model propensities’ (e.g. being misaligned with human intent and/or values; having a tendency to deceive; bias; confabulation; lack of reliability and security; and resistance to goal modification).
While much detail still remains to be filled in, as the drafting process continues, the authors of the Code write that its measures, sub-measures, and KPIs should be ‘proportionate’ with a particular focus on ‘tailoring to the size and capacity of a specific provider, particularly SMEs and start-ups with less financial resources than those at the frontier of AI development.’ Attention should also be paid to ‘different distribution strategies (e.g. open-sourcing), where appropriate, reflecting the principle of proportionality and taking into account both benefits and risks,’ they add.
Many of the open questions the draft poses concern how specific measures should be applied to open source models.
Safety and security in the frame
Another measure in the Code concerns a ‘Safety and Security Framework’ (SSF). GPAI makers will be expected to detail their risk management policies and to ‘continuously and thoroughly’ identify systemic risks that could arise from their GPAI.
Here there’s an interesting sub-measure on ‘Forecasting risks.’ This would commit signatories to include in their SSF ‘best effort estimates’ of timelines for when they expect to develop a model that triggers systemic risk indicators — such as the aforementioned dangerous model capabilities and propensities. It could mean that, starting in 2027, we’ll see cutting-edge AI developers putting out time frames for when they expect model development to cross certain risk thresholds.
Elsewhere, the draft Code puts a focus on makers of GPAIs with systemic risk using ‘best-in-class evaluations’ of their models’ capabilities and limitations, and applying ‘a range of suitable methodologies’ to do so. Listed examples include: Q&A sets, benchmarks, red-teaming and other methods of adversarial testing, human uplift studies, model organisms, simulations, and proxy evaluations for classified materials.
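To make the simplest item on that list concrete, a Q&A-set evaluation ultimately reduces to scoring a model’s answers against expected answers. The minimal sketch below is a hypothetical harness: the question set and the model_answer stub are placeholders, and the evaluations envisaged by the Code would be far broader than exact-match accuracy.

```python
# Minimal sketch of a Q&A-set evaluation: score answers against expected
# answers and report exact-match accuracy. The question set and model_answer
# stub are hypothetical placeholders, not anything specified in the draft Code.

QA_SET = [
    {"question": "What is the capital of France?", "expected": "paris"},
    {"question": "How many bits are in a byte?",   "expected": "8"},
]

def model_answer(question: str) -> str:
    """Placeholder for a call to the model under evaluation."""
    canned = {
        "What is the capital of France?": "Paris",
        "How many bits are in a byte?": "eight",
    }
    return canned.get(question, "")

def evaluate(qa_set) -> float:
    """Exact-match accuracy after simple normalisation."""
    correct = sum(
        model_answer(item["question"]).strip().lower() == item["expected"]
        for item in qa_set
    )
    return correct / len(qa_set)

print(f"Q&A accuracy: {evaluate(QA_SET):.0%}")  # 50% with the canned answers above
```

Methods such as red-teaming, human uplift studies, and simulations go well beyond this kind of static scoring; the draft’s ‘range of suitable methodologies’ language leaves room for all of them.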
Another sub-measure on ‘substantial systemic risk notification’ would commit signatories to notify the AI Office, an oversight and steering body established under the Act, ‘if they have strong reason to believe substantial systemic risk might materialise.’
The Code also sets out measures on ‘serious incident reporting.’
‘Signatories commit to identify and keep track of serious incidents, as far as they originate from their general-purpose AI models with systemic risk, document and report, without undue delay, any relevant information and possible corrective measures to the AI Office and, as appropriate, to national competent authorities,’ it reads — although an associated open question asks for input on ‘what does a serious incident entail.’ So there looks to be more work to be done here on nailing down definitions.
The draft Code includes further questions on ‘possible corrective measures’ that could be taken in response to serious incidents. It also asks ‘what serious incident response processes are appropriate for open weight or open-source providers?’, among other feedback-seeking formulations.
‘This first draft of the Code is the result of a preliminary review of existing best practices by the four specialised Working Groups, stakeholder consultation input from nearly 430 submissions, responses from the provider workshop, international approaches (including the G7 Code of Conduct, the Frontier AI Safety Commitments, the Bletchley Declaration, and outputs from relevant government and standard-setting bodies), and, most importantly, the AI Act itself,’ the drafters go on to say in conclusion.
‘We emphasise that this is only a first draft and consequently the suggestions in the draft Code are provisional and subject to change,’ they add. ‘Therefore, we invite your constructive input as we further develop and update the contents of the Code and work towards a more granular final form for May 1, 2025.’
Ref: TechCrunch