Are AI companies complying with the EU AI Act? A new ‘LLM Checker’ can find out
[Photo: The OpenAI logo is displayed on a cell phone with an image on a computer screen generated by ChatGPT’s Dall-E text-to-image model, Dec. 8, 2023, in Boston. AP Photo]
By Pascale Davies
Published on 16/10/2024 – 15:10 GMT+2 • Updated 20:51
Big Tech companies such as Apple and Meta have been cautious in rolling out their AI models in Europe. A new ‘LLM Checker’ could help.

The leading generative artificial intelligence (GenAI) models, including those developed by OpenAI, Meta, and Anthropic, do not fully comply with Europe’s AI rules, according to a report released on Wednesday.

Europe’s AI Act came into force this August with the aim of establishing harmonised rules for AI systems so they do not become a threat to society.


However, some technology companies such as Meta and Apple have not rolled out their AI models in Europe as they are cautious about the rules.

Research institutes ETH Zurich and Bulgaria’s Institute for Computer Science, Artificial Intelligence and Technology (INSAIT), together with the Swiss start-up LatticeFlow AI, have released a new tool and framework to make navigating the EU AI Act simpler for tech companies. It is the first EU AI Act compliance evaluation framework for GenAI.

The tool gives AI models a score between 0 and 1 across categories such as safety to determine how well they comply with the law.
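The report does not detail the checker’s internals, but the scoring idea is straightforward to illustrate. Below is a minimal, hypothetical Python sketch of how per-category benchmark results might be rolled up into the 0-to-1 scale described above. The category names echo those mentioned in this article; the example scores, the function, and the unweighted averaging are illustrative assumptions, not LatticeFlow AI’s actual implementation.

```python
# Hypothetical sketch of aggregating per-category benchmark results
# into a single 0-1 compliance score. Not the actual LLM Checker code.

from statistics import mean

# Example scores for a fictional model (0 = non-compliant, 1 = fully
# compliant), mirroring the 0-1 scale described in the article.
category_scores = {
    "cybersecurity": 0.82,
    "privacy_and_data_governance": 0.90,
    "environmental_well_being": 0.77,
    "discriminatory_output": 0.46,   # a weak spot, as reported for some models
    "harmful_content_and_toxicity": 0.95,
}

def aggregate_score(scores: dict[str, float]) -> float:
    """Unweighted mean across categories; a real framework might
    weight categories by regulatory severity."""
    return mean(scores.values())

if __name__ == "__main__":
    for category, score in category_scores.items():
        flag = "  <-- below 0.75" if score < 0.75 else ""
        print(f"{category:30s} {score:.2f}{flag}")
    print(f"{'overall':30s} {aggregate_score(category_scores):.2f}")
```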

What did the ‘LLM Checker’ find?
The large language model (LLM) checker looked into AI models developed by Alibaba, Anthropic, OpenAI, Meta, and Mistral AI, all of which received an average score of 0.75 or above. It assessed areas including cybersecurity, environmental well-being, and privacy and data governance.

The study found that several of the AI models fell short in the areas of discrimination and cybersecurity.

For instance, OpenAI’s GPT-4 Turbo scored 0.46 on discriminatory output, and Alibaba Cloud’s model scored 0.37.

But most of the models performed well in terms of harmful content and toxicity requirements.

Tool ‘welcomed’ by the European Commission
Companies that do not comply with the EU AI Act face fines of up to €35 million or 7 per cent of global annual turnover. Yet it can be difficult for technology companies to provide evidence of compliance, as there are no detailed technical guidelines for them to follow, those behind the LLM Checker say.

“If you want to comply with the EU AI Act, nobody knows how to provide the technical evidence that supports compliance with the Act. That’s a very big challenge that needs to be addressed right now,” Petar Tsankov, LatticeFlow AI’s CEO and co-founder, told Euronews Next.

“Without this, companies would just not deploy in Europe because they don’t know. You have a scary legislation that can bite you, and you don’t know what to do about it. So it’s very uncomfortable for companies,” he said, adding that he would soon meet with Apple and OpenAI to discuss compliance with the AI Act.

The European Commission has launched a consultation on the Code of Practice for providers of general-purpose AI (GPAI) models, which aims to supervise the implementation and enforcement of the AI Act.

A European Commission spokesperson told Euronews Next that the Commission welcomed the study and AI model evaluation platform “as a first step in translating the EU AI Act into technical requirements, helping AI model providers implement the AI Act”.

“The Commission has kicked off work with stakeholders on the Code of Practice, which will detail the AI Act rules for providers of general-purpose AI models and general-purpose AI models with systemic risks. Providers should be able to rely on the Code of Practice to demonstrate compliance,” the spokesperson added.

Alongside the first technical interpretation of the AI Act, the researchers have also released a free, open-source framework that can be used to evaluate LLMs against the EU’s requirements.

“We invite AI researchers, developers, and regulators to join us in advancing this evolving project,” said Martin Vechev, a professor at ETH Zurich and founder and scientific director of INSAIT in Sofia, Bulgaria.
