EU lawmakers warn against watering down AI rules



Lawmakers who helped shape the EU's landmark AI Act are worried that the 27-member bloc is considering watering down aspects of its AI rules in the face of lobbying from U.S. technology companies and pressure from the Trump administration.

The AI Act was approved just over a year ago, but its rules for general-purpose AI models, such as the one behind OpenAI's GPT-4o, only come into effect in August. Before then, the European Commission (the EU's executive branch) tasked its new AI Office with preparing a code of practice for large AI companies, spelling out exactly how they need to comply with the legislation.

But now a group of European lawmakers, who helped refine the law's language as it passed through the legislative process, is expressing alarm that the AI Office will water down the AI Act in a "dangerous, undemocratic" way. Leading U.S. AI vendors have recently lobbied against parts of the Act, and the lawmakers are also concerned that the Commission may be trying to curry favor with the Trump administration, which has made clear that it sees the AI Act as anti-innovation and anti-American.

The lawmakers say the third draft of the code, published by the AI Office earlier this month, takes obligations that are mandatory under the AI Act and inaccurately presents them as "entirely voluntary." These obligations include testing models to see how they might allow things like large-scale discrimination and the spread of disinformation.

In a letter sent Tuesday to European Commission Executive Vice President and tech chief Henna Virkkunen, first reported by the Financial Times, the current and former lawmakers warn that making the testing of these models voluntary could allow AI providers "adopting more extreme political positions" to distort European elections, restrict freedom of information, and disrupt the EU economy.

"In the current geopolitical situation, with the challenges the EU is facing, standing strong on fundamental rights and democracy is more important than ever," they wrote.

Brando Benifei, one of the European Parliament's lead negotiators on the AI Act's text and the first signatory of this week's letter, told Fortune on Wednesday that the political climate may have something to do with the watering down of the Code of Practice. The second Trump administration is opposed to European tech regulation; Vice President JD Vance warned in a fiery speech at the Paris AI Action Summit in February that "tightening the screws on U.S. tech companies" would be a "terrible mistake" for European countries.

"I think there is pressure from the United States, but [I think] we cannot go in that direction, because it will never be enough to make the Trump administration happy," said Benifei, who currently chairs the European Parliament's delegation for relations with the United States.

Benifei said he and other former AI Act negotiators met on Tuesday with experts from the AI Office, which is drafting the Code of Practice. On the basis of that meeting, he said he was optimistic that the problematic changes could be reversed before the code is finalized.

"I think the issues we raised have been taken into account, so there is room for improvement," he said. "We will see in the coming weeks."

Virkkunen had not responded to the letter at the time of publication, nor commented on Benifei's concerns about U.S. pressure. But she has previously said that the EU's tech rules are applied fairly and consistently to companies from any country.

Diluted obligations

The key part of the AI Act here is Article 55, which imposes significant obligations on providers of general-purpose AI models with "systemic risk," a term the law defines as covering models that could have a significant impact on the EU market, or "actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale."

The Act says a model can be presumed to have systemic risk if the cumulative computing power used in its training, "measured in floating point operations" (FLOPs), is greater than 10^25. This threshold likely captures many of today's most powerful AI models, though the European Commission can also designate any general-purpose model as having systemic risk if its scientific advisers recommend doing so.
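For illustration, the presumption above reduces to a simple comparison against 10^25 FLOPs. The sketch below uses the common "6 × parameters × training tokens" rule of thumb for estimating training compute; that approximation, and the model sizes shown, are assumptions for illustration, not something the Act itself prescribes.

```python
# Sketch: checking a model against the AI Act's 10^25 FLOP presumption.
# The 6 * N * D approximation for training compute is a widely used
# rule of thumb, not part of the legal text.

AI_ACT_THRESHOLD_FLOPS = 1e25  # the Act's presumption threshold


def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOPs per parameter per token."""
    return 6.0 * n_parameters * n_training_tokens


def presumed_systemic_risk(n_parameters: float, n_training_tokens: float) -> bool:
    """True if the estimated compute exceeds the presumption threshold."""
    return estimated_training_flops(n_parameters, n_training_tokens) > AI_ACT_THRESHOLD_FLOPS


# A hypothetical 400B-parameter model trained on 15T tokens:
# 6 * 4e11 * 1.5e13 = 3.6e25 FLOPs, above the threshold.
print(presumed_systemic_risk(4e11, 1.5e13))  # True

# A hypothetical 7B-parameter model trained on 2T tokens falls well below it.
print(presumed_systemic_risk(7e9, 2e12))  # False
```

Note that crossing the threshold only creates a presumption; as the article says, the Commission can also designate smaller models on its scientific advisers' recommendation.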

By law, providers of such models must evaluate them to identify and mitigate any systemic risks. The evaluation must include adversarial testing: in other words, trying to get the model to do bad things, to figure out what needs to be guarded against. Providers must then report the assessment and its findings to the European Commission's AI Office.

This is where the third draft of the Code of Practice becomes a problem.

The first version of the code made clear that AI companies need to treat large-scale disinformation or misinformation as systemic risks when evaluating their models, because of the threat they pose to democratic values and their potential for election interference. The second version no longer mentioned disinformation or misinformation specifically, but still said that "large-scale manipulation with risks to fundamental rights or democratic values," such as election interference, counted as a systemic risk.

Both the first and second versions were also clear that model providers should treat the possibility of large-scale discrimination as a systemic risk.

But the third version lists risks to democratic processes, and to fundamental European rights such as non-discrimination, only as matters "for potential consideration in the selection of systemic risks." The official summary of the third draft's changes maintains that these are risks that "providers may choose to assess and mitigate in the future."

In this week's letter, the lawmakers who negotiated the law's final text with the Commission insisted that "this is by no means the intention" of the agreement they reached.

"Risks to fundamental rights and democracy are systemic risks that the most impactful AI providers must assess and mitigate," the letter reads. "This reinterpretation, which fully narrows a legal text agreed by the co-legislators, is dangerous, undemocratic, and creates legal uncertainty."

This story was originally featured on Fortune.com.
