AI models need more standards and tests, say researchers



Pixdeluxe | E+ | Getty Images

As the use of artificial intelligence — benign and adversarial — increases at breakneck speed, more cases of potentially harmful responses are being uncovered. These include hate speech, copyright infringements and sexual content.

The emergence of these undesirable behaviors is compounded by a lack of regulation and insufficient testing of AI models, researchers told CNBC.

Getting machine learning models to behave as intended is also a tall order, said AI researcher Javier Rando.

"The answer, after almost 15 years of research, is: no, we don't know how to do this, and it doesn't look like we are getting better," Rando, who focuses on adversarial machine learning, told CNBC.

However, there are some ways to evaluate risks in AI, such as red teaming. The practice involves individuals testing and probing AI systems to uncover and identify potential harms, a modus operandi common in cybersecurity circles.

Shayne Longpre, a researcher in AI and policy and lead of the Data Provenance Initiative, noted that there are currently not enough people working in red teams.

While AI startups now use first-party evaluators or contracted second parties to test their models, opening testing up to third parties such as normal users, journalists, researchers and ethical hackers would lead to more robust evaluation, according to a paper published by Longpre and fellow researchers.

"Some of the flaws people were finding in these systems required medical doctors and specialized subject matter experts to figure out whether they were actually flaws or not, because the average person probably couldn't," Longpre said.

Adopting standardized "AI flaw" reports, incentives, and ways to disseminate information on these flaws in AI systems are among the recommendations put forward in the paper.

With this practice having been successfully adopted in other sectors, such as software security, "we need that in AI now," he added.

Marrying this user-centered practice with governance, policy and other tools would ensure a better understanding of the risks posed by AI tools and their users, said Rando.

AI is harmful to many people, says Karen Hao

Project Moonshot

One approach that combines technical solutions with policy mechanisms is Project Moonshot, launched by Singapore's Infocomm Media Development Authority: a toolkit for evaluating large language models, developed with industry players such as IBM and Boston-based DataRobot.

The toolkit integrates benchmarking, red teaming and testing baselines. It also includes an evaluation mechanism that allows AI startups to ensure their models can be trusted and do no harm to users, Anup Kumar, head of client engineering for data and AI at IBM Asia Pacific, told CNBC.

Evaluation is a continuous process that should take place both before and after the deployment of models, said Kumar, who noted that the response to the toolkit has been mixed.

"A lot of startups took this as a platform because it was open source, and they started leveraging it. But I think, you know, we can do a lot more."

Moving forward, Project Moonshot aims to include customization for specific industry use cases and to enable multilingual and multicultural red teaming.

Higher standards

Pierre Alquier, professor of statistics at ESSEC Business School, Asia-Pacific, said that tech companies are currently rushing to release their latest AI models without proper evaluation.

"When a pharmaceutical company designs a new drug, they need months of tests and very serious proof that it is useful and not harmful before it gets approved," he said, adding that a similar process exists in the aviation sector.

AI models should likewise meet a strict set of conditions before they are approved, he added. A shift away from broad AI tools toward developing ones designed for specific tasks would make their misuse easier to anticipate and control, Alquier said.

"LLMs can do too many things, but they are not targeted at tasks that are specific enough," he said. As a result, "the number of possible misuses is too big for the developers to anticipate all of them."

Such broad models also make defining what counts as safe and secure difficult, according to research Rando was involved in.

Tech companies should therefore avoid claiming that their defenses are better than they are, said Rando.


