Meta CEO Mark Zuckerberg has pledged to create artificial general intelligence (AGI), roughly defined as AI that can accomplish any task a human can. But a new policy document suggests there are certain scenarios in which Meta will not release a highly capable AI system it has developed internally.
The document, which Meta calls its Frontier AI Framework, identifies two types of AI systems the company considers too risky to release: "high-risk" and "critical-risk" systems.
As Meta defines them, both "high-risk" and "critical-risk" systems are capable of aiding cybersecurity, chemical, and biological attacks. The difference is that critical-risk systems could result in a catastrophic outcome that cannot be mitigated in the proposed deployment context. High-risk systems, by contrast, might make an attack easier to carry out, but not as reliably or dependably as a critical-risk system.
What sort of attacks are we talking about here? Meta gives a few examples, such as the "automated end-to-end compromise of a best-practice-protected corporate-scale environment" and the "proliferation of high-impact biological weapons." The company acknowledges that the list of potential catastrophes in the document is far from comprehensive, but says it covers those Meta considers most urgent and most plausible to arise as a direct result of releasing a powerful AI system.
According to the document, Meta classifies a system's risk based on input from internal and external researchers, subject to review by senior decision-makers, rather than on any single empirical test. Why? Meta says it does not believe the science of evaluation is robust enough to provide definitive quantitative metrics for determining a system's riskiness.
If Meta determines that a system is high-risk, the company says it will limit access to the system internally and will not release it until mitigations are in place to reduce the risk to moderate levels. If, on the other hand, a system is deemed critical-risk, Meta says it will implement unspecified security protections to prevent the system from being exfiltrated and will halt development until the system can be made less dangerous.
Meta's Frontier AI Framework, which the company says will evolve as the AI landscape changes and which it committed to publishing ahead of the France AI Action Summit this month, appears to be a response to criticism of the company's "open" approach to system development. Meta has embraced a strategy of making its AI technology openly available, albeit not open source by the commonly understood definition, in contrast to companies like OpenAI that choose to gate their systems behind an API.
For Meta, the open-release approach has proven to be both a blessing and a curse. The company's family of AI models, called Llama, has racked up hundreds of millions of downloads. But Llama has also reportedly been used by at least one U.S. adversary to develop a defense chatbot.
In publishing its Frontier AI Framework, Meta may also be aiming to contrast its open AI strategy with that of Chinese AI company DeepSeek. DeepSeek also makes its systems openly available, but its AI has few safeguards and can be easily steered to generate toxic and harmful outputs.
Meta writes in the document that it believes weighing both the benefits and the risks of how advanced AI is developed and deployed makes it possible to deliver that technology to society while maintaining an appropriate level of risk.