Meta May Halt Development of High-Risk AI Systems

Feb 4, 2025 17:44

Meta may discontinue the development of artificial intelligence (AI) systems it deems excessively risky, according to a newly announced policy. The company has indicated that certain AI technologies may never be released to the public. The news was first reported by TechCrunch.

Under Meta’s new “Frontier AI Framework,” two categories of AI systems are identified as potentially dangerous: “high-risk” and “critical-risk.” High-risk AI could facilitate cyberattacks or chemical and biological threats, though not necessarily in a reliable way. Critical-risk AI systems, by contrast, could lead to “irreversible catastrophe.”

Meta has stated that it will not rely solely on specific scientific tests to determine an AI technology’s level of risk; instead, it will consider input from both internal and external researchers. If a system is classified as high-risk, it will be used only in a restricted manner and withheld from release until the risks are mitigated. If a system is deemed critical-risk, its development will be halted entirely.

This move is seen as Meta’s response to criticism of its policy of openly releasing AI systems, an approach considered more transparent than that of rivals such as OpenAI and Chinese AI firm DeepSeek.