Meta AI Safety Pledge: How it compares to EU, US AI regulations
Feb 04, 2025
Meta has announced a new policy, its Frontier AI Framework, aimed at preventing the release of dangerously capable AI models, with an emphasis on safety and security. The company will classify models by the severity of the harm they could enable, such as cyber threats and chemical or biological risks. Models deemed high risk or critical risk will face restricted access and additional security measures.
Compared with the EU, Meta's approach is more narrowly focused on specific threat scenarios. The EU AI Act takes a tiered approach: systems posing an unacceptable risk to people's safety and rights are banned outright; high-risk systems must meet strict obligations before they can be placed on the market; limited-risk systems face transparency requirements toward users; and minimal-risk systems are largely left unregulated.
In the US, NIST has published guidelines for managing AI risks, grouping them into technical/model risks, misuse by humans, and ecosystem/societal risks. Meta's new policy also arrives at a time when other AI companies are facing scrutiny over data privacy.
Overall, Meta's emphasis on risk management signals a commitment to safety and security in its AI models, a move that could differentiate the company from its competitors and keep it aligned with emerging global norms. As the technology evolves, it will be interesting to see how Meta adapts and updates its risk-based approach to AI.