Meta rolls out new AI content enforcement systems while reducing reliance on third-party vendors
Meta on Thursday announced that it’s starting to roll out more advanced AI systems to handle content enforcement as it plans to cut back on third-party vendors. Tasks related to content enforcement include catching and removing content about terrorism, child exploitation, drugs, fraud, and scams.
“While we’ll still have people who review content, these systems will be able to take on work that’s better-suited to technology, like repetitive reviews of graphic content or areas where adversarial actors are constantly changing their tactics, such as with illicit drug sales or scams,” Meta explained in a blog post. Meta believes these AI systems can detect more violations with greater accuracy, better prevent scams, respond more quickly to real-world events, and reduce over-enforcement.
It also says the systems can identify and prevent more impersonation accounts involving celebrities and other high-profile individuals, as well as help stop account takeovers by detecting signals such as logins from new locations, password changes, or edits made to a profile.
The move comes as Meta has loosened its content moderation rules over the past year or so, since President Donald Trump took office for a second time. Last year, the company ended its third-party fact-checking program in favor of an X-like Community Notes model.