Facebook is turning to artificial intelligence to detect if someone might be contemplating suicide.

Facebook already has mechanisms for flagging posts from people thinking about harming themselves. The new feature is intended to detect such posts before anyone reports them.

The service scans posts and live video with a technique called “pattern recognition.” The signals go beyond a post’s own text: comments from friends, such as “Are you OK?”, can also indicate suicidal thoughts.
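Facebook hasn’t published details of its model, but “pattern recognition” over text typically means a supervised classifier trained on examples human reviewers have already labeled. The sketch below is a loose, hypothetical illustration of that idea only; the toy posts, labels, and combined post-plus-comment input are invented, and a production system would learn from reviewer-labeled data at a vastly larger scale.

```python
# Hypothetical sketch of "pattern recognition" over post text.
# Facebook has not disclosed its actual model; this only illustrates
# the general idea of scoring posts (and friends' comments) for risk.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data, invented for illustration. A real system would be
# trained on posts previously reviewed and labeled by human moderators.
texts = [
    "I can't do this anymore, goodbye everyone",
    "are you ok? please talk to me",
    "had a great day at the beach",
    "excited for the game tonight",
]
labels = [1, 1, 0, 0]  # 1 = potential risk signal, 0 = benign

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Score a new post together with a friend's comment; comments like
# "are you ok?" act as additional input signals alongside the post.
post = "nobody would even notice if I was gone"
comment = "are you ok??"
risk = model.predict_proba([post + " " + comment])[0][1]
print(f"risk score: {risk:.2f}")  # higher scores get routed to reviewers
```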

Facebook has already been testing the feature in the U.S. and is now rolling it out to most other countries. The European Union is excluded, though Facebook won’t say why.

The company is also using AI to prioritize the order in which flagged posts are sent to its human moderators, so they can quickly alert local authorities.
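Facebook hasn’t described how that ranking works. As a rough sketch under assumed mechanics, a priority queue keyed on a classifier’s risk score would surface the most urgent reports to reviewers first; the `FlaggedPost` type and the scores below are invented for illustration.

```python
# Hypothetical sketch of prioritizing flagged posts for human review.
# A simple max-heap orders reports by model risk score so the most
# urgent ones reach moderators first.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class FlaggedPost:
    priority: float                      # negated risk (heapq is a min-heap)
    post_id: str = field(compare=False)  # excluded from ordering

queue: list[FlaggedPost] = []

def enqueue(post_id: str, risk_score: float) -> None:
    heapq.heappush(queue, FlaggedPost(priority=-risk_score, post_id=post_id))

def next_for_review() -> str:
    return heapq.heappop(queue).post_id

enqueue("post_a", 0.31)
enqueue("post_b", 0.92)   # highest risk, so it is reviewed first
enqueue("post_c", 0.58)
print(next_for_review())  # -> post_b
```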

It’s also dedicating more moderators to suicide prevention, training them to handle these cases 24/7, and now works with 80 local partners, such as Save.org, the National Suicide Prevention Lifeline and Forefront, to provide resources to at-risk users and their networks.

“This is about shaving off minutes at every single step of the process, especially in Facebook Live,” says VP of product management Guy Rosen. Over the past month of testing, Facebook has initiated more than 100 “wellness checks” in which first responders visited affected users. “There have been cases where the first responder has arrived and the person is still broadcasting.”

Facebook didn’t have answers about how it would keep the same scanning technology from being turned toward other targets, such as political dissent or petty crime, with Rosen saying only that “we have an opportunity to help here so we’re going to invest in that.”

Facebook CEO Mark Zuckerberg praised the product update in a post Monday, writing that “In the future, AI will be able to understand more of the subtle nuances of language, and will be able to identify different issues beyond suicide as well, including quickly spotting more kinds of bullying and hate.”