Meta has revealed how it intends to prevent abuse on its platforms once it can no longer "scan" messages.
"In an end-to-end encrypted environment, we will use artificial intelligence to proactively detect accounts engaged in malicious patterns of behavior instead of scanning your private messages," it said.
"Our machine learning technology will look across non-encrypted parts of our platforms — like account information and photos uploaded to public spaces — to detect suspicious activity and abuse."
The announcement comes amid deep concern from children's charities that end-to-end encryption could allow online child abuse to go undetected.
The NSPCC, for example, has warned that end-to-end encryption could lead to a "significant drop in reports of child abuse... a [failure] to protect children from avoidable harm."
Meta addressed these concerns in its blog post: "For example, if an adult repeatedly sets up new profiles and tries to connect with minors they don’t know or messages a large number of strangers, we can intervene to take action, such as preventing them from interacting with minors."
"We can also default minors into private or “friends only” accounts. We’ve started to do this on Instagram and Facebook."
Furthermore, the tech giant said it will educate "young people" with in-app advice, warning them when the accounts messaging them are deemed suspicious.
Meta claims that safety notices on Messenger have already proved effective, helping to stop people from being scammed and to "flag suspicious adults attempting to connect to minors."
Messenger will also encourage users to report harmful behavior, with an option to indicate whether the activity "involves a child".