
Beijing Orders Apple to Remove 'WhatsApp' & 'Threads' from Chinese App Store


Sat 20 Apr 2024 | 04:46 PM
By Ahmad El-Assasy

Apple, the American technology giant, has been instructed by the Chinese government to remove the apps 'WhatsApp' and 'Threads' from its Chinese App Store, citing national security concerns. 

This move comes as part of China’s ongoing internet regulatory efforts, as Apple told the Wall Street Journal and other American media outlets on Thursday evening.

These apps, both owned by Meta, were previously accessible in China via virtual private networks (VPNs) that allowed users to circumvent the Great Firewall—a term used to describe China’s extensive internet censorship system. 

The Chinese government’s decision closes a major loophole in the firewall, restricting access to these foreign applications.

The app restrictions add to the growing tension between the U.S. and China over mobile applications. The United States, along with Australia, Canada, New Zealand, and the United Kingdom, has banned the Chinese-owned app TikTok from government devices amid concerns that it could be used for data harvesting by Chinese authorities—a claim consistently denied by ByteDance, TikTok's parent company.

In a separate announcement, Meta unveiled an enhanced version of its AI assistant, built on the latest version of its open-source language model, Llama 3.

According to the company, the new Meta AI is smarter and faster, marking a significant advance for the publicly available Llama 3 model.

Meta’s co-founder and CEO Mark Zuckerberg said in an Instagram video that Meta AI is now "the smartest and most usable freely available AI-based assistant."

Because Llama 3 is open source, developers outside Meta can adapt it as they see fit, and Meta may later incorporate their improvements into an updated release.

Meta emphasized the potential generative AI holds for users of its products and the broader ecosystem, stating that it aims to develop and deploy the technology responsibly, anticipating and mitigating risks.

This includes integrating safety measures into how Meta designs and releases its Llama models and cautiously adding AI features to its platforms such as Facebook, Instagram, WhatsApp, and Messenger.

The company acknowledged that AI models, including those by Meta, sometimes produce inaccurate or odd responses, a phenomenon described as "hallucination." 

In one example shared on Facebook, Meta AI claimed in an online forum discussion to have a child attending a New York school.

Meta has continuously updated and refined its AI since its initial release last year. Ongoing work includes improving how its AI responds to questions about political or social issues, aiming to present a balanced range of relevant viewpoints rather than a single perspective.

This work also includes making Llama 3 better at recognizing when a request is benign and has a legitimate answer.

Meta stressed that it wants its AI to respond accurately to harmless questions such as "How can I stop a computer program?" while declining to answer harmful requests.

Starting in May, Meta plans to label AI-generated video, audio, and images when they are detected or reported, giving users a clear indication when they are interacting with AI-generated content.

Meta AI is currently available in English, but the company plans to release multilingual models in the coming months.