The Dutch data protection authority, Autoriteit Persoonsgegevens (AP), has warned users of Facebook, Instagram, and WhatsApp, all owned by Meta, about the risks of allowing their data to be used for AI training. The alert comes amid growing concern over how large tech companies, Meta among them, handle user data, and over the broader privacy implications.
In its statement, the AP expressed strong concerns about Meta’s plans to use personal data to train its AI tools, noting that there is no clear agreement on what is acceptable for a company of Meta’s size and influence and pointing to the regulatory confusion surrounding the practice.
Meta recently launched its AI assistant, Meta AI, across its European platforms, following the tool’s initial rollout in the U.S. in September 2023. The company had earlier paused its European expansion, citing a “lack of regulatory clarity,” after the Irish Data Protection Commission urged it to delay the launch over concerns about using data from Facebook and Instagram to train large language models (LLMs).
Monique Verdier, deputy chair of the Dutch regulator, warned that users risk losing control over their personal data. She cautioned that anything shared on Instagram or Facebook can end up in an AI model, and that users should not be surprised by what then happens to it. Users have until May 27, 2025, to object; after that, Meta will automatically use their public data to train its AI models. The warning follows similar alerts from data protection authorities in Hamburg, Germany, and in Belgium, signaling widespread concern across Europe.
Meta’s Vice President of Public Policy for Europe, Markus Reinisch, emphasized the need for regulations that protect citizens’ rights without targeting individual companies. He stated, “Well-intentioned regulations applied discriminatorily harm our business models. The issue lies in singling out specific companies.” His comments reflect the ongoing tension between Meta and regulators as both sides navigate a complex legal landscape.
In a related development, cybersecurity experts are warning users about the growing risk posed by mobile applications that compromise privacy. Reports have identified 12 apps that users should delete immediately because they can spy on users through device cameras and microphones, violating their privacy.
According to a TechCrunch report, these apps covertly access users’ smartphones without their consent and steal sensitive information, including WhatsApp messages. Cybersecurity experts have flagged these apps as particularly dangerous, as they can activate cameras and microphones without users’ awareness.
The 12 flagged apps include Rafaqat, Privee Talk, MeetMe, Let’s Chat, Quick Chat, Chit Chat, Hello Chat, YohooTalk, TikTalk, Nidus, GlowChat, and Wave Chat. ESET, a cybersecurity firm, has strongly recommended that users uninstall these apps immediately, describing them as malicious software designed specifically to spy on users and steal personal data.
ESET researchers warn that these apps pose a real threat to privacy, as they can record audio and video without the user’s knowledge, raising concerns about potential misuse, including blackmail and other illegal activities.
As the digital landscape continues to evolve, the intersection of user privacy, data protection, and regulatory compliance remains a key concern for individuals and businesses alike. Users are encouraged to stay vigilant about the apps they install on their devices and be mindful of the data they share with companies like Meta.
In conclusion, as Meta advances its AI initiatives in Europe, ongoing dialogue between tech companies and regulatory authorities will be essential in shaping the future of data privacy. The warnings from both Dutch and Irish authorities underscore the need for users to take an active role in managing their personal data and understanding the broader implications of technology on their privacy.