Apple is preparing to roll out a new method for training its AI models in the upcoming beta versions of iOS 18.5 and macOS 15.5. This change is aimed at enhancing the performance of Apple’s artificial intelligence systems, such as its email summarization tools, while keeping your personal data safe and secure.
A Shift in Training Apple AI
Apple has always placed a strong emphasis on user privacy, and its new approach reflects that commitment. Traditionally, Apple has trained its AI models on synthetic data: artificial information generated by computers rather than drawn from real user data. This method protects privacy, but it has limitations. In particular, synthetic data struggles to capture real human behavior and preferences, such as how people actually write emails or summarize messages.
Device-Based Learning
To address these limitations, Apple has come up with a fresh solution. Instead of collecting real user data, it plans to use a system where the training signal is gathered directly on users' devices. Here's how it works:
Apple creates thousands of fake emails covering a variety of everyday topics, such as "Would you like to play tennis tomorrow at 11:30AM?" These synthetic emails are transformed into "embeddings": numerical representations that capture properties of the content, such as its topic or length. The embeddings are then sent to a small group of users who have opted into Apple's Device Analytics program.
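Apple has not published details of the embedding model it uses, but a toy sketch can illustrate the idea. In the hypothetical Swift snippet below, toyEmbedding (a made-up name, not an Apple API) reduces an email's text to a fixed-length vector of hashed word counts, a deliberately simplistic stand-in for a real embedding model:

```swift
// Toy embedding: reduce an email's text to a fixed-length vector of
// hashed word counts. Purely illustrative; Apple's real embedding
// model is not public.
func toyEmbedding(of text: String, dimensions: Int = 16) -> [Double] {
    var vector = [Double](repeating: 0, count: dimensions)
    let words = text.lowercased().split { !$0.isLetter && !$0.isNumber }
    for word in words {
        // Map each word to a bucket (kept non-negative for any hash value).
        let bucket = ((word.hashValue % dimensions) + dimensions) % dimensions
        vector[bucket] += 1
    }
    // L2-normalize so emails of different lengths remain comparable.
    let norm = (vector.reduce(0) { $0 + $1 * $1 }).squareRoot()
    return norm > 0 ? vector.map { $0 / norm } : vector
}

let synthetic = toyEmbedding(of: "Would you like to play tennis tomorrow at 11:30AM?")
```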
Each participating device compares the fake emails to a small sample of the user's recent, real emails, without any of that content leaving the device. The device then chooses the most similar fake email and sends an anonymous signal back to Apple. Crucially, Apple never sees the actual emails, only the anonymized results.
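To make the on-device matching concrete, here is a hedged sketch of how that selection step could work, continuing the toy embeddings above. The function closestSyntheticIndex is a hypothetical name; Apple's actual selection logic is not public. Because the toy embeddings are L2-normalized, a plain dot product serves as cosine similarity:

```swift
// Hypothetical on-device matching: score each synthetic embedding against
// embeddings of a small sample of the user's recent emails, and return the
// index of the best-matching synthetic email. Real content never leaves
// the device; only the winning index is reported (with noise, see below).
func closestSyntheticIndex(synthetic: [[Double]], localSamples: [[Double]]) -> Int? {
    // Dot product acts as cosine similarity for L2-normalized vectors.
    func similarity(_ a: [Double], _ b: [Double]) -> Double {
        zip(a, b).reduce(0) { $0 + $1.0 * $1.1 }
    }
    var bestIndex: Int?
    var bestScore = -Double.infinity
    for (index, candidate) in synthetic.enumerated() {
        // A synthetic email's score is its best similarity to any local email.
        let score = localSamples.map { similarity(candidate, $0) }.max() ?? -Double.infinity
        if score > bestScore {
            bestScore = score
            bestIndex = index
        }
    }
    return bestIndex
}
```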
Maintaining Privacy Through Differential Privacy
The process relies on a privacy technique called differential privacy. This method injects statistical noise into each device's report, so only anonymized data is sent back to Apple and no personal or identifiable information is collected. Even when Apple analyzes the data, it can see only which synthetic messages were most popular across many devices, and that information is not linked to any individual device or user.
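One classic way to achieve this kind of local differential privacy is k-ary randomized response: the device reports its true best match only with a certain probability, and otherwise reports a random candidate, so any single report reveals almost nothing about the user. Apple's production mechanism is more sophisticated; the function name privatizedReport and the epsilon value below are illustrative assumptions, not Apple's published parameters:

```swift
import Foundation  // for exp()

// k-ary randomized response, a textbook local differential privacy
// mechanism (a stand-in for Apple's unpublished production mechanism).
// Smaller epsilon means more noise and stronger privacy.
// Requires candidateCount >= 2.
func privatizedReport(trueIndex: Int, candidateCount: Int, epsilon: Double) -> Int {
    let expEps = exp(epsilon)
    // Probability of answering truthfully under k-ary randomized response.
    let truthProbability = expEps / (expEps + Double(candidateCount) - 1)
    if Double.random(in: 0..<1) < truthProbability {
        return trueIndex
    }
    // Otherwise report one of the other candidates, chosen uniformly at random.
    var other = Int.random(in: 0..<(candidateCount - 1))
    if other >= trueIndex { other += 1 }
    return other
}

// Example: the device found candidate 7 closest among 1,000 synthetic emails.
let report = privatizedReport(trueIndex: 7, candidateCount: 1_000, epsilon: 2.0)
```

Aggregated across many devices, these noisy reports still reveal which synthetic emails are most representative overall, which is exactly the signal needed to refine the models, while any individual report stays deniable.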
This technique isn't new. Apple has already applied similar methods to tools like Genmoji, its custom emoji feature, and will soon use the same privacy approach for other AI tools, including Image Playground, Image Wand, and Memories creation. These features will improve by learning from common usage patterns while keeping rare or unique requests private.
What Does This Mean for Apple Users?
Simply put, Apple is improving its AI features without compromising your privacy. If you've opted in, your device helps train these systems by providing anonymized signals, making Apple's tools, like email summaries, more accurate and better grounded in real-world content, while your personal data never leaves your device.
In a world where data privacy concerns are growing, Apple’s approach offers a balanced solution: better AI performance with stronger privacy protections. By involving users in a transparent and secure way, Apple is setting a new standard for how AI should operate—secure, private, and user-centered.
This move not only strengthens the trust users have in Apple's products but also sets a powerful example for other tech companies looking to innovate while respecting privacy. For now, Apple's on-device AI training approach is a win-win for both privacy-conscious users and AI enthusiasts.