Apple says its AI learns from your emails — without reading them

The iPhone maker is trying a new tactic to catch up with rivals: using patterns from your emails to fine-tune its AI without revealing the content
An iPhone 16 on display at the Apple Store in New York (AFP via Getty Images)
Saqib Shah

Apple has been caught flat-footed in the AI race. While competitors such as OpenAI, Amazon and Google have dazzled with their multi-hyphenate chatbots, Apple Intelligence has faltered, with key features delayed or briefly dropped after embarrassing faux pas.

Now it is trying a new tactic to catch up: using patterns from your emails to fine-tune its AI without revealing their content.

Apple says it uses synthetic (or fake) emails, designed to mimic real ones in style and topic, without containing any actual user data.

These fake messages are then compared to email fragments stored on devices that have opted into Apple’s Device Analytics. This programme, activated during setup or via device settings, enables users to voluntarily share anonymised usage data to help improve Apple’s products.

By matching these synthetic emails with user data on the device, Apple can identify patterns and trends to improve its AI models — all while keeping your emails private and never sending them to Apple’s servers.
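Apple has not published code for this step, but the comparison it describes amounts to a nearest-neighbour match performed entirely on the device. The Swift sketch below illustrates the idea with toy numbers; the type names, the cosine-similarity measure and the example topics are assumptions made for illustration, not Apple's implementation.

```swift
import Foundation

// Illustrative sketch of on-device matching: each synthetic email and each local
// message is represented as a numeric vector, and only the ID of the closest
// synthetic candidate would ever be reported back, never the message itself.
struct SyntheticEmail {
    let id: Int
    let embedding: [Double]
}

// Cosine similarity between two equal-length vectors.
func cosineSimilarity(_ a: [Double], _ b: [Double]) -> Double {
    let dot = zip(a, b).reduce(0.0) { $0 + $1.0 * $1.1 }
    let magA = sqrt(a.reduce(0.0) { $0 + $1 * $1 })
    let magB = sqrt(b.reduce(0.0) { $0 + $1 * $1 })
    return (magA == 0 || magB == 0) ? 0 : dot / (magA * magB)
}

// Find the synthetic email whose embedding is closest to a locally stored message.
func bestMatch(for localEmbedding: [Double], among candidates: [SyntheticEmail]) -> Int? {
    candidates.max { lhs, rhs in
        cosineSimilarity(lhs.embedding, localEmbedding) < cosineSimilarity(rhs.embedding, localEmbedding)
    }?.id
}

// Toy example: three synthetic candidates scored against one on-device message vector.
let candidates = [
    SyntheticEmail(id: 1, embedding: [0.9, 0.1, 0.0]),  // e.g. a dinner invitation
    SyntheticEmail(id: 2, embedding: [0.1, 0.8, 0.1]),  // e.g. a flight confirmation
    SyntheticEmail(id: 3, embedding: [0.0, 0.2, 0.9])   // e.g. a work status update
]
let localMessageEmbedding = [0.15, 0.75, 0.1]           // derived on-device, never uploaded
if let match = bestMatch(for: localMessageEmbedding, among: candidates) {
    print("Closest synthetic email ID: \(match)")       // prints 2
}
```

Under Apple's described approach, it is a signal derived from matches like this, aggregated across many devices, that feeds back into model improvement, rather than the emails themselves.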

“When creating synthetic data, our goal is to produce synthetic sentences or emails that are similar enough in topic or style to the real thing to help improve our models for summarisation, but without Apple collecting emails from the device,” Apple explained in a blog post.

Large language models, including those behind Apple Intelligence and rival systems such as ChatGPT, rely on vast amounts of real-world data to understand how people actually write, speak and interact.

The more authentic examples they’re exposed to — whether from the open internet or user-generated content — the better they become at producing convincing summaries, answers and suggestions.

However, for privacy-conscious Apple, finding ways to refine its models without directly harvesting personal data has always been a delicate balancing act.

Apple hopes the new technique will allow it to compete better in AI. Until now, the company has taken a more cautious approach than its fast-moving rivals. It has balanced its privacy-driven ethos with the AI features users expect, from rewriting and summarising messages to generating images.

Will it work? iPhone users can see the results for themselves when the new system rolls out in an upcoming beta version of iOS 18.5, according to Bloomberg’s resident Apple reporter, Mark Gurman.

Apple says it’s applying similar privacy-first techniques across other AI features, including its image tools and memory creation software.

For example, it’s already using something called differential privacy to improve Genmoji — the feature that lets users create custom emojis — by spotting popular trends, like multiple people asking for a “dinosaur in a cowboy hat,” without exposing individual requests.
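Differential privacy here means each device adds random noise to what it reports, so only aggregate trends survive and no single report reveals what a given user asked for. The simplest version of the idea is randomised response, sketched below in Swift; the epsilon value, prompt list and simulated counts are made-up assumptions for the demo, and Apple's actual mechanism is more sophisticated.

```swift
import Foundation

// Toy illustration of local differential privacy in the spirit of the Genmoji example.
let candidatePrompts = ["dinosaur in a cowboy hat", "cat on a surfboard", "robot chef"]

// Each device reports a possibly flipped yes/no per candidate prompt, so an individual
// report says little about what that user actually typed.
func randomisedResponse(didUsePrompt: Bool, epsilon: Double) -> Bool {
    let probabilityOfTruth = exp(epsilon) / (exp(epsilon) + 1)
    return Double.random(in: 0..<1) < probabilityOfTruth ? didUsePrompt : !didUsePrompt
}

// Simulate 10,000 devices where 30% genuinely asked for the dinosaur prompt.
let epsilon = 1.0
let devices = 10_000
let trueUsers = Int(Double(devices) * 0.3)
var noisyYesCount = 0
for i in 0..<devices {
    let actuallyUsed = i < trueUsers
    if randomisedResponse(didUsePrompt: actuallyUsed, epsilon: epsilon) {
        noisyYesCount += 1
    }
}

// De-bias the aggregate: estimatedRate = (observedRate - (1 - p)) / (2p - 1).
let p = exp(epsilon) / (exp(epsilon) + 1)
let observedRate = Double(noisyYesCount) / Double(devices)
let estimatedRate = (observedRate - (1 - p)) / (2 * p - 1)
print("Estimated share asking for '\(candidatePrompts[0])': \(estimatedRate)")
```

The key point the example captures is that the popularity of a prompt can be estimated accurately across thousands of devices even though every individual answer may have been flipped.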

These upgrades will apply only to users who have opted in to share Device Analytics, a setting found under Privacy &amp; Security.