Updated · 8 min read
Apple Mail Privacy Protection, four years in
Mail Privacy Protection — Apple's 2021 feature that pre-fetches the tracking pixel in every email — shipped at WWDC and every deliverability blog declared email dead. It wasn't. What Apple actually killed was the open rate, which turned out to be doing four jobs nobody had bothered to separate, and all four went sideways at different speeds. Here's the honest version of what broke, what to use instead, and why the programs that fixed this in 2022 are quietly running laps around the ones that didn't.

By Justin Williames
Founder, Orbit · 10+ years in lifecycle marketing
Picture this: the email is opened before the user wakes up
The open rate died the week iOS 15 shipped. Nobody noticed for months because the number went up, and up-and-to-the-right is a very easy number to ignore.
Mail Privacy Protection — MPP from here on, Apple's privacy feature for the Mail app — does one thing that breaks email measurement at the root. The moment a message arrives in an Apple Mail inbox, Apple's servers reach out and pre-fetch every remote image in it. Including the tracking pixel — the tiny invisible image that fires when an email is "opened" and tells your ESP (email service provider, the platform that sends your messages) the recipient saw it. (Source: Apple, "Use Mail Privacy Protection on iPhone", Apple's official documentation of MPP behaviour, including image pre-fetching and IP masking: support.apple.com/guide/iphone/use-mail-privacy-protection-iph2a7a6fdac/ios)
No human action required. The recipient could be asleep, on a flight, halfway up a mountain. Pixel fires anyway, and Apple's proxy servers launder the IP address on the way through so you can't even tell where the "open" came from.
By default, every new Apple Mail account has MPP switched on. Which means, in practical terms, that for a huge share of your list the email registers as "opened" within seconds of arrival regardless of what the human on the other end is doing. At the gym, at dinner, on the toilet, dead asleep. Still an open.
Can you tell which specific users have it switched on? Not directly — Apple masks that on purpose. You can infer it well enough in aggregate: machine opens fire within a minute or two of send, route through a tight set of Apple IP ranges, and often share identical user agents (the string a mail client sends to identify itself). Most mature ESPs flag them as machine-opened or pre-fetched in reporting. Braze, Iterable, and Customer.io all surface this; if yours doesn't, assume every Apple Mail open is suspect until proven otherwise.
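If your ESP doesn't flag machine opens for you, a rough first-pass filter is straightforward to sketch. Everything here is an assumption for illustration: the field names, the two-minute threshold, and the user-agent check (Apple's proxy is widely reported to send a bare generic UA); nothing below is any ESP's actual schema.

```python
from datetime import datetime, timedelta

# Illustrative heuristic only: field names, the two-minute threshold,
# and the user-agent string are assumptions for this sketch.
GENERIC_PROXY_UA = "Mozilla/5.0"  # bare generic UA widely reported for Apple's proxy

def looks_machine_opened(sent_at: datetime, opened_at: datetime,
                         user_agent: str) -> bool:
    """Flag opens that fire suspiciously fast with a generic user agent."""
    fired_instantly = (opened_at - sent_at) < timedelta(minutes=2)
    generic_ua = user_agent.strip() == GENERIC_PROXY_UA
    return fired_instantly and generic_ua

sent = datetime(2025, 1, 6, 9, 0, 0)
print(looks_machine_opened(sent, sent + timedelta(seconds=40), "Mozilla/5.0"))        # True
print(looks_machine_opened(sent, sent + timedelta(hours=3), "Mozilla/5.0 (iPhone)"))  # False
```

A production version would also match the open's source IP against Apple's published egress ranges, which this sketch deliberately omits.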
The knock-on damage arrived quietly. Send-time optimisation — the feature that picks the "best" hour to send each user based on when they've historically opened. Engagement-based triggers — the flows that fire when someone opens, or doesn't. "They opened it but didn't click" sequences. All of them, silently broken. And nobody screamed because the dashboards kept showing green.
Why one broken metric broke four things at once
Before 2021 the open rate was holding down four roles at once, like an extremely cheap employee nobody realised was doing the work of four. Call them jobs one through four: A/B test proxy (does subject line A beat subject line B?), trigger signal (did this user engage, so should we move them through this flow?), engagement score for list hygiene (is this address still a real person we should keep mailing?), and deliverability diagnostic (are our messages actually landing in inboxes?). Each was doing a different thing. Each broke at a different pace.
A/B tests went first. Apple users are a chunky share of most tested audiences, so subject-line "winners" started being whichever variant happened to skew Apple. Triggers broke second. Re-engagement flows began firing at people who had literally never seen the original email, which is the email version of asking someone why they haven't replied to a letter you never sent. Engagement scoring degraded more slowly, mostly because it was a weak signal doing a heavy job to begin with. Deliverability diagnostics still work, sort of, if you segment Apple out and squint.
So should you still report open rate? Yes, carefully. It's a diagnostic, not a KPI (key performance indicator — the number on the board deck that decision-makers actually steer by). An aggregate drop in opens — even inflated Apple-heavy ones — is still real signal that something in delivery just broke and you should go look. What it isn't is a metric worth putting in the board deck, the quarterly review, or anywhere a decision-maker might mistake a number that goes up automatically for a number that means something.
Teams that came through MPP fastest all shared one trait: they could answer, for every dashboard and every flow, the question "what is this open rate actually for?" If you couldn't answer that, you couldn't fix it. Most couldn't.
What to use instead, one job at a time
For A/B testing content: click rate (the share of recipients who clicked a link in the email). One metric, end of committee. A click is a real human doing a real thing with real intent, and Apple's pre-fetch doesn't touch it. Yes, click rate is lower volume than open rate — which means tests need more recipients to reach significance (the point at which a result is statistically real, not noise). That's fine. Honest noise beats inflated fake signal on any day ending in y. The A/B testing guide has the sample-size maths.
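To see why lower-volume click rates demand bigger test cells, here is the standard two-proportion sample-size formula as a back-of-envelope sketch. The 95% confidence and 80% power z-values, and the example rates, are assumptions, not your programme's numbers:

```python
import math

def sample_size_per_arm(p1: float, p2: float,
                        z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    """Recipients needed per variant to detect p1 vs p2 at roughly
    95% confidence and 80% power (standard two-proportion formula)."""
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

# Detecting a 3% vs 3.6% click rate needs roughly ten times the
# recipients that a 25% vs 30% open rate comparison would.
print(sample_size_per_arm(0.03, 0.036))  # roughly 14,000 per arm
print(sample_size_per_arm(0.25, 0.30))   # roughly 1,250 per arm
```

The lift you test for matters more than the metric: a 20% relative lift on a small base rate is a tiny absolute gap, and absolute gaps are what drive the denominator.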
For triggers: rebuild them on clicks or on downstream product events (things the user did inside your product — viewed a page, completed a step, made a purchase). A click is a person. An open on Apple Mail is, at best, an optimistic hope. If a specific flow genuinely needs a pre-click signal — some onboarding paths really do — fine, trigger on the inflated proxy, but rewrite the downstream copy to assume less intent than the metric suggests. "Thanks for checking out the guide!" lands strangely when the guide arrived at 2 a.m. while the user was asleep.
For engagement-based hygiene: build a composite score and stop relying on any single signal. The recipe is plain: recent clicks, product visits, key actions completed, successful deliveries — weighted and blended into one number per user. That's it. No single input carries the whole thing, which means the next privacy change (and there will be one; this is the industry we picked) can't flatten the signal at once. It's more data engineering up front. That's the feature, not the cost. And it's the difference between a program that survives what Apple does next and a program that's currently one feature release away from a second MPP moment.
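A minimal sketch of that recipe, with made-up input names, caps, and weights that you would tune to your own programme:

```python
# Weights, caps, and field names below are illustrative assumptions,
# not a recommended configuration.
WEIGHTS = {
    "clicks_90d": 0.40,
    "product_visits_90d": 0.25,
    "key_actions_90d": 0.25,
    "delivered_rate_90d": 0.10,
}

def engagement_score(user: dict) -> float:
    """Blend several 90-day signals into one 0-100 score per user.
    Each raw input is normalised to 0-1 before weighting, so no
    single broken signal can dominate the result."""
    normalised = {
        "clicks_90d": min(user.get("clicks_90d", 0) / 5, 1.0),            # cap at 5 clicks
        "product_visits_90d": min(user.get("product_visits_90d", 0) / 10, 1.0),
        "key_actions_90d": min(user.get("key_actions_90d", 0) / 3, 1.0),
        "delivered_rate_90d": user.get("delivered_rate_90d", 0.0),        # already 0-1
    }
    return round(100 * sum(WEIGHTS[k] * v for k, v in normalised.items()), 1)

print(engagement_score({"clicks_90d": 3, "product_visits_90d": 4,
                        "key_actions_90d": 1, "delivered_rate_90d": 0.98}))  # → 52.1
```

Notice that open rate appears nowhere in the inputs, which is the point.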
The Orbit Lifecycle Reporting skill handles the composite engagement score end to end — weights, dashboards, the inevitable "why does this user score 62 and not 71" question from a stakeholder at 4:45 p.m. on a Friday.
What the open rate is still good for
Not nothing. Let's not bin the whole metric with the breathless enthusiasm of someone rebranding a pricing page.
Three things hold. First: aggregate drops are still diagnostic. If opens collapse in a way the Apple-inflation floor can't explain, something in deliverability just broke and you need to go find it before the complaint rate does. Second: non-Apple segments retain real signal, so programs skewing Android or web-based can still read the number meaningfully. Third: a user whose open rate is flat zero across twelve months is disengaged whether Apple inflated the numerator or not. Zero is still zero.
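The first of those, the aggregate-drop diagnostic, can be sketched as a floor check: with MPP on, the aggregate open rate should rarely fall below the Apple share of the list plus whatever the non-Apple audience genuinely opens. The baseline rate, the tolerance, and the simplifying assumption that every delivered Apple message registers an open are all illustrative:

```python
def deliverability_alarm(open_rate: float, apple_share: float,
                         baseline_human_rate: float = 0.20) -> bool:
    """Crude floor check: Apple-share opens fire automatically on
    delivery, so a reading well below the expected floor suggests
    messages are not reaching inboxes at all. Thresholds are
    illustrative assumptions, not calibrated values."""
    expected_floor = apple_share + (1 - apple_share) * baseline_human_rate
    return open_rate < 0.8 * expected_floor  # 20% tolerance before alarming

print(deliverability_alarm(open_rate=0.30, apple_share=0.50))  # True: below the floor
print(deliverability_alarm(open_rate=0.58, apple_share=0.50))  # False
```

The irony is deliberate: the inflation that ruined the KPI is exactly what makes the floor so legible as a diagnostic.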
The trap that kept MPP fallout rolling for years is the obvious one: using open rate to compare across Apple and non-Apple users. A subject-line "winner" where variant A happens to have more Apple recipients than variant B isn't a winner. It's a coincidence with a p-value (the statistical probability that a result is real rather than chance) stapled to it. Segment by client every single time, or stop drawing conclusions.
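Segmenting by client before comparing variants can be as simple as grouping on a (client, variant) pair. The record shape below is hypothetical; the point is that variants only ever get compared within a client, never across the blended total:

```python
from collections import defaultdict

# Hypothetical recipient records for illustration.
recipients = [
    {"variant": "A", "client": "apple_mail", "opened": True},
    {"variant": "A", "client": "gmail", "opened": False},
    {"variant": "B", "client": "gmail", "opened": True},
    {"variant": "B", "client": "apple_mail", "opened": True},
]

def open_rate_by_segment(rows):
    """Open rate per (client, variant) pair, so Apple-inflated opens
    never get compared against non-Apple ones."""
    totals = defaultdict(lambda: [0, 0])  # (client, variant) -> [opens, sends]
    for r in rows:
        key = (r["client"], r["variant"])
        totals[key][1] += 1
        if r["opened"]:
            totals[key][0] += 1
    return {k: opens / sends for k, (opens, sends) in totals.items()}

print(open_rate_by_segment(recipients))
```

With real volumes you would then run the significance test within each segment, or simply drop the Apple segment and read clicks instead.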
The teams that fixed it, and the teams still pretending
Four years on, programs that weathered MPP share a pattern. Measurement got rebuilt once, early, and all the way — not in patches every quarter like a boat with a hole in it. Triggers moved off opens. Open rate stayed on the dashboard as a diagnostic but stopped getting called a KPI. The composite engagement score got built, and the team committed to it even when it made one quarter's numbers look worse before they looked better.
Programs that didn't adapt are honestly fascinating to look at now. Open rates are up year-over-year, which reads like success if you don't know why. Click rates are flat or quietly trending down because the underlying audience quality is drifting and nobody caught it. Re-engagement flows are reactivating people who never saw the original email, producing the lowest-quality reactivated cohorts the program has ever shipped. Dashboard says up and to the right. Reality says the opposite.
Has MPP hurt overall email performance? For teams that adapted: no, not materially. For everyone else: yes, quietly and compoundingly. Still-triggering-on-opens means firing to inflated audiences. Still-reactivating-on-opens means lower-quality reactivated users. Still-A/B-testing-on-opens means confident nonsense about Apple-skewed subject lines, week after week. The damage is gradual and usually invisible until someone audits end to end. Most haven't.
Honest punchline: MPP wasn't really the problem. The problem was a decade of measuring a metric nobody had ever interrogated, and then being surprised when it turned out to be structural. Apple just pulled out the load-bearing string and watched what happened.
Frequently asked questions
- What is Apple Mail Privacy Protection (MPP)?
- Apple Mail Privacy Protection, released with iOS 15 in September 2021, pre-fetches tracking pixels on behalf of Mail users who have the feature enabled — the highlighted default choice when Mail first prompts, so the overwhelming majority do. This means the open-rate pixel fires whether the user actually opened the email or not — so open rate from Apple Mail clients is effectively 100% and no longer a reliable engagement signal.
- How much did Apple MPP inflate email open rates?
- Open rates across the industry inflated 15-40 percentage points after MPP rollout, depending on Apple Mail share. A program that had a true 25% open rate in 2020 might see 55% reported in 2022 — not because engagement improved, but because every Apple Mail recipient now registers as an open regardless of behaviour.
- Should I still use open rate as a metric after MPP?
- As a diagnostic signal only, not a KPI. Open rate still tells you something about deliverability (zero opens = probably not inboxing) and directional comparison across non-Apple segments. But as a triggering signal for re-engagement, winback, or suppression — no, it's broken for any segment with significant Apple Mail share. Replace with click-based engagement scoring, reply tracking, and forwarding behaviour.
- What should replace open rate in lifecycle scoring?
- A composite engagement score combining clicks (weighted highest), replies, forwards, out-of-folder moves, and recency. Most modern programs use a 90-day weighted score that decays older activity. Build it once, commit to it for at least two quarters, and stop referencing open rate in triggering logic.
- Is Apple MPP the same as Gmail's privacy protections?
- No. Apple MPP pre-fetches pixels on the user's behalf (inflating opens). Gmail does NOT pre-fetch pixels — it proxies images via Google's servers, which breaks IP-based geo tracking but doesn't inflate open rates. The two protections solve related problems in opposite ways.
Related guides
List hygiene: the six-rule policy
List hygiene isn't cleanup; it's a continuous policy that runs automatically. Here's the six-rule policy every lifecycle program should have written down, each tied to a specific deliverability outcome.
The deliverability mental model: one picture for authentication, reputation, content, and monitoring
Most deliverability guides cover one piece — SPF, DKIM, DMARC, BIMI, reputation, warmup — and assume you already know how the pieces fit. This is the picture they assume: how a mailbox provider decides whether your email reaches the inbox, what each acronym actually does inside that decision, and where to look first when placement tanks.
Email deliverability — the practitioner's guide
Deliverability isn't a setting. It's the running total of every send decision you've made since you bought the domain. Four pillars hold it up. Break one and the whole program starts leaking.
IP warm-up in Braze — the playbook that actually holds
A fresh dedicated IP has zero reputation on day one. Most warm-up guides fixate on ramp speed and ignore the harder question — which users get the send each day. Here's the schedule, the Random Bucket Number trick, and the day-10 mistake that ruins most of them.
The unsubscribe page is the most important page in your lifecycle program
The page every lifecycle team ignores is the one quietly deciding sender reputation, suppression-list quality, and the fate of next quarter's deliverability. A short defence of why it deserves the ten-minute rebuild.
SPF, DKIM, and DMARC explained for lifecycle marketers
Three DNS records decide whether Gmail trusts your marketing email or quietly bins it. Gmail and Yahoo made all three mandatory for bulk senders in 2024 and the grace period is over. This is the practitioner's explainer: what each record does in plain English, how they interact, and the setup order that won't accidentally block your own mail.
Use this in Claude
Run this methodology inside your Claude sessions.
Orbit turns every guide on this site into an executable Claude skill — 63 lifecycle methodologies, 91 MCP tools, native Braze integration. Free for everyone.