Updated · 12 min read
The deliverability mental model: one picture for authentication, reputation, content, and monitoring
Picture what happens in the half-second after you hit send. Your email lands at Gmail's front door. Before any human sees it, three different systems take a look — one asking who actually sent this, another asking whether the sender can be trusted, a third asking which folder this particular message belongs in. Pass all three and you're in the inbox. Fail one and you're in spam, or nowhere. Most deliverability guides explain one piece of one system and assume you already know how the rest fits together. This guide is the picture they assume. Once you have it, every acronym in the stack — SPF, DKIM, DMARC, BIMI, IP warmup — turns into a specific answer to a specific question, and most placement problems start telling you which checkpoint they live at.

By Justin Williames
Founder, Orbit · 10+ years in lifecycle marketing
The one question every mailbox provider is actually asking
Every deliverability decision a mailbox provider makes reduces to one question: is this sender behaving like someone our users want to hear from?
Gmail, Outlook, Yahoo, Apple Mail — between them they decide where almost every marketing email on earth lands. A mailbox provider, in case the term is new, is the company that runs an inbox. They're the people who decide what shows up in the inbox, what gets diverted to Gmail's Promotions tab, and what ends up quietly dumped in spam. They're the gatekeepers, and all of them are answering the same question: should we put this message in the inbox, the Promotions tab, the spam folder, or nowhere at all?
They can't literally poll every recipient about every email. So they use proxies, by which I mean automated, provable signals that correlate with whether someone wants this mail. The whole deliverability stack is just a collection of those proxies. Authentication proves who you are. Reputation tracks how recipients have reacted to you historically. Placement classifiers decide which folder a specific message belongs in for a specific person. Once you see the proxies as a stack rather than a checklist, the rest of this guide stops being a wall of acronyms and starts being a flowchart.
The three checkpoints every email passes through
When your email arrives at Gmail's door, it goes through three checkpoints in order. Fail any one and you're out. Pass all three and you're in — though where you land (Primary, Promotions, or spam) is a fourth, fuzzier decision the system makes after letting you through.
First checkpoint — Identity. "Who claims to have sent this, and can they prove it?" This is the authentication checkpoint: SPF, DKIM, DMARC. Three protocols that together let your domain mathematically prove it authorised this send. Pass = the cryptography checks out. Fail = you look like someone trying to spoof a brand, and the provider treats you accordingly.
Second checkpoint — Reputation. "You're authenticated. Do we trust you?" Now the provider looks up its history of every other email you've ever sent. Reputation, here, means a numeric score the provider keeps on your sending domain and your sending IP, built up from years of recipients either engaging with your mail or marking it as spam. A strong score puts you in the running for the inbox; a weak one puts you in the spam folder regardless of how clean the email looks.
Third checkpoint — Placement. "We trust you. Which folder does this specific message go in?" Now content gets a vote. Subject line, HTML structure, image-to-text ratio, link patterns, and the receiver's personal history with you specifically. Same sender, two recipients, two different folders is normal. This is the layer where Gmail decides Promotions vs Primary on a per-user, per-message basis.
First checkpoint — Identity: what SPF, DKIM and DMARC each prove
Picture the receiving server as a doorman who's seen ten thousand impersonators try to walk past wearing your name badge. Authentication is how you prove the badge is real. Three different proofs, each covering a different gap, all three needed for the doorman to wave you through cleanly.
SPF — Sender Policy Framework. A list of IP addresses (the numeric addresses servers use, like 192.0.2.1) that you publish in DNS — DNS being the public phone book of the internet — saying "these are the servers allowed to send mail on my behalf". The receiving server checks the sender's IP against that list. On the list = pass. Not on the list = the message could be a spoof. The catch with SPF is that it only checks the envelope sender (the address used in the technical handshake), not the "From:" line a human reads. Spoofers happily pass SPF on a different envelope while showing your brand in the visible From. SPF on its own is not enough. Full SPF/DKIM/DMARC detail
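To make the "is this IP on the list" check concrete, here is a minimal sketch of the ip4 half of SPF evaluation, using a hypothetical record for example.com. Real SPF evaluation also resolves include:, a, mx and redirect= mechanisms via DNS lookups, which this sketch skips.

```python
import ipaddress

def spf_ip4_match(spf_record: str, sender_ip: str) -> bool:
    """Check a sender IP against the ip4: mechanisms of an SPF TXT record.

    Simplified: real SPF also resolves include:, a, mx and redirect=
    mechanisms via DNS, which this sketch does not attempt.
    """
    ip = ipaddress.ip_address(sender_ip)
    for term in spf_record.split():
        if term.startswith("ip4:"):
            # Each ip4: term is an address or CIDR range of permitted senders
            if ip in ipaddress.ip_network(term[4:], strict=False):
                return True
    return False

# Hypothetical SPF record published in DNS for example.com
record = "v=spf1 ip4:192.0.2.0/24 include:_spf.esp.example -all"
print(spf_ip4_match(record, "192.0.2.1"))     # True: on the list
print(spf_ip4_match(record, "198.51.100.7"))  # False: possible spoof
```

The `-all` at the end is the record's catch-all: fail anything not matched above. The record and ESP include name here are illustrative, not anyone's real configuration.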
DKIM — DomainKeys Identified Mail. Your sending server signs every message with a private cryptographic key. Your domain publishes the matching public half of that key pair as a DNS record. The receiving server runs the signature against the public half to verify two things: the body and headers haven't been tampered with in transit, and whoever signed the message genuinely held the private half. A pass means the message arrived intact and was authorised. The catch with DKIM is that it doesn't cover the envelope, and on its own it doesn't tell the receiver what to do when a signature fails.
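The tamper-detection half of DKIM is easy to see in miniature. A DKIM signature carries a bh= tag, a hash of the canonicalised message body, which the receiver recomputes and compares. The sketch below shows roughly how that body hash works under "simple" canonicalisation; the actual signature check (the b= tag, verified against the RSA or Ed25519 public key in DNS) needs a crypto library and is omitted.

```python
import base64
import hashlib

def dkim_body_hash(body: str) -> str:
    """Recompute a DKIM-style bh= body hash.

    Roughly follows "simple" body canonicalisation: normalise line
    endings to CRLF and drop trailing empty lines, so cosmetic
    whitespace at the end doesn't break the hash.
    """
    lines = body.replace("\r\n", "\n").split("\n")
    while lines and lines[-1] == "":
        lines.pop()
    canonical = "\r\n".join(lines) + "\r\n"
    digest = hashlib.sha256(canonical.encode("utf-8")).digest()
    return base64.b64encode(digest).decode("ascii")

# Any change to the body changes the hash, so tampering is detectable.
print(dkim_body_hash("Hello,\nYour order has shipped.\n"))
print(dkim_body_hash("Hello,\nYour order has NOT shipped.\n"))
```

If the recomputed hash doesn't match the bh= in the header, the body was altered in transit and the signature fails.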
DMARC — the policy on top. DMARC does two jobs at once. First, it ties SPF and DKIM back to the From: address the recipient actually sees, a property called "alignment", which closes the gap SPF leaves open. Second, it tells the receiving server what to do when alignment fails: none (just monitor and report), quarantine (send to spam), or reject (bounce outright). DMARC also sends you reports listing every server sending mail under your domain, including the dodgy ones you didn't know about. SPF and DKIM are the proofs. DMARC is the rule for what counts as a valid proof and what to do when one is missing.
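A DMARC record is just a semicolon-separated list of tag=value pairs published as a TXT record at _dmarc.yourdomain. A minimal parser makes the policy readable at a glance; the record below is a hypothetical example, not a recommendation for any specific domain.

```python
def parse_dmarc(record: str) -> dict:
    """Split a DMARC TXT record into its tag=value pairs."""
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")
            tags[key.strip()] = value.strip()
    return tags

# Hypothetical record published at _dmarc.example.com
record = "v=DMARC1; p=quarantine; rua=mailto:dmarc@example.com; pct=100"
tags = parse_dmarc(record)
print(tags["p"])    # "quarantine": unaligned mail goes to spam
print(tags["rua"])  # where aggregate reports get delivered
```

The p= tag is the policy decision the article describes; rua= is where the aggregate reports land, which is how you discover the servers sending under your domain that you didn't know about.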
A sender with all three configured looks like an organisation that controls its own identity. A sender missing one looks either careless or compromised, and the provider treats both the same way.
Second checkpoint — Reputation: the score you have been quietly building for years
Reputation is every engagement signal recipients have ever given on mail from your domain, summed up and remembered. It takes months to build. It takes a bad weekend to lose.
Once authentication passes, the receiving server pulls up your file. That file is keyed against two things: the IP address you sent from, and the domain in your From: line. Both carry reputation. They behave differently, and most operators don't realise which one matters more for them.
IP reputation. Tied to the specific IP address pumping out your mail. Matters most for high-volume senders on dedicated IPs (an IP used only by you, vs a shared IP pooled across many tenants of an email service provider — your ESP, the platform that actually sends your mail, like Braze or Klaviyo). IP reputation builds over weeks of consistent volume and burns fast: sudden spikes look like compromise, a bad list looks like spam, and complaint rates above the threshold make the provider throttle you. IP warmup mechanics
Domain reputation. Tied to your sending domain (and to your From: domain, via DMARC alignment). Slower to move than IP reputation, but it survives IP changes and travels with the brand, which is exactly what you want. Gmail in particular has shifted most of its weight onto domain reputation in recent years. If you're below a million sends a month, this is the signal you should care about most. Domain vs IP reputation
What feeds reputation positively. Recipients open your mail, click through, move it out of spam, drag it from Promotions to Primary, occasionally reply, and don't unsubscribe in the first thirty seconds. Sustained engagement of any kind builds the score. The provider doesn't need to know why people engage — only that they do, consistently.
What burns it. Spam complaints, easily the single most damaging signal, with anything above 0.3% (three complaints per thousand sends) flashing red on a provider's dashboard. Repeated delete-without-opens. High unsubscribe rates. Spam traps, which are dormant addresses ISPs deliberately leave alive to catch senders with sloppy list hygiene; hit one and the provider knows you're scraping or buying lists. Sending patterns that look automated count too — sudden volume spikes, perfectly flat send curves with no human rhythm.
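The 0.3% red line is simple arithmetic worth wiring into your reporting. A minimal sketch, assuming you can pull send and complaint counts per campaign from your ESP; the exact thresholds mailbox providers act on internally are not public, so treat the bands as guidance, not gospel.

```python
def complaint_status(sends: int, complaints: int) -> str:
    """Flag a campaign's spam-complaint rate against the 0.3% red line.

    0.3% (three per thousand) is the published danger threshold;
    anything above a third of that is worth investigating early.
    """
    rate = complaints / sends
    if rate >= 0.003:
        return f"{rate:.2%} - above 0.3%, reputation is burning"
    if rate >= 0.001:
        return f"{rate:.2%} - elevated, investigate the list source"
    return f"{rate:.2%} - healthy"

print(complaint_status(10_000, 8))   # well under the line
print(complaint_status(10_000, 35))  # over it: act now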
The cleanest way to see what at least one provider thinks of your reputation right now is the Google Postmaster walkthrough. It's free, it's the only direct view Gmail gives you, and you should be checking it weekly.
Third checkpoint — Placement: how Primary, Promotions and spam get sorted
You authenticated cleanly. Your reputation is solid. Your mail is going to land somewhere — but where, exactly? That decision happens once per message, per recipient, and it's the least deterministic part of the whole stack. Two colleagues at the same company can get the same email in two different folders.
Content signals. Subject line, preview text, HTML structure, image-to-text ratio, presence of an unsubscribe link, URL patterns (look-alike domains, link shorteners, weird TLDs like .xyz or .click). Content scanners aren't grading your prose. They're looking for patterns spammers tend to use. Good senders occasionally trip them anyway: an all-image email, a SHOUTY subject line, a wall of links. The cost of tripping a scanner is placement, not delivery. Gmail Promotions tab heuristics
Per-recipient engagement history. Has this specific recipient opened your mail before? Moved you to Primary? Replied? Mailbox providers personalise placement on a per-user basis. The same email from the same sender can land in Primary for an engaged recipient and Promotions for a dormant one. Both placements are correct, given what each recipient has signalled.
Tab and folder classifiers. Gmail's Promotions, Updates, Social and Forums tabs. Outlook's Focused/Other split. These are machine-learning models trained on what commerce and marketing email tends to look like. Landing in Promotions is correct classification for most lifecycle mail. Chasing Primary is usually a wasted afternoon: people who engage with Promotions still see your mail there, and the ones who don't engage wouldn't engage from Primary either. Optimise for opens, not for the tab.
Where BIMI sits — the reputation play that looks like a placement play
BIMI — Brand Indicators for Message Identification. The little verified logo you see next to a sender's name in Gmail or Apple Mail. Functionally, BIMI sits awkwardly between checkpoints — above reputation, below placement. It doesn't affect whether your email lands in the inbox; it affects how it looks once it does. To display a BIMI logo you need three things: DMARC at quarantine or reject (not just none), in most cases a Verified Mark Certificate (VMC, basically an SSL certificate for your logo, costing around $1,500 a year), and your logo published in a specific SVG format at a specific DNS record.
BIMI's real value isn't the logo itself. The trust signal is doing the work — the recipient sees confirmation that the message is genuinely from that brand, and the implicit confirmation that you're running DMARC properly enough to qualify in the first place. Industries where trust drives open rates (finance, healthcare, high-ticket ecommerce) get the most out of it. For a small operator sending fortnightly newsletters, the VMC fee outpaces the lift. BIMI setup in detail
When placement tanks: walking the checkpoints in the right order
The most expensive deliverability mistake operators make is diagnosing the wrong checkpoint. A reputation problem doesn't get fixed by rewriting subject lines. Walk it bottom-up.
Open rates fell off a cliff this week. The team is in a panic. Here's the order to actually check things in, not the order it tends to feel like checking them:
Identity check — five minutes, free. Send a test email through a header-checking tool (mail-tester.com or mxtoolbox both do this, both free). Do SPF, DKIM and DMARC all pass on that test? Are they aligned with your visible From: address? If anything fails, fix it before doing anything else. None of the higher-up checkpoints matter until identity is clean.
Reputation check — Google Postmaster, a day of data. Is domain reputation showing as medium or high? Is your spam complaint rate under 0.3%? Has sending volume been steady, or did you just spike? When reputation is down, the fix is slow and unsexy: throttle sends down to your most-engaged segments only, stop pulling in addresses from whichever source caused the problem, wait two to four weeks for the score to climb back. No subject-line trick speeds this up. Full reputation recovery playbook
Placement check — A/B with different content, only after the first two are clean. If a heavy-image creative lands in Promotions and a leaner HTML version lands in Primary, the problem is content and placement. Most of the time this is fine and Promotions is exactly where lifecycle mail belongs. But if you're getting spam-foldered from a clean-reputation domain, content is your last suspect. Strip the email back to plain HTML, rerun, see if placement recovers.
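The three checks above reduce to a short decision function. This is an illustrative sketch only: the input names are hypothetical stand-ins for what you'd read off a header-checking tool and Google Postmaster, and the thresholds echo the figures in this guide rather than any provider's internal rules.

```python
def triage(auth_pass: bool, aligned: bool, domain_rep: str,
           complaint_rate: float, volume_spike: bool) -> str:
    """Walk the three checkpoints in order, report the first failure.

    auth_pass / aligned: from a header-checking test send.
    domain_rep ("high"/"medium"/"low"), complaint_rate, volume_spike:
    from Google Postmaster. All names here are illustrative.
    """
    # Checkpoint 1: identity. Nothing above it matters until this is clean.
    if not (auth_pass and aligned):
        return "identity: fix SPF/DKIM/DMARC alignment first"
    # Checkpoint 2: reputation. The fix is slow: throttle, segment, wait.
    if domain_rep == "low" or complaint_rate >= 0.003 or volume_spike:
        return "reputation: throttle to engaged segments, wait 2-4 weeks"
    # Checkpoint 3: placement. Only now is content the suspect.
    return "placement: A/B leaner content, check Promotions vs Primary"

print(triage(True, True, "medium", 0.001, False))
```

The point of encoding it is the ordering: the function physically cannot tell you to rewrite subject lines while authentication is broken.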
The Deliverability Management skill bakes this diagnostic order into the workflow so you walk the checkpoints in sequence rather than in whichever one feels urgent at 4pm on a Friday.
Closing the loop: did any of this actually move money?
Deliverability infrastructure isn't the goal. It's the precondition. Land in the inbox and the rest of your lifecycle work has a chance to matter. Land in spam and nothing else you do counts. Two more disciplines close the loop between "we sent the email" and "the email caused a sale".
Incrementality testing. The only honest way to know whether a lifecycle program causes revenue is to randomly hold it back from a slice of your audience and compare. Recipients who got the email vs those who didn't — that's your incremental lift. Everything else is correlation in a fancy hat. Incrementality test design
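The lift calculation itself is one line of arithmetic. A minimal sketch, assuming you've randomly held out a slice of the audience and can count buyers in each group; the numbers below are invented for illustration.

```python
def incremental_lift(treat_buyers: int, treat_n: int,
                     hold_buyers: int, hold_n: int) -> float:
    """Relative incremental lift from a holdout test.

    Treated conversion rate minus holdout conversion rate,
    expressed as a fraction of the holdout rate.
    """
    treat_rate = treat_buyers / treat_n
    hold_rate = hold_buyers / hold_n
    return (treat_rate - hold_rate) / hold_rate

# 2.4% of emailed users bought vs 2.0% of the held-out slice
lift = incremental_lift(2_400, 100_000, 200, 10_000)
print(f"{lift:.0%} incremental lift")  # 20% incremental lift
```

In practice you'd also want a significance test on the two rates before trusting a lift this small, but the core comparison is exactly this.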
Attribution models. Once you know a touchpoint matters, attribution is the imperfect art of dividing credit across the four other channels the recipient touched before they bought. The honest argument isn't about which model is correct. It's about which model's biases match the question you're asking right now. Attribution models in lifecycle
Identity, reputation, placement, measurement. That's the whole arc. Every other guide in the deliverability section answers a specific question inside one of those four boxes. Once you can place the question in the box, the answer becomes findable instead of mysterious, and most arguments about "why are we in spam" collapse into a quick walk down the checkpoints, in order.
Frequently asked questions
- If I have good SPF/DKIM/DMARC, why is my mail going to spam?
- Authentication is the first checkpoint — necessary but not sufficient. Spam placement almost always traces to the second checkpoint, reputation. Check Google Postmaster Tools for your domain reputation, complaint rate, and sending history. A domain that authenticates perfectly but has weak engagement signals or elevated complaint rates still gets spam-foldered. Fixing that is a 2-4 week project of sending less, to more engaged recipients only, until signals recover.
- I don't send enough email to have IP reputation. Does any of this apply to me?
- Yes, but mostly via domain reputation rather than IP. Even at 1,000 sends a month, your domain accumulates reputation signals. At low volume you'll typically be on a shared IP, so IP reputation is pooled across all tenants of that IP (good senders lift you, bad senders drag you). Focus on what you control: clean authentication, consistent sending, good engagement from your list. At your volume, IP warmup isn't a thing you need to worry about.
- How do I know if my reputation has been damaged before it's too late?
- Watch Google Postmaster Tools daily if you send to Gmail addresses at any scale. The leading indicators — domain reputation, IP reputation, spam rate — move before inbox placement does. If you see domain reputation drop from high to medium, you have maybe a week before inbox placement follows. That's the window to act. Waiting until bounces or inbox-placement tools show a problem is usually too late.
- Does BIMI actually lift engagement or is it just a vanity project?
- Measured lifts are small — typically 1-3% open rate improvement vs no logo — but BIMI's real value is signalling. It confirms to sophisticated recipients that you run DMARC properly (a trust signal for finance, healthcare, high-value ecommerce categories) and it gets you into Gmail's and Apple Mail's sender-trust visual treatments. Whether that ROI justifies the VMC cost (~$1,500/year) depends on volume and brand. For most mid-size senders: worth it. For tiny operators: wait.
- DMARC p=reject — should every sender eventually move there?
- Yes, but slowly and in order: monitor (p=none) → quarantine (p=quarantine) → reject (p=reject). Don't skip quarantine — that's the step where you catch legitimate sources (marketing tools, email-from-CRM, internal automations) you didn't know were sending on your behalf. Moving to p=reject without a clean quarantine period will quietly black-hole legitimate mail from services your team has set up over the years. DMARC is the one deliverability change where patience is a virtue.
Related guides
Google Postmaster Tools: a walkthrough for people who actually send email
Postmaster Tools is the single most valuable free deliverability tool and most programs either ignore it or misread the charts. Here's what each tab actually says, what to act on, and what to stop looking at.
Gmail Promotions tab: is landing there actually bad?
The Promotions tab has been a marketing bogeyman since 2013. The honest answer: it's usually fine, sometimes a problem, and the fix is almost never 'try to escape the tab'. Here's how to think about it.
Email deliverability — the practitioner's guide
Deliverability isn't a setting. It's the running total of every send decision you've made since you bought the domain. Four pillars hold it up. Break one and the whole program starts leaking.
List hygiene: the six-rule policy
List hygiene isn't cleanup; it's a continuous policy that runs automatically. Here's the six-rule policy every lifecycle program should have written down, each tied to a specific deliverability outcome.
Spam complaints: the playbook for detecting and reducing them
Spam complaints are the hardest-hitting negative reputation signal in email. They compound faster than bounces and recover slower. This is the playbook — what actually triggers them, how to catch them early, and the four levers that reliably bring the rate back down.
Domain vs IP reputation: which one actually matters
Deliverability reputation is two parallel scores, not one. IP and domain behave differently, recover differently, and the balance between them has shifted hard toward domain over the past five years. Here's what that means for what you monitor and how you warm up.