The lifecycle audit — a 30-point checklist
The first question worth asking any lifecycle team is when the last audit ran. The honest answer is usually either never or the last time something broke loudly enough that someone noticed. Programs don't fail in one spectacular event — a canvas (the visual flowchart your ESP uses for an automated journey) stops firing, a segment definition drifts, a naming convention slips, and six months later the reporting stops being trustworthy and nobody can pinpoint when it happened. This is the checklist that catches it early. Thirty points, four categories, run quarterly.

By Justin Williames
Founder, Orbit · 10+ years in lifecycle marketing
If you're new here: what a lifecycle audit actually is, and why it matters
Picture the lifecycle program at a typical mid-stage company a year after launch. Someone built a welcome series in month one. Three months later a different person added a win-back. The next quarter someone tried a price-drop trigger and never quite finished it. Nobody's touched any of those flows since — they're running, the dashboards still look fine at a glance, and the team has moved on to the next project. That is the entire setup for a lifecycle audit.
An audit is a structured walk-through of every running program — the deliverability signals (whether your emails are reaching inboxes versus spam folders), the data feeding the triggers, the program logic itself, and the operational hygiene around naming and ownership. The goal isn't to be exhaustive. The goal is to find the things that have silently broken since the last time anyone looked, before they show up as a revenue dip nobody can explain.
Three hours, four categories, thirty points. Same checklist whether you're running it on a six-month-old startup program or a ten-year-old enterprise instance. Newcomer's on-ramp ends here; the rest of the guide is the list itself, plus what to do with the output.
How to run it without burning three days
An audit that takes three days never gets done. An audit that takes three hours, every quarter, is the one that catches drift before it becomes damage.
The thirty points cover four categories: deliverability, data integrity, program health, operational hygiene. Run all thirty on a quarterly cadence. First pass takes three to four hours. Every pass after that lands in 60 to 90 minutes because you already know where to look. Grade each on a simple traffic light — green (healthy), amber (watch), red (fix this quarter). The most senior lifecycle person owns the audit, even if they delegate individual checks, because a junior sign-off won't carry the weight needed to unblock the cross-functional issues — data engineering changes, customer-service flows — that always surface.
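If you keep the grades in code rather than a spreadsheet, the bookkeeping is small. Here is a minimal sketch of one way to hold it, with the Grade values mirroring the traffic light above; the field names are illustrative, not any particular tool's schema:

```python
from collections import Counter
from dataclasses import dataclass
from enum import Enum

class Grade(Enum):
    GREEN = "green"  # healthy
    AMBER = "amber"  # watch
    RED = "red"      # fix this quarter

@dataclass
class AuditPoint:
    number: int        # 1-30, matching the checklist below
    category: str      # deliverability | data | program | hygiene
    description: str
    grade: Grade
    note: str = ""     # where to look first next quarter

def summarise(points: list[AuditPoint]) -> dict[str, Counter]:
    """Roll the thirty grades up per category for the quarterly report."""
    summary: dict[str, Counter] = {}
    for p in points:
        summary.setdefault(p.category, Counter())[p.grade.value] += 1
    return summary
```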
One cadence wrinkle worth naming. Programs still stabilising, or ones that just survived a deliverability incident (a bounce spike, a sudden drop in inbox placement, a blocklist flag), should run monthly until they stop finding new reds. Mature programs settle into quarterly. The discipline is the recurrence, not the frequency.
The Orbit Lifecycle Audit skill automates the bulk of it — pulls current state from Braze (the customer-engagement platform a lot of mid-to-late-stage teams use to run their programs), flags anomalies against baselines, produces the structured report. Use this list as the spec whether you run it by hand or by skill.
Are your emails actually reaching inboxes? (8 points)
Deliverability is the bit that decides whether anything else in this audit matters. If your sends are landing in spam folders, no clever trigger logic saves the program. These eight checks tell you whether the postal service still trusts you; the thresholds for the first three are encoded in a short sketch after the list.
1. Bounce rate over the last 30 days — the share of sends that came back undeliverable. Under 2% green, 2–5% amber, 5%+ red.
2. Spam complaint rate — recipients hitting the "mark as spam" button. Under 0.1% green, 0.1–0.3% amber, 0.3%+ red. Covered in the complaints playbook.
3. Unsubscribe rate by program. Flag any program over 0.5% per send — that's the threshold where a single program is actively eroding your list rather than nurturing it.
4. SPF, DKIM, DMARC intact across every sending subdomain. Those three are the email authentication standards that prove to mailbox providers your mail is really from you and not a spoofer. Covered in the authentication guide.
5. DMARC reports reviewed in the last 30 days. The reports tell you whether anyone unauthorised is signing mail as your domain. If nobody's read them, nobody knows.
6. Google Postmaster Tools reputation — Google's view of how trustworthy your domain looks to Gmail. High or Medium is healthy. Low or Bad is red.
7. Microsoft SNDS status across all sending IPs. SNDS is the Smart Network Data Services dashboard — Outlook's equivalent of Postmaster Tools. Same logic: green flags only.
8. Dormant user suppression active — users inactive 90+ days excluded from marketing broadcasts. Sending to people who never open is the fastest way to teach mailbox providers your mail is unwanted.
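Points 1 to 3 reduce to a pair of small helpers. A sketch with the thresholds taken straight from the list; the function names are illustrative:

```python
def grade_rate(value: float, amber_at: float, red_at: float) -> str:
    """Traffic-light a rate against the checklist thresholds."""
    if value >= red_at:
        return "red"
    if value >= amber_at:
        return "amber"
    return "green"

def unsubscribe_flag(rate_per_send: float) -> bool:
    """Point 3: flag any program above 0.5% unsubscribes per send."""
    return rate_per_send > 0.005

# Point 1 (bounce) and point 2 (complaints), rates as fractions of sends:
grade_rate(0.013, amber_at=0.02, red_at=0.05)     # -> "green"
grade_rate(0.0015, amber_at=0.001, red_at=0.003)  # -> "amber"
```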
Is the data feeding your triggers still trustworthy? (7 points)
Programs trigger off events, attributes, and segments. When any of those drift quietly, the program keeps firing — just on the wrong people, at the wrong moment, or not at all. These seven checks are the data-integrity layer.
9. Lifecycle stage populated on every user profile. Lifecycle stage is the field — typically new, activated, engaged, at-risk, churned — that decides which programs a user qualifies for. Flag anyone stuck in null or an undefined value.
10. Activation event still firing at expected weekly volume versus baseline. The activation event is the thing the user does that signals they've actually started getting value — first purchase, first connection, first export, whatever your team decided counts. If the volume drops without a product reason, the instrumentation has broken upstream.
11. Every segment on an active program has updated in the last seven days (for rolling segments) or matches its spec (for static ones). A segment is the audience definition — "users who purchased in the last 30 days but not the last 7" — that the program targets.
12. Every custom attribute referenced by a live campaign actually exists on profiles. Custom attributes are the user-level fields your team adds beyond the platform defaults — plan_tier, last_purchase_date. Drift here breaks personalisation silently. The Braze Data Model Validation skill catches drift automatically.
13. Event taxonomy hasn't drifted — events that were once-per-session are still once-per-session. Analytics teams change instrumentation without telling lifecycle teams. This is where you'll find the evidence.
14. Random Bucket Number distribution is uniform across the expected range. RBN is a number Braze (and most platforms) auto-assigns each user — used to split audiences for A/B tests and holdouts (the random group you don't message, your control). If the distribution is skewed, your sampling is silently broken and every test result on top of it is suspect. (A uniformity check is sketched after this list.)
15. Test-user suppression is in place for internal emails and employee accounts. Otherwise your own staff is in the numerator and denominator of every metric.
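Point 14 is checkable with a standard goodness-of-fit test. A sketch using scipy; the 0 to 9,999 range matches Braze's random bucket numbers, but treat it as a parameter if your platform differs:

```python
from collections import Counter
from scipy.stats import chisquare

def rbn_looks_uniform(bucket_numbers: list[int], n_buckets: int = 10_000,
                      alpha: float = 0.01) -> bool:
    """Chi-squared goodness-of-fit: are users spread evenly across the
    bucket range? Skew here quietly invalidates every split test."""
    counts = Counter(bucket_numbers)
    observed = [counts.get(b, 0) for b in range(n_buckets)]
    _, p_value = chisquare(observed)  # default expectation is uniform
    return p_value >= alpha           # high p-value: no evidence of skew
```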
Are the programs themselves doing what they're meant to? (9 points)
Now the programs themselves. This is where most teams find their biggest drift. Triggers stop firing, sequences run on the wrong audience, success metrics quietly stop being tracked. These nine are the programmatic checks.
16. Every active canvas has sent in the last 30 days. Canvases that haven't fired are either broken or orphaned — either way, pause until validated. This is the single most common find. Programs get built, launched, and forgotten. Six months later they're either firing with broken logic or contributing to silent reputation drag. (A scripted version of this check follows the list.)
17. Every scheduled broadcast has a named owner. Anonymous sends are how a one-off promo from 18 months ago ends up still going out weekly because nobody knows it's theirs to switch off.
18. Onboarding email #1 open rate above 40% on Apple-Mail-excluded audiences. (Apple's Mail Privacy Protection pre-fetches images and inflates opens, so Apple users are the noisy half of the chart — exclude them and the number is honest.) The first onboarding send going to people who explicitly signed up is the highest-intent moment in the funnel; anything below 40% there usually means subject-line drift or a deliverability problem dressed as engagement weakness.
19. Win-back converts above your baseline reactivation floor (program-specific number). The win-back is the program targeting users who used to engage and stopped; the floor is whatever rate you saw the first three months it was live, before novelty decayed.
20. Abandoned cart trigger fires within 60 minutes of the event. Latency over that is a broken pipeline, not a cadence choice. The user has already moved on.
21. Frequency cap configured on every marketing broadcast — and respected by triggered programs. The cap is the platform-level limit on how many sends a single user can get in a window; without it, your most engaged users are also your most over-mailed.
22. Post-activation sequences actually fire for newly activated users. Check the transition between "new" and "activated" isn't silently failing — people are activating, but the next program in the chain never picks them up.
23. Churned and sunset users excluded from all marketing sends, including one-off broadcasts that live outside the main program flow. The exclusion is usually wired into automated programs and forgotten on broadcasts.
24. Every program has a defined success metric recorded somewhere real — not open rate, but the business metric the program is meant to move. If win-back is graded on opens instead of reactivated revenue, the program will look like it works long after it has stopped working.
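Point 16 is the one most worth scripting if you run the audit by hand. A sketch against Braze's documented REST API (/canvas/list and /canvas/data_series); the response field names and the per-request series cap are worth verifying against current docs for your cluster, and treating zero entries as "hasn't sent" is this sketch's assumption:

```python
import os
from datetime import datetime, timedelta, timezone

import requests

BASE = "https://rest.iad-01.braze.com"  # swap in your cluster's endpoint
HEADERS = {"Authorization": f"Bearer {os.environ['BRAZE_API_KEY']}"}

def canvas_entries(canvas_id: str, days: int) -> int:
    """Total Canvas entries over the last `days` days, walking backwards
    in chunks because data_series caps the window of a single request."""
    total, cursor, remaining = 0, datetime.now(timezone.utc), days
    while remaining > 0:
        chunk = min(remaining, 14)  # per-request cap at time of writing
        resp = requests.get(
            f"{BASE}/canvas/data_series", headers=HEADERS, timeout=30,
            params={"canvas_id": canvas_id, "ending_at": cursor.isoformat(),
                    "length": chunk},
        ).json()
        total += sum(day.get("total_stats", {}).get("entries", 0)
                     for day in resp.get("data", {}).get("stats", []))
        cursor -= timedelta(days=chunk)
        remaining -= chunk
    return total

def idle_canvases(days: int = 30) -> list[str]:
    """Point 16: names of Canvases with zero entries in the window."""
    canvases = requests.get(f"{BASE}/canvas/list", headers=HEADERS,
                            timeout=30).json().get("canvases", [])
    return [c["name"] for c in canvases
            if canvas_entries(c["id"], days) == 0]
```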
Could the next person actually inherit this? (6 points)
25. Naming convention compliance — off-convention campaigns under 10% of the live portfolio. The convention is the agreed format for naming campaigns and segments so they're sortable and filterable; once compliance slips past 10%, search and reporting both start lying. See the naming guide. (A compliance check is sketched after this list.)
26. Content Blocks: no duplicates, no stale blocks unused in 90+ days, every block has a named owner. Content Blocks are reusable snippets — header, footer, promo banner — referenced from inside campaigns; duplicates mean two people edit different copies of the same thing and one always falls behind.
27. Segments: no unused segments older than 90 days. Archive them. Old segments are how new hires find five candidates for "active users" and pick the wrong one.
28. Templates: every active template renders cleanly in Gmail, Outlook, Apple Mail, and dark mode. Outlook is the one that breaks; dark mode is the one nobody checks.
29. Every program has a brief — purpose, triggers, audience, success criteria. The Program Brief skill catches the missing ones. A program without a brief is a program nobody can defend in a roadmap meeting.
30. Handoff documents exist for any program launched by someone who has left. Knowledge in one person's head is a program at risk.
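Point 25 becomes a one-regex check once the convention is written down. A sketch; the pattern here is an invented example convention, not a recommendation:

```python
import re

# Illustrative convention: <team>_<program>_<yyyymm>_<description>,
# e.g. "crm_winback_202406_lapsed_90d". Swap in your own pattern.
CONVENTION = re.compile(r"^[a-z]+_[a-z]+_\d{6}_[a-z0-9_]+$")

def convention_compliance(campaign_names: list[str]) -> float:
    """Share of live campaigns matching the naming convention.
    Point 25 flags the portfolio when this drops below 0.9."""
    if not campaign_names:
        return 1.0
    hits = sum(1 for name in campaign_names if CONVENTION.match(name))
    return hits / len(campaign_names)
```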
What to do once you've got thirty grades on a page
Three rules for what comes next. Any red is a same-week fix — deliverability and data-integrity reds are blocking, because they threaten sender reputation or mean triggers are silently firing wrong. Ambers get a plan by the end of the quarter. Greens become next quarter's baseline. The audit compounds in value because you're tracking deltas, not absolute state. Quarter two is faster than quarter one. Quarter four is where the pattern recognition lives — the same red showing up three audits running tells you the fix isn't a fix, it's a structural problem.
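The repeat-red pattern is worth automating once you have a few audits of history. A sketch assuming each audit's grades are kept as a {point_number: grade} dict, oldest first:

```python
def repeat_reds(history: list[dict[int, str]], runs: int = 3) -> list[int]:
    """Points that have graded red in each of the last `runs` audits.
    A point on this list isn't a backlog item; it's structural."""
    recent = history[-runs:]
    if len(recent) < runs:
        return []  # not enough history to call a pattern yet
    return [point for point in recent[0]
            if all(audit.get(point) == "red" for audit in recent)]
```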
Share the result with stakeholders in whatever format fits — summary memo, Looker dashboard, Notion page. The medium is not the point. The visibility of drift is the point. A program where leadership sees the grade every quarter runs tighter than one where the dashboard is locked to the lifecycle team's Slack channel. And if a red can't be fixed in a week, escalate it. That's what escalation is for.
One last translation tip, because stakeholders respond to it. Convert every red into a dollar figure. "Three red deliverability items at current sending volume put ~$X of monthly revenue at risk" lands differently in a leadership meeting than "we have a bounce rate problem." The numbers are usually already sitting in your revenue model — surface them.
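The translation itself is one line of arithmetic. A sketch; all three inputs are placeholders you'd pull from your own revenue model:

```python
def revenue_at_risk(monthly_email_revenue: float,
                    share_of_sends_affected: float,
                    expected_loss_rate: float) -> float:
    """Back-of-envelope conversion of a deliverability red into dollars.
    Every input is an assumption from your own model, not a benchmark."""
    return (monthly_email_revenue * share_of_sends_affected
            * expected_loss_rate)

# e.g. $400k/mo email-attributed revenue, 30% of sends on the affected
# domain, 20% expected conversion loss while reputation recovers:
revenue_at_risk(400_000, 0.30, 0.20)  # -> $24,000/month at risk
```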