The lifecycle metrics dashboard: what to track, what to ignore
Picture the Monday review most lifecycle teams sit through. Thirty tiles on the screen. The team nods. Nobody changes anything. That gap — between metrics tracked and decisions made — is why most lifecycle dashboards are decoration. They reassure stakeholders. They don't inform decisions. The dashboard worth building is the opposite shape: eight metrics that trigger real actions, and everything else lives in ad-hoc reports where it belongs. (Lifecycle marketing, briefly: the email, push, and SMS programs that move users from signup through activation, retention, and win-back.)

By Justin Williames
Founder, Orbit · 10+ years in lifecycle marketing
The Monday-morning test: would anyone actually do anything about it?
A metric belongs on the dashboard if, and only if, a change in it would trigger a specific action. Otherwise it's context — useful in an ad-hoc report, noise on a weekly review.
Run that test honestly across your current dashboard and most of it falls off the screen. Open rate — the share of recipients who opened the email — sits there for months; nobody ever says "open rate dropped three points, so we're changing X." The number is wallpaper. Everyone nods anyway because it's Monday and there's a meeting. Name that failure mode out loud and the rest of the dashboard cleanup gets easier.
Why eight, specifically? Every metric on a dashboard is a claim on attention, and past eight the team stops really looking at any of them. Eight is generous, not strict. The dashboard's job is the action it provokes, not the completeness it performs — and a wall of forty tiles performs completeness while provoking nothing.
The eight metrics that earn their tile
Each one comes with what it is, why it's on the list, and the specific action a move in it should trigger. Inline glosses cover the jargon the first time it shows up — once. A minimal sketch of the trigger checks follows the list.
1. Active audience (weekly). Users who received at least one marketing email this week AND engaged — opened or clicked. This is the base that everything downstream sits on; revenue, retention, and growth metrics are all built from it. If it drops, you have an engagement or sending problem before you have anything else. Investigate first.
2. Revenue per send (rolling 7-day). Total attributed revenue divided by total sends for the week. Dividing by sends keeps the comparison apples-to-apples even when volume swings — a big broadcast week and a quiet one become directly comparable. Action trigger: a 20%+ drop week-over-week without a volume explanation means something has gone wrong in the audience or the content.
3. Spam complaint rate (30-day rolling). Complaints divided by emails delivered, over a thirty-day window. Mailbox providers — Gmail, Outlook, Yahoo, the systems that decide whether you reach the inbox at all — watch this number obsessively. Action trigger: 0.3% requires immediate intervention. The complaints playbook has the rest.
4. Unsubscribe rate (per-send). Unsubscribes divided by emails delivered for the last broadcast — kept per-send rather than rolling, because the question this answers is "did this specific campaign annoy people?", not "is the program drifting?" Action trigger: a specific send above 0.5% signals audience-content mismatch on that campaign, not the whole program.
5. List growth: net subscribers this week. New subscribes minus unsubscribes minus hard bounces — delivery failures from dead or invalid addresses. Negative weeks are fine when you're pruning a stale list on purpose. Sustained negative months mean an acquisition or retention gap you need to name.
6. Activation rate (new cohort). Percentage of users hitting your activation event — the moment a new signup becomes a real user, whatever you've defined that as — within seven days. The program's leading indicator: it moves first, and downstream retention follows it weeks later. Action trigger: a three-point drop in a single weekly cohort means investigate the onboarding flow today, not next sprint.
7. Thirty-day retention (cohort). Percentage of a signup cohort still active at day 30. The trailing indicator that retention work is paying off — it confirms what activation suggested a month ago. Moves slowly; check monthly, not weekly, or the noise drowns out the signal.
8. Gmail domain reputation. Pulled from Gmail Postmaster Tools — Google's free dashboard that tells senders how Gmail rates their sending domain. Action trigger: any movement to Low or below, regardless of what the other seven metrics say. Reputation is the single number that gates whether anything else you do reaches an inbox.
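To make those triggers concrete, here is a minimal sketch of the weekly checks in Python. The thresholds come straight from the list above; the dataclass, field names, and messages are illustrative assumptions, not a prescribed implementation.

```python
from dataclasses import dataclass

@dataclass
class WeeklyMetrics:
    active_audience: int           # engaged recipients this week (metric 1)
    revenue_per_send: float        # attributed revenue / sends, rolling 7 days (metric 2)
    complaint_rate: float          # complaints / delivered, 30-day rolling (metric 3)
    unsub_rate_last_send: float    # unsubscribes / delivered, last broadcast only (metric 4)
    activation_rate: float         # share of the new cohort activating within 7 days (metric 6)

def fired_triggers(this_week: WeeklyMetrics, last_week: WeeklyMetrics) -> list[str]:
    """Return the action triggers described above that fired this week."""
    actions: list[str] = []
    if this_week.active_audience < last_week.active_audience:
        actions.append("Active audience down: investigate engagement or sending first.")
    if last_week.revenue_per_send > 0:
        drop = 1 - this_week.revenue_per_send / last_week.revenue_per_send
        if drop >= 0.20:
            actions.append("Revenue per send down 20%+ week-over-week: check audience and content.")
    if this_week.complaint_rate >= 0.003:        # 0.3%
        actions.append("Complaint rate at 0.3%: immediate intervention, run the complaints playbook.")
    if this_week.unsub_rate_last_send > 0.005:   # 0.5%
        actions.append("Last broadcast unsubscribed above 0.5%: audience-content mismatch on that campaign.")
    if last_week.activation_rate - this_week.activation_rate >= 0.03:  # three points
        actions.append("Activation down 3+ points in one cohort: investigate onboarding today.")
    return actions
```

Wire the equivalent into whatever already produces the Monday view, so the standup opens with the list of fired triggers rather than a wall of tiles.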
The metrics that feel essential — and aren't
If you'd need to ask "what do I do with that?" after reading the number, the number doesn't belong on the dashboard. Push it into a campaign-specific report or an ad-hoc analysis where context can do the work — not into a tile that competes for attention with the eight metrics that will actually trigger something.
The painful one to drop is open rate. It's been broken since 2021, when Apple introduced Mail Privacy Protection (MPP) — a feature that pre-loads email images, tracking pixels included, for Apple Mail users. Open tracking counts an image load as an open, so MPP inflates opens with machine activity, and the inflation ratio varies by audience. Your open rate now mixes real opens with bot opens at an unknowable ratio. The Apple MPP guide covers why it's still a fine A/B test proxy (the inflation hits both arms equally) and a poor primary dashboard metric.
Revenue attribution raises a similar question — pick one model, stick with it, and keep the dashboard consistent. Last-click in a 7-day window is the default for email and it's defensible. Measure true incrementality — the lift you'd lose if the program stopped, the only honest answer to "is this working?" — separately, via holdout tests. (A holdout is a randomly selected group held back from messaging as a control, the way a clinical trial works.) Don't try to bake real incremental revenue into a daily dashboard. Consistency beats precision when you're tracking trends.
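The holdout readout itself is just a difference in means between messaged users and the held-out control over the same window. A minimal sketch, assuming you can pull per-user revenue for both groups; the function and argument names are hypothetical.

```python
def holdout_readout(treated_revenue: list[float], holdout_revenue: list[float]) -> dict[str, float]:
    """Estimate incremental revenue from a messaging holdout.

    treated_revenue / holdout_revenue: per-user attributed revenue over the
    same window, for messaged users vs. the randomly held-out control group.
    """
    treated_mean = sum(treated_revenue) / len(treated_revenue)
    holdout_mean = sum(holdout_revenue) / len(holdout_revenue)
    lift_per_user = treated_mean - holdout_mean
    return {
        "treated_mean": treated_mean,
        "holdout_mean": holdout_mean,
        "lift_per_user": lift_per_user,
        # Scale per-user lift to the messaged audience for a program-level figure.
        "program_incremental_revenue": lift_per_user * len(treated_revenue),
    }
```

Pair the point estimate with a significance test before it goes anywhere near finance; the sketch above is only the arithmetic.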
One dashboard, three timeframes
One dashboard view, toggling timeframes, beats three separate dashboards. Same eight metrics, different granularity. The view changes; the canon doesn't.
Weekly: metrics 1–5 — active audience, revenue per send, complaint rate, unsub rate, list growth. Fast-moving numbers that respond to campaign changes within days.
Monthly: metrics 6–8 — activation, retention, domain reputation. Slower-moving; at weekly granularity the noise drowns out the signal.
Quarterly: cohort analysis (tracking groups of users by signup month over time), retention curves, revenue attribution readouts, holdout results. This is the work that explains why the dashboard metrics moved at all — the weekly tiles tell you something happened, the quarterly view tells you what. A minimal cohort-table sketch follows this list.
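A minimal sketch of that cohort table, assuming an export with one row per user-activity day; the frame and column names are illustrative, not a prescribed schema.

```python
import pandas as pd

# Illustrative sample data; swap in your own event export.
events = pd.DataFrame({
    "user_id":       [1, 1, 2, 2, 3],
    "activity_date": pd.to_datetime(["2024-01-05", "2024-02-10", "2024-01-20", "2024-03-02", "2024-02-15"]),
    "signup_date":   pd.to_datetime(["2024-01-01", "2024-01-01", "2024-01-15", "2024-01-15", "2024-02-01"]),
})

events["cohort"] = events["signup_date"].dt.to_period("M")
events["months_out"] = (
    (events["activity_date"].dt.year - events["signup_date"].dt.year) * 12
    + (events["activity_date"].dt.month - events["signup_date"].dt.month)
)

cohort_sizes = events.groupby("cohort")["user_id"].nunique()
active = events.groupby(["cohort", "months_out"])["user_id"].nunique().unstack(fill_value=0)
retention = active.div(cohort_sizes, axis=0)  # share of each cohort still active N months out
print(retention.round(2))
```

Rows are signup-month cohorts, columns are months since signup, and cells are the share of the cohort still active; each row read left to right is that cohort's retention curve.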
The leading-versus-lagging split matters here. Leading indicators — activation, complaint rate, unsub rate, revenue per send, domain reputation — move fast and predict downstream impact. Lagging indicators — thirty-day retention, net list growth, total revenue — confirm what the leading ones were already saying 30 to 90 days ago. Weight current decisions toward the leading ones. By the time the lagging ones move, the change you'd make has already been made (or missed).
Making the dashboard part of how the team actually works
A dashboard is only useful if someone looks at it on a schedule. Build a review cadence into the team's operating rhythm and it becomes a tool. Skip the cadence and it becomes wallpaper — pretty, ignored, occasionally pointed at in an exec deck.
Monday 15-minute standup. Review the weekly dashboard. One person presents changes in the five weekly metrics; if any triggered actions, assign and move on. No discussion theatre, no extended commentary on numbers that haven't moved.
Monthly review, first Monday of the month. Forty-five minutes on the slower metrics plus last month's action items. Output is two or three priorities for the month. Not ten.
Quarterly business review. Cohort analysis, retention curves, experiment readouts, retrospective on what actually moved. The output shapes next quarter's roadmap — which is the entire point of running this rhythm.
Two questions come up constantly. First: should you split dashboards by channel — email, push, SMS? Only if the audiences or goals genuinely differ. For most programs, rolling all channels into the same eight metrics with a channel filter is better than maintaining parallel dashboards. The team should be thinking about "the program", not "the email program versus the push program."
Second: if your current dashboard has thirty metrics and the team is asking for more, audit each one against the action test. If it moved 20% this week, what specifically would you do? Remove everything where the honest answer is "we'd look into it" or "nothing specific." You'll cut at least 70%. Move what you cut into a monthly analytical report where context can carry the weight a dashboard tile can't. A sketch of that audit follows.
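If you want to run the audit mechanically, here is an illustrative sketch: pair every tile with the specific action its movement would trigger, and anything whose answer is blank gets demoted to the monthly report. The tiles and actions below are examples, not your dashboard.

```python
# Illustrative audit against the action test; tiles and actions are examples.
dashboard = {
    "revenue_per_send":    "20%+ WoW drop: audit recent sends and audience changes",
    "spam_complaint_rate": "At 0.3%: pause non-critical sends, run the complaints playbook",
    "open_rate":           "",   # honest answer: "we'd look into it"
    "click_to_open_rate":  "",   # no specific action
    "emails_sent":         "",   # volume is context, not a decision
}

keep = sorted(tile for tile, action in dashboard.items() if action.strip())
demote = sorted(tile for tile, action in dashboard.items() if not action.strip())

print("Keep on the dashboard:", keep)
print("Move to the monthly report:", demote)
```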
Eight metrics. One operating rhythm. Everything else in a report nobody opens unless something on the dashboard moved.
A companion guide covers the quarterly review format and how to structure readouts that influence prioritisation, not merely inform it.