Updated · 9 min read
Churn cohort analysis: the one chart that tells you if retention is actually improving
Picture the slide that always seems to land in the quarterly review: "We retain 60% of users at 30 days." Compared to what? Improving, stable, sliding? Driven by a flood of new signups that haven't had time to leave yet, or by actual lifecycle work? A single retention number — what fraction of users are still using the product N days after signup — can't answer any of those. The cohort retention curve can. If you only get to put one chart on the wall, this is the chart. Here's how to build it properly, what it actually tells you, and where it stops being useful.

By Justin Williames
Founder, Orbit · 10+ years in lifecycle marketing
If you're new here: what a cohort actually is, and what the chart shows
Picture every user who signed up in the same week, treated as a single group. That group is a cohort — people who entered the product at the same moment, tracked together over time. The cohort retention curve plots, for each weekly group, the percentage still active 1 week later, then 2, 4, 12, 26, and 52. Each cohort is one line on the chart. Time-since-signup is the x-axis. Simple shape, enormous information density.
What you end up with: a stack of overlapping curves that drop sharply in the first weeks and flatten over time. The drop is churn — users leaving and not coming back, the thing every lifecycle program is built to slow down. The point of the chart is not any single line; it's the comparison between them at the same age. "At week 4, the January cohort was at 42% retained; the April cohort was at 48%." That sentence is the entire job. Everything else on the chart is detail.
Cohort curves are the only view that lets you see program improvement as it happens. Every other metric compresses time and confuses new-user effects with lifecycle effects.
One choice to make up front: how wide is each cohort? Weekly cohorts for fast-moving consumer products. Monthly for slower-moving or B2B SaaS — software sold to other businesses, where people use it during work hours and signup volume is lower. Weekly is noisier but catches changes faster. Monthly is smoother but can hide a regression that happened three weeks ago. Most programs settle on monthly for executive review and weekly for the team's working view.
Two kinds of churn, and why mixing them up wrecks the chart
Before reading any curve, separate two things that get casually conflated. Logo churn is the percentage of users or accounts that leave — count of customers, full stop. Dollar churn is the percentage of revenue that leaves — same idea, weighted by what each account paid you. They tell different stories.
A SaaS business can lose 10% of its logos and still grow revenue, because the leavers were on the cheap plan and the survivors are upgrading. The third motion is contraction — accounts that stay but spend less (downgraded a plan, dropped seats, cut usage). Contraction never shows up in a logo cohort chart at all; it only appears in a revenue cohort. If your enterprise customers are quietly cutting seat counts every quarter, your logo retention will look glorious while the actual book of business shrinks.
Default to logo cohorts for product retention questions. Add a revenue cohort the moment you have multiple price tiers or any expansion/contraction motion. For subscription businesses, you genuinely need both — the two charts side by side are the whole story. Pick one definition of churn per chart and label the chart accordingly. Mixing logo and dollar in the same view is how programs end up with quarterly reviews that contradict themselves.
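To make the distinction concrete, here's a minimal SQL sketch — not a production query — assuming a subscriptions table with one row per account, an mrr column, a start_date, and a churned_at date that's NULL while the account is still active. All names are illustrative:

```sql
-- Minimal sketch: logo churn vs dollar churn for one month (April 2024).
-- Assumed schema: subscriptions(account_id, mrr, start_date, churned_at).
WITH april_start AS (
    SELECT
        account_id,
        mrr,
        CASE WHEN churned_at >= DATE '2024-04-01'
              AND churned_at <  DATE '2024-05-01' THEN 1 ELSE 0 END AS churned_in_april
    FROM subscriptions
    WHERE start_date < DATE '2024-04-01'
      AND (churned_at IS NULL OR churned_at >= DATE '2024-04-01')   -- active entering April
)
SELECT
    SUM(churned_in_april)       * 1.0 / COUNT(*)   AS logo_churn,    -- share of accounts lost
    SUM(churned_in_april * mrr) * 1.0 / SUM(mrr)   AS dollar_churn   -- share of revenue lost
FROM april_start;
```

Note what this deliberately doesn't capture: contraction. An account that cuts seats never sets churned_at, so surfacing it takes a period-over-period MRR comparison — which is exactly why the revenue cohort view earns its place alongside the logo view.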
What each part of the curve is actually telling you
You've got the chart open. Four things to look at, in this order.
The first-week drop. Where most attrition happens — "attrition" meaning the share of users who simply stop coming back. A cohort at 100% on day 0 and 40% by day 7 has lost 60% in the first week. That's the activation window: the period where new users either find the value or quietly leave. Improvements to onboarding and welcome flows show up here first, usually within four weeks of launch.
The 30-day curve shape. Continued steep decline through 30 days means activation stuck but engagement didn't — they showed up, didn't come back. Flattening by day 14 means the people who survived the first week are largely sticking. Two cohorts with the same 30-day retention can have completely different shapes, and the shape is what tells you where the program is working and where it isn't.
The asymptote. The flat line cohort retention approaches over months — the floor the decline eventually settles at. Cohorts flattening at 15% means you have a 15% "committed user" base that rarely churns. Cohorts that keep declining past month 6 mean ongoing attrition even among established users — which points at product value or re-engagement work, not onboarding.
Cohort-to-cohort comparison. Stack recent cohorts against older ones at the same age. Newer above older means improvement. Newer below older means regression. This is the single most valuable comparison on the chart and the one most programs forget to make explicit.
Worth saying plainly: industry retention benchmarks are nearly useless. Numbers vary wildly by category, and comparing your product to a public company with a different business model is a recipe for bad decisions. The honest comparison is always cohort-to-cohort within your own program.
When the average is lying — slicing by channel, action, segment
A cohort chart by week of signup is the default view. The richer one stratifies — splits each cohort into sub-groups so you can see where the average is hiding something. Three cuts that consistently earn their keep:
Acquisition channel. The path users came in through — paid social, organic search, referral, etc. Paid social cohorts typically retain worse than organic. Referred users typically retain best. If your blended retention is stable but the channel mix has shifted toward paid, your real retention is declining and the blend is politely hiding it. That's the quietest way a lifecycle program can lose ground without anyone noticing.
First-action experience. Users whose first meaningful action was X versus Y often show wildly different retention curves. This is where product-led retention wins come from — find the "aha moment" (the action that flips a curious signup into a sticking user) and shape the first-week flow around reaching it.
Geography or plan tier. If your product varies meaningfully by region or plan, cohorts along those dimensions show where to invest. Blended "just fine" retention often hides one high-retention segment quietly funding a low-retention one.
The aha moment guide covers how to identify the first-action that drives retention. Once you find it, that's usually the primary cohort stratification for the rest of the program's life.
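In tools like Amplitude or Mixpanel, stratifying is a one-click group-by on the cohort view. In SQL it's one extra column carried through the cohort query (a fuller base query is sketched in the build section below). A minimal illustration of the first cut — acquisition channel — assuming a pre-built per-user activity table with hypothetical names:

```sql
-- Assumes an intermediate table cohort_user_weeks(cohort_week, acquisition_channel,
-- user_id, weeks_since_signup, cohort_size): one row per week a user was active,
-- where cohort_size is the number of signups in that week and channel.
-- All names are illustrative; the base query in the build section produces a
-- similar per-user shape before its final aggregation.
SELECT
    cohort_week,
    acquisition_channel,
    weeks_since_signup,
    COUNT(DISTINCT user_id) * 1.0 / MAX(cohort_size) AS retention
FROM cohort_user_weeks
GROUP BY cohort_week, acquisition_channel, weeks_since_signup
ORDER BY cohort_week, acquisition_channel, weeks_since_signup;
```

One retention curve per channel per cohort week — which is what exposes a blended number that's stable only because the channel mix is shifting.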
Where the chart stops earning its keep
Every chart has a job description. This one's job is narrower than it looks.
One blind spot: resurrections — users who go quiet and come back later. A user who signed up in January, went dormant in February, and re-engaged in April is absent from the January cohort's week-12 retention but present in the quarterly active-user count. If your win-back work (the campaigns aimed at lapsed users) is producing meaningful resurrection, a separate "resurrection cohort" chart captures it cleanly. Stacking it alongside the retention curve is the honest picture.
One worry comes up in nearly every quarterly review: "why are newer cohorts sometimes worse than older ones?" Three suspects in rough priority order — acquisition-channel shift toward lower-quality traffic, a product regression between cohorts, and seasonal effects. Stratify by channel to rule out the first, annotate the timeline with product changes to identify the second, and compare year-over-year to isolate the third.
Building it: the tool, the definition, the one thing not to change
Two ways to build the chart. Most modern analytics stacks support cohort analysis natively — Amplitude, Mixpanel, and Heap all have the view built in, no SQL required. For SQL-native teams, the query is about 30 lines: one CTE (a Common Table Expression — a named sub-query at the top of a SQL statement that makes the rest readable) for signups grouped by cohort week, another for activity events, then a JOIN on user_id with a time-since-cohort calculation. Not complicated. Just finicky to get right the first time.
The hard part isn't the query. It's the definition of "active." Pick one and stick with it across every cohort. "Active" should mean one specific thing — e.g., "user performed [key product action] in the week ending [date]." Something meaningful, not a login. For a marketplace that might be "viewed a product." For SaaS, "performed core action X." For content, "read an article." Logged-in is too loose; purchased is usually too conservative for early-stage retention. Shifting the definition between runs produces curves that cannot be compared, which defeats the chart's only purpose.
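Here's a Postgres-flavoured sketch of that query, under illustrative table and column names — signups, events, and a 'key_product_action' event standing in for whatever your key action turns out to be:

```sql
-- Illustrative schema: signups(user_id, signup_at), events(user_id, event_name, occurred_at).
-- "Active" is defined once, in the activity CTE, and never changes between runs.
WITH cohorts AS (
    SELECT user_id, DATE_TRUNC('week', signup_at)::date AS cohort_week
    FROM signups
),
activity AS (
    SELECT DISTINCT user_id, DATE_TRUNC('week', occurred_at)::date AS active_week
    FROM events
    WHERE event_name = 'key_product_action'        -- the "active" definition lives here
),
cohort_sizes AS (
    SELECT cohort_week, COUNT(*) AS n_users
    FROM cohorts
    GROUP BY cohort_week
)
SELECT
    c.cohort_week,
    (a.active_week - c.cohort_week) / 7                AS weeks_since_signup,
    COUNT(DISTINCT a.user_id) * 1.0 / MAX(s.n_users)   AS retention
FROM cohorts c
JOIN activity a     ON a.user_id = c.user_id AND a.active_week >= c.cohort_week
JOIN cohort_sizes s ON s.cohort_week = c.cohort_week
GROUP BY c.cohort_week, (a.active_week - c.cohort_week) / 7
ORDER BY 1, 2;
```

To stratify it — by acquisition channel, first action, or plan tier — the same extra column gets added to the cohorts CTE, the cohort_sizes grouping, and the final GROUP BY.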
Worth noting a useful variant: cumulative revenue per cohort. Instead of "percent still active at week N," plot "cumulative revenue per user at week N." A flattening line tells you revenue is saturating — your average user has bought what they're going to buy. A continued steep climb tells you users keep spending. Pair it with the user retention curve to separate a stable-revenue base from a growing-spend-per-user dynamic. For subscription businesses, those two charts together are basically the whole board report.
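A sketch of that variant under the same illustrative naming, assuming an orders table with an amount column:

```sql
-- Illustrative schema: signups(user_id, signup_at), orders(user_id, amount, ordered_at).
-- Plots cumulative revenue per signed-up user at each week of cohort age.
WITH cohorts AS (
    SELECT user_id, DATE_TRUNC('week', signup_at)::date AS cohort_week
    FROM signups
),
cohort_sizes AS (
    SELECT cohort_week, COUNT(*) AS n_users FROM cohorts GROUP BY cohort_week
),
weekly_revenue AS (
    SELECT
        c.cohort_week,
        (DATE_TRUNC('week', o.ordered_at)::date - c.cohort_week) / 7 AS weeks_since_signup,
        SUM(o.amount) AS revenue
    FROM cohorts c
    JOIN orders o ON o.user_id = c.user_id AND o.ordered_at >= c.cohort_week
    GROUP BY 1, 2
)
SELECT
    w.cohort_week,
    w.weeks_since_signup,
    SUM(w.revenue) OVER (PARTITION BY w.cohort_week
                         ORDER BY w.weeks_since_signup) * 1.0 / s.n_users
        AS cumulative_revenue_per_user          -- flattens when the cohort stops spending
FROM weekly_revenue w
JOIN cohort_sizes s ON s.cohort_week = w.cohort_week
ORDER BY w.cohort_week, w.weeks_since_signup;
```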
The quarterly review where the chart actually pays for itself
Picture the quarterly business review where leadership asks, "is the lifecycle program working?" That meeting is where cohort curves earn their keep. The structure that works:
1. Current quarter cohorts stacked against the prior four quarters, at the same cohort ages.
2. Highlight any cohort meaningfully above or below trend. Discuss what changed.
3. For cohorts above trend: what did we do that we should keep doing?
4. For cohorts below trend: what happened, and what's the remediation?
5. Decide next quarter's priority lifecycle work based on where the curves say leverage lives.
One timing reality worth calling out, because it catches people: different levers show up at different cohort ages. Onboarding improvements read in week-1 retention within four weeks. Win-back changes read in 90–180-day retention within 3–6 months. Systemic retention work — product value, re-engagement programs — needs 6–12 months of cohort data before you can confidently say the curve moved. Build the review cadence around that reality, not around the quarterly calendar.
Done well, that review uses cohort curves as the foundation of quarterly roadmap decisions. Programs that don't look at cohorts drift into reactive, campaign-level thinking within two quarters. Every time.
Frequently asked questions
- What is cohort analysis?
- Cohort analysis groups customers by a shared entry event (sign-up month, first purchase) and tracks their behaviour over time. Unlike aggregate retention, cohort analysis exposes when customers churn (month 1 is usually the worst), whether retention is improving cohort-over-cohort, and where lifecycle programs deliver real lift. The canonical cohort chart is a triangle: rows = cohorts, columns = months since start, cells = percent still active.
- How do I build a cohort retention curve?
- Pick the entry event (usually sign-up or first-purchase date), bucket users by the month of that event, then for each cohort compute the percentage still active at month 1, 2, 3, etc. A user is "still active" if they performed the retention-defining behaviour in that month — for subscription, paid; for e-commerce, purchased; for engagement, logged in. Most ESPs and BI tools have cohort templates, but computing it manually in SQL with GROUP BY cohort_month + months_since_start works too.
- What's a healthy cohort retention rate?
- Depends heavily on category. For subscription SaaS, m3 retention above 85% is healthy; below 70% signals product-market-fit issues. E-commerce is harder to benchmark because purchase frequency varies. Consumer apps see steep early drop-off with long plateaus — Instagram-style apps hit 30% m1 and stabilise there as healthy. The shape of the curve matters more than any single number — a flat 60% is healthier than a steep 80% falling to 30%.
- How does cohort analysis differ from aggregate retention?
- Aggregate retention averages across all cohorts, hiding whether the signal is improving or degrading. If last month's new cohort has 50% worse day-30 retention than the cohort from a year ago, the aggregate retention number barely moves (it's still averaged with all the older healthier cohorts), but the trajectory is collapsing. Cohort views make this immediately visible. Every serious lifecycle program reports cohort-first, aggregate-second.
- How often should I review cohort data?
- Monthly at minimum. Weekly for programs in active iteration. The important check is whether each new cohort's early-month retention is beating the previous cohort's — that's the leading indicator of whether your onboarding / activation / retention work is compounding or flat. If six months of cohorts all show identical m3 retention, your lifecycle programs aren't moving the needle.
Related guides
Holdout group design: the incrementality tool most lifecycle programs skip
Without a holdout, lifecycle ROI is attribution-model guesswork with a spreadsheet. With one, you get a defensible number you can actually put in front of finance. Here's how to size, run, and read a holdout — and the three mistakes that quietly invalidate the result.
Attribution models for lifecycle: which one to defend in which room
Attribution debates are half epistemology, half politics. Last-touch is wrong but defensible. Multi-touch is more accurate but less defensible. Incrementality is the only one that answers the causal question — and it's the slowest. Here's which model to use for which question, and why.
Measuring AI personalisation lift honestly
Every vendor case study shows AI personalisation moving the numbers. Most internal post-mortems show the lift evaporating once a proper holdout is in place. The gap between the two is the measurement methodology. Here's the framework for proving — to yourself, your CFO, and the auditor — whether AI personalisation is actually earning its place.
The lifecycle metrics dashboard: what to track, what to ignore
Most lifecycle dashboards show forty metrics and answer none of the questions the team actually has. A good one shows eight, and each one tells you what to do next. Here's the eight-metric dashboard that runs a real lifecycle program.
Reporting lifecycle to executives: the monthly update that actually lands
Most lifecycle reporting to execs is a 20-slide deck of campaign-level charts that nobody remembers a week later. The fix is structural, not quantitative. Three numbers, two decisions, one ask. Here's how to build the report that produces ongoing investment instead of polite nods.
Retention economics: proving lifecycle ROI to finance
Lifecycle programs get deprioritised when they can't defend their impact in dollars. The four models that keep the budget — LTV, payback, cohort retention, incrementality — and the four-slide pattern that wins a CFO room.