Updated · 10 min read
Segmentation strategy: beyond RFM
Picture the lifecycle marketer on a Monday morning, looking at two million users, deciding what to send them this week. They can't send the same email to everyone — the person who signed up last night needs a welcome, the buyer who came back yesterday needs a thank-you, the user who hasn't logged in for three months needs winning back. Segmentation is the act of cutting that list into groups that should be talked to differently. Most programs solve it with RFM — Recency, Frequency, Monetary — and stop there. Which is fine, until you notice RFM can describe the past in crisp detail and predict the future about as well as a coin flip. This guide walks the segmentation stack a real lifecycle program needs — stage, tier, intent — and how to build it without the segment list exploding into triple digits.

By Justin Williames
Founder, Orbit · 10+ years in lifecycle marketing
Why every lifecycle team reaches for RFM first — and where it quietly stops working
Back to the marketer at the spreadsheet. Two million users, one inbox each, and a single decision to make: who hears what this week. Segmentation — the act of cutting a user list into groups that should be talked to differently — is the tool that makes that decision tractable instead of paralysing.
The cut almost everyone reaches for first is RFM — Recency, Frequency, Monetary. Score every user on how recently they engaged, how often they buy, and how much they spend, then drop them into a grid. It became the default for two reasons: the three dimensions are independently predictive of future behaviour, and you can explain the model to a CMO in under two minutes without diagrams. Both are still true. RFM was formalised in the direct-response marketing literature in the 1990s, and the approach has proven durable because it maps to three independent behavioural dimensions (source: Harvard Business Review, "Identifying and Evaluating the Best Customers in a Direct Marketing Framework", hbr.org).
Here's where it falls over. RFM describes past behaviour, full stop. Two users with identical scores can be on completely different trajectories — one accelerating into power-user territory, one quietly decelerating toward churn — and the model can't tell them apart. Lifecycle decisions need the direction of travel, not just the current position.
The second crack: non-transactional products. Monetary value is a weak signal on a content platform and literally nonexistent on a free product. Programs that try to patch this by bolting "session count" or "content views" onto the M end up with a fragile Frankenstein that pretends to be RFM and behaves like a homegrown activity score nobody can defend in a quarterly review. The honest move is to stop calling it RFM and build the layers that actually answer the question.
What you actually need to know about a user before you press send
Position tells you what the user already did. Direction of travel tells you what they'll do next. Real lifecycle segmentation captures both.
Forget frameworks for a second. Picture one user. To decide what to send them tomorrow, you need three answers — and they're different questions, even though most teams squash them into one segment. Where are they in their relationship with the product? How intensely do they use it? What did they just do that's worth reacting to? Three layers, three different jobs.
Layer 1 — Lifecycle stage. Every user sits in exactly one stage: new, activated, engaged, at-risk, lapsed, churned. Mutually exclusive (you're only ever in one), collectively exhaustive (every user is in some stage). Stage is the single most consequential filter in almost every lifecycle program — it decides whether someone gets an onboarding email or a winback. Misclassify a user here and the whole program miscommunicates with them. It's the most expensive segmentation error you can make, and the one most programs make quietly for years.
Layer 2 — Engagement tier. Within a stage, users vary by intensity. Power, regular, light, inactive. Tier is where RFM finally earns its keep — recency, frequency and value are the right inputs when you're ranking intensity within a stage, rather than using them as the whole model. Two activated users behave differently if one logs in daily and one logs in monthly; tier is what catches that.
Layer 3 — Behavioural intent. Recent actions that signal what the user is about to do. Added an item to cart. Viewed pricing twice this week. Invited a teammate. Opened a feature for the first time. Intent segments — short-lived audience cuts based on a recent action — last 24 to 72 hours and always sit on top of stage and tier. They trigger campaigns; they don't re-categorise the user.
Stage decides which program family the user is in. Tier decides how intensely you talk to them. Intent decides what's worth triggering right now. Keeping those jobs separate is what keeps the segment count manageable instead of exploding into the hundreds.
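The three jobs can be sketched as a small routing function. Everything here is illustrative, the field names, program names and cadences are assumptions for the sketch, not a real data model:

```python
# Illustrative schema: field names, program names and cadences are
# assumptions for the sketch, not a real data model.
PROGRAMS = {
    "new": "onboarding", "activated": "feature_adoption",
    "engaged": "retention", "at-risk": "re_engagement",
    "lapsed": "winback", "churned": "exit_survey",
}
CADENCE = {
    "power": "weekly", "regular": "weekly",
    "light": "fortnightly", "inactive": "monthly",
}

def next_message(user):
    # Layer 3 first: a live intent signal triggers a campaign immediately.
    if "viewed_pricing_2x" in user["intent_flags"]:
        return "upgrade_nudge"
    # Layer 1 picks the program family; layer 2 sets the cadence within it.
    program = PROGRAMS[user["lifecycle_stage"]]
    return f"{program}/{CADENCE[user['engagement_tier']]}"
```

The structure is the point: intent is checked first and expires quickly, while stage and tier are stable attributes the rest of the decision hangs off.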
The meeting where three teams disagree on what "activated" means — and what to do about it
The most common segmentation disaster isn't a sophisticated modelling failure. It's stage ambiguity. "Activated" means one thing to product, something different to growth, something different again to finance. Picture the meeting: three teams, three definitions, all confident, all going to ship campaigns this quarter against different versions of the same word. Unless the definition is written down and operationalised, users flicker between stages based on whichever team's definition got applied last. The downstream mess is severe — onboarding emails firing at activated users, winbacks firing at engaged ones, every stakeholder quietly losing faith in the numbers.
Fix it by defining stage as a derived attribute — a user field that's computed from raw events rather than typed in by hand — in your CDP or Braze, with a single canonical rule. New = signed up within the last N days OR hasn't yet taken the activation event. Activated = took the activation event AND hasn't crossed the lapsed threshold. At-risk = between engagement threshold and lapsed threshold. Lapsed = past the lapsed threshold. Churned = cancelled or deleted. Each threshold is a number you tune for your product. For infrequent-usage products the thresholds change shape entirely — the lifecycle for flat products guide covers that case.
How often should you recompute stage? Daily for most programs. Event-driven where transitions need to be immediate — onboarding flows that should move a user to "activated" the moment the activation event fires, for example. A nightly batch is fine for everything downstream of that.
The non-negotiables: one canonical definition, enforced in code. No team overrides. The Orbit CRM Data Model skill covers the data architecture this requires — the derived attributes, the event taxonomy, and the rules that keep stages consistent when three different teams all want to tweak the definition.
How the Braze segment list goes from twelve to four hundred (and how to stop it)
The trap is so common it's almost a rite of passage. A lifecycle lead opens Braze — an enterprise messaging platform covering email, push, in-app, the lot — sees how easy segments are to make, and starts making one for every combination. Six lifecycle stages multiplied by four engagement tiers equals twenty-four segments before a single product filter has been added. Add three intent flags and you're past a hundred. That list is unmaintainable inside a quarter and unreadable inside two.
Braze gives you three tools that look interchangeable and aren't: segments, custom attributes, and triggered campaigns. Segments are saved audience definitions you target campaigns at. Custom attributes are per-user fields stored on the profile (e.g. lifecycle_stage = "activated"), usually derived in your CDP and populated via API. The pattern that scales: store lifecycle stage and engagement tier as custom attributes on the user profile, not as segments. Create segments only for the combinations your campaigns actually target — usually 10 to 15 named segments, built from those attributes plus any additional filters.
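The attribute push looks roughly like this, a sketch assuming Braze's /users/track REST endpoint (the REST endpoint URL is per-instance, and the attribute names here are your own schema, not anything Braze mandates):

```python
import json
import urllib.request

# Sketch assuming Braze's /users/track REST endpoint. The REST endpoint
# URL is per-instance; attribute names are your own schema.
BRAZE_REST = "https://rest.iad-01.braze.com"
BRAZE_API_KEY = "YOUR-REST-API-KEY"

def build_attribute_payload(users):
    # Stage and tier live on the profile as custom attributes, so every
    # segment can filter on them instead of duplicating the logic.
    return {
        "attributes": [
            {
                "external_id": u["external_id"],
                "lifecycle_stage": u["stage"],   # e.g. "activated"
                "engagement_tier": u["tier"],    # e.g. "light"
            }
            for u in users
        ]
    }

def push_lifecycle_attributes(users):
    # /users/track accepts batched attribute objects; check Braze's
    # current per-request limits before relying on a batch size.
    req = urllib.request.Request(
        f"{BRAZE_REST}/users/track",
        data=json.dumps(build_attribute_payload(users)).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {BRAZE_API_KEY}",
        },
        method="POST",
    )
    return urllib.request.urlopen(req)
```

Keeping the payload builder separate from the HTTP call makes the derivation logic testable without touching the network, which is where the canonical-definition discipline actually gets enforced.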
Intent segments are almost always better as Currents (Braze's real-time event-streaming product) or webhook-triggered campaigns than as persistent segments. The difference matters: a persistent segment is a list that gets recomputed on a schedule; a triggered campaign fires off the event itself, in seconds, and doesn't accumulate as a segment to maintain.
The mental model: one user has one lifecycle stage at any moment, but that same user belongs to many segments at the same time. The stage is a filter inside nearly every segment, not a segment itself. Once you internalise that distinction the segment list stops growing on its own.
The Orbit Namer generates segment names from a consistent convention so the list stays scannable six months in. The Braze Segment Analysis skill audits existing segments for overlap, redundancy, and the long tail of segments nobody uses but nobody wants to delete either.
Predictive scoring — when it earns its keep, and when it's a science fair project
The most ambitious layer, and the one most commonly overbuilt. Predictive scores — a model's estimate of a future outcome, like churn risk, likelihood to purchase, or propensity to upgrade — sound like the obvious next step once basic segmentation is humming. They're sometimes worth it. Often they're not, and the people building them rarely admit which is which.
A predictive score earns its keep only if three conditions all hold: you have enough data volume for a meaningful model (100K+ users, or a longer observation window for low-traffic products), you can act on the score inside the lifecycle program in a way you couldn't with a rule, and the prediction is meaningfully more accurate than the simple rule-based proxy you'd use otherwise.
If any of those three is missing, skip it. A rule-based "at-risk" segment — users whose session cadence dropped more than 50% over the last 30 days, for example — usually outperforms a half-trained ML model at a tiny fraction of the operational cost. The people building half-trained ML models rarely admit this, which is part of why the problem persists.
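That rule is cheap to express in code. A sketch, assuming session history arrives as a list of day numbers (an illustrative input shape, not a real schema):

```python
def is_at_risk(session_days, today):
    """Rule-based at-risk flag: session cadence dropped by more than 50%
    comparing the last 30 days to the 30 days before that.
    session_days is a list of day numbers on which the user had a
    session (an illustrative input shape, not a real schema)."""
    recent = sum(1 for d in session_days if today - 30 <= d < today)
    prior = sum(1 for d in session_days if today - 60 <= d < today - 30)
    if prior == 0:
        return False  # no baseline: new or long-dormant users belong to stage logic
    return recent < 0.5 * prior
```

Ten lines, no training pipeline, no model drift to monitor, and the threshold is a number anyone in the quarterly review can interrogate.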
When predictive scoring does pay off, integrate it as an attribute on the Braze profile rather than a native Braze segment. Keeps model ownership in the CDP or warehouse where it belongs. Decouples the model's update cadence from Braze's segment refresh cycle. Lets you version and roll back without rebuilding segments. Starting here instead of inside Braze saves you from a messy migration eighteen months later.
Frequently asked questions
- What is RFM segmentation?
- RFM (Recency, Frequency, Monetary) segmentation is a customer-scoring framework that bins each customer on three ordinal dimensions: Recency (days since last purchase), Frequency (transactions in a fixed window), and Monetary (total spend in the same window). Each dimension gets a 1-5 score, and the combination produces bands like "champions" (555), "loyalists" (544), "at-risk" (155). It's the oldest serious customer-segmentation model, simple to compute without any ML, and maps cleanly to lifecycle triggers.
- What comes after RFM segmentation?
- Behavioural segmentation layered on top of RFM — segments defined by the actions customers take (not just when/how much/how often they spend). Examples: "browses-category-but-never-buys", "high-LTV-single-product-buyer", "reactivated-after-6-month-dormancy". These behavioural segments drive more targeted programs than RFM's general bands because the trigger behaviour maps directly to the program's goal. Predictive/propensity models (churn risk, purchase propensity, LTV forecast) come after that, once you have enough data to train them.
- How many segments should a lifecycle program have?
- Fewer than most teams think. The right number is the number you can actually build differentiated programs for — usually 5-12 total across the RFM + behavioural layers. A program with 40 segments is a program where 35 of them have identical messaging because nobody had time to write separate copy. Build 6 sharp programs before you build 20 mediocre ones.
- Is RFM still useful after Apple MPP broke open rates?
- Yes — RFM doesn't depend on opens. Recency is based on purchase / transaction recency (not email recency), frequency is based on transactions, and monetary is revenue-based. None of these are contaminated by Apple MPP's pixel pre-fetching. The dimensions that DO need updating post-MPP are engagement-based segments ("opened in last 30 days") — replace those with click-based or multi-signal composite scores.
- How often should segments be recomputed?
- Daily for behavioural segments (so yesterday's purchase moves a customer out of "browse abandoned" today). Weekly or monthly for RFM bands unless volume is very high — quarterly recomputation is too slow and misses the fast drift around lifecycle inflection points. Most modern ESPs support daily re-segmentation natively.
This guide is backed by an Orbit skill
Related guides
Browse all
AI personalisation at scale: the architecture that actually works
Every ESP now sells an AI personalisation layer. Most teams turn it on and quietly notice the lift is smaller than the sales deck promised. The model isn't the problem — the plumbing underneath is. Here's the data, content and activation stack that decides whether AI personalisation moves revenue or just moves dashboards.
Lifecycle marketing for flat products
The standard lifecycle playbook assumes weekly engagement and tidy stage progression. Most real products aren't shaped like that. This is how to design lifecycle — the messaging program that nudges users through their relationship with a product — for things people use once a year, once a quarter, or whenever they happen to need you. The textbook quietly makes those programs worse.
Predictive models in lifecycle: churn, propensity, and recommendations without the magic
Predictive models in lifecycle are mostly three things: churn risk, conversion propensity, and product recommendations. Each one earns or loses its place based on whether its score actually changes a decision. Here's the operator view of what's worth deploying, what to expect from ESP-native suites, and when to build your own.
VIP customer lifecycle: how to treat the 5% of users who drive 40% of revenue
Your highest-value customers need a different lifecycle than everyone else. Most programs send them the same broadcast cadence as a cold signup. Here's the VIP-specific flow: what to send, what to skip, and how to protect the relationship that's paying the bills.
What is lifecycle marketing? A field guide for operators starting from zero
If you're new to CRM and lifecycle, the field reads like a pile of acronyms and vendor demos. It's actually one simple idea executed across five canonical programs. Here's the frame that makes the rest of the library make sense.
Retention economics: proving lifecycle ROI to finance
Lifecycle programs get deprioritised when they can't defend their impact in dollars. The four models that keep the budget — LTV, payback, cohort retention, incrementality — and the four-slide pattern that wins a CFO room.
Use this in Claude
Run this methodology inside your Claude sessions.
Orbit turns every guide on this site into an executable Claude skill — 63 lifecycle methodologies, 91 MCP tools, native Braze integration. Free for everyone.