Lifecycle marketing for flat products
Almost every lifecycle guide on the internet is secretly about Duolingo. Daily engagement, obvious stages, a fresh behavioural signal every Tuesday. Now try running that playbook on tax software. Travel booking. Insurance. Gift cards. The usage shape is flat, lumpy, seasonal, or once-a-year — and a weekly-optimisation framework dropped onto it produces the worst of both worlds: more messages, less relevance, faster unsubscribes. This is the guide for the other product shape. The one nobody writes about.

By Justin Williames
Founder, Orbit · 10+ years in lifecycle marketing
Why the standard playbook quietly breaks on a once-a-year product
The standard playbook assumes a product whose usage gives you something to work with constantly. A flat product doesn't.
Picture the lifecycle program at a typical consumer app. Users open the thing two or three times a week. They move from new, to activated (they've hit the first "aha" moment), to engaged, along a predictable arc. Behavioural signal — clicks, sessions, transactions, the data trail a user leaves behind — is fresh every few days, so segmentation is always current. Onboarding is a first-week job. Retention is a weekly metric. Win-back, the sequence that fires when a user goes quiet, kicks in at 30 or 60 days. Almost every lifecycle guide on the internet is built on top of that picture.
A flat product is the opposite picture. Annual, seasonal, event-driven, transactional — the usage is rare enough that behavioural signals go sparse for months at a time. Stages blur because the product doesn't surface enough events to separate them. There's nothing to activate the user into; there's just the next use. Retention is a yearly measurement, and by the time you can calculate it the levers that influenced it are long gone.
Apply the standard playbook to a product shaped like that and it produces noise. A week-one onboarding sequence on a product used once a year converts badly — the user isn't coming back this week regardless of what you send. A 60-day win-back on a 9-month cycle is just static, and the user who receives it reads you as genuinely out of touch with how they use the product. That's not a copy problem. That's a wrong-framework problem.
Before you write a single email — describe the usage shape
The first move on a flat product isn't writing copy. It's writing an honest, one-paragraph description of how people actually use the thing. Four questions, in order. The answers determine almost everything else.
What's the natural usage cycle? Tax has an annual spike. Travel is seasonal. Insurance is event-driven — something happens, then a flurry of activity, then quiet for months. Gift cards pulse around holidays. The cycle shape is the single biggest input into lifecycle design, and everything else falls out of it. Get this wrong and the rest of the program inherits the error.
What non-usage signals do users leave? Logins without a transaction. Support tickets. Settings changes. Reading docs. These are weaker than transaction events — someone reading the FAQ is a softer indicator than someone buying — but they're often the only behavioural data you have between cycles. Instrument them carefully, because this is what your segmentation will actually run on.
How wide is the decision window before a usage event? People don't decide to use an infrequent product on the day of use. They decide in a window that's days, weeks, or months wide — the stretch of time where they're weighing it up, gathering information, comparing options. The lifecycle program's real power lives in that window. Messages outside it don't move behaviour.
What signals the start of a new cycle? Annual renewal. Seasonal trigger. Post-transaction satisfaction that primes the next one. Some products have explicit cycle starts; others need them inferred from softer signals. Either way, the cycle start is the natural re-engagement moment and it deserves disproportionate attention — more than anything else on the calendar.
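The answers to these four questions can be captured as one structured record before any message is written. A minimal sketch in Python — every field name here is illustrative, not a real Orbit schema:

```python
from dataclasses import dataclass

@dataclass
class UsageShape:
    """One honest usage-shape description, as structured fields.
    All names are hypothetical, for illustration only."""
    cycle: str                # e.g. "annual", "seasonal", "event-driven", "holiday-pulse"
    cycle_length_days: int    # typical gap between use events
    non_usage_signals: list   # signals users leave between cycles
    decision_window_days: int # how far before a use event the deciding starts
    cycle_start_signal: str   # what marks the start of a new cycle

# Example: tax software, answered per the four questions above
tax_software = UsageShape(
    cycle="annual",
    cycle_length_days=365,
    non_usage_signals=["login", "doc_view", "support_ticket", "settings_change"],
    decision_window_days=60,
    cycle_start_signal="tax_season_open",
)
```

Writing the record down first forces the team to agree on the cycle shape before any flow gets built on top of it.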
The Orbit Lifecycle Program Design skill opens with exactly these four questions for any program. The output shape of the program falls out of the usage shape rather than being pasted in from a template.
Build the program around the cycle, not the calendar week
The lifecycle program on a flat product is organised around the cycle, not the week. The canonical stages — the named buckets a user moves through — aren't new / activated / engaged / at-risk / lapsed. They're pre-cycle / in-cycle / post-cycle / off-cycle. Each one does a specific job:
Pre-cycle. The decision window — the weeks before the user is likely to use the product again. Messages here make the case for that next use: product updates, reminders of last time, preparation content. Most of the real behaviour change on a flat product happens here, and most programs underinvest in it because the calendar doesn't flash red for a month that has nothing obvious going on.
In-cycle. The transactional moment — the user is in the product, doing the thing. Usage-enabling messages: reminders to finish, cross-sell of adjacent products, handoff to support. Short, high-intent. Higher frequency is tolerated here because the user is actively engaged.
Post-cycle. The window immediately after a use event. The single highest-engagement moment the program will see — users are most willing to give feedback, set preferences, subscribe, share. Most programs squander it on a confirmation email and then go silent for eleven months. Don't.
Off-cycle. The long quiet between cycles. Content is the dominant shape here — helpful, periodic, low-ask touches that keep the brand present without demanding usage the user isn't ready for. The mistake to avoid is transactional-style messaging during off-cycle. It reads as noise and it trains the user to ignore you by the time the next decision window actually opens.
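The four stages can be derived mechanically from a user's last use event plus the cycle parameters. A rough sketch — the window lengths are illustrative defaults, not recommendations:

```python
from datetime import date, timedelta

def lifecycle_stage(today, last_use, cycle_days=365,
                    decision_window_days=60, in_cycle_days=14, post_cycle_days=30):
    """Classify a user into the four flat-product stages from their last use event.
    All window lengths are assumptions; tune them to the product's real shape."""
    next_use = last_use + timedelta(days=cycle_days)  # expected next cycle start
    if last_use <= today < last_use + timedelta(days=in_cycle_days):
        return "in-cycle"    # actively doing the thing
    if today < last_use + timedelta(days=in_cycle_days + post_cycle_days):
        return "post-cycle"  # highest-engagement window, right after use
    if today >= next_use - timedelta(days=decision_window_days):
        return "pre-cycle"   # decision window before the expected next use
    return "off-cycle"       # long quiet between cycles

# A tax-software user who filed on 15 April is off-cycle by September
lifecycle_stage(date(2024, 9, 1), last_use=date(2024, 4, 15))  # "off-cycle"
```

A daily segmentation job over a function like this is enough to route every user into the right message stream without touching the calendar week at all.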
First-use completion is the activation equivalent here — what counts as a user "getting it." A week-one onboarding sequence loses to a program that activates users through their first full cycle, which might take months. The success metric isn't "activated within seven days"; it's "completed a first successful use event" on whatever timeline the product naturally operates on. Define the win at the cycle, not the week.
The dashboard is lying to you — measure on the cycle instead
Weekly open rate on a product that doesn't expect weekly engagement is a noisy metric. Month-over-month retention on an annual-cycle product is meaningless. A flat-product program needs its own measurement shape, usually centred on two numbers: cycle-over-cycle return rate (users who completed the last cycle and also completed this one) and decision-window engagement rate (the share of users engaging during their specific decision window).
Neither shows up on a standard lifecycle dashboard. Both are usually the two most important numbers the program has. Build the reporting to surface them yourself — the dashboard templates assume a daily-active product and they will quietly mislead leadership for years if nobody points it out.
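Both numbers reduce to simple set arithmetic over user IDs — a hedged sketch, assuming you can pull the set of users who completed each cycle and the set currently in their decision window:

```python
def cycle_over_cycle_return(last_cycle_users, this_cycle_users):
    """Users who completed the last cycle AND this one, over last-cycle completers."""
    last, this = set(last_cycle_users), set(this_cycle_users)
    return len(last & this) / len(last) if last else 0.0

def decision_window_engagement(in_window_users, engaged_users):
    """Share of users in their decision window who engaged during it."""
    window = set(in_window_users)
    return len(window & set(engaged_users)) / len(window) if window else 0.0

# e.g. four users completed last cycle, two of them came back this cycle:
rate = cycle_over_cycle_return(["a", "b", "c", "d"], ["b", "c", "e"])  # 0.5
```

The hard part isn't the arithmetic; it's defining "completed a cycle" and "in the decision window" per user, which is exactly what the usage-shape description earlier exists to pin down.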
Define lapse — the point at which you call a user gone — on the cycle too, not on the calendar. An annual product might tolerate two full cycles missed before the user is genuinely lost. A quarterly product, two or three. The right threshold is the point where cycle-over-cycle return probability drops materially, not a round 60-day number inherited from a weekly product. Same logic applies to win-back: trigger on cycle signals, not elapsed time. A win-back that fires 60 days after last engagement is noise on a quarterly product. One that fires when the user misses a natural cycle-start trigger — renewal window opens, nothing happens — is signal.
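Cycle-based lapse and win-back triggers can be sketched the same way — the two-cycle threshold and the 30-day grace period below are illustrative, not prescriptions:

```python
from datetime import date, timedelta

def missed_cycles(last_use, today, cycle_days):
    """Whole cycles elapsed since the last completed use event."""
    return max(0, (today - last_use).days // cycle_days)

def is_lapsed(last_use, today, cycle_days, lapse_threshold_cycles=2):
    """Call a user lapsed only after missing N full cycles (threshold is an assumption)."""
    return missed_cycles(last_use, today, cycle_days) >= lapse_threshold_cycles

def winback_due(last_use, today, cycle_days, grace_days=30):
    """Fire win-back when the natural cycle-start window opened and nothing
    happened — not at a fixed elapsed-time threshold like 60 days."""
    cycle_start = last_use + timedelta(days=cycle_days)
    return cycle_start + timedelta(days=grace_days) <= today \
        < cycle_start + timedelta(days=cycle_days)
```

The elapsed-time number never appears as a trigger on its own; it only matters relative to where the user sits in their cycle.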
The Retention Economics skill covers LTV (lifetime value) and payback modelling for long-cycle products specifically, where monthly cohort curves don't apply and you need cycle-aligned modelling instead. The cadence guide covers how send frequency also has to shift on a flat product.
Three habits worth killing on day one
Three patterns that turn up in nearly every flat-product program inherited from a weekly playbook. Kill them early.
Sending frequent messaging to fill a calendar. Weekly newsletters to users who engage with the product annually train those users to unsubscribe. If there's nothing cycle-relevant to say, silence is a feature — put the budget toward decision-window depth instead of off-cycle frequency.
Running win-back on standard thresholds. A 60-day no-engagement window is meaningless for a quarterly product. Define lapse on the cycle, not the calendar, every time.
Reporting against engagement metrics that don't match the cycle. If leadership is staring at monthly open rates on an annual-cycle product, the numbers will look awful for eleven months a year and fine for one. The program will appear broken when it isn't. Agree the reporting shape — cycle-over-cycle return, decision-window engagement — before you have to explain it in a QBR (quarterly business review) that's already gone sideways.
The shortest version of this whole guide: write down the usage shape, design four stages around it, measure on the cycle, and pour the budget into the decision window. Everything else is either inherited error or a habit from the wrong playbook.