Quarterly planning for lifecycle: what actually goes in the plan
Picture the scene. The lifecycle team — the folks who run the welcome flows, the win-back emails, the trial-conversion sequences — sits down for quarterly planning and walks out with a Google Doc listing 40 campaigns. Tidy. Comprehensive. Not a plan. By week six, half the items have moved, three urgent things have landed that nobody saw coming, and the original document is fossilised in a Drive folder nobody opens. A real plan looks different: fewer items, more specific, tied to numbers, with actual decisions rather than activities.

By Justin Williames
Founder, Orbit · 10+ years in lifecycle marketing
Why most quarterly plans are actually calendars in disguise
A calendar answers "what are we sending, when". A plan answers "what are we trying to achieve, what's in the way, what specifically will we do about it". One of these survives a changing quarter. The other is a wall decoration.
Test for a real plan: if priorities shift halfway through the quarter, does the document still serve you? Calendars become garbage. Plans become the thing you reprioritise within.
Most lifecycle teams drift into calendars because calendars feel productive. You can fill them in. You can show a stakeholder a grid of campaigns and look organised. The trouble: the calendar doesn't help the team decide which of 20 possible campaigns actually matters. The plan does. If you've never had to defend why one campaign got built and another didn't, you've probably been working off a calendar.
The five sections every plan needs — and what each one is for
Two pages. Five sections. That's the whole shape. Walk through them in order — each one earns its keep by answering a question the next section depends on.
Section 1: Where we are. One page of current state — the diagnostic before the prescription. Active audience size, revenue per send, complaint rate (the percentage of recipients who hit "mark as spam"), 30-day retention, deliverability health (whether your emails are landing in the inbox or spam). Compared to last quarter. What moved. What didn't. What's concerning. Without this, the priorities have nothing to anchor to. A sketch of the arithmetic follows the five sections.
Section 2: The three priorities for the quarter. Not ten. Not thirty. Three. Each priority is stated as a metric-level goal, not an activity. "Lift trial-to-paid conversion from 12% to 15%" is a priority. "Ship a trial email flow" is an activity. The distinction matters: goals survive activity pivots, activities don't. If the welcome flow you planned turns out to be the wrong intervention, the goal is still the goal — you swap the activity, not the target.
Section 3: For each priority, the plan. Three to five specific investments per priority. These are the activities — ship the trial flow, run the welcome test, improve the product reminder. Every activity maps to a priority it serves. Anything that doesn't map is a distraction dressed up as work, and you can tell by reading the document.
Section 4: What we're explicitly not doing. Five to ten things that will come up during the quarter and are NOT on the plan — requests from other teams, tempting campaign ideas below the priority bar, low-value maintenance work. Documenting the no-list stops you re-litigating it every time someone walks over with a "quick favour". The no-list is doing as much load-bearing work as the yes-list.
Section 5: What we'll learn regardless. Two to three experiments — controlled tests on your own list — that produce insight whether they "win" or not. Separate from the priorities. Learning investments. "Test whether send-time optimisation produces real lift on our list" is the shape. The point isn't to win the test, it's to know something next quarter you don't know now.
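As promised under Section 1, here's a minimal sketch of the quarter-over-quarter arithmetic. Every field name and figure here is hypothetical; swap in whatever your ESP actually exports.

```python
# Hypothetical Section 1 diagnostics. All figures are illustrative, not benchmarks.

def complaint_rate(complaints: int, delivered: int) -> float:
    """Share of delivered recipients who hit 'mark as spam'."""
    return complaints / delivered

def revenue_per_send(revenue: float, sends: int) -> float:
    return revenue / sends

last_q = {"delivered": 480_000, "complaints": 620, "revenue": 96_000.0, "sends": 500_000}
this_q = {"delivered": 510_000, "complaints": 940, "revenue": 99_500.0, "sends": 540_000}

for label, q in (("last quarter", last_q), ("this quarter", this_q)):
    rate = complaint_rate(q["complaints"], q["delivered"])
    rps = revenue_per_send(q["revenue"], q["sends"])
    print(f"{label}: complaint rate {rate:.3%}, revenue/send ${rps:.3f}")

# A rising complaint rate alongside flat-to-falling revenue per send is
# exactly the "what's concerning" signal Section 1 exists to surface.
```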
Picking the three priorities — the bit most teams get wrong
Most programs have more priority candidates than capacity. Everyone wants to fix everything. Three filters narrow the field. Apply them in order and cut early: a candidate that fails any one of them isn't actually a viable priority.
1. Evidence of room to move. Prior cohort analysis (looking at how groups of users behave over time), benchmark gaps, or experimental data showing real headroom. A priority backed by "we ran a test and saw a lift here" is sharper than one backed by "it feels like we should work on this". Feelings are a starting point, not a plan.
2. Team capacity. Honest estimate of engineering, design, copy, and data-science hours needed. Without capacity to execute, the priority is a fantasy. Requiring 400 hours from a team with 200 available is also a fantasy, dressed up in a spreadsheet. Do the math before you fall in love with the idea.
3. Dependency readiness. External teams or data availability the priority depends on. A retention priority that needs product telemetry — the in-app behavioural events you'd use to trigger re-engagement — that the product team is three months away from shipping is blocked. Pick something else.
Headroom plus capacity plus dependency-readiness usually leaves three to five candidates. Pick three. Rank the rest as contingent backups so you have a warm bench when something inevitably falls through. Something always falls through.
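To make that filter-and-rank pass concrete, here's a minimal sketch under some loud assumptions: candidates are pre-scored, evidence and dependency readiness are reduced to booleans, and expected lift is a single number. Every name and figure is a placeholder for your own data.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    has_evidence: bool        # filter 1: cohort, benchmark, or experimental headroom
    hours_needed: int         # filter 2: honest eng + design + copy + data-science estimate
    dependencies_ready: bool  # filter 3: external teams and data availability
    expected_lift: float      # used only to rank the survivors

def plan_priorities(candidates, hours_available, n=3):
    viable = [c for c in candidates if c.has_evidence and c.dependencies_ready]
    viable.sort(key=lambda c: c.expected_lift, reverse=True)
    # Greedily take the highest-lift candidates that still fit the capacity
    # budget; everything else becomes the warm bench.
    chosen, bench, remaining = [], [], hours_available
    for c in viable:
        if len(chosen) < n and c.hours_needed <= remaining:
            chosen.append(c)
            remaining -= c.hours_needed
        else:
            bench.append(c)
    return chosen, bench

candidates = [
    Candidate("Trial-to-paid flow", True, 120, True, 0.03),
    Candidate("Win-back revamp", True, 90, True, 0.02),
    Candidate("Retention via telemetry", True, 150, False, 0.04),  # blocked dependency
    Candidate("Welcome test", True, 60, True, 0.015),
    Candidate("Send-time experiment", False, 40, True, 0.01),      # no evidence yet
]
chosen, bench = plan_priorities(candidates, hours_available=250)
print("priorities:", [c.name for c in chosen])  # trial flow, win-back
print("warm bench:", [c.name for c in bench])   # welcome test
```

The greedy pass is deliberately simple; the point is that the capacity math and the warm bench fall out of the same loop, rather than being decided by whoever argues loudest.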
Keeping the plan alive once the quarter starts
Plans are only useful if revisited. Write it once, never look at it again, and you might as well have written a calendar. Three rhythms keep it breathing:
Weekly: short check-in on activities against each priority. On track?
Monthly: broader review. Are the priorities still the right priorities? Has something shifted — competitor move, product change, data surprise?
End of quarter: retrospective. Did we move the metrics? What did we learn? What goes in the next plan?
One last thing on planning itself: it takes time. One to two weeks of concentrated work, not a single afternoon meeting. The work is cohort analysis to identify where the headroom sits, capacity math, and stakeholder alignment on priorities. Writing the document takes a few hours. The thinking takes two weeks. Compress it into an afternoon and you'll spend the rest of the quarter wondering why the plan didn't survive contact with February.
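Cohort analysis is the heaviest of those jobs, so here's one crude version as a sketch: the share of each signup-month cohort still active 30 days after signup. Both the data shape and the retention definition are assumptions; substitute your own.

```python
from collections import defaultdict
from datetime import date, timedelta

# Hypothetical export: (signup_date, last_active_date) per user.
users = [
    (date(2024, 1, 5),  date(2024, 3, 1)),
    (date(2024, 1, 20), date(2024, 1, 25)),
    (date(2024, 2, 3),  date(2024, 3, 20)),
]

def retention_30d(rows):
    cohorts = defaultdict(lambda: [0, 0])  # "YYYY-MM" -> [signups, retained]
    for signed_up, last_active in rows:
        key = signed_up.strftime("%Y-%m")
        cohorts[key][0] += 1
        # Crude definition: still active at least 30 days after signup.
        if last_active - signed_up >= timedelta(days=30):
            cohorts[key][1] += 1
    return {k: retained / signups for k, (signups, retained) in sorted(cohorts.items())}

print(retention_30d(users))  # {'2024-01': 0.5, '2024-02': 1.0}
```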
The five ways teams torch a perfectly good plan
If you've done this before, some of these will sting. Good. They're the patterns that turn a useful document into a wall decoration:
Too many priorities. "Our three priorities" followed by a list of seven. Pick three. Everything else goes in Section 4, the no-list.
Priorities written as activities, not goals. "Ship the welcome flow" is an activity. "Lift week-1 activation from 30% to 38%" is a goal. The first dies the moment scope changes. The second outlasts whatever activity you ship in service of it.
No capacity math. Be honest about hours. Cut priorities until the math actually works. A plan that doesn't clear the capacity check becomes a plan to fail on three things instead of succeed on two.
No trade-offs documented. Saying yes to three things without saying no to anything isn't a plan. When it says yes to X and no to Y, it's a real plan. When it only lists yeses, it's a wish list with a header.
No stakeholder sign-off. Share the plan outside the lifecycle team. It's the alignment artefact: the document that tells brand, product, sales, and exec what lifecycle is prioritising. Walk them through it. Get explicit agreement on the priorities and the no-list. This is how you avoid the week-six "why aren't you doing X" meeting, the one that wastes an afternoon every time it happens.
When something major lands mid-quarter that wasn't in the plan, decide explicitly. Does it replace one of the three priorities? Or does it get added to the not-doing list? Don't silently add it and hope the team handles everything. If it truly replaces, re-plan. If it doesn't, defer to next quarter and move on.
The Orbit skill that backs this guide includes quarterly plan production as a default output. Forcing the trade-offs is how the plan earns its keep, and "what we're not doing" is doing as much work as "what we are".