The cadence question: how often should you email?
There's a meeting that happens at every CRM (customer relationship management — the lifecycle messaging you send to existing users) program at least once. The CEO thinks the team sends too much. The growth lead thinks they send too little. Meanwhile the CX lead wants a blanket cap because two customers complained last quarter. Three smart people, one argument, no shared definition of what they're actually arguing about. Across a decade of CRM work, this is the question I've been asked most often, and most of the heat it generates comes from treating a single number as the answer.

By Justin Williames
Founder, Orbit · 10+ years in lifecycle marketing
Why the question itself is broken
The right cadence for user A is the wrong cadence for user B. Program-level cadence questions force an answer that's wrong for most of the audience, every time.
Built into "how often should we email?" is an assumption: cadence is one program-wide dial you turn until the magic number appears. It isn't. Cadence is an emergent property — something that falls out of other decisions rather than being set directly — and the decisions it falls out of are each a more productive conversation than the top-level one.
Walk through the actual users on your list and the contradictions stack up fast. Someone three days into onboarding (the first week or two after signup, when usage habits are forming) should get more mail than your average. Someone dormant for six months should get less. Buyers from this week sit in a different post-purchase sequence than users who've never converted. And the click-everything power user can absorb what a cold-list recipient cannot. Same program, wildly different right answers.
Better question: what are the inputs that determine the right cadence per cohort (a group of users who share a defining trait — stage, engagement, acquisition window)? Answer those, and the program-level frequency falls out as a consequence, not as an argument.
The five inputs that actually settle it
Lifecycle stage. Different stages tolerate different frequencies. Onboarding is intense by design — more email in week one than in any other week of the relationship. Engaged users tolerate regular cadence because they're finding value. At-risk users (those whose engagement has dropped but who haven't fully lapsed) tolerate some cadence but need higher-signal messages, not more of them. Lapsed users need specific low-density sequences, not the standard program. The Lifecycle Program Design skill covers stage-specific cadence for each canonical stage.
Engagement tier within the stage. Inside any stage, users vary. A subscriber who opened four of your last five emails can handle more than one who opened one. Tier explicitly — segment users by engagement signal and ship a different cadence for each band. Most programs that pick a single rate end up under-mailing the engaged base, over-mailing the light base, and producing worse aggregate numbers than they could.
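As a sketch, tiering can be as simple as a thresholded open rate per subscriber. The thresholds and the weekly cadence values below are illustrative assumptions, not recommendations:

```python
# Sketch: map a subscriber's recent engagement to a weekly cadence band.
# Thresholds and band cadences are illustrative assumptions.

def cadence_for(opens_last_30d: int, sends_last_30d: int) -> int:
    """Return a weekly marketing-message cadence for one subscriber."""
    if sends_last_30d == 0:
        return 1  # no signal yet: start conservatively
    open_rate = opens_last_30d / sends_last_30d
    if open_rate >= 0.5:
        return 4  # highly engaged: near the top of the 3-5/wk range
    if open_rate >= 0.2:
        return 2  # moderately engaged
    return 1      # light engagement: minimum-touch
```

The point of the sketch is the shape, not the numbers: each band gets its own cadence, and the bands are recomputed as engagement signals change.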
Natural product cycle. A daily-engagement product (a news app, a habit tracker) tolerates a different frequency than a monthly-engagement one (a tax tool, an annual-renewal SaaS). The lifecycle cadence should match or just slightly lead the product's usage rhythm — not impose a separate one that ignores when the user is ready for the product.
Deliverability headroom. Deliverability is your ability to land in the inbox rather than spam. A strong sender reputation (mailbox providers' trust score for your sending domain) and clean list hygiene give you more room to send. With a weak reputation, more volume compounds the damage. Most programs that "can't send more email" actually can — they just need to fix hygiene first so the additional volume doesn't poison reputation. The deliverability guide covers the full connection between frequency and sender reputation.
Content inventory. You can only send as many messages as you have worth sending. Shipping a second email in a week just because the schedule said so — when the content is thin — is worse than shipping one good email. Quality bar sets the cadence ceiling; cycle sets the floor.
The safety net: frequency capping
- 3–5/wk — common marketing message cap for B2C audiences.
- 1/day — absolute ceiling including transactional messages.
- 0 — messages that should cross the cap without explicit priority rules. Decide before you need to.
Even with well-designed per-cohort cadences, things go sideways when systems collide. Onboarding email fires on a Tuesday morning. Lifecycle newsletter goes out the same hour. Product-update broadcast hits the whole list. Abandoned-cart push catches one user who tabbed away on the checkout page. That user gets four messages in an afternoon — not because anyone designed it, but because four independent systems fired independently. A user-level frequency cap is the wrapper that prevents this compound damage.
Practical cap: no more than N marketing messages per user per week, with transactional (order confirmations, password resets) and critical-service messages exempt. N depends on program and tier; three to five marketing messages per week is the range that balances engagement and fatigue for most B2C audiences. Under-mailing a cohort 20% below the cap rarely hurts; over-mailing 20% above it reliably does.
Priority is the harder question. Which message gets cut when a user is about to cross the cap? Decide in advance and encode it in the system. Onboarding beats newsletter. Abandoned-cart beats promotional. Transactional beats everything. Without explicit priority, the cap produces random cuts and the program becomes less coherent, not more.
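One way to encode both the cap and the priority order is a single pass over a user's queued messages. The priority list and cap value here are assumptions for illustration, not a standard:

```python
# Sketch: enforce a weekly user-level cap with an explicit priority order.
# PRIORITY ordering and the default cap are illustrative assumptions.

PRIORITY = ["transactional", "onboarding", "abandoned_cart",
            "newsletter", "promotional"]

def apply_cap(queued: list[str], cap: int = 4) -> list[str]:
    """Keep transactional messages unconditionally; fill the remaining
    weekly budget with marketing messages in priority order."""
    transactional = [m for m in queued if m == "transactional"]
    marketing = sorted(
        (m for m in queued if m != "transactional"),
        key=PRIORITY.index,
    )
    return transactional + marketing[:cap]
```

With this shape, the cut is deterministic: when a user is about to cross the cap, promotional is dropped before newsletter, newsletter before abandoned-cart, and transactional is never dropped at all.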
When the user's behaviour says "stop"
A subscriber who hasn't opened a single email in 30 days is telling you something the cadence rules can't hear. Your tiering says they're engaged and should get three a week. The behaviour says otherwise. Drop them dramatically below the rule — not because the rule is wrong, but because the engagement signal is telling you the user is heading toward lapsed. Continuing to mail at the normal rate accelerates that journey.
A spam complaint, a move-to-junk, or an abandoned unsubscribe flow is a louder signal again. Suppress outright or drop to minimum-touch immediately. The cost of one extra send to a user who's already said "stop" through behaviour is a complaint — and complaint rate is the metric mailbox providers (Gmail, Outlook, Yahoo) actually weigh against your reputation. Complaints poison deliverability; unsubscribes just trim the list.
The principle: cadence rules define the ceiling, not the floor. The ceiling is the maximum you can mail without damage. Whatever engagement signals say the user is willing to receive sets the floor, and it can be much lower than the ceiling. A cadence system that ignores the floor over-mails disengaging users and ships them straight to spam complaints — the one thing the cap was supposed to prevent.
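The ceiling/floor logic fits in a few lines. The 30-day inactivity threshold and the hard suppression on complaint are assumptions drawn from the guidance above, not fixed values:

```python
# Sketch: the tier sets the ceiling; behavioural signals set the floor.
# The 30-day threshold and suppression rule are illustrative assumptions.

def effective_cadence(tier_ceiling: int, days_since_open: int,
                      complained: bool) -> int:
    """Weekly sends for one user: never above the tier ceiling,
    dropped hard when engagement signals say the user is disengaging."""
    if complained:
        return 0                     # said "stop": suppress outright
    if days_since_open >= 30:
        return min(tier_ceiling, 1)  # heading toward lapsed: minimum-touch
    return tier_ceiling
```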
When the CEO says "we email too much"
Lifecycle teams rarely get to answer cadence questions in isolation. A senior stakeholder observes they're getting "too many emails from your own company" and asks the team to cut back. The ask is real, but it's a sample size of one. A cadence cut based on a single user's experience is usually wrong for the base.
The productive response isn't "actually our engagement rates are fine" — that reads as defensive. It's to surface the tiering. Show which users are receiving how many messages. Show engagement-by-cadence data for each tier. Invite the stakeholder to look at whether their own cadence is actually aligned with their own engagement tier — which, usually, it isn't. They opened two emails all year and are receiving three a week. The right answer is almost never "cut everyone's cadence". It's "this user's cadence doesn't match their engagement — fix the mismatch, not the program".
The other version of this question: what's the actual risk of over-mailing? Higher complaint rates, higher unsubscribe rates, and sender reputation damage that compounds over months. The cost usually surfaces 30–90 days after the frequency increase, which is why programs rarely link them. Watch complaint rate, not unsubscribe rate. One predicts deliverability damage; the other predicts list-size damage.
Significance testing — checking whether a difference in numbers is real or just noise — has a role here too. Any cadence change deserves a proper test before it rolls program-wide. A 20% cut that produces a 30% revenue drop is worse than the problem it was trying to fix.
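A minimal version of that check, assuming you're comparing unsubscribe (or complaint) counts between a control group and a higher-frequency variant, is a two-proportion z-test; the counts in the usage note below are invented for illustration:

```python
# Sketch: two-proportion z-test for a cadence experiment, comparing
# an event rate (unsubscribes, complaints) between control and variant.
from math import sqrt
from statistics import NormalDist

def two_prop_z(x1: int, n1: int, x2: int, n2: int):
    """Return (z, two-sided p) for the difference between two rates:
    x1 events in n1 control sends vs x2 events in n2 variant sends."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    return z, 2 * (1 - NormalDist().cdf(abs(z)))
```

For example, 120 unsubscribes in a 50,000-user control versus 190 in a 50,000-user variant gives a p-value well under 0.05: that difference is real, not noise, and the variant cadence is doing damage.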
What to do Monday
Stop arguing about a single program-wide number. Pull a list of your active subscribers, segment them by engagement tier (opens in the last 30 days is fine for a first pass), and look at how many messages each tier received last week. If your engaged top decile and your disengaged bottom decile got the same volume, you have a tiering problem, not a cadence problem. Fix that first. The program-level number sorts itself out.
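The Monday pull can be sketched with nothing but the standard library, assuming an export with each subscriber's 30-day opens and last week's send count (the field names and tier thresholds are hypothetical):

```python
# Sketch of the Monday exercise: bucket subscribers by 30-day opens,
# then compare last week's mean send volume per tier. Field names and
# thresholds are assumptions about your export, not an ESP schema.
from collections import defaultdict

def tier(opens_30d: int) -> str:
    if opens_30d >= 3:
        return "engaged"
    return "light" if opens_30d >= 1 else "dormant"

def mean_sends_by_tier(subscribers: list[dict]) -> dict:
    totals, counts = defaultdict(int), defaultdict(int)
    for s in subscribers:
        t = tier(s["opens_30d"])
        totals[t] += s["sends_last_week"]
        counts[t] += 1
    return {t: totals[t] / counts[t] for t in totals}
```

If the "engaged" and "dormant" rows come back with the same mean, that's the tiering problem in one number.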
Frequently asked questions
- How often should I email subscribers?
- Depends on category and audience expectations. Daily works for news and content-heavy programs (Morning Brew, The Daily). Weekly is the standard for most B2C marketing and newsletters. Monthly is right for low-frequency relationships (SaaS onboarding-complete users, infrequent-purchase categories). Two to three times per week works for e-commerce with sale-driven content. There's no universal right number: the right cadence is whatever your audience opted into and whatever your content can consistently earn.
- What's the optimal email frequency?
- The frequency where incremental unsubscribes equal incremental revenue. Below that frequency, you're under-mailing — leaving revenue on the table from users who would engage more. Above it, you're damaging retention — sends that produce revenue today erode the audience you'll mail tomorrow. The practical test: run a holdout group at higher frequency vs your current frequency for 8+ weeks, compare incremental revenue against incremental unsubscribes and complaints. The break-even frequency is what you want.
- Should email frequency be the same across segments?
- No. Engaged subscribers tolerate and benefit from higher frequency; dormant subscribers should receive less, not more. Good programs run a 5–10x frequency gap between engaged and dormant cohorts — daily for the top-engagement tier, monthly for the low-engagement tier. Sending the same cadence to everyone is the most common failure mode: it over-fatigues the engaged and doesn't rescue the dormant.
- Does higher frequency always mean higher revenue?
- Short-term yes, long-term no. Every incremental send produces a revenue bump (some percentage of the list converts on that send). But every incremental send also produces unsubscribes and complaint-rate drift, which permanently shrinks the audience and the ceiling of future revenue. The cumulative effect compounds downward. Programs that chase short-term send counts over 12-18 months almost always trail frequency-disciplined programs on total revenue.
- How do I handle opt-outs by frequency rather than channel?
- Preference centre with frequency tiers. Users choose: daily, weekly, monthly, or topic-only. Each tier maps to a different sending cadence. This reduces unsubscribes substantially (users who would have unsubscribed entirely instead downshift to monthly), preserves the subscription relationship for future content, and lets you segment by implied-engagement — users who chose daily signalled higher intent, which is a useful input to personalisation.
Related guides
What is lifecycle marketing? A field guide for operators starting from zero
If you're new to CRM and lifecycle, the field reads like a pile of acronyms and vendor demos. It's actually one simple idea executed across five canonical programs. Here's the frame that makes the rest of the library make sense.
The lifecycle audit — a 30-point checklist
Lifecycle programs — the automated email, push and SMS journeys that move customers from signup to repeat purchase — decay quietly. A recurring audit is the cheapest discipline that catches drift before it turns up in the revenue deck. Here's the 30-point list, grouped by severity, that takes three hours the first time and ninety minutes after that.
Lifecycle marketing for flat products
The standard lifecycle playbook assumes weekly engagement and tidy stage progression. Most real products aren't shaped like that. This is how to design lifecycle — the messaging program that nudges users through their relationship with a product — for things people use once a year, once a quarter, or whenever they happen to need you. The textbook quietly makes those programs worse.
Predictive models in lifecycle: churn, propensity, and recommendations without the magic
Predictive models in lifecycle are mostly three things: churn risk, conversion propensity, and product recommendations. Each one earns or loses its place based on whether its score actually changes a decision. Here's the operator view of what's worth deploying, what to expect from ESP-native suites, and when to build your own.
Choosing which lifecycle programs to build first
New lifecycle lead, empty Braze account, a laundry list of programs you could build. The question nobody trains you for is which to build first. This is the selection framework — by business type, by team size, by data maturity, and the programs I'd actively wait on.
Segmentation strategy: beyond RFM
RFM is the floor of audience segmentation, not the ceiling. Every program that stops there ends up describing what users already did without ever predicting what they'll do next. Here's the segmentation stack that actually drives lifecycle decisions — and how to build it in Braze without ending up with 400 segments nobody understands.