Updated · 8 min read
Browse abandonment: the program that sits between ads and cart
Picture the customer who visits your site three times in a week, opens a pair of running shoes, scrolls the reviews, and closes the tab. They didn't add to cart. They didn't sign up for anything new. Most lifecycle programs are completely silent for that person — and that person is the majority of your traffic. Browse abandonment is the email program that wakes up when they leave. Smaller per-user lift than the cart-abandonment flow you've already shipped, but ten to twenty times the trigger volume, sitting right between your paid ads and your cart program. For most teams it's the biggest revenue line on the roadmap that nobody's built yet.

By Justin Williames
Founder, Orbit · 10+ years in lifecycle marketing
Cart abandonment vs browse abandonment — same idea, different customer
If you've shipped a cart-abandonment flow — the email a shopper gets after they put something in the cart and leave without paying — browse abandonment is the cousin one step earlier in the funnel. Same underlying move (catch the person who almost bought), different customer state. Cart abandonment talks to someone who already decided. Browse abandonment talks to someone who was thinking about it.
Cart abandonment is a user who decided to buy and paused. Browse abandonment is a user who considered buying and didn't. The copy has to respect the difference.
That distinction shapes everything that comes next — the timing, the tone, whether to discount, what to put in the email. Treat a browse abandoner like a cart abandoner and you'll write copy that assumes commitment they never made. The pitch lands wrong, and the unsubscribe rate tells you so.
What actually counts as a browse abandonment
The trigger: someone you can identify viewed a product, category, or search result, then left the site without adding to cart or buying.
"Identified" is the catch. You can only email people whose email address you already have, tied to the session — typically because they're logged in, or they clicked through from a previous email and the tracking cookie stuck. An anonymous visitor browsing in a private window triggers nothing, because there's no address to send to. Your site analytics knows the session happened; it just has no way to reach the person.
The qualifying events, in order of how strong a buying signal they carry:
Product detail page view. The user landed on a specific product, sat there a while, and didn't add to cart. Cleanest trigger. Highest intent. This is the one your flow is mostly built on.
Category browsing. The user scrolled a category or collection page ("Women's shoes", "Sale", "New in") without clicking into anything. Weaker signal — they might just have been looking — but still worth a touch.
Search with no click-through. They typed something into search and left without tapping a result. Useful as input for your merchandising team (your catalog is missing the thing they wanted), but a low-intent send trigger. Don't hang the program off this one.
The three-message structure
One product, three emails, spaced to match how human consideration actually works. The shape:
One hour after the view — the product reminder. Subject: "Still thinking about [product]?" Body: the product they looked at, plus one or two similar items that share an attribute (same colour, same category, same price band). No discount. The job is to re-anchor the interest while the consideration is still warm. Thirty minutes feels surveillance-y — you saw me leave six minutes ago, mate. Six hours and the moment's gone. One hour is the sweet spot.
A day later — social proof or category expansion. Subject: "Here's why customers love [product category]" or "More in [category]". Body: reviews, customer photos, or three curated picks from the same shelf. Still no discount. The move here is widening the consideration set from one SKU to the category, which converts surprisingly well — half the time the user wasn't set on that exact product, just that kind of product.
Three days out — the optional nudge. Subject: "Back to where you left off", or a small incentive if your program runs them. Most well-designed programs skip this third touch entirely and let the first two do the work. If you do include a discount, this is where it goes — never on the first email.
That last point matters more than it looks. Discounting users who only browsed, the moment they browsed, trains them to browse-and-wait next time — a feedback loop where your most engaged shoppers learn to never pay full price. You don't want that. Save the discount for when you've already given them two non-incentive reasons to come back and they still haven't.
Cut the whole sequence the moment the user adds to cart (handoff to the abandoned cart flow) or makes any purchase. Continuing browse reminders after intent has advanced is annoying and dilutes the next email — the one from the program that should now own the conversation.
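The schedule and the cancellation rule together might look like this sketch — the delays come straight from the structure above, while the function and field names are mine, not from any ESP:

```python
from datetime import datetime, timedelta

# Illustrative schedule for the three-touch structure described above.
SEQUENCE = [
    (timedelta(hours=1), "product_reminder"),   # touch 1: re-anchor interest
    (timedelta(days=1),  "social_proof"),       # touch 2: widen to the category
    (timedelta(days=3),  "optional_nudge"),     # touch 3: often skipped entirely
]

def plan_sends(viewed_at: datetime):
    """Schedule the three touches relative to the qualifying product view."""
    return [(viewed_at + delay, name) for delay, name in SEQUENCE]

def remaining_sends(plan, now: datetime, intent_advanced: bool):
    """Drop the whole remaining sequence the moment the user adds to cart
    or buys — the cart/post-purchase flow owns the conversation from there."""
    if intent_advanced:
        return []
    return [(at, name) for at, name in plan if at > now]
```

In a real ESP you'd express this as exit criteria on the canvas or flow, not hand-rolled code, but the logic the tool has to encode is exactly this.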
The three things that have to be in place before this works
Browse abandonment is more data-hungry than the other lifecycle programs you've probably already shipped. Welcome and winback need an email and a date. This one needs three pieces of plumbing wired up before any of it works.
1. Identified browsing events. Your site has to fire a tracked event every time someone views a product, and that event has to carry the user's identity (an email address or a known user_id). The plumbing is usually a customer data platform — Segment, Rudderstack, or the equivalent — sitting between your site and your ESP (your email service provider — Braze, Iterable, Klaviyo, the tool that actually sends the message). Without identity attached to the event, the trigger never matches a real human and nothing sends.
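A minimal sketch of what that event might carry. The shape loosely mirrors a Segment-style `track` payload, but treat every field name here as an assumption to check against your CDP's actual spec:

```python
# Hypothetical product-view event builder — payload keys are illustrative,
# loosely modelled on a Segment-style track call, not a guaranteed schema.
def product_viewed_event(user_id, email, sku, category, price):
    if not user_id and not email:
        return None   # anonymous session: no identity, the trigger can never match
    return {
        "event": "Product Viewed",
        "userId": user_id,
        "traits": {"email": email},
        "properties": {"sku": sku, "category": category, "price": price},
    }
```

The `None` branch is the whole point of requirement 1: an event without identity is analytics, not a send trigger.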
2. Product metadata available at send time. The email has to know the name, image, price, and link of the product the user was looking at — and most ESPs pull that live, the moment the email is composed, from a product feed. Braze calls this Catalogs. Iterable calls them data feeds. Klaviyo has its own version. Whatever the name, the feed has to refresh often enough that the email never goes out with last week's price on the front of it.
3. Related-product logic. To populate "similar items" in message 1 or "more in category" in message 2, you need a rule that picks candidate products. Rudimentary works fine — "three other products in the same category, in the same price band, excluding the one they looked at" is a perfectly good v1. Don't wait for a recommendation engine to ship before you start the program. The simple rule gets you 80% of the lift; the fancy collaborative-filtering version gets you the last 20% and you can add it in v3.
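The v1 rule from point 3, sketched out — `catalog` here is assumed to be a list of product dicts with `sku`, `category`, and `price`, which is an illustrative shape rather than any platform's feed format:

```python
def similar_items(catalog, viewed, band=0.25, k=3):
    """The v1 rule from the text: same category, same price band
    (here ±25%, an assumed default), excluding the viewed product."""
    lo = viewed["price"] * (1 - band)
    hi = viewed["price"] * (1 + band)
    candidates = [
        p for p in catalog
        if p["sku"] != viewed["sku"]
        and p["category"] == viewed["category"]
        and lo <= p["price"] <= hi
    ]
    return candidates[:k]   # first k is fine for v1; rank later if you must
```

Swapping this out for a recommendation engine later changes only this one function — which is another argument for not blocking the launch on it.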
The Braze Liquid reference covers the personalisation patterns for catalog lookups (Liquid is the small templating language ESPs use to drop dynamic values like product name and price into an email). Other ESPs have an equivalent — slightly different syntax, identical job.
Expected lift, and the three things that quietly flatten it
The headline number on a healthy program is roughly 8–15% incremental lift on 14-day purchase rate — measured properly, which the measurement section below covers. Three failure modes flatten it in practice, in roughly this order of how often they bite:
Trigger fatigue. Picture your most engaged customer — the one who visits the site daily to see what's new. Without a frequency cap, every visit fires a sequence, and they get four emails a week from you. That's the group most likely to mash unsubscribe, which is exactly backwards. Cap at one full sequence per user per 7 days, regardless of how many products they viewed in between.
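The cap check is a one-liner once you store when each user's last sequence started — a sketch, with the function name and storage shape assumed:

```python
from datetime import datetime, timedelta

def can_start_sequence(last_start, now, cap=timedelta(days=7)):
    """One full sequence per user per rolling 7-day window,
    regardless of how many products they viewed in between."""
    return last_start is None or now - last_start >= cap
```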
Out-of-stock products. The first email ships an hour after the view, and by then the product may have sold out — especially for limited drops or high-demand SKUs. Check stock at send time. Out of stock? Either substitute with similar items only, or skip the product-specific email entirely and let the user move on. Reminding someone of a thing they can't buy is worse than silence.
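As a sketch, the send-time stock gate has exactly three outcomes — normal send, similar-items-only fallback, or silence. The `in_stock` callable and the payload shape are assumptions for illustration:

```python
def build_reminder(product, similar, in_stock):
    """Send-time stock gate: normal send, similar-only fallback, or skip."""
    if in_stock(product["sku"]):
        return {"hero": product, "also": similar}   # normal send
    if similar:
        return {"hero": None, "also": similar}      # similar-items-only fallback
    return None                                     # nothing worth sending: stay silent
```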
Bot views. Scrapers, link-preview bots (Slack, iMessage, Apple Mail), and background-tab sessions all fire false product views. A non-trivial share of your sends will go to robots, who are frustratingly excellent at not converting. The fix: filter to "view longer than 15 seconds" or "at least two pageviews in the session" before firing the trigger. Strips most of the noise without hurting the legitimate signal.
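The filter itself is trivial once your events carry dwell time and a session pageview count — both assumed here to be available from your analytics layer:

```python
def is_qualified_view(dwell_seconds, pageviews_in_session):
    """Fire the trigger only for sessions that look human:
    a real dwell on the page, or more than one pageview."""
    return dwell_seconds > 15 or pageviews_in_session >= 2
```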
Measuring it — and why your dashboard is probably lying to you
Here's the part most teams get wrong. Browse abandonment looks brilliant in last-click attribution — the standard reporting most ecommerce dashboards default to, where revenue gets credited to the last marketing touch before the purchase. Users who would have come back anyway click your reminder email and get the credit. The number on your dashboard is real revenue. It is not, mostly, revenue your email caused.
The way you measure the real lift is a holdout: a randomly selected group of users you deliberately don't send the email to, who otherwise would have qualified. Same logic as a clinical trial — half the patients get the drug, half get a placebo, the gap between the two groups is the actual effect. Suppress 10% of users who qualify for the sequence, compare their 14-day conversion rate to the sent group, and the difference is the real incremental lift the flow generated.
Expected holdout-vs-sent gap on a healthy program: 8–15% incremental lift on 14-day purchase rate. If your last-click dashboard is showing above 30%, that's almost always natural-return behaviour being over-credited — the program is doing real work, just less than the dashboard claims. Size your investment case on the holdout number, not the attribution number. One is measurement; the other is marketing maths.
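The two mechanical pieces of a holdout — deterministic assignment and the lift calculation — might look like this sketch. The salt, the 10% default, and the hash-based assignment are illustrative choices, not a prescribed method:

```python
import hashlib

def in_holdout(user_id, pct=0.10, salt="browse-v1"):
    """Deterministic assignment: the same user always lands in the same
    group, so the comparison stays clean across the whole test window."""
    h = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    return int(h[:8], 16) / 0xFFFFFFFF < pct

def incremental_lift(sent_buyers, sent_total, held_buyers, held_total):
    """Relative lift of the sent group's 14-day purchase rate over the
    holdout's — the number to size the investment case on."""
    sent_rate = sent_buyers / sent_total
    held_rate = held_buyers / held_total
    return (sent_rate - held_rate) / held_rate
```

Hashing on user ID rather than randomising per session also means a user who qualifies twice in the window doesn't hop between groups mid-test.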
The holdout group design guide covers sizing the test, assigning users to it, and reading the result without fooling yourself.
A companion guide covers where browse abandonment sits on the roadmap relative to other investments. Short version: ship welcome, winback, and cart abandonment first. Browse goes immediately after. Build in that order and each program compounds off the last — the cart flow eats the wins from browse, the winback flow eats the wins from both, and your incrementality story compounds rather than cannibalises.
Related guides
Referral program emails — the three flows that make it work
A referral program lives or dies on the lifecycle messaging — the automated emails that prompt, deliver, and confirm — wrapped around it. Three flows do the work: inviter prompt, invitee welcome, reward confirmation. Get the timing and copy right on each and conversion roughly doubles without anyone touching the offer.
Trial-to-paid: the seven-email sequence that converts 20%+ of free users
Trial conversion is the most financially leveraged flow in SaaS — every percentage point compounds directly against CAC. Here's the seven-email sequence that reliably moves trial conversion from 5% to 20%+.
Replenishment emails: the lifecycle flow that buys itself
Replenishment emails remind users to re-order a consumable before they run out. Done right, they generate the highest revenue-per-send in any lifecycle program because purchase intent is already established. Here's the timing, data, and copy.
Price increase emails: how to raise prices without a churn spike
A price increase is one of the highest-risk lifecycle moments your program will ever run. Done wrong, it triggers churn, public complaints, and a reputation dent that outlasts the extra revenue. Done right, most users accept the change without friction. Here's the sequence that works.
Review request emails: the timing that actually produces reviews
A review request at the wrong time gets ignored. At the right time, it converts at 15–25% into a submitted review. Here's the timing, the message pattern, and why incentivising reviews almost always backfires.
Product launch email sequence: the five emails that actually sell a new product
A product launch with one big announcement email captures a fraction of the addressable audience. A proper five-email sequence catches multiple attention windows, builds anticipation, and converts the users who needed a second or third touch. Here's the structure that reliably outperforms the single-send version.