Updated · 7 min read
Review request emails: the timing that actually produces reviews
Picture the email most companies send fourteen days after you ordered something. Generic subject line. Generic body. A button that says "Leave a review". You don't even open it — and neither does almost anyone else, because that email gets a 2% response rate. The version that produces ten times more reviews isn't a different template. It's a different moment, a different question, and an honest reckoning with why discounts-for-reviews ruins the whole thing.

By Justin Williames
Founder, Orbit · 10+ years in lifecycle marketing
Ask too early, they have nothing to say. Ask too late, they've moved on.
The biggest lever in a review program isn't the copy. It isn't the design. It's the day you press send. A review request is a question — what did you think? — and a question only works if the person you're asking has formed an answer. Send before they've used the product properly and they have nothing to tell you. Send after the moment's gone cold and they've forgotten why they bought it. Both versions land in the same 2% response bucket.
So the timing rule is a question, not a number: by when has the user actually formed an opinion worth sharing? The answer changes by product type. Three patterns cover most programs, and a trigger sketch follows them.
Physical product (ecommerce): 7–14 days after delivery, never after order. Shipping eats the gap — at 14 days from order, plenty of users are still waiting for the package to arrive. Trigger off the delivery confirmation event (most carriers fire one) plus N days, where N tracks how long it takes someone to actually use the thing. Apparel sits at 5 days (worn it a couple of times by then), small electronics at 10 days, furniture at 21 days because nobody knows whether a sofa is comfortable until they've actually lived with it for three weeks.
SaaS product: tie the ask to a usage milestone — the equivalent moment for software. 30 days of active use, or completion of a primary workflow, the core thing your product is for: a campaign sent in a marketing tool, a deal closed in a CRM (customer relationship management system, the database where sales tracks deals). Pure time-based triggers are weak here because someone who signed up and never logged in is not someone with an opinion. Usage-based is strong.
Content or course: after completion, or after a meaningful percentage consumed. "Finished the course? Tell us how it went" converts at roughly double the rate of "2 weeks since you signed up, review?". The reason is the same one — the first message arrives at the moment the answer exists.
The right time to ask is the moment an opinion exists. Before that, you're asking someone to invent one. After that, you're asking them to remember.
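That principle reduces to a small amount of trigger logic. A minimal sketch, assuming a delivery-confirmation event and a usage-milestone event are available; the category windows, event shape, and function names here are illustrative, not any specific ESP's API:

```python
from datetime import datetime, timedelta

# Illustrative per-category windows from the patterns above -- tune N to
# how long it takes someone to actually use the thing.
DAYS_AFTER_DELIVERY = {"apparel": 5, "small_electronics": 10, "furniture": 21}

def physical_review_ask_at(category: str, delivered_at: datetime) -> datetime:
    """Ecommerce: N days after delivery confirmation, never after order."""
    n = DAYS_AFTER_DELIVERY.get(category, 7)  # default to the low end of 7-14
    return delivered_at + timedelta(days=n)

def saas_review_ready(days_active: int, core_workflow_completed: bool) -> bool:
    """SaaS: a usage milestone, not elapsed time -- a user who signed up
    and never logged in has no opinion to share."""
    return core_workflow_completed or days_active >= 30
```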
What a review email actually has to do — and how to write one that does it
Five elements, in order. Each one is a small thing on its own. The compounding is the point — every point of friction you remove costs you nothing and adds a few percentage points of response rate. A template sketch follows the list.
The subject line is a question, not an order. "How's the [product] working out?" or "Quick question about your [product]". Question-framed subjects open higher than command-framed ones ("Please leave a review") for the same reason a friend asking your opinion lands differently than a stranger demanding one. The friend gets a reply.
The opening references the actual purchase. "Your [product name] arrived on [date]. By now you've probably had a chance to use it." Specific beats generic by a lot — the line lands as if a human wrote it for them, not as if a system batched them into a queue. Personalisation tokens on the product name and delivery date do the work.
The ask is one question with one CTA. "Would you share a quick review? It helps other customers decide." CTA — call to action, the button you want them to click — singular. Not three. Not a survey. One review link. Every additional path you offer is a new chance for them to choose "none".
Cut every click between email and submitted review. Link direct to the review form with the product pre-filled. Every extra step — log in, find the product, click "leave a review" — costs you response rate. Where the platform supports it, embed a 1–5 star rating in the email itself; tapping any star deep-links to the review form with that rating pre-selected. The user's already halfway through.
Quiet social proof. "Join [X] customers who've shared their experience with [product]." Reinforces that reviewing is what people do here. Not a chore — a norm.
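Here is the assembled email as a sketch, with the star rating rendered as five deep links. The query-parameter names are assumptions; substitute whatever your review platform's form actually accepts:

```python
from urllib.parse import urlencode

def star_links(form_url: str, product_id: str) -> list[str]:
    # One link per star; tapping a star opens the form with that rating
    # pre-selected, so the user is already halfway through.
    return [f"{form_url}?{urlencode({'product': product_id, 'rating': s})}"
            for s in range(1, 6)]

def render_ask(product: str, delivered_on: str, review_url: str, reviewer_count: int) -> str:
    # The five elements in order: question subject, specific opening,
    # one ask with one CTA, direct link, quiet social proof.
    return "\n".join([
        f"Subject: How's the {product} working out?",
        "",
        f"Your {product} arrived on {delivered_on}. By now you've probably had a chance to use it.",
        "",
        "Would you share a quick review? It helps other customers decide.",
        review_url,
        "",
        f"Join {reviewer_count:,} customers who've shared their experience with {product}.",
    ])
```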
The discount-for-a-review trap
Almost every program eventually considers it. "Leave a review, get 10% off your next order." Submission rates jump 2–4×. The numbers go up. The dashboard looks like a win. And then, depending on your platform, your reviews start vanishing — or your listing does.
The reason is policy: most major review platforms prohibit offering anything of value in exchange for a review, and regulators in several markets treat an undisclosed incentive as deceptive. An incentivised review is no longer an independent opinion, and the sample it produces skews positive, which is exactly the bias the platforms are policing.
Two ethical alternatives that won't trip platform rules. Loyalty-program points — earned for the act of reviewing, redeemable for benefits but not a direct money-off voucher — sit safely on most platforms because they're not contingent on a positive review. Entry into a monthly draw for everyone who reviews is the other clean version. Both are weaker pulls than "10% off right now", which is the entire point — they pay the platform-rule tax without inviting the bias. Read your specific platform's reviewer-incentive policy before you build anything; the wording differs more than you'd expect.
The stronger path is the boring one. Better timing, better messaging, frictionless flow — that combination lifts submission 2–3× on its own without putting your listings at risk. Same outcome as the discount trick, none of the policy landmines.
When someone hates the product — the legitimate move and the dishonest one
This is where review programs get morally tested. A user gives you 1 star. What does the program do next? There's a clean version of routing low ratings differently from high ones, and there's a version that's widely considered cheating. The line between them is a single principle.
Legitimate: a 1-star rating triggers a support flow — "We're sorry — can we help?" A real human reaches out, offers a resolution, fixes the underlying issue if there is one. After that resolution, the user is still free to leave a public review wherever they want. They might leave a glowing one because the support was good. They might still leave a 1-star one. Either is fine — they had the option. The program is customer-service first, reviews second.
Dark pattern: the same triage on the surface, but with one quiet change — high ratings get a one-click path to the public review platform, low ratings get a contact form that leads nowhere public. Negative reviews are intercepted entirely. The published average becomes a curated lie. This violates the terms of most platforms (Trustpilot calls it "gating") and is one of the standard things that triggers a manual platform review of your account.
The principle that separates the two is whether the user can still post publicly. If your support flow ends with "here's the review link, post if you want" regardless of how the resolution went, you're in the legitimate bucket. If the public review path quietly disappears for unhappy customers, you're in the second bucket — and most platforms have caught up to it. Be honest with yourself about which one you're building.
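A sketch of the legitimate triage under that principle. The routing shape is hypothetical, but the invariant is the point: the public review link survives every branch:

```python
def route_rating(stars: int, public_review_url: str) -> dict:
    """Low ratings open a support conversation; the public review link is
    present on every path regardless of rating. Dropping it for unhappy
    customers is gating, and platforms penalise it."""
    step = {"public_review_url": public_review_url}  # always -- this is the line
    if stars <= 2:
        step["support"] = "We're sorry -- can we help? A human will reach out."
    return step
```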
Knowing whether your review program is actually working
Four numbers tell you almost everything. Track them month over month, not in isolation; a measurement sketch follows the four.
Submission rate. The percentage of emailed users who actually submit a review. 5–15% is healthy for a program that has timing and messaging right. 1–3% is the unoptimised baseline. Against that baseline, getting the request mechanics right is worth anywhere from a 2× to a 15× lift — bigger than almost any other lever in lifecycle marketing.
Average rating. Should trend at 4+ on a 5-point scale if the product is genuinely good. If your average is sitting at 3.2, no email rewrite is going to fix it — the product is the problem. A better review email pointed at a worse product is just a more effective complaint generator. Fix the product first.
Review depth. Average word count of submitted reviews. Longer reviews are more useful to prospective buyers because they answer the questions a star rating can't — what specifically was good, what wasn't, who it's for. Open-ended prompts ("What would you tell a friend about this product?") produce noticeably deeper reviews than tick-box forms. The trade is a small drop in submission rate for a meaningful lift in review usefulness, and it's usually the right trade.
Time-to-review. The median days from email to submission. Users who submit within 24 hours are your highest-intent reviewers — they read the email, had an opinion ready, and acted. If most reviews are landing 5+ days after the send, your timing is probably off by a few days and people are reviewing when they finally circle back, not when the moment was hot.
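A minimal sketch of the four numbers, assuming a hypothetical export with one record per submission (the field names are placeholders for whatever your platform actually provides):

```python
from statistics import mean, median

def program_health(emails_sent: int, submissions: list[dict]) -> dict:
    """submissions: one dict per review, e.g.
    {"stars": 4, "words": 62, "days_to_submit": 1.5} -- an assumed shape."""
    return {
        "submission_rate": len(submissions) / emails_sent,         # healthy: 0.05-0.15
        "average_rating": mean(s["stars"] for s in submissions),   # should trend 4+
        "review_depth_words": mean(s["words"] for s in submissions),
        "median_days_to_review": median(s["days_to_submit"] for s in submissions),
    }
```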
One operational rule on follow-ups: send exactly two emails in the flow, the original and one reminder seven days later. Two is the right number. Three starts to read as pestering, and the marginal lift on a third reminder is rounding-error material. If they haven't reviewed after two asks, they aren't going to. End the flow there.
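The follow-up rule reduces to a single gate. A sketch, assuming you track sends and submissions per user:

```python
from datetime import datetime, timedelta

def should_send_reminder(original_sent_at: datetime, sends: int,
                         has_reviewed: bool, now: datetime) -> bool:
    """Exactly two emails: the original, then one reminder seven days
    later. After two asks, end the flow."""
    if has_reviewed or sends >= 2:
        return False
    return now >= original_sent_at + timedelta(days=7)
```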
The platform question — Shopify Reviews vs Trustpilot vs G2 vs Google — depends on the business. Commerce-native platforms (Shopify Reviews, WooCommerce reviews) plus Google Reviews for SEO is the standard ecommerce stack. Higher-consideration purchases — anything where buyers research before buying — lean on Trustpilot or Google. SaaS reviews live on G2, Capterra, and TrustRadius depending on the product category. Multiple platforms are fine. The mistake is sending one blended "please leave a review somewhere" ask — pick the platform per email and send the user direct to that one. Generic asks convert worse than specific ones, every time.
A well-run program places review requests as a standing trigger after key product milestones, not as a periodic batch. Batch sends produce batch-shaped response rates — which is to say, poor ones. Trigger on the moment the opinion exists; the rest of the program follows.
The one thing to do Monday: pull your current review-request flow and check the trigger. If it's "14 days after order", change it to "N days after delivery confirmation" this week. That single change typically lifts submission rate by 50–100% with no other work — and it's usually a five-line edit to a journey, not a project. Start there. The copy and the incentive question can wait. Browse the rest of the program guides when you're ready for the next one.
Related guides
Product launch email sequence: the five emails that actually sell a new product
A product launch with one big announcement email captures a fraction of the addressable audience. A proper five-email sequence catches multiple attention windows, builds anticipation, and converts the users who needed a second or third touch. Here's the structure that reliably outperforms the single-send version.
Browse abandonment: the program that sits between ads and cart
Browse abandonment catches the users who viewed a product and left without adding to cart. Smaller per-user lift than cart abandonment. Ten to twenty times the trigger volume. For most programs it's the biggest revenue lever you haven't shipped yet.
Referral program emails — the three flows that make it work
A referral program lives or dies on the lifecycle messaging — the automated emails that prompt, deliver, and confirm — wrapped around it. Three flows do the work: inviter prompt, invitee welcome, reward confirmation. Get the timing and copy right on each and conversion roughly doubles without anyone touching the offer.
Trial-to-paid: the seven-email sequence that converts 20%+ of free users
Trial conversion is the most financially leveraged flow in SaaS — every percentage point compounds directly against CAC. Here's the seven-email sequence that reliably moves trial conversion from 5% to 20%+.
Replenishment emails: the lifecycle flow that buys itself
Replenishment emails remind users to re-order a consumable before they run out. Done right, they generate the highest revenue-per-send in any lifecycle program because purchase intent is already established. Here's the timing, data, and copy.
Price increase emails: how to raise prices without a churn spike
A price increase is one of the highest-risk lifecycle moments your program will ever run. Done wrong, it triggers churn, public complaints, and a reputation dent that outlasts the extra revenue. Done right, most users accept the change without friction. Here's the sequence that works.
Use this in Claude
Run this methodology inside your Claude sessions.
Orbit turns every guide on this site into an executable Claude skill — 63 lifecycle methodologies, 91 MCP tools, native Braze integration. Free for everyone.