Attribution models for lifecycle: which one to defend in which room
Picture the meeting. Quarterly review: your CFO asks how much revenue email produced, you read out the number from the dashboard, and somebody from paid social leans forward to ask why their channel is being credited so much less than yours when they're spending eight times as much. That's an attribution argument — the accounting fight over which marketing touch gets credit for a sale — and it's the same one every lifecycle team has at least once a quarter. The honest answer is that the question doesn't have one model attached to it. Different questions need different tools. This guide is the operator's map of which tool fits which question, and the political layer underneath that nobody wants to name out loud.

By Justin Williames
Founder, Orbit · 10+ years in lifecycle marketing
What attribution is really doing — and why it's mostly a guess
Picture a single sale. A user got an email on Tuesday, scrolled past a paid Instagram ad on Wednesday, opened a push notification on Thursday, bought on Friday. Attribution — the bookkeeping exercise of deciding which marketing touch gets credit for a conversion — is the question of how to split that one Friday sale across those four touches. Every model in this guide is a different opinion about how to do that split.
The catch: the user's actual decision process is invisible. Nobody knows which touch tipped them. So every model below is a guess dressed up as accounting — some better-grounded than others, none of them reading minds. That matters because the numbers run real-world decisions: which programs get budget next quarter, which channels get killed, which team takes credit for the win.
Four broad approaches end up in lifecycle conversations. Three of them are flavours of the same idea: take the touches that happened and divide the credit among them. The fourth tries to answer a different question entirely. The difference between those two camps is the whole game.
The four models — what each one actually is
Last-touch. Credit goes to the most recent touchpoint before the conversion. When the touchpoint had to be clicked to count, this is usually called last-click — same idea, stricter rule. Simple. Easy to report. Almost always wrong for lifecycle. An email clicked two minutes before purchase gets all the credit. The onboarding sequence from three months ago that actually built the relationship gets nothing.
First-touch. Credit to the first interaction in the path. Useful for acquisition attribution — which channel brought this user in for the first time — and largely beside the point for lifecycle, where the user is already yours.
Multi-touch attribution (MTA). Credit distributed across several touchpoints rather than dumped on one. The MTA family has three common flavours. Linear gives every touch in the path equal credit. Time-decay weights recent touches more heavily than older ones. Position-based (sometimes called U-shaped) weights the first and last touch more and the middle less. All three feel more accurate than last-touch. All three rest on weighting rules nobody will defend when pressed: why is the sixteenth email worth exactly as much as the first, and why should proximity to the purchase count as causation?
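To make the arbitrariness concrete, here's a minimal sketch in Python (illustrative only: the path, the revenue figure, the seven-day half-life, and the 40% end-weights are all made up) that scores the Tuesday-to-Friday journey from the opening example under each of the credit-splitting rules above:

```python
# One conversion path, oldest touch first: (channel, days_before_purchase).
# Hypothetical numbers matching the Tuesday-to-Friday example above.
path = [("email", 3), ("paid_social", 2), ("push", 1), ("email", 0)]
revenue = 100.0

def last_touch(path):
    # All credit to the most recent touch.
    return {path[-1][0]: 1.0}

def linear(path):
    # Equal credit to every touch.
    share = 1.0 / len(path)
    credit = {}
    for channel, _ in path:
        credit[channel] = credit.get(channel, 0.0) + share
    return credit

def time_decay(path, half_life_days=7.0):
    # Credit halves every `half_life_days` before the purchase.
    # The half-life is an arbitrary knob, which is rather the point.
    weights = [(ch, 0.5 ** (days / half_life_days)) for ch, days in path]
    total = sum(w for _, w in weights)
    credit = {}
    for channel, w in weights:
        credit[channel] = credit.get(channel, 0.0) + w / total
    return credit

def position_based(path, end_weight=0.4):
    # 40% to the first touch, 40% to the last, remainder split evenly
    # across the middle (the classic U-shape).
    credit = {}
    for i, (channel, _) in enumerate(path):
        if i == 0 or i == len(path) - 1:
            w = end_weight
        else:
            w = (1 - 2 * end_weight) / (len(path) - 2)
        credit[channel] = credit.get(channel, 0.0) + w
    return credit

for name, model in [("last-touch", last_touch), ("linear", linear),
                    ("time-decay", time_decay), ("position-based", position_based)]:
    split = {ch: round(revenue * share, 2) for ch, share in model(path).items()}
    print(f"{name:>14}: {split}")
```

Run it and the problem is visible immediately: the same sale splits four different ways, and nothing in the data tells you which split is right.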
Incrementality. A different shape of question entirely. Instead of splitting credit across touches that did happen, you compare a group who got the messages to a matched group who didn't. The second group is called a holdout — the random group you deliberately don't message, the way a control arm works in a clinical trial. The revenue gap between the two groups is the lift the program actually caused. Answers the only question leadership genuinely cares about, even when they don't know to phrase it that way: would this revenue have happened anyway? The holdout guide covers the design.
Two related terms worth flagging here, because they show up in the same conversations and trip people up. Lift testing is the catch-all name for the holdout approach — running a version of the program with the message and a version without, and measuring the difference. Marketing mix modelling (MMM) is incrementality's big cousin, run at the channel level using statistical regression on aggregate spend and revenue rather than user-level holdouts. Useful for cross-channel budget questions, overkill for a single email program. Both live on the causal side of the line — what would have happened without this. Last-touch and MTA live on the bookkeeping side — who was in the room when it happened. Knowing which side a number lives on matters more than which specific model produced it.
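To see how the causal side produces a number, here's a toy holdout read-out (the per-user revenue figures are invented and the groups are absurdly small; a real test needs proper sizing and a significance check, which the holdout guide covers):

```python
# Toy holdout read-out. Assumes users were randomly assigned to the two
# groups before the program started, as in a clinical trial's control arm.
treated = [0.0, 25.0, 0.0, 80.0, 0.0, 40.0]   # per-user revenue, messaged group
holdout = [0.0, 0.0, 60.0, 0.0, 0.0, 20.0]    # per-user revenue, deliberately not messaged

per_user_treated = sum(treated) / len(treated)
per_user_holdout = sum(holdout) / len(holdout)
lift_per_user = per_user_treated - per_user_holdout

print(f"revenue per user, treated: {per_user_treated:.2f}")
print(f"revenue per user, holdout: {per_user_holdout:.2f}")
print(f"incremental revenue per user: {lift_per_user:.2f}")
# Multiply by the treated population for the program-level number.
```

The only claim the lift number makes is the one leadership actually asked: revenue above what the un-messaged group produced on its own.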
Match the model to the question, not the question to the model
Most attribution mistakes happen one step earlier than people think. The error isn't picking the wrong model. It's picking a single model and applying it to every question. Different questions need different tools. Here's the working version of that.
| Question | Best model | Why |
|---|---|---|
| Which campaign generated this sale? | Last-touch | Operational attribution. Fine for daily dashboards. |
| Which program is most valuable? | Multi-touch (MTA) | Credits the journey, not just the final click. |
| Is lifecycle worth running at all? | Incrementality | Only model that answers the causal question. |
| Should we kill this program? | Incrementality | Attribution can't tell you what happens without the program. |
| How are acquisition channels performing? | First-touch + multi-touch | Lifecycle isn't in this debate; it's an acquisition question. |
| What's the monthly revenue from email? | Incrementality (quarterly) | Attribution inflates the number; leadership eventually notices. |
| Are display ads worth it? | Incrementality + view-through caution | View-through (credit for seeing an ad without clicking) inflates display badly. |
The instinct to pick one model and apply it everywhere forces a single method to answer questions it wasn't built for. Then the argument becomes about the method when it should have been about the question.
Quick gloss on that last row, because it comes up in every cross-channel meeting. View-through attribution — sometimes called post-impression — gives credit to ads a user saw but didn't click. The ad served, the user converted within some window (usually 24 hours to 30 days, depending on the platform), the channel claims the win. It is famously generous and famously hard to argue with using attribution alone. The only honest answer to view-through is incrementality, which doesn't care whether a touch was clicked, viewed, or imagined; it only cares whether the conversion would have happened without it.
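As code, the view-through rule is almost embarrassingly simple, which is part of the problem. A sketch with hypothetical window values (every platform sets its own):

```python
from datetime import datetime, timedelta

# The view-through rule from the paragraph above, as code. Window lengths
# are platform settings, not laws of nature; these values are hypothetical.
VIEW_WINDOW = timedelta(hours=24)
CLICK_WINDOW = timedelta(days=30)

def claims_credit(touch_type, touch_time, conversion_time):
    # A served impression claims the conversion if the purchase lands
    # inside the lookback window. No click required for "view".
    window = CLICK_WINDOW if touch_type == "click" else VIEW_WINDOW
    return timedelta(0) <= (conversion_time - touch_time) <= window

served = datetime(2024, 5, 1, 9, 0)    # ad rendered while the user scrolled past
bought = datetime(2024, 5, 1, 22, 0)   # purchase 13 hours later
print(claims_credit("view", served, bought))  # True: display books the sale
```

Nothing in that check asks whether the ad did anything; it only asks whether the ad was nearby.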
The argument under the argument: whose team gets credit
Attribution debates aren't really about correctness. They're about whose team gets credit and whose budget survives the next cycle.
Every conversation about which model to use runs on two layers at once. One is epistemological — which model is most accurate. One is political — whose team gets credit. Lifecycle usually loses the political layer under last-touch, because most conversions have a paid or organic touchpoint sneaking in near the bottom of the funnel and stealing the last click. MTA hands lifecycle some credit back. Incrementality hands it the credit it actually deserves. Which is exactly why those incrementality conversations are the slowest to get organisational buy-in — the channels that fare worst under it have every reason to drag the timeline.
The operator move: report the number from whichever model fairly represents the program's contribution, and have a second number ready for when leadership challenges the first. Last-touch for operational reports. Incrementality for budget conversations. The Attribution Audit skill covers how to structure the dual-model reporting without it looking like attribution-shopping — because the second it looks like that, you've lost the room.
Why multi-touch feels like the adult answer — and why it isn't
MTA sounds like the grown-up compromise. Spread the credit. Acknowledge the journey. Stop arguing. There's one problem buried inside it: the weighting is arbitrary. Linear gives every touch in the path the same value, but is the sixteenth email genuinely worth as much as the first? Time-decay weights recent touches more, which means channels that hit near the conversion (often paid, often not lifecycle) win on timing alone. Position-based weights first and last, which is a vibe, not a finding.
There is a cleaner version of multi-touch that weights touches based on holdout-measured incrementality per channel. If a holdout shows email produces 40% of incremental revenue, ads 35%, content 25%, your weighting becomes evidence-based rather than picked from a drop-down. Most programs don't do this. It requires running holdouts across several channels at once and stitching the results together — operationally heavy, organisationally tricky. Which is why most MTA numbers, however official the dashboard looks, are guesses in a spreadsheet wearing a chart.
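For what the cleaner version looks like mechanically, a minimal sketch (the 40/35/25 channel shares echo the hypothetical numbers above; in practice they would come from holdout tests, not a drop-down):

```python
# Sketch of incrementality-weighted multi-touch. Per-channel shares are
# hypothetical stand-ins for holdout-measured incremental revenue shares.
incremental_share = {"email": 0.40, "ads": 0.35, "content": 0.25}

def weighted_split(path, revenue):
    # Weight each touch by its channel's holdout-measured share, then
    # normalise across the touches that actually appear in this path.
    weights = [incremental_share.get(channel, 0.0) for channel in path]
    total = sum(weights)
    if total == 0:
        return {}
    credit = {}
    for channel, w in zip(path, weights):
        credit[channel] = credit.get(channel, 0.0) + revenue * w / total
    return credit

print(weighted_split(["email", "ads", "email", "content"], 100.0))
# email ~57.14, ads 25.00, content ~17.86: email's two touches
# compound its holdout-derived weight.
```

The mechanics are trivial; the expensive part is earning the weights.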
Two reports, two purposes — and the rule that keeps them separate
Mature programs run two parallel streams of attribution reporting, each answering a different question. Mixing them is where most lifecycle teams get into trouble.
Stream one: attribution in the operational reports — daily, weekly, monthly. It tells you where conversions happened and which touchpoints were in the path. Useful for spotting broken programs, evaluating creative, making tactical calls. Is last-touch wrong for lifecycle? For lifecycle ROI specifically, yes — it systematically under-credits programs whose work happens days or weeks before the buy. For operational reporting, it's genuinely fine. Don't take it into a budget meeting and you'll avoid most of the trouble.
Stream two: incrementality for budget and strategic questions — quarterly at minimum, annually for smaller programs. It tells you whether the program is worth running and how much revenue it actually produces. Useful for allocation, kill calls, and defending lifecycle to finance when finance is in a mood. A holdout for budget season and a second for mid-year review is the cadence that keeps the financial case current without exhausting the team or the audience.
Where it goes wrong is using attribution for budget conversations. Attribution-based lifecycle revenue is almost always optimistic — it counts conversions that would have happened anyway and quietly attributes them to whichever email brushed against the path. When leadership eventually notices the gap between attribution-claimed revenue and bottom-line revenue (and they always do, it just takes a while), the credibility hit damages the whole program. Better to under-promise with incrementality numbers and deliver consistently than to inflate now and explain later.
Can you run both? Yes, and most mature programs do. Attribution in weekly dashboards, incrementality in the quarterly review. They answer different questions and complement each other. The only real risk is attribution numbers leaking into budget conversations by accident — keep the separation deliberate, name it out loud, label every report with the model that produced it. For framing incrementality numbers in CFO conversations specifically, the retention economics guide covers it.
One thing to take from this guide into Monday: pick the model that fits the question in front of you, label it clearly, and never let the operational number do work the causal number is supposed to do. These debates calm down fast once everyone in the room agrees what each number is actually measuring.
Related guides
Holdout group design: the incrementality tool most lifecycle programs skip
Without a holdout, lifecycle ROI is attribution-model guesswork with a spreadsheet. With one, you get a defensible number you can actually put in front of finance. Here's how to size, run, and read a holdout — and the three mistakes that quietly invalidate the result.
Price-testing through email: what's testable, what isn't
Email is the fastest place to try a new price, and the easiest place to learn the wrong lesson. What you can test cleanly, what you can't, and the measurement traps that quietly turn price tests into expensive false positives.
Retention economics: proving lifecycle ROI to finance
Lifecycle programs get deprioritised when they can't defend their impact in dollars. The four models that keep the budget — LTV, payback, cohort retention, incrementality — and the four-slide pattern that wins a CFO room.
Measuring AI personalisation lift honestly
Every vendor case study shows AI personalisation moving the numbers. Most internal post-mortems show the lift evaporating once a proper holdout is in place. The gap between the two is the measurement methodology. Here's the framework for proving — to yourself, your CFO, and the auditor — whether AI personalisation is actually earning its place.
A/B testing in email: sample size, novelty, and what to report
Most email A/B tests produce winners that don't reproduce. Three reasons keep showing up: under-powered samples, the novelty effect, and weak readout discipline. This guide is about designing tests that actually drive decisions instead of theatre.
Sample size: the calculation everyone gets wrong in email A/B tests
Most email A/B tests are powered to detect effects far larger than the test could actually produce. The result: false positives and false nulls, with confident conclusions in both directions. Sample size calculation fixes this before you send. Here's the five-minute version.