Custom attributes: the data design that decides what your program can do
Picture the conversation that starts every avoidable lifecycle fire. Someone in a strategy meeting asks, "can we send this only to users on the premium plan who haven't opened an email in 30 days?" The CRM lead pauses. Pulls up the workspace. Squints at a list of 240 user-level data fields with names like custom_plan, plan_v2, subscription_status_new. Three of them look right. Nobody can remember which one updates. Forty-five minutes later the answer is "give me a week." That accumulated mess — every ESP (the email sending platform — Braze, Iterable, Customer.io) eventually grows one — is the single most common reason a 15-minute segmentation question turns into a multi-week engineering ticket. This guide is how you stop it before it starts.

By Justin Williames
Founder, Orbit · 10+ years in lifecycle marketing
Three rules that do the work of a hundred
Custom attributes — the user-level data fields your ESP filters and personalises on — are infrastructure. Badly named or badly maintained, they produce unreliable segmentation, personalisation that fires on the wrong users, and engineering queues full of "can we fix this attribute" tickets.
Before the rules: a custom attribute is a single piece of data attached to a user in your ESP — their subscription tier, their last purchase date, their lifetime order count. Segments filter on them. Triggers fire when they change. Liquid (the templating language ESPs use to inject personalised values into emails) renders them into copy. They're the spine of the program. Three rules keep the spine straight:
Attributes are derived data, not raw data. Don't create an attribute for every column in your data warehouse — the central database where your raw user data lives. Create attributes for the specific segmentation and personalisation decisions the lifecycle program actually needs to make. "user_is_premium_tier" is useful. "user_plan_internal_id" is clutter. The warehouse stays the source of truth; the ESP holds the derived view the program actually uses. Reverse-ETL (the pipeline that pushes warehouse data into your ESP) copying every column wholesale is how you end up with 300 attributes and nobody who knows which ones are live.
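The derived-not-raw rule in practice looks like a small mapping layer between the warehouse and the reverse-ETL push. This is a hypothetical sketch: the field names (`plan_internal_id`) and the plan-ID set are illustrative, not from any real schema.

```python
# Assumed mapping: which internal plan IDs count as premium. Keeping this
# in one place means the ESP only ever sees the derived answer.
PREMIUM_PLAN_IDS = {"plan_204", "plan_209"}

def derive_attributes(warehouse_row: dict) -> dict:
    """Map a raw warehouse row to the minimal derived view the ESP needs.

    Only the derived field is pushed; raw columns like billing_region or
    internal plan IDs stay in the warehouse.
    """
    return {
        "subscription_plan_tier": (
            "premium"
            if warehouse_row["plan_internal_id"] in PREMIUM_PLAN_IDS
            else "standard"
        ),
    }

row = {"plan_internal_id": "plan_204", "billing_region": "EU"}
print(derive_attributes(row))  # → {'subscription_plan_tier': 'premium'}
```

The point of the indirection: when the plan-ID mapping changes, one function changes, and no campaign filtering on `subscription_plan_tier` needs touching.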
Every attribute has a defined update path. Three questions any attribute should be able to answer: what sets it the first time, what keeps it current, who owns it when it breaks. Attributes without those answers go stale silently — and silent staleness is the worst kind. The data looks fine right up until it produces a wrong send to the wrong cohort.
Retire aggressively. An attribute with no active campaigns using it is clutter. Audit quarterly. Remove anything unused. Keeping dead attributes around has a real cost: they confuse future teammates, bloat the segment-builder dropdown, and slowly turn "let me check" into a 40-minute forensic exercise.
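The quarterly audit reduces to a set difference: attributes that exist minus attributes something actually references. A minimal sketch, assuming you can export attribute names and campaign/segment references from your ESP (the data shapes here are invented for illustration):

```python
# Illustrative audit data: in practice these come from your ESP's API
# or a workspace export, not hardcoded sets.
attributes = {"subscription_plan_tier", "commerce_order_count_90d", "plan_v2"}
references = {
    "winback_campaign": {"subscription_plan_tier"},
    "vip_segment": {"commerce_order_count_90d"},
}

# Any attribute no campaign or segment references is a retirement candidate.
used = set().union(*references.values())
retire_candidates = attributes - used
print(sorted(retire_candidates))  # → ['plan_v2']
```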
Naming: the convention that stops the free-text mess
Without a convention, attribute names become a free-text wasteland. Two people name the same concept three different ways and six months later nobody knows which one updates. The convention that holds up across most programs:
[domain]_[entity]_[property]_[aggregation]
Examples:
• subscription_plan_tier — user's current subscription tier
• commerce_order_count_lifetime — total orders ever
• commerce_order_count_90d — orders in the last 90 days
• engagement_email_open_count_30d — email opens in the last 30 days
• preference_category_primary — user's top category
The domain prefix groups related attributes together inside the ESP's dropdown — type "commerce_" and every commerce attribute surfaces at once. The aggregation suffix ("_lifetime", "_90d", "_count") makes the data shape obvious at a glance. Pick snake_case (lowercase_with_underscores) and stick to it — camelCase and snake_case mixed in the same instance is a recipe for bugs when authors misremember which format applies to which attribute. See the Braze naming conventions guide for the broader naming framework this fits into.
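A naming convention only holds up if something checks it. A sketch of a lint pass for the `[domain]_[entity]_[property]_[aggregation]` pattern above; the domain list is an example set, not an official registry:

```python
import re

# Example domain prefixes from the convention above; extend for your program.
DOMAINS = {"subscription", "commerce", "engagement", "preference"}

# snake_case only: lowercase words joined by underscores, digits allowed
# in later segments (so "_90d" suffixes pass).
PATTERN = re.compile(r"^[a-z]+(_[a-z0-9]+)+$")

def check_name(name: str) -> list[str]:
    """Return a list of convention violations for a proposed attribute name."""
    if not PATTERN.match(name):
        return ["not snake_case"]
    if name.split("_")[0] not in DOMAINS:
        return ["unknown domain prefix"]
    return []

print(check_name("commerce_order_count_90d"))  # → []
print(check_name("planV2"))                    # → ['not snake_case']
```

Running a check like this in the attribute-creation approval step is cheaper than untangling a free-text mess later.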
Not every attribute deserves the same care — tier them
The 300-attribute workspace and the 40-attribute workspace fail the same way: they treat every field as equally important. They aren't. Sort yours into tiers, and apply different maintenance standards to each:
Tier 1 — Core attributes. Used across many campaigns. Updated frequently. Central to segmentation. subscription_tier, lifetime_order_count, last_engagement_date. These get reliable update pipelines and continuous monitoring. If one of these breaks, it's a fire — multiple programs are now lying.
Tier 2 — Program-specific attributes. Used by one or two specific programs. birthday_month for birthday emails. preferred_frequency for frequency management. Owned by the specific program. Retired the day the program retires.
Tier 3 — Computed or temporary attributes. Derived at send time or for a one-off campaign. Usually better handled via Liquid or a segment filter at send time rather than persisted as a stored attribute.
How fresh does the data need to be? Pick on purpose
Every attribute needs an update mechanism — the pipeline that keeps its value current as users do things. The right mechanism depends on what the attribute drives downstream:
Real-time (event-driven). Updated the moment a user action occurs. Fast but requires event pipelines. Reserve it for attributes that drive immediate triggers — "user_is_in_cart" has to be right within seconds, or the cart-abandon flow fires on people who already checked out.
Batch (nightly or hourly). Computed in the warehouse and pushed to the ESP on a schedule via reverse-ETL. Acceptable lag for most segmentation — lifetime_order_count, preferred_category — because nobody notices if the number is 12 hours old. The default for the majority of attributes in any healthy program.
Send-time (computed via Liquid or similar). Calculated when the email actually sends. Nothing stored. Most flexible; requires the underlying data to be available at send time, plus a clear set of decisions about what happens when it isn't (the catalogue is missing, the API is slow, the user has no value).
Document which mechanism each attribute uses. When something looks wrong, the first question is always "when was this last updated, by which pipeline?" — and programs without that documentation spend hours on debug sessions that should have taken three minutes. Over-refreshing wastes pipeline budget; under-refreshing causes wrong sends. Neither comes free.
The four failure modes that show up in any audit
Stale attributes. An attribute that was useful in 2024 quietly stopped updating. Campaigns still filter on it. Results are wrong, and nobody notices because the segment still returns users — just the wrong ones. Fix: quarterly audit, confirm update pipelines are still running, retire anything no longer maintained.
Multiple sources of truth. Two attributes tracking the same thing — "subscription_status" and "is_subscriber" — updated by different pipelines, disagreeing in subtle ways. Authors pick whichever one Slack-search surfaces first. Fix: designate one as source of truth. Remove the other, or compute it from the first.
Attribute sprawl. 300+ attributes, most unused, nobody knows which ones matter. Fix: audit and retire. Establish a creation-approval process for new attributes so the new sprawl doesn't immediately replace the old sprawl six months later.
Privacy-sensitive data over-stored. Detailed personal data (full birthdate, address, income) stored in the ESP when only a derived attribute (age_bracket, city, plan_tier) is needed for the program. Fix: store the minimum needed for lifecycle. Keep raw data in the warehouse where privacy controls are stronger and access is audited.
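The multiple-sources-of-truth failure mode above is also easy to detect mechanically: export both attributes and look for users where they disagree. A sketch with invented user records:

```python
# Illustrative export: two attributes that should encode the same fact.
users = [
    {"id": 1, "subscription_status": "active",  "is_subscriber": True},
    {"id": 2, "subscription_status": "churned", "is_subscriber": True},  # drift
    {"id": 3, "subscription_status": "active",  "is_subscriber": True},
]

# Flag every user where the two pipelines disagree.
disagreements = [
    u["id"] for u in users
    if (u["subscription_status"] == "active") != u["is_subscriber"]
]
print(disagreements)  # → [2]
```

A nonzero disagreement list is the evidence you need to pick one attribute as source of truth and retire the other.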
Retiring an attribute safely is a three-step move. Audit every campaign and segment referencing it. Remove or redirect those references. Wait 30 days of zero usage, then delete. Skip the audit and you break campaigns in production the moment the attribute goes — the order matters. As for the meta-question of how many attributes a program should have at all: as few as enable the decisions you actually need to make. Most programs operate comfortably with 30–80 active attributes. Above 200 is usually sprawl territory. The count matters less than the attribute-to-usage ratio — every attribute should have at least one active campaign or segment referencing it, or it shouldn't exist.
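The three-step retirement move can be encoded as a gate that refuses deletion until both conditions hold. A minimal sketch, assuming reference lists and last-usage dates come from an ESP export; the names here are hypothetical:

```python
from datetime import date, timedelta

def safe_to_delete(references: list[str], last_used: date, today: date) -> bool:
    """Gate for attribute deletion.

    Steps 1–2 are done when no campaign or segment still references the
    attribute; step 3 is the 30-day zero-usage wait.
    """
    if references:  # something still points at it: do not delete
        return False
    return today - last_used >= timedelta(days=30)

print(safe_to_delete([], date(2025, 1, 1), date(2025, 2, 15)))                  # → True
print(safe_to_delete(["winback_segment"], date(2025, 1, 1), date(2025, 2, 15)))  # → False
```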
Treat attribute hygiene as part of the quarterly program audit. Most programs discover 20–40% of their attributes are stale or duplicated the first time they actually run one. The fix is rarely hard — it's the looking that nobody schedules.
Frequently asked questions
- What is a custom attribute in Braze?
- A custom attribute is a user-level data field beyond the built-in properties (email, first name, etc.). Common custom attributes: subscription_tier, last_purchase_date, nps_score, product_interests, churn_risk_band. Custom attributes are the backbone of segmentation and personalisation — segments filter on them, Liquid renders them, triggers fire when they change. They differ from events: attributes describe current state, events describe historical action.
- How should I name custom attributes?
- snake_case with a consistent, typed vocabulary. Alongside the domain-first convention above, a common complement is typed prefixes: dt_last_purchase (dt_ for dates), is_paying (is_ for booleans), n_sessions_30d (n_ for counts), amt_ltv (amt_ for amounts), tier_subscription (tier_ for categorical). Typed prefixes make filtering in the Braze UI faster (typing "is_" surfaces every boolean attribute) and help non-technical teammates read segments without memorising every field's type. Whichever scheme you lead with, pick one and apply it everywhere.
- How many custom attributes should I maintain?
- Fewer than most workspaces have — and audited quarterly. Every Braze instance I've seen eventually accumulates 200+ attributes where 100+ haven't been written to or read in a year. Dead attributes clutter segment-builder dropdowns, slow down the UI, and confuse new team members. Best practice: review all attributes quarterly, identify any with zero writes over the previous quarter, and archive them. Keep the active set under 100 wherever possible.
- Events vs custom attributes — which to use?
- Use events for actions (purchased, viewed, clicked, signed_up) and attributes for state (current_tier, is_active, total_lifetime_value). The append-only log of events lets you query history; the single-value-per-user shape of attributes represents current state. Trigger programs from events, segment and personalise from attributes. A common failure: storing event data as attributes (current_last_purchase instead of a purchased event log) loses history and wastes attribute slots.
- Should I store computed values as custom attributes?
- Yes for values expensive to compute live, no for values the ESP can derive from events. Good examples of stored-computed attributes: churn_risk_band (from a ML model computed daily), lifetime_value (aggregated from purchase events), tier_mapping (from subscription data). Avoid storing what can be derived cheaply from event streams — it creates consistency risk (the stored value drifts from the truth) and wastes slots.
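The events-versus-attributes distinction in the answers above can be sketched directly: events are an append-only log, and the attribute is the current state recomputed from it, so history is never lost. The event shape here is illustrative:

```python
# Append-only event log: history of what the user did.
events = [
    {"type": "purchased", "amount": 40.0, "ts": "2025-01-03"},
    {"type": "purchased", "amount": 25.0, "ts": "2025-02-11"},
]

# Attributes: single current values per user, derived from the log.
# Storing only these would lose history; keeping the log means both exist.
attributes = {
    "commerce_order_count_lifetime": sum(
        e["type"] == "purchased" for e in events
    ),
    "commerce_order_value_lifetime": sum(
        e["amount"] for e in events if e["type"] == "purchased"
    ),
}
print(attributes)
# → {'commerce_order_count_lifetime': 2, 'commerce_order_value_lifetime': 65.0}
```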
This guide is backed by an Orbit skill