Composable Audiences (Early Access)

With mParticle’s Composable Audiences feature, you can create dynamic user segments using data stored directly in your data warehouse. This approach enables you to leverage rich, custom attributes and events that are maintained in your own systems without needing to copy data.

By querying your warehouse directly, you can ensure that audiences are built on the most up-to-date and complete picture of your users, all while eliminating data duplication and minimizing operational overhead.

Creating a composable audience happens in two stages:

  1. You connect mParticle to your warehouse and identify what data to use.

    • This includes configuring a connection and one or more data models that specify what data to ingest and how to interpret it.
  2. You create an audience based on your data models.

    • These audiences can then be forwarded to downstream partners for activation across paid media, CRM platforms, and more, just like traditional mParticle audiences.

How is data from your warehouse processed?

When a composable audience initializes, mParticle translates your audience definition (the criteria that determine which users should be included in the audience) into a SQL query. This query is then executed directly within your data warehouse using your warehouse’s compute resources.
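To make this concrete, here is a sketch of the kind of SQL an audience definition might translate into. All table, column, and criteria names below are hypothetical; the actual query is generated and managed by mParticle.

```sql
-- Hypothetical query for an audience like "premium US users who
-- purchased in the last 30 days" (Snowflake syntax shown).
SELECT u.user_id
FROM analytics.users u
JOIN analytics.events e
  ON e.user_id = u.user_id
WHERE u.country = 'US'
  AND u.plan = 'premium'
  AND e.event_name = 'purchase'
  AND e.occurred_at >= DATEADD(day, -30, CURRENT_TIMESTAMP());
```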

The SQL query returns a list of users and (optional) attributes that make up the audience at that particular point in time. This data is stored in a temporary table in your data warehouse that is provisioned for internal processing only. These datasets are not part of your production tables and are only used by mParticle to track audience membership over time. These additional datasets are automatically managed and cleaned up by mParticle.

mParticle pulls the users and any optional attribute data from your warehouse before forwarding them to your connected downstream integration.

When you create a composable audience, you specify the cadence at which it refreshes. Note that composable audiences can’t be refreshed in real-time.

Composable Audiences operates independently of mParticle’s identity resolution framework (IDSync). No user profile enrichment or merging takes place. Audience membership is based solely on the logic provided in the SQL query and the structure of the source data. This makes the feature highly transparent and deterministic, giving you full control over how your audiences are built and updated. It also means that the records in your data warehouse must include some form of user identifier (like a user ID or customer ID) that can be used when determining audience membership.

What data is stored in mParticle?

mParticle only stores data about your warehouse that’s needed to create and activate composable audiences, including:

  • Information needed to connect with, and authenticate to, your warehouse provider.
  • Metadata about your warehouse database and tables, like column names.
  • If explicitly enabled, mParticle caches up to 1000 of the most frequent values from each column in your table for up to 7 days to power the autocomplete feature within the audience builder. You must enable autocomplete for each column included in your data model.
A future release of Composable Audiences will allow you to create “hybrid composable audiences,” in which composable audience membership can optionally be noted on user profiles in Customer 360 or the User Activity View.

No other data is stored in mParticle beyond these items.

How are your user profiles modified?

Composable Audiences does not modify your user profiles in any way.

A future release will add the ability to optionally have composable audience membership reflected on user profiles.

How are composable audiences refreshed?

To improve performance, mParticle only tracks changes in audience membership (specifically, the users who have been added to or removed from the audience) rather than recomputing the full list each time. The first time the audience is executed, the entire set of qualifying users is returned. For subsequent runs, mParticle compares the new audience snapshot with the previous one to determine which users have been added or dropped.

This “diff-ing” logic happens entirely within the data warehouse environment. Two snapshots of the audience (previous and current) are compared using SQL, and only the incremental changes are passed back into mParticle for forwarding to downstream integrations. This approach ensures both scalability and efficiency in audience refresh cycles.
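As an illustration, the diff between two snapshots can be expressed with set operations. The table and column names below are hypothetical; the real snapshot tables are provisioned and cleaned up automatically by mParticle.

```sql
-- Users added since the previous refresh:
SELECT user_id FROM audience_snapshot_current
EXCEPT
SELECT user_id FROM audience_snapshot_previous;

-- Users removed since the previous refresh:
SELECT user_id FROM audience_snapshot_previous
EXCEPT
SELECT user_id FROM audience_snapshot_current;
```

Only the rows returned by these two queries need to be forwarded downstream.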

Are composable audiences compatible with native mParticle Audiences?

Not yet. Support for interoperability is planned for a future release, but you currently can’t clone a native audience (one built from data stored in the mParticle CDP) to or from a composable audience, nor can you reference one audience type from the other using the isMemberOf() function.

Setup guide

This guide walks you through connecting your data warehouse, creating data models, and building your first composable audience.

1. Connect to your data warehouse

  1. In the mParticle UI, navigate to Data Platform > Setup > Inputs, and open the Feeds tab.
  2. Select either Google BigQuery or Snowflake — currently, these are the only supported warehouse providers.
  3. Click the + button to add a new input.
  4. In the “How would you like to connect your Data Warehouse?” modal, select Use data directly from your Data Warehouse.

    • This option allows you to query your warehouse directly and use that data to power audience segmentation.

Connecting to Google BigQuery

Step 1: Create a new service account for mParticle in Google Cloud Console
  1. Go to the Google Cloud Console IAM & Admin > Service Accounts page.
  2. Click Create Service Account.
  3. Enter a name for the service account. For example, “mparticle-warehouse-connectivity”.
  4. Add a description. For example, “Service account for mParticle data warehouse connectivity”.
  5. Click Create and Continue.


Step 2: Grant BigQuery permissions to the service account
  1. In the Grant this service account access to project section, click Select a role.
  2. Search for and select BigQuery Data Editor.
  3. Click Continue.
  4. Click Done to create the service account.


Step 3: Create and download a service account key
  1. Find and select your newly created service account in the list.
  2. Go to the Keys tab.
  3. Click Add Key > Create new key.
  4. Select JSON as the key type.
  5. Click Create.
  6. The JSON key file will automatically download to your computer.
  7. Keep this file secure; you’ll need to upload it to mParticle when creating the connection in Step 5.


Step 4: Grant access to the dataset in BigQuery
  1. Go to the BigQuery Console.
  2. Navigate to your dataset.
  3. Click the three-dot menu next to your dataset.
  4. Click Share dataset.
  5. In the Add principals section, enter your service account email address.
  6. Assign the BigQuery Data Editor role.
  7. Click Add.
  8. Click Done.


Step 5: Create a BigQuery Connection in mParticle
  1. Provide a Connection Name.
  2. Upload the JSON service account key file you downloaded.
  3. Enter your BigQuery project ID.
  4. Enter your dataset name.
  5. Click Create Connection to complete the setup.

Connecting to Snowflake

Step 1: Configure Snowflake
  1. An administrator in your Snowflake account will need to create a service user that mParticle can use when connecting to, and retrieving data from, your database. Make note of the username and role when creating your service user, as these are referenced in the steps below. Learn more about service users in Snowflake’s documentation.
  2. Provide a Storage Integration Name, and use the Username and Role for the mParticle service user you created.

     CREATE OR REPLACE STORAGE INTEGRATION mp_[your_name]
         WITH TYPE = EXTERNAL_STAGE
         STORAGE_PROVIDER = 'S3'
         ENABLED = TRUE
         STORAGE_AWS_ROLE_ARN = 'arn:aws:iam::[your_value]'
         STORAGE_AWS_OBJECT_ACL = 'bucket-owner-full-control'
         STORAGE_ALLOWED_LOCATIONS = ('s3://[your_value]');

     GRANT USAGE ON INTEGRATION mp_[your_name] TO ROLE [your_role];
  3. Create a schema named “MPARTICLE” in your Snowflake database and grant the service user read/write access. This is used to track audience membership over time and is automatically managed by mParticle.

    CREATE SCHEMA IF NOT EXISTS "MPARTICLE";
    GRANT USAGE ON SCHEMA "MPARTICLE" TO ROLE [your_role];
    GRANT CREATE TABLE ON SCHEMA "MPARTICLE" TO ROLE [your_role];
    GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA "MPARTICLE" TO ROLE [your_role];
    GRANT SELECT, INSERT, UPDATE, DELETE ON FUTURE TABLES IN SCHEMA "MPARTICLE" TO ROLE [your_role];
  4. Grant the role access to any schemas and tables in your database that will be used to build audiences.

    GRANT USAGE ON SCHEMA [your_database]."[your_schema]" TO ROLE [your_role];
    GRANT SELECT ON ALL TABLES IN SCHEMA [your_database]."[your_schema]" TO ROLE [your_role];
Step 2: Create a Snowflake Connection in mParticle
  1. Provide a Connection Name, Account Identifier, Region, Warehouse, and Database in the Connection UI.
  2. Once complete, click Create Connection.

2. Create a data model

Your data model specifies the exact data from your warehouse that you’ll create your audiences from. To create a data model, select one of the following Model Types, and follow the specific instructions for that type.

  • User data models

    • User Data models define the primary dataset (users or customers) that audiences are built from. When creating a composable audience, create a user data model first; you can later supplement it with additional event or behavioral data.
  • Event data models

    • Event Data models are used to pull timestamped actions (like logins, purchases, or other conversions) that you can use to enrich your user data by creating a relationship between your user data and event data models.
  • General data models

    • General models can be used to enrich audiences with other data (e.g., subscriptions, device types).

Create a user data model

User data models define the core user dataset that Composable Audiences use for segmentation, so you must create a user data model in order to create a composable audience. Like all data models, they’re built from a SQL query that selects user records and attributes from your warehouse, with a required primary ID column to uniquely identify each user. This model forms the base for all audiences and can be linked to other models, like events or subscriptions, for richer targeting.

1. Write your SQL query

Enter the SQL that will be executed against your warehouse. Because data models are defined using SQL, you can use features like joins, filters, grouping, and case statements to target the exact data you want to build your audiences with. This determines which fields and records are made available for targeting.
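For example, a user data model query might look like the following sketch. All schema, table, and column names are placeholders for your own warehouse objects.

```sql
-- Hypothetical user data model: one row per user, with a derived attribute.
SELECT
    u.user_id,                                   -- primary ID column
    u.email,
    u.country,
    CASE WHEN s.plan = 'premium' THEN TRUE
         ELSE FALSE END AS is_premium            -- derived via a join
FROM analytics.users u
LEFT JOIN analytics.subscriptions s
    ON s.user_id = u.user_id
WHERE u.deleted_at IS NULL;                      -- filter out deleted users
```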

2. Enter your model details

Name your model for easy reference, and choose a primary ID column (e.g., user_id, customer_id) to identify users in the data.

3. Map columns (optional)

Creating mappings is an optional step that helps you control how your warehouse data is interpreted and used within mParticle and by downstream partners. For example, you can specify whether a column should be treated as a user attribute or event attribute, and assign a more readable display name for use in the audience builder. This makes your data easier to work with, especially for team members who aren’t familiar with the raw schema used in your warehouse.

For each column returned by your query, review the data type mParticle detects and update it if you want mParticle to interpret it a different way.

  • Choose the corresponding field name to tell mParticle how to interpret the column (as a user attribute or a user identifier).
  • Assign a display name if you want the column to have a more marketer-friendly name in the audience builder.

4. Define relationships (optional)

Creating relationships between data model types helps you build more advanced audiences by combining different kinds of data. For example, linking a user data model with event data lets you target users who meet specific profile criteria and have performed certain actions. This also keeps your data models modular, making them easier to reuse across different audience definitions.

Create an event data model

Event Data Models capture timestamped user actions (like logins, purchases, or conversions) that can be used to refine or enrich your audience definitions. These models pull event-level data from your warehouse and can be linked to user data models through a shared user ID. These relationships allow you to build audiences based on specific behaviors or activity patterns over time.

To create an event data model:

1. Write your SQL query

Write the SQL query that selects the event data you want to use for audience segmentation.

  • Your query should return timestamped actions (e.g., purchases, app logins, conversions) associated with user identifiers.
  • You can include joins, filters, and calculated fields to shape the dataset as needed.
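A minimal event data model query might look like this sketch (all names are placeholders):

```sql
-- Hypothetical event data model: one row per event.
SELECT
    e.event_id,          -- primary ID, unique per event
    e.user_id,           -- identifies who performed the event
    e.event_name,
    e.occurred_at        -- timestamp column, required for event models
FROM analytics.events e
WHERE e.event_name IN ('login', 'purchase');
```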
2. Enter your model details

After writing your query, give the model a clear name so it’s easy to reference when building audiences.

  • Choose a primary ID column—this should be a unique identifier for the event (EVENT_ID, for example) and must be of type string or number.
  • Select a timestamp column that represents when the event occurred. This is required for the model to function as event-based.
  • You should also include a user identifier (like USER_ID) to denote who performed each event. This ID is used when relating your event data model with a user data model.

3. Columns (optional)

Review the columns returned by your query and update the detected data types if needed.

Create a General Data Model

General data models are a flexible, catch-all option for data that doesn’t neatly fit into user or event categories (like subscriptions, device types, or other contextual attributes). Like other models, general data models are defined using SQL and can be linked to user models to enrich audience criteria.

To create a General Data Model in the mParticle UI, follow these steps:

1. Write your SQL query

Write the SQL query that selects the dataset you want to use to enrich audience segmentation.

  • This can include contextual data such as device types, subscription plans, or any other non-user, non-event attributes.
  • Use SQL features like joins and filters to shape the data as needed.
  • Make sure your query includes a column that can be used to join this data with a User Data Model.
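For instance, a general data model for subscription data might look like this sketch (all names are placeholders):

```sql
-- Hypothetical general data model: subscription details per user.
SELECT
    s.user_id,           -- join key back to your user data model
    s.plan,
    s.plan_started_at,
    s.renewal_date
FROM analytics.subscriptions s;
```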
2. Enter your model details

After defining your query, name the model for easy identification.

  • Choose a primary ID column that corresponds to the user identifier you’ll use to link this model to a User Data Model.
  • The primary ID should be unique per record and of type string or number.

3. Create an audience

Once you’ve defined your data models, you’re ready to create an audience. You’ll choose the data model to base your audience on, set criteria for segmentation, and schedule regular audience refreshes to ensure your segments stay current. After your audience is created, you can activate it across your marketing platforms and monitor its performance.

To create your audience:

  1. Navigate to Segmentation > Audiences and click + New Audience.
  2. Select Choose from data model from the available options.
  3. Use the dropdown menu to select the data model you want to base your audience on.
  4. Click Next.
  5. Enter a name for the audience and select a refresh cadence.
  6. Define audience criteria based on the imported fields from your model.

    • The audience builder displays a live preview of your audience size based on the selected criteria, helping you validate your logic before activation. A future release will include additional insights to support audience validation.

Activating and Monitoring Your Audience

Once your audience is created, you can:

  • Activate your audience across downstream advertising platforms such as Google Ads, Facebook, or TikTok.
  • Monitor audience size and delivery logs to confirm data quality and validate your targeting criteria.

Key differences between Composable and Native Audiences

  • Audience sizes shown in the UI are actual sizes (not estimates), ensuring you always have an accurate picture of your segment.
  • The audience builder UI for composable audiences differs slightly from the native mParticle audience builder. For example, composable audiences have clearer logical groupings and support drag-and-drop to rearrange segmentation clauses.
  • Composable audiences are clearly labeled as such on the Audiences landing page.

FAQ

Can I use composable audiences for real-time use cases?

No, composable audiences run on a scheduled refresh cadence. They are not designed for real-time updates. You define the refresh schedule when creating the audience.

Does mParticle ingest all audience members every time it runs?

No. The first time your audience runs, mParticle ingests the full list of qualifying users. On subsequent runs, only users who have been added or removed are ingested.

Does composable audience data update my user profiles?

No. Composable Audiences does not modify user profiles or interact with identity resolution (IDSync). A future release will allow you to opt in to reflecting audience membership on profiles.

Can I clone or convert a native mParticle audience into a composable audience?

No. Native audiences (audiences built using data stored in mParticle) and composable audiences are separate features and cannot be cloned across types.

What happens if I change my SQL query in a data model?

If you update a data model’s SQL, any audiences built on that model will reflect the new structure the next time they refresh. You should verify that the model still returns the expected fields and structure.

How accurate is the size shown in the audience builder?

For composable audiences, the audience size preview reflects actual user counts based on your current criteria.

What kinds of data can I include in my models?

You can use user data, timestamped event data (like purchases or logins), and general contextual data (like device types or subscription plans). Each is configured using a SQL query.


    Last Updated: August 13, 2025