Warehouse Sync API Overview

The Warehouse Sync API is mParticle’s reverse ETL solution that allows you to ingest user profile data or event data into mParticle from a data warehouse.

There are four stages to configuring Warehouse Sync:

  1. Create a connection between mParticle and your database.
  2. Create a data model that specifies what data you want to ingest.
  3. Create a field transformation that maps your source data to fields in mParticle.
  4. Create and configure the pipeline between your database and mParticle using your data model and field transformations. Your pipeline settings determine when, and how frequently, data will sync.
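The four stages above can be sketched as the resources you create, in order; note that every field name, resource name, and value in this sketch is an illustrative assumption, not the exact API contract (see the Warehouse Sync API Reference for the real schema):

```python
# Illustrative sketch of the four Warehouse Sync resources and the order
# they are created in. All field names and values are assumptions for
# illustration only; consult the Warehouse Sync API Reference for the
# exact request schemas.

def build_setup(warehouse_sql: str) -> list[tuple[str, dict]]:
    """Return (resource, payload) pairs in creation order."""
    connection = {"name": "my-warehouse", "type": "snowflake"}          # stage 1
    data_model = {"name": "active-users", "sql": warehouse_sql}         # stage 2
    transformation = {"name": "user-mapping",
                      "mappings": [{"source": "email_col",
                                    "destination": "email"}]}           # stage 3
    pipeline = {"name": "daily-user-sync",                              # stage 4
                "connection": connection["name"],
                "data_model": data_model["name"],
                "field_transformation": transformation["name"],
                "sync_mode": {"type": "incremental",
                              "iterator_field": "updated_at"},
                "schedule": {"frequency": "daily"}}
    return [("connection", connection), ("data-model", data_model),
            ("field-transformation", transformation), ("pipeline", pipeline)]

steps = build_setup("SELECT email_col, updated_at FROM users")
print([name for name, _ in steps])
# The pipeline (stage 4) references the three resources created before it.
```

The key takeaway is the dependency order: a pipeline cannot be created until the connection, data model, and field transformation it references exist.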

Full vs Incremental Pipelines

When you create a Warehouse Sync pipeline you must choose its sync_mode: full or incremental. This choice cannot be changed later.

full pipelines

  • Each run returns the complete result set produced by your data model’s SQL.
  • Ideal for small tables or occasional, ad-hoc backfills.
  • Combine with a WHERE clause to narrow the slice of data when re-running the pipeline.
  • Full pipelines may ingest duplicate data, because deduplication is limited for large datasets. Additionally, without a timestamp to refer to, mParticle can’t track changes to data over time, so a field that has been changed back to a previous value may be mistakenly marked as a duplicate. For most use cases, incremental syncs provide better data accuracy and performance.

incremental pipelines

  • Requires an iterator column and keeps track of the greatest iterator value that has been processed.
  • The first execution ingests all rows where the iterator is after the from value you supply.
  • Subsequent runs fetch only new or updated rows.
  • Leave until blank to make the pipeline unbounded.
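
The incremental behavior described above can be simulated locally. This sketch (plain Python, not the mParticle service itself) shows how a pipeline that tracks the greatest processed iterator value picks up only new or updated rows on each run:

```python
# Simulate incremental sync: each run fetches only rows whose iterator
# value is strictly greater than the watermark (the greatest iterator
# value processed so far), then advances the watermark.

def run_incremental(rows, watermark):
    """Return (rows to ingest, new watermark)."""
    batch = [r for r in rows if r["updated_at"] > watermark]
    if batch:
        watermark = max(r["updated_at"] for r in batch)
    return batch, watermark

table = [
    {"id": 1, "updated_at": "2024-01-01"},
    {"id": 2, "updated_at": "2024-01-02"},
]

# First run: the `from` value acts as the initial watermark, so this
# run backfills everything after it.
batch, wm = run_incremental(table, "2023-12-31")
print(len(batch), wm)  # 2 2024-01-02

# A new row lands in the warehouse; the next run ingests only that row.
table.append({"id": 3, "updated_at": "2024-01-03"})
batch, wm = run_incremental(table, wm)
print([r["id"] for r in batch])  # [3]
```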

Iterator Columns and Timestamps

When using incremental pipelines in Warehouse Sync, you must specify an iterator column—a timestamp field (such as datetime, date, or Unix timestamp) that mParticle uses to track which rows have already been processed. This iterator column is essential for reliable incremental updates and should be distinct from your event timestamp field whenever possible.

Best practices for iterator columns:

  • Use a dedicated iterator column (such as a system timestamp indicating when a row was inserted or updated) for incremental syncs. This is preferred because the time an event occurred (event timestamp) often differs from when the row was added or updated in your warehouse (iterator/system time).
  • Use the event timestamp field for filtering in your SQL (for example, in a WHERE clause) and for mapping to the appropriate mParticle field.
  • When setting up your field transformation, you may ignore the iterator column if it’s only used for sync tracking.
  • You may use the event timestamp field as the iterator column if convenient, but this is not recommended for most use cases.
  • You can offset the iterator column by a fixed amount to accommodate late-arriving data. For example, if data typically arrives in your warehouse one day after creation, set the pipeline’s delay field to 1d to account for this upstream processing time.
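As a rough illustration of the delay setting in the last bullet, each run's effective iterator upper bound is shifted back by the configured delay. This is a sketch of the idea only; the real computation is internal to mParticle:

```python
from datetime import datetime, timedelta

# Sketch of how a pipeline `delay` shifts the iterator window's upper
# bound so late-arriving rows are not missed. Illustrative only.

def iterator_upper_bound(run_time: datetime, delay: timedelta) -> datetime:
    """Rows with iterator values after this bound wait for a later run."""
    return run_time - delay

run = datetime(2024, 6, 2, 0, 0)
bound = iterator_upper_bound(run, timedelta(days=1))  # delay = 1d
print(bound.isoformat())  # 2024-06-01T00:00:00

# A row stamped 2024-06-01 18:00 is past the bound, so it is picked up
# by a later run, after the one-day upstream processing lag has elapsed.
late_row = datetime(2024, 6, 1, 18, 0)
print(late_row > bound)  # True
```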

Filtering the Data You Ingest

Warehouse Sync provides two complementary ways to control which rows are pulled from your warehouse:

  1. SQL WHERE clauses – Add predicates such as WHERE event_timestamp >= DATE '2023-01-01' or a bounded BETWEEN statement inside the query you save with the data model. The filter executes in your warehouse on every run. This is recommended for filtering on fields other than the iterator field, such as event timestamp, event type, or user region, and is useful for validating your pipeline by limiting the dataset to a small subset of data.
  2. Iterator window (from / until) – Use the from and until fields on sync_mode when calling the API to set the minimum and maximum iterator values a pipeline will ever request. This is recommended for a seamless transition between your initial backfill and ongoing incremental syncs, and makes your pipeline runs more easily auditable via the pipeline status APIs.

A row must satisfy both the iterator window and your SQL to be ingested, giving you granular control over the time span and the business logic applied to your data.
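The "both filters must pass" rule can be sketched as follows; the row shape, column name, and predicates here are hypothetical:

```python
# A row is ingested only if it falls inside the iterator window
# (from/until) AND satisfies the data model's SQL WHERE predicate.

def should_ingest(row, window_from, window_until, where_predicate):
    in_window = window_from <= row["inserted_at"] and (
        window_until is None or row["inserted_at"] <= window_until)
    return in_window and where_predicate(row)

rows = [
    {"inserted_at": "2023-06-01", "region": "US"},
    {"inserted_at": "2023-06-01", "region": "EU"},   # fails the WHERE predicate
    {"inserted_at": "2022-12-01", "region": "US"},   # outside the iterator window
]

ingested = [r for r in rows
            if should_ingest(r, "2023-01-01", None,
                             lambda r: r["region"] == "US")]
print(len(ingested))  # 1
```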

Practical Patterns for Historical Ingestion

There are two common strategies for ingesting historical data efficiently:

  1. Single incremental pipeline with an early from date – Set the from value to the start of the period you want to ingest, and leave until blank. The initial run will backfill the entire range, after which the pipeline will automatically switch to incremental updates on its schedule.
  2. Separate historical full and incremental pipelines – Create a full On Demand pipeline and use a WHERE clause in your data model to limit the dataset (for example, WHERE event_timestamp BETWEEN '2024-01-01' AND '2024-06-30'). Trigger the pipeline whenever you need to ingest another slice, updating the clause between runs as needed. Once you have finished ingesting the historical data, disable the full pipeline and create a new incremental pipeline to sync ongoing new data on a regular schedule, setting the from value to the last date you ingested with the full pipeline.
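For strategy 2, the WHERE clause you update between runs can be generated ahead of time so the slices never overlap. The column name event_timestamp in this sketch is illustrative:

```python
from datetime import date, timedelta

# Generate non-overlapping monthly WHERE clauses for slicing a
# historical backfill into repeated runs of a full On Demand pipeline.

def monthly_slices(start: date, end: date):
    clauses = []
    cur = start
    while cur <= end:
        # First day of the next month.
        nxt = (cur.replace(day=28) + timedelta(days=4)).replace(day=1)
        hi = min(end, nxt - timedelta(days=1))
        clauses.append(
            f"WHERE event_timestamp BETWEEN '{cur}' AND '{hi}'")
        cur = nxt
    return clauses

for clause in monthly_slices(date(2024, 1, 1), date(2024, 3, 31)):
    print(clause)
# WHERE event_timestamp BETWEEN '2024-01-01' AND '2024-01-31'
# WHERE event_timestamp BETWEEN '2024-02-01' AND '2024-02-29'
# WHERE event_timestamp BETWEEN '2024-03-01' AND '2024-03-31'
```

Because each slice's upper bound is the day before the next slice's lower bound, no row is read twice across runs.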

Important notes:

  • Full pipelines re-read the entire result set every time they run and are well-suited for small tables or ad-hoc, on-demand replays. They may ingest duplicate data due to deduplication limits for very large datasets. Use SQL WHERE clauses to ensure no overlap in data between runs.
  • Incremental pipelines require an iterator column and only fetch rows whose iterator value is greater than the last successful run. The first run backfills everything from the from value you set; subsequent runs pick up new rows. Leave until blank to let the pipeline continue indefinitely.

Performance Considerations

By default, a Warehouse Sync pipeline ingests your data at a rate determined by your account’s configured limits and expected data volumes. When the Warehouse Sync API or UI shows a “success” status, your data has been ingested and accepted for processing by other mParticle systems, such as the user activity view, audiences, calculated attributes, and connected outputs. Keep in mind that while recent, non-historical data may be available quickly, some downstream features and integrations may require additional processing time before all ingested data is available end-to-end. This is especially true for large or historical data loads, which are processed differently than real-time data, and for connected outputs, which have their own processing times.

If you expect to ingest large volumes of data or have specific timing requirements, reach out to your Customer Success Manager (CSM) to review your configuration and ensure optimal pipeline performance.

Ingesting user profile data

User profile data includes information like what identities a user has (such as an email address, user name, or account number), custom attributes (such as subscription status), or device and demographic information. User data in mParticle is stored in user profiles which can be used to create segmented audiences that you can engage with using downstream marketing tools.

You can use warehouse sync to ingest user profile data from your database into mParticle, where it can be used to create new, or enrich existing, profiles that describe your users.

Ingesting events data

Event data describes the actions that your users take in your app, website, or product. This could include information about pages your users visit, videos played, products added to wishlists, and more.

Continue reading below for general information about how to access the Warehouse Sync API. For a step-by-step tutorial on how to start creating your first warehouse sync pipeline, go to the Warehouse Sync API Tutorial.

Prerequisites to Accessing the Warehouse Sync API

To authenticate when using the Warehouse Sync API, you will need a new set of API credentials.

  1. From your mParticle account, hover your cursor over the Settings gear icon in the left-hand navigation and select Platform under Settings.
  2. Select the API Credentials tab and click +Add Credential.
  3. Give your new credential a descriptive name, check the box next to Platform, and select Admin from the Permissions dropdown menu.
  4. Click Save, and copy the Client ID and Client Secret. You will use these when fetching an OAuth access token.

Authentication

After creating your new API credential by following the steps above, you can authenticate by issuing a POST request to mParticle’s SSO token endpoint.

https://sso.auth.mparticle.com/oauth/token

The JSON body of the request must contain:

  • client_id - your Client ID that you saved when creating your new API credential
  • client_secret - your Client Secret that you saved when creating your new API credential
  • audience - set to a value of "https://api.mparticle.com"
  • grant_type - set to a value of "client_credentials"

Curl Syntax

curl --request POST \
  --url https://sso.auth.mparticle.com/oauth/token \
  --header 'content-type: application/json' \
  --data '{"client_id":"...","client_secret":"...","audience":"https://api.mparticle.com","grant_type":"client_credentials"}'

Sample Raw HTTP Request

POST /oauth/token HTTP/1.1
Host: sso.auth.mparticle.com
Content-Type: application/json

{
  "client_id": "your_client_id",
  "client_secret": "your_client_secret",
  "audience": "https://api.mparticle.com",
  "grant_type": "client_credentials"
}

Using your Bearer Token

A successful POST request to the token endpoint will result in a JSON response as follows:

{
  "access_token": "YWIxMjdi883GHBBDnjsdKAJQxNjdjYUUJABbg6hdI.8V6HhxW-",
  "expires_in" : 28800,
  "token_type": "Bearer"
}

Subsequent requests to the API can now be authorized by setting the Authorization header as follows:

Authorization: Bearer YWIxMjdi883GHBBDnjsdKAJQxNjdjYUUJABbg6hdI.8V6HhxW-
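
Putting the two pieces together in code, a minimal sketch (Python standard library only, using the sample token from the response above) of extracting the token and forming the header:

```python
import json

# Parse the token response and build the headers for subsequent
# Warehouse Sync API requests. The token value is the sample from the
# response above; a real token expires after `expires_in` seconds
# (28800 = 8 hours), after which you must request a new one.

token_response = json.loads("""
{
  "access_token": "YWIxMjdi883GHBBDnjsdKAJQxNjdjYUUJABbg6hdI.8V6HhxW-",
  "expires_in": 28800,
  "token_type": "Bearer"
}
""")

headers = {
    "Authorization": f"{token_response['token_type']} "
                     f"{token_response['access_token']}",
    "Content-Type": "application/json",
}
print(headers["Authorization"])
# Bearer YWIxMjdi883GHBBDnjsdKAJQxNjdjYUUJABbg6hdI.8V6HhxW-
```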

Versioning

Once you have authenticated, the API resources can be accessed at https://api.mparticle.com/platform/v2/. Subsequent updates to the API that introduce breaking changes will be published with a new version number in the URL.

HTTP Methods

This API uses the HTTP methods GET, POST, PUT, and DELETE.

Headers

This API accepts and sometimes requires the following headers:

Header          Required   Method                   Notes
Authorization   Required   GET, POST, PUT, DELETE
Content-Type    Optional   GET, POST, PUT, DELETE

Request Bodies

All POST/PUT requests should send JSON as the Request Payload, with Content-Type set to application/json.

Limits

In addition to the standard default service limits, note the following limits specific to the Warehouse Sync API:

Limit                                      Value        Notes
Max number of Active Pipelines             5
Historical Record Limit                    24 million   For new interval based pipelines, there is a 24 million record limit while retrieving records before the schedule start time. See sync mode from and until to filter data to load.
Column limit                               100
Record count limit per hourly interval     1 million
Record count limit per daily interval      24 million
Record count limit per weekly interval     40 million
Record count limit per monthly interval    40 million
Record count limit per once request        40 million
Record count limit per on-demand request   24 million   Applicable when the trigger API is used
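
When planning a pipeline, you can sanity-check expected volumes against these limits. The dictionary below simply transcribes the record count limits from the table; it is a convenience sketch, not an official client:

```python
# Per-interval record count limits from the table above, expressed as a
# quick pre-flight check for a planned pipeline.

RECORD_LIMITS = {
    "hourly": 1_000_000,
    "daily": 24_000_000,
    "weekly": 40_000_000,
    "monthly": 40_000_000,
    "once": 40_000_000,
    "on-demand": 24_000_000,
}

def within_limit(interval: str, expected_records: int) -> bool:
    """True if the expected volume fits the interval's record limit."""
    return expected_records <= RECORD_LIMITS[interval]

print(within_limit("daily", 20_000_000))   # True
print(within_limit("hourly", 2_000_000))   # False: split across runs,
                                           # or contact your CSM
```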

    Last Updated: July 24, 2025