Warehouse Sync API Overview

The Warehouse Sync API is mParticle’s reverse ETL solution that allows you to ingest user profile data or event data into mParticle from a data warehouse.

There are four stages to configuring Warehouse Sync:

  1. Create a connection between mParticle and your database.
  2. Create a data model that specifies what data you want to ingest.
  3. Create a field transformation that maps your source data to fields in mParticle.
  4. Create and configure the pipeline between your database and mParticle using your data model and field transformations. Your pipeline settings determine when, and how frequently, data will sync.
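
The four stages above can be sketched as a chain of request payloads. This is a minimal illustration only: the field names (`sql_query`, `mappings`, `schedule`, and so on) are assumptions for readability, not the exact Warehouse Sync API contract; see the Warehouse Sync API Reference for the authoritative schemas.

```python
# Illustrative sketch of the four setup stages as request payloads.
# Field names and values are assumptions, not the real API schema.

connection = {            # Stage 1: connect mParticle to your database
    "name": "snowflake-prod",
    "type": "snowflake",
}

data_model = {            # Stage 2: the data you want to ingest
    "sql_query": "SELECT email, plan, updated_on FROM analytics.users",
}

field_transformation = {  # Stage 3: map source columns to mParticle fields
    "mappings": [
        {"source": "email", "destination": "user_identities.email"},
        {"source": "plan", "destination": "user_attributes.plan"},
    ],
}

pipeline = {              # Stage 4: tie the pieces together and schedule syncs
    "connection": connection["name"],
    "data_model": data_model,
    "field_transformation": field_transformation,
    "schedule": {"frequency": "daily"},  # when and how often data syncs
}

print(sorted(pipeline))
```

Note how the pipeline (stage 4) references the connection, data model, and field transformation built in stages 1 through 3.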

Full vs Incremental Pipelines

When you create a Warehouse Sync pipeline you must choose its sync_mode: full or incremental. This choice cannot be changed later.

full pipelines

  • Each run returns the complete result set produced by your data model’s SQL.
  • Ideal for small tables or occasional, ad-hoc backfills.
  • Combine with a WHERE clause to narrow the slice of data when re-running the pipeline.
  • Full pipelines may ingest duplicate data due to deduplication limits for large datasets. Without a timestamp to refer to, mParticle can’t track changes to data over time. A field that’s been changed back to a previous value might be mistakenly marked as a duplicate and therefore be discarded. For most use cases, incremental syncs provide better data accuracy and performance.

incremental pipelines

  • Requires an iterator column and keeps track of the greatest iterator value that has been processed.
  • The first execution ingests all rows where the iterator is after the from value you supply.
  • Subsequent runs fetch only new or updated rows.
  • Leave until blank to make the pipeline unbounded.
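
The two modes differ only in their sync_mode configuration. A hedged sketch follows: only "full", "incremental", from, and until come from this page; the remaining field names and the exact JSON shape are illustrative assumptions (see the Warehouse Sync API Reference for the real schema).

```python
# Sketch of the two sync_mode choices. "iterator_field" and the JSON shape
# are assumptions; only full/incremental and from/until come from this page.

full_mode = {
    "type": "full",  # every run returns the complete result set
}

incremental_mode = {
    "type": "incremental",
    "iterator_field": "updated_on",   # illustrative column name
    "from": "2024-01-01T00:00:00Z",   # first run backfills from this point
    "until": None,                    # leave blank for an unbounded pipeline
}

# The mode is chosen at creation time and cannot be changed later.
print(full_mode["type"], incremental_mode["until"])
```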

Iterator Columns and Timestamps

When using incremental pipelines in Warehouse Sync, you must specify an iterator column—a timestamp field (such as datetime, date, or Unix timestamp) that mParticle uses to track which rows have already been processed. This iterator column is essential for reliable incremental updates and should be distinct from your event timestamp field whenever possible.

Best practices for iterator columns:

  • Use a dedicated iterator column (such as a system timestamp indicating when a row was inserted or updated) for incremental syncs. This is preferred because the time an event occurred (event timestamp) often differs from when the row was added or updated in your warehouse (iterator/system time).
  • Use the event timestamp field for filtering in your SQL (for example, in a WHERE clause) and for mapping to the appropriate mParticle field.
  • When setting up your field transformation, you may ignore the iterator column if it’s only used for sync tracking.
  • You may use the event timestamp field as the iterator column if convenient, but this is not recommended for most use cases.
  • You can offset the iterator column by a fixed amount to accommodate late-arriving data. For example, if data typically arrives in your warehouse one day after creation, set the pipeline’s delay field to 1d to account for this upstream processing time.
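
The delay offset in the last bullet can be illustrated conceptually. This is not mParticle code; it only shows how a 1d delay shifts the iterator window a run requests, so late-arriving rows are picked up by a later run instead of being skipped.

```python
from datetime import datetime, timedelta

# Conceptual illustration of the pipeline delay field: rows whose iterator
# value is newer than (run time - delay) are left for a later run, giving
# late-arriving data time to land in the warehouse.

DELAYS = {"1h": timedelta(hours=1), "1d": timedelta(days=1)}

def iterator_upper_bound(run_time: datetime, delay: str) -> datetime:
    return run_time - DELAYS[delay]

bound = iterator_upper_bound(datetime(2024, 6, 2, 0, 0), "1d")
print(bound)  # 2024-06-01 00:00:00
```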

Filtering the Data You Ingest

Warehouse Sync provides two complementary ways to control which rows are pulled from your warehouse:

  1. SQL WHERE clauses – Add predicates such as WHERE event_timestamp >= DATE '2023-01-01' or a bounded BETWEEN statement inside the query you save with the data model. The filter executes in your warehouse on every run. This is recommended for filtering on fields other than the iterator field, such as event timestamp, event type, or user region, and is useful for validating your pipeline by limiting the dataset to a small subset of data.
  2. Iterator window (from / until) – Use the from and until fields on sync_mode when calling the API to set the minimum and maximum iterator values a pipeline will ever request. This is recommended for a seamless transition between your initial backfill and ongoing incremental syncs, and makes your pipeline runs more easily auditable via the pipeline status APIs.

A row must satisfy both the iterator window and your SQL to be ingested, giving you granular control over the time span and the business logic applied to your data.
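
That combined behavior can be modeled as a simple predicate. A conceptual sketch, not mParticle code: the SQL WHERE logic is stood in for by a Python function, and a row is ingested only when both checks pass.

```python
# A row is ingested only if its iterator value falls inside the pipeline's
# from/until window AND it satisfies the data model's SQL WHERE logic
# (modeled here as a Python predicate).

def is_ingested(iterator_value, window_from, window_until, where_predicate, row):
    in_window = iterator_value >= window_from and (
        window_until is None or iterator_value <= window_until
    )
    return in_window and where_predicate(row)

row = {"event_timestamp": "2023-03-15", "region": "US"}
print(is_ingested("2023-03-15", "2023-01-01", None,
                  lambda r: r["region"] == "US", row))  # True
```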

Practical Patterns for Historical Ingestion

There are two common strategies for ingesting historical data efficiently:

  1. Single incremental pipeline with an early from date – Set the from value to the start of the period you want to ingest, and leave until blank. The initial run will backfill the entire range, after which the pipeline will automatically switch to incremental updates on its schedule.
  2. Separate historical full and incremental pipelines – Create a full On Demand pipeline and use a WHERE clause in your data model to limit the dataset (for example, WHERE event_timestamp BETWEEN '2024-01-01' AND '2024-06-30'). Trigger the pipeline whenever you need to ingest another slice, updating the clause between runs as needed. Once you have finished ingesting the historical data, disable the full pipeline and create a new incremental pipeline to sync ongoing new data on a regular schedule, setting the from value to the last date you ingested with the full pipeline.
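
For strategy 2, you update the WHERE clause between runs. A small helper sketch (the `event_timestamp` column name is illustrative) that generates each slice's clause before you trigger the full On Demand pipeline:

```python
# Generate the WHERE clause for each historical slice. The column name is
# illustrative; substitute your own iterator or event timestamp column.

def slice_where(start: str, end: str) -> str:
    return f"WHERE event_timestamp BETWEEN '{start}' AND '{end}'"

slices = [("2024-01-01", "2024-06-30"), ("2024-07-01", "2024-12-31")]
for start, end in slices:
    print(slice_where(start, end))
```

Keeping the slice boundaries non-overlapping, as above, is what ensures no duplicate data between runs.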

Important notes:

  • Full pipelines re-read the entire result set every time they run and are well-suited for small tables or ad-hoc, on-demand replays. They may ingest duplicate data due to deduplication limits for very large datasets. Use SQL WHERE clauses to ensure no overlap in data between runs.
  • Incremental pipelines require an iterator column and only fetch rows whose iterator value is greater than the last successful run. The first run backfills everything from the from value you set; subsequent runs pick up new rows. Leave until blank to let the pipeline continue indefinitely.

Performance Considerations

By default, a Warehouse Sync pipeline ingests your data at a rate determined by your account’s configured limits and expected data volumes. When the Warehouse Sync API or UI shows a “success” status, your data has been ingested and accepted for processing by other mParticle systems, such as the user activity view, audiences, calculated attributes, and connected outputs. Keep in mind that while recent, non-historical data may be available quickly, some downstream features and integrations may require additional processing time before all ingested data is available end-to-end. This is especially true for large or historical data loads, which are processed differently than real-time data, and for connected outputs, which have their own processing times.

If you expect to ingest large volumes of data or have specific timing requirements, reach out to your Customer Success Manager (CSM) to review your configuration and ensure optimal pipeline performance.

Ingesting user profile data

User profile data includes information like what identities a user has (such as an email address, user name, or account number), custom attributes (such as subscription status), or device and demographic information. User data in mParticle is stored in user profiles which can be used to create segmented audiences that you can engage with using downstream marketing tools.

You can use warehouse sync to ingest user profile data from your database into mParticle, where it can be used to create new, or enrich existing, profiles that describe your users.

Ingesting events data

Event data describes the actions that your users take in your app, website, or product. This could include information about pages your users visit, videos played, products added to wishlists, and more.

Continue reading below for general information about how to access the Warehouse Sync API. For a step-by-step tutorial on how to start creating your first warehouse sync pipeline, go to the Warehouse Sync API Tutorial.

Prerequisites to Accessing the Warehouse Sync API

To authenticate when using the Warehouse Sync API, you will need a new set of API credentials.

  1. From your mParticle account, hover your cursor over the Settings gear icon in the left-hand nav and select Platform under Settings.
  2. Select the API Credentials tab and click + Add Credential.
  3. Give your new credential a descriptive name, check the box next to Platform, and select Admin from the Permissions dropdown menu.
  4. Click Save, and copy the Client ID and Client Secret. You will use these when fetching an OAuth access token.

Authentication

After creating your new API credential by following the steps above, you can authenticate by issuing a POST request to mParticle’s SSO token endpoint.

https://sso.auth.mparticle.com/oauth/token

The JSON body of the request must contain:

  • client_id - your Client ID that you saved when creating your new API credential
  • client_secret - your Client Secret that you saved when creating your new API credential
  • audience - set to a value of "https://api.mparticle.com"
  • grant_type - set to a value of "client_credentials"

Curl Syntax

curl --request POST \
  --url https://sso.auth.mparticle.com/oauth/token \
  --header 'content-type: application/json' \
  --data '{"client_id":"...","client_secret":"...","audience":"https://api.mparticle.com","grant_type":"client_credentials"}'

Sample Raw HTTP Request

POST /oauth/token HTTP/1.1
Host: sso.auth.mparticle.com
Content-Type: application/json

{
  "client_id": "your_client_id",
  "client_secret": "your_client_secret",
  "audience": "https://api.mparticle.com",
  "grant_type": "client_credentials"
}

Using your Bearer Token

A successful POST request to the token endpoint will result in a JSON response as follows:

{
  "access_token": "YWIxMjdi883GHBBDnjsdKAJQxNjdjYUUJABbg6hdI.8V6HhxW-",
  "expires_in" : 28800,
  "token_type": "Bearer"
}

Subsequent requests to the API can now be authorized by setting the Authorization header as follows:

Authorization: Bearer YWIxMjdi883GHBBDnjsdKAJQxNjdjYUUJABbg6hdI.8V6HhxW-
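
The same flow in Python, as a sketch: it uses only the endpoint, request body, and Authorization header shown on this page, builds the token request without sending it, and keeps error handling out of scope.

```python
import json
import urllib.request

TOKEN_URL = "https://sso.auth.mparticle.com/oauth/token"
API_BASE = "https://api.mparticle.com"

def build_token_request(client_id: str, client_secret: str) -> urllib.request.Request:
    """Build the POST request for mParticle's SSO token endpoint."""
    body = {
        "client_id": client_id,
        "client_secret": client_secret,
        "audience": API_BASE,
        "grant_type": "client_credentials",
    }
    return urllib.request.Request(
        TOKEN_URL,
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def auth_headers(access_token: str) -> dict:
    """Headers for subsequent API calls using the bearer token."""
    return {"Authorization": f"Bearer {access_token}"}

req = build_token_request("your_client_id", "your_client_secret")
print(req.get_method(), req.full_url)
# Sending it would look like:
#   with urllib.request.urlopen(req) as resp:
#       access_token = json.load(resp)["access_token"]
```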

Versioning

Once you have authenticated, the API resources can be accessed at https://api.mparticle.com/platform/v2/. Subsequent updates to the API that introduce breaking changes will be published with a new version number in the URL.

HTTP Methods

This API uses the HTTP methods GET, POST, PUT, and DELETE.

Headers

This API accepts and sometimes requires the following headers:

Header        | Required | Methods
Authorization | Required | GET, POST, PUT, DELETE
Content-Type  | Optional | GET, POST, PUT, DELETE

Request Bodies

All POST/PUT requests should send JSON as the Request Payload, with Content-Type set to application/json.

Limits

In addition to the standard default service limits, note the following limits specific to the Warehouse Sync API:

Limit                                    | Value      | Notes
Max number of Active Pipelines           | 5          |
Historical Record Limit                  | 24 million | For new interval-based pipelines, there is a 24 million record limit when retrieving records dated before the schedule start time. Use the sync mode from and until values to filter the data to load.
Column limit                             | 100        |
Record count limit per hourly interval   | 1 million  |
Record count limit per daily interval    | 24 million |
Record count limit per weekly interval   | 40 million |
Record count limit per monthly interval  | 40 million |
Record count limit per once request      | 40 million |
Record count limit per on-demand request | 24 million | Applies when the trigger API is used
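
As a quick illustration of the table above, you can sanity-check a planned load against the per-interval limits. This is an informal helper, not an official tool; the numbers come directly from the table.

```python
# Per-interval record count limits from the table above.
INTERVAL_LIMITS = {
    "hourly": 1_000_000,
    "daily": 24_000_000,
    "weekly": 40_000_000,
    "monthly": 40_000_000,
    "once": 40_000_000,
    "on-demand": 24_000_000,
}

def within_limit(interval: str, planned_records: int) -> bool:
    return planned_records <= INTERVAL_LIMITS[interval]

print(within_limit("daily", 30_000_000))  # False: exceeds the 24 million daily limit
```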

    Last Updated: August 14, 2025