mParticle provides an HTTP-based Events API that can be used to collect data from your backend systems.
The HTTP endpoint to send data to depends on which Pod your account is hosted in:
Region | Pod | URL |
---|---|---|
United States | US1 | https://s2s.mparticle.com/v2, https://s2s.us1.mparticle.com/v2 |
United States | US2 | https://s2s.us2.mparticle.com/v2 |
Europe | EU1 | https://s2s.eu1.mparticle.com/v2 |
Australia | AU1 | https://s2s.au1.mparticle.com/v2 |
See the JSON reference for the precise API schema.
You can use the Open API specification (also known as Swagger) below to generate helper SDKs (using Swagger Codegen or OpenAPI Generator) for the Events API:
/v2/events
This path accepts a JSON event batch. See our JSON documentation for additional information.
This path should not be used to upload historical data older than 30 days, as this could impact downstream processes such as audience calculation. To upload historical data older than 30 days, please use the historical endpoint.
{
  "events": [
    {
      "data": {},
      "event_type": "custom_event"
    }
  ],
  "device_info": {},
  "user_attributes": {},
  "deleted_user_attributes": [],
  "user_identities": {},
  "application_info": {},
  "schema_version": 2,
  "environment": "production",
  "context": {},
  "ip": "172.217.12.142"
}
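As a sketch of how a single batch in the format above can be POSTed to this path, the snippet below builds the request with Python's standard library. The credentials are hypothetical placeholders, and the US1 pod URL is taken from the endpoint table above; substitute your own key, secret, and pod.

```python
import base64
import json
import urllib.request

# Hypothetical credentials; substitute your own API key and secret.
API_KEY, API_SECRET = "example-api-key", "example-api-secret"
token = base64.b64encode(f"{API_KEY}:{API_SECRET}".encode("utf-8")).decode("ascii")

# A minimal batch following the schema shown above.
batch = {
    "events": [{"data": {}, "event_type": "custom_event"}],
    "user_identities": {},
    "schema_version": 2,
    "environment": "production",
}

req = urllib.request.Request(
    "https://s2s.us1.mparticle.com/v2/events",  # US1 pod URL from the table above
    data=json.dumps(batch).encode("utf-8"),
    headers={"Content-Type": "application/json", "Authorization": f"Basic {token}"},
    method="POST",
)
# urllib.request.urlopen(req) sends the request; a 202 status means it was accepted.
```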
/v2/bulkevents
This path accepts a JSON array of event batches. See our JSON documentation for additional information.
You may not send more than 100 event batches per request. If some event batches succeed and some event batches fail, you will still get an "Accepted" response.
This path should not be used to upload historical data older than 30 days, as this could impact downstream processes such as audience calculation. To upload historical data older than 30 days, please use the historical endpoint.
The example below shows the expected format: an array of JSON event batches.
[
  {
    "events": [
      {
        "data": {},
        "event_type": "custom_event"
      }
    ],
    "device_info": {},
    "user_attributes": {},
    "deleted_user_attributes": [],
    "user_identities": {},
    "application_info": {},
    "schema_version": 2,
    "environment": "production",
    "context": {},
    "ip": "172.217.12.142"
  },
  {
    "events": [
      {
        "data": {},
        "event_type": "custom_event"
      }
    ],
    "device_info": {},
    "user_attributes": {},
    "deleted_user_attributes": [],
    "user_identities": {},
    "application_info": {},
    "schema_version": 2,
    "environment": "production",
    "context": {},
    "ip": "172.217.12.142"
  }
]
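Because each request to this path is capped at 100 event batches, a larger set of batches must be split across multiple requests. A minimal sketch of that chunking, assuming the batches are already assembled:

```python
from itertools import islice

def chunk_batches(batches, limit=100):
    """Yield lists of at most `limit` event batches, the per-request
    cap for the /v2/bulkevents path."""
    it = iter(batches)
    while chunk := list(islice(it, limit)):
        yield chunk

# 250 hypothetical batches split into requests of 100, 100, and 50.
sizes = [len(c) for c in chunk_batches([{"events": []}] * 250)]
print(sizes)  # [100, 100, 50]
```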
/v2/bulkevents/historical
This path accepts the same JSON payload as /v2/bulkevents and should be used to upload historical backfill data more than 30 days old. Data forwarded to the historical endpoint is subject to special requirements and is processed differently.
A batch received by the historical endpoint will not be processed if any of the following are true:

- The batch is missing the timestamp_unixtime_ms property.
- The timestamp_unixtime_ms value is less than 72 hours old.

The historical API endpoint behaves nearly identically to the events and bulkevents endpoints, with one key difference: data is not forwarded to connected event and data warehouses.
mParticle Feature | Effect of historical data |
---|---|
Event and Data Warehouse Outputs | Not forwarded downstream. |
Audience | No change to Real-time or Standard Audiences. Data is subject to existing date-range retention limits. Real-time audiences have a 30-day look-back for most customers. |
User Activity | No change; events are visible in date order. |
Identity and Profiles | No change. |
The HTTP APIs are secured via basic authentication.

You can authenticate in 2 ways:

1. Configure your HTTP client to use basic authentication directly, supplying your API key as the username and your API secret as the password.

2. Manually set the Authorization header by encoding your key and secret together:

2.1 Concatenate your API key and secret with a colon (:) separating the two:

example-api-key:example-api-secret

2.2 Base64-encode the result using UTF-8:

ZXhhbXBsZS1hcGkta2V5OmV4YW1wbGUtYXBpLXNlY3JldA==

2.3 Prefix the encoded string with the authorization method, including a space:

Basic ZXhhbXBsZS1hcGkta2V5OmV4YW1wbGUtYXBpLXNlY3JldA==

2.4 Set the resulting string as the Authorization header in your HTTP requests:

Authorization: Basic ZXhhbXBsZS1hcGkta2V5OmV4YW1wbGUtYXBpLXNlY3JldA==
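The header-construction steps above can be sketched in a few lines of Python; the key and secret are the placeholder values from the worked example:

```python
import base64

def basic_auth_header(api_key: str, api_secret: str) -> str:
    """Build a Basic authentication header value from an API key/secret pair."""
    # Concatenate key and secret with a colon, Base64-encode the UTF-8 bytes,
    # then prefix the result with the "Basic " authorization method.
    token = base64.b64encode(f"{api_key}:{api_secret}".encode("utf-8")).decode("ascii")
    return f"Basic {token}"

# Matches the worked example above.
print(basic_auth_header("example-api-key", "example-api-secret"))
# Basic ZXhhbXBsZS1hcGkta2V5OmV4YW1wbGUtYXBpLXNlY3JldA==
```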
You must POST a JSON document to the endpoint. Reference the JSON documentation for details.
mParticle receives data across many channels, and limits are not always enforced in the same way for each channel. Default service limits affect S2S or "server-to-server" data. S2S data includes data received via the Events API, Calculated Attributes Seeding API, and from partner feeds.
For more information about default service limits related to event batches, see Default Service Limits.
You should inspect the status code of the response to determine if the POST has been accepted or if an error occurred.
Status | Code | Notes |
---|---|---|
202 | Accepted | The POST was accepted. |
400 | Bad Request | The request body was malformed JSON or had missing fields. |
401 | Unauthorized | The authentication header is missing. |
403 | Forbidden | The authentication header is present, but invalid. |
429 | Too Many Requests | You have exceeded your provisioned limit. The v2/events and v2/bulkevents endpoints may return a Retry-After response header with a value containing a non-negative decimal integer indicating the number of seconds to delay. If the header is not present, we recommend retrying your request with exponential backoff and random jitter. Learn more about API throttling in Default Service Limits. |
503 | Service Unavailable | We recommend retrying your request in an exponential backoff pattern. |
5xx | Server Error | A server-side error has occurred; please try your request again. |
In some cases, the server can provide additional information about the error in the response body.
The response body is optional and is omitted when no additional details about the error are known.
{
"errors" :
[
{
"code" : "BAD_REQUEST",
"message" : "Required event field \"event_type\" is missing or empty."
}
]
}
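The retry guidance for 429 and 503 responses above can be sketched as follows. The "full jitter" variant and the base/cap values here are assumptions for illustration, not mParticle requirements:

```python
import random

def retry_delay(attempt, retry_after=None, base=1.0, cap=60.0):
    """Compute the wait in seconds before retrying a 429 or 503 response.

    Honors a Retry-After header value when present; otherwise falls back
    to exponential backoff with random jitter, as recommended above.
    """
    if retry_after is not None:
        # Retry-After carries a non-negative decimal integer number of seconds.
        return float(int(retry_after))
    backoff = min(cap, base * (2 ** attempt))
    # "Full jitter": pick a uniformly random delay up to the backoff ceiling
    # (an assumed strategy; any jittered backoff satisfies the guidance).
    return random.uniform(0, backoff)

print(retry_delay(0, retry_after="30"))  # 30.0
```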
In order to maintain high throughput performance for large quantities of event data via the HTTP API, pay attention to how you compile individual events into batches. Each batch contains an event array which can hold multiple events, as long as they are for the same user.
If you are generating a lot of event data, sending a full batch for each individual event in realtime will negatively impact performance. Instead, send a combined event batch either at a set time interval, or after a given number of events for each user.
You can further reduce the number of HTTP requests by grouping up to 100 event batches for multiple users together and forwarding them to the /bulkevents endpoint.

When creating event batches, remember the following:

- Each batch must contain events for only one user.
- A request to /bulkevents should contain no more than 100 batches.
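The per-user grouping described above can be sketched as follows. The raw event shape, the user_id tag, and the customer_id identity key are illustrative assumptions; map them to your own data model:

```python
from collections import defaultdict

# Hypothetical raw events, each tagged with the user it belongs to.
raw_events = [
    {"user_id": "u1", "event_type": "custom_event", "data": {}},
    {"user_id": "u1", "event_type": "custom_event", "data": {}},
    {"user_id": "u2", "event_type": "custom_event", "data": {}},
]

# Group events by user: a batch's "events" array may hold multiple
# events, but only for a single user.
per_user = defaultdict(list)
for ev in raw_events:
    per_user[ev["user_id"]].append({"data": ev["data"], "event_type": ev["event_type"]})

batches = [
    {"user_identities": {"customer_id": uid}, "events": events, "environment": "production"}
    for uid, events in per_user.items()
]
print(len(batches))  # 2 batches for 3 events
```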