Fleetio offers a Bulk API for certain endpoints. The Bulk API allows you to create or update data in large batches via a single network request, saving on network overhead.
Availability
The Bulk API currently supports creating and updating records. Deleting records in bulk is not supported at this time.
The following record types support bulk operations:
- Meter Entries: create only
- Location Entries: create only
- Faults: create only
- Vehicles: create and update
Usage
Using the Bulk API is a two-step process:
- Creating a bulk job
- Retrieving the status of a bulk job
Step 1: Creating a Bulk Job
To create a bulk job, you'll need to issue a POST request to /api/v1/bulk_api_jobs. The JSON body of the request requires three fields: resource, operation, and records. Below is a curl example for bulk creating meter entries. This example creates two meter entries, but you can create up to 100 records via a single HTTP request like this.
$ curl \
--request POST \
--header "Authorization: Token YOUR_API_KEY" \
--header "Account-Token: YOUR_ACCOUNT_TOKEN" \
--header "Content-Type: application/json" \
--data '{"resource":"meter_entry","operation":"create","records":[{"vehicle_id":100, "date":"2020-01-01", "value":10000}, {"vehicle_id":200, "date":"2020-01-01", "value":25000}]}' \
"https://secure.fleetio.com/api/v1/bulk_api_jobs"
Notice how records is an array of hashes/objects, where each hash represents a single meter entry to be created. Each hash accepts the exact same attribute set that is defined in the non-bulk create endpoint (https://developer.fleetio.com/reference/create-meter-entry for meter entries). Any non-conforming attributes will be ignored.
Note that there is no restriction on the variety of records included in the array: you can include records for different parent vehicles, different dates, and so on, as long as they're all of the same record type.
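If you're building the request in code rather than with curl, the equivalent call with Python's requests library looks roughly like the sketch below. The vehicle ids, dates, and values are illustrative, as are the environment variable names used for the credentials.

import os
import requests

# Illustrative readings for several vehicles and dates; all are meter entries,
# so they can be submitted together in a single bulk job.
readings = [
    {"vehicle_id": 100, "date": "2020-01-01", "value": 10000},
    {"vehicle_id": 200, "date": "2020-01-01", "value": 25000},
    {"vehicle_id": 300, "date": "2020-01-02", "value": 4200},
]

response = requests.post(
    "https://secure.fleetio.com/api/v1/bulk_api_jobs",
    headers={
        "Authorization": f"Token {os.environ['FLEETIO_API_KEY']}",
        "Account-Token": os.environ["FLEETIO_ACCOUNT_TOKEN"],
    },
    json={"resource": "meter_entry", "operation": "create", "records": readings},
)
response.raise_for_status()
job = response.json()
print(job["id"], job["state"])  # e.g. "a0e1a07d-..." "pending"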
Once a bulk job is created it will be enqueued and processed in the background. A successful response will look as follows:
{
  "id": "a0e1a07d-2412-47d9-8164-197c1da6a160",
  "completed_at": null,
  "completed_count": 0,
  "created_at": "2020-06-05T09:13:07.119-05:00",
  "failed_count": 0,
  "failed_records": [],
  "operation": "create",
  "resource": "meter_entry",
  "started_at": null,
  "state": "pending",
  "successful_record_ids": [],
  "total_count": 100,
  "updated_at": "2020-06-05T09:13:07.119-05:00"
}
Note that the state is currently pending and started_at is null. This means that the job has been enqueued but processing has not yet started. To check on the status of a job, continue to step 2.
Step 2: Processing and Checking Job State
Since creating records in bulk can be time-consuming, bulk jobs are processed in the background. You will not be notified when a job has completed processing, but you can poll our API to check on the status using the id returned in the create response.
Once a job has completed, retrieving it will return a response like the following:
curl \
--request GET \
--header "Authorization: Token YOUR_API_KEY" \
--header "Account-Token: YOUR_ACCOUNT_TOKEN" \
"https://secure.fleetio.com/api/v1/bulk_api_jobs/a0e1a07d-2412-47d9-8164-197c1da6a160"
# Below is the response from the above request
{
  "id": "a0e1a07d-2412-47d9-8164-197c1da6a160",
  "completed_at": "2020-06-05T09:14:00.452-05:00",
  "completed_count": 97,
  "created_at": "2020-06-05T09:13:07.119-05:00",
  "failed_count": 3,
  "failed_records": [
    {
      "index": 1,
      "error_messages": {
        "vehicle": [
          "can't be blank"
        ]
      }
    },
    ...
  ],
  "operation": "create",
  "resource": "meter_entry",
  "started_at": "2020-06-05T09:13:58.075-05:00",
  "state": "complete",
  "successful_records": [
    {
      "index": 0,
      "id": 52378361
    },
    {
      "index": 2,
      "id": 52378362
    },
    ...
  ],
  "total_count": 100,
  "updated_at": "2020-06-05T09:13:07.119-05:00"
}
Notice that the state is now complete. A few other fields you'll want to pay attention to are:
- started_at: time when the job was started
- completed_at: time when the job was completed
- completed_count: number of records that were successfully created
- failed_count: the number of records that failed to create
- failed_records: list of records that failed. Includes an index (starting at 0) and an error message. The index is based on the original order of records that were received.
- successful_records: a list of the successfully created records. Each entry includes an index and the Fleetio id of the created record; these ids can be used in future API requests, such as to retrieve or delete a record.
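Fleetio won't push a notification when a job finishes, so in practice you poll until the job reaches a finished state and then read successful_records and failed_records. Below is a rough Python sketch of that loop; wait_for_bulk_job is just an illustrative helper name, the credential environment variables are assumptions, and the polling interval is arbitrary.

import os
import time
import requests

HEADERS = {
    "Authorization": f"Token {os.environ['FLEETIO_API_KEY']}",
    "Account-Token": os.environ["FLEETIO_ACCOUNT_TOKEN"],
}

def wait_for_bulk_job(job_id, interval_seconds=5):
    # Poll the bulk job until it has finished processing, then return it.
    # The responses above show "pending" and "complete"; checking completed_at
    # as well keeps the loop defensive about any other states.
    while True:
        resp = requests.get(
            f"https://secure.fleetio.com/api/v1/bulk_api_jobs/{job_id}",
            headers=HEADERS,
        )
        resp.raise_for_status()
        job = resp.json()
        if job["state"] == "complete" or job["completed_at"] is not None:
            return job
        time.sleep(interval_seconds)

job = wait_for_bulk_job("a0e1a07d-2412-47d9-8164-197c1da6a160")
created_ids = [record["id"] for record in job["successful_records"]]
for failure in job["failed_records"]:
    print(f"record {failure['index']} failed: {failure['error_messages']}")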
Updating Records
Updating records is slightly different from creating them. The main difference is that you'll need to provide two extra fields with each object: identifier_field_name and identifier_field_value. identifier_field_name tells the API which attribute to use for lookup. For vehicles, identifier_field_name can be one of id, vin, or name. identifier_field_value is the lookup value. A single bulk API job can mix identifier_field_name values. Let's zoom in on the JSON payload of an HTTP request and inspect it in detail to see this illustrated.
// This is a JSON payload extracted from an HTTP request.
// It assumes that these three vehicles already exist in Fleetio.
{
  "resource": "vehicle",
  "operation": "update",
  "records": [
    {
      // update vehicle based on vin matching "12345678901234567"
      "identifier_field_name": "vin",
      "identifier_field_value": "12345678901234567",
      // The following fields are applied to the update
      "name": "A new vehicle name",
      "color": "Blue",
      // ...
    },
    {
      // update vehicle based on name matching "Current vehicle name"
      "identifier_field_name": "name",
      "identifier_field_value": "Current vehicle name",
      // The following fields are applied to the update
      "name": "Another new vehicle name",
      "color": "Black",
      // ...
    },
    {
      // update vehicle based on id matching 123
      "identifier_field_name": "id",
      "identifier_field_value": 123,
      // The following fields are applied to the update
      "name": "A third new vehicle name",
      "color": "Red",
      // ...
    }
  ]
}
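Aside from the extra identifier fields, submitting an update job is the same as a create: POST the payload to /api/v1/bulk_api_jobs and then poll the returned job id as in step 2. Here is a rough Python sketch, with the same assumed credential environment variables as the earlier examples and only the first record shown for brevity.

import os
import requests

payload = {
    "resource": "vehicle",
    "operation": "update",
    "records": [
        {
            # look the vehicle up by VIN, then apply the remaining fields
            "identifier_field_name": "vin",
            "identifier_field_value": "12345678901234567",
            "name": "A new vehicle name",
            "color": "Blue",
        },
        # ... up to 100 records, each with its own identifier fields
    ],
}

response = requests.post(
    "https://secure.fleetio.com/api/v1/bulk_api_jobs",
    headers={
        "Authorization": f"Token {os.environ['FLEETIO_API_KEY']}",
        "Account-Token": os.environ["FLEETIO_ACCOUNT_TOKEN"],
    },
    json=payload,
)
response.raise_for_status()
print(response.json()["id"])  # poll this job id as described in step 2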
Limitations
- The Bulk API is currently limited to 100 records per bulk job (see the batching sketch after this list for splitting larger imports)
- While you're able to create as many bulk API jobs as you'd like, we'll only process 10 jobs concurrently. Additional jobs will be enqueued and will start processing as running jobs complete.
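The 100-record limit means larger imports need to be split into multiple jobs on your side. A minimal Python sketch of that batching, reusing the create example from step 1 and the same assumed credentials, might look like this; submit_in_batches is just an illustrative helper name.

import os
import requests

HEADERS = {
    "Authorization": f"Token {os.environ['FLEETIO_API_KEY']}",
    "Account-Token": os.environ["FLEETIO_ACCOUNT_TOKEN"],
}
BATCH_SIZE = 100  # current per-job limit

def submit_in_batches(resource, operation, records):
    # Split the records into 100-record chunks and create one bulk job per chunk.
    # Fleetio enqueues the jobs and processes up to 10 of them concurrently;
    # poll each returned job id as described in step 2.
    job_ids = []
    for start in range(0, len(records), BATCH_SIZE):
        batch = records[start:start + BATCH_SIZE]
        resp = requests.post(
            "https://secure.fleetio.com/api/v1/bulk_api_jobs",
            headers=HEADERS,
            json={"resource": resource, "operation": operation, "records": batch},
        )
        resp.raise_for_status()
        job_ids.append(resp.json()["id"])
    return job_ids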