Use the bulk record sets feature in the Quickbase channel to transfer large numbers of records more efficiently. When you batch records, the pipeline makes fewer, larger calls instead of one call per record.
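Pipelines manages the batching for you, but the effect at the API level is a single upsert call that carries many rows. As a rough Python sketch against the Quickbase JSON API (the realm, token, table ID, and field IDs below are placeholders, not real values):

```python
import requests

# Placeholders throughout: realm, user token, table ID, and field IDs
# are illustrative only.
HEADERS = {
    "QB-Realm-Hostname": "yourrealm.quickbase.com",
    "Authorization": "QB-USER-TOKEN your_token",
}

# One request body carries every row; without batching this would be
# one API call per record.
payload = {
    "to": "bqx7xre9z",                 # destination table ID
    "data": [
        {"6": {"value": "Alice"}, "7": {"value": 40}},
        {"6": {"value": "Bob"},   "7": {"value": 32}},
        # ...many more rows in the same call
    ],
    "mergeFieldId": 6,                 # match on field 6 to update instead of duplicate
}

response = requests.post("https://api.quickbase.com/v1/records",
                         headers=HEADERS, json=payload)
response.raise_for_status()
```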
Note
Step runs are counted regardless of success or failure. That is, one step run is billed per record that the step processes, even if the overall operation fails. Quickbase currently cannot validate data and records before performing step runs, so if you're processing large sets of data, we recommend testing your pipeline with a smaller set first. Because Bulk Upsert is included in the Quickbase channel, however, it doesn't consume step runs.
Build a bulk record operation
Use the following steps to create a bulk record operation.
Prepare Bulk Record Upsert
Define the schema you want to import into. This step identifies the destination table for your upsert:
Table—Select the table you want to use as the destination.
Fields—Define the fields you want to import into.
Merge field—If you are updating records, this field is required.
Search Records
Use a query step to define the source data for the Add a Bulk Upsert Row step.
Loop
The loop performs an action for each record found by the Search Records step.
Add a Bulk Upsert Row
Select the target bulk record set by referencing the table you selected in the Prepare Bulk Record Upsert step. This step is where you map fields between your source data (the query step) and your destination (the Prepare Bulk Record Upsert step).
Note: This step does not make an API call. Instead, records are temporarily stored for the next step.
Commit Upsert
Use this step to take the temporarily stored records from Add a Bulk Upsert Row and upsert them into the table identified in the Prepare Bulk Record Upsert step (see the sketch after the note below).
Note
Prepare Bulk Record Upsert and Commit Upsert are intended to run only once, outside the loop where you add records. Commit Upsert pushes all records added during the pipeline run into the Quickbase table.
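If you reproduced the same Search Records, Loop, Add a Bulk Upsert Row, and Commit Upsert pattern directly against the Quickbase JSON API, it might look like the sketch below; Pipelines performs the equivalent work for you. All table IDs, field IDs, and the query filter are placeholders.

```python
import requests

HEADERS = {
    "QB-Realm-Hostname": "yourrealm.quickbase.com",   # placeholder realm
    "Authorization": "QB-USER-TOKEN your_token",      # placeholder token
}

# Search Records: query the source table for the rows to transfer.
query = {
    "from": "bsource123",        # placeholder source table ID
    "select": [6, 7],            # source field IDs to pull
    "where": "{8.EX.'Open'}",    # placeholder filter
}
found = requests.post("https://api.quickbase.com/v1/records/query",
                      headers=HEADERS, json=query)
found.raise_for_status()

# Loop + Add a Bulk Upsert Row: accumulate rows in memory.
# No API call happens here, mirroring the note above.
rows = []
for record in found.json()["data"]:
    rows.append({
        "9":  {"value": record["6"]["value"]},   # map source field 6 -> destination field 9
        "10": {"value": record["7"]["value"]},   # map source field 7 -> destination field 10
    })

# Commit Upsert: one call pushes every accumulated row into the
# destination table chosen in the Prepare step.
commit = {"to": "bdest456", "data": rows, "mergeFieldId": 9}
requests.post("https://api.quickbase.com/v1/records",
              headers=HEADERS, json=commit).raise_for_status()
```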
Use case examples
The following are use-case examples of using bulk record sets.
Importing from a CSV
If your data already exists as a CSV file available at a URL, or comes from a previous step that provides a file transfer URL, you can import records into your Quickbase table in a single step (sketched at the API level after the field list below).
To Import with CSV:
Table—Select the table you want to use.
Merge field—Use Merge field if you are updating records. If you don't want to update an existing record, you may leave the field blank or use a value that doesn't exist in the table.
CSV URL—Enter the URL that contains your CSV-formatted data. This can be from an unauthenticated URL, or a file transfer URL from a previous step that handles files (such as Box, Dropbox).
Row Separator—Enter the character used to separate your data into rows.
First Row is List of Field Names—Enter Yes if your data contains a header row.
Header Row—This defines the structure of the CSV you are providing. If you have a header row in your CSV data, you can copy it here.
Field to map to column—These options display only after you define your Header Row. Use them to select the Quickbase field that each column maps to.
To skip columns—Leave the option blank (don’t map to a Quickbase field)
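Pipelines fetches and parses the CSV for you; the sketch below does the equivalent work by hand, assuming a CSV whose header row is Name,Hours and a placeholder column-to-field mapping.

```python
import csv
import io
import requests

HEADERS = {
    "QB-Realm-Hostname": "yourrealm.quickbase.com",   # placeholder realm
    "Authorization": "QB-USER-TOKEN your_token",      # placeholder token
}

# Fetch the CSV from an unauthenticated URL (placeholder).
csv_text = requests.get("https://example.com/export.csv").text

# The header row defines the structure; map each column name to a
# Quickbase field ID (this mapping is a placeholder). Columns left
# out of the mapping are skipped.
column_to_field = {"Name": "6", "Hours": "7"}

rows = []
for row in csv.DictReader(io.StringIO(csv_text)):
    rows.append({
        field_id: {"value": row[column]}
        for column, field_id in column_to_field.items()
        if column in row
    })

# One upsert call; mergeFieldId updates existing records that match.
payload = {"to": "bqx7xre9z", "data": rows, "mergeFieldId": 6}
requests.post("https://api.quickbase.com/v1/records",
              headers=HEADERS, json=payload).raise_for_status()
```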
Child-table records
In this use case, you want to:
When a trigger fires, find a set of child-table records connected to a trigger-field value.
From those child-table records, add new records to a child table of the trigger record.
To do so (see the sketch after these steps):
On a trigger, set the destination table and fields to be populated.
Search for the records meeting the criteria.
For each record found, create a record in the destination table and set the field values for the new record.
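As a hedged API-level sketch of those steps (the trigger value, table IDs, and field IDs are placeholders):

```python
import requests

HEADERS = {
    "QB-Realm-Hostname": "yourrealm.quickbase.com",   # placeholder realm
    "Authorization": "QB-USER-TOKEN your_token",      # placeholder token
}

trigger_value = "PRJ-1042"   # placeholder: value from the trigger record's field

# Find child records linked to the trigger-field value (field 11 is a
# placeholder for the related-parent field on the source child table).
query = {
    "from": "bchild111",
    "select": [6, 7],
    "where": f"{{11.EX.'{trigger_value}'}}",
}
found = requests.post("https://api.quickbase.com/v1/records/query",
                      headers=HEADERS, json=query).json()

# Build one new row per record found, setting the destination fields.
rows = [
    {"9": {"value": r["6"]["value"]}, "10": {"value": r["7"]["value"]}}
    for r in found["data"]
]

# No mergeFieldId here: every row is created as a new record in the
# destination child table.
requests.post("https://api.quickbase.com/v1/records", headers=HEADERS,
              json={"to": "bchild222", "data": rows}).raise_for_status()
```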
Employee utilization
In this use case, we are determining weekly employee utilization. Employees track their time in two ways: customer facing (actions) and internal (tasks). These are tracked in slightly different ways, and the data we use to create our weekly attestation comes from two different tables.
We use bulk upsert to merge the two tables into one (Utilization), so we can report from a single source. We schedule a pipeline to run every hour: it builds a bulk upsert from our Actions table, imports that data, and merges/updates any existing records. The same pipeline then builds an upsert from the Tasks table, as sketched below.
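At the API level, each hourly run boils down to two consecutive upserts into the same table with a shared merge field. A minimal sketch, assuming placeholder table IDs, field IDs, and an employee-plus-week merge key:

```python
import requests

HEADERS = {
    "QB-Realm-Hostname": "yourrealm.quickbase.com",   # placeholder realm
    "Authorization": "QB-USER-TOKEN your_token",      # placeholder token
}

UTILIZATION_TABLE = "butil999"   # placeholder table and field IDs throughout
MERGE_FIELD = 6                  # e.g. an employee+week key


def upsert_from(source_table: str, field_map: dict[int, int]) -> None:
    """Query one source table and upsert its rows into Utilization."""
    found = requests.post(
        "https://api.quickbase.com/v1/records/query", headers=HEADERS,
        json={"from": source_table, "select": list(field_map)},
    ).json()
    rows = [
        {str(dest): {"value": r[str(src)]["value"]}
         for src, dest in field_map.items()}
        for r in found["data"]
    ]
    requests.post(
        "https://api.quickbase.com/v1/records", headers=HEADERS,
        json={"to": UTILIZATION_TABLE, "data": rows,
              "mergeFieldId": MERGE_FIELD},
    ).raise_for_status()


# Run hourly (e.g. from a scheduler): Actions first, then Tasks.
# The merge field updates rows that already exist in Utilization.
upsert_from("bactions11", {6: 6, 7: 7})
upsert_from("btasks2222", {6: 6, 9: 7})
```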