Import to Quickbase


Beta feature

This is a beta feature, which means we're still developing it. Some functionality may change.

The Import to Quickbase step provides a simple way to transform and load bulk data in one action. It takes bulk data, converts each item into a Quickbase record, and imports it into a destination table.

You can input a set of items from any preceding step that outputs bulk data, like Bulk Trigger, a search step (such as Procore, NetSuite, or HTTP Channel Fetch Items), or another Bulk Action step. These steps are tagged with an array icon.

The step supports upsert (update or insert) behavior: it updates an existing record or inserts a new one, depending on whether a matching record already exists. Matching is determined by either:

  • The destination table’s default key field

  • A specified merge field

If you provide a merge field, matching uses that field; if you do not, matching uses the table's default key field. In either case, an item that matches an existing record updates that record, and an item with no match is inserted as a new record.

Configure the Import to Quickbase step

  1. Select a target Resource. This could be any preceding step that outputs bulk data.

  2. Select a destination App, Table, and Fields.

  3. Define the field mappings to transform the source data. These mappings are applied to each item imported from the Resource field.

Importing files into Quickbase with fields for activity description, file name, and path.

Example of configuring this step using Jinja:

  • Resource: Bulk Trigger - 'a'

  • App: my-example-app

  • Table: my-destination

  • Fields: name, previous_name

  • Field 'name': {{ a.name }}

  • Field 'previous_name': {{ metadata.a.record.previous.name }}

Both a and metadata.a refer to each item in the source array provided in the Resource field.

The Import to Quickbase step acts like a loop that processes each element in the source array.
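
Because the mappings run once per item, it can help to guard against items that are missing a value. The following is a minimal sketch building on the example above; the fallback text is illustrative, and trim and default are standard Jinja filters:

  • Field 'name': {{ a.name | trim }}

  • Field 'previous_name': {{ metadata.a.record.previous.name | default('(none)') }}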

Handle errors

The step attempts to import all items in the Resource field. Items that fail to import are skipped. The total number of skipped items is available in the Quick Reference widget or through Jinja:

  • Records skipped—{{ metadata.b.records_skipped }}
    Contains the number of records that failed to import.

  • Batch size—{{ metadata.b.batch.size }}
    Contains the total number of items in the original source.

You can use these values in If conditions or mappings in later steps to check how many records were imported successfully and how many failed. Learn more about using metadata.
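
For example, assuming the Import to Quickbase step is labeled b as above, expressions along these lines (a sketch, not a required configuration) could be used in an If condition or a mapping in a later step:

  • Records imported successfully: {{ metadata.b.batch.size - metadata.b.records_skipped }}

  • Condition that is true only when at least one record failed: {{ metadata.b.records_skipped > 0 }}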

Types of errors

The step outputs an array of error details when items fail to import. Two types of errors may appear:

  • Mapping error—An error reported while transforming the source item's fields into a Quickbase record. These are usually the result of mapping issues or Jinja expression validation errors; for example, text mapped to a numeric field or an invalid Jinja expression (see the sketch after this list).
    These errors output:

    • Error message

    • Information about the source item that was about to be transformed, but failed

  • Import error—An error that occurs while importing the mapped record into Quickbase. These are usually the result of table constraint validation; for example, attempting to import a record with a duplicate value in a unique field.
    These errors output:

    • Error message

    • Information about the source item that was transformed into a Quickbase record, but failed to import
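
Mapping errors caused by type mismatches can often be avoided by coercing values before they reach the destination field. The sketch below uses illustrative source field names (quantity and amount are not part of the example pipeline); int and float are standard Jinja filters that fall back to 0 when conversion fails:

  • Field 'quantity': {{ a.quantity | int }}

  • Field 'amount': {{ a.amount | default(0) | float }}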

Using this information, you can process the errors one by one or handle them as a group. By default, Pipelines adds a Loop for per-item handling, but you do not need to use it if a different approach fits your workflow.

Example: Quarantine failed records

When you import large datasets, it can be difficult to identify which records failed. One way to track failures is to send them to a separate quarantine table for review. To do this, add another Import to Quickbase step, and use the output from the first import as the source for the second import. Then, map the error details into a table with fewer constraints.

For example, an import may fail when a destination table requires:

  • Unique fields

  • Required fields

In this example, the mail field is both unique and required. The import may fail if the upstream search step returns no data for this field or returns a duplicate value. A quarantine table helps you keep track of these cases.

Create a quarantine table by adding a second Import to Quickbase step. If a record fails to import, you can capture the pipeline name, the pipeline unique run id, the reason for failure, and the original source item id:

Pipeline showing data import process with fields and resources specified.
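
The exact names of the error output fields appear in the Quick Reference widget, so the mapping below is only a sketch with placeholder names. It assumes the first import is step b, that its error output is selected as the Resource of the quarantine step, and that each error item is available as c:

  • Field 'Failure Reason': {{ c.error_message }}

  • Field 'Source Item ID': {{ c.record.id }}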

In this example, we do not specify a Merge Field so that the default key field (Record ID#) of the quarantine table is used. As a result, each imported record is appended as a new record, with no updates:

Pipeline showing actions for importing data into a quarantine destination.

Once we run this pipeline, the activity log shows that 3 records were found in the source table and all of them failed to import due to a constraint violation:

Import process summary showing records processed and errors encountered during data import.

The log provides a sample of the Output list of errors, which the follow-up step then sends to the quarantine table.

Example: Handle duplicate records

In this example, an upstream system may not enforce uniqueness on a field that must be unique in a Quickbase table. This may result in an error similar to: Adding non-unique value for field with ID "7". Alternatively, the transformation logic for these fields may introduce duplicates in some cases.

In these cases, we want to handle errors differently for each unique field.

This example uses the following destination table structure:

  • A unique Form ID

  • A unique Form Name

  • Record ID# as the key field

    This means that each imported record is inserted as a new record. If we re-run this pipeline, it will produce duplicate values for the Form ID or Form Name fields.

Table displaying various fields including Form ID, Date Created, and Record Owner.

The pipeline sources data from an upstream system (FastField in this example) and attempts import:

Search Forms and Import to Quickbase action details with fields and resources listed.

Let’s say we want to handle conflicts like this:

  • If Form ID conflicts, append a unique value such as the pipeline run ID

  • If Form Name conflicts, append a timestamp

  • If only Form Name conflicts, notify the Forms Authors group to review the name before importing it

To create this workflow, we iterate through each error record and check whether Form ID is a duplicate. To do so, add a Prepare Bulk Records Upsert step before the loop to allocate a new list that records are dynamically appended to. At the end, we retry sending these records, with corrected data, to the same destination table in bulk.
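
As a rough sketch, the corrected values appended inside the loop could look like the mappings below. The step letter c and the field names are placeholders for the error item, time.now is assumed to be the Pipelines Jinja helper for the current time, and a timestamp is shown as the unique suffix in both mappings (the pipeline run ID works just as well for Form ID):

  • Field 'Form ID': {{ c.form_id }}-{{ time.now }}

  • Field 'Form Name': {{ c.form_name }} - {{ time.now }}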

Workflow diagram showing bulk record upsert and error handling in Quickbase.

If the condition is met, the record is appended to the dynamic list with unique field values. Otherwise, each Form entry is sent as a notification to the Teams channel:

Workflow diagram showing actions for adding and sending messages based on conditions.

In the activity log, we can see that all possible outcomes are checked. We received 3 forms that failed to import from the upstream step: 2 of the failed forms had conflicts only on the Form Name, and 1 had a conflict on the Form ID.

Form Name conflicts were sent to the Teams channel:

Output details of a Microsoft Teams message and error handling in a pipeline.

And the one with the adjusted Form ID was imported:

Pipeline output showing bulk upsert action with processed records and field IDs.