A step is an action your pipeline takes to work with a connected system, such as Quickbase or an HTTP endpoint. Each step can require resource data, configuration options, and mappings so the pipeline knows what information to send and what to expect in return.
Step components
A step includes the following elements:
Name—A concise label you assign to the step. If you don’t specify one, the step uses a default name based on its function.
Note—A description of the step’s purpose and its role in the overall pipeline.
Index—A unique string assigned to each step based on its position in the pipeline. The index appears in the Activity log and in Jinja transformations.
Connection—The connection to another service. Some steps require credentials, which are managed as part of a channel account and referenced by steps. For example, see the HTTP channel connection section.
Resource—A reference to the output from a previous step that provides input data to this step.
Configuration options—Additional information that defines how the step operates. For example, configuration options in the HTTP channel specify request details such as endpoint, headers, and payload.
Field mapping—When a step sends data to a remote service, its configuration allows you to map fields to the target system. You can use static values or Jinja templates to transform values from previous steps (see the example after this list).
Result—The data returned by the step when it runs. This can be a single item or a list of items (such as from a Query or Bulk trigger step). Lists are shown with an array icon.
Metadata—Additional information about the step’s execution, separate from the step result. Learn more about viewing a step’s metadata in a pipeline.
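For example, a field mapping value can be a Jinja template that references the output of an earlier step by its index. The snippet below is a minimal sketch, assuming an earlier step with index a whose record includes first_name, last_name, and company_name fields; the index and field names are placeholders for illustration, not part of any specific channel.

    {# Combine two fields from the step indexed "a" into one mapped value #}
    {{ a.first_name }} {{ a.last_name }}

    {# Fall back to a static value when the source field is missing or empty #}
    {{ a.company_name | default("Unknown", true) }}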
Step types
Steps can perform different functions within a pipeline. The main types include:
Trigger—Starts the pipeline when a change or event occurs in an external system. A pipeline can include only one trigger step, and it must appear first.
Action—Performs a change or retrieves information from the connected system.
Query—Searches a system to retrieve a list of data items. Unlike actions, queries don’t modify data, and they offer additional controls such as filters and limits.
Step schema
Each step’s configuration and field mappings are based on the current structure of the connected service. This structure is known as the step schema—a snapshot of the connected service’s configuration and capabilities.
If a remote service changes, the step schema may become outdated. Compatible changes, such as adding optional fields, don’t affect existing pipelines. However, incompatible changes may require updates to the pipeline configuration.
To keep a pipeline up to date, you can refresh schemas or sync data, depending on what has changed.

Refresh schemas updates the step schema to reflect structural changes in the connected service, such as added, removed, or renamed fields. You can refresh the schema on an individual step or on the entire pipeline. To refresh the entire pipeline, select Refresh schemas.
To refresh a single step, select the refresh icon on that step.

You can do this whether the pipeline is ON or OFF.
Sync checks a remote system for new or updated data in pipelines that use a poller-based trigger, instead of waiting for its next automatic check. Poller triggers periodically check for data changes and trigger the pipeline when changes are found—Sync lets you run that check immediately.
The Sync option is available only for pipelines that use poller-based triggers. Pipelines that use event-based (webhook) triggers receive data automatically and do not display a Sync option.
Step results
To help you refine your pipeline configuration, you can view an overview of each step’s results. The results view presents the output of each step in a clear, concise layout, and you can show or hide it as needed.

Tips for using steps
To process bulk data, use a loop, an Import to Quickbase step, or Jinja.
Using a loop to iterate over a list of items produced by a step is handled differently depending on the size of the list:
If the list has fewer than 500 items, the loop runs sequentially
Otherwise, the loop runs in parallel
Avoid nesting queries within loops. This pattern causes the pipeline to run the inner query for every item in the outer loop, multiplying the number of operations and slowing performance.
If you need to combine data from two sources, it’s more efficient to load both datasets into a system that supports JOIN operations, such as a database, and run the query there instead.
You can check for a blank list by checking the list size with Jinja and using an IF pipeline construct. You can also add a condition after a list step to see whether the list was blank (see the sketch at the end of this section).
A query step uses pagination, and an audit record is produced for each page of data. The final audit of the query result shows the total number of items returned.
When you filter the pipeline activity page, you can show just the query step to get an idea of how much data was processed.
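As a sketch of the blank-list tip above: the list size can be computed with Jinja and used as the condition of an IF pipeline construct. This assumes a query step with index b whose list output is referenced as b.records; both names are placeholders for illustration.

    {# Number of items returned by the query step (placeholder names) #}
    {{ b.records | length }}

    {# Inline branch on whether the list is blank #}
    {% if (b.records | length) == 0 %}The query returned no records.{% else %}The query returned {{ b.records | length }} records.{% endif %}

If the condition shows the list is blank, the pipeline can skip the loop or take an alternative branch.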