Flows

Flows model data processing pipelines and are the core building block of Synatic

Written by Praise Magidi

Flows provide the core building blocks of Synatic. They model data processing pipelines, and are at the centre of building solutions with Synatic.

Flow Boilerplate

When you create a new Flow, it starts out with the Flow Boilerplate on the canvas. These are the fixed, essential parts of a flow, much as a main() function is to a traditional program:

Triggers

Triggers determine how the flow is started or triggered. You can choose from an HTTP request, a Cron schedule, incoming email, and various 3rd party platform events (e.g. Salesforce, Typeform, or WooCommerce events). Triggers can include data for the flow pipeline to process.
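
As an illustration, a Cron schedule trigger uses standard cron syntax to define when the flow runs. The snippet below is a generic cron-syntax example in TypeScript, not Synatic-specific configuration.

```typescript
// Standard five-field cron expression: minute, hour, day of month, month, day of week.
// This one fires at 06:00 on weekdays (Monday to Friday); treat it purely as an
// illustration of cron syntax, not of Synatic's scheduling screen.
const weekdayMorningSchedule = "0 6 * * 1-5";

console.log(`Example schedule: ${weekdayMorningSchedule}`);
```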

Parameters

Parameters are input values for a flow (like function arguments in regular programming). They are defined here, and the passed-in values can be used throughout the flow. Parameters are one source of data for the flow.
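
As a rough analogy only (not Synatic syntax), flow parameters behave like the arguments of a function: the caller supplies values when the flow is triggered, and the flow references them wherever they are needed. The parameter names below are hypothetical.

```typescript
// Hypothetical illustration: flow parameters treated as ordinary function arguments.
// In Synatic they are defined on the Parameters block, not written as code.
function runFlow(region: string, batchSize: number): void {
  console.log(`Processing up to ${batchSize} records for region ${region}`);
}

// The caller supplies the values when the flow is triggered.
runFlow("EU", 500);
```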

Return Type

By default, a flow does not return any data. Click on the Return block to specify that the flow should return data, and the format in which it is returned. This is important when using branches (e.g. a Parallel control step that creates multiple paths, or sub-flows) in your flow, or when using the flow as an HTTP endpoint.
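
Continuing the function analogy (again, not Synatic syntax), changing the Return block is like changing a function's signature from returning nothing to returning a typed value, which matters when a caller such as an HTTP client expects a response body. The field names below are hypothetical.

```typescript
// By default a flow behaves like a void function: it does its work but hands
// nothing back to the caller.
function flowWithoutReturn(): void {
  // ...processing happens here
}

// A flow configured to return data is like a function with a declared return type,
// e.g. a JSON payload served from an HTTP endpoint. Field names are hypothetical.
function flowWithReturn(): { status: string; processed: number } {
  return { status: "ok", processed: 42 };
}

flowWithoutReturn();
console.log(flowWithReturn());
```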

Logger

The Logger records events that occur throughout the pipeline, for example a record failing with an error (Record Event) or the whole run failing (Run Event). Event handlers can be added here to act on these events, such as sending an email when a run succeeds or fails, or kicking off another flow based on the status of this flow.
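
Conceptually, these event handlers work like callbacks attached to run outcomes. The sketch below is an analogy only, using the run statuses listed later in this article (Completed, Failed, Completed with Errors) with hypothetical handler actions; in Synatic the handlers are configured on the Logger block rather than written as code.

```typescript
// Analogy only: handlers keyed by run status. The actions are hypothetical examples.
type RunStatus = "Completed" | "Failed" | "CompletedWithErrors";

const runEventHandlers: Record<RunStatus, () => void> = {
  Completed: () => console.log("Run completed, nothing further to do"),
  Failed: () => console.log("Run failed: e.g. send an alert email"),
  CompletedWithErrors: () => console.log("Some records failed: e.g. kick off a retry flow"),
};

// When a run finishes, the handler matching its status fires.
runEventHandlers["Failed"]();
```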

Flow Steps

With the fixed parts of the flow covered, we can look at the building blocks used to create a useful processing pipeline. These building blocks are called steps. Flow steps can be added by clicking on the + icon, as highlighted in the image below.

Adding a new step has become a lot more efficient in the new system, since it only shows the steps that are applicable at that point as you build your flow.

Steps are grouped according to their purpose in the flow; a sketch of how the groups typically fit together follows the list below.

  • Control Steps - These steps are equivalent to branches in traditional programming. They include conditional blocks, for-each loops, parallel and series processing, and more.

  • Trigger Steps - These steps determine how a flow is started or triggered. This can be an HTTP request, a cron schedule, incoming email, or various 3rd party events (e.g. Salesforce platform events, WooCommerce events, and more).

  • Source Steps - Source steps are a data source for a flow. Sources can be files (e.g. from SFTP, Dropbox, Google Drive, etc.), databases (e.g. SQL, Postgres, Mongo, etc.), web services (HTTP, SOAP, etc.), and other applications (e.g. Salesforce, Odoo, Sage, and many more online services).

  • Reader Steps - Readers convert data from files or other sources into a JSON object that Synatic can understand. The Reader creates individual Records that are processed further along the pipeline. There are readers for CSV, Excel, XML, JSON, and many other standard formats.

  • Mapper Steps - Mapper steps perform operations such as mappings, transforms, filters, sorting, grouping, and other processing on data in the flow pipeline. Multiple mappers can be used in a flow to shape the data to your requirements.

  • Combiner Steps - These steps combine multiple data records into a single data record, or split a single data record into multiple data records. This is useful, for example, when combining data to create a single email or document.

  • Writer Steps - A writer step is the opposite of a reader step. These steps convert the JSON data records Synatic works with into other standard formats for writing to a data destination. Writers are available for CSV, Excel, PDF, Zip, and many other formats.

  • Destination Steps - A Destination step allows processing results to be written to a data store. Destination steps can be used to write to databases (SQL, Postgres, Mongo, etc.), file platforms (e.g. SFTP, Dropbox, Google Drive, etc.), HTTP or SOAP services, SaaS systems (e.g. Salesforce, Odoo, etc.), and even to other flows.

  • RunEvent Steps - These steps allow attachment to a RunEvent (Completed, Failed, Completed with Errors) and perform an action. This can be sending an email when a problem occurs, kicking off other flows, or writing the event data to Synatic storage (Buffers).

  • RecordEvent Steps - These steps allow attachment to RecordEvents (Completed, Error, Skipped, Header, Footer) which occur as each data record is processed. Chosen events can be logged to Synatic's built-in log manager, or the results can be written to a Synatic data store (Buffers).
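
To illustrate how these groups typically fit together, the sketch below lays out a common pipeline in TypeScript. It is a conceptual outline only: steps are configured on the Synatic canvas rather than written as code, and the systems and file names mentioned are illustrative assumptions, not required choices.

```typescript
// Conceptual outline of a common flow; each entry pairs a step group with an
// illustrative use. Nothing here is actual Synatic configuration.
const typicalFlow: Array<[group: string, example: string]> = [
  ["Trigger", "a cron schedule starts the flow each morning"],
  ["Source", "fetch orders.csv from an SFTP server"],
  ["Reader", "parse the CSV into individual JSON records"],
  ["Mapper", "rename fields, filter out cancelled orders, sort by date"],
  ["Combiner", "group the records into a single summary document"],
  ["Writer", "render the summary as an Excel file"],
  ["Destination", "upload the file to Google Drive"],
  ["RunEvent", "on Failed, send a notification email"],
];

typicalFlow.forEach(([group, example]) => console.log(`${group}: ${example}`));
```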

Processing Object

As a Record is processed through the Flow it has a specific object model that can be used in each Step as configuration and specifically in calculations. Read more about this in the Processing Object article.
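
As a loose illustration only, the processing object can be pictured as a JSON structure carrying the record being processed alongside flow-level values such as parameters. The field names below are hypothetical; refer to the Processing Object article for the actual model.

```typescript
// Hypothetical shape for intuition only; the real object model is documented
// in the Processing Object article.
interface ExampleProcessingObject {
  data: Record<string, unknown>;       // the current record being processed
  parameters: Record<string, unknown>; // values passed into the flow
}

const example: ExampleProcessingObject = {
  data: { orderId: "A-1001", total: 250 },
  parameters: { region: "EU" },
};

console.log(example.data["orderId"]);
```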

Putting it Together

To see how these elements come together to create a useful flow, see our example step-by-step tutorial.
