Introduction to the ETL pipeline
Get to know how the ETL pipeline works with Connectors.
This section covers ETL pipelines that use Connectors. By the end of this section, you will grasp the core concepts of ETL pipelines and be able to map and transfer data between a source and a destination.
Overview
The ETL pipeline template helps users create ETL pipeline applications that load data directly from a data source into a target destination. These applications are especially useful when working with large volumes of data.
The ETL pipeline works with one Connector or Entity on the source side and one Connector or Entity on the destination side at a time.
Users can append, add and update (upsert), update, delete, or sync data between the source and the target destination.
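The semantics of these write modes can be pictured with a small sketch. The following Python is illustrative only, not the Langstack API, and assumes each record carries a unique id field:

```python
def apply_write_mode(target, source, mode):
    """Return a new record list after applying `mode` to `target` using `source`."""
    if mode == "append":                 # insert every source row as a new row
        return target + [dict(r) for r in source]
    by_id = {rec["id"]: dict(rec) for rec in target}
    if mode == "add_and_update":         # upsert: insert new rows, overwrite existing ones
        for r in source:
            by_id[r["id"]] = dict(r)
    elif mode == "update":               # overwrite only rows that already exist
        for r in source:
            if r["id"] in by_id:
                by_id[r["id"]] = dict(r)
    elif mode == "delete":               # remove target rows matching source ids
        for r in source:
            by_id.pop(r["id"], None)
    elif mode == "sync":                 # make the target an exact mirror of the source
        by_id = {r["id"]: dict(r) for r in source}
    else:
        raise ValueError(f"unknown mode: {mode}")
    return list(by_id.values())

target = [{"id": 1, "name": "Ada"}, {"id": 2, "name": "Bob"}]
source = [{"id": 2, "name": "Bobby"}, {"id": 3, "name": "Cyd"}]
print(apply_write_mode(target, source, "add_and_update"))
# [{'id': 1, 'name': 'Ada'}, {'id': 2, 'name': 'Bobby'}, {'id': 3, 'name': 'Cyd'}]
```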
The source and destination can be Langstack applications or external resources. For example, users can load records into a Langstack Entity directly from an external CSV file. Similarly, data can be transferred from Langstack applications to external destinations.
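As a minimal sketch of the CSV-to-Entity case, the following Python reads rows from a CSV source and loads them into a plain list standing in for a Langstack Entity (the data and names here are hypothetical; in Langstack this is configured visually rather than coded):

```python
import csv
import io

# In-memory CSV standing in for an external file, so the sketch is self-contained.
CSV_TEXT = "id,name,email\n1,Ada,ada@example.com\n2,Bob,bob@example.com\n"

def extract_rows(csv_file):
    """Extract: read each CSV row as a dict keyed by the header names."""
    return list(csv.DictReader(csv_file))

contacts_entity = []                           # stand-in for a Langstack Entity
for row in extract_rows(io.StringIO(CSV_TEXT)):
    contacts_entity.append(row)                # Load: write into the target
print(f"loaded {len(contacts_entity)} records")
```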
The transfer of data is done by:
Creating an ETL pipeline
Setting the source fields (Reader) and the destination fields (Writer)
Aligning the source and destination through Field Mapping (see the sketch after this list)
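The sketch below puts these three steps together: a Reader that yields source records, a Writer target that stores destination records, and a field mapping that aligns source field names with destination field names. The names (SOURCE_ROWS, FIELD_MAP, and so on) are illustrative assumptions, not Langstack identifiers:

```python
SOURCE_ROWS = [
    {"first_name": "Ada", "mail": "ada@example.com"},
    {"first_name": "Bob", "mail": "bob@example.com"},
]

# Field Mapping: destination field name -> source field name
FIELD_MAP = {"name": "first_name", "email": "mail"}

def read_source():
    """Reader: yield records from the configured source."""
    yield from SOURCE_ROWS

def map_fields(record):
    """Apply the field mapping to reshape one source record."""
    return {dst: record[src] for dst, src in FIELD_MAP.items()}

destination = []                  # Writer target, a stand-in for the destination
for rec in read_source():
    destination.append(map_fields(rec))

print(destination)
# [{'name': 'Ada', 'email': 'ada@example.com'}, {'name': 'Bob', 'email': 'bob@example.com'}]
```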
The ETL pipeline can be set up to run immediately or on a schedule (once or recurring). Data is read and written through the ETL pipeline according to the defined schedule.
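The three run options can be sketched as follows. This is only the idea in miniature; a real scheduler, including Langstack's own, is far more robust:

```python
import time
from datetime import datetime

def run_pipeline():
    print(f"pipeline ran at {datetime.now():%H:%M:%S}")

def schedule(mode, delay_seconds=2, runs=3):
    if mode == "immediate":          # run once, right away
        run_pipeline()
    elif mode == "once":             # run a single time at the scheduled moment
        time.sleep(delay_seconds)
        run_pipeline()
    elif mode == "recurring":        # run repeatedly at a fixed interval
        for _ in range(runs):
            run_pipeline()
            time.sleep(delay_seconds)

schedule("recurring")
```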
An ETL pipeline can be linked to another ETL pipeline or Process to build a sequence, such that the linked app runs immediately after the ETL pipeline finishes executing.
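A minimal sketch of such a sequence follows; the chain structure and app names are hypothetical stand-ins for linked Langstack apps:

```python
def etl_pipeline_a():
    print("ETL pipeline A finished")

def process_b():
    print("Process B finished")

# Each entry: app to run, plus the name of the app linked to run right after it.
CHAIN = {
    "pipeline_a": (etl_pipeline_a, "process_b"),
    "process_b": (process_b, None),
}

def run_chain(name):
    """Run an app, then run whatever app is linked to it, until the chain ends."""
    while name is not None:
        app, name = CHAIN[name]
        app()

run_chain("pipeline_a")   # runs pipeline A, then Process B immediately after
```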