# Exercise 2: ETL pipeline, CSV source & FTP Connector

In this exercise, you will store data from a CSV file in an Entity through the ETL pipeline.

## Requirements

1. Store data through the ETL pipeline.&#x20;
2. Source data is in a CSV file.&#x20;
3. Use an FTP tool as the connector. The file must reside on the FTP server before the ETL pipeline process starts.&#x20;
4. Store the data from the CSV file in the entity “UserName\_Acc\_customers”.

## Solution

For this exercise, place the source file on the SFTP server via the connector. Then define the Reader, Writer, and Field Mapping settings in the ETL pipeline. Finally, run the ETL pipeline to store the details in the destination Entity.&#x20;

In this exercise, the data source is an FTP connector serving a CSV file, and the destination is an Entity.

* For this exercise, use a CSV file with the following fields containing customer information.&#x20;
  1. Address
  2. City
  3. Country
  4. Customer\_ID
  5. Email
  6. First\_Name
  7. IsActive
  8. Join\_Date
  9. Last\_Name
  10. Phone
  11. Zip\_Code

{% hint style="info" %}
The data used for this exercise is not real and is test data that has been auto-generated by [http://www.convertcsv.com/generate-test-data.htm](https://www.convertcsv.com/generate-test-data.htm). \
The format of the dates generated in the test data is inconsistent and needs to be fixed to be aligned with the target entity's date format, i.e., YYYY-MM-DD.
{% endhint %}
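Since the generated test data uses inconsistent date formats, normalizing the `Join_Date` column up front avoids errors when the pipeline writes to the entity. The following Python sketch shows one way to do this; the list of input formats is an assumption about what the generator may emit, so extend it to match your actual file.

```python
from datetime import datetime

# Date formats the test-data generator might emit (assumed examples;
# extend this list to match the formats in your generated file).
FORMATS = ["%m/%d/%Y", "%d-%m-%Y", "%Y/%m/%d", "%Y-%m-%d"]

def to_iso(value: str) -> str:
    """Return the date as YYYY-MM-DD, trying each known input format."""
    for fmt in FORMATS:
        try:
            return datetime.strptime(value, fmt).strftime("%Y-%m-%d")
        except ValueError:
            continue
    raise ValueError(f"Unrecognized date format: {value!r}")

# Example: normalize the Join_Date column of parsed CSV rows.
rows = [{"First_Name": "Ada", "Join_Date": "03/15/2021"}]
for row in rows:
    row["Join_Date"] = to_iso(row["Join_Date"])
```

Run this over the generated CSV (for example with `csv.DictReader`/`csv.DictWriter`) before uploading the file to the FTP server.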

* Open FileZilla and connect to the SFTP server.&#x20;
* Place the CSV file on the remote site.
* To go to the ETL pipeline template, click on “ETL pipeline” under the “Data management” menu on the left side panel.
<figure><img src="https://media.langstack.com/documentation/media/images/code/training_manual/chapter_9/ch9etlmenu.png" alt=""><figcaption></figcaption></figure>

* To create a new ETL pipeline, click the \[+ ETL pipeline] button under the “ETL pipeline” tab.

<figure><img src="https://media.langstack.com/documentation/media/images/code/training_manual/chapter_9/ch9addetl.png" alt=""><figcaption></figcaption></figure>

* To label the ETL pipeline and execution log entity, add the following information:&#x20;
  1. Enter the “ETL pipeline” name as “UserName\_Acc\_customers\_csvtoentity”.&#x20;
  2. Optionally add a description for this ETL pipeline.&#x20;
  3. Enter a name for the execution log for this ETL pipeline as “csvtoentity”. This log will be updated each time the ETL pipeline is executed and can be viewed by clicking “Go to records”.

<figure><img src="https://media.langstack.com/documentation/media/images/code/training_manual/chapter_9/ch9ex2namedetails.png" alt=""><figcaption></figcaption></figure>

* To connect to the data source, add the necessary information in the “Data Source” function. To add an existing connector:&#x20;
  1. Select the “Connector” tab.
  2. Select the relevant connector source. In this exercise, “Tutorial\_SFTP” is selected. (The source can be selected from the drop-down menu or a new source can be added by clicking the \[+] button and adding relevant details.)&#x20;
  3. To edit the settings, click the “Edit the settings” arrow. In the settings, to add or update the details for the FTP/SFTP Connector, update the required fields ([click here for the steps](https://docs.langstack.com/welcome/get-started/learn-langstack/connectors/sftp-connector)).\
     \
     If a connector does not exist, create a new one from the Data Source in the ETL pipeline: click the \[+] button and add the required fields ([click here for the steps to create a connector](https://docs.langstack.com/welcome/get-started/learn-langstack/connectors/create-a-connector)).

<figure><img src="https://media.langstack.com/documentation/media/images/code/training_manual/chapter_9/ch9ex1dstutsftp.png" alt=""><figcaption></figcaption></figure>

* To add the necessary details to connect with the data destination, in the “Data Destination” section:&#x20;
  1. Select the “Entity” tab.
  2. Select Source as “UserName\_Acc\_customers”.

<figure><img src="https://media.langstack.com/documentation/media/images/code/training_manual/chapter_9/ch9ex2ddentitycust.png" alt=""><figcaption></figcaption></figure>

* To prevent multiple simultaneous executions, leave the toggle button enabled for “Skip execution while in progress”. \
  When this toggle is enabled, execution of this ETL pipeline is skipped if one is already in progress.

<figure><img src="https://media.langstack.com/documentation/media/images/code/training_manual/chapter_10/ch10skipexe.png" alt=""><figcaption></figcaption></figure>

* The default ETL pipeline execution mode is “Immediate”. For this exercise, keep it as is.

<figure><img src="https://media.langstack.com/documentation/media/images/code/training_manual/chapter_10/ch10immedexec.png" alt=""><figcaption></figcaption></figure>

* To define the Reader (reading from the source) and Writer (writing on the destination) settings, go to the “Data Format” tab.\
  To define the stream, select “Reader Stream” as “CSV Stream”.

<figure><img src="https://media.langstack.com/documentation/media/images/code/training_manual/chapter_9/ch9etlrdrcsvstream.png" alt=""><figcaption></figcaption></figure>

* To edit the settings for the CSV stream, click the arrow beside "Edit the settings".

<figure><img src="https://media.langstack.com/documentation/media/images/code/training_manual/chapter_9/ch9etlrdrcsvstreditset.png" alt=""><figcaption></figcaption></figure>

* To specify the “Reader stream”, define the following settings in the “FTP CSV Format Details” box:&#x20;
  1. Add the exact file path. This is the path to the file from which the CSV data will be read.&#x20;
  2. Select “Character Set” as “Universal (UTF-8)”.&#x20;
  3. Select “Language” as “English”.&#x20;
  4. To define that the data is read from the first row, in the “Start reading CSV from line” field, enter the digit “1”. \
     The field “Header present in the first row” is selected by default.&#x20;
  5. To define the separator based on which the fields are distinguished, select Separator as “Comma”. The \[Sample Data] button gets activated.&#x20;
  6. To define the sample entity fields, click the \[Sample Data] button.

<figure><img src="https://media.langstack.com/documentation/media/images/code/training_manual/chapter_9/ch9ex2ftpdet1.png" alt=""><figcaption></figcaption></figure>
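The Reader settings above (UTF-8 character set, header present in the first row, comma separator) correspond to standard CSV parsing. As an illustration only, here is how the same parsing looks in Python; the sample row is hypothetical stand-in data, not part of the exercise file.

```python
import csv
import io

# In-memory stand-in for the file on the FTP server
# (illustrative rows only; real data comes from the generated CSV).
data = io.StringIO(
    "First_Name,Last_Name,IsActive,Phone,Email,Join_Date,Address,City,Zip_Code,Country\n"
    "Ada,Lovelace,true,555-0100,ada@example.com,2021-03-15,12 Main St,London,SW1,UK\n"
)

# Comma separator with the header in the first row -- mirroring the
# choices made in the "FTP CSV Format Details" Reader settings.
reader = csv.DictReader(data, delimiter=",")
rows = list(reader)
```

Each parsed row is a dictionary keyed by the header fields, which is the same shape the Reader stream exposes for field mapping later.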

* When the \[Sample Data] button is clicked, a box will be displayed for adding the header sample from the CSV file. To define the fields to be read from the source file, add the header sample as follows: \
  \
  `First_Name,Last_Name,IsActive,Phone,Email,Join_Date,Address,City,Zip_Code,Country` \
  \
  The header sample contains the details of the fields to be read from the CSV source. To save the details, click the \[OK] button.

<figure><img src="https://media.langstack.com/documentation/media/images/code/training_manual/chapter_9/ch9ex2ftpdetsampleschema.png" alt=""><figcaption></figcaption></figure>

* The entity's fields will be displayed (based on the sample data entered).

<figure><img src="https://media.langstack.com/documentation/media/images/code/training_manual/chapter_9/ch9ex2ftpdet2srcfields.png" alt=""><figcaption></figcaption></figure>

* To copy the sampled CSV fields to the Source fields, click the activated \[Copy Sample to Source Fields] button. \
  The fields in the Source fields section will be populated.

<figure><img src="https://media.langstack.com/documentation/media/images/code/training_manual/chapter_9/ch9ex2ftpdet3copytosrcfields.png" alt=""><figcaption></figcaption></figure>

* The fields will be populated with the “String” data type in the Source fields section. \
  Adjust the data types of the “IsActive” and “Join\_Date” fields to align with the data types of the target entity. (A field's data type can be changed, unnecessary fields can be deleted, and further fields can be added as required.)

<figure><img src="https://media.langstack.com/documentation/media/images/code/training_manual/chapter_9/ch9ex2srcfieldsadj.png" alt=""><figcaption></figcaption></figure>

* To accept the FTP CSV Format Details settings, click the \[Accept & Collapse] button.&#x20;

<figure><img src="https://media.langstack.com/documentation/media/images/code/training_manual/chapter_9/ch9ex2ftpdetacceptcollapse.png" alt=""><figcaption></figcaption></figure>

* After the “Reader” settings are defined, to define the “Writer” settings, select the Data Format > Writer tab. \
  To write the data to the destination as read from the Reader source fields, without transforming it, select “Writer Mode” as “Append”.

<figure><img src="https://media.langstack.com/documentation/media/images/code/training_manual/chapter_9/ch9etlwriterddentity.png" alt=""><figcaption></figcaption></figure>

* To map the reader source fields to the entity destination fields, select the “Field Mapping” tab.\
  To add the Mapped Fields, click the \[+ Field] button and add relevant information.

<figure><img src="https://media.langstack.com/documentation/media/images/code/training_manual/chapter_9/ch9etladdfieldfm.png" alt=""><figcaption></figcaption></figure>

* For CSV files, the top-down sequence of the mapped fields must exactly match the sequence specified in the Reader.&#x20;
  1. Mapping Sources: Select the field specified in the Reader. Select Variables>Reader>(field name). It displays as “reader.(Field Name)”.&#x20;
  2. Select field from Entity: Select mapping field from the Entity.&#x20;
  3. Data Type: Select the data type for the mapping fields.

<figure><img src="https://media.langstack.com/documentation/media/images/code/training_manual/chapter_9/ch9etlfieldmapplabel.png" alt=""><figcaption></figcaption></figure>

* Add all required fields in the “Field Mapping” section except Customer\_ID (as that is auto-generated):&#x20;
  1. First\_Name
  2. Last\_Name&#x20;
  3. IsActive&#x20;
  4. Phone&#x20;
  5. Email&#x20;
  6. Join\_Date&#x20;
  7. Address&#x20;
  8. City&#x20;
  9. Zip\_Code&#x20;
  10. Country
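The field mapping above can be sketched as a simple dictionary from Reader fields to entity fields. In this exercise the names are identical on both sides, and `Customer_ID` is omitted because the entity generates it automatically. This is an illustration of the mapping's shape, not the platform's internal implementation.

```python
# Reader-to-entity field mapping used in this exercise. The names happen
# to match on both sides; Customer_ID is omitted (auto-generated).
FIELD_MAPPING = {
    "reader.First_Name": "First_Name",
    "reader.Last_Name": "Last_Name",
    "reader.IsActive": "IsActive",
    "reader.Phone": "Phone",
    "reader.Email": "Email",
    "reader.Join_Date": "Join_Date",
    "reader.Address": "Address",
    "reader.City": "City",
    "reader.Zip_Code": "Zip_Code",
    "reader.Country": "Country",
}

def map_row(reader_row: dict) -> dict:
    """Build an entity record from a parsed reader row using the mapping."""
    return {dest: reader_row[src.split(".", 1)[1]]
            for src, dest in FIELD_MAPPING.items()}

# Hypothetical reader row for illustration.
record = map_row({"First_Name": "Ada", "Last_Name": "Lovelace",
                  "IsActive": "true", "Phone": "555-0100",
                  "Email": "ada@example.com", "Join_Date": "2021-03-15",
                  "Address": "12 Main St", "City": "London",
                  "Zip_Code": "SW1", "Country": "UK"})
```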
* To save the ETL pipeline, click \[Save].

<figure><img src="https://media.langstack.com/documentation/media/images/code/training_manual/chapter_9/ch9etlsave.png" alt=""><figcaption></figcaption></figure>

* To publish the ETL pipeline, click \[Publish].

<figure><img src="https://media.langstack.com/documentation/media/images/code/training_manual/chapter_9/ch9publishetl.png" alt=""><figcaption></figcaption></figure>

* Ensure the ETL pipeline is enabled.

<figure><img src="https://media.langstack.com/documentation/media/images/code/training_manual/chapter_9/ch9enableetl.png" alt=""><figcaption></figcaption></figure>

* Click the \[Run] button.
<figure><img src="https://media.langstack.com/documentation/media/images/code/training_manual/chapter_9/ch9runetl.png" alt=""><figcaption></figcaption></figure>

* The “Run ETL pipeline” dialog box is displayed. \
  Click \[Run] to execute the ETL pipeline.

  The ETL pipeline will run at the specified date and time.&#x20;

<figure><img src="https://media.langstack.com/documentation/media/images/code/training_manual/chapter_9/ch9runetl2.png" alt=""><figcaption></figcaption></figure>

* The records will be added to the destination.
