Overview
Version Description
FineDataLink Version | Functional Change |
---|---|
4.1.2 | Added the Field-to-Column Splitting, Field-to-Row Splitting, and Group Summary operators. |
4.1.3 | Updated the DB Table Input operator for the case where Data Source is set to DB Table Input. |
4.1.6.4 | Added the MongoDB Output operator, allowing you to output data to a MongoDB database. |
4.1.11.3 | Added the Elasticsearch Output operator, allowing you to output data to an Elasticsearch database. |
4.2.0.2 | Added the Elasticsearch Input operator, allowing you to fetch data from a specified Elasticsearch database. |
Application Scenario
The Data Synchronization node supports cross-database data synchronization. However, if you want to synchronize data to a database after complex processing (such as JSON parsing and multi-table association), you need to use the Data Transformation function, as shown in the following figure.

Function Description
Data Transformation provides multiple types of operators (input, output, transformation, and others) that let you perform complex data processing.
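The complex processing mentioned above (for example, JSON parsing plus multi-table association before loading into a target table) can be sketched in plain Python. This is illustrative only, not FineDataLink's API; the sample tables, field names, and target schema are all invented for the example.

```python
import json
import sqlite3

# Hypothetical source data standing in for two upstream tables.
orders = [
    {"order_id": 1, "cust_id": 10, "detail": '{"item": "pen", "qty": 3}'},
    {"order_id": 2, "cust_id": 11, "detail": '{"item": "ink", "qty": 1}'},
]
customers = {10: "Alice", 11: "Bob"}

# Step 1 (JSON parsing): flatten the nested "detail" column into plain columns.
parsed = []
for row in orders:
    detail = json.loads(row["detail"])
    parsed.append((row["order_id"], row["cust_id"], detail["item"], detail["qty"]))

# Step 2 (multi-table association): join orders to customers on cust_id.
joined = [(oid, customers[cid], item, qty) for oid, cid, item, qty in parsed]

# Step 3 (output): write the processed rows into a target database table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE order_report (order_id INT, customer TEXT, item TEXT, qty INT)")
conn.executemany("INSERT INTO order_report VALUES (?, ?, ?, ?)", joined)
print(conn.execute("SELECT * FROM order_report").fetchall())
# [(1, 'Alice', 'pen', 3), (2, 'Bob', 'ink', 1)]
```

In a real pipeline the three steps map to an input operator, transformation operators (JSON Parsing and an association), and an output operator.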

Function List
Enter the Data Transformation node. The following figure shows the Data Transformation page.
The following table describes the operators available in Data Transformation.
Type | Operator | Description |
---|---|---|
Data Input | DB Table Input | Allows you to fetch data from tables in relational databases. For details, see Data Sources Supported by FineDataLink. This operator was updated in V4.1.3 and later versions for the case where DB Table Input is selected as the data source type. |
| | Allows you to fetch data from APIs (including RESTful APIs and WebService APIs). |
| | Allows you to fetch Excel, CSV, and TXT file data from local FineDataLink servers and FTP/SFTP servers. |
| | Allows you to fetch data from Jodoo forms. |
| | Allows you to fetch data from a specified MongoDB collection. |
| | Allows you to call functions developed in the SAP system through RFC APIs to fetch data. |
| Elasticsearch Input | Allows you to fetch data from a specified Elasticsearch database. |
| Dataset Input | Allows you to fetch data from file datasets (Excel, TXT, XML, and CSV), tree datasets, stored procedures, programs, built-in datasets, and associated datasets. Stored procedures, programs, built-in datasets, and associated datasets can only be defined in FineReport Designer. |
Data Output | | Allows you to output data into tables in relational databases. |
| | Allows you to output obtained data as parameters for downstream nodes to use. |
| | Allows you to output data into APIs. |
| | Allows you to output data into Jodoo forms. |
| | Allows you to output data as files. |
| MongoDB Output | Allows you to output data into a MongoDB database. |
| Elasticsearch Output | Allows you to output data into Elasticsearch. |
Connection | | Allows you to associate two tables from different source databases to generate a new table using the supported join methods. |
| | Allows you to compare two inputs and obtain the new, deleted, identical, or updated data. |
| | Allows you to concatenate multiple tables row-wise and output a combined table. |
Transformation | | Allows you to convert the columns in a data table to rows. |
| | Allows you to convert the rows in a data table to columns. |
| | Allows you to parse JSON data and output it in a row-column format. |
| | Allows you to parse XML data and output it in a row-column format. |
| | Allows you to select and rename fields and change data types. |
| | Allows you to obtain a new field by referencing or calculating on the original fields without affecting them. |
| | Allows you to filter for the required data records. |
| | Allows you to select fields and convert table data into multiple JSON objects (which can be nested). |
| Field-to-Column Splitting | Allows you to split field values according to specific rules (delimiters or character counts), with the split values forming multiple new columns. |
| Field-to-Row Splitting | Allows you to split field values according to specific rules (delimiters), with the split values forming a new column. |
| Group Summary | Allows you to group data based on specified criteria and perform summary calculations on each group. |
Laboratory | | Allows you to query and process data using the built-in Spark calculation engine, with support for parameters and functions. |
| | Allows you to call Python scripts to perform complex data processing. |
Others | | Allows you to add remarks to tasks and nodes. |
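To make two of the transformation operators in the table concrete, here is a minimal plain-Python sketch of field-to-column splitting followed by a group summary. This is illustrative only, not FineDataLink's API; the sample rows, delimiter, and column names are invented.

```python
from collections import defaultdict

# Hypothetical input rows with a combined "region_branch" field.
rows = [
    {"region_branch": "East-Boston", "sales": 120},
    {"region_branch": "East-Albany", "sales": 80},
    {"region_branch": "West-Denver", "sales": 200},
]

# Field-to-Column Splitting: split "region_branch" on the "-" delimiter
# so the split values form two new columns, "region" and "branch".
for r in rows:
    r["region"], r["branch"] = r["region_branch"].split("-", 1)

# Group Summary: group the rows by "region" and sum the "sales" field.
summary = defaultdict(int)
for r in rows:
    summary[r["region"]] += r["sales"]

print(dict(summary))  # {'East': 200, 'West': 200}
```

The same two-step pattern (derive new columns, then aggregate over them) is a common way to chain transformation operators inside a Data Transformation node.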