FineDataLink Version | Functional Change
4.0.13 | /
Differences Between Data Synchronization and Data Transformation explains how the applications of these two nodes differ.
Data synchronization involves data acquisition, simple processing, and data output.
Data transformation involves data acquisition, complex processing, and data output.
When processing data, you may expect that the Data Synchronization node alone is enough to finish a cross-database data migration, only to find that complex operations are needed in actual use.
Or you may plan to fetch the specified data directly as a parameter value with the Parameter Assignment node, only to find that complex data processing is required first.
The Generate Data Transformation function enables you to switch nodes quickly and smoothly in such cases.
FineDataLink provides the Generate Data Transformation function. You can click Generate Data Transformation to convert a Data Synchronization or Parameter Assignment node into a Data Transformation node.
1. If the Data Synchronization or Parameter Assignment node inside a Loop Container is converted, the generated Data Transformation node remains inside the loop container.
2. If the Data Synchronization or Parameter Assignment node is connected to other nodes, the generated Data Transformation node keeps the original node connections unchanged.
Assume that you have the interface data http://fine-doc.oss-cn-shanghai.aliyuncs.com/book.json and want to fetch the data whose category is fiction to a specified database.
Drag a Data Synchronization node to the design page, select API Input from the drop-down list of Data Source, configure the API, and enter $.store.book as the JSON path under Return Value Processing > Response Body Processing to fetch all data of the book array, as shown in the following figure.
Click Data Preview, as shown in the following figure.
At this stage, data whose category is not fiction is also fetched.
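The JSON path $.store.book means: start at the root of the response ($), descend into the store object, and take its book array. A minimal Python sketch of this extraction, using a small hand-written payload assumed to have the same shape as book.json:

```python
import json

# Sample payload assumed to match the shape of book.json.
payload = json.loads("""
{"store": {"book": [
  {"category": "reference", "author": "Nigel Rees", "title": "Sayings of the Century", "price": 8.95},
  {"category": "fiction", "author": "Evelyn Waugh", "title": "Sword of Honour", "price": 12.99}
]}}
""")

# $.store.book: root -> "store" -> "book" selects the whole book array.
books = payload["store"]["book"]
print(len(books))  # every book, regardless of category
```

Note that this step only locates the array; it applies no filter, which is why non-fiction rows still appear in the preview.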
Click Generate Data Transformation to add a Data Transformation node. The API Input and DB Table Output nodes will be generated by default on the editing page, as shown in the following figure.
The generated content will retain the previous configurations by default, as shown in the following figure.
Add a Spark SQL node to filter the required data, as shown in the following figure.
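The Spark SQL node keeps only the rows whose category is fiction. A sketch of the equivalent filter, using Python's built-in sqlite3 to stand in for the upstream result set (the table name api_input and the sample rows are assumptions for illustration):

```python
import sqlite3

# Simulate the upstream API Input result as a small in-memory table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE api_input (category TEXT, author TEXT, title TEXT, price REAL)")
conn.executemany(
    "INSERT INTO api_input VALUES (?, ?, ?, ?)",
    [
        ("reference", "Nigel Rees", "Sayings of the Century", 8.95),
        ("fiction", "Evelyn Waugh", "Sword of Honour", 12.99),
        ("fiction", "J. R. R. Tolkien", "The Lord of the Rings", 22.99),
    ],
)

# The same filter the Spark SQL node would express.
rows = conn.execute(
    "SELECT title FROM api_input WHERE category = 'fiction'"
).fetchall()
print(rows)  # only fiction titles remain
```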
Configure the data destination, as shown in the following figure.
Assume that you want to parse the API data, set the data that meets certain conditions as parameters, and fetch the data that meets the parameter conditions from the database table to the specified database.
The data from http://fine-doc.oss-cn-shanghai.aliyuncs.com/book.json needs to be parsed to fetch the records whose isbn is not empty.
Output the author field in the data as a parameter and write the data into the book data table.
Retrieve the data that meets the parameter conditions and output it to the book_out data table.
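The parsing step above (keep only records with a non-empty isbn, then take their authors as parameter values) can be sketched in Python; the sample payload is an assumption shaped like book.json:

```python
import json

# Sample payload assumed to match the shape of book.json:
# some books carry an "isbn" field and some do not.
payload = json.loads("""
{"store": {"book": [
  {"category": "reference", "author": "Nigel Rees", "title": "Sayings of the Century"},
  {"category": "fiction", "author": "Herman Melville", "title": "Moby Dick", "isbn": "0-553-21311-3"},
  {"category": "fiction", "author": "J. R. R. Tolkien", "title": "The Lord of the Rings", "isbn": "0-395-19395-8"}
]}}
""")

# Keep only records whose isbn is present and non-empty...
books_with_isbn = [b for b in payload["store"]["book"] if b.get("isbn")]
# ...then collect their authors as the parameter values.
authors = [b["author"] for b in books_with_isbn]
print(authors)
```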
The Parameter Assignment node is used first, as shown in the following figure.
The Parameter Assignment node does not provide complex parsing and processing capabilities. Without Generate Data Transformation, you would have to process the data in a Data Transformation node, write it to a database, and then use the Parameter Assignment node to read it back as a parameter, which is cumbersome.
If you want to output parameters directly after processing the data, without writing the processed result to a database and then fetching values from the database as parameters, click Generate Data Transformation to add a Data Transformation node. The API Input and Parameter Output nodes will be generated by default on the editing page, as shown in the following figure.
For details about subsequent operation steps, see Parameter Output.