1. Files are upgraded in FineDataLink V4.2.10.3. Back up FineDB and files before an update.
2. Sheet Name in the File Input operator has been renamed Sheet Filter starting from FineDataLink V4.2.10.3. Upon upgrading to FineDataLink V4.2.10.3, existing configurations are converted automatically as follows: parameter-based values are converted to values under Custom Condition > SheetName (parameters are retained), and sheet selections are migrated to the Direct Selection option.
3. Starting from FineDataLink V4.2.10.4, you can configure common Kafka parameters when configuring the transmission queue. For the upgrade compatibility of Kafka-related parameters previously configured in FineDB, see Compatibility Description.
Frontend Configuration of Common Kafka Parameters
A Health Check configuration item has been added to the transmission queue configuration page. You can now configure common parameters of the Kafka message queue on the frontend, as shown in the following figure.
For details, see Compatibility Description.
New Configuration for Enhancing Kafka Fault Tolerance
In previous versions, if a failed node in the Kafka cluster happened to host a topic, the transmission queue would encounter errors.
This version introduces support for configuring the replication.factor parameter, by which you can set the default number of topic replicas distributed across different nodes. If a node fails, replicas on other nodes remain available, ensuring data accessibility and improving fault tolerance.
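For reference, the equivalent broker-side settings in standard Kafka look like the following sketch. The values shown are illustrative, and the exact parameter names exposed on the FineDataLink frontend may differ:

```properties
# Broker-side defaults (server.properties); values are illustrative.
# Each newly created topic gets 3 replicas spread across different brokers.
default.replication.factor=3
# Require at least 2 in-sync replicas before a write is acknowledged.
min.insync.replicas=2
```

With a replication factor of 3, a single node failure leaves two copies of every partition available, so the transmission queue can keep reading and writing.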
New Configuration for Optimizing Kafka Disk Storage Usage
Disk Management now includes new settings:
Max Storage per Topic: You can configure the maximum usable disk space for individual topics, improving disk cleanup efficiency.
Disk Alert Configuration: You can configure notification mechanisms for low disk space alerts specific to the Kafka message queue.
Disk Usage: You can view detailed disk usage information for the Kafka message queue.
When configuring Kafka data connections, you can now set common Kafka parameters on the frontend, as shown in the following figure.
For details, see Kafka Data Connection.
You can now view the storage usage of the Kafka topic associated with the table being parsed in Real-Time Capture Task, as shown in the following figure.
Before optimization:
The File Input operator could not read multiple sheets from an Excel file at once and did not support custom output fields, requiring additional scripting or functions for processing, which was complex and time-consuming.
After optimization:
New Sheet Filter configuration (updated from Sheet Name): Options include Direct Selection, Custom Condition, and All Sheets, allowing you to select multiple sheets simultaneously, eliminating the need for individual processing.
More flexible field output: You can now delete fetched fields and retain only the required fields.
Expanded support for CSV files: When File Type is set to CSV, a new Column To Be Read setting item is available, allowing customization of the row and column range to be read.
New built-in field: You can include the sheetName field (whose value is the sheet name) in Output Field, which helps pinpoint the data source.
For details, see Function Description of File Input.
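The three Sheet Filter modes can be pictured with a small sketch (the workbook, sheet names, and matching logic below are hypothetical illustrations, not FineDataLink's internal implementation). Direct Selection picks named sheets, Custom Condition matches sheet names against a pattern, and All Sheets keeps everything, with the built-in sheetName field recording where each row came from:

```python
import fnmatch

# Hypothetical workbook: sheet name -> rows (each row a dict).
workbook = {
    "Sales_2023": [{"amount": 100}],
    "Sales_2024": [{"amount": 250}],
    "Notes":      [{"amount": 0}],
}

def filter_sheets(workbook, mode, value=None):
    """Return the rows of the selected sheets, tagging each row with
    the built-in sheetName field so the data source can be pinpointed."""
    if mode == "direct":        # Direct Selection: an explicit sheet list
        names = [n for n in workbook if n in value]
    elif mode == "condition":   # Custom Condition: a wildcard pattern
        names = fnmatch.filter(workbook, value)
    else:                       # All Sheets
        names = list(workbook)
    return [dict(row, sheetName=name)
            for name in names for row in workbook[name]]

rows = filter_sheets(workbook, "condition", "Sales_*")
# Both Sales sheets are read in one pass; "Notes" is skipped.
```

The point of the sketch is that multiple sheets are selected in a single configuration, instead of processing each sheet individually.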
Before optimization:
Output formats were restricted. Single-row and single-column XML or JSON data could not be directly exported as files of the corresponding types. Output field names and data types could not be modified, requiring additional processing.
After optimization:
More flexible output format: Exporting data to custom file types is now supported. Single-row and single-column XML/JSON data can be output directly in its original format for subsequent operations.
Enhanced file splitting capability: The File Splitting function supports splitting data into multiple sheets by row count, ideal for large-volume data export scenarios.
Improved field mapping: You can now modify the names and data types of output fields, eliminating the need for extra processing and streamlining the workflow.
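Splitting exported data into multiple sheets by row count works like the following sketch (the row threshold and sheet naming are illustrative, not FineDataLink's actual implementation):

```python
def split_by_rows(rows, rows_per_sheet):
    """Split a flat list of rows into consecutive sheets of at most
    rows_per_sheet rows each, as file splitting by row count does
    when exporting large volumes of data."""
    return {
        f"Sheet{i + 1}": rows[start:start + rows_per_sheet]
        for i, start in enumerate(range(0, len(rows), rows_per_sheet))
    }

sheets = split_by_rows(list(range(25)), 10)
# 25 rows with a 10-row limit yield three sheets: 10 + 10 + 5 rows.
```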
The original Invocation Task has been renamed Scheduled Task Invocation, as shown in the following figure.
For details, see Scheduled Task Invocation.
An Execution Monitoring function has been added under O&M Center > Scheduled Task > Running Record to provide comprehensive oversight of execution records and proactive notification capabilities. By configuring custom monitoring rules and notification methods, users can stay informed about task status promptly, ensuring stable and reliable task operation.
During data project reuse and migration, you often need to modify configuration items, such as databases and schemas, in data synchronization tasks. If multiple tasks share the same database and source table configuration, extensive repetitive manual work is required, resulting in low efficiency and a high risk of errors.
After this update, the configuration of the database/schema, source table, and collection supports parameters, which can significantly reduce repetitive work of database and table configuration, thereby improving data development efficiency and project reusability.
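Conceptually, parameterization lets one task definition serve many environments. A minimal sketch using Python's string.Template (the `${env}`/`${region}` names are invented, and FineDataLink's actual parameter syntax may differ):

```python
from string import Template

# One task definition, parameterized on database/schema and source table.
task_config = {
    "database": "${env}_warehouse",
    "schema":   "${env}",
    "source_table": "orders_${region}",
}

def resolve(config, params):
    """Substitute parameter values into every configuration item,
    so the same task can be reused across projects."""
    return {k: Template(v).substitute(params) for k, v in config.items()}

prod = resolve(task_config, {"env": "prod", "region": "eu"})
# One parameter change reconfigures database, schema, and table at once.
```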
The original Source Table item has been split into Database/Schema and Source Table when Configuration Method is set to Table Selection, and both support parameters, as shown in the following figure.
For details, see Data Synchronization - Data Source.
The configuration of Database/Schema and Target Table (when set to Existing Table) now supports parameters, as shown in the following figure.
For details, see Data Synchronization - Data Destination and Mapping.
The Collection configuration now supports parameters, as shown in the following figure.
When reading data from PostgreSQL databases using real-time data development tasks or real-time pipeline tasks, you can set Read Mode to pgoutput.
Previous field configuration settings were restrictive. Limitations such as character constraints on field names, and the need to manually adjust input and parsing operators to parse or extract data, impacted processing efficiency and accuracy. In this version, nodes involving field settings have been optimized to address these pain points.
The Output Field setting item is added to the API Input node. Options include Automatic Acquisition and Manual Acquisition. You can use Manual Acquisition to customize fields, such as modifying field names and data types.
For details, see API Input.
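Manual Acquisition amounts to declaring the output schema up front. A hedged sketch of renaming fields and converting their data types (the field names and types here are invented for illustration):

```python
# Hypothetical output-field declaration: source key -> (new name, type).
field_spec = {
    "usr_nm":  ("user_name", str),
    "ord_amt": ("order_amount", float),
}

def apply_fields(record, spec):
    """Keep only the declared fields, renaming them and casting values,
    in the spirit of Manual Acquisition in the API Input node."""
    return {new: cast(record[old]) for old, (new, cast) in spec.items()}

row = apply_fields({"usr_nm": "alice", "ord_amt": "19.9", "debug": 1},
                   field_spec)
# Undeclared fields such as "debug" are dropped from the output.
```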
The JSON Parsing node allows you to select the required fields from upstream output fields and to configure the data types of output fields, as shown in the following figure.
The XML Parsing node allows you to select the required fields from upstream output fields and to configure the data types of output fields, as shown in the following figure.
For details, see XML Parsing Operator.
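Selecting required fields from a parsed payload and fixing their output data types can be sketched as follows (the payload and the chosen types are illustrative only):

```python
import json

payload = '{"id": "42", "name": "widget", "price": "3.5", "internal": true}'

# Required output fields and their target data types.
wanted = {"id": int, "price": float}

parsed = json.loads(payload)
output = {field: cast(parsed[field]) for field, cast in wanted.items()}
# Only the selected fields survive, with the configured types applied.
```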
The Data Association node allows you to select the required fields to output, as shown in the following figure.
For details, see Data Association.
The Value Replacement node allows you to select the required fields to output, as shown in the following figure.
The Field-to-Column Splitting node allows you to rename the new column in Split Result, as shown in the following figure.
For details, see Field-to-Column Splitting.
The Field-to-Row Splitting node allows you to rename the new column in Split Result, as shown in the following figure.
For details, see Field-to-Row Splitting.
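The two splitting modes differ only in the shape of the result. In this sketch (delimiter, field names, and new column names are invented), column splitting widens a row while row splitting lengthens the table, and the new columns can be renamed via the arguments:

```python
def split_to_columns(row, field, names, sep=","):
    """Field-to-Column Splitting: one row in, one wider row out,
    with the split results renamed via `names`."""
    parts = row.pop(field).split(sep)
    row.update(zip(names, parts))
    return row

def split_to_rows(row, field, name, sep=","):
    """Field-to-Row Splitting: one row in, one row per split value out."""
    parts = row.pop(field).split(sep)
    return [dict(row, **{name: p}) for p in parts]

wide = split_to_columns({"id": 1, "tags": "a,b"}, "tags", ["tag1", "tag2"])
tall = split_to_rows({"id": 2, "tags": "x,y"}, "tags", "tag")
```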
The MongoDB Input node now allows you to search fields and add fields of the decimal type, for which you can set precision and scale.
For details, see MongoDB Input.
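Precision and scale for a decimal field follow the usual SQL semantics: precision is the total number of significant digits and scale is the number of digits after the decimal point. Python's decimal module illustrates the rounding and range checking this implies (the example values are invented):

```python
from decimal import Decimal, ROUND_HALF_UP

def to_decimal(value, precision, scale):
    """Round a value to `scale` fractional digits and check that it
    fits in `precision` total digits, mirroring a decimal(p, s) field."""
    quantized = Decimal(str(value)).quantize(
        Decimal(1).scaleb(-scale), rounding=ROUND_HALF_UP)
    if len(quantized.as_tuple().digits) > precision:
        raise ValueError(f"{value} does not fit decimal({precision},{scale})")
    return quantized

amount = to_decimal(1234.5678, precision=10, scale=2)
# Rounded to two fractional digits; a value too wide raises an error.
```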
The New Calculation Column node allows you to modify field data types, where type conversion configuration is supported, as shown in the following figure.
For details, see Function Description of New Calculation Column.
The field naming rules in the Field Setting node have been optimized. You can add fields of the decimal data type, as shown in the following figure.
For details, see Field Setting.
The Output Field setting item is added to the Jiandaoyun Input node. Options include Automatic Acquisition and Manual Acquisition. When using Manual Acquisition, you can modify and replace field names, as shown in the following figure.
When you select Manual Acquisition in Output Field in the File Input node, searching fields is supported, as shown in the following figure.
After successful registration, the Registration Licence Version information is displayed under Registration Management > Registration Information, as shown in the following figure.