1. Application data sources are upgraded in FineDataLink V4.2.8.1. Back up FineDB before an update. Contact technical support personnel for update assistance.
2. After you update FineDataLink to V4.2.8.1, the names of original application data source connections will be automatically changed to their data source types, making it impossible to distinguish between multiple connections of the same data source type. Update FineDataLink with caution. This issue is resolved in FineDataLink V4.2.8.2.
3. Files are upgraded in FineDataLink V4.2.8.4. Back up FineDB and files before an update.
FineDataLink supports connection to Lark sheet data sources for data reading using scheduled tasks.
FineDataLink supports connection to KingbaseES (SQL Server) data sources for data writing using real-time tasks and real-time pipeline tasks, meeting real-time data writing requirements.
FineDataLink supports connection to KingbaseES (MySQL) data sources for data writing using real-time tasks and real-time pipeline tasks, meeting real-time data writing requirements.
When you empty the recycle bin, all files are processed as a whole: either all files are emptied successfully, or all are retained if the emptying operation fails.
The Data Service module allows you to publish data receiving APIs, which can receive data from upstream systems and write it to databases after parsing.
Application Scenario
You have used the Jiandaoyun Input operator to store form data in a database and set the scheduled task to run at a minute-level frequency to ensure data timeliness. However, since Jiandaoyun form data is not updated frequently, many of these scheduled executions are unnecessary.
You want data synchronization to be triggered only upon form data changes, avoiding meaningless task executions.
Function Description
The Data Service module provides a Jiandaoyun data receiving function. If form data changes, the system can synchronize data to the database in real time through the published data receiving API, as shown in the following figure.
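For illustration, the following is a minimal sketch of how an upstream system might push changed form data to a published data receiving API. The endpoint URL, authentication header, and payload fields are all hypothetical placeholders; the actual values depend on how the API is published in your environment.

```python
import requests

# Hypothetical endpoint and token: the real URL, authentication method,
# and payload schema come from the data receiving API you publish in
# the Data Service module.
API_URL = "https://fdl.example.com/dataservice/receive/orders"
API_TOKEN = "your-api-token"

# Example payload: rows pushed by an upstream system when form data changes.
payload = {
    "rows": [
        {"order_id": "A001", "amount": 199.0},
        {"order_id": "A002", "amount": 56.5},
    ]
}

resp = requests.post(
    API_URL,
    json=payload,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    timeout=10,
)
resp.raise_for_status()
print(resp.json())
```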
Data Batching and Write Interval setting items have been added to the Write Method tab page of scheduled tasks, real-time tasks, and real-time pipeline tasks. They are displayed when the data destination is a StarRocks/Doris/SelectDB database.
You can customize data batching conditions and write intervals between batches to prevent database overload issues caused by data write operations with large data volumes or high frequency, which can significantly improve data writing stability in big data scenarios.
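The Python sketch below illustrates the idea behind these two settings: rows are written in bounded batches, with a pause between writes so the destination is not overloaded. The write_batch callable, batch size, and interval are hypothetical stand-ins for the values you configure on the Write Method tab page.

```python
import time

def write_in_batches(rows, write_batch, batch_size=5000, interval_s=1.0):
    """Write rows in fixed-size batches, pausing between batches.

    write_batch is a hypothetical callable that writes one batch
    (e.g. one INSERT or one stream-load call to StarRocks/Doris/SelectDB).
    """
    for start in range(0, len(rows), batch_size):
        batch = rows[start:start + batch_size]
        write_batch(batch)
        if start + batch_size < len(rows):
            time.sleep(interval_s)  # write interval between batches

# Usage sketch: print batch sizes instead of writing to a database.
rows = list(range(12000))
write_in_batches(rows, lambda b: print(f"wrote {len(b)} rows"))
```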
When publishing data query APIs, you can now add business-specific descriptions to fields within arrays. These descriptions are automatically included in the API documents you export, helping API callers understand field meanings and use the API efficiently.
Data query APIs now include an HTTP Status Code setting item. If Show Error Message is ticked, API exception details will be reflected in the HTTP status code. If it is unticked, the HTTP status code only returns 200 or 404.
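As a rough illustration, a caller might branch on the returned status code as follows. The API URL and parameters are hypothetical placeholders.

```python
import requests

# Hypothetical query API URL; the actual address comes from the
# data query API published in the Data Service module.
resp = requests.get(
    "https://fdl.example.com/dataservice/query/orders",
    params={"order_id": "A001"},
    timeout=10,
)

if resp.status_code == 200:
    print(resp.json())
elif resp.status_code == 404:
    print("API not found")
else:
    # With Show Error Message ticked, exception details are reflected in
    # the HTTP status code, so other codes can be handled here instead of
    # parsing the response body.
    print(f"API error: HTTP {resp.status_code}: {resp.text}")
```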
You can now view and export dirty data details in Statistics, allowing you to quickly pinpoint the causes of dirty data, as shown in the following figure.
Lower-version ClickHouse drivers limit the DateTime64 value range. You can use the provided higher-version driver for full range coverage.
FineDataLink supports connection to Caché data sources with the following functions available:
Data reading/writing using Scheduled Task
Data Service
Database Table Management
Scheduled Task enables you to read stored procedures from InterSystems IRIS databases; the returned query result set is used as the input table.
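For illustration, the following sketch reads a stored procedure's result set from InterSystems IRIS over ODBC. The DSN, credentials, and the Sample.GetOrders procedure are hypothetical; the scheduled task performs the equivalent internally and uses the result set as its input table.

```python
import pyodbc

# Assumptions: an ODBC DSN named "IRIS" is configured for the
# InterSystems IRIS instance, and Sample.GetOrders is a hypothetical
# stored procedure that returns a result set.
conn = pyodbc.connect("DSN=IRIS;UID=_SYSTEM;PWD=secret")
cur = conn.cursor()

# The ODBC call escape sequence invokes the stored procedure; the
# rows fetched here correspond to the input table the task reads.
cur.execute("{CALL Sample.GetOrders(?)}", ["2024-01-01"])
for row in cur.fetchall():
    print(row)

conn.close()
```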
Real-Time Task information tables have been added to LogDB, allowing you to accurately and promptly track real-time task usage.
FineDataLink supports connection to KingbaseES (MySQL) data sources with the following functions available:
Data reading using Real-Time Pipeline Task
FineDataLink supports connection to KingbaseES (SQL Server) data sources with the following functions available:
Data reading using Real-Time Pipeline Task
Before optimization:
If you have authorized multiple application data source connections in a single data connection and upgraded FineDataLink to V4.2.8.1, two issues may occur. First, the names of the original application data source connections are automatically changed to their data source types, making it impossible to distinguish between multiple connections of the same data source type and complicating management. Second, the original application data source connections are moved to the root directory of Connection Management.
After optimization:
Application data source connections are renamed using the convention Data Connection Name-Application Data Source Name so that they remain distinguishable. Additionally, after a version update, the original data connections remain in their respective folders to prevent misplacement.
Email Notification Supporting Attachments
You may want to send files generated by upstream nodes as email attachments to specified recipients.
Starting from this version, you can add attachments in the Notification node when selecting Email as Notification Channel to meet data transmission needs.
A WeLink notification channel has been added, which enables you to notify WeLink users of scheduled task information.
Previously, Kafka data sources in real-time pipeline tasks only supported synchronization starting from the earliest available offset, which could cause redundant data synchronization when Kafka held large volumes of historical data.
Sync Start Time in real-time pipeline tasks now includes a Task Startup Time option if the data source is Kafka, which enables synchronization from the task startup time to avoid synchronizing unnecessary historical data.
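Conceptually, starting from the task startup time amounts to seeking each Kafka partition to the first offset at or after "now". The following kafka-python sketch illustrates that idea; the broker address and topic name are placeholders, and FineDataLink's internal implementation may differ.

```python
import time
from kafka import KafkaConsumer, TopicPartition

# Placeholder broker and topic; this only illustrates the
# "start from task startup time" behavior.
consumer = KafkaConsumer(bootstrap_servers="kafka.example.com:9092")
partitions = [TopicPartition("events", p)
              for p in consumer.partitions_for_topic("events")]
consumer.assign(partitions)

# Resolve the first offset at or after the startup timestamp for each
# partition, then seek there, skipping historical messages.
startup_ms = int(time.time() * 1000)
offsets = consumer.offsets_for_times({tp: startup_ms for tp in partitions})
for tp, ot in offsets.items():
    if ot is not None:
        consumer.seek(tp, ot.offset)
    else:
        consumer.seek_to_end(tp)  # no message newer than startup time

for msg in consumer:
    print(msg.value)
```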
The entry point for the Blocklist/Allowlist function has been adjusted. You can now access this function in Rule Management, as shown in the following figure.
Data Service now introduces application-level blocklist/allowlist settings, building upon existing global configurations. You can configure the blocklist/allowlist per application to implement accurate access control, mitigating risks of unauthorized access caused by credential leakage.
The maximum number of columns that can be generated by the Row to Column operator has been increased from 100 to 300.