4.2.8 Update Log

  • Last update: September 02, 2025
  • Compatibility Description

1. Application data sources are upgraded in FineDataLink V4.2.8.1. Back up FineDB before updating. Contact technical support personnel for assistance with the update.

2. After you update FineDataLink to V4.2.8.1, the names of existing application data source connections are automatically changed to their data source types, making it impossible to distinguish between multiple connections of the same type. Update FineDataLink with caution. This issue is resolved in FineDataLink V4.2.8.2.

3. Files are upgraded in FineDataLink V4.2.8.4. Back up FineDB and files before updating.

    4.2.8.5

    Supporting the Lark Sheet Data Source

    FineDataLink supports connection to Lark sheet data sources for data reading using scheduled tasks.

    Expanding Function Support for the KingbaseES (SQL Server) Data Source

    FineDataLink supports connection to KingbaseES (SQL Server) data sources for data writing using real-time tasks and real-time pipeline tasks, meeting real-time data writing requirements.

    Expanding Function Support for the KingbaseES (MySQL) Data Source

    FineDataLink supports connection to KingbaseES (MySQL) data sources for data writing using real-time tasks and real-time pipeline tasks, meeting real-time data writing requirements.

Optimizing the Cleanup Logic of the Recycle Bin

When you empty the recycle bin, all files are now processed as a whole: either all files are deleted successfully, or all are retained if the emptying operation fails.

    4.2.8.4

    Supporting Publication of Data Receiving APIs (General API Request Body)

    The Data Service module allows you to publish data receiving APIs, which can receive data from upstream systems and write it to databases after parsing.
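As a rough illustration, an upstream system's call to a published receiving API is an ordinary HTTP POST with a JSON body. The sketch below is hypothetical in its endpoint URL, token header, and field names; the actual request format follows the API document generated at publication.

```python
import requests

# Minimal sketch of an upstream system pushing rows to a published data
# receiving API. The endpoint path, token header, and field names are
# hypothetical; use the values shown on your API's publication page.
API_URL = "https://fdl.example.com/api/v1/receive/orders"
HEADERS = {
    "Authorization": "Bearer <your-app-token>",
    "Content-Type": "application/json",
}

payload = {
    "data": [
        {"order_id": "A-1001", "amount": 259.0, "status": "paid"},
        {"order_id": "A-1002", "amount": 88.5, "status": "pending"},
    ]
}

resp = requests.post(API_URL, json=payload, headers=HEADERS, timeout=10)
resp.raise_for_status()  # raise if the API reports a failure
print(resp.json())
```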


    Supporting Publication of Data Receiving APIs (Jiandaoyun Form Push)

    Application Scenario

    You have used the Jiandaoyun Input operator to store form data in a database, and set the scheduled task to run at a minute-level frequency to ensure data timeliness. However, since Jiandaoyun form data isn't frequently updated, unnecessary executions of the scheduled task may occur.

    You want data synchronization to be triggered only upon form data changes, avoiding meaningless task executions.

    Function Description

The Data Service module provides a Jiandaoyun data receiving function. When form data changes, the system synchronizes the data to the database in real time through the published data receiving API.


    Supporting Data Batching and Write Interval Settings in Write Method

Data Batching and Write Interval setting items have been added to the Write Method tab page of scheduled tasks, real-time tasks, and real-time pipeline tasks. They are displayed when the data destination is a StarRocks, Doris, or SelectDB database.

You can customize data batching conditions and the write interval between batches to prevent database overload caused by high-volume or high-frequency write operations, significantly improving data writing stability in big data scenarios.
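To make the mechanism concrete, here is a conceptual sketch of batched writing with a write interval. It illustrates the general buffering technique only, not FineDataLink's internal implementation; BATCH_SIZE, WRITE_INTERVAL, and write_batch are hypothetical stand-ins for the setting items above.

```python
import time

# Conceptual sketch of data batching with a write interval: buffer rows,
# flush when the batch is full, and throttle flushes so that consecutive
# writes are at least WRITE_INTERVAL seconds apart.
BATCH_SIZE = 5000      # batching condition: rows per write
WRITE_INTERVAL = 2.0   # seconds between consecutive batch writes

def write_batched(rows, write_batch):
    buffer = []
    last_flush = float("-inf")  # so the first batch flushes immediately
    for row in rows:
        buffer.append(row)
        if len(buffer) >= BATCH_SIZE:
            wait = WRITE_INTERVAL - (time.monotonic() - last_flush)
            if wait > 0:
                time.sleep(wait)  # respect the write interval
            write_batch(buffer)
            buffer.clear()
            last_flush = time.monotonic()
    if buffer:
        write_batch(buffer)  # flush the final partial batch
```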

    Supporting Array Field Description During API Publication

When publishing data query APIs, you can now add business-specific descriptions to fields within arrays. These descriptions are automatically included in exported API documents, helping API callers understand field meanings and use the API efficiently.

    Data Query APIs Supporting HTTP Status Code Configuration

Data query APIs now include an HTTP Status Code setting item. If Show Error Message is ticked, API exception details are reflected in the HTTP status code. If it is unticked, only status code 200 or 404 is returned.
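From an API caller's perspective, the two modes might be handled as in the sketch below; the endpoint URL is hypothetical, and the branching mirrors the behavior described above.

```python
import requests

# Caller-side sketch of the two HTTP Status Code modes. The endpoint URL
# is hypothetical.
resp = requests.get("https://fdl.example.com/api/v1/query/orders", timeout=10)

if resp.status_code == 200:
    rows = resp.json()
elif resp.status_code == 404:
    rows = []  # with Show Error Message unticked, failures collapse to 404
else:
    # With Show Error Message ticked, the status code carries the exception
    # category, so callers can branch on it before parsing the body.
    raise RuntimeError(f"API error {resp.status_code}: {resp.text}")
```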

    Scheduled Task Supporting the Export of Dirty Data Details

You can now view and export dirty data details in Statistics, quickly pinpointing the causes of dirty data generation.

    Providing a High-Version Driver for the ClickHouse Data Source

The lower-version ClickHouse driver limits the supported DateTime64 range. You can use the provided higher-version driver for complete range coverage.

    4.2.8.3

    Supporting the Cache Data Source

    FineDataLink supports connection to Cache data sources with the following functions available:

    • Data reading/writing using Scheduled Task

    • Data Service

    • Database Table Management

    Allowing Reading Stored Procedures from the InterSystems IRIS Data Source

Scheduled Task now allows you to read stored procedures from InterSystems IRIS databases, with the returned result set used as the input table.
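As a generic illustration of the operation involved, reading a stored procedure's result set over ODBC looks roughly like the sketch below. This is not FineDataLink's driver code; the DSN, credentials, and procedure name are hypothetical.

```python
import pyodbc

# Generic sketch: execute a stored procedure over ODBC and read its result
# set, which plays the role of the input table. The DSN, credentials, and
# procedure name are hypothetical.
conn = pyodbc.connect("DSN=iris_dsn;UID=user;PWD=secret")
cursor = conn.cursor()

cursor.execute("{CALL Sample.GetOrdersByStatus(?)}", "paid")  # ODBC call syntax
for row in cursor.fetchall():
    print(row)

conn.close()
```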

    Adding Tables of Real-Time Task Information to LogDB

Tables recording real-time task information have been added to LogDB, allowing you to understand real-time task usage accurately and promptly.

    4.2.8.2

    Supporting the KingbaseES (MySQL) Data Source

    FineDataLink supports connection to KingbaseES (MySQL) data sources with the following functions available:

    • Data reading/writing using Scheduled Task

    • Data reading using Real-Time Pipeline Task

    • Data Service

    • Database Table Management

    Supporting the KingbaseES (SQL Server) Data Source

    FineDataLink supports connection to KingbaseES (SQL Server) data sources with the following functions available:

    • Data reading/writing using Scheduled Task

    • Data reading using Real-Time Pipeline Task

    • Data Service

    • Database Table Management

    Optimizing Application Data Sources

    Before optimization:

If you have authorized multiple application data source connections in a single data connection and upgraded FineDataLink to V4.2.8.1, two issues may occur:

• The names of the original application data source connections are automatically changed to their data source types, making it impossible to distinguish between multiple connections of the same type and posing management challenges.

• The original application data source connections are moved to the root directory of Connection Management.

    After optimization: 

Application data source connections are renamed using the convention Data Connection Name-Application Data Source Name so that connections remain distinguishable. Additionally, after version updates, original data connections remain in their respective folders rather than being moved.

    4.2.8.1

    Optimizing the Notification Node

    Email Notification Supporting Attachments

    You may want to send files generated by upstream nodes as email attachments to specified recipients.

Starting from this version, you can add attachments in the Notification node when Email is selected as the Notification Channel, meeting file transmission needs.


    Adding a WeLink Notification Channel

    A WeLink notification channel has been added, which enables you to notify WeLink users of scheduled task information.

    Supporting Incremental-Only Synchronization for Kafka Inputs in Data Pipeline

    Before optimization: 

In real-time pipeline tasks, Kafka data sources only supported synchronization starting from the earliest available offset, which could cause redundant synchronization when Kafka held large volumes of historical data.

    After optimization: 

    Sync Start Time in real-time pipeline tasks now includes a Task Startup Time option if the data source is Kafka, which enables synchronization from the task startup time to avoid synchronizing unnecessary historical data.
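The distinction corresponds to the familiar offset-reset behavior in Kafka consumers. The kafka-python sketch below is purely conceptual; the broker address, topic, and group ID are hypothetical.

```python
from kafka import KafkaConsumer

# Conceptual sketch: a consumer's starting position mirrors the Sync Start
# Time choice. "earliest" replays all retained history (the previous
# behavior); "latest" starts from new messages only, analogous to the
# Task Startup Time option. Broker, topic, and group ID are hypothetical.
consumer = KafkaConsumer(
    "orders_topic",
    bootstrap_servers="kafka.example.com:9092",
    group_id="fdl_pipeline_demo",
    auto_offset_reset="latest",  # applies when the group has no committed offset
)

for message in consumer:
    print(message.offset, message.value)
```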

    Data Service Supporting App Blocklist/Allowlist Configuration

• The entry of the Blocklist/Allowlist function has been adjusted. You can now access this function in Rule Management.

    • Data Service now introduces application-level blocklist/allowlist settings, building upon existing global configurations. You can configure the blocklist/allowlist per application to implement accurate access control, mitigating risks of unauthorized access caused by credential leakage.

    Optimizing the Row to Column Operator

    The maximum number of columns that can be generated by the Row to Column operator has been increased from 100 to 300.
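For reference, row-to-column is a pivot: each distinct value of a chosen field becomes an output column, so the 300-column cap bounds the number of distinct pivot values. A minimal pandas sketch of the transformation (illustrative only, not the operator's implementation):

```python
import pandas as pd

# Pivot quarters into columns: each distinct "quarter" value becomes a column.
df = pd.DataFrame({
    "region":  ["East", "East", "West", "West"],
    "quarter": ["Q1", "Q2", "Q1", "Q2"],
    "sales":   [100, 120, 90, 110],
})

wide = (
    df.pivot(index="region", columns="quarter", values="sales")
      .rename_axis(columns=None)
      .reset_index()
)
print(wide)
#   region   Q1   Q2
# 0   East  100  120
# 1   West   90  110
```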

