4.2.14 Update Log

  • Last update: February 06, 2026
  • Compatibility Description

    If FineDataLink is upgraded from an earlier version to V4.2.14.3 or a later version:

    • The dpworks directory will retain the configuration of scheduled tasks that were not successfully upgraded. Therefore, when backing up scheduled tasks, you must back up both Platform Configuration and FDL Task.

    • The configuration of nodes in scheduled tasks (such as remark content and message notification content) is stored in the external database. If the configuration contains emojis and the external database is MySQL with the utf8 encoding (an alias for utf8mb3), the emojis cannot be stored and an error will occur (see the sketch below). Other databases can save emojis normally.
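
    This limitation exists because emojis are encoded as 4-byte UTF-8 sequences, while MySQL's utf8 (utf8mb3) character set stores at most 3 bytes per character. Below is a minimal sketch, outside FineDataLink, of how such content could be checked before saving; the function name and sample remark are hypothetical.

```python
# Minimal sketch (not part of FineDataLink): detect text that needs the
# 4-byte UTF-8 range, which MySQL's utf8 encoding (utf8mb3) cannot store.
def needs_utf8mb4(text: str) -> bool:
    """Return True if any character requires 4 bytes in UTF-8."""
    return any(len(ch.encode("utf-8")) > 3 for ch in text)

remark = "Daily sync 🚀"
if needs_utf8mb4(remark):
    # Saving this remark to a utf8mb3 column would fail with an
    # "Incorrect string value" error; strip the emoji or use utf8mb4.
    print("Remark contains 4-byte characters; a utf8mb3 column cannot store it.")
```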

    4.2.14.4

    COPY Loading Support for Writing to SQL Server Databases

    To address performance issues during high-volume data writing to SQL Server databases, scheduled tasks and real-time pipeline tasks now support the COPY loading configuration when the target is a SQL Server database.

    For details, see Microsoft SQL Server Data Connection.
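
    The COPY loading option itself is configured in the product, not through code. For context only, here is a minimal sketch of the kind of batched writing such an option targets, assuming a pyodbc connection; the driver name, credentials, and dbo.target_table are placeholders, and this is not FineDataLink's implementation.

```python
# Illustrative only: batched inserts into SQL Server with pyodbc's
# fast_executemany, the general technique used to replace row-by-row writes.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=localhost;DATABASE=demo;UID=user;PWD=password;TrustServerCertificate=yes"
)
cursor = conn.cursor()
cursor.fast_executemany = True  # send parameter batches instead of one row per round trip

rows = [(i, f"name_{i}") for i in range(100_000)]
cursor.executemany("INSERT INTO dbo.target_table (id, name) VALUES (?, ?)", rows)
conn.commit()
```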

    Optimization of Real‑Time Pipeline Task

    You want the system to remind IT personnel when the write latency of a real‑time pipeline task is too high, or the amount of data pending synchronization is excessive.

    Options of Notification Content in Result Notification now include Synchronization Delay.

    For details, see Real-Time Pipeline Task Configuration - Task Control.

    A Write Latency indicator has been added to the Read and Write statistics window of single tables in Historical Statistics of real‑time pipeline tasks. You can now view the fluctuation of the table write latency within the selected time period.

    Optimization of Real-Time Capture Task

    A Log Parsing Latency indicator has been added to Historical Statistics of real-time capture tasks, allowing you to view the fluctuation of log parsing latency of the capture task within the selected time period.

    4.2.14.3

    Change in the Storage Location of Scheduled Task Configuration

    Starting from this version, scheduled task configuration information is no longer stored in the dpworks folder but is instead saved in the FineDB database.

    For a new installation of FineDataLink V4.2.14.3 or later, the dpworks folder remains empty and does not contain any configuration data. When backing up scheduled tasks, you only need to back up the platform configuration.

    For details, see Backup and Restoration.

    4.2.14.2

    Optimization of Event-Based Scheduling

    For details, see Event-Based Scheduling.

    New Timeout Policy

    A Timeout Policy setting item has been introduced. When it is enabled, if an upstream task times out but completes successfully within the configured grace period, the current task group can still proceed with execution.

    New Exception Notification

    An Exception Notification setting item has been introduced. When it is enabled, if a task fails to run successfully, the system notifies O&M personnel for handling, preventing downstream tasks from being blocked by a task stuck in a non-executed state.

    4.2.14.1

    Optimization of Real-Time Capture Task

    Optimization of Relationships Between Real-Time Capture Tasks and Data Connections

    Before optimization: 

    Capture tasks were distinguished by data connection URL. As a result, if a new parameter was added to a data connection URL, a new capture task would be created.

    After optimization: 

    Capture tasks are now distinguished by data connection name.
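
    For illustration, a small sketch (all values hypothetical) of why URL-based identity was fragile: adding one parameter changes the URL string, while the connection name stays stable.

```python
# Hypothetical values: the same data connection before and after a URL
# parameter is added. Under the old logic the string change would have
# produced a second capture task; the connection name does not change.
old_url = "jdbc:mysql://db-host:3306/sales"
new_url = "jdbc:mysql://db-host:3306/sales?useSSL=false"
connection_name = "sales_mysql"

print(old_url == new_url)   # False -> URL-based identity treats these as different
print(connection_name)      # "sales_mysql" -> name-based identity stays the same
```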

    Execution Logic Optimization of Real-Time Capture Task

    To reduce the complexity of capture task execution and backfill triggering frequency, the execution logic of capture tasks has been optimized.

    Before optimization: 

    When a real-time task was paused or deleted, the system would automatically check whether the CDC data of each source table in that task was being used by other tasks, and would stop or continue collection accordingly.

    After optimization:

    • When a real-time pipeline task or real-time task is paused or deleted, it no longer triggers the pausing or deletion of tables in the capture task.

    • If a capture task is manually paused and later triggered to restart by a real-time task, collection of all tables will resume when the task starts.

    New O&M Operation of Real-Time Capture Tasks

    A Pause button has been added, allowing you to stop a capture task immediately when real-time collection has a significant impact on the database.

    Support for Viewing Backfill Records

    Before optimization: 

    The backfill process was a black box: you could not view backfill logs or status, making it difficult to detect and troubleshoot issues.

    After optimization: 

    Backfill records are now available, displaying details such as the status and information of tables currently being collected.

    Support for Removing Tables from Single/Multiple Real-Time Capture Tasks

    You can now remove tables from single or multiple real-time capture tasks.

    SQL Editor Optimization

    The SQL editor has been fully upgraded, with the following functions available:

    • Auto‑prompting for table names, field names, and other information, with assisted completion

    • Syntax checking and error highlighting

    • Selected code block debugging

    • Support for multiple SELECT query statements in a single script

    • SQL statement formatting and beautification

    • An edit lock function (When the content in the SQL editor is being edited, others can only preview it.)

    For details, see Database Table Management.

    Parameter Usage Optimization

    Support for Passing Parent‑Task Parameters to Nested Subtasks

    If you tick Pass Parameter in Current Task to Subtask, both the defined task parameters and the parameters output by Parameter Assignment can be passed to subtasks, sub‑subtasks, and other downstream nested tasks.

    For details, see Scheduled Task Invocation.
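
    A conceptual sketch, not FineDataLink's internal logic, of what this passing means: parameters defined on the parent task and values produced by Parameter Assignment stay visible at every nesting level. The task names and values below are made up.

```python
# Conceptual sketch only: parent parameters flow down through nested subtasks.
def run_task(name, own_params, inherited=None, subtasks=()):
    params = {**(inherited or {}), **own_params}   # inherited values visible; local values may override
    print(f"{name} runs with {params}")
    for sub_name, sub_params, sub_subtasks in subtasks:
        run_task(sub_name, sub_params, inherited=params, subtasks=sub_subtasks)

run_task(
    "parent_task",
    {"run_date": "2026-02-06", "assigned_value": 42},   # task parameter + Parameter Assignment output
    subtasks=[("subtask_a", {}, [("sub_subtask_b", {}, [])])],
)
```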

    Using Parameters as Names of Auto-Created Target Tables

    You can use parameters as target table names when Target Table is set to Auto Created Table. This function is useful when you process and write high-volume data that must be split into multiple tables, or when data must be stored in separate tables by date.

    For details, see Data Synchronization - Data Destination and Mapping.
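
    For example, a date parameter can be embedded in the target table name so that each run writes to its own auto-created table. The pattern below is only an illustration outside the product; the parameter and table names are hypothetical.

```python
# Sketch of the date-split naming pattern this feature enables.
from datetime import date

run_date = date(2026, 2, 6)                 # would come from a task parameter in practice
target_table = f"orders_{run_date:%Y%m%d}"  # e.g. "orders_20260206"
print(f"Rows for {run_date} go to auto-created table: {target_table}")
```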

    Support for Data Type Modification of Source Fields from Some App Data Sources

    You can now modify the data types of source fields from some app data sources in Field Mapping.

    Viewing Information of Source Tables (Including Incremental Tables Without PKs) for Scheduled Pipeline Tasks

    When adding tables to a scheduled pipeline task, you can now view table information and field descriptions via APIs. The list also displays incremental tables that lack primary keys; these are grayed out and temporarily not selectable.
