4.2.5 Update Log

  • Last update: April 28, 2025
  • 4.2.5.4

    Optimization of the XML Parsing Operator

    1. The Namespace setting item is removed.

    2. The source field defaults to responseBody if the upstream operator is API Input with Expand Parsed JSON Data into a Two-Dimensional Table unticked.

    3. Certain invalid characters in XML statements are now tolerated during parsing.

    4. Namespace prefixes and namespace URLs in XML files are obtained automatically.

    5. You can parse leaf nodes.

    6. You can remove namespace prefixes in batches from field names generated after parsing.

    7. The issue related to hierarchical decomposition is resolved.

    8. The logic for adding and deleting nodes is optimized.  

    For details, see XML Parsing Operator.
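    Outside FineDataLink, the ideas behind items 4-6 can be sketched in plain Python with the standard xml.etree.ElementTree module. The XML snippet, namespace URL, and field names below are made up for illustration; this is not the operator's implementation, only the general technique of stripping namespace prefixes and collecting leaf-node values:

```python
import xml.etree.ElementTree as ET

# Hypothetical XML with a namespace, for illustration only.
XML = """<root xmlns:ns="http://example.com/ns">
  <ns:order>
    <ns:id>1001</ns:id>
    <ns:amount>25.50</ns:amount>
  </ns:order>
</root>"""

def strip_ns(tag):
    # ElementTree stores namespaced tags as '{uri}name'; drop the '{uri}' part.
    return tag.split('}', 1)[-1]

root = ET.fromstring(XML)
# Collect leaf nodes (elements with no children) as field-name/value pairs.
leaves = {strip_ns(el.tag): el.text for el in root.iter() if len(el) == 0}
print(leaves)  # {'id': '1001', 'amount': '25.50'}
```

    After parsing, `id` and `amount` appear without the `ns:` prefix, which mirrors the batch prefix-removal behavior described above.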

    O&M Center Optimization

    Pipeline Task

    1. A Configuration Detail tab page is added to data pipeline tasks, where you can view the configuration details of a task without entering the task editing page. Details include source end information (such as settings of Data Source, Read Mode, and Synchronization Object), target end information, and pipeline control settings (such as settings of Dirty Data Threshold, Retry After Failure, Notification Content, and Log Level), as shown in the following figure.

    2. Task Management under O&M Center > Pipeline Task is divided into Running Record and Task Management to facilitate pipeline task management, as shown in the following figure.

    Scheduled Task

    Fields such as Creator and Task Priority are added to the scheduled task page. The View Details field is replaced by Running Record.

    Data Service

    On the API Management page, the View Details button is replaced by Call Record, a Creator filtering option is added, and the interaction is optimized. You can configure the column display of the call record table. For details, see Data Service O&M.

    Permission Management Optimization

    Pre-optimization:

    Before assigning Use and Authorization permission on scheduled tasks/pipeline tasks/data service APIs/data service Apps/data connections to users/departments/roles, you had to enable Hierarchical Authorization and then enable task management for the specified function modules as needed.

    Post-optimization:

    Hierarchical permission management is independent of resource permission management for each function module.

    With Hierarchical Permission Management enabled:

    • You can assign Use and Authorization permission on Personnel Management and System Management to users/departments/roles.

    • You can assign Authorization permission on function modules to users/departments/roles.

    With Resource Permission Control enabled, you can manage permission on each resource individually. All resources will be openly accessible if no module is ticked.

    • You can assign Use and Authorization permission on specified scheduled tasks/real-time tasks/pipeline tasks/data service APIs/data service Apps to users/departments/roles.

    4.2.5.3

    Python Operator Optimization  

    Pre-optimization:

    • During data development, you could not debug code in the Python operator in real time to troubleshoot errors.

    • The Python operator could not be connected with multiple upstream input and process operators.

    • Each Python operator consumed 20% of total memory, potentially leading to excessive server memory usage and resource waste.

    Post-optimization:

    • You can debug code in the Python operator during data development and view the returned execution result.
    • You can connect Python to multiple upstream input and process operators.

    • In FineOps-deployed FineDataLink, the Python module in the image package could be overwritten due to directory mounting to the host machine. This issue is fixed in this version.

    • The memory usage limit of the Python operator is adjusted.
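    To illustrate the kind of logic the multi-upstream support enables, here is a minimal sketch in plain Python. The two row lists stand in for outputs of two hypothetical upstream operators (the names and fields are invented for illustration), joined in the way a Python operator script might combine them:

```python
# Hypothetical outputs of two upstream operators: lists of dict rows.
orders = [{"id": 1, "amount": 100}, {"id": 2, "amount": 250}]
customers = [{"id": 1, "name": "Acme"}, {"id": 2, "name": "Beta"}]

# Index the second input by key, then enrich each row of the first input.
by_id = {c["id"]: c["name"] for c in customers}
result = [{**o, "customer": by_id.get(o["id"])} for o in orders]
print(result)
# [{'id': 1, 'amount': 100, 'customer': 'Acme'},
#  {'id': 2, 'amount': 250, 'customer': 'Beta'}]
```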

    New Functions in New Calculation Column

    FineVis real-time dashboards use data from WebSocket APIs that receive output from FineDataLink. However, since data layers of real-time 3D FineVis dashboards process data exclusively in the WGS84 geographic coordinate system, raw data based on GCJ-02 or BD09 coordinate systems cannot be directly displayed. To resolve this issue, FineDataLink offers two functions that convert GCJ-02 and BD09 coordinates into WGS84 coordinates.
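    The widely published approximate transforms behind such conversions can be sketched in Python. This is a generic community implementation of the BD09→GCJ-02 and GCJ-02→WGS84 formulas, not FineDataLink's own functions; the GCJ-02→WGS84 step is an approximate inverse of the official WGS84→GCJ-02 offset:

```python
import math

X_PI = math.pi * 3000.0 / 180.0
A = 6378245.0                 # semi-major axis used by the GCJ-02 transform
EE = 0.00669342162296594323   # eccentricity squared

def _transform_lat(x, y):
    ret = -100.0 + 2.0*x + 3.0*y + 0.2*y*y + 0.1*x*y + 0.2*math.sqrt(abs(x))
    ret += (20.0*math.sin(6.0*x*math.pi) + 20.0*math.sin(2.0*x*math.pi)) * 2.0/3.0
    ret += (20.0*math.sin(y*math.pi) + 40.0*math.sin(y/3.0*math.pi)) * 2.0/3.0
    ret += (160.0*math.sin(y/12.0*math.pi) + 320.0*math.sin(y*math.pi/30.0)) * 2.0/3.0
    return ret

def _transform_lng(x, y):
    ret = 300.0 + x + 2.0*y + 0.1*x*x + 0.1*x*y + 0.1*math.sqrt(abs(x))
    ret += (20.0*math.sin(6.0*x*math.pi) + 20.0*math.sin(2.0*x*math.pi)) * 2.0/3.0
    ret += (20.0*math.sin(x*math.pi) + 40.0*math.sin(x/3.0*math.pi)) * 2.0/3.0
    ret += (150.0*math.sin(x/12.0*math.pi) + 300.0*math.sin(x/30.0*math.pi)) * 2.0/3.0
    return ret

def gcj02_to_wgs84(lng, lat):
    # Compute the WGS84→GCJ-02 offset at this point and subtract it back out.
    dlat = _transform_lat(lng - 105.0, lat - 35.0)
    dlng = _transform_lng(lng - 105.0, lat - 35.0)
    radlat = lat / 180.0 * math.pi
    magic = 1 - EE * math.sin(radlat) ** 2
    sqrtmagic = math.sqrt(magic)
    dlat = (dlat * 180.0) / ((A * (1 - EE)) / (magic * sqrtmagic) * math.pi)
    dlng = (dlng * 180.0) / (A / sqrtmagic * math.cos(radlat) * math.pi)
    return lng - dlng, lat - dlat

def bd09_to_gcj02(lng, lat):
    # Invert the BD09 spiral offset applied on top of GCJ-02.
    x, y = lng - 0.0065, lat - 0.006
    z = math.sqrt(x*x + y*y) - 0.00002 * math.sin(y * X_PI)
    theta = math.atan2(y, x) - 0.000003 * math.cos(x * X_PI)
    return z * math.cos(theta), z * math.sin(theta)

def bd09_to_wgs84(lng, lat):
    return gcj02_to_wgs84(*bd09_to_gcj02(lng, lat))
```

    Within China the correction is on the order of a few hundred meters, so converted coordinates differ from the input by well under a hundredth of a degree.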

    4.2.5.2

    Optimization of the JSON Generation Operator

    Pre-optimization:

    • The JSON nesting depth in JSON Setting could not exceed three levels.

    • You might need to use a field repeatedly during JSON generation, but one field could only be selected once in a JSON Generation operator.

    • Fixed values that were required when you transmitted JSON data to APIs were not supported.

    • You might want to import JSON templates to facilitate the generation of API-compliant JSON data, which was not supported.

    Post-optimization:

    • The JSON nesting depth in JSON Setting can exceed three levels.

    • A single field can be selected repeatedly in a JSON Generation operator.

    • You can customize fixed values for selected fields.

    • Generation based on JSON templates is supported.

    For details, see JSON Generation.
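    The shape of output these changes allow can be sketched in plain Python. The field names, fixed value, and structure below are hypothetical, showing nesting deeper than three levels, one source field used twice, and a fixed value alongside mapped fields:

```python
import json

# Hypothetical input row from an upstream operator.
row = {"city": "Shanghai", "temp": 21.5}

payload = {
    "meta": {
        "source": "FineDataLink",   # fixed (constant) value
        "location": row["city"],    # first use of the 'city' field
    },
    "data": {
        "weather": {
            "current": {            # fourth nesting level
                "city": row["city"],         # 'city' reused
                "temperature": row["temp"],
            }
        }
    },
}
print(json.dumps(payload))
```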

    4.2.5.1

    Optimization of Event-Based Scheduling

    Pre-optimization: 

    The workflow of configuring dependencies on a single task was lengthy. Timed Scheduling and Event-Based Scheduling had multiple entry points, leading to disjointed operations. In the case of multiple event-based scheduling plans, adjusting the involved task was inconvenient.

    Post-optimization:

    Dependency configuration for a single task can be completed on a unified canvas, with streamlined access to adding upstream dependencies, editing and deleting task groups, removing dependencies, viewing the scheduled execution status of tasks, and more.

    For details, see Event-Based Scheduling and Scheduled Task O&M - Scheduling Plan.

    You can manage all task groups systematically under O&M Center > Scheduled Task > Scheduling Plan > Event-Based Scheduling, as shown in the following figure.
    Support for the Snowflake Data Source

    FineDataLink supports connection to Snowflake data sources for data reading/writing using Scheduled Task, data writing using Pipeline Task and Real-Time Task, and data reading using Data Service.

    Support for the PI Data Source

    FineDataLink supports connection to PI databases for data reading using Real-Time Task in Data Development.


    Topic: Product Update Log