1. If you upgrade FineDataLink to V4.2.6.1 or later versions and re-register it, Decision-making Platform and Data Portal will no longer be displayed under System Management > Registration Management > Function List > Function Point Registered. (This does not affect function usage.)
2. Note the following items when upgrading FineDataLink to V4.2.6.2 or later versions.
Changes to JAR packages:
Removed fdl-offline-4.2.jar, fdl-pipeline-4.2.jar, and fdl-stream-4.2.jar.
Added fdl-datadevelop-offline-4.2.jar, fdl-datadevelop-stream-4.2.jar, fdl-datapipeline-stream-4.2.jar, fdl-engine-4.2.jar, fdl-realtime-center-4.2.jar, and fdl-datacenter-4.2.jar.
Before upgrading, you need to manually delete the JAR files whose names start with fine- and fdl- from /webapps/webroot/WEB-INF/lib in the Tomcat container.
Then place the new JAR packages in /webapps/webroot/WEB-INF/lib in the Tomcat container.
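The JAR replacement steps above can be sketched as follows. This is a minimal illustration, not an official upgrade script; the Tomcat installation path and the directory holding the new JAR packages are assumptions you must adjust to your environment.

```python
import shutil
from pathlib import Path

# Assumed paths -- adjust to your actual Tomcat installation and download location.
LIB_DIR = Path("/opt/tomcat/webapps/webroot/WEB-INF/lib")
NEW_JAR_DIR = Path("/tmp/fdl-new-jars")

def replace_fdl_jars(lib_dir: Path, new_jar_dir: Path) -> None:
    """Delete JARs whose names start with fine- or fdl-,
    then copy the new JAR packages into the lib directory."""
    for jar in lib_dir.glob("*.jar"):
        if jar.name.startswith(("fine-", "fdl-")):
            jar.unlink()
    for jar in new_jar_dir.glob("*.jar"):
        shutil.copy2(jar, lib_dir / jar.name)
```

Stop Tomcat before replacing the JAR files, and restart it afterward so the new packages are loaded.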
3. If you upgrade FineDataLink to V4.2.6.4 or later versions, note that:
If the Data Lineage function was not registered before the upgrade but is registered after the upgrade, you need to click the Lineage Reset button to refresh the lineage.
If you have used lineage analysis before the upgrade and have not registered the Data Lineage function after the upgrade, you must click the Lineage Reset button to refresh the lineage after you register this function point.
The lineage parsing logic has been optimized:
1. During lineage parsing, case-insensitive fuzzy matching of table names for Oracle databases is supported. In SQL queries:
If table names are enclosed in double quotes, table name matching is case-sensitive.
If table names are not enclosed in double quotes, table name matching is case-insensitive, where table names will be converted to uppercase by default for matching.
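The matching rule above can be illustrated with a short sketch. The function names are illustrative and do not reflect FineDataLink's internal implementation; the sketch only mirrors the stated behavior: quoted names match case-sensitively, unquoted names are folded to uppercase before comparison.

```python
def normalize_oracle_table_name(name: str) -> str:
    """Quoted identifiers keep their exact case;
    unquoted identifiers are folded to uppercase for matching."""
    if len(name) >= 2 and name.startswith('"') and name.endswith('"'):
        return name[1:-1]   # case-sensitive: keep as written
    return name.upper()     # case-insensitive: fold to uppercase

def table_names_match(a: str, b: str) -> bool:
    return normalize_oracle_table_name(a) == normalize_oracle_table_name(b)
```

For example, an unquoted `sales` matches `SALES`, while a quoted `"sales"` does not, because the quotes preserve the lowercase name.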
2. If newly published tasks contain disabled nodes, the lineage view will not display the parsing results of the disabled nodes. The historical content will not be processed.
If a data connection fails during the upgrade, tables under that data connection lose their lineage relationships after the upgrade. In this case, you need to republish the affected tasks to rebuild the lineage, but it is difficult to identify which tasks use tables under that data connection.
To solve this issue, a Lineage Reset function has been added in this version, by which you can refresh lineage at the data connection level.
When a database changes due to business adjustments, you may need to modify the data connections used in tasks, which is difficult because it is unclear which tasks use these data connections.
For data connections within your permission scope on the Data Connection Management page, you can view reference relationships between data connections and resources, check specific task details, and navigate to task configuration pages for adjustments.
The configuration methods SQL and Table Selection have been added to the Dimension Table Input operator. You can write SQL statements to process data from the dimension table before joining it with other tables.
You can configure the Flink Engine under Data Development > Real-Time Task to give FineDataLink aggregation computing capabilities, enabling it to support the construction of real-time data warehouses and dashboards under specific conditions.
You can associate multiple real-time data sources using the Data Association operator.
You can configure WebSocket data connections to read/write data from/into WebSocket under Data Development > Real-Time Task.
FineDataLink supports connection to Ymatrix 6.x data sources.
When you remove a table from a pipeline task, the system synchronously cleans up dirty data corresponding to that table to prevent performance degradation caused by data accumulation.
The indicators of execution statistics and exception troubleshooting for pipeline tasks are enriched to help you better monitor task execution status and identify issues, improving O&M efficiency.
1. On the detail page of a single pipeline task:
You can navigate to the page of its real-time capture task.
When the source-end database is MySQL, Oracle, or SQL Server, the log parsing latency is displayed.
The latest message read/write time is displayed.
2. The content on the Real-Time Statistics and Historical Statistics tab pages on the detail page of a single pipeline task has been optimized.
Under Pipeline Activity > Real-Time Statistics:
The indicator card of Total Read (Row) has been removed.
Total Output (Row) has been renamed Total Written (Row), and the output speed is no longer displayed.
The table columns have been adjusted.
Under Pipeline Activity > Historical Statistics:
Information on Pending Data (Row) and Total Written (Row) is displayed in charts.
3. A Source Table Quantity column has been added to the table under O&M Center > Pipeline Task > Task Management.
The O&M of real-time capture tasks has been optimized:
A Dependency tab page has been added to display the list of pipeline tasks and real-time tasks that depend on this real-time capture task.
When the source-end database is Oracle, MySQL, or SQL Server, Log Parsing Latency is displayed.
Real-Time Statistics and Historical Statistics tabs have been added to the Table Being Parsed tab page.
You can view the execution status of all data capture tasks.
Application Scenario
If a scheduled task configured with high-frequency minute-level scheduling gets stuck, subsequent scheduled tasks will repeatedly queue up.
If a scheduled task is configured with too many scheduling plans, the available execution threads may be exhausted.
Function Description
You can set Instance Execution Strategy for scheduled tasks configured with minute-level scheduling to keep the newest queuing instance only and skip others when the tasks get stuck.
You can set Task Priority for scheduled tasks. If the number of threads is insufficient, tasks with higher priority in the queue will be executed first.
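The two behaviors above can be sketched with a simple queue model. This is an illustration of the described semantics, not FineDataLink's scheduler code; the class and value names are hypothetical.

```python
import heapq
import itertools

class SchedulerQueue:
    """Sketch: higher-priority tasks dequeue first, and for a task
    with multiple queued instances only the newest one is executed."""

    def __init__(self):
        self._heap = []                  # entries: (-priority, seq, task, instance)
        self._counter = itertools.count()
        self._latest = {}                # task name -> seq of its newest instance

    def enqueue(self, task: str, priority: int, instance_id: int) -> None:
        seq = next(self._counter)
        self._latest[task] = seq         # older queued instances become stale
        heapq.heappush(self._heap, (-priority, seq, task, instance_id))

    def dequeue(self):
        while self._heap:
            _, seq, task, instance_id = heapq.heappop(self._heap)
            if self._latest.get(task) == seq:  # skip superseded instances
                return task, instance_id
        return None
```

For example, if two instances of a stuck minute-level task are queued alongside a higher-priority task, the higher-priority task runs first and only the newest instance of the stuck task is executed.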
You can configure Execution Management for multiple scheduled tasks in batches under O&M Center > Scheduled Task > Task Management, as shown in the following figure.
The O&M Overview page displays the number of scheduled tasks, pipeline tasks, and data service tasks in the project, their running status, failure rate rankings, and execution status within specified periods, as shown in the following figure.
1. The API creation steps have been updated to Service Content (Data Query) and API Configuration.
2. You can control the number of records per request.
3. You can enable Pagination Query and specify corresponding parameters.
4. You can customize request parameters and the structure of returned data.
You can modify app path prefixes and app IDs in app paths under Data Service > App List, as shown in the following figure.
1. In the table on a specific folder page in Data Connection Management:
A Test Result column has been added.
The Number of Idle Connections column no longer displays the maximum number of idle connections.
2. The check logic during connection testing has been optimized.
FineDataLink supports connection to AnalyticDB for MySQL data sources, for which the following functions are supported:
Data reading/writing using scheduled tasks
Data writing using pipeline tasks
The Database Table Management function
The Data Service function
1. You can add fileName (whose value is the file name), filePath (the file path), and lastModifiedTime (the file modification time) fields in the File Input operator as output fields. Once added, these fields will be displayed during data previewing and participate in actual task execution. The following figure shows the effect.
2. Filename Extension in the File Input operator is no longer mandatory.
For details, see the "Function Point Limit" section of Registration Introduction.
For details, see Microsoft SQL Server Data Connection.