1. After you click the Data Source Permission Check button in real-time pipeline tasks:
A new check item, Active Transaction Check, is now performed if the source end is an Oracle database.
The original check item Supported Data Source Version has been renamed Data Source Version Check.
2. If you use an Oracle database as the source end of a real-time pipeline task, set Read Mode to LogMiner, and set Synchronization Type to Incremental Synchronization Only, the options of Incremental Sync Start Point now include Custom Time, as shown in the following figure.
For details, see Oracle Data Connection.
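Conceptually, a custom start time maps to an Oracle system change number (SCN) from which LogMiner begins reading. The sketch below builds such a lookup query using Oracle's built-in TIMESTAMP_TO_SCN function; the helper function and its time format are illustrative assumptions, not the product's actual implementation, and sending the query to the database is elided.

```python
# Sketch: translate a user-supplied custom start time into the SCN that
# LogMiner-based incremental synchronization could begin from.
# Uses Oracle's TIMESTAMP_TO_SCN; executing the query is out of scope here.

def scn_lookup_sql(start_time: str) -> str:
    """Build the query that maps a custom time to a start SCN."""
    return (
        "SELECT TIMESTAMP_TO_SCN("
        f"TO_TIMESTAMP('{start_time}', 'YYYY-MM-DD HH24:MI:SS')) AS start_scn "
        "FROM dual"
    )

sql = scn_lookup_sql("2024-01-01 00:00:00")
```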
3. The built-in logic has been optimized:
This version addresses the issue where RAC restarts could cause gaps in the SCN sequence, which led to failures in real-time pipeline tasks.
This version resolves an issue where the pipeline became unresponsive during execution, without generating specific error messages, when LogMiner labeled events in SQL_REDO as UNSUPPORTED. The system now explicitly reports these unsupported events.
This version enhances compatibility with scenarios involving partial event rollback in a transaction.
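The UNSUPPORTED-event fix above can be illustrated with a minimal sketch. The row structure is a hypothetical simplification of LogMiner's V$LOGMNR_CONTENTS output, and the reporting logic is an assumption for illustration, not the product's actual code.

```python
# Sketch: explicitly report LogMiner rows whose operation is labeled
# UNSUPPORTED instead of silently stalling the pipeline.

def process_logminer_rows(rows):
    """Apply supported redo statements; collect unsupported ones for reporting."""
    applied, unsupported = [], []
    for row in rows:
        if row["OPERATION"] == "UNSUPPORTED":
            # Surface an explicit, actionable message rather than hanging.
            unsupported.append(
                f"Unsupported redo at SCN {row['SCN']} on {row['TABLE_NAME']}"
            )
        else:
            applied.append(row["SQL_REDO"])
    return applied, unsupported


rows = [
    {"OPERATION": "INSERT", "SCN": 101, "TABLE_NAME": "T1",
     "SQL_REDO": "insert into T1 values (1)"},
    {"OPERATION": "UNSUPPORTED", "SCN": 102, "TABLE_NAME": "T2",
     "SQL_REDO": None},
]
applied, unsupported = process_logminer_rows(rows)
```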
The configuration page of real-time pipeline tasks has been optimized, as shown in the following figure.
When the data table changes or exceptions occur (such as log expiration or checkpoint failure), you can now click the Reset Task button to reset the real-time pipeline task and restore the pipeline to normal operation, as shown in the following figure.
Real-time pipeline tasks now allow you to add, delete, and edit tables, modify the synchronization type, and adjust task control settings, as shown in the following figure.
1. You can now start and pause real-time pipeline tasks in Execution Record, as shown in the following figure.
2. You can now reset and delete real-time pipeline tasks, and modify task control settings under O&M Center > Real-Time Pipeline > Task Management, as shown in the following figure.
The notification content is categorized into task-level exceptions, table-level exceptions, and others.
Only real-time pipeline tasks in Draft status are subject to edit lock restrictions. The edit lock does not apply to other task-level edits or to edits of synchronization objects. For details, see Edit Lock for Tasks Against Concurrent Editing.
When the target end of a real-time pipeline task is a ClickHouse database:
If you set Target Table to Auto Created Table, the physical primary key of the source table will be identified and set as the sorting key of the target table automatically. You can also specify the sorting key. The sorting key will be automatically assigned a NOT NULL constraint, which cannot be removed.
If you set Target Table to Existing Table, the sorting key of the existing table will be identified and displayed automatically, and will also be used automatically for primary key mapping in Write Method.
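For auto-created tables, the behavior described above amounts to generating ClickHouse DDL whose sorting-key columns carry a mandatory NOT NULL constraint. The sketch below builds an illustrative CREATE TABLE statement; the helper and the exact DDL shape are assumptions, not the statements the product actually emits.

```python
# Sketch: build a ClickHouse CREATE TABLE statement in which the sorting-key
# columns (taken from the source table's physical primary key, or specified
# manually) are forced to NOT NULL, while other columns stay Nullable.

def build_clickhouse_ddl(table, columns, sorting_key):
    """columns: {name: type}; sorting_key: ordered list of column names."""
    col_defs = []
    for name, ctype in columns.items():
        if name in sorting_key:
            # Sorting-key columns get a NOT NULL constraint that cannot be removed.
            col_defs.append(f"`{name}` {ctype} NOT NULL")
        else:
            col_defs.append(f"`{name}` Nullable({ctype})")
    return (
        f"CREATE TABLE {table} (\n  " + ",\n  ".join(col_defs) + "\n) "
        f"ENGINE = MergeTree ORDER BY ({', '.join(sorting_key)})"
    )


ddl = build_clickhouse_ddl(
    "ods_orders",
    {"order_id": "Int64", "amount": "Decimal(18, 2)"},
    sorting_key=["order_id"],
)
```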
Improved Button Display Logic of View Partition Key Setting
The display logic of the View Partition Key Setting button has been optimized for writing data to existing partitioned tables. For details, see Partition Table Creation and Data Reading/Writing.
If you set Target Table of a scheduled task to Auto Created Table and click Manual Table Creation, you can now edit the SQL statements directly in the pop-up window. The Edit button on the right, which was previously required, has been removed.
For details, see Synchronizing DDL Changes Using Scheduled Task.
1. When Configuration Method in Data Source is set to Table Selection, changes to primary keys, NOT NULL constraints, and remarks of the source table are now included in the DDL change detection scope.
2. A one-click apply function has been added. The field name is used as the unique identifier (based on the Map Fields with Same Name logic) to compare the structures of the source and target tables. If any inconsistencies are detected, you can synchronize the source table structure changes to the target table with one click.
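The comparison behind the one-click apply can be sketched as follows: columns are matched by field name (the Map Fields with Same Name logic), and any mismatch between the source and target definitions becomes a pending change. The schema representation and diff format here are hypothetical illustrations.

```python
# Sketch: compare source and target table structures using the field name as
# the unique identifier, and list the changes a one-click apply would push
# to the target table.

def diff_schemas(source, target):
    """source/target: {field_name: {"type": ..., "not_null": ..., "remark": ...}}"""
    changes = []
    for name, src_def in source.items():
        if name not in target:
            changes.append(("ADD", name, src_def))
        elif src_def != target[name]:
            changes.append(("MODIFY", name, src_def))
    for name in target:
        if name not in source:
            changes.append(("DROP", name, target[name]))
    return changes


source = {
    "id": {"type": "INT", "not_null": True, "remark": "primary key"},
    "name": {"type": "VARCHAR(64)", "not_null": False, "remark": ""},
}
target = {
    "id": {"type": "INT", "not_null": False, "remark": "primary key"},
}
changes = diff_schemas(source, target)
```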
The initial version of the Data Reception function required deploying Kafka and configuring the transmission queue, which increased the implementation effort. Starting from V4.2.11.2, the Data Reception function supports an in-memory queue, eliminating the need for Kafka deployment.
The in-memory queue is used when Message Queue Middleware is disabled. Decide whether to enable Message Queue Middleware based on your actual conditions.
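Conceptually, the in-memory queue replaces the Kafka topic with a bounded, process-local buffer between the receiving side and the writing side. A minimal sketch using Python's standard library follows; the queue size, message shape, and threading layout are assumptions for illustration, not the product's implementation.

```python
import queue
import threading

# Sketch: an in-memory transmission queue standing in for Kafka. Received
# records are buffered in a bounded queue.Queue and drained by a writer thread.

buffer = queue.Queue(maxsize=1000)  # bounded, so a full buffer applies backpressure
written = []

def writer():
    while True:
        record = buffer.get()
        if record is None:          # sentinel: stop draining
            break
        written.append(record)
        buffer.task_done()

t = threading.Thread(target=writer)
t.start()
for i in range(3):
    buffer.put({"id": i})           # blocks if the buffer is full
buffer.put(None)
t.join()
```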
The default time range filtering conditions for Ranking by Failure Rate and Ranking by Average Duration under O&M Center > O&M Overview > Scheduled Task have been changed to Last 7 Days, as shown in the following figure.
Data sources of the Cloud App System type cannot be copied. If multiple app data sources are required, create separate data connections instead, as shown in the following figure.