4.1.1
Supported adjusting the output level of service logs in FineDataLink V4.1.1 and later versions, and optimized the log content accordingly:
Added instance queuing information to the log.
Added an INFO level to enable more detailed log output, including parameter values, the executed script, API requests, and details about API-based paginated data fetching.
Clarified the level of each log record in the output log.
Added statistics for deleted data to the logs of all data-reading nodes.
4.2.9.1
Added the following information to the output logs:
Cluster nodes
Pre-startup preparation time
Information about SAP ERP Input and API Input, and the corresponding response time
Kettle invocation results
Script connection information
Version number
The SQL statements used for table selection
Diagnostic insights
After a scheduled task runs, execution logs will be generated to facilitate viewing of the task execution status, as shown in the following figure.
This document introduces the specific content displayed on the Log tab page.
If a scheduled task contains multiple flows and any one of them fails, the overall task status is marked as failed, even if the other flows succeed.
1. FineDataLink V4.1.1 and later versions support adjusting the service log level. The log content has been optimized accordingly:
Instance queuing information is added to the output log.
An INFO level is added, which enables the log to output details including parameter values, the executed script, API requests, and details about API-based paginated data fetching.
The log level setting is optimized by clarifying the level of each log record in the output log.
Statistics for the deleted data of all data-reading nodes are displayed in the log.
2. The following table describes the level of messages output in the log after you configure the log level (by configuring Log Level Setting in Task Attribute or Level Setting in Global Setting).
INFO
BASIC INFO
ERROR
WARN
Scheduled task logs are output in the following format.
yyyy-mm-dd hh:mm:ss Task status/Node status
[Log level] Log content
Overview: Successful Successful node count Failed Failed node count Terminated Terminated node count Skipped Skipped node count
The execution is completed. Total elapsed time: xxxs
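For readers who need to post-process exported logs, the following minimal Python sketch (not part of FineDataLink; the parsing logic is an assumption based on the format above) splits a log line into its timestamp, optional [Log level] tag, and content:

import re

# Minimal, hypothetical parser for lines such as
# "2023-08-31 21:26:56 [BASIC INFO] Successful [SQL Script 1] [16]".
LINE_PATTERN = re.compile(
    r"^(?P<time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\s+"   # yyyy-mm-dd hh:mm:ss
    r"(?:\[(?P<level>[A-Z ]+)\]\s+)?"                      # optional [Log level]
    r"(?P<content>.*)$"                                     # remaining log content
)

def parse_line(line):
    match = LINE_PATTERN.match(line.strip())
    if match is None:
        return None
    return match.group("time"), match.group("level"), match.group("content")

print(parse_line("2023-08-31 21:26:56 [BASIC INFO] Successful [SQL Script 1] [16]"))
# ('2023-08-31 21:26:56', 'BASIC INFO', 'Successful [SQL Script 1] [16]')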
The basic log displays the task/node execution status and is output regardless of the task log level setting.
Logs of all levels are embedded within the framework of basic logs. For instance, ERROR-level logs serve as supplementary details for logs corresponding to execution failure.
Instance creation
yyyy-mm-dd hh:min:ss Start creating the instance.
Instance creation result
yyyy-mm-dd hh:min:ss The instance is created successfully.
yyyy-mm-dd hh:min:ss Failed to create the instance.
The two are mutually exclusive, and only one can appear in the logs.
Instance queuing
yyyy-mm-dd hh:min:ss The instance is queuing.
Instance execution
yyyy-mm-dd hh:min:ss The instance starts to run.
Node execution
yyyy-mm-dd hh:min:ss Start executing [Node name]
yyyy-mm-dd hh:min:ss Skipped [Node name]
Node execution result
yyyy-mm-dd hh:min:ss Successful [Node name]
yyyy-mm-dd hh:min:ss Failed [Node name]
yyyy-mm-dd hh:min:ss Terminated [Node name]
Instance execution result
yyyy-mm-dd hh:min:ss The instance runs successfully.
yyyy-mm-dd hh:min:ss The instance fails to run.
yyyy-mm-dd hh:min:ss The instance running is interrupted.
The log output level cannot be set for the Loop Container node as a whole, but the log of each iteration follows the log level setting.
Start executing [Node name]
Execute n Time(s) (Only the last 5 loops are displayed.)
No.1 Execution:
yyyy-mm-dd hh:min:ss Description
Details
No.2 Execution:
......
No.n Execution:
The execution is completed. Total elapsed time: ns
2023-08-31 21:26:55 Start executing [Loop Container]
Execute 20 Time(s) (Only the last 5 loops are displayed.)
No.16 Execution:
2023-08-31 21:26:56 [BASIC INFO] Start executing [SQL Script 1] [16]
2023-08-31 21:26:56 [BASIC INFO] Successful [SQL Script 1] [16]
No.17 Execution:
2023-08-31 21:26:56 [BASIC INFO] Start executing [SQL Script 1] [17]
2023-08-31 21:26:56 [BASIC INFO] Successful [SQL Script 1] [17]
No.18 Execution:
2023-08-31 21:26:56 [BASIC INFO] Start executing [SQL Script 1] [18]
2023-08-31 21:26:56 [BASIC INFO] Successful [SQL Script 1] [18]
No.19 Execution:
2023-08-31 21:26:56 [BASIC INFO] Start executing [SQL Script 1] [19]
2023-08-31 21:26:56 [BASIC INFO] Successful [SQL Script 1] [19]
No.20 Execution:
2023-08-31 21:26:56 [BASIC INFO] Start executing [SQL Script 1] [20]
2023-08-31 21:26:56 [BASIC INFO] Successful [SQL Script 1] [20]
The execution is completed. Total elapsed time: 5s
The log output level cannot be set for the retried task (when Retry After Failure is enabled).
Failed [Node name]
Re-run (Only the last 5 retries are displayed.)
No.1 Retry
yyyy-mm-dd hh:min:ss [Log level] Description
No.n Retry
Wait for No.n+1 Retry...
Wait for No.1 Retry...
Wait for No.2 Retry...
yyyy-mm-dd hh:min:ss Start creating the instance. Node: Project name of the node
2024-04-23 15:51:13 Start creating the instance. Node: fanruan1
yyyy-mm-dd hh:min:ss Start executing [Node name] Cluster node: Project name of the node
2024-04-23 15:51:14 Start executing [SQL Script] Node: fanruan1
The instance running is interrupted.
[BASIC INFO] Task timed out, interrupting running.
2023-08-31 19:14:09 The instance running is interrupted.
[BASIC INFO]
- Start Time: yyyy-mm-dd hh:min:ss
- Real-Time Reading Speed: n B/s; n Row(s)/s
- Real-Time Writing Speed: n B/s; n Row(s)/s
- Cumulative Written Data: n Row(s)
- Cumulative Updated Data: n Row(s)
- Cumulative Deleted Data: n Row(s)
- Written Dirty Data: n Row(s)
Executing [Node name]...
2023-08-31 19:45:26 Start executing [Data Synchronization]
- Start Time: 2023-08-31 19:45:26
- Real-Time Reading Speed: 0B/s; 0 Row(s)/s
- Real-Time Writing Speed: 0B/s; 0 Row(s)/s
- Cumulative Written Data: 0 Row(s)
- Cumulative Updated Data: 0 Row(s)
- Cumulative Deleted Data: 0 Row(s)
- Written Dirty Data: 0 Row(s)
Executing [Data Synchronization]...
This content is only displayed while the node is running and is cleared once the node execution completes.
This log format applies to:
Data Synchronization
Data Transformation
Parameter Assignment
- Real-Time Download Speed: n B/s
- Real-Time Upload Speed: n B/s
- Number of Uploaded Files: n
Executing [File Transfer]...
2023-08-31 19:45:26 Start executing [File Transfer]
- Real-Time Download Speed: 0B/s
- Real-Time Upload Speed: 0B/s
- Number of Uploaded Files: 0
This log format applies to: File Transfer
- Returned Row Count: x
- Affected Row Count: y
- Total Time: n s
2024-04-23 16:36:53 Started executing [SQL Script]
- Returned Row Count: 4
- Affected Row Count: 4
- Total Time: 0.016s
2024-04-23 17:17:23 Successful [SQL Script]
1. This information is displayed only upon successful execution. (It is returned by the database. If no data is returned, you will be notified accordingly.)
2. Total Time refers to the total time from establishing a data connection to retrieving returned data, primarily the time taken by the database to execute the SQL statements.
3. This log format applies to: SQL Script
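As a purely illustrative sketch (not FineDataLink internals), the following hypothetical Python snippet shows the difference between the rows returned by a query and the rows affected by a statement, which is what Returned Row Count and Affected Row Count report:

import sqlite3

# Hypothetical demo table; any database would do.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE demo (id INTEGER, status TEXT)")
conn.executemany("INSERT INTO demo VALUES (?, ?)",
                 [(1, "a"), (2, "a"), (3, "b"), (4, "a")])

# Returned Row Count: rows fetched by a SELECT statement.
returned = conn.execute("SELECT * FROM demo").fetchall()
print("Returned Row Count:", len(returned))    # 4

# Affected Row Count: rows changed by an UPDATE/INSERT/DELETE statement.
cursor = conn.execute("UPDATE demo SET status = 'c' WHERE status = 'a'")
print("Affected Row Count:", cursor.rowcount)  # 3
conn.close()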
Successful [Node name]
- End Time: yyyy-mm-dd hh:min:ss
- Average Traffic: n B/s
- Average Writing Speed: n Row(s)/s
- Read Data: n Row(s)
- Written Data: n Row(s)
- Updated Data: n Row(s)
- Deleted Data: n Row(s)
- Time Consumption: n s
2023-08-31 20:03:33 Successful [Parameter Assignment]
- Start Time: 2023-08-31 20:03:32
- End Time: 2023-08-31 20:03:33
- Average Traffic: 871B/s
- Average Writing Speed: 1 row/s
- Read Data: 20 Row(s)
- Written Data: 1 Row(s)
- Updated Data: 0 Row(s)
- Deleted Data: 0 Row(s)
- Time Consumption: 1s
- Number of Downloaded Files: n
2023-08-31 20:03:33 Successful [File Transfer]
- Number of Downloaded Files: 20
- Number of Uploaded Files: 1
File Transfer
Skipped [Node name]
[BASIC INFO] Failed to meet the condition set on the node connector.
[BASIC INFO] Failed to meet the branch condition.
2023-08-31 22:07:48 Skipped [Virtual Node]
The following table describes the ERROR log in the output log when you set the log level to ERROR, INFO, or WARN.
Failed to create the instance.
[ERROR] Abnormal Node: [Node name]
Specific error information
....
Abnormal Node: [Node name]
2023-08-31 19:09:44 Failed to create the instance.
[ERROR] Abnormal Node: [Data Transformation]
[DB Table Output]:
The target table is empty.
The field mapping table is empty.
It applies to all nodes.
[ERROR] [Error object]: Error title - Description
2023-08-31 19:19:27 Failed [Data Synchronization]
[ERROR] [Data Source]: Data connection exception - DataBase[FRDemo] get column failed - [SQLITE_ERROR] SQL error or missing database (near "1": syntax error)
It is a general error log.
[ERROR] The dirty data limit is exceeded. The limit is [x]. But [y] dirty data record(s) are captured actually.
Error Location: Dirty data occurs during data writing to the target table [Table name], preventing data from being written properly. Check and rectify it.
Exception: Specific error information
Target Field To Be Checked:
{Field ID: 1; Field Name: a1; Data Type: int}
{Field ID: 2; Field Name: a2; Data Type: varchar}
Error Record:
{Number of Bytes: 1; Field ID.: 1; Field Value: value1; Data Type: string}
{Number of Bytes: 1; Field ID.: 2; Field Value: value2; Data Type: string}
[ERROR] The dirty data limit is exceeded. The limit is [20]. But [700] dirty data record(s) are captured actually.
Error Location: Dirty data occurs during data writing to the target table [SALE_BSEG], preventing data from being written properly. Check and rectify it.
Exception: ORA-01653: Table BLLODS_TEST.SALE_BSEG cannot extend by 8192 (in tablespace BI_ODS_TEST)
{Field ID: 295; Field Name: PENDAYS; Data Type: number}
{Field ID: 333; Field Name: CURRENT_TIMESTAMP; Data Type: timestamp}
{Number of Bytes: 3; Field ID.: 1; Field Value: 800; Data Type: string}
{Number of Bytes: 4; Field ID.: 2; Field Value: 1001; Data Type: string}
This content only appears when dirty data occurs.
Only the most recent dirty data is displayed. The details about the generated dirty data can be exported in Statistics or viewed in the fanruan.log file.
Error Loop and Parameter:
No.x Execution - Parameter [Parameter name]: Parameter value x
No.y Execution - Parameter [Parameter name]: Parameter value y
2023-08-31 21:26:55 Start executing [Loop Container]
Execute 20 Time(s) (Only the last 5 loops are displayed.)
No.16 Execution:
2023-08-31 21:26:56 [BASIC INFO] Start executing [SQL Script 1] [16]
2023-08-31 21:26:56 [BASIC INFO] Successful [SQL Script 1] [16]
No.17 Execution:
2023-08-31 21:26:56 [BASIC INFO] Start executing [SQL Script 1] [17]
2023-08-31 21:26:56 [BASIC INFO] Successful [SQL Script 1] [17]
No.18 Execution:
2023-08-31 21:26:56 [BASIC INFO] Start executing [SQL Script 1] [18]
2023-08-31 21:26:56 [BASIC INFO] Successful [SQL Script 1] [18]
No.19 Execution:
2023-08-31 21:26:56 [BASIC INFO] Start executing [SQL Script 1] [19]
2023-08-31 21:26:56 [BASIC INFO] Successful [SQL Script 1] [19]
No.20 Execution:
2023-08-31 21:26:56 [BASIC INFO] Start executing [SQL Script 1] [20]
2023-08-31 21:26:56 [BASIC INFO] Successful [SQL Script 1] [20]
[ERROR] [DB Table Output 2]: DB configuration error - The field mapping [ctr_id] on the writer side was not found in the source field
Error Loop and Parameter:
No.5 Execution - Parameter [para]: 5
No.6 Execution - Parameter [para]: 6
The execution is completed. Total elapsed time: 5s
2023-08-31 21:26:56 Failed [Loop Container]
Errors generated during the execution of Loop Container are summarized and displayed before the node execution result is output. This log format applies to: Loop Container
The following table describes the WARN log in the output log when you set the log level to WARN or INFO.
[WARN] [y] dirty data record(s) are captured.
- Average Writing Speed: 18 row/s
- Written Data: 18 Row(s)
- Written Dirty Data: 2 Row(s)
[WARN] [2] dirty data record(s) are captured.
/
[WARN] [Data Destination]: Detected that the structure of the source table [XXX] has changed.
- Added Field: COL1
- Deleted Field: COL2
- Modified Field: COL3; Old Type: varchar(100); New Type: varchar(200)
This log format applies to: Data Synchronization
[WARN] [Operator name]: Detected long-running queries. Check the statement that is being executed: TRUNCATE XXX
It is output only when execution time exceeds 30 seconds.
Data Synchronization (data destination)
Data Transformation (DB Table Output)
When the task log level is INFO, INFO-level logs are output.
[INFO] [Operator name]: --If no operator is included, the operator name will not be output.
Parameters Used:
- Parameter [Parameter name 1]: Parameter value 1
- Parameter [Parameter name 2]: Parameter value 2
The parameter names and values are output in the log.
Parameters can be used by all nodes except Virtual Node.
Execute the script:
The command statements executed actually
Output Information: -- No output will be generated if there is no content. Shell Script, Bat Script, and Python Script may produce output.
The output information after script execution
The content of the actually executed script is displayed.
The output information after script execution is displayed.
Nodes involving script execution include:
Data Synchronization (data fetching using SQL statements, table data filtering using statements, and transaction output)
Data Transformation (data fetching using SQL statements, table data filtering using statements, and transaction output)
Parameter Assignment (data fetching using SQL statements and table data filtering using statements)
SQL Script
Shell Script
Bat Script
Python Script
Functions and operators involving script execution include:
DB Table Input in Data Synchronization/Parameter Assignment/Data Transformation (data fetching using SQL statements and table data filtering using statements)
Spark SQL and Python in Data Transformation
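For illustration only, a hypothetical Python Script node body could look like the following; the statements themselves would be echoed under Execute the script, and anything printed to standard output would appear under Output Information:

# Hypothetical Python Script node content (illustrative only).
# Text written to standard output is what shows up as "Output Information".
row_count = 42
print("Processed %d rows" % row_count)
print("Script finished")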
Response Time: xxs
Request Information:
- Certificate Address: /xx/xx/test.CRT -- This will not be printed if no self-signed certificate is configured.
- Request:
POST http://www.fanruan.com/testapi/
- Headers:
Content-Type: application/json
- Body:
{"key":"Im a JSON"}
Server Response:
- HttpStatusCode: 200
The detailed content of the API request is output in the log.
Nodes related to API requests include:
Data Synchronization/Data Transformation/Parameter Assignment: API Input
Data Synchronization/Data Transformation: API Output
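For reference, the sample request logged above roughly corresponds to an HTTP call such as the following sketch (using the third-party requests library; the URL, headers, and body are taken from the sample log and are illustrative only):

import requests

# Illustrative reproduction of the request shown in the sample INFO log above.
response = requests.post(
    "http://www.fanruan.com/testapi/",
    headers={"Content-Type": "application/json"},
    json={"key": "Im a JSON"},
)
print("HttpStatusCode:", response.status_code)                 # e.g. 200
print("Response Time:", response.elapsed.total_seconds(), "s")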
[INFO] Call Result:
Job Number: 24
Error: 0
Read Data: 0 Row(s)
Write Data: 0 Row(s)
Input Data: 0 Row(s)
Output Data: 0 Row(s)
Updated Data: 0 Row(s)
Lines rejected: 0
Script exist status: 0
Result: true
Nodes involved: Kettle Invocation
[INFO]
[INFO] [Operator name]: --If no operator is included, the operator name will not be printed.
The detailed content of the SAP request is output in the log.
Related nodes include:
Parameter Assignment: SAP ERP Input
Job information (that is, diagnostic insights) is added to the logs and organized by node. Its content is limited to details that help locate the backend logs for specific nodes, which improves efficiency in troubleshooting scenarios.
Job[xxx]
Job[12]
Definition: Job[xxx] here does not refer to the job ID configured in a task, but to a unique identifier assigned to the node by the scheduling plan. As long as the project is not restarted, the same node uses the same identifier for every execution. Currently, this identifier is not recorded in the logs and is only displayed in the tooltip of the nodes on the canvas.