Error message: The server selected protocol version TLS10 is not accepted by client preferences [TLS12]
Issue Description
The driver is unable to establish a secure connection to the SQL Server database using Secure Sockets Layer (SSL) encryption.
Solution
1. Check for a driver version mismatch: Find the driver versions supported by your database version on the official database website, and confirm that the driver you are using is among them.
2. Check the JDK version: Try changing the JDK version or replacing the server.
3. Verify that a Java environment exists, and modify the files in the jre directory accordingly.
A preview error occurs when a field name contains parentheses. The error message reads "Data connection is abnormal - DataBase[sqlserver2016] get data failed! 'Field name' is not a recognized built-in function name."
Wrap the field name in square brackets ([]), for example, [Field name].
Configuring the data connection fails with the error message indicating that the driver is unable to establish a secure connection to the SQL Server database using Secure Sockets Layer (SSL) encryption.
Cause Analysis
This is caused by new cipher suites introduced in newer JDK versions.
Open the java.security file under FineDataLink installation directory\jre\lib\security and remove the 3DES_EDE_CBC, TLSv1, TLSv1.1, and TLSv1.2 entries (or comment out the line containing them). Save the file and restart FineDataLink. The connection can then be configured normally.
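For reference, the entries in question live in the jdk.tls.disabledAlgorithms property of java.security. A sketch of the edit is shown below; the exact entry list varies by JDK build, so treat the values as illustrative:

```properties
# java.security, before the edit (illustrative; your list may differ):
# jdk.tls.disabledAlgorithms=SSLv3, TLSv1, TLSv1.1, RC4, 3DES_EDE_CBC, ...
# After removing the entries that block the server's legacy protocol:
jdk.tls.disabledAlgorithms=SSLv3, RC4
```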
An error occurs when you configure a data connection to an RDS MySQL database.
Error message: com.fr.third.alibaba.druid.pool.GetConnectionTimeoutException: wait millis 10003, active 0, maxActive 50, creating 1, createElapseMillis 20014 at com.fr.third.alibaba.druid.pool.
The IP address is not included in the allowlist of Alibaba Cloud.
Add the IP address of the FineDataLink server to the allowlist of Alibaba Cloud.
A data connection was created with the driver org.gjt.mm.mysql.Driver.
However, the corresponding MySQL data connection is not displayed in the option list.
This MySQL driver is not yet supported.
Currently, the supported MySQL drivers include com.mysql.jdbc.Driver and com.mysql.cj.jdbc.Driver.
Data is output to a MySQL database with Write Method set to Write Data into Target Table After Emptying It. However, the target table is not cleared during each task execution.
The database user used to configure the MySQL data connection does not have the DROP privilege. Grant the user the DROP privilege.
After the TLS protocol of the MySQL database is upgraded, a data connection error occurs, saying "Caused by: javax.net.ssl.SSLHandshakeException: Received fatal alert: handshake_failure." The SSL handshake fails during connection.
Append ?useSSL=false&allowPublicKeyRetrieval=true to the MySQL data connection URL.
An error occurs during connection, saying "Caused by: javax.net.ssl.SSLException: Received fatal alert: protocol_version."
Append ?useSSL=false&allowPublicKeyRetrieval=true to the data connection URL.
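When appending these parameters, use ? for the first parameter and & for subsequent ones. A minimal Python sketch of this rule (the host and database below are placeholders, not values from an actual connection):

```python
def append_jdbc_params(url: str, params: str) -> str:
    """Append query parameters to a JDBC URL, using '?' for the first
    parameter and '&' when a query string is already present."""
    separator = "&" if "?" in url else "?"
    return url + separator + params

# Placeholder host/database; substitute your own connection details.
base = "jdbc:mysql://192.168.1.10:3306/testdb"
print(append_jdbc_params(base, "useSSL=false&allowPublicKeyRetrieval=true"))
# jdbc:mysql://192.168.1.10:3306/testdb?useSSL=false&allowPublicKeyRetrieval=true
```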
An error occurs during connection, saying "ORA-12505: TNS:listener does not currently know of SID given in connect descriptor."
The provided SID is incorrect, most likely because the database name and SID are being confused.
You can write the database URL in two ways:
Use a colon to specify the SID service, meaning the SID is orcl.
Example: database.url=jdbc:oracle:thin:@171.xxx.96.xx:xxxx:orcl
Use a slash to specify the service name, meaning the service name is orcl.
Example: database.url=jdbc:oracle:thin:@171.xxx.96.xx:xxxx/orcl
Therefore, the error occurs because the service name was mistakenly used as the SID. Replace the colon (:) before orcl with a slash (/) to resolve the issue.
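Since the fix is simply replacing the last colon with a slash, it can be sketched as a small helper (the address below is a placeholder):

```python
def sid_url_to_service_url(url: str) -> str:
    """Rewrite an Oracle thin-driver URL from SID form (host:port:name)
    to service-name form (host:port/name) by replacing the last colon."""
    head, _, name = url.rpartition(":")
    return head + "/" + name

# Placeholder host and port:
print(sid_url_to_service_url("jdbc:oracle:thin:@171.0.96.1:1521:orcl"))
# jdbc:oracle:thin:@171.0.96.1:1521/orcl
```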
No data is displayed in Data Preview, and the ORA-00942 error is recorded in the log, indicating that the table or view does not exist.
The SQL query does not account for case sensitivity in schema and table names, which are treated as case-sensitive in Oracle databases.
Modify the case of the schema and table names to match the actual names in the database.
When uploading gpfdist files, first upload the compressed package to the Linux server and then decompress it. For details, see Greenplum Data Connection.
The SQL query succeeds in the Greenplum database but fails in FineDataLink.
Include the schema name in the SQL statement.
Both the source and target databases are Greenplum. Data synchronization hangs after 300 million records are extracted. Both read and write row counts drop to 0.
The read operation stalls due to network fluctuations.
Add the socketTimeout parameter to the data connection URL of the source end Greenplum database, for example, jdbc:postgresql://hostname:port/database?socketTimeout=1800.
The target end is a Greenplum database. The task fails with the error "ERROR: http response code 404 from gpfdist (gpfdist://192.168.60.197:15500/fdda13be3f2f01e7_BLANK_CHANCHU.csv): HTTP/1.0 404 file not found (seg33 slice1 192.168.60.177:40001 pid=27049)."
The old gpfdist process is not terminated during the FineDataLink restart, leaving it occupying the port and preventing new tasks from being executed.
Run kill -9 to kill the old gpfdist process and rerun the scheduled task to generate a new gpfdist service process.
The target end is a Greenplum database. The task fails with the error "error when connecting to gpfdist http://10.145.1.74:15500/491b2d5143c97c36_ods_oa_workflow_currentoperator.csv, quit after 11 tries (seg10 slice1 10.145.1.79:40002 pid=46296)."
A Greenplum node is abnormal and cannot access port 15500 on the FineDataLink server.
Check the Greenplum nodes and verify whether the database server can establish a telnet connection to the FineDataLink IP address and port.
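The telnet check can also be scripted. The following sketch tests TCP reachability from Python (the address and port are placeholders for the FineDataLink server IP and gpfdist port):

```python
import socket

def port_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds,
    equivalent to a quick 'telnet host port' reachability test."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder values; substitute the FineDataLink server IP and gpfdist port:
# port_reachable("192.168.60.197", 15500)
```

Run it from each Greenplum node; a False result points to a firewall or routing problem between that node and the FineDataLink server.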
Error message: java.sql.SQLException: Connection is in auto-commit mode
This is caused by driver version issues.
For Presto 0.273.2, use a driver of version 0.169 or later.
The Presto data connection is successful under System Management > Data Connection > Data Connection Management, but fails in the Data Synchronization node.
This is caused by a product logic issue.
Under System Management > Data Connection > Data Connection Management, if the system cannot obtain the database schema, it will leave the schema field empty and consider the connection successful. However, in the Data Synchronization node, failure to obtain the schema results in an error.
The table is locked by another transaction, preventing deletion operations.
Run select pid, query from pg_stat_activity to check all current processes. The result shows that the second PID is associated with the table that cannot be deleted.
Run the following command to terminate the corresponding PID:
select pg_terminate_backend(pid), query from pg_stat_activity where query ~* 'order_table' and pid <> pg_backend_pid()
The table can then be deleted successfully.
Error message: Handler dispatch failed: nested exception is java.lang.NoClassDefFoundError: com/sap/conn/jco/JCoException
Restart the FineDataLink server after placing the driver.
In the Data Synchronization node, an error occurs during data output, saying "Exception when job run com.fr.dp.exception.FineDPException: An error was encountered during data transfer - Connection reset - Connection reset."
The FE node address in the Doris data connection configuration is entered incorrectly. The correct input should be the IP address and the HTTP port of the FE node.
A scheduled task with Doris as the target end fails with the error "Within the Doris transaction, only INSERT INTO SELECT, UPDATE, and DELETE statements are supported. -errCode = 2, detailMessage = This is in a transaction, only insert, update, delete, commit, rollback is acceptable."
Before enabling Transaction Control, add useLocalSessionState=true to the Doris data connection URL.
For example, jdbc:mysql://192.168.5.199:9099/mysql?useLocalSessionState=true.
Testing the SSH data connection fails with the error "SSH connection failed - timeout."
The FineDataLink server cannot establish a telnet connection to port 22 of the target server. Check the firewall rules, IP address blocklists/allowlists, and other network restrictions.
The command ssh root@IP address succeeds when you configure the SSH data connection to the local host. However, an authentication failure occurs when you use localhost in the connection URL in FineDataLink, and a timeout occurs when you use the IP address.
This issue may be caused by the following reasons.
1. The user account (for example, root on CentOS 6) is not allowed to log in remotely.
2. The GSSAPIAuthentication value in the server configuration file (/etc/ssh/sshd_config) is set to yes.
Troubleshooting steps:
1. Open the sshd_config file (/etc/ssh/sshd_config), uncomment the line PermitRootLogin yes to allow remote logins for root.
2. Set the GSSAPIAuthentication value in the sshd_config file to no, or add session.setConfig("userauth.gssapi-with-mic", "no") and session.setConfig("StrictHostKeyChecking", "no") in the code.
3. To speed up SSH logins, change the value of UseDNS from yes to no in the sshd_config file.
4. Restart the SSH service. The connection speed will be significantly improved.
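The edits above can be sketched as the following sshd_config fragment (illustrative values; keep whatever hardening policy your environment requires):

```
# /etc/ssh/sshd_config
PermitRootLogin yes       # step 1: allow remote root login
GSSAPIAuthentication no   # step 2: skip GSSAPI negotiation
UseDNS no                 # step 3: skip reverse DNS lookup at login
```

After saving, restart the SSH service (for example, systemctl restart sshd on systemd-based distributions) for the changes to take effect.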
Configuring an SSH data connection fails with the error "SSH Connection Failed - verify: false."
The built-in JSch version in the platform fully supports only OpenSSH 6.x. For later OpenSSH versions, additional encryption methods need to be configured.
A project that is deployed in a containerized manner encounters an Impala connection failure with the error "Unable to obtain Principal Name for authentication."
Open the krb5.conf configuration file, delete the renew_lifetime parameter, and then reconnect FineDataLink to the database.
Error message: org.apache.hadoop.security.AccessControlException: Permission denied: user=root, access=WRITE, inode="/":hdfs:supergroup:drwxr-xr-x
Solution One
Open the catalina.sh file on the FineDataLink server and add the variable declaration export HADOOP_USER_NAME=Username. This specifies that the user connecting to HDFS is Username.
Solution Two (Not Recommended)
If no user is specified, the root account is used for connection to HDFS by default. This solution requires disabling HDFS user authentication, which may introduce security risks, as it allows all users to access HDFS and execute commands.
The procedures are as follows:
1. Locate the advanced configuration snippet (safety valve) of the HDFS service in hdfs-site.xml.
2. Set the value of dfs.permissions.enabled to false, save the change, and restart HDFS.
Hive (HDFS) data connection fails with the error "hdfs write file error.Caused by: java.lang.UnsupportedOperationException" in the log.
The value format of HDFS Setting in the data connection configuration should be hdfs://IP address:Port number, but you may have incorrectly entered http://IP address:Port number.
After the execution of a scheduled task that synchronizes data from a Kingbase database V8R6 to a Hadoop Hive database 2.1, the row count in the Hadoop Hive database (90496) is more than that in the source database (87719). Additionally, the Kingbase table has a non-null constraint on the id column, but null values appear in the corresponding column in the Hadoop Hive table.
The target table in the Hadoop Hive database is stored in the TEXT format. The data from the Kingbase database may contain delimiter characters, causing it to be split into new rows when written to HDFS.
Manually create a table using the following SQL statement that specifies the storage format as ORC:
CREATE TABLE your_table (
    id INT,
    name STRING
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.orc.OrcSerde'
STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat';
Hadoop Hive is used as the target end with Write Method set to Write Data into Target Table After Emptying It. An error occurs, saying "FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Exception while processing."
The configured HDFS user does not have sufficient privileges.
Use an HDFS user with the appropriate privilege.
The Kerberos authentication is enabled for a Hadoop Hive data connection. The test connection fails with the error "No common protection layer between client and server."
The SASL security layer is incorrectly configured or mismatched. Both the client and the server must support the same SASL mechanism, and the sasl.qop parameter in the configuration file must be set correctly.
Add sasl.qop=auth to the data connection URL.
Example URL:
jdbc:hive2://192.168.6.140:19023/default;principal=hive/primary.cxfund.com@CGJH.COM;auth=KERBEROS;sasl.qop=auth
Hadoop Hive is used as the target end. An error occurs when retrieving existing tables.
Check the Hive configuration item hive.support.quoted.identifiers. If it is set to none, backtick-quoted identifiers are not supported, which causes the error. Set it to column to enable backtick quoting.
You do not find the Server Local Directory option when creating a data connection.
The data connection of the Server Local Directory type can only be created by the super administrator. For details, see Data Connection to a Local Server Directory.
Error message: ClickHouse response without column names
Append compress=false to the connection URL.
A ClickHouse table has a primary key, but it cannot be retrieved in the Data Synchronization node.
Primary keys in ClickHouse databases differ from conventional primary keys.
In ClickHouse databases, a primary key functions more like a sorting key, providing a primary index but not enforcing uniqueness.
In FineDataLink, the primary key is used to synchronize UPDATE/DELETE operations. Since ClickHouse primary keys are not unique constraints, selecting a primary key may be ineffective.
The automatic creation of target tables in ClickHouse databases fails.
The ClickHouse database uses the MySQL database engine, which does not support table creation.
Switch it to the Atomic engine, which supports automatic table creation during write operations.
You use the Data Synchronization node to write data to a ClickHouse database with Strategy for Primary Key Conflict set to Update Data. Only tens of rows are written per second.
ClickHouse is an MPP database, which is fast in inserts but slow in updates.
Use a Data Comparison node in FineDataLink to check the amount of data to be inserted before executing synchronization.
You use an SQL statement to join tables for data synchronization. Field names in Data Preview appear prefixed with the table name (in the Table name.Field name format).
Unlike MySQL databases, if you perform joins on ClickHouse table fields and the second table has a field present in the first table, the field name in the result set will be prefixed with the table name, even if the SELECT clause refers to it unambiguously. For example, SELECT t2.bit1 FROM "fdl"."all_type_03" t1 JOIN "fdl"."all_type_03" t2 ON t1.bit1 = t2.bit1 results in a column named t2.bit1.
Use AS to rename the field and remove the table name prefix.
During the connection to TDengine, the error "no taos in java.library.path" occurs.
The native library taos is not found. Add the corresponding dynamic link library to the system where FineDataLink is deployed.
During the connection to FineBI’s Public Data, the error "Failed to get token from FineBI" occurs.
Single sign-on (SSO) causes the login API requests to be redirected with the username parameter removed.
Enter the following URL in a browser: http://IP address:Port number/webroot/decision/login/cross/domain?fine_username=Username&fine_password=Password&validity=-1&callback=. If FineBI can be accessed successfully, the relevant token value will be returned.
If the access fails, disable the single sign-on plugin in the FineBI project and then perform reconnection.
A scheduled task writing to a StarRocks database fails with the error "too many filtered rows."
The source field corresponding to a non-null primary key field in the target table is not a primary key and allows null values.
Delete the target table in the database and set the target table to Auto Created Table in Data Destination and Mapping.
A scheduled task with SelectDB as the target end fails with the error "Within the SelectDB transaction, only INSERT INTO SELECT, UPDATE, and DELETE statements are supported.-errCode = 2, detailMessage = This is in a transaction, only insert, update, delete, commit, rollback is acceptable."
Before enabling Transaction Control, add useLocalSessionState=true to the SelectDB data connection URL.
The Transwarp ArgoDB data connection is successful, but HDFS authentication fails.
The time difference between the KDC server and the FineDataLink server exceeds five minutes. Synchronize the clocks of the two servers (for example, via NTP) and retry.
During project startup, if the request to the application data source server takes more than five seconds, FineDataLink will actively interrupt the connection and fail to retrieve the application data source. An INFO-level log message will be printed:
INFO [standard] Fetched FineApp data sources from cloud cost 5019 ms.
Manually refresh the connection through /webroot/decision/v10/config/connection/fineapp/refresh.
Content on the data connection page is displayed abnormally, with issues such as misaligned text.
The browser version is too old. Upgrade to the latest version; recent versions of Google Chrome or Microsoft Edge are recommended.
The data connection test fails with the error "SSH data connection error-timeout."
This error occurs due to network connectivity issues. The FineDataLink server cannot establish a telnet connection to the port on the database server. Check the firewall rules, IP address blocklists/allowlists, and other network restrictions.
The data connection test succeeds, but a data connection creation failure occurs during scheduled task execution.
The data connection times out.
Increase the maximum wait time in the data connection configuration. For details, see Connection Pool Setting.
The database user used to configure the data connection may lack table creation privileges.
Check whether the user is assigned the necessary privileges on the relevant tables.
Error message: -XXXX get data failed
No database is specified in the SQL statement.
Select the appropriate database before executing the query.
On the Data Destination and Mapping tab page in a Data Synchronization node, after you select a data connection and set the target table to Existing Table, no existing tables are displayed.
The data connection name contains brackets ([]). Remove them.
A TiDB data connection error occurs, saying "Communications link failure.
The last packet successfully received from the server was 1,165,470 milliseconds ago. The last packet sent successfully to the server was 1,165,470 milliseconds ago."
Maximum Wait Time in the data connection configuration is set to 0 milliseconds, under the mistaken assumption that 0 means never timeout.
Set Maximum Wait Time to an appropriate positive value.
The connection test fails with the error "JDBC column type does not match - No reader function mapping for java.sql.Types code: 0."
The JDBC driver does not match the database version. Download the correct driver from the help document or the official database website and upload it to FineDataLink.