Transmission Queue Configuration

  • Last update: March 27, 2025
  • Overview

    Version Description

    FineDataLink Version | Functional Change
    4.0.5                | /
    4.1.3                | The Data Pipeline task writer actively checks whether Kafka is in an abnormal state. If one is detected, a warning is logged and the task is terminated.
    4.1.13.2             | Added a Kerberos authentication method.

    Application Scenario

    During real-time synchronization, data read from the source database is temporarily stored in the transmission queue so that it can be written to the target database efficiently.

    Therefore, you need to configure the middleware for temporary data storage before proceeding to Pipeline Task Configuration.

    Function Description

    FineDataLink supports the use of Kafka as the middleware for data synchronization, enabling the following capabilities:

    • The reading and writing ends are separated to ensure that the two ends do not block each other during continuous incremental synchronization.

    • After a short downtime, data that has been read does not need to be read again.

    • Dirty data that cannot be written to the target database properly can be temporarily stored.

    These capabilities enable more effective real-time data synchronization.

    Usage Restriction

    The current FineDataLink version uses the open-source Kafka streaming platform by default.

    Note: Only the super admin of FineDataLink can configure the Transmission Queue.

    Prerequisite

    Placing the driver

    Note: The driver is pre-integrated in 4.0.6 and later versions. You can skip this step if your FineDataLink version is 4.0.6 or later.

    Download the driver package kafka.zip.

    Before use, extract the driver files from the package and place them in the %FineDataLink%\webapps\webroot\WEB-INF\lib directory of your FineDataLink installation.
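On a Linux host, the driver placement might look like the following sketch. The install path is illustrative; substitute your actual FineDataLink root for %FineDataLink%.

```shell
# Hypothetical install root -- replace with your actual FineDataLink directory.
FDL_HOME=/opt/FineDataLink
LIB_DIR="$FDL_HOME/webapps/webroot/WEB-INF/lib"

# Extract the driver files from the downloaded package into the lib directory.
unzip -o kafka.zip -d "$LIB_DIR"

# Restart FineDataLink afterwards so the new driver files are loaded.
```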

    Adding Configuration

    • If Kafka and FineDataLink are deployed on the same server, you can directly configure the Transmission Queue as described in the section "Procedure."

    • If Kafka and FineDataLink are deployed on different servers, you need to configure Kafka separately to enable cross-server access.

    If you only need internal network access to Kafka, or need external access and the machine has a public network interface, open the server.properties file in the /config/kraft directory under your Kafka installation path and set the server's IP address and port in the listeners parameter, as shown in the following code.

    Note: You are advised to configure internal network ports.
    listeners=PLAINTEXT://IP address:9092

    If you need to access Kafka from an external network but the machine has no public network interface, open the server.properties file in the /config/kraft directory under your Kafka installation path and set both the listeners and advertised.listeners parameters to the server's IP address and port, as shown in the following code.

    listeners=PLAINTEXT://IP address:9092
    advertised.listeners=PLAINTEXT://IP address:9092
    Note: Kafka uses 9092 as the default port. Replace IP address in the configuration above with your Kafka server's actual IP address.
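For example, on a machine whose internal address is 192.168.0.11 but which is reached from outside through the public address 203.0.113.10 (both addresses are illustrative), the two parameters could be set as follows. The broker binds to the listeners address, while clients are told to connect to the advertised.listeners address.

    listeners=PLAINTEXT://192.168.0.11:9092
    advertised.listeners=PLAINTEXT://203.0.113.10:9092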

    1.png

    Restart Kafka for the configuration to take effect: shut it down, then start it again. For details, see O&M Commands.
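A restart in KRaft mode could look like the following sketch, assuming $KAFKA_HOME points at your Kafka installation path; see O&M Commands for the authoritative steps.

```shell
cd "$KAFKA_HOME"

# Stop the running broker.
bin/kafka-server-stop.sh

# Start it again as a background daemon with the edited KRaft configuration.
bin/kafka-server-start.sh -daemon config/kraft/server.properties
```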

    Procedure

    Go to the FineDataLink page and click the icon in Data Pipeline.

     2.png

    Enter the IP address and port (default: 9092) of the Kafka deployment, set the temporary data storage time, and click Test Connection, as shown in the following figure.

    Note:

    1. The temporary storage time for Kafka data cannot exceed 90 days. Once the storage time is exceeded, data is cleaned up on a first-in, first-out basis.

    2. FineDataLink supports both standalone and clustered Kafka deployments. When entering multiple IP addresses/hostnames and ports, separate each address:port pair with a comma.

    3. If the source of the Data Pipeline is Kafka and both the data connection and the Transmission Queue require Kerberos authentication, you need to configure Kerberos authentication for both. For details about Kerberos authentication, see Kafka Data Connection.

    3.png
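Before pasting a clustered address list into the configuration, you can sanity-check that every entry is a host:port pair. The addresses below are illustrative, and the regular expression is a rough format check, not a connectivity test.

```shell
# Illustrative cluster address list in the comma-separated form FineDataLink expects.
ADDRS="192.168.0.11:9092,192.168.0.12:9092,192.168.0.13:9092"

# Count entries that are NOT of the form host:port.
BAD=$(echo "$ADDRS" | tr ',' '\n' | grep -Evc '^[^:]+:[0-9]+$' || true)

if [ "$BAD" -eq 0 ]; then
  echo "format ok"
else
  echo "$BAD malformed entries"
fi
```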

    If the connection is successful, click Save to complete the configuration, as shown in the following figure.

     4.png

    Subsequent Operation

    After configuring the Transmission Queue, you can configure Data Pipeline tasks. For details, see Pipeline Task Configuration.

    Notes

    Modifying Kafka Configuration

    Modifying the Kafka configuration may result in the loss of data temporarily stored in the transmission queue, so modify the configuration with caution, as shown in the following figure.

     5.png

    Kafka Transmission Queue Connection Error

    Internal issues in the Kafka Transmission Queue, as well as exceptions caused by users manually adjusting it, may lead to connection errors.

    The Data Pipeline task writer actively checks whether Kafka is in an abnormal state. If one is detected, a warning is logged and the task is terminated.

    6.png
