FineDataLink Glossary

  • Last update: September 02, 2025
  • Overview

    This document explains the terms specific to FineDataLink to help you use the product.

    Function Modules

    The function modules in FineDataLink include Data Development, Data Pipeline, Data Service, Task O&M, and others, meeting a series of needs such as data synchronization, processing, and cleaning.

    Data Development

    Positioning: It is used for timed and real-time data synchronization, as well as data computation and processing.

    Function Description: You can develop and orchestrate tasks with SQL statements and visual operations. For details about real-time tasks, see Overview of Real-Time Task.

    Data Pipeline

    Positioning: It is used for real-time direct synchronization of data from the source table to the target table without any computation or processing.

    Function Description: It can implement high-performance real-time data synchronization in scenarios with large data volumes or standard table structures.

    Data Service

    Positioning: It is used to publish APIs, achieving cross-domain data transfer.

    Function Description: You can publish processed data as standardized APIs via FineDataLink for cross-system consumption.

    Task O&M (Scheduled Task O&M, Pipeline Task O&M, and Data Service O&M)

    It enables unified management and execution monitoring of tasks by providing a task overview.

    Data Center (Database Table Management and Lineage Analysis)

    1. Database Table Management: For databases supporting SQL statements, you can write SQL statements to query and modify table data. You can view data and structures of tables, modify table names and descriptions, empty, delete, and copy tables, and so on.

    2. Lineage Analysis: You can view the lineage relationships between tables used in data development tasks, pipeline tasks, and data services.

    System Management

    For details, see System Management Introduction.

    You can manage the FineDataLink system, including users, permissions, appearance, data connections, and others.

    Data Connection

    For details, see Data Connection Overview.

    Before using FineDataLink to process and synchronize data, you need to define the data connection so that you can easily determine the data source and destination by selecting the database, the data connection, and the data table sequentially during data processing.

    Data Development - Scheduled Task

    Production Mode and Development Mode

    FineDataLink provides Development Mode and Production Mode for scheduled tasks, with code isolated between the two environments, as shown in the following figure.

    For details, see Development Mode and Production Mode.

    Development Mode:

    Development Mode serves as a test environment for task design and editing, where all modifications remain isolated from tasks in Production Mode. Tasks developed in this mode can be published to Production Mode.

    Production Mode:

    You can publish production-ready tasks in Development Mode to create view-only counterparts in Production Mode, based on which you can configure scheduling plans for conditional execution. You can filter tasks by publication status on the Task Management page in O&M Center.

    Folder

    A folder can contain multiple scheduled tasks or real-time tasks, as shown in the following figure.

    Scheduled Task and Real-Time Task

    The Data Development module supports the development of two types of tasks:

    Scheduled Task

    The source and target ends of scheduled tasks support more than 30 types of data sources. For details, see Data Sources Supported by FineDataLink.

    Scheduled Task provides various nodes and operators for you to extract, transform, and load data on visual pages, facilitating the construction of offline data warehouses. You can configure scheduling plans for scheduled tasks for automatic execution, thus achieving efficient and stable data production.

    Real-Time Task

    Real-Time Task enables real-time data delivery from Point A to Point B. It supports data processing, such as parsing data in real-time data warehouses, during delivery, providing downstream businesses with available and accurate data to meet business requirements.

    For details, see Overview of Real-Time Task.

    Node

    In FineDataLink documentation, the components within a Data Transformation node are termed operators, while Data Transformation itself and the components outside it are termed nodes.

    A node is a basic unit of a scheduled task. Multiple nodes form an execution process after being connected by lines and further form a complete scheduled task.

    Nodes in a task run in turn according to the dependencies between nodes.

    Data Flow

    ETL processing is carried out in the Data Transformation node. The encapsulated visual functions enable efficient data cleaning, processing, and loading.

    Positioning: It refers to the data flow between the input widgets and the output widgets, focusing on the processing of each row of records and each column of data. There are various operators available for inputting, outputting, and transforming data.

    Functional Boundary: A data flow (a Data Transformation node) only provides the following three types of operators and does not contain combinatorial or process-type operators:

    • An example of input operators: DB Table Input

    • An example of processing operators: Data Association

    • An example of output operators: DB Table Output

    Terms involved in the Data Transformation node are explained below, grouped by operator classification.

    Input

    • DB Table Input: It is used to read data from database tables.

    • API Input: It is used to read data through APIs.

    • Dataset Input: It is used to read data from server datasets or self-service datasets.

    • Jiandaoyun Input: It is used to back up, calculate, analyze, and display data from Jiandaoyun. You can obtain data from specified Jiandaoyun forms.

    • MongoDB Input: It is used to process data from MongoDB databases.

    • SAP ERP Input: It is used to fetch data by calling functions that have been developed in the SAP system through RFC interfaces.

    • File Input: It is used to extract data by reading structured files in specified sources and paths.
    Data Output

    • DB Table Output: It is used to output data to database tables.

    • DB Output (Transaction): It supports rollback on failure. When you write data to a database using this operator, the target table will be unaffected if the scheduled task fails.

    • Comparison-Based Deletion: It is used to synchronize data deletions from source tables to target tables. You can identify data rows that exist in the target table but are absent in the input source by comparing field values and then perform deletion operations. The supported deletion modes include Physical Deletion (which actually deletes data) and Logical Deletion (which does not delete data and only adds deletion identifiers).

    • Parameter Output: It is used to output the obtained data as parameter values. You can create parameters and output data as parameter values for use by downstream nodes in a task.

    • API Output: It is used to output data to APIs.

    • Jiandaoyun Output: It is used to output data to Jiandaoyun forms. You can upload data to specified Jiandaoyun forms.

    • File Output: It is used to output data as files. You can output data to structured files in specified destinations and paths.

    • MongoDB Output: It is used to output data to MongoDB databases.

    • Dataset Output: It is used to create ETL result tables for Public Data of remote Fine+ projects and write data into them. This operator automatically creates ETL result tables in Public Data of local or remote projects and supports CUD (Create, Update, and Delete) operations on the generated tables.
    Connection

    • Data Association: It is used to join multiple input sources and output the join results. It supports cross-database and cross-source joins. The join methods (Left Join, Right Join, Inner Join, and Full Outer Join) are consistent with how database tables are joined. You can get the join results by defining the join fields and conditions. It requires two or more input sources and only one output source.

    • Data Comparison: It is used to compare data from two input sources to get new, deleted, identical, and different data. Procedure: select the two tables to be compared, configure the logical primary key, configure the comparison fields, and set the identification relationship. A minimal sketch of this comparison logic is given after this classification group.

    • Union All: You can concatenate multiple tables row-wise and output a combined table.
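    The following is a minimal sketch of the comparison logic the Data Comparison operator describes: rows are matched by a logical primary key and classified as added, removed, changed, or identical. The function, table data, and field names are hypothetical:

    def compare(source_rows, target_rows, key_field, compare_fields):
        # Index both inputs by the logical primary key.
        source = {row[key_field]: row for row in source_rows}
        target = {row[key_field]: row for row in target_rows}

        added = [source[k] for k in source.keys() - target.keys()]
        removed = [target[k] for k in target.keys() - source.keys()]
        changed, identical = [], []
        for k in source.keys() & target.keys():
            # A row counts as changed if any comparison field differs.
            if any(source[k][f] != target[k][f] for f in compare_fields):
                changed.append(source[k])
            else:
                identical.append(source[k])
        return added, removed, changed, identical

    src = [{"id": 1, "name": "A"}, {"id": 2, "name": "B2"}]
    tgt = [{"id": 2, "name": "B"}, {"id": 3, "name": "C"}]
    print(compare(src, tgt, "id", ["name"]))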
    Transformation

    • Field Setting: It is used to adjust field names and types. It provides the following functions: Set Columns (select and delete fields) and Modify Columns (modify field names and types).

    • Column to Row: It can change table structures between horizontal and vertical layouts to meet the requirements of conversion between one-dimensional tables and two-dimensional tables. You can convert columns in the input data table to rows. Column to Row (also known as unpivot) converts one-row multi-column data into multi-row one-column data. After unpivoting, source column names are transposed into values within a designated attribute column, enabling traceability to their original data context.

    • Row to Column: It can convert rows in the input data table to columns. Row to Column (also known as pivot) converts multi-row one-column data into one-row multi-column data. After pivoting, distinct categorical values from a source column become the new column headers, while corresponding values across multiple rows are aggregated into a single row per category. A pandas-based sketch of both conversions is given after this classification group.

    • JSON Parsing: It is used to parse JSON data and output data in the row-column format. You can obtain JSON data output by the upstream node, parse it into the required row-column format according to JSONPath, and output the data to the downstream node.

    • XML Parsing: It is used to parse the input XML data into row-column format data according to the specified parsing strategy.

    • JSON Generation: It is used to generate JSON objects based on selected fields. You can select fields to convert table data into multiple JSON objects, where nesting is supported.

    • New Calculation Column: It is used to generate new columns through calculation. You can perform formula calculation or logical mapping of constants, parameters, and other fields, and place the results into a new column for subsequent operations or output.

    • Data Filtering: It is used to filter data to get the required records.

    • Group Summary: It is used for aggregation calculation based on specified dimensions. You can group data based on certain criteria and perform summary calculations on the grouped data.

    • Field-to-Row Splitting: It is used to split values of a field by delimiter, where the split result forms a new column with values.

    • Field-to-Column Splitting: It is used to split values of a field by delimiter or character length, where the split result forms multiple new columns with values.

    Laboratory

    • Spark SQL: It can improve scenario coverage by providing flexible Spark SQL capabilities. With a built-in Spark computation engine, the Spark SQL operator enables you to obtain upstream output data, process it using Spark SQL queries, and deliver results to the downstream node.

    • Python: It is used to call Python scripts to perform complex data processing.
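    The following is a minimal sketch of the Column to Row (unpivot) and Row to Column (pivot) conversions described above, written with pandas purely for illustration; pandas is not part of FineDataLink, and the table data and column names are hypothetical:

    import pandas as pd

    # Hypothetical two-dimensional table: one row per product, one column per quarter.
    wide = pd.DataFrame({
        "product": ["A", "B"],
        "Q1": [10, 20],
        "Q2": [15, 25],
    })

    # Column to Row (unpivot): quarter column names become values in an attribute column.
    long = pd.melt(wide, id_vars=["product"], var_name="quarter", value_name="amount")

    # Row to Column (pivot): distinct quarter values become column headers again.
    wide_again = long.pivot(index="product", columns="quarter", values="amount").reset_index()

    print(long)
    print(wide_again)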

    Step Flow

    A step flow is composed of nodes.

    Positioning: A step flow, also called a workflow, orchestrates nodes. Steps are relatively self-contained and run in the specified order without the flow of data rows between them.

    Functional Boundary: Each step is a closed loop from input to output.

    The terms involved in a step flow are explained below, grouped by node classification.

    General

    • Data Synchronization: It can synchronize data from the input source to the output source quickly, where data transformation is not supported. It provides multiple data fetching methods, such as API Input, SQL Statement, and File Input. Memory calculation is not required as there is no data processing during the process, making this node suitable for scenarios where:

      • Rapid synchronization of data tables is needed.

      • The calculation needs to be finished during data fetching, and no calculation or transformation is needed during synchronization.

      • The target database has strong computing ability or the data volume is large. In this case, you can synchronize data to the target database and then use SQL statements for further development.

    • Data Transformation: It can meet the requirements of data conversion and processing between input and output. It supports complex data processing, such as data association, transformation, and cleaning, between input and output during data synchronization. Data Transformation fundamentally operates as a data flow. It relies on an in-memory computing engine for data processing, making it suitable for development tasks with smaller datasets (up to 10 million records). Its computational performance scales with allocated memory resources.

      Note: A Data Transformation node can form part of a step flow.




    Script

    • SQL Script: It is used to deploy and execute SQL statements on designated relational databases. You can manipulate tables and data by writing SQL statements to perform operations such as creation, updates, deletion, queries, joins, and aggregations.

    • Shell Script: It is used to run executable Shell scripts in remote environments through configured SSH data connections. A minimal sketch of running a remote script over SSH is given after this classification group.

    • Python Script: It enables direct execution of Python scripts outside FineDataLink to extend data development capabilities, such as using Python programs to process data from files with formats that are not currently supported. You can configure the paths and input parameters of Python files, and run Python scripts through SSH data connections.

    • Bat Script: It is used to run executable batch files in remote Windows environments through configured data connections.

    • Kettle Invocation: It is used to invoke executable Kettle tasks in specified server environments via configured SSH data connections.
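    The following is a minimal sketch of running a shell script in a remote environment over SSH, using the paramiko library purely for illustration; it does not show FineDataLink's internal mechanism, and the host, credentials, and script path are hypothetical:

    import paramiko

    # Connect to a hypothetical remote host over SSH.
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect("192.168.1.100", username="etl", password="secret")

    # Run an executable shell script on the remote machine and collect its output.
    stdin, stdout, stderr = client.exec_command("bash /opt/scripts/cleanup.sh")
    print(stdout.read().decode())
    print(stderr.read().decode())

    client.close()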




    Process

    • Conditional Branch: It is used to conduct conditional judgment in a step flow. It determines whether to continue running downstream nodes by evaluating the results of upstream execution or system conditions.

    • Invocation Task: It is used to invoke other tasks for cross-task orchestration. You can call any task to join the orchestration of the current task.

    • Virtual Node: It is a node where no actual work is performed. A virtual node is a no-operation element that connects multiple upstream branches to multiple downstream branches in process design.

    • Loop Container: It is designed for scenarios requiring repeated execution of multiple nodes. It supports both iterative loops (for-each loops) and conditional loops (do-while loops), allowing nodes in the container to run repeatedly.

    • Parameter Assignment: It is used to output data as parameter values. You can output the read data as parameter values for use by downstream nodes.
    Notification

    • Notification: It is used to customize the notification content and channels. Notification channels include emails, SMS messages, platform messages, WeCom messages (through chatbots and app messages), and DingTalk messages. You can customize the notification content.

    Connector

    • Execution Judgment: It is used to set the execution logic between nodes. You can right-click a connector in a step flow and choose the execution condition. Options include Execute Unconditionally, Execute on Success, and Execute on Failure. You can also right-click a node in a step flow and click Execution Judgment to enter the Execution Judgment window, where you can customize the judgment logic of multiple conditions (All or Any) to determine whether to execute the node, controlling the dependencies of nodes in the task flexibly.

    Others

    • Remark: It is used to add notes on the canvas. You can customize the content and format.

    Task Instance

    An instance is generated each time a scheduled task runs, which can be viewed under O&M Center > Scheduled Task > Running Record.

    Start Time of Instance Construction

    When a task runs, the time when the instance starts constructing is displayed in Log, as shown in the following figure.

    If you have set the execution frequency for a scheduled task, the instance construction may start slightly later than the set time. For example, if the task is set to run at 11:00:00 every day, the instance construction may start at 11:00:02.

    Incremental Update, Full Update, and Comparison-Based Update

    For details about data synchronization schemes, see Overview of Data Synchronization Schemes.

    1. Incremental Update: This scheme applies to target table update scenarios where the source table only experiences data inserts.

    2. Full Update: This scheme comprehensively replaces all existing data in the target table with the latest data in the source table.

    3. Comparison-Based Update: This scheme applies to target table update scenarios where the source table experiences data inserts, modifications, and deletions.

    Zipper Table

    A zipper table maintains historical states alongside the latest data, allowing convenient reconstruction of customer records at any specific point in time. This table is suitable for scenarios that require recording all data changes for auditing or tracing purposes.
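    The following is a minimal sketch of how a zipper table can keep historical states alongside the latest data; the table, field names, and dates are hypothetical and serve only to illustrate the idea:

    from datetime import date

    # Hypothetical zipper table for a customer's membership level: each change closes the
    # previous record (end_date) and opens a new one, so the state at any point in time
    # can be reconstructed.
    zipper = [
        {"customer_id": 1001, "level": "Silver",
         "start_date": date(2024, 1, 1), "end_date": date(2024, 6, 30)},
        {"customer_id": 1001, "level": "Gold",
         "start_date": date(2024, 7, 1), "end_date": date(9999, 12, 31)},
    ]

    def state_at(rows, customer_id, as_of):
        # Return the record that was valid for the customer on the given date.
        for row in rows:
            if row["customer_id"] == customer_id and row["start_date"] <= as_of <= row["end_date"]:
                return row

    print(state_at(zipper, 1001, date(2024, 3, 15))["level"])  # Silver
    print(state_at(zipper, 1001, date(2025, 1, 1))["level"])   # Gold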

    Identifier Value and Identifier Field

    The identifier field marks whether data rows are inserted, modified, or deleted.

    Identifier values are values of this field, where different values correspond to different data change types.

    During data output, insert/update/delete operations are executed based on the identifier field and its values.

    1. Scenario One: The identifier field and its values are generated by the Data Comparison operator.

    When you use the combination of Data Comparison and DB Table Output/Jiandaoyun Output to synchronize data operations (insert, delete, and update), the fdl_comparison_type field (automatically added by the Data Comparison operator) serves as the identifier field. Its values, including Identical (marking unchanged records), Changed (marking modifications), Added (marking additions), and Removed (marking deletions), serve as the identifier values, as shown in the following figure.
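    The following is a minimal sketch of how an output step might branch on these identifier values; the insert/update/delete helpers are hypothetical placeholders for the actual output operator:

    # Dispatch a row based on the identifier field produced by the Data Comparison operator.
    def apply_row(row):
        change_type = row["fdl_comparison_type"]
        if change_type == "Added":
            insert_into_target(row)
        elif change_type == "Changed":
            update_target(row)
        elif change_type == "Removed":
            delete_from_target(row)
        # "Identical" rows need no action.

    # Hypothetical placeholders for the real write operations.
    def insert_into_target(row):
        print("INSERT", row)

    def update_target(row):
        print("UPDATE", row)

    def delete_from_target(row):
        print("DELETE", row)

    apply_row({"fdl_comparison_type": "Added", "id": 15})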

    2. Scenario Two: The source table contains an identifier field with valid values, and you want to synchronize data insert/delete/update operations.

    For details, see Adding/Modifying/Deleting Data Based on the Identifier Field.

    The source table Product includes an identifier field Status, whose values include Hot-selling (indicating that the record needs to be added), Normal (indicating that the record needs to be deleted), and Viral (indicating that the record needs to be updated).

    In the Status column in the source table, the data whose Product ID value is 15 is marked as Hot-selling, whose Product ID value is 16 is marked as Normal, and whose Product ID value is 14 is marked as Viral. You want to change the data in the target table Product Data accordingly.

    Parallel Read

    When the data volume is large, you can enable Parallel Read to accelerate data reading.

    For details, see Data Synchronization - Data Source.

    Execution Log and Running Record

    Scheduled Task

    1. Execution Log

    After a scheduled task runs, execution logs are displayed on the Log tab page, where you can check whether the task runs successfully and view failure reasons.

    You can set Log Level Setting to adjust the level of detail of the output logs.

    2. Running Record

    You can view task execution information, such as execution status, task duration, and trigger method, under O&M Center > Scheduled Task > Running Record.

    Pipeline Task

    After a pipeline task runs, execution logs can be viewed. For details, see Single Pipeline Task O&M.

    For details about the execution records of pipeline tasks, see Real-Time Pipeline Task O&M - Task Management.

    Rollback

    In case of substantial historical data, incremental updates must be executed periodically during data synchronization to ensure data timeliness.

    If abnormal field values or dirty data are encountered during incremental updates, the synchronization task may fail after partial completion. In such cases, data in the target table requires rollback to its state before the current incremental update.

    FineDataLink provides native support for this scenario from V4.1.5.2. For details, see DB Output (Transaction).

    You can also implement rollback by referring to the help document. For details, see Data Rollback After Extraction Failures.

    Priority

    You can set execution priority levels for scheduled tasks. Options include HIGHEST, HIGH, MEDIUM, LOW, and LOWEST.

    During thread resource contention, tasks with higher priority in the queue execute first, and tasks with equal priority follow the FIFO (First-In-First-Out) execution order.
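    The following is a minimal sketch of priority-plus-FIFO ordering, not FineDataLink's actual scheduler; the task names are hypothetical:

    import heapq
    from itertools import count

    # Lower rank runs first; tasks with the same rank keep their submission (FIFO) order.
    RANK = {"HIGHEST": 0, "HIGH": 1, "MEDIUM": 2, "LOW": 3, "LOWEST": 4}

    queue, seq = [], count()

    def submit(task_name, priority):
        heapq.heappush(queue, (RANK[priority], next(seq), task_name))

    submit("daily_sales_sync", "MEDIUM")
    submit("urgent_fix", "HIGHEST")
    submit("weekly_report", "MEDIUM")

    while queue:
        _, _, task = heapq.heappop(queue)
        print("run", task)  # urgent_fix, then daily_sales_sync, then weekly_report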

    For details, see Task Control - Task Attribute.

    Execution Frequency

    You can set the execution frequency for scheduled tasks to have them executed automatically at regular intervals, ensuring prompt data updates.

    For details, see Overview of Scheduling Plan.

    Task Retry

    For details about the task retry function, see Task Record: Task Retry.

    The task retry function is required in the following scenarios:

    1. A scheduled task fetches data of 24 hours preceding the scheduling time each day and synchronizes data to the target database. During a three-day holiday, the system crashes and the scheduled task does not run, resulting in a lack of data for those three days in the target database.

    2. During the execution of a scheduled task, dirty data appears in an output component. The scheduled task continues running as the configured dirty data threshold has not been reached. The existence of dirty data is not perceived by the O&M personnel until they receive notification after task completion.

    The O&M personnel then check the reasons on the dirty data processing page and find that the dirty data results from field values exceeding the field length limit. After modifying the field length at the target end, they want the task to rerun.

    Data Pipeline

    Data Pipeline provides real-time data synchronization functionality, enabling convenient single-table or entire-database synchronization to replicate data changes from partial or all tables in source databases to target databases in real time, ensuring continuous data correspondence between target and source systems.

    Transmission Queue

    During real-time synchronization, data from source databases is temporarily stored via the data pipeline to facilitate writing to target databases.

    Therefore, configuring middleware for data staging is a prerequisite before setting up pipeline tasks. FineDataLink employs Kafka as synchronization middleware to temporarily store transmitted data.

    Resynchronization from Breakpoints

    A failed pipeline task can continue from the breakpoint. In this case, if the full data load has not been synchronized, the data synchronization will start from the beginning. If the full load has been synchronized, the data synchronization will start from the breakpoint.

    The following is an example of resynchronization from the breakpoint:

    For example, a pipeline task started reading data on March 21, stopped on March 23, and restarted on March 27. The data generated from March 23 to March 27 would then be synchronized.

    Binlog and CDC

    Binlog

    MySQL's binary log (binlog) is a critical feature that records all database change operations (such as INSERT, UPDATE, and DELETE) with precise SQL statement execution timestamps. It serves as the foundation for data replication, point-in-time recovery, and audit analysis for MySQL databases.

    When using MySQL databases as the source ends of pipeline tasks in FineDataLink, ensure you have enabled Binlog-based CDC. For details, see MySQL Environment Preparation.
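    As a quick sanity check (not a substitute for the steps in MySQL Environment Preparation), the following sketch queries whether a source MySQL instance has the binary log enabled in ROW format, which row-based CDC generally relies on; the connection parameters are hypothetical and the pymysql library is used only for illustration:

    import pymysql

    conn = pymysql.connect(host="192.168.1.100", user="root", password="secret")
    with conn.cursor() as cursor:
        cursor.execute("SHOW VARIABLES LIKE 'log_bin'")
        print(cursor.fetchone())   # expect ('log_bin', 'ON')
        cursor.execute("SHOW VARIABLES LIKE 'binlog_format'")
        print(cursor.fetchone())   # expect ('binlog_format', 'ROW')
    conn.close()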

    CDC

    Change Data Capture (CDC) extracts incremental changes to data and schemas from source databases, and propagates them to other databases or app systems in near real time. This enables efficient and low-latency data transfer to data warehouses, facilitating timely data transformation and delivery to analytical applications.

    Logical Deletion and Physical Deletion

    Physical Deletion: If the data is deleted from the source table, the corresponding data will be deleted from the target table.

    Logical Deletion: If the data is deleted from the source table, the corresponding data in the target table is not actually deleted but is marked as deleted using an identifier column.
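    The following is a minimal sketch of the two deletion modes on a hypothetical target table, using SQLite only for illustration:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE product (id INTEGER PRIMARY KEY, name TEXT, is_deleted INTEGER DEFAULT 0)")
    conn.execute("INSERT INTO product (id, name) VALUES (1, 'A'), (2, 'B')")

    # Physical deletion: the row is actually removed from the target table.
    conn.execute("DELETE FROM product WHERE id = 1")

    # Logical deletion: the row stays, and only a deletion identifier is set.
    conn.execute("UPDATE product SET is_deleted = 1 WHERE id = 2")

    print(conn.execute("SELECT id, name, is_deleted FROM product").fetchall())  # [(2, 'B', 1)]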

    Data Service

    Data Service enables the one-click release of processed data as APIs, facilitating cross-domain data transmission and sharing.

    AppCode

    It is a unique API authentication method of FineDataLink. AppCode can be regarded as a long-term valid Token. If set, it will take effect on APIs bound with specified apps.

    To access an API, you need to specify the AppCode value in the Authorization request header in the form of "AppCode AppCode value" (with a space between AppCode and the AppCode value).
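    The following is a minimal sketch of calling a published API with an AppCode, using the Python requests library; the URL and AppCode value are hypothetical:

    import requests

    # The Authorization header takes the form "AppCode <AppCode value>", with a space in between.
    url = "http://fdl.example.com:8080/api/v1/orders"
    headers = {"Authorization": "AppCode 3f9c2a7d8e1b4c6f"}

    response = requests.get(url, headers=headers, timeout=30)
    print(response.status_code)
    print(response.json())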

    For details, see Binding an API to an Application.

    General Information

    Concurrency

    Concurrency refers to the maximum number of scheduled tasks and pipeline tasks that can be started simultaneously.

    Field Mapping

    Field mapping establishes source-to-target field correspondence. For details, see Table Field Mapping.

    For source tables with multiple fields, only mapped fields in the target table will be updated. You can unmap fields that require no synchronization.

    There are two types of mapping methods:

    • Map Fields with Same Name: Use this method when source and target fields share identical names. For specific logic, see Table Field Mapping.

    • Map Fields in Same Row: Use this method when source and target fields follow identical column order. For specific logic, see Table Field Mapping.
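    The following is a minimal sketch of the two mapping methods, using hypothetical field lists:

    source_fields = ["order_id", "amount", "created_at"]
    target_fields = ["order_id", "amount", "create_time"]

    # Map Fields with Same Name: only fields whose names match are paired.
    by_name = {f: f for f in source_fields if f in target_fields}
    print(by_name)  # {'order_id': 'order_id', 'amount': 'amount'}

    # Map Fields in Same Row: fields are paired by position, regardless of name.
    by_row = dict(zip(source_fields, target_fields))
    print(by_row)   # {'order_id': 'order_id', 'amount': 'amount', 'created_at': 'create_time'}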

    Dirty Data

    Definition of Dirty Data in Scheduled Task

    1. Data that fails to be written due to a mismatch between source and target fields (such as length/type mismatch, target field missing, and violation of NOT NULL constraints of target tables) is regarded as dirty data.

    2. Data with conflicting primary key values is regarded as dirty data when Strategy for Primary Key Conflict in Write Method is set to Record as Dirty Data.

    Definition of Dirty Data in Pipeline Task

    Data that fails to be written due to a mismatch between source and target fields (such as length/type mismatch, target field missing, and violation of NOT NULL constraints of target tables) is regarded as dirty data.

    Note:
    Primary key conflicts in pipeline tasks do not lead to dirty data because the new data will overwrite the old data.

    Lineage Analysis

    You can view the lineage relationships of tables used in data development tasks, pipeline tasks, and data services in FineDataLink, as shown in the following figure.

    For details, see Lineage Analysis.

    DDL

    In database management systems, Data Definition Language (DDL) comprises commands for defining and modifying database architectures.

    Common DDL statements include:

    • CREATE: used to create database objects

    • ALTER: used to modify the structure of existing database objects

    • DROP: used to delete database objects

    • TRUNCATE: used to remove all rows from a table without affecting its structure.

    DDL serves as a critical tool for database administrators and developers to design and maintain database architectures.

    In FineDataLink, DDL synchronization refers to the function that enables automatic synchronization of source-end DDL operations (such as table deletion, field adding/deletion, field renaming, and data field modification) to the target end, requiring no manual intervention to modify the target table structure. This synchronization mechanism ensures structural consistency and reduces manual maintenance efforts.

    For details, see Data Pipeline - Synchronizing Source Table Structure Changes.

    Logical Primary Key and Physical Primary Key

    Data updates, deletion, and insertion are typically performed based on logical or physical primary keys to ensure data uniqueness.

    Logical Primary Key

    The logical primary key is business-defined, supporting multiple fields. Its strength lies in meeting business requirements while guaranteeing data uniqueness. For example, in an employee table, the logical primary key can be the employee ID, ID card number, or other unique identifiers.

    Physical Primary Key

    The database generates a unique identifier as the primary key value for each record through auto-increment or unique indexes. Physical primary keys offer simplicity and efficiency, operating independently of business rules.

    Note: You can refer to authoritative sources for distinctions between logical and physical primary keys.

    Retry After Failure

    A task interrupted due to network fluctuations or other reasons can be executed successfully if you rerun it after a while. To prevent such task interruption, you can configure the number of retries and the interval between retries in Retry After Failure to automatically rerun the task upon failure.
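    The following is a minimal sketch of retry-after-failure logic with a configurable number of retries and a fixed interval between attempts; run_task is a hypothetical placeholder for the actual task:

    import time

    def run_with_retry(run_task, retries=3, interval_seconds=60):
        for attempt in range(retries + 1):
            try:
                return run_task()
            except Exception as error:
                if attempt == retries:
                    raise
                print(f"Attempt {attempt + 1} failed ({error}); retrying in {interval_seconds}s")
                time.sleep(interval_seconds)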

    For details, see Task Control - Fault Tolerance Mechanism and Pipeline Task Configuration - Pipeline Control.

    Edit Lock

    FineDataLink employs task edit locks to prohibit concurrent editing of scheduled tasks, pipeline tasks, API tasks, and data service apps by multiple users.

    A task currently being edited by one user is locked against editing by others. Other users opening it can only view the task and will receive a prompt "The current task/service/application is being edited by Username."

    Log Level

    You can select ERROR, WARN, or INFO in Log Level Setting.

    • Log levels ranked by severity (from highest to lowest): ERROR > WARN > INFO

    • Log levels ranked by detail (from simplest to most detailed): ERROR < WARN < INFO

    For details about log levels in scheduled tasks, see Task Control - Task Attribute.

    For details about log levels in pipeline tasks, see Pipeline Task Configuration - Pipeline Control.

    Dirty Data Tolerance and Dirty Data Threshold

    Scheduled Task

    You can set Dirty Data Threshold to enhance the fault tolerance of tasks. The scheduled task continues running despite dirty data and does not report an error until the set dirty data limit is reached.

    For details about this function in scheduled tasks, see Dirty Data Tolerance.

    Pipeline Task

    After a dirty data threshold is set, a synchronization task can proceed despite issues such as field type/length mismatches and primary key conflicts. The pipeline task will be aborted automatically when the threshold is reached.

    For example, a pipeline task whose Dirty Data Threshold is set to 1000 Row(s) will be aborted when the number of dirty data records reaches 1000 during runtime. The dirty data threshold limits the total number of dirty data records in a task since task creation.
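    The following is a minimal sketch of dirty-data tolerance: rows keep being written, failures are counted, and the run is aborted only when the configured threshold is reached; write_row is a hypothetical placeholder for the actual write step:

    def write_with_tolerance(rows, write_row, dirty_threshold=1000):
        dirty_count = 0
        for row in rows:
            try:
                write_row(row)
            except Exception:
                dirty_count += 1
                if dirty_count >= dirty_threshold:
                    raise RuntimeError(f"Dirty data threshold reached: {dirty_count} rows")
        return dirty_count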

    For details, see Pipeline Task Configuration - Pipeline Control.

    System Management

    Function Point

    Prior to formal production use, functions including data source usage, Data Pipeline, Data Service, Data Development, Database Table Management, Lineage Analysis, and System Management require registration. For details, see Registration Introduction.

    External Database and Built-in Database

    The FineDB database stores all the platform configuration information of FineDataLink projects, including pipeline tasks, scheduled tasks, permission control, and system management. FineDataLink contains a built-in HSQL database used as the FineDB database.

    The HSQL database does not support multi-threaded access. It may become unstable in a clustered environment or when handling large volumes of data. It is suitable for a local trial of product functionality.

    FineDataLink supports the use of an external FineDB database. For details, see External Database Configuration.

    You must configure an external database for the formal project.

    Others

    Subform

    Subform is a concept in Jiandaoyun. For details, see SubForm.

    Data Development - Real-Time Task

    For the explanation of terms related to real-time tasks, see Overview of Real-Time Task.

