Overview
In the era of big data, many businesses face the challenge of collecting, integrating, and managing real-time and offline data. To address this challenge, FineDataLink, a low-code, enterprise-level data integration platform focused on timely data delivery, has been developed. The platform enables fast connection and high-efficiency integration of various data sources, as well as flexible data extraction, transformation, and loading. By breaking down data silos with FineDataLink, businesses can unlock their full potential and turn data into value.
Product Positioning
FineDataLink is committed to creating an open and one-stop self-service platform for data scheduling and governance for enterprises, data developers, data analysts, and data asset managers. It integrates databases, upper-layer protocols, files, message queues, platform systems, and applications to provide standardized, visualized, high-performance, and sustainable management solutions.
With FineDataLink, users can achieve real-time data transmission, scheduling, and governance in various complex scenarios on one platform, speeding up the digital transformation of enterprise business.
Taking data as its foundation and centering on full-link data processing, FineDataLink provides functions such as data aggregation, development, and governance to meet users' data needs.
Core Competency
FineDataLink has the following characteristics:
Multi-source data collection. It supports a wide range of data sources, including relational databases, non-relational databases, APIs, and files.
Non-intrusive real-time synchronization. It can synchronize multiple tables or an entire database in real time without intruding on the source system, ensuring the timeliness of business data.
Low-cost data service construction. Enterprise-level data assets can be built on APIs to enable interconnection and sharing.
Efficient intelligent operation and maintenance. FDL allows for flexible scheduling of tasks and real-time monitoring of task running status, reducing the workload of O&M personnel.
High extensibility. With built-in Spark SQL, FDL also allows calling external scripts such as Shell scripts.
Efficient data development. Equipped with a dual-core engine supporting both ELT and ETL processes, it provides customized solutions for different business scenarios.
Five data synchronization methods. FDL provides data synchronization based on timestamps, triggers, full-table comparison, full-table increments, and log parsing to meet synchronization demands in different scenarios.
Security. It supports data encryption and decryption and is preconfigured with rules such as SQL injection prevention.
Process-oriented low-code platform. It's easy to get started and highly user-friendly, contributing to high development efficiency.
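Of the synchronization methods listed above, timestamp-based synchronization is the easiest to illustrate. The following Python sketch shows the underlying idea only (it is not FDL's internal implementation): rows whose change timestamp is newer than the last recorded watermark are selected, and the watermark is advanced. The table rows and the `updated_at` column name are hypothetical.

```python
from datetime import datetime

def incremental_sync(rows, last_watermark):
    """Select rows changed since the last sync and advance the watermark.

    rows: list of dicts, each with a hypothetical 'updated_at' timestamp.
    last_watermark: datetime of the most recent successful sync.
    Returns (changed_rows, new_watermark).
    """
    changed = [r for r in rows if r["updated_at"] > last_watermark]
    new_watermark = max((r["updated_at"] for r in changed), default=last_watermark)
    return changed, new_watermark

# Hypothetical snapshot of a source table.
source_rows = [
    {"id": 1, "updated_at": datetime(2024, 1, 1, 9, 0)},
    {"id": 2, "updated_at": datetime(2024, 1, 2, 9, 0)},
    {"id": 3, "updated_at": datetime(2024, 1, 3, 9, 0)},
]

# Only rows 2 and 3 were modified after the previous watermark.
changed, watermark = incremental_sync(source_rows, datetime(2024, 1, 1, 12, 0))
```

The trade-off behind this method is also why FDL offers four alternatives: timestamp-based sync is cheap but cannot detect deleted rows, which trigger-based or log-parsing approaches can.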
Target Group
Target Customer
Pain point: Further data processing is required before data visualization and analysis.
Imperfect data infrastructure: No professional, standardized data warehouse has been established, leaving the data unusable.
Unable to meet personalized business requirements: The data warehouse is mainly designed for general data usage scenarios rather than personalized business scenarios.
Unagile development: The data preparation process is not agile enough to support report presentation and data analysis.
Target User
Position: Report development engineers, data processing personnel, data warehouse development engineers, and IT personnel who need to process data.
Main job: Prepare and process data for use.
Product Architecture and Configuration Requirement
Function Structure
The platform enables data processing personnel to connect various heterogeneous data sources with one click and, through flexible ETL-based data development and various task engines, preprocess data for upper-layer applications, helping enterprises produce high-quality data ready for display and analysis.
Configuration Requirement
For details on the configuration requirements for the software and hardware environment, see FineDataLink Deployment Environment Preparation.
Function Description
Data Development
Visualized data development in the form of process flow and data flow.
Flexible application to various scenarios with a dual-core engine.
Batch synchronization of data across data sources.
Flexible data processing with various data transformation operators.
One-click parsing of semi-structured data with JSON parsing.
Expanded coverage of data transformation scenarios with Spark SQL.
Support of looping through data with a loop container.
Integration with WeChat for automatic push.
Connection with external independent data processing processes by calling Shell scripts.
The Data Development module supports the following types of data sources.
Relational databases
Non-relational databases
API-based data: data accessed through various APIs (e.g., REST APIs and Jodoo)
File data: Excel and TXT files
For details about supported data sources, see Data Sources Supported by Data Development.
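As a generic illustration of the semi-structured data handling mentioned above (not FDL's JSON parsing operator), the sketch below flattens a nested JSON payload, such as one returned by a REST API source, into a single-level row suitable for loading into a table. The payload and its field names are hypothetical.

```python
import json

def flatten(obj, prefix=""):
    """Recursively flatten nested dicts into a single-level dict with
    dot-separated keys, e.g. {"user": {"id": 1}} -> {"user.id": 1}."""
    flat = {}
    for key, value in obj.items():
        full_key = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            flat.update(flatten(value, full_key))
        else:
            flat[full_key] = value
    return flat

# Hypothetical response body from an API-based data source.
payload = json.loads('{"order": {"id": 42, "customer": {"name": "Acme"}}, "total": 9.5}')
row = flatten(payload)
```

Arrays and type coercion are deliberately omitted here; a production parser must also decide how to unnest lists (one row per element versus a serialized string).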
Real-Time Data Pipeline
With a high-performance real-time data synchronization engine, the Data Pipeline module can enhance the timeliness of the data warehouse by building an operational data store (ODS) layer, and improve the disaster resistance of enterprise data by backing up databases in real time.
This module features high reliability and fault tolerance: it supports error queues and resumable transmission, allowing data to be restored at any point without full resynchronization.
The changes in the source table structure can be automatically synchronized.
A pipeline task can be configured within five minutes using the creation guide.
For details about supported data sources, see Data Sources Supported by Data Pipeline.
Data Service
Leveraging API data access and data service capabilities, the data service module enables secure, convenient, and code-free cross-domain data transmission by releasing processed data as APIs with one click.
For details about supported data sources, see Data Sources Supported by Data Service.
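A published data API is consumed like any authenticated HTTP endpoint. The sketch below builds such a request with Python's standard library; the URL, path, and bearer-token header are hypothetical placeholders, not FDL's actual interface.

```python
import urllib.request

def build_data_service_request(base_url, api_path, token):
    """Build an authenticated GET request for a published data API.
    The endpoint and token scheme are hypothetical placeholders."""
    req = urllib.request.Request(f"{base_url}{api_path}")
    req.add_header("Authorization", f"Bearer {token}")
    req.add_header("Accept", "application/json")
    return req

req = build_data_service_request("https://example.com", "/api/v1/sales", "demo-token")
# Sending the request is left to the caller, e.g.:
# with urllib.request.urlopen(req) as resp:
#     data = json.load(resp)
```

Separating request construction from sending keeps the auth logic testable without network access, which is also how a consumer would unit-test integration code against such an API.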
Agile Task Operation and Maintenance
The Task O&M module details real-time task status and task execution records to facilitate task optimization.
It supports flexible schedule configuration.
It also supports enterprise-level permission management.
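The execution records described above typically capture, per attempt, a status, a duration, and any error message. As a toy sketch of that record-keeping idea (not the Task O&M module itself), the following Python code runs a task with retries and logs each attempt; the task function and record fields are hypothetical.

```python
import time

def run_with_records(task, max_retries=2):
    """Run a task function, recording each attempt's status and duration.
    Retries up to max_retries times after the first failure."""
    records = []
    for attempt in range(1, max_retries + 2):
        start = time.time()
        try:
            result = task()
            records.append({"attempt": attempt, "status": "success",
                            "duration": time.time() - start})
            return result, records
        except Exception as exc:
            records.append({"attempt": attempt, "status": "failed",
                            "error": str(exc), "duration": time.time() - start})
    return None, records

# Hypothetical task that fails once, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 2:
        raise RuntimeError("transient error")
    return "done"

result, records = run_with_records(flaky)
```

Records like these are what make the task-optimization workflow possible: a run history exposes which steps fail transiently (worth retrying) versus persistently (worth alerting on).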
Value Proposition
Visualized Integration of Multi-source Heterogeneous Data for Efficient Data Warehouse Building
By constructing a low-code enterprise-level data warehouse in DAG mode, you can quickly eliminate data silos and securely store all historical data, thereby broadening the spectrum of data analysis scenarios. Moreover, offloading calculations to the data warehouse alleviates strain on the business system.
Automatic Synchronization of Cross-Domain/Business Data in Real Time
By leveraging the FDL Data Pipeline function and incremental log monitoring, you can enhance the update efficiency of incremental data, addressing data latency issues caused by large data volumes and network bandwidth limitations. This liberates human resources from repetitive tasks and provides accurate data foundations for enterprise decision-making.
API-Based Enterprise Data Asset Creation for Interconnection and Sharing
By leveraging the API data access and data service capabilities of FDL, you can lay a foundation for data connection and sharing while ensuring secure and stable data transmission.
Business Workflow Automation Through SaaS Connection
Local data and Jodoo data are interconnected in real time, and business processes operate automatically without manual intervention. This facilitates conducting global correlation analysis on both cloud-based and on-premise businesses, thereby laying the groundwork for comprehensive decision-making.
Swift Backup of Cloud Data for Standardized Data Management
By querying historical data through APIs triggered by Jodoo frontend events, you can back up Jodoo data to local databases at significantly lower development and labor cost, while meeting compliance requirements for keeping cloud and on-premise data stored separately.
Lowered Leased Line Costs Due to Secure and Stable Cross-Domain Transmission
By leveraging the FDL Data Service function, enterprises can conduct cross-domain data transmission securely and reliably even over ordinary external network bandwidth. This not only saves the cost of leased lines but also enables enterprises to independently monitor and manage anomalies.
Tool Acquisition
You can apply for a trial on the official website. Our staff will contact you within three working days after your application.