To meet customers' varied usage scenarios, FineDataLink supports multiple deployment methods and can be deployed integrally with FineReport and FineBI.
This article briefly introduces the differences between the deployment methods and the pros and cons of each.
Excellent performance
FineDataLink delivers excellent performance because it can use all of the server's resources, such as CPU and memory.
Stable environment
The stability of FineDataLink is not affected by the FineReport/FineBI environment. For instance, in the event of a FineReport crash, independently deployed FineDataLink continues to function without disruption.
Cost-effectiveness
A single server is sufficient to support both data processing and presentation, yielding a high return on investment.
All-in-one data workflow
Data integration, processing, and application can be completed in the same server environment. The all-in-one data workflow can greatly improve the work efficiency of system operation and maintenance personnel.
High server investment
Deploying FineDataLink on a separate server incurs a high hardware cost.
Separate data processing and presentation
FineDataLink is used for data processing and modeling, while FineReport/FineBI is used for data visualization.
With independent deployment, the steps of a complete data workflow are completed across multiple environments.
Limited performance
FineDataLink and FineReport/FineBI may compete for server resources, degrading performance. For instance, running data synchronization while report access is high can slow down data extraction.
Limited stability
The environments of FineDataLink and FineReport/FineBI affect each other.
For example, in an extreme scenario, if FineReport crashes on the reporting side, a FineDataLink instance deployed on the same server crashes as well, terminating data synchronization.
The Data Pipeline function is supported for real-time data synchronization.
The Data Service function is supported for data sharing and release.
The deployment of a multi-node (FineDataLink project) cluster is supported.
The Data Pipeline function is not supported.
The Data Service function is not supported.
An environment with clustered FineReport/FineBI projects does not support the integrated deployment of FineDataLink.
Note: FineDataLink clusters are supported only when FineDataLink is deployed independently.
Standalone Project
A standalone project is a single deployed project that provides services on its own.
Cluster
A cluster is a group of projects providing the same services. These individual projects are the nodes of the cluster.
A cluster can be scaled horizontally: increasing the number of nodes yields near-linear growth in concurrency, thereby supporting high concurrency.
In addition, the multiple projects back each other up, avoiding the losses (such as business interruption and data loss) that a single project's failure would cause, and ensuring 24/7 stable operation of the system.
The benefits of a clustered FineDataLink system include high availability, high performance, ease of management, scalability, and security.
The system supports disaster recovery and hot backup.
A FanRuan cluster consists of multiple nodes that work collaboratively with load balancers, state servers, and other components to handle requests and balance the load. The system can recover from certain exceptions (such as the crash or downtime of some nodes) that would otherwise bring it down, ensuring that data synchronization and processing are not interrupted.
Improved ability to run tasks concurrently
In FanRuan clusters, scheduled tasks and pipeline tasks can be distributed to multiple nodes for parallel processing, and frequently requested data service APIs can be served on separate nodes.
When processing a large number of tasks with FineDataLink, you can increase the number of nodes to improve concurrency and overall performance.
Note: The minimum unit for running a scheduled task is a node (a step within the task). When a task is executed, a task instance is generated that contains the nodes to be executed.
The minimum unit for running a pipeline task is the pipeline task instance, which is generated when the task is executed and cannot be divided further.
The pending nodes in a scheduled task instance are distributed evenly across the cluster's project nodes for execution, following the predetermined order.
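The even distribution described above can be pictured as a simple round-robin assignment. The sketch below is purely illustrative — the step names and node names are invented, and FineDataLink's real scheduler is internal to the product.

```shell
# Illustrative sketch only: distributing the pending nodes (steps) of a
# scheduled task instance round-robin across three cluster nodes.
# Step names and node names are hypothetical.
count=3            # number of project nodes in the hypothetical cluster
assigned=""
i=0
for step in extract transform load notify; do
  target=$(( i % count + 1 ))
  assigned="$assigned $step:node$target"
  echo "step '$step' -> node$target"
  i=$(( i + 1 ))
done
```

With three nodes and four pending steps, the fourth step wraps back to the first node, which is how adding nodes spreads the load of many simultaneous tasks.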
The FanRuan cluster provides a comprehensive management platform for deploying, configuring, monitoring, and automatically fixing nodes. It greatly reduces the difficulty and risk of cluster operation and maintenance and helps enterprises better manage and maintain their systems.
Allows users to complete 80% of the configuration on the platform through simple visual operations.
Supports hot deployment, which allows users to add and delete nodes by copying and deleting the node files without restarting the cluster.
Supports real-time monitoring of the running status of each node, and provides timely reminders for exceptions such as node crashes or node time inconsistency.
Enables real-time synchronization of the platform configuration and resource file updates of each node.
Automatically detects JAR package inconsistencies between nodes during startup and sends reminders.
Think of each cluster node as a taxi. Just as with building a good fleet, you need to consider how to provide high-quality service when designing the cluster.
If a taxi breaks down, other taxis will be automatically arranged to take over its work to ensure the normal operation of the entire fleet.
There is no host node in FanRuan clusters, only the benchmark node (which is the first node in a cluster). Each node provides services equally and is manageable.
Each node is a project that can run independently, handling user requests and tasks and managing other components. Cluster nodes communicate and collaborate through a series of network protocols and services.
The load balancer is similar to the dispatching center of the taxi fleet, coordinating and distributing all the work and requests.
It functions as a unified entrance for receiving "customers", preventing them from asking the "driver" every time.
In a cluster, the load balancer is used to distribute tasks to various nodes for high work efficiency.
It balances the load among all nodes, ensuring that all Web requests are distributed evenly across the servers.
It ensures high availability and high concurrency in the execution and scheduling of scheduled tasks and pipeline tasks.
If a project node fails, its tasks are reassigned to other functioning nodes to guarantee uninterrupted execution. Task instances are distributed evenly across all cluster nodes so that multiple scheduled tasks can execute simultaneously.
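As one concrete (hypothetical) illustration, an NGINX reverse proxy in front of the cluster could distribute web requests evenly across the nodes. The IP addresses, port, and file name below are placeholders, not values mandated by the product.

```shell
# Hypothetical NGINX reverse-proxy configuration for a three-node
# cluster. Addresses and ports are placeholders; adjust to your setup.
cat > nginx-cluster-sketch.conf <<'EOF'
upstream fdl_cluster {
    # NGINX's default policy is round-robin: each request goes to the
    # next node in turn, balancing the load evenly.
    server 192.168.1.11:8080;
    server 192.168.1.12:8080;
    server 192.168.1.13:8080;
}
server {
    listen 80;
    location / {
        proxy_pass http://fdl_cluster;
    }
}
EOF
grep -c 'server ' nginx-cluster-sketch.conf
```

In practice the upstream policy (round-robin, `ip_hash` for sticky sessions, etc.) is chosen to match how users reach the web console.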
It refers to the external configuration database FineDB, which is similar to the vehicle management office of the taxi fleet.
It stores and maintains the information of all vehicles and drivers, and provides every driver with the same vehicle decoration, quotation, travel routes, and other information, so that all taxis give customers the same travel experience.
In a cluster, the configuration database stores and maintains the configuration and parameter settings of all nodes. These parameters must be set reasonably to coordinate the work of cluster nodes.
High concurrency on the web end is ensured by distributing user requests evenly to the nodes for response, based on the forwarding logic of the load balancer. These requests include:
User's request to use the platform
API requests for data services
It stores all files related to the fleet, including vehicle maintenance records, insurance policies, vehicle location data, etc.
In a cluster, a file server is used to store and share the files and data resources needed in the cluster, ensuring that each node can access and use them.
The architecture of the FanRuan cluster is flexible and can be adjusted according to business needs and characteristics. A cluster must at least meet the following conditions:
1) There is at least one project node.
2) The node is configured with an external database.
3) The node is configured with a state server.
4) If there are two or more project nodes, a file server must be configured.
5) The node is configured with Nacos.
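The five conditions above can be pictured as one topology sketch. The YAML below is purely illustrative — the key names and addresses are invented, not the product's actual configuration schema.

```shell
# Hypothetical topology sketch covering the five minimum conditions.
# Key names and addresses are invented for illustration only.
cat > cluster-topology-sketch.yaml <<'EOF'
nodes:                 # 1) at least one project node
  - 192.168.1.11
  - 192.168.1.12
externalDB:            # 2) external configuration database (FineDB)
  host: 192.168.1.20
  port: 3306
stateServer:           # 3) state server, e.g. Redis
  host: 192.168.1.21
  port: 6379
fileServer:            # 4) required once there are multiple project nodes
  host: 192.168.1.22
  port: 9000
nacos:                 # 5) Nacos registry/configuration center
  host: 192.168.1.23
  port: 8848
EOF
echo "sketch written: $(wc -l < cluster-topology-sketch.yaml) lines"
```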
The Data Development function of FineDataLink 4.0.27 and later versions supports clusters.
The Data Pipeline and Data Service functions of FineDataLink 4.0.30 and later versions support clusters.
An independently deployed FineDataLink project can serve as a cluster node. A FineDataLink project cannot form a cluster together with a FineReport or FineBI project.
A FineDataLink project deployed in integrated mode cannot serve as a cluster node.
Traditional Deployment
You can prepare deployment packages yourself and build the most basic project architecture consisting of the FineDataLink project, JDK environment, and Tomcat middleware.
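A minimal outline of such a traditional deployment might look like the following. Every path, version, and archive name here is a placeholder; use the deployment packages from your actual release.

```shell
# Hypothetical outline of a traditional deployment: JDK + Tomcat + the
# FineDataLink project. Paths and archive names are placeholders.
FDL_HOME=./fdl-demo
mkdir -p "$FDL_HOME/jdk" "$FDL_HOME/tomcat"
# 1) Unpack a JDK into $FDL_HOME/jdk, e.g.:
#      tar -xzf <jdk-archive>.tar.gz -C "$FDL_HOME/jdk"
# 2) Unpack Tomcat into $FDL_HOME/tomcat, e.g.:
#      tar -xzf <tomcat-archive>.tar.gz -C "$FDL_HOME/tomcat"
# 3) Place the FineDataLink project under Tomcat's webapps directory,
#    point JAVA_HOME at the unpacked JDK, then start Tomcat:
#      JAVA_HOME="$FDL_HOME/jdk" "$FDL_HOME/tomcat/bin/startup.sh"
echo "directory layout prepared under $FDL_HOME"
```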
Containerized Deployment
Containerizing an application means packaging it together with its operating environment into a complete image, so that the application can run in a Docker container or a similar runtime.
Because a container provides the application with an environment close to a complete system, containerized deployment enables application modernization with little or no modification.
It also lays the foundation for keeping the application architecture cloud-friendly.
Compared to traditional deployment, containerized deployment can significantly reduce maintenance and resource costs.
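For orientation, a containerized launch typically boils down to a single container-run command. The image name, port mapping, and volume path below are invented placeholders, not the product's actual distribution details.

```shell
# Hypothetical container-run command. Image name, port mapping, and
# volume path are placeholders only. The command is printed rather than
# executed, so this sketch is safe to run anywhere.
IMAGE="example/finedatalink:latest"   # placeholder image name
RUN_CMD="docker run -d --name fdl -p 8080:8080 -v /data/fdl:/opt/fdl $IMAGE"
echo "$RUN_CMD"
```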
Less Maintenance Cost
Projects are isolated from each other, which restricts the impact range of exceptions.
It facilitates upgrades, disaster recovery, and rollback.
Less Resource Cost
It simplifies multi-cluster management and multi-tenancy, enables microservices, and achieves high resource utilization.
It enables resource scaling.
The most essential benefit of containers throughout an application lifecycle is that they isolate development and operation. Other benefits include portability, flexibility, scalability, and easier management.
Manual modification of the Tomcat configuration file or other corresponding container configuration files is required.
Incomplete configuration may result in exceptions in subsequent business.
All common JVM parameters are pre-configured in the image by default, which avoids most issues caused by JVM parameters.
The maximum heap size parameter (-Xmx) can be configured separately in the YAML file.
It requires installing NGINX or another load balancer and configuring the reverse proxy manually.
Missing configurations may result in exceptions in certain businesses.
A highly available architecture combining Keepalived and NGINX can be configured as needed.
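One common shape for that highly available pairing is two NGINX instances sharing a virtual IP via Keepalived's VRRP. The interface name, router ID, priorities, and virtual IP below are placeholders; this sketches the general technique, not a product-mandated configuration.

```shell
# Hypothetical Keepalived configuration for the MASTER load balancer.
# The backup instance would use "state BACKUP" and a lower priority.
# Interface, router ID, and virtual IP are placeholders.
cat > keepalived-sketch.conf <<'EOF'
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    virtual_ipaddress {
        192.168.1.100    # clients address this VIP; it fails over
    }
}
EOF
grep -q 'state MASTER' keepalived-sketch.conf && echo "sketch written"
```

If the master NGINX host fails, VRRP moves the virtual IP to the backup host, so clients keep reaching the cluster through the same address.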
There are recommended hardware specifications, but they are not mandatory.
All product-supported components can be configured.
Deploying MySQL, Redis, and MinIO is also supported.
Linux AMD64 is supported.
CentOS versions below 7.3 are not supported.
Ubuntu versions below 18 are not supported.
This restriction prevents issues caused by running the project on an outdated operating system.
All ports used by all components of all nodes must be opened. Otherwise, deployment will fail.
Ensure that the port setting does not impede the project running.
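A quick way to verify reachability before deployment is a simple port probe from each node. The function below is a generic sketch (it relies on Bash's /dev/tcp pseudo-device; substitute nc or telnet if that is unavailable), and the host and port arguments are placeholders.

```shell
# Generic sketch: probe whether a host:port is reachable. Uses Bash's
# /dev/tcp pseudo-device; reports "closed" when the redirection is
# unsupported by the shell or the port is unreachable.
check_port() {
  host="$1"; port="$2"
  if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
    echo "$host:$port open"
  else
    echo "$host:$port closed"
  fi
}
check_port 127.0.0.1 8080   # placeholder: probe a component port
```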
The root user can deploy the product with one click.
Deploying products after configuring relevant nodes and component information in the YAML file is supported.
Configuring users' existing components through the YAML file is supported.
Regular users must be granted sudo permission to execute the following commands. (If Docker is installed and the user belongs to the Docker user group, no sudo permission is required.)
tar
rm
groupadd
gpasswd
groups
dockerd
systemctl (This command is optional and can be used to make the Docker service start automatically during system boot.)
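For reference, granting exactly those commands to a regular deployment user can be expressed as a single sudoers entry. The user name and command paths below are placeholders (paths vary by distribution; confirm each with `command -v`), and the sketch writes to a scratch file rather than to /etc/sudoers.d.

```shell
# Hypothetical sudoers entry for a regular deployment user named
# "deploy". Command paths vary by distribution - verify each with
# "command -v" and install the entry via visudo, not by hand.
cat > sudoers-sketch <<'EOF'
deploy ALL=(root) NOPASSWD: /bin/tar, /bin/rm, /usr/sbin/groupadd, \
    /usr/sbin/gpasswd, /usr/bin/groups, /usr/bin/dockerd, /bin/systemctl
EOF
cat sudoers-sketch
```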
To upgrade the product or perform other operations on the system after deployment, check the files in the backend directory of the deployed project.
For details, see FineDataLink Installation Directory Structure.