Recommended Environment/Configuration for Non-Containerized FineBI 6.0 Deployment

  • Last update: November 22, 2024
  • Note:

    This document is applicable to historical FineBI 6.0.x projects deployed on non-O&M platforms.

    For details about FineBI 6.1 projects deployed on the O&M platform, see FineOps Help Doc.

    Overview

    To meet enterprises' requirements for high availability and high concurrency, FanRuan supports both standalone and cluster deployment.

    FanRuan also recommends a deployment architecture that matches the actual business load to ensure stable operation.

    This document describes the recommended deployment solutions for different numbers of concurrent users and different levels of server resources.

    Deployment solution description:

    (1) The deployment solutions listed in this document assume that each server is dedicated to the project and its components (that is, nothing else is installed on the server).

    · The FanRuan standalone architecture consists of the project node and the configuration database (external FineDB).

    (Figure 1: FanRuan standalone deployment architecture)

    · The FanRuan cluster architecture consists of project nodes, the load balancer, the state server, the file server, and the configuration database (external FineDB). The application components in the architecture are selected and combined differently for different scenarios.

    (Figure 2: FanRuan cluster deployment architecture)

    · During actual deployment, you can freely select the software and hardware in the range supported by the technology roadmap.

    (2) Considering that server resources of some users are limited:

    · For project node deployment solutions, this document describes recommended and minimum configuration requirements respectively. If resources are abundant, meet the recommended configuration requirements. If resources are insufficient, at least meet the minimum configuration requirements.

    · For cluster deployment solutions, this document provides both high availability and non-high availability deployment solutions. If resources are abundant, you are advised to use the high availability deployment solution. If resources are insufficient, the non-high availability deployment solution can be considered.

    (3) For standalone projects, downtime is unavoidable if a fault, however unlikely, occurs. If high availability is required, you are advised to choose the cluster architecture for deployment.

    System performance description:

    (1) Project node quantity:

    · In a cluster scenario, adding nodes increases the number of concurrent users the system can handle, while the average template response time remains roughly the same.

    · A standalone project can handle only about half the concurrent users of a two-node cluster, and one third those of a three-node cluster.

    (2) Template performance:

    · In a standalone scenario, the larger the JVM memory is, the shorter the template response time is.

    · In a cluster scenario, if the number of concurrent users is similar in each node, the average template response time is similar.

    · Previews that require platform authentication perform worse than previews that do not, because different users cannot share the same result cache.

    Project Server Environment

    General Server Requirement

    Configuration Item
    Recommended Configuration

    Server type

    Linux server

    System version

    Ubuntu 18.04.4 and later versions

    System kernel

    3.10 and later versions

    System architecture

    x86_64

    Available disk space

    Configured according to the data volume for the server where each node is located

    At least 40 GB for the root directory

    The following table lists the recommended available disk space by data volume.

    Data Volume (Rows)        Available Disk Space
    0 – 5 million             100 – 300 GB
    5 – 10 million            300 – 600 GB
    10 – 100 million          600 GB – 1.5 TB

    Middleware

    Tomcat 8.5.57 and later versions (the latest version recommended)

    JDK 1.8 (8u221 and later versions; the latest version recommended)

    Disk type

    SSD

    FineBI is an I/O-intensive application that relies heavily on disk I/O. Therefore, you are advised to use a local disk or an SSD.

    Disk read/write speed

    Above 100 MB/s

    IOPS

    Above 10K
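
    Before deployment, the requirements above can be spot-checked from the command line. The following is a minimal sketch, assuming a Linux server with the dd and fio tools available; the test paths, test sizes, and the CATALINA_HOME variable are illustrative assumptions.

# Operating system: kernel, distribution, and architecture
uname -r          # kernel version, expected 3.10 or later
lsb_release -d    # distribution, e.g. Ubuntu 18.04.4 or later
uname -m          # architecture, expected x86_64

# Middleware versions
java -version                     # expected JDK 1.8 (8u221 or later)
"$CATALINA_HOME"/bin/version.sh   # expected Tomcat 8.5.57 or later

# Root-directory free space and sequential write throughput (expected above 100 MB/s);
# run the tests on the disk that will hold the FineBI data
df -h /
dd if=/dev/zero of=/data/dd_test bs=1M count=1024 oflag=direct && rm -f /data/dd_test

# Random-read IOPS (expected above 10K); requires the fio package
fio --name=iops_test --filename=/data/fio_test --size=1G --rw=randread \
    --bs=4k --iodepth=32 --ioengine=libaio --direct=1 --runtime=30 --group_reporting
rm -f /data/fio_test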

    Configuration Requirement Based on Node Quantity

    You can deploy only one project node of the cluster on each server. Therefore, you need to prepare as many servers as there are project nodes.

    The following describes the user dimensions used in this document.

    User Type / Description

    Number of daily active users

    Number of users who log in to the FineBI project within one day

    Number of online users

    Number of logged-in users in FineBI at a certain moment

    Number of concurrent users

    Number of users who perform operations in FineBI at a certain moment.

    It indicates how many users send requests to the server at the same time, namely the maximum number of users whose requests the server can handle simultaneously.

    Concurrency limit during license registration

    Maximum number of IP addresses (used as concurrency keys) that the server obtains from system access requests.

    The concurrency limit mainly restricts the cumulative number of IP addresses visiting the system. This license registration parameter is unrelated to the user numbers described below.

    This document provides recommended configurations for three scenarios. If your FineBI project involves multiple scenarios, select the highest configuration to deploy your project.

    You need to first check whether your data is direct-connected (real-time) or extracted according to Difference Between Direct-Connected Data and Extracted Data.

    Direct-Connected Data

    This scenario prevails when only direct-connected data (rather than extracted data) is used.

    The bandwidth between cluster nodes, and between nodes and other components, needs to be 1000 Mbps.

    If multiple configuration options are available, select the higher configuration based on the project concurrency and the calculation capability of the data source database.

    Concurrent editing users are all users who do not obtain data from the cache.

    For concurrent users per second, determine whether all of them obtain data from the cache. If they do, check the number against the upper limit of the range; if they do not, check it against the lower limit.

    Daily active users: 500
    · Online users per hour: < 100
    · Concurrent users per second: < 20
    · Concurrent edits: < 20
    · Data source calculation capability (calculations processed per second): < 10
    · Recommended configuration: standalone. The node should meet: CPU 8 cores/16 threads, 2.5 GHz; JVM memory 16 GB; physical memory 24 GB.
    · Minimum configuration: standalone. The node should meet: CPU 4 cores/8 threads, 2.5 GHz; JVM memory 8 GB; physical memory 12 GB.

    Daily active users: 2000
    · Online users per hour: 100 – 1000
    · Concurrent users per second: 40 – 90
    · Concurrent edits: 10 – 40
    · Data source calculation capability (calculations processed per second): 10 – 20
    · Recommended configuration: two-node cluster. Each node should meet: CPU 16 cores/32 threads, 2.5 GHz; JVM memory 16 GB; physical memory 32 GB.
    · Minimum configuration: standalone. The node should meet: CPU 16 cores/32 threads, 2.5 GHz; JVM memory 16 GB; physical memory 24 GB.

    Daily active users: 3000
    · Online users per hour: 600 – 1500
    · Concurrent users per second: 60 – 130
    · Concurrent edits: 30 – 60
    · Data source calculation capability (calculations processed per second): ≥ 30
    · Recommended configuration: three-node cluster. Each node should meet: CPU 16 cores/32 threads, 2.5 GHz; JVM memory 16 GB; physical memory 24 GB.
    · Minimum configuration: two-node cluster. Each node should meet: CPU 16 cores/32 threads, 2.5 GHz; JVM memory 24 GB; physical memory 48 GB.

    Daily active users: 4000
    · Online users per hour: 600 – 2000
    · Concurrent users per second: 60 – 170
    · Concurrent edits: 60 – 80
    · Data source calculation capability (calculations processed per second): ≥ 30
    · Recommended configuration: four-node cluster. Each node should meet: CPU 16 cores/32 threads, 2.5 GHz; JVM memory 16 GB; physical memory 24 GB.
    · Minimum configuration: three-node cluster. Each node should meet: CPU 16 cores/32 threads, 2.5 GHz; JVM memory 24 GB; physical memory 48 GB.

    Extracted Data: Self-Service Analysis of FineBI Projects with a High Number of Daily Active Users

    This scenario prevails for self-service data analysis in FineBI projects with a high number of daily active users (namely users who do not obtain data from the FineBI cache).

    Concurrency can be estimated based on the total number of nodes as follows: Number of online users (Y) = 300 * (Number of nodes (X) - 1) + 400.
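
    For example, with this estimate a three-node cluster supports roughly 300 * (3 - 1) + 400 = 1000 online users, and a standalone project (X = 1) supports roughly 400.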

    Disk throughput and bandwidth need to be greater than 100 MB/s (namely performance of normal HDDs). SSDs are recommended.

    The JVM memory is not the same as the total device memory. You are advised to set the JVM memory to 2/3 to 3/4 of the total device memory.
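
    As a concrete illustration, a node with 64 GB of physical memory would get a heap of roughly 42 GB to 48 GB (2/3 to 3/4 of the total). The snippet below is only a sketch of setting the heap for a Tomcat-deployed project; the path and the 42 GB value are illustrative assumptions.

# $CATALINA_HOME/bin/setenv.sh (create the file if it does not exist)
# Heap set to about 2/3 of 64 GB of physical memory
export JAVA_OPTS="$JAVA_OPTS -Xms42g -Xmx42g"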

    Online users per hour: < 100
    · Concurrent users per second: < 20
    · Concurrent edits: < 20
    · Table quantity/size: < 100 tables or < 1 TB
    · Recommended configuration: standalone. The node should meet: CPU 8 cores/16 threads, 2.5 GHz; JVM memory 16 GB; physical memory 32 GB.
    · Minimum configuration: standalone. The node should meet: CPU 8 cores/16 threads, 2.5 GHz; JVM memory 16 GB; physical memory 32 GB.

    Online users per hour: 300 – 1000
    · Concurrent users per second: 20 – 70
    · Concurrent edits: 10 – 40
    · Table quantity/size: < 100 tables or < 1 TB
    · Recommended configuration: two-node cluster. Each node should meet: CPU 8 cores/16 threads, 2.5 GHz; JVM memory 16 GB; physical memory 32 GB.
    · Minimum configuration: standalone. The node should meet: CPU 16 cores/32 threads, 2.5 GHz; JVM memory 32 GB; physical memory 64 GB.

    Online users per hour: 600 – 2000
    · Concurrent users per second: 40 – 120
    · Concurrent edits: 30 – 60
    · Table quantity/size: > 2000 tables or > 1 TB
    · Recommended configuration: two-node cluster. Each node should meet: CPU 16 cores/32 threads, 2.5 GHz; JVM memory 32 GB; physical memory 64 GB.
    · Minimum configuration: two-node cluster. Each node should meet: CPU 16 cores/32 threads, 2.5 GHz; JVM memory 24 GB; physical memory 48 GB.

    Online users per hour: 900 – 3000
    · Concurrent users per second: 50 – 160
    · Concurrent edits: 50 – 80
    · Table quantity/size: > 4000 tables or > 2 TB
    · Recommended configuration: three-node cluster. Each node should meet: CPU 16 cores/32 threads, 2.5 GHz; JVM memory 32 GB; physical memory 64 GB.
    · Minimum configuration: two-node cluster. Each node should meet: CPU 16 cores/32 threads, 2.5 GHz; JVM memory 24 GB; physical memory 48 GB.

    Online users per hour: 1200 – 3500
    · Concurrent users per second: 60 – 190
    · Concurrent edits: 70 – 100
    · Table quantity/size: > 5000 tables or > 3 TB
    · Recommended configuration: four-node cluster. Each node should meet: CPU 16 cores/32 threads, 2.5 GHz; JVM memory 32 GB; physical memory 64 GB.
    · Minimum configuration: three-node cluster. Each node should meet: CPU 16 cores/32 threads, 2.5 GHz; JVM memory 32 GB; physical memory 64 GB.

    Online users per hour: 1500 – 4000
    · Concurrent users per second: 80 – 220
    · Concurrent edits: 80 – 200
    · Table quantity/size: > 5000 tables or > 3 TB
    · Recommended configuration: five-node cluster. Each node should meet: CPU 16 cores/32 threads, 2.5 GHz; JVM memory 32 GB; physical memory 64 GB.
    · Minimum configuration: four-node cluster. Each node should meet: CPU 16 cores/32 threads, 2.5 GHz; JVM memory 32 GB; physical memory 64 GB.

    Extracted Data: Concurrent Dashboard Viewing

    This scenario prevails when many users view content such as dashboards at the same time. In this scenario, the system generally measures the cumulative number of visitors (Y) within 5 to 10 minutes, namely users who obtain data entirely from the FineBI cache (a user who sends multiple calculation requests is counted by the engine as a single result).

    Concurrency can be estimated based on the total number of nodes as follows: Number of concurrent users per 5 minutes (Y) = 380 * Number of nodes (X).
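
    For example, with this estimate a two-node cluster supports roughly 380 * 2 = 760 concurrent users per 5 minutes, and a five-node cluster roughly 1900.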

    When the number of requesting users per second reaches 160, the download bandwidth of the load-balancing server needs to reach 100 MB/s.

    The JVM memory is not the same as the total device memory. You are advised to set the JVM memory to 2/3 to 3/4 of the total device memory.

    Users per 5 minutes: < 400
    · Users per second: 40
    · Recommended configuration: two-node cluster. Each node should meet: CPU 8 cores/16 threads, 2.5 GHz; JVM memory 16 GB; physical memory 32 GB.
    · Minimum configuration: standalone. The node should meet: CPU 16 cores/32 threads, 2.5 GHz; JVM memory 32 GB; physical memory 64 GB.

    Users per 5 minutes: 400 – 800
    · Users per second: 80
    · Recommended configuration: two-node cluster. Each node should meet: CPU 16 cores/32 threads, 2.5 GHz; JVM memory 32 GB; physical memory 64 GB.
    · Minimum configuration: two-node cluster. Each node should meet: CPU 16 cores/32 threads, 2.5 GHz; JVM memory 24 GB; physical memory 48 GB.

    Users per 5 minutes: 800 – 1100
    · Users per second: 110
    · Recommended configuration: three-node cluster. Each node should meet: CPU 16 cores/32 threads, 2.5 GHz; JVM memory 32 GB; physical memory 64 GB.
    · Minimum configuration: three-node cluster. Each node should meet: CPU 16 cores/32 threads, 2.5 GHz; JVM memory 24 GB; physical memory 48 GB.

    Users per 5 minutes: 1100 – 1600
    · Users per second: 160
    · Recommended configuration: four-node cluster. Each node should meet: CPU 16 cores/32 threads, 2.5 GHz; JVM memory 32 GB; physical memory 64 GB.
    · Minimum configuration: three-node cluster. Each node should meet: CPU 16 cores/32 threads, 2.5 GHz; JVM memory 32 GB; physical memory 64 GB.

    Users per 5 minutes: 1600 – 2000
    · Users per second: 190
    · Recommended configuration: five-node cluster. Each node should meet: CPU 16 cores/32 threads, 2.5 GHz; JVM memory 32 GB; physical memory 64 GB.
    · Minimum configuration: four-node cluster. Each node should meet: CPU 16 cores/32 threads, 2.5 GHz; JVM memory 32 GB; physical memory 64 GB.

    Server Environment of Other Components

    Standalone

    Applicable object:

    Standalone project

    · Application server: one node

    · FineDB configuration database: MySQL

    Parameter configuration:

    The FanRuan standalone architecture consists of the project node and the configuration database (external FineDB).

    Note:
    This section only describes the recommended configuration of other components in a standalone project. For details about related configuration requirements of the FineBI project node server, see section "Project Server Environment."
    Configuration Item
    Recommended Configuration

    Database (FineDB) Server Configuration

    Server quantity

    1

    If conditions permit, you are advised to deploy the external configuration database on a separate server.

    If conditions are limited, you can deploy the external configuration database and the FineBI project on the same server.

    However, ensure at least that the server is dedicated to the FineBI project and the database (that is, no other applications are deployed on it).

    Operating system

    Linux server

    System version: Ubuntu 18.04.4 and later versions

    System kernel version: 3.10 and later versions

    System architecture version: x86_64

    Network requirement

    (1) You are advised to deploy the configuration database and the application project on the same network segment to avoid problems such as network fluctuations.

    (2) If the configuration database and the application project are in a public network environment, the bandwidth needs to be above 10 Mbps.

    (3) The network between the configuration database and the application project needs to be stable, and their ports need to be accessible to each other.

    Database quantity

    1

    Database type

    MySQL 8 database

    Database version: versions later than MySQL 8.0.20

    Database driver version: 5.1.49 (compatible with MySQL 5/8)

    Database character set: utf8

    Database collation: utf8_bin

    Database name: only numbers, letters, underscores, and periods (.) supported

    Database permission: create, delete, alter, update, select, insert, and index

    CPU

    Above 2.5 GHz

    8 cores; 16 threads

    Physical memory

    8 GB

    Available disk space

    More than 200 GB

    Network speed

    100 Mbps

    Disk read/write speed

    100 MB/s
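
    The database settings above can be prepared in advance on the MySQL server. The following is a minimal sketch, assuming a MySQL 8 server; the database name (finedb), account name (finebi), and password are placeholders, and the privilege list simply mirrors the requirements above.

# Create the configuration database with the required character set and collation
mysql -u root -p -e "CREATE DATABASE finedb CHARACTER SET utf8 COLLATE utf8_bin;"
# Create a dedicated account for the FineBI project (placeholder credentials)
mysql -u root -p -e "CREATE USER 'finebi'@'%' IDENTIFIED BY 'change_me';"
# Grant only the privileges listed above
mysql -u root -p -e "GRANT CREATE, DELETE, ALTER, UPDATE, SELECT, INSERT, INDEX ON finedb.* TO 'finebi'@'%';"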

    Standard Direct-Connected/Extracted Cluster

    Applicable object:

    Standard direct-connected/extracted cluster

    · Application server: at least two nodes

    · State server: Redis standalone

    · File server: SFTP

    · Load balancer: NGINX

    · Configuration database: MySQL

    Parameter configuration:

    The FanRuan cluster architecture consists of project nodes, the load balancer, the state server, the file server, and the configuration database (external FineDB).

    Note:
    This section only describes the recommended configuration of other components in a cluster. For details about related configuration requirements of the FineBI project node server, see section "Project Server Environment."
    Configuration Item
    Recommended Configuration

    Common Requirements

    The requirements listed in this section must be met by every type of cluster component.

    Server quantity

    If conditions permit, you are advised to deploy the load balancer, the state server, the file server, and the external configuration database each on a separate server.

    If conditions are limited, ensure that these components are deployed on at least one separate server (exclusively occupied by these components and not shared with the FineBI project).

    Operating system

    Linux server

    System version: Ubuntu 18.04.4 and later versions

    System kernel version: 3.10 and later versions

    System architecture version: x86_64

    GNU Compiler Collection (GCC)

    The deployment of both Redis and NGINX in the Linux system relies on GCC.

    Ensure that the system has the GCC environment.

    Query command: gcc -v

    Installation command: yum install gcc gcc-c++ (CentOS/RHEL) or apt-get install gcc g++ (Ubuntu/Debian)

    Network requirement

    (1) You are advised to deploy each component and the application project on the same network segment to avoid problems such as network fluctuations.

    (2) If the components and the application project are in a public network environment, the bandwidth needs to be above 10 Mbps.

    (3) The network between each component and the application project needs to be stable, and their ports need to be accessible to each other.

    State Server

    Server quantity

    1

    Deployment solution

    Redis standalone

    JVM/Physical memory

    4 GB/8 GB

    CPU

    Above 2.5 GHz

    8 cores; 16 threads

    Available disk space

    More than 100 GB

    At least 40 GB for the root directory

    File Server

    Server quantity

    1

    Deployment solution

    SFTP

    Physical memory

    8 GB

    CPU

    Above 2.5 GHz

    8 cores; 16 threads

    Available disk space

    500 GB – 1 TB

    At least 40 GB for the root directory

    Expand the capacity according to usage.

    Notes

    The FTP server can be installed on the Linux system only by the root user (the user with the highest permissions); otherwise, the installation cannot proceed. If no FTP server needs to be installed, this requirement can be ignored.

    Load Balancer

    Server quantity

    1

    Deployment solution

    NGINX

    1.21 and later versions recommended (preferentially the latest version)

    Physical memory

    8 GB

    CPU

    Above 2.5 GHz

    8 cores; 16 threads

    Available disk space

    More than 100 GB

    At least 40 GB for the root directory

    Database (FineDB) Server Configuration

    Server quantity

    1

    Database type

    MySQL 8 database

    Database version: versions later than MySQL 8.0.20

    Database driver version: 5.1.49 (compatible with MySQL 5/8)

    Database character set: utf8

    Database collation: utf8_bin

    Database name: only numbers, letters, underscores, and periods (.) supported

    Database permission: create, delete, alter, update, select, insert, and index

    CPU

    Above 2.5 GHz

    8 cores; 16 threads

    Physical memory

    8 GB

    Available disk space

    More than 200 GB

    Network speed

    100 Mbps

    Disk read/write speed

    100 MB/s
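
    For reference, the following is a minimal sketch of an NGINX load-balancing configuration for two project nodes. It is an illustration only: the node IP addresses, the application port (a default Tomcat port is assumed), and the file path are placeholders, and ip_hash is shown as one common way to keep a client's requests on the same node.

# Write an illustrative site configuration and reload NGINX
cat > /etc/nginx/conf.d/finebi.conf <<'EOF'
upstream finebi_nodes {
    ip_hash;                    # keep each client on the same project node
    server 192.168.1.11:8080;   # project node 1 (placeholder address and port)
    server 192.168.1.12:8080;   # project node 2 (placeholder address and port)
}
server {
    listen 80;
    location / {
        proxy_pass http://finebi_nodes;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
EOF
nginx -t && nginx -s reload    # check the syntax, then reload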

    Highly-Available Extracted Cluster

    Applicable object:

    Highly-available extracted cluster

    · Application server: at least two nodes

    · State server: Redis cluster with three master nodes and three slave nodes

    · File server: NAS

    · Load balancer: NGINX + Keepalived

    · Configuration database: PostgreSQL

    Parameter configuration:

    The FanRuan cluster architecture consists of project nodes, the load balancer, the state server, the file server, and the configuration database (external FineDB).

    Note:
    This section only describes the recommended configuration of other components in a cluster. For details about related configuration requirements of the FineBI project node server, see section "Project Server Environment."
    Configuration Item
    Recommended Configuration

    Common Requirements

    The requirements listed in this section must be met by every type of cluster component.

    Server quantity

    If conditions permit, you are advised to deploy the load balancer, the state server, the file server, and the external configuration database each on a separate server.

    If conditions are limited, ensure that these components are deployed on at least one separate server (exclusively occupied by these components and not shared with the FineBI project).

    Operating system

    Linux server

    System version: Ubuntu 18.04.4 and later versions

    System kernel version: 3.10 and later versions

    System architecture version: x86_64

    GNU Compiler Collection (GCC)

    The deployment of both Redis and NGINX in the Linux system relies on GCC.

    Ensure that the system has the GCC environment.

    Query command: gcc -v

    Installation command: yum install gcc gcc-c++ (CentOS/RHEL) or apt-get install gcc g++ (Ubuntu/Debian)

    Network requirement

    (1) You are advised to deploy each component and the application project on the same network segment to avoid problems such as network fluctuations.

    (2) If the components and the application project are in a public network environment, the bandwidth needs to be above 10 Mbps.

    (3) The network between each component and the application project needs to be stable, and their ports need to be accessible to each other.

    State Server

    Server quantity

    Redis cluster with three master nodes and three slave nodes

    If conditions permit, prepare six servers on each of which one node is deployed.

    If conditions are limited, prepare three servers on each of which one master node and one slave node are deployed.

    JVM/Physical memory

    4 GB/8 GB

    CPU

    Above 2.5 GHz

    8 cores; 16 threads

    Available disk space

    More than 100 GB

    At least 40 GB for the root directory

    File Server

    Server quantity

    1

    Deployment solution

    NAS

    Physical memory

    8 GB

    CPU

    Above 2.5 GHz

    8 cores; 16 threads

    Available disk space

    500 GB – 1 TB

    At least 40 GB for the root directory

    Expand the capacity according to usage.

    Load Balancer

    Server quantity

    2

    Deployment solution

    Keepalived + NGINX

    NGINX 1.21 and later versions recommended (preferentially the latest version)

    Physical memory

    8 GB

    CPU

    Above 2.5 GHz

    8 cores; 16 threads

    Available disk space

    More than 100 GB

    At least 40 GB for the root directory

    Database (FineDB) Server Configuration

    Server quantity

    2

    Database type

    Highly available database deployed in master-slave mode

    Database permission: create, delete, alter, update, select, insert, and index

    CPU

    Above 2.5 GHz

    8 cores; 16 threads

    Physical memory

    8 GB

    Available disk space

    More than 200 GB

    Network speed

    100 Mbps

    Disk read/write speed

    100 MB/s
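
    For reference, the two HA-specific components above can be sketched as follows. This is an illustration only, not a full procedure: the node IP addresses, ports, network interface, and virtual IP are placeholders, and authentication settings are omitted.

# Create the Redis cluster with three masters and three replicas,
# assuming six Redis instances are already running on the placeholder addresses
redis-cli --cluster create \
    192.168.1.21:6379 192.168.1.22:6379 192.168.1.23:6379 \
    192.168.1.24:6379 192.168.1.25:6379 192.168.1.26:6379 \
    --cluster-replicas 1

# Minimal Keepalived configuration on the primary NGINX server;
# the backup server uses "state BACKUP" and a lower priority
cat > /etc/keepalived/keepalived.conf <<'EOF'
vrrp_instance VI_1 {
    state MASTER
    interface eth0              # network interface that carries the virtual IP
    virtual_router_id 51
    priority 100
    advert_int 1
    virtual_ipaddress {
        192.168.1.100           # virtual IP that clients access
    }
}
EOF
systemctl restart keepalived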

     

