Manual FineChatBI Deployment (Not by FineOps)

  • Last update: August 13, 2025
  • Overview

    This document describes how to deploy FineChatBI manually.

    (1) Since FineChatBI uses FineBI as the base for analysis, you need to prepare a FineBI project first.

    (2) You need to deploy a semantic parsing SLM. User questions are converted into executable data query statements by the semantic parsing SLM, which is the core of FineChatBI's functionality.

    (3) You need to deploy FineAI. FineAI is used as an algorithm tool and undertakes LLM forwarding work.

    (4) You need to install the FineChatBI Q&A plugin.

    Note:
    The deployment solution in this document is only applicable to non-FineOps deployment scenarios. If FineBI is deployed using FineOps, FineChatBI also needs to be deployed using FineOps, in which case you cannot deploy FineChatBI according to the deployment method described in this document.

    Environment Requirements

    Two servers need to be prepared:

    Resource Item | Configuration Requirement

    Server one:

    Deploy FineBI V6.1 or a later version.

    Select the required server by referring to Confirming Server Configuration of the FineBI Project.

    Calculation rule: Count FineChatBI users as FineBI's daily active users, calculate required server resources, and reserve an additional 10% of server resources for FineChatBI use.

    • For users who have not deployed FineBI, follow the calculation rule, select the required server by referring to Confirming Server Configuration of the FineBI Project, and complete FineBI deployment by referring to New Project Deployment.

    • For users who have deployed FineBI, follow the calculation rule and determine whether the existing server needs additional resources. If the FineBI version is lower, you need to upgrade FineBI to V6.1 or later versions.
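    The calculation rule above can be illustrated with a small worked example. The user counts and the base memory figure below are hypothetical, not values from this guide:

    ```shell
    # Hypothetical inputs: 300 existing FineBI daily active users, 50 FineChatBI users
    bi_dau=300
    chatbi_users=50

    # Count FineChatBI users as FineBI daily active users
    total_dau=$((bi_dau + chatbi_users))

    # Suppose the sizing reference for this DAU calls for 64 GB of memory;
    # reserve an additional 10% of server resources for FineChatBI
    base_mem_gb=64
    reserved_mem_gb=$((base_mem_gb * 110 / 100))

    echo "Size the server for $total_dau DAU with at least $reserved_mem_gb GB of memory"
    ```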

    Server two:

    Deploy a semantic parsing SLM.
    Deploy FineAI.

    Linux kernel version: later than 3.10

    Architecture: 64-bit

    CPU cores: ≥ 4 (8 recommended)
    Memory: 16 GB
    Hard disk: ≥ 50 GB (70 GB recommended)

    Storage device: solid-state drive (SSD)

    Docker: 20.0.0 or later

    Graphics card: none required (minimum configuration); GeForce RTX 4090 or 3090 recommended
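    The requirements above can be checked on server two with a short script. This is a sketch for Linux, using the thresholds listed in this table (a 16 GB machine may report slightly less usable memory, so the memory check uses 15 GB as the floor):

    ```shell
    #!/bin/bash
    # Check server two against the requirements listed above (Linux only)
    kernel=$(uname -r)
    arch=$(uname -m)
    cores=$(nproc)
    mem_gb=$(( $(grep MemTotal /proc/meminfo | awk '{print $2}') / 1024 / 1024 ))

    echo "Kernel: $kernel  Arch: $arch  Cores: $cores  Memory: ${mem_gb} GB"

    [ "$arch" = "x86_64" ] || echo "WARN: 64-bit architecture expected"
    [ "$cores" -ge 4 ]     || echo "WARN: at least 4 cores required (8 recommended)"
    [ "$mem_gb" -ge 15 ]   || echo "WARN: 16 GB memory required"
    command -v docker >/dev/null 2>&1 || echo "WARN: Docker 20.0.0 or later not found"
    echo "check complete"
    ```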

    Semantic Parsing SLM Deployment

    Obtain deployment resources for the semantic parsing SLM:

    Resource | Acquisition Channel

    Image file | Contact our operations personnel to obtain the image file of the semantic parsing SLM.

    Code package | Contact our operations personnel to obtain the code package.

    Docker Installation

    Docker is required during the installation. First check whether Docker is installed on the server by running the command docker --version.

    • If Docker is already installed, Docker version information will be displayed, as shown in the following figure.

    • If not installed, "command not found" will be displayed. In this case, install Docker on the server by referring to Online Installation of Docker on Linux Systems.

    Image Installation

    (0) (Optional) Check image file integrity.

    If the image installation fails, check whether the MD5 checksum of the image file matches the one in this document. If not, download the file again.

    Example command: md5sum fine-chat-bi-parser-base_v1_6.tar

    Image File Name | MD5 Checksum

    fine-chat-bi-parser-base_v1_6.tar | a4b82e5ea243fa7acd9aa6771eb55810
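    Step (0) can be scripted as follows. This sketch assumes the image file sits in the current directory and uses the checksum from the table above:

    ```shell
    # Compare the image file's MD5 checksum with the value documented above
    expected="a4b82e5ea243fa7acd9aa6771eb55810"
    file="fine-chat-bi-parser-base_v1_6.tar"

    if [ -f "$file" ]; then
      actual=$(md5sum "$file" | awk '{print $1}')
      if [ "$actual" = "$expected" ]; then
        echo "checksum OK"
      else
        echo "checksum MISMATCH: download the image file again"
      fi
    else
      echo "file not found: $file"
    fi
    ```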

    (1) Upload the image file to the specified folder on the server.

    The path of the folder to which the image file is uploaded is /home/AI in this example.

    Note:
    Upload the image file directly. Do not decompress it after the upload.

    (2) Enter the folder path by the command cd Folder path.

    Example command: cd /home/AI

    (3) Check the file.

    Example command: ls

    (4) Load the image file into Docker by the command docker load -i Image file package.

    Example command: docker load -i fine-chat-bi-parser-base_v1_6.tar

    (5) After the loading is complete, check whether the image name and version appear by the command docker images. If so, the import was successful.

    Example command: docker images

    (6) Run the image by the command docker run -m 8g -it -e TZ=Asia/Shanghai --name fine-chat-bi-parser-base -d -p 8666:8666 Base image name:Base image version. The -m 8g option caps the container's memory at 8 GB.

    Example command: docker run -m 8g -it -e TZ=Asia/Shanghai --name fine-chat-bi-parser-base -d -p 8666:8666 fine-chat-bi-parser-base:v1.6

    (7) Check the container by the command docker ps.
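    Steps (1) through (7) can be consolidated into one script. This is a sketch: the path /home/AI and the image tag are taken from the examples above, and the Docker commands are skipped if Docker or the image file is absent.

    ```shell
    #!/bin/bash
    # Consolidated sketch of the image installation steps above
    IMAGE_TAR="/home/AI/fine-chat-bi-parser-base_v1_6.tar"
    IMAGE="fine-chat-bi-parser-base:v1.6"
    CONTAINER="fine-chat-bi-parser-base"

    if command -v docker >/dev/null 2>&1 && [ -f "$IMAGE_TAR" ]; then
      docker load -i "$IMAGE_TAR"                      # step (4): load the image
      docker images | grep fine-chat-bi-parser-base    # step (5): confirm the image exists
      docker run -m 8g -it -e TZ=Asia/Shanghai \
        --name "$CONTAINER" -d -p 8666:8666 "$IMAGE"   # step (6): start the container
      docker ps | grep "$CONTAINER"                    # step (7): confirm it is running
    else
      echo "Docker or $IMAGE_TAR missing; follow the numbered steps above manually"
    fi
    ```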

    Code Installation

    (1) Upload the obtained code file encrypt_vXX_XXX.tar to the server and decompress it.

    Command: tar -xvf Code file path/Code file name

    Example command: tar -xvf /home/AI/encrypt_v1_5_1.tar

    (2) Copy the decompressed code folder into the container by the command docker cp Code folder path fine-chat-bi-parser-base:/root/.

    Example command: docker cp /home/AI/encrypt_v1_5_1/ fine-chat-bi-parser-base:/root/

    (3) Open a shell inside the container by the command docker exec -it Container name /bin/bash.

    Example command: docker exec -it fine-chat-bi-parser-base /bin/bash

    (4) Enter the code folder by the command cd /root/Code folder name/pipeline/.

    Example command: cd /root/encrypt_v1_5_1/pipeline/

    (5) Run the code by the command python app.py.

    Example command: python app.py

    (6) Press Ctrl+P and Ctrl+Q sequentially to exit the Docker container.
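    The code installation can be scripted in the same way. Note that this sketch backgrounds app.py with docker exec -d rather than attaching interactively and detaching with Ctrl+P, Ctrl+Q; the paths and names are taken from the examples above:

    ```shell
    #!/bin/bash
    # Consolidated sketch of the code installation steps above
    CODE_TAR="/home/AI/encrypt_v1_5_1.tar"
    CODE_DIR="encrypt_v1_5_1"
    CONTAINER="fine-chat-bi-parser-base"

    if command -v docker >/dev/null 2>&1 && [ -f "$CODE_TAR" ]; then
      tar -xvf "$CODE_TAR" -C /home/AI                      # step (1): decompress
      docker cp "/home/AI/$CODE_DIR/" "$CONTAINER:/root/"   # step (2): copy into the container
      # steps (3)-(6): run app.py inside the container, detached
      docker exec -d "$CONTAINER" \
        bash -c "cd /root/$CODE_DIR/pipeline/ && python app.py"
    else
      echo "Docker or $CODE_TAR missing; follow the numbered steps above manually"
    fi
    ```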

    FineAI Deployment

    Docker Installation

    Check whether Docker is installed on the server by the command docker --version. If installed, the prerequisite for FineAI deployment is met.

    • If not installed, "command not found" will be displayed, as shown in the following figure. In this case, you need to install Docker on the server by referring to Online Installation of Docker on Linux Systems.

    • If already installed, Docker version information will be displayed.

    Docker Image File Upload and Running

    Contact our operations personnel to obtain the FineAI docker image file.

    (1) Transfer the Docker image file (fine_ai.tar in the following figure) to the specified directory on the server. The example path is /home/fineai.

    If the name of the obtained image file is suffixed with .gz, you need to decompress the file. The following figure shows the post-decompression result.

    Example command: gunzip fine-ai-base_v0_1.tar.gz

    (2) Run the command docker load to import the Docker image file.

    Example command: docker load -i fine-ai-base_v0_1.tar

    (3) Run the command docker images to check whether the image has been successfully imported.

    Example command: docker images

    (4) Based on the imported image, create and start a container by the command docker run.

    Example command: docker run -m 8g -e TZ=Asia/Shanghai --name fine_ai -p 7666:7666 -it -d fine-ai-base:v0.2 /bin/bash

    Code Installation

    Contact our operations personnel to obtain the FineAI code file.

    (1) Transfer the code file encrypt_fine_ai_xxxxx.tar to the specified directory on the server. The example path is /home/fineai.

    (2) Decompress the code file by the command tar -xvf.

    Example command: tar -xvf encrypt_fine_ai_xxxxx.tar

    (3) Run the command docker cp to copy the decompressed code folder to the container path fine_ai:/root/.

    Example command: docker cp encrypt_fine_ai_xxxxx fine_ai:/root/

    (4) Enter the container and switch to the code path within the container to run the code script.

    i. Enter the container fine_ai by the command docker exec -it fine_ai /bin/bash.

    ii. Switch from the current working directory to the code path by the command cd /root/encrypt_fine_ai_xxx/pipeline.

    iii. Run the script by the command python app.py.

    (5) Press Ctrl+P and Ctrl+Q sequentially to exit the Docker container.
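    The FineAI image and code steps above follow the same pattern as the semantic parsing SLM and can be sketched together. The code file name keeps the placeholder xxxxx from this guide and must be replaced with the actual version; as before, the sketch detaches with docker exec -d instead of Ctrl+P, Ctrl+Q:

    ```shell
    #!/bin/bash
    # Sketch of the FineAI deployment steps above; replace xxxxx with the real version
    IMAGE_TAR="/home/fineai/fine-ai-base_v0_1.tar"
    IMAGE="fine-ai-base:v0.2"
    CONTAINER="fine_ai"
    CODE_DIR="encrypt_fine_ai_xxxxx"

    if command -v docker >/dev/null 2>&1 && [ -f "$IMAGE_TAR" ]; then
      docker load -i "$IMAGE_TAR"
      docker run -m 8g -e TZ=Asia/Shanghai --name "$CONTAINER" \
        -p 7666:7666 -it -d "$IMAGE" /bin/bash
      tar -xvf "/home/fineai/$CODE_DIR.tar" -C /home/fineai
      docker cp "/home/fineai/$CODE_DIR" "$CONTAINER:/root/"
      docker exec -d "$CONTAINER" \
        bash -c "cd /root/$CODE_DIR/pipeline && python app.py"
    else
      echo "Docker or $IMAGE_TAR missing; follow the numbered steps above manually"
    fi
    ```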

    FineChatBI Plugin Installation

    (1) Contact our operations personnel to download the FineChatBI plugin.

    (2) Log in to the management platform as the super administrator, choose Management System > Plugin Management > Store App, click Local Install, and select the installation package to complete the installation.

    (3) After completion, refresh the page, choose Management System > Intelligent Q&A Configuration > Other Configurations, configure the server IP address and port number, and click Save, as shown in the following figure.

    Item | Description

    Host | IP address of the semantic parsing SLM

    Port Number | Port number (8666 by default) of the semantic parsing SLM

    FineAI Service Host | IP address of FineAI

    FineAI Port | Port number (7666 by default) of FineAI

    (4) Wait until the BI Q&A button appears in the lower right corner of the management system, as shown in the following figure.
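    After saving the configuration, you can verify that both services are reachable from the FineBI server with a quick TCP check. The IP address below is a placeholder; the ports are the defaults from the table above:

    ```shell
    #!/bin/bash
    # Check that the semantic parsing SLM (8666) and FineAI (7666) ports answer
    SLM_HOST="192.168.1.100"   # placeholder: IP address of the semantic parsing SLM
    AI_HOST="192.168.1.100"    # placeholder: IP address of FineAI
    SLM_PORT=8666
    AI_PORT=7666

    for hp in "$SLM_HOST:$SLM_PORT" "$AI_HOST:$AI_PORT"; do
      host=${hp%:*}; port=${hp#*:}
      if timeout 2 bash -c "echo > /dev/tcp/$host/$port" 2>/dev/null; then
        echo "$hp reachable"
      else
        echo "$hp NOT reachable: check the firewall and that the container is running"
      fi
    done
    ```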

    License Installation

    Contact our operations personnel to obtain the license file (fanruan.lic).

    Next Step: LLM Connection

    Connect to the LLM according to Service Architecture Overview.

