The following figure shows the overall service architecture of FineChatBI.
FineChatBI

FineChatBI is based on FineBI with AI components added. For deployment instructions, see Using FineOps to Deploy AI Services for FineBI.

General LLM

General large language models (LLMs) support the following functions:

Data asking (semantic understanding and transcription in intelligent mode)
Idea asking
One-click synonym configuration

Reasoning LLM (optional)

Reasoning LLMs support the following functions:

Data interpretation
Attribution analysis
Since FineChatBI is built on FineBI, FineBI V6.1 or a later version must be deployed.
Select the required server by referring to Confirming Server Configuration of the FineBI Project.
Calculation rule: Count FineChatBI users as FineBI daily active users, calculate the server resources required for that load, and reserve an additional 10% of server resources for FineChatBI.
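The calculation rule above can be sketched as follows. The per-user resource figures are illustrative assumptions only, not official FineBI sizing numbers; substitute the values from your own FineBI capacity plan.

```python
# Hedged sketch of the sizing rule: treat FineChatBI users as FineBI daily
# active users, size the server from that load, then reserve an extra 10%
# for FineChatBI. The per-100-user defaults below are assumptions.

def required_resources(daily_active_users,
                       cpu_per_100_users=4,      # assumed cores per 100 DAU
                       mem_gb_per_100_users=8):  # assumed GB per 100 DAU
    """Return (cpu_cores, memory_gb) including the 10% FineChatBI reserve."""
    base_cpu = cpu_per_100_users * daily_active_users / 100
    base_mem = mem_gb_per_100_users * daily_active_users / 100
    # Reserve an additional 10% of server resources for FineChatBI.
    return base_cpu * 1.1, base_mem * 1.1

cpu, mem = required_resources(300)
print(f"CPU cores: {cpu:.1f}, memory: {mem:.1f} GB")
```

With 300 daily active users and the assumed baselines, this yields roughly 13.2 cores and 26.4 GB of memory after the 10% reserve.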
Since AI components consume substantial resources, you are advised to prepare a dedicated server for them.
The server requirements are as follows:
Recommended configuration: 16-core CPU, 32 GB of available memory, 100 GB of available disk space, and a server dedicated to AI components
Minimum configuration: 12-core CPU, 24 GB of available memory, 50 GB of available disk space, and a server shared between AI components and the project
For details, see Preparing the AI Component Server.
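The two configurations above can be expressed as a simple pre-deployment check. This is a minimal sketch using only the Python standard library; the helper name and the way local specs are gathered are illustrative assumptions, not part of FineBI tooling.

```python
# Hedged sketch: validate a server against the configurations listed above.
# The thresholds come from this document; everything else is illustrative.
import os
import shutil

MIN_CPU_CORES, MIN_MEM_GB, MIN_DISK_GB = 12, 24, 50  # minimum configuration

def meets_minimum(cpu_cores, mem_gb, disk_gb):
    """True if the specs satisfy the minimum AI-component configuration."""
    return (cpu_cores >= MIN_CPU_CORES
            and mem_gb >= MIN_MEM_GB
            and disk_gb >= MIN_DISK_GB)

# Example: the recommended configuration clears the minimum bar.
print(meets_minimum(16, 32, 100))  # → True

# CPU core count and free disk space can be read from the standard library;
# available memory usually requires a platform tool or third-party package.
local_cpu = os.cpu_count()
local_disk_gb = shutil.disk_usage("/").free / 2**30
```

Note that the thresholds refer to *available* resources, so measure free capacity on the target server rather than installed totals.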