FineBI Version  Functional Changes
5.1.0: -
5.1.5: Removed the "Number of Memory Filter In Conditions", "Open Pagination Calculation", and "Pagination Calculation Summary Multi-Thread Calculation Method" parameters
5.1.6: Added the "Excel Export Data Limit" and "Concurrent Thread Limit for List Export" parameters
5.1.11: Added the "Direct Connection Query Timeout (Sec)" parameter
5.1.12: Removed the "Use Spark SQL to calculate the number of deduplicated records first" parameter
5.1.14: Added the "Self-Service Dataset Default Update Settings" parameter
5.1.18: Removed the "Cache Settings" and "Cache Time (Sec)" parameters and moved the related functions to the "System Management > Cache" interface (for details, see: Cache); optimized the allowed range of the "Data Access" parameter
FineBI allows certain BI parameters and tuning parameters to be configured in system management, so that system administrators and project implementers can quickly review the current system configuration and adjust it directly on the interface.
Log in to the data decision system as an administrator and go to "Manage > System > General" to open the BI and Spider parameter configuration page. As shown below:
Note: Spider parameters apply only to the extracted-data version of FineBI; they do not take effect in the real-time (direct-connection) data version.
Each parameter below is described by its definition, its default value, and whether the project must be restarted after the parameter is modified.
Parameter: Data Type Recognition
Definition:
1) Whether or not the option is enabled, a field that contains decimal places is always recognized as a numeric type.
2) When data type recognition is disabled, integer values longer than 19 digits are recognized as text, and values of 19 digits or fewer are recognized as numeric.
3) When data type recognition is enabled, such values are always recognized as numeric. However, the field is then stored as a double, which only supports 16-17 significant digits of precision, so values longer than 18 digits may lose precision.
4) For tables added before the parameter was enabled (both direct-connection and extracted):
· If the table has not been edited, after enabling the parameter and restarting, you can open the table editing interface, re-fetch the numeric field type, and save again.
· If the table has already been edited (field type conversion done in 5.1.5 and later), the field will still appear as text after enabling the parameter and restarting; it will not be re-read as numeric.
Default: Off
Restart required: Yes
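The precision loss described in point 3) can be reproduced in any language that uses IEEE 754 doubles; a quick Python check:

```python
# A double holds only 15-17 significant decimal digits, so a 20-digit
# integer no longer round-trips once the field is stored as a double.
big = 12345678901234567890          # 20 digits
assert int(float(big)) != big       # precision is lost

small = 1234567890123456            # 16 digits, still exactly representable
assert int(float(small)) == small
```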
Parameter: Data Access
Definition: Limits the number of rows of data that can be loaded into server memory. Not every large-data-volume computing scenario reads all of the data into memory; the FineBI Spider engine uses an intelligent memory usage policy.
· If the value is too low, the accuracy of data calculations will be affected.
· If the value is too high, the system risks going down.
The setting takes effect after a restart. It is recommended to keep the default value of 1000000. Recommended range: [10,000, 1,000,000]; the maximum allowed value is 10,000,000.
Default: 1000000
Parameter: Parameter Control Filtering Takes Effect
Definition: Sets whether a control's bound-parameter function and its filtering function take effect at the same time.
Default: Off, i.e. they do not take effect at the same time
Restart required: No
Parameter: Chinese Sorting
Definition: Whether to use Chinese sorting. The default is off, meaning Chinese sorting is not used.
Note: After enabling it, extracted tables must be re-extracted.
For details, see: Sorting, Section 1.4
Parameter: Number of Multi-Index Calculation Threads
Definition: The number of threads used in multi-index calculation.
Default: 20
Parameter: Excel Export Data Limit
Definition: When users export to Excel, the data volume may be very large; this parameter lets you cap the export size. If an export exceeds the limit, an error is reported directly.
Unit: cells (rows × columns)
Default: empty, i.e. no limit
Configuration range: 0-2,000,000,000
Recommended range: 0-1,000,000,000
Default: null
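FineBI's internal check is not published; the following is only a minimal sketch of how a cell-based (rows × columns) limit of this kind works, with `exceeds_export_limit` as a hypothetical helper name:

```python
from typing import Optional

def exceeds_export_limit(rows: int, cols: int, limit: Optional[int]) -> bool:
    """Return True if an export of rows * cols cells exceeds the limit.

    An empty (None) limit mirrors the default setting: no cap is applied
    and exports are never rejected on size grounds.
    """
    if limit is None:
        return False
    return rows * cols > limit
```

For example, exporting 100,000,000 rows of 30 columns (3 billion cells) would exceed the maximum configurable limit of 2,000,000,000 cells.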
Parameter: Concurrent Thread Limit for List Export
Definition: When many users export large Excel files at the same time, the supported concurrency may be exceeded, affecting all users. This parameter limits how many users can export detail tables simultaneously; exports beyond the limit must wait.
Configuration range: 1-10
Recommended range: 1-5; keeping the default value is recommended.
Default: 3
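The "exports beyond the limit must wait" behavior is the classic bounded-concurrency pattern; a hypothetical sketch using a semaphore sized like the default of 3 (FineBI's actual implementation is not published):

```python
import threading

# At most 3 detail-table exports run at once; further callers block here
# until a slot frees up, analogous to exports having to wait.
EXPORT_SLOTS = threading.BoundedSemaphore(3)

def export_detail_table(run_export):
    """Run one export, waiting if the concurrency limit is reached."""
    with EXPORT_SLOTS:          # acquire a slot; released automatically
        return run_export()
```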
Parameter: Direct Connection Query Timeout (Sec)
Definition: When a dashboard contains too many components, or a single component's query runs too long, subsequent BI requests can be blocked and the product can easily be mistaken for being down. Setting a direct-connection query timeout aborts any real-time data query that exceeds the limit, preventing abnormally slow queries from blocking other normal queries.
When a query is interrupted, the component returns an error such as: "The component query time exceeds X min, and the query is interrupted."
Unit: seconds
Default: 180
Recommended range: 10-300
Scope: all BI direct-connection query requests except those that fetch the table structure
Default: 180
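A per-query timeout of this kind can be sketched with a worker thread and a bounded wait; this is only an illustration in the spirit of the parameter, with `run_query` standing in for the real database call:

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError

def query_with_timeout(run_query, timeout_sec=180):
    """Run a query, raising an error if it exceeds timeout_sec seconds."""
    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(run_query)
        try:
            return future.result(timeout=timeout_sec)
        except TimeoutError:
            # cancel() cannot stop an already-running call; the worker is
            # left to finish and its result is simply discarded.
            future.cancel()
            raise RuntimeError(
                f"The component query time exceeds {timeout_sec}s; "
                "the query is interrupted"
            )
```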
Spider parameters are divided into basic parameters and advanced tuning parameters, as shown in the following figure:
Each Spider parameter below is likewise described by its definition, default value, and whether the project must be restarted after modification.
Parameter: Analysis User Self-Service Dataset Disk Usage (cell)
Definition: The maximum number of cells supported when generating a self-service dataset; if the limit is exceeded, generation fails. The parameter only affects the size of the data folder under the data storage path (by default the %FineBI%/bin/ROOT folder). If the server disk is larger than 1 TB, you can consider increasing it; at 1 TB or below, keep the default.
Note: Setting this value too high can fill the disk and cause the system to go down.
For more information, see: Self-Service Dataset Data Size Limits.
Default: 50,000,000
Parameter: Self-Service Dataset Default Update Settings
Definition: Whether a self-service dataset's single-table update follows its parent table's update by default. For details, see: Self-Service Dataset Single Table Update.
Default: Update with parent table
Parameter: Number of Extraction Compression Threads
Definition: The number of sharding (compress & write) threads used when extracting data. When memory is very small (4 GB or less) and cannot be expanded, reduce the thread count to relieve memory pressure.
Default: 8
Parameter: Extraction Compression Thread Queue Size
Definition: The length of the wait queue for unprocessed shards when extracting data. When memory is very small (4 GB or less) and cannot be expanded, reduce the queue length to relieve memory pressure.
Default: 200
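The trade-off behind this queue size is ordinary producer-consumer backpressure: a bounded queue caps how many unprocessed shards sit in memory, at the cost of the producer blocking when it is full. A hypothetical sketch (FineBI's internals are not published):

```python
import queue

# Unprocessed shards wait in a bounded queue; maxsize plays the role of
# the "queue size" parameter (default 200). A smaller maxsize means less
# memory held by pending shards.
shards = queue.Queue(maxsize=200)

def produce(shard):
    shards.put(shard)      # blocks when the queue is full (backpressure)

def consume():
    shard = shards.get()   # a compression thread picks up the next shard
    shards.task_done()
    return shard
```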
Parameter: Spark Log Output Level
Definition: The Spark log output level for the standard output stream; logs are written to Tomcat's catalina.out file or BI's nohup file.
Available options: INFO, WARN, ERROR, DEBUG.
· INFO: prints error classes and basic execution logs
· WARN: prints warnings and prompt information
· ERROR: prints only error logs
· DEBUG: prints all logs
Default: INFO
Parameter: Number of Execution Threads for New Data Extraction Tasks
Definition: The number of threads that newly added data extraction tasks execute simultaneously. When memory is very small (4 GB or less) and cannot be expanded, reduce the thread count to relieve memory pressure.
Default: 5
Parameter: Date Simplified Mode
Definition: When simplified mode is enabled, only a small number of grouping types are pre-generated for date fields during data extraction, which speeds up generation and reduces the space occupied. Groupings that are not pre-generated may incur a performance cost at calculation time.
Restart required: Yes (and the data must be re-updated)
Parameter: Spark Driver Port
Default: 17777
Parameter: Spark BlockManager Port
Default: 17778
Parameter: Spark Local Mode Temporary File Path
Definition: Spark needs a certain amount of space for writing temporary files. Pointing this path at an SSD mount can improve the performance of Spark join processing and Spark SQL queries.
Note: This parameter is invalid in the cluster version and must be configured on the server side.
Default: null (on Linux, effectively /tmp)
Parameter: Spark Dynamic Adjustment
Definition: Spark dynamically adjusts the number of tasks according to the amount of data to be computed. When enabled, computing performance on small data volumes improves significantly.
Default: On
Parameter: Incremental Update Block Consolidation Schedule
Definition: During this time period, incremental update tasks do not perform the merge operation, which improves the speed of incremental updates.
Format: hh:mm:ss-hh:mm:ss
Example: 10:10:10-12:12:12
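A window in the `hh:mm:ss-hh:mm:ss` format above can be parsed and checked like this (an illustrative sketch; the function names are hypothetical):

```python
from datetime import datetime, time

def parse_window(spec):
    """Parse an 'hh:mm:ss-hh:mm:ss' window, e.g. '10:10:10-12:12:12'."""
    start, end = spec.split("-")
    fmt = "%H:%M:%S"
    return (datetime.strptime(start, fmt).time(),
            datetime.strptime(end, fmt).time())

def in_window(now, spec):
    """True if the time-of-day 'now' falls inside the configured window,
    i.e. merges would be suppressed at that moment."""
    start, end = parse_window(spec)
    return start <= now <= end
```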