Configuring WebSocket Port in Cluster Environment

  • Last update: April 24, 2026
  • Overview

    Version

    Report Server Version: 10.0

    Application Scenario

    This document introduces how to configure WebSocket ports in a cluster environment.

    Note:

    Starting from FineReport V11.0.2, a container-native WebSocket solution has been added. You are advised to first check whether the container-native WebSocket solution can be used.

    No user operation, manual configuration, or additional port opening is required. The system automatically uses the WebSocket built into the web container for connection, and the WebSocket connection reuses the HTTP port.

    Example

    Modifying Field Values

    You (the super admin) can modify the WebSocket port through the FINE_CONF_ENTITY Visualization Configuration plugin. The settings take effect after you restart the server.

    Note:
    For details about how to modify field values in FineDB database tables, see FINE_CONF_ENTITY Visualization Configuration.
    Port: WebSocket port
      JAR package: /
      ID: WebSocketConfig.port
      Default value: ["38888", "39888"]
      Value range: an array of ports, ["Port number 1", "Port number 2"]; each port number must fall within the range (1024, 65535].
      Multi-value support: Supported

    Port: WebSocket forwarding port
      JAR package: dated before 2019-11-08
      ID: WebSocketConfig.requestPort
      Default value: 38889
      Multi-value support: Supported

    Port: WebSocket forwarding port
      JAR package: dated after 2019-11-08
      ID: WebSocketConfig.requestPorts
      Default value: 38889
      Multi-value support: Supported

    Notes for port configuration:

    1. The port number ranges from 1024 (excluded) to 65535 (included). Separate multiple values with commas (,), for example, ["Port number 1", "Port number 2", "Port number 3"].

    2. You are advised to configure more WebSocket forwarding ports than the number of cluster nodes, so that each node can use an available port, which prevents server startup failures caused by port conflicts.

    3. You are advised to configure multiple values for the WebSocket port as backups to prevent port conflicts when multiple projects are deployed on a single server.

    4. Do not set the port number to 3389, which is the default Remote Desktop (RDP) port for remote server connections.

    5. If the project and the nginx load balancer are deployed in the same environment, do not use the same port for both the WebSocket port and the WebSocket forwarding port.

    6. If there are trailing spaces in WebSocketConfig.port, WebSocketConfig.requestPort, and WebSocketConfig.requestPorts, the configuration will not take effect.

    7. The parameters WebSocketConfig.port, WebSocketConfig.requestPort, and WebSocketConfig.requestPorts are case-sensitive. Wrong capitalization will prevent the configuration from taking effect.

    8. WebSocketConfig.requestPort and WebSocketConfig.requestPorts must not coexist in the fine_conf_entity table; otherwise, errors will occur.
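    The range and whitespace rules above (notes 1 and 6) can be checked mechanically before values are written to the fine_conf_entity table. The sketch below validates candidate port numbers in a POSIX shell; `valid_ws_port` is a hypothetical helper name, not part of FineReport.

```shell
# Sketch: validate a candidate WebSocket port value before writing it to
# FINE_CONF_ENTITY. Ports must lie in (1024, 65535] (note 1) and contain
# no stray whitespace (note 6). valid_ws_port is a hypothetical helper.
valid_ws_port() {
  p="$1"
  # Reject anything that is not purely digits (this also catches
  # leading/trailing spaces, which would silently break the config).
  case "$p" in (*[!0-9]*|"") return 1 ;; esac
  [ "$p" -gt 1024 ] && [ "$p" -le 65535 ]
}

for p in 38888 39888 38889; do
  if valid_ws_port "$p"; then echo "$p ok"; else echo "$p invalid"; fi
done
```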

    Configuring the Forwarding Strategy in a Cluster

    In the load balancer, configure the WebSocket forwarding strategy to use sticky sessions.

    Configuring the Listening Port in a Cluster

    In the nginx.conf file, the listening port configured in the server block must be the same as the WebSocket forwarding port.

    nginx then listens on the WebSocket forwarding port (WebSocketConfig.requestPort, 38889 by default) and proxies connections to the backend WebSocket port (WebSocketConfig.port, 38888 by default).

    In the following example, the IP addresses of the FineReport project server and the nginx server are 192.168.6.171 and 192.168.6.181, respectively.

    #User or user group; the default is nobody.
    #user  root;
    #Number of worker processes. You are advised to set it to the number of CPU cores or 'auto' for automatic detection. Note that on Windows, multiple worker processes can be started, but only one will be actually used.
    worker_processes  auto;
    #Automatically set CPU affinity based on the number of CPU cores; supported only on nginx V1.9.10 and later versions
    #worker_cpu_affinity  auto;
    #Error log file location
    error_log  logs/error.log;
    #error_log  logs/error.log  notice;
    #error_log  logs/error.log  info;
    #PID file location
    #pid        logs/nginx.pid;
    #Working mode and connection limits
    events {
        #Maximum connections per worker process. On Windows, the effective limit is 1024 regardless of this setting.
        #The total number of concurrent connections equals worker_processes multiplied by worker_connections.
        worker_connections  1024;
    }
    http {
        #MIME type definitions, loaded from mime.types
        include       mime.types;
        #Default MIME type; the default is text/plain
        default_type  application/octet-stream;
        #Log format
        log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                          '$status $body_bytes_sent "$http_referer" '
                          '"$http_user_agent" "$http_x_forwarded_for" $upstream_addr';
        #The access_log directive controls whether nginx records access logs. Setting it to off reduces disk I/O and improves performance.
        access_log  off;
        #access_log  logs/access.log  main;
        #The sendfile directive controls whether nginx uses the sendfile() system call (zero-copy) to send files.
        #For typical use cases, set it to on.
        #For I/O-heavy workloads such as file downloads, set it to off to balance disk I/O and network I/O and reduce system load.
        sendfile        on;
        #tcp_nopush     on;
        #HTTP keep-alive timeout (client <-> nginx): how long the connection stays open after a request completes.
        keepalive_timeout  65s;
        #The smaller the value of types_hash_max_size is, the less memory is used. But this may increase hash key collisions.
        types_hash_max_size 2048;
        #Enable gzip compression
        #gzip  on;
        #gzip_disable "MSIE [1-6].";
        #Request buffer settings
        client_header_buffer_size    512k;
        large_client_header_buffers  4 512k;
        #Maximum upload size. Adjust it based on your business requirements.
        client_max_body_size 100M;
        
        upstream FR.com {
            #max_fails=n fail_timeout=m: if n failed attempts occur within m seconds, the node is marked unavailable for the next m seconds. (After that period, the node is automatically marked available again, regardless of its startup state.)
            #The failed attempt is defined by the proxy_next_upstream directive in the server block below.

            server 192.168.6.171:8080 max_fails=15 fail_timeout=300s;
            keepalive 300;
            #Other server parameters:
            #down indicates that the server is unavailable.
            #backup is used to backup servers when primary servers are unavailable.
            #weight=number sets the weight; the default is 1.
            #Built-in load-balancing strategies include ip_hash, round robin, and weighted round robin (set via weight=number).
            #ip_hash;
            #↓====================Active health check module configuration====================↓#
            ## interval: interval between health check packets sent to the backend, in ms
            ## fall(fall_count): After this many consecutive failures, the server is considered down.
            ## rise(rise_count): After this many consecutive successes, the server is considered up.
            ## timeout: timeout for a backend health request, in ms.
            ## type: type of health check packet. TCP, UDP, and HTTP are supported.
            #check interval=2000 rise=5 fall=10 timeout=10000 type=http;
            #check_http_send "GET /webroot/decision/system/health HTTP/1.0\r\n\r\n"; # Health check request
            #check_http_expect_alive http_2xx http_3xx; #The directive defines which HTTP response statuses are considered healthy. By default, 2xx and 3xx are considered healthy.
            #↑====================Active health check module configuration====================↑#
        }
        upstream WBS.com {
            server 192.168.6.171:38888 max_fails=15 fail_timeout=300s;
            #ip_hash is required here.
            ip_hash;
        }
        server {
            listen       8123;
            server_name  192.168.6.181;
            #By default, nginx drops request headers containing underscores, for example, "_device_", when forwarding them to upstream servers.
            #To preserve them, add this directive in the http or http -> server context: underscores_in_headers on.
            #charset koi8-r;
            #access_log  logs/host.access.log  main;
            #↓====================Active health check status page====================↓#
            #location /status {
            #    healthcheck_status html;
            #}
            #↑====================Active health check status page====================↑#
            #Default forwarding location for requests under /: every request following the port is reverse-proxied as configured below.
            location / {
                #For HTTP proxying, proxy_http_version should be set to "1.1", and the Connection header should be cleared (see proxy_set_header Connection "" below).
                proxy_http_version 1.1;
                #Target server (group) to forward to
                proxy_pass http://FR.com;
                #The directive defines conditions under which nginx forwards to the upstream server. The setting here indicates that failed requests (the 4xx/5xx codes listed in proxy_next_upstream) are passed to the next server.
                # Notes on failed attempts: 50x and 429 are treated as failed attempts under the current configuration. error, timeout, and invalid_header are always treated as failed attempts regardless of the configuration. http_403 and http_404 are never treated as failed attempts regardless of configuration.
                # error: failure during connection with the backend server, request sending, or response header reading
                # timeout: timeout during connection with the backend server, request sending, or response header reading, specified in proxy_next_upstream_timeout and proxy_next_upstream_tries (Both default to 0.)
                # invalid_header: invalid or empty response returned by the server
                proxy_next_upstream http_500 http_502 http_503 http_504 http_403 http_404 http_429 error timeout invalid_header non_idempotent;
                #proxy_next_upstream_timeout 0;
                #proxy_next_upstream_tries 0;
                #The directive controls how redirects returned by the backend are rewritten before being forwarded to the client. Setting it to off disables header rewriting.
                proxy_redirect off;
                #Request header settings; syntax: proxy_set_header [field] [value]
                #$host is the name of the requested host (nginx proxy server); $server_port is the host port (the nginx port). If external mapping is configured, replace $host:$server_port with address:port number (without the scheme).
                proxy_set_header Host $host:$server_port;
                #$remote_addr is the client IP address.
                proxy_set_header X-Real-IP $remote_addr;
                #$proxy_add_x_forwarded_for represents the chain of forwarding addresses. With multiple proxies, it becomes "client, proxy1, proxy2". The IP address of the original client is the left-most entry.
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                #Connection to the backend does not need to be kept alive.
                proxy_set_header Connection "";
                #nginx buffers responses from the proxy server. Responses are stored in internal buffers and not sent to the client until the full response is received.
                #Buffering improves performance for slow clients. Without it, nginx would have to relay responses synchronously, holding upstream connections longer than necessary.
                #With buffering enabled, the backend can return responses quickly while nginx holds them for as long as the client needs to download them.
                #Buffering is enabled by default.
                #proxy_buffering off;
                #Buffer size
                #proxy_buffer_size     64k;
                #Number and size of buffers per connection; syntax: proxy_buffers [number] [size];
                #proxy_buffers         32 64k;
                #When buffering is enabled and the response has not been fully read, nginx starts sending data to the client once the busy buffer reaches this size, and continues until the buffered data falls below this threshold.
                #proxy_busy_buffers_size 64k;
                #Timeout for establishing the connection to the upstream server. The default is 60s. Timeout cannot exceed 75s.
                #Timeout cannot be too long or too short and should be set based on max_fails and fail_timeout.
                #Under high concurrency, the backend may become overloaded, and timeouts may occur. In this case, you can increase the values of max_fails and fail_timeout to reduce the chance of nginx incorrectly evicting nodes.
                proxy_connect_timeout    75;
                #Reading timeout; the default is 60s. If the server sends no data within this period, the request is treated as timed out. If the workload does not involve large-dataset template computations or exports, you are advised to set this below 100s. Otherwise, size it based on the longest-running template.
                proxy_read_timeout       400;
                #Sending timeout; the default is 60s. If the server receives no data within this period, the request is treated as timed out. If the workload does not involve large-dataset template computations or exports, you are advised to set this below 100s. Otherwise, size it based on the longest-running template.
                proxy_send_timeout       400;
            }
            #Custom 404 page
            #error_page  404              /404.html;
            # redirect server error pages to the static page /50x.html
            #Custom 50x page
            error_page   500 502 503 504  /50x.html;
            location = /50x.html {
                root   html;
            }
        }
        server {
            #WebSocket forwarding port. In clustered deployments, FineReport uses 38889, and FineBI uses 48889.
            listen 38889;
            server_name 192.168.6.181;
            location / {
                proxy_http_version 1.1;
                proxy_pass http://WBS.com;
                proxy_connect_timeout 75;
                proxy_read_timeout 400;
                proxy_send_timeout 400;

                #Target protocol to upgrade to; $http_upgrade resolves to websocket in this example.
                proxy_set_header Upgrade $http_upgrade;
                #Set Connection to "upgrade" to enable the Upgrade header.
                proxy_set_header Connection "upgrade";
            }
        }
    }

    Opening Ports

    • If the firewall is enabled, you can either disable it or open the specific port.

    • If the cloud server uses security groups or similar access controls, open the required ports to external traffic.
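    For example, on a Linux server that uses firewalld, the default ports could be opened as follows (run as root, and adjust the port numbers to match your own configuration):

```shell
# Open the default WebSocket ports (38888, 39888) and the WebSocket
# forwarding port (38889); adjust to match your own configuration.
firewall-cmd --permanent --zone=public --add-port=38888/tcp
firewall-cmd --permanent --zone=public --add-port=39888/tcp
firewall-cmd --permanent --zone=public --add-port=38889/tcp
firewall-cmd --reload
# Verify that the ports are listed as open:
firewall-cmd --zone=public --list-ports
```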

    Restarting the Project

    Restart nginx and the FineReport project.

    • When restarting the project, kill the running project processes, wait 2 minutes for the ports to be released, and then restart the project. Otherwise, the restart may fail.
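    The wait can be scripted instead of timed by hand. The sketch below polls a port until it is released, using bash's /dev/tcp pseudo-device; `wait_port_free` is a hypothetical helper name, not a FineReport tool.

```shell
# Sketch: poll a TCP port until it is released (assumes bash, which
# provides the /dev/tcp pseudo-device). wait_port_free is a hypothetical
# helper name.
wait_port_free() {
  local port="$1" tries="${2:-60}"
  # While something still accepts connections on the port, keep waiting.
  while (exec 3<>"/dev/tcp/127.0.0.1/$port") 2>/dev/null; do
    tries=$((tries - 1))
    if [ "$tries" -le 0 ]; then
      return 1   # gave up: the port was never released
    fi
    sleep 2
  done
  return 0       # port is free
}

# Usage after killing the project processes:
#   wait_port_free 38888 && wait_port_free 38889 && ./startup.sh
```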

    Effect Preview

    Each cluster node attempts to bind a listening port in the following order: WebSocket forwarding port > WebSocket port. With the default values, ports 38889, 38888, and 39888 are tried in sequence.

    • If a port is successfully bound, the cluster node stops trying the others.

    • If none of the configured ports can be bound, the deployment wizard opens to guide you through modifying the listening port list. Related functions are affected during this time.
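    The selection order above can be illustrated with a small sketch that, like a cluster node, takes the first candidate port that is not already in use (bash only, via /dev/tcp; `pick_ws_port` is a hypothetical name, not the node's actual implementation):

```shell
# Sketch: emulate the node's port-selection order -- try each candidate
# in turn and stop at the first port that is not already in use.
pick_ws_port() {
  for port in "$@"; do
    # A failed connect means nothing is listening there, so it is free.
    if ! (exec 3<>"/dev/tcp/127.0.0.1/$port") 2>/dev/null; then
      echo "$port"
      return 0
    fi
  done
  return 1   # no port available: the deployment wizard would open
}

# Default order: forwarding port first, then the WebSocket ports.
pick_ws_port 38889 38888 39888
```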
