RPC Load Balancer Setup

This guide explains how to set up a highly available load balancer for Berachain's public RPC endpoints using Nginx or HAProxy. The goal is to distribute incoming RPC traffic across multiple RPC nodes so that no single node becomes a bottleneck or a single point of failure.

Prerequisites

Before setting up the load balancer, make sure you have:

  1. At least two RPC nodes running Berachain.

  2. Nginx or HAProxy installed on your load balancer server.

  3. Monitoring tools such as Prometheus and Grafana to track performance.


Step 1: Install Nginx or HAProxy

Nginx Installation

For Ubuntu/Debian:

```bash
sudo apt update
sudo apt install nginx
```

For CentOS:

```bash
sudo yum install epel-release
sudo yum install nginx
```

HAProxy Installation

For Ubuntu/Debian:

```bash
sudo apt update
sudo apt install haproxy
```

For CentOS:

```bash
sudo yum install haproxy
```

Step 2: Configure Nginx for Load Balancing

  1. Open the Nginx configuration file:

    ```bash
    sudo nano /etc/nginx/nginx.conf
    ```
  2. Configure Nginx to act as a load balancer for your RPC nodes. Add the following under the http {} block:

    ```nginx
    upstream berachain_rpc {
        server rpc-node1-ip:26657;
        server rpc-node2-ip:26657;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://berachain_rpc;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }
    ```
  3. Save and exit the file (Ctrl+X, then Y).

  4. Test and reload Nginx:

    ```bash
    sudo nginx -t
    sudo systemctl reload nginx
    ```

Explanation:

  • upstream berachain_rpc: This block defines the backend RPC nodes to which traffic will be distributed.

  • proxy_pass http://berachain_rpc;: This forwards incoming requests to the RPC nodes in round-robin order, which is Nginx's default balancing method.
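If your RPC nodes have uneven capacity, a least-connections policy may spread load more evenly than round-robin. A possible variation of the upstream block above (same hypothetical node addresses):

```nginx
upstream berachain_rpc {
    # Send each new request to the node with the fewest active connections
    least_conn;
    server rpc-node1-ip:26657;
    server rpc-node2-ip:26657;
}
```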


Step 3: Configure HAProxy for Load Balancing

  1. Open the HAProxy configuration file:

    ```bash
    sudo nano /etc/haproxy/haproxy.cfg
    ```
  2. Add the following configuration to balance traffic between multiple RPC nodes:

    ```haproxy
    frontend berachain-rpc
        bind *:80
        default_backend rpc_backend

    backend rpc_backend
        balance roundrobin
        server rpc1 rpc-node1-ip:26657 check
        server rpc2 rpc-node2-ip:26657 check
    ```
  3. Save and exit the file (Ctrl+X, then Y).

  4. Validate the configuration and restart HAProxy:

    ```bash
    sudo haproxy -c -f /etc/haproxy/haproxy.cfg
    sudo systemctl restart haproxy
    ```

Explanation:

  • frontend berachain-rpc: This listens on port 80 and forwards traffic to the backend.

  • backend rpc_backend: Traffic is distributed between the two RPC nodes in a round-robin manner, with health checks on each node.


Step 4: Configure Failover

Both Nginx and HAProxy configurations above support failover. If one RPC node goes down, the load balancer will automatically route traffic to the remaining available nodes.

Failover in Nginx

Nginx will detect if an RPC node is down and remove it from the rotation. You can fine-tune failover behavior by specifying how many failures trigger removal using the max_fails and fail_timeout parameters on the server line:

```nginx
server rpc-node1-ip:26657 max_fails=3 fail_timeout=30s;
```
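You can also keep a standby node that only receives traffic when the primary nodes are unavailable, using Nginx's backup parameter. A sketch, assuming a hypothetical third node address:

```nginx
upstream berachain_rpc {
    server rpc-node1-ip:26657 max_fails=3 fail_timeout=30s;
    server rpc-node2-ip:26657 max_fails=3 fail_timeout=30s;
    server rpc-node3-ip:26657 backup;  # only receives traffic when both primaries are marked down
}
```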

Failover in HAProxy

HAProxy will automatically check the health of the backend nodes using the check directive in the configuration. You can adjust the health check intervals to suit your needs:

```haproxy
server rpc1 rpc-node1-ip:26657 check inter 3s fall 2 rise 2
```
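The check above is a plain TCP check. Assuming your nodes expose CometBFT's /health RPC endpoint on port 26657 (verify this for your node configuration), you can upgrade to an HTTP health check so that a node that accepts connections but cannot serve RPC is also taken out of rotation. A sketch:

```haproxy
backend rpc_backend
    balance roundrobin
    # Probe the RPC health endpoint instead of just opening a TCP connection
    option httpchk GET /health
    server rpc1 rpc-node1-ip:26657 check inter 3s fall 2 rise 2
    server rpc2 rpc-node2-ip:26657 check inter 3s fall 2 rise 2
```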

Step 5: Monitoring and Scaling

1. Monitoring with Prometheus & Grafana

Set up Prometheus to scrape metrics from the load balancer and RPC nodes. You can monitor metrics such as CPU, memory, and RPC request rates.

Prometheus Configuration (for HAProxy metrics):

```yaml
scrape_configs:
  - job_name: 'haproxy'
    static_configs:
      - targets: ['localhost:8404']
```
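The scrape target above assumes HAProxy is exposing metrics on port 8404. With HAProxy 2.0 or later built with the bundled Prometheus exporter, a stats frontend like the following sketch enables that endpoint:

```haproxy
frontend stats
    bind *:8404
    # Serve Prometheus-format metrics at /metrics (requires the built-in exporter)
    http-request use-service prometheus-exporter if { path /metrics }
    stats enable
    stats uri /stats
    stats refresh 10s
```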

Prometheus Configuration (for Nginx metrics): Nginx does not expose Prometheus-format metrics natively; you'll need to enable the Nginx VTS module or run the NGINX Prometheus Exporter to expose them.
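For example, the NGINX Prometheus Exporter scrapes Nginx's built-in stub_status endpoint. A minimal server block to expose it locally (a sketch; port 8080 is an arbitrary choice):

```nginx
server {
    listen 127.0.0.1:8080;

    location /stub_status {
        stub_status;          # exposes basic connection and request counters
        allow 127.0.0.1;      # restrict access to local scrapes
        deny all;
    }
}
```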

Grafana Dashboards:

  • Use Grafana to create dashboards for load balancer health, traffic distribution, and node performance.

2. Scaling the Load Balancer

You can horizontally scale your load balancer setup by adding more RPC nodes to the upstream block (for Nginx) or backend block (for HAProxy). As traffic grows, increase the number of backend nodes to handle the load efficiently.

Example Nginx Scale-Out:

```nginx
upstream berachain_rpc {
    server rpc-node1-ip:26657;
    server rpc-node2-ip:26657;
    server rpc-node3-ip:26657;
}
```

Example HAProxy Scale-Out:

```haproxy
backend rpc_backend
    balance roundrobin
    server rpc1 rpc-node1-ip:26657 check
    server rpc2 rpc-node2-ip:26657 check
    server rpc3 rpc-node3-ip:26657 check
```

Conclusion

Setting up a highly available load balancer for Berachain's public RPC endpoints ensures that your infrastructure can handle large volumes of traffic while maintaining uptime and performance. Both Nginx and HAProxy offer flexible, scalable, and reliable options for load balancing and failover, ensuring optimal service delivery for Berachain users.
