RPC Load Balancer Setup
This guide explains how to set up a highly available load balancer for Berachain's public RPC endpoints using Nginx or HAProxy. The goal is to distribute incoming RPC traffic across multiple RPC nodes.
Prerequisites
Before setting up the load balancer, make sure you have:
At least two RPC nodes running Berachain.
Nginx or HAProxy installed on your load balancer server.
Monitoring tools like Prometheus and Grafana to track performance.
Step 1: Install Nginx or HAProxy
Nginx Installation
For Ubuntu/Debian:
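```bash
sudo apt update
sudo apt install nginx
```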
For CentOS:
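Nginx is typically pulled from the EPEL repository on CentOS:

```bash
sudo yum install epel-release
sudo yum install nginx
```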
HAProxy Installation
For Ubuntu/Debian:
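```bash
sudo apt update
sudo apt install haproxy
```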
For CentOS:
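```bash
sudo yum install haproxy
```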
Step 2: Configure Nginx for Load Balancing
Open the Nginx configuration file:
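```bash
sudo nano /etc/nginx/nginx.conf
```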
Configure Nginx to act as a load balancer for your RPC nodes by defining an upstream group and proxying requests to it under the `http {}` block. Save and exit the file (`Ctrl+X`, then `Y`), then test and reload Nginx.
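A minimal sketch of the `http {}` addition described above, assuming two RPC nodes at placeholder addresses (`10.0.0.1` and `10.0.0.2`, RPC port `8545`); substitute your own node addresses:

```nginx
http {
    # Backend pool of Berachain RPC nodes (placeholder addresses)
    upstream berachain_rpc {
        server 10.0.0.1:8545;
        server 10.0.0.2:8545;
    }

    server {
        listen 80;

        location / {
            # Distribute requests across the pool (round-robin by default)
            proxy_pass http://berachain_rpc;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
}
```

After saving, validate and apply the change with `sudo nginx -t && sudo systemctl reload nginx`.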
Explanation:
- `upstream berachain_rpc`: This block defines the backend RPC nodes to which traffic will be distributed.
- `proxy_pass http://berachain_rpc;`: This forwards the incoming requests to one of the RPC nodes in a round-robin fashion.
Step 3: Configure HAProxy for Load Balancing
Open the HAProxy configuration file:
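```bash
sudo nano /etc/haproxy/haproxy.cfg
```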
Add the following configuration to balance traffic between multiple RPC nodes:
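A sketch of such a configuration, again assuming two RPC nodes at placeholder addresses; adjust addresses and ports to match your deployment:

```haproxy
frontend berachain-rpc
    bind *:80
    mode http
    default_backend rpc_backend

backend rpc_backend
    mode http
    balance roundrobin
    # "check" enables periodic health checks on each node
    server rpc1 10.0.0.1:8545 check
    server rpc2 10.0.0.2:8545 check
```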
Save and exit the file (`Ctrl+X`, then `Y`), then restart HAProxy.
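On a systemd-based distribution:

```bash
sudo systemctl restart haproxy
```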
Explanation:
- `frontend berachain-rpc`: This listens on port 80 and forwards traffic to the `backend`.
- `backend rpc_backend`: Traffic is distributed between the two RPC nodes in a round-robin manner, with health checks on each node.
Step 4: Configure Failover
Both Nginx and HAProxy configurations above support failover. If one RPC node goes down, the load balancer will automatically route traffic to the remaining available nodes.
Failover in Nginx
Nginx will detect when an RPC node is down and remove it from the rotation. You can fine-tune failover behavior with the `max_fails` and `fail_timeout` directives, which control how many failed attempts within a time window take a node out of rotation, and for how long.
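For example, with these illustrative values, a node that fails 3 times within 30 seconds is considered unavailable and removed from rotation for 30 seconds:

```nginx
upstream berachain_rpc {
    # 3 failures within 30s -> node marked down for 30s (placeholder addresses)
    server 10.0.0.1:8545 max_fails=3 fail_timeout=30s;
    server 10.0.0.2:8545 max_fails=3 fail_timeout=30s;
}
```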
Failover in HAProxy
HAProxy will automatically check the health of the backend nodes using the `check` directive in the configuration. You can adjust the health-check interval and the up/down thresholds to suit your needs.
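A sketch with illustrative values: checks run every 5 seconds, 3 consecutive failures (`fall`) mark a node down, and 2 consecutive successes (`rise`) bring it back:

```haproxy
backend rpc_backend
    mode http
    balance roundrobin
    server rpc1 10.0.0.1:8545 check inter 5s rise 2 fall 3
    server rpc2 10.0.0.2:8545 check inter 5s rise 2 fall 3
```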
Step 5: Monitoring and Scaling
1. Monitoring with Prometheus & Grafana
Set up Prometheus to scrape metrics from the load balancer and RPC nodes. You can monitor metrics such as CPU, memory, and RPC request rates.
Prometheus Configuration (for HAProxy metrics):
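A minimal scrape job, assuming the community `haproxy_exporter` is running alongside HAProxy on its default port 9101:

```yaml
scrape_configs:
  - job_name: "haproxy"
    static_configs:
      - targets: ["localhost:9101"]
```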
Prometheus Configuration (for Nginx metrics): You’ll need to enable the Nginx VTS module or use the Nginx Exporter to expose metrics.
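A corresponding scrape job, assuming `nginx-prometheus-exporter` is running on its default port 9113:

```yaml
scrape_configs:
  - job_name: "nginx"
    static_configs:
      - targets: ["localhost:9113"]
```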
Grafana Dashboards:
Use Grafana to create dashboards for load balancer health, traffic distribution, and node performance.
2. Scaling the Load Balancer
You can horizontally scale your load balancer setup by adding more RPC nodes to the `upstream` block (for Nginx) or `backend` block (for HAProxy). As traffic grows, increase the number of backend nodes to handle the load efficiently.
Example Nginx Scale-Out:
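A sketch of the `upstream` block grown to four nodes (placeholder addresses):

```nginx
upstream berachain_rpc {
    server 10.0.0.1:8545;
    server 10.0.0.2:8545;
    server 10.0.0.3:8545;
    server 10.0.0.4:8545;
}
```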
Example HAProxy Scale-Out:
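The equivalent `backend` block grown to four nodes (placeholder addresses):

```haproxy
backend rpc_backend
    mode http
    balance roundrobin
    server rpc1 10.0.0.1:8545 check
    server rpc2 10.0.0.2:8545 check
    server rpc3 10.0.0.3:8545 check
    server rpc4 10.0.0.4:8545 check
```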
Conclusion
Setting up a highly available load balancer for Berachain's public RPC endpoints ensures that your infrastructure can handle large volumes of traffic while maintaining uptime and performance. Both Nginx and HAProxy offer flexible, scalable, and reliable options for load balancing and failover.