RPC Load Balancer
This guide explains how to set up a highly available load balancer for Story Protocol's public RPC endpoints using Nginx or HAProxy.
Prerequisites
Before setting up the load balancer, make sure you have:
- At least two RPC nodes running Story Protocol.
- Nginx or HAProxy installed on your load balancer server.
- Monitoring tools such as Prometheus and Grafana to track performance.
Step 1: Install Nginx or HAProxy
Nginx Installation
For Ubuntu/Debian:
```bash
sudo apt update
sudo apt install nginx
```
For CentOS:
```bash
sudo yum install epel-release
sudo yum install nginx
```
HAProxy Installation
For Ubuntu/Debian:
```bash
sudo apt update
sudo apt install haproxy
```
For CentOS:
```bash
sudo yum install haproxy
```
Step 2: Configure Nginx for Load Balancing
Open the Nginx configuration file:
```bash
sudo nano /etc/nginx/nginx.conf
```
Add the following configuration inside the `http {}` block to distribute traffic across your RPC nodes:
```nginx
upstream story_protocol_rpc {
    server rpc-node1-ip:26657;
    server rpc-node2-ip:26657;
}

server {
    listen 80;

    location / {
        proxy_pass http://story_protocol_rpc;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```
Save and exit the file (Ctrl+X, then Y).
Test and reload Nginx:
```bash
sudo nginx -t
sudo systemctl reload nginx
```
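If your clients subscribe to events over the RPC node's WebSocket endpoint, Nginx must also forward the HTTP upgrade handshake. A sketch of the additional directives for the `location /` block above (an optional extension, not part of the base setup):

```nginx
location / {
    proxy_pass http://story_protocol_rpc;
    # Required for WebSocket connections (e.g. event subscriptions)
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
}
```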
Explanation:
- `upstream story_protocol_rpc`: Defines the backend RPC nodes across which traffic is distributed.
- `proxy_pass`: Forwards each request to one of the RPC nodes in round-robin order.
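To make the round-robin rotation concrete, this small shell sketch (with hypothetical node names) shows how successive requests cycle through the upstream list:

```shell
#!/usr/bin/env bash
# Hypothetical illustration of round-robin selection across upstream nodes
nodes=("rpc-node1" "rpc-node2" "rpc-node3")

for i in 0 1 2 3 4 5; do
  # Each request goes to the next node in the list, wrapping around
  echo "request $i -> ${nodes[$((i % ${#nodes[@]}))]}"
done
```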
Step 3: Configure HAProxy for Load Balancing
Open the HAProxy configuration file:
```bash
sudo nano /etc/haproxy/haproxy.cfg
```
Add the following configuration to balance traffic between multiple RPC nodes:
```haproxy
frontend story-protocol-rpc
    bind *:80
    default_backend rpc_backend

backend rpc_backend
    balance roundrobin
    server rpc1 rpc-node1-ip:26657 check
    server rpc2 rpc-node2-ip:26657 check
```
Save and exit the file (Ctrl+X, then Y).
Restart HAProxy:
```bash
sudo systemctl restart haproxy
```
Explanation:
- `frontend story-protocol-rpc`: Listens on port 80 and forwards traffic to the backend.
- `backend rpc_backend`: Distributes traffic across the RPC nodes in round-robin fashion, with health checks enabled.
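The snippet above omits the `defaults` section; without client and server timeouts, HAProxy warns at startup. A minimal sketch (the timeout values are assumptions — tune them to your traffic):

```haproxy
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s
```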
Step 4: Configure Failover
Both Nginx and HAProxy support failover. If an RPC node goes down, the load balancer will route traffic to the available nodes.
Failover in Nginx
Nginx marks an upstream server as unavailable after repeated failed requests and stops sending traffic to it. You can fine-tune this behavior with the `max_fails` and `fail_timeout` parameters:

```nginx
server rpc-node1-ip:26657 max_fails=3 fail_timeout=30s;
```
Failover in HAProxy
HAProxy actively probes node health via the `check` directive. You can adjust the check interval and the failure/recovery thresholds:

```haproxy
server rpc1 rpc-node1-ip:26657 check inter 3s fall 2 rise 2
```
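HAProxy can also keep a standby node that receives traffic only when all primary servers are down, via the `backup` keyword (the `rpc3` node here is hypothetical):

```haproxy
backend rpc_backend
    balance roundrobin
    server rpc1 rpc-node1-ip:26657 check inter 3s fall 2 rise 2
    server rpc2 rpc-node2-ip:26657 check inter 3s fall 2 rise 2
    # Standby: used only when every non-backup server is down
    server rpc3 rpc-node3-ip:26657 check backup
```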
Step 5: Monitoring and Scaling
1. Monitoring with Prometheus & Grafana
Set up Prometheus to scrape metrics from the load balancer and RPC nodes. Metrics such as CPU, memory, and RPC request rates can be tracked.
Prometheus Configuration for HAProxy Metrics:
```yaml
scrape_configs:
  - job_name: 'haproxy'
    static_configs:
      - targets: ['localhost:8404']
```
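The scrape target `localhost:8404` assumes HAProxy's built-in Prometheus exporter is enabled (available in HAProxy 2.0+ when built with the exporter service). A hedged sketch of a frontend that exposes it:

```haproxy
frontend prometheus
    bind *:8404
    mode http
    # Serve Prometheus metrics at /metrics (requires HAProxy built with the exporter)
    http-request use-service prometheus-exporter if { path /metrics }
    no log
```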
Prometheus Configuration for Nginx Metrics:
For Nginx, you’ll need to enable the Nginx VTS module or use the Nginx Exporter to expose metrics.
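Assuming the community NGINX Prometheus Exporter running on its default port 9113 (an assumption — adjust to your deployment), the corresponding scrape config might look like:

```yaml
scrape_configs:
  - job_name: 'nginx'
    static_configs:
      - targets: ['localhost:9113']
```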
Grafana Dashboards:
Use Grafana to create dashboards for load balancer health, traffic distribution, and node performance.
2. Scaling the Load Balancer
You can scale the setup by adding more RPC nodes to the `upstream` block (for Nginx) or the `backend` block (for HAProxy) as traffic grows.
Example Nginx Scale-Out:
```nginx
upstream story_protocol_rpc {
    server rpc-node1-ip:26657;
    server rpc-node2-ip:26657;
    server rpc-node3-ip:26657;
}
```
Example HAProxy Scale-Out:
```haproxy
backend rpc_backend
    balance roundrobin
    server rpc1 rpc-node1-ip:26657 check
    server rpc2 rpc-node2-ip:26657 check
    server rpc3 rpc-node3-ip:26657 check
```
Conclusion
A highly available load balancer in front of Story Protocol's public RPC endpoints lets your infrastructure absorb large volumes of traffic while maintaining uptime and performance. Both Nginx and HAProxy offer flexible, reliable load balancing with built-in failover, and either is a solid choice for serving Story Protocol users.