RPC Load Balancer
This guide explains how to set up a highly available load balancer for Story Protocol's public RPC endpoints using Nginx or HAProxy.
Prerequisites
Before setting up the load balancer, make sure you have:
At least two RPC nodes running Story Protocol.
Nginx or HAProxy installed on your load balancer server.
Monitoring tools like Prometheus and Grafana to track performance.
Step 1: Install Nginx or HAProxy
Nginx Installation
For Ubuntu/Debian:
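```bash
sudo apt update
sudo apt install -y nginx
```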
For CentOS:
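```bash
sudo yum install -y epel-release
sudo yum install -y nginx
sudo systemctl enable --now nginx
```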
HAProxy Installation
For Ubuntu/Debian:
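```bash
sudo apt update
sudo apt install -y haproxy
```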
For CentOS:
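```bash
sudo yum install -y haproxy
sudo systemctl enable --now haproxy
```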
Step 2: Configure Nginx for Load Balancing
Open the Nginx configuration file:
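```bash
sudo nano /etc/nginx/nginx.conf
```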
Add the following configuration under the http {} block to distribute traffic across your RPC nodes:
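The example below is a minimal sketch. The node addresses (192.168.1.10, 192.168.1.11) and RPC port 8545 are placeholders; substitute the actual addresses and RPC port of your own nodes.

```nginx
http {
    upstream story_protocol_rpc {
        # Replace these with the addresses of your own RPC nodes.
        server 192.168.1.10:8545;
        server 192.168.1.11:8545;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://story_protocol_rpc;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
}
```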
Save and exit the file (Ctrl+X, then Y).
Test and reload Nginx:
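```bash
sudo nginx -t
sudo systemctl reload nginx
```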
Explanation:
upstream story_protocol_rpc: Defines the backend RPC nodes to which traffic is distributed.
proxy_pass: Forwards requests to one of the RPC nodes in a round-robin manner.
Step 3: Configure HAProxy for Load Balancing
Open the HAProxy configuration file:
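```bash
sudo nano /etc/haproxy/haproxy.cfg
```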
Add the following configuration to balance traffic between multiple RPC nodes:
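The example below is a minimal sketch using round-robin balancing with health checks. The server names (rpc1, rpc2), addresses, and RPC port 8545 are placeholders for your own nodes.

```haproxy
frontend story-protocol-rpc
    bind *:80
    mode http
    default_backend rpc_backend

backend rpc_backend
    mode http
    balance roundrobin
    # Replace with the addresses of your own RPC nodes.
    server rpc1 192.168.1.10:8545 check
    server rpc2 192.168.1.11:8545 check
```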
Save and exit the file (Ctrl+X, then Y).
Restart HAProxy:
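```bash
sudo systemctl restart haproxy
```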
Explanation:
frontend story-protocol-rpc: Listens on port 80 and forwards traffic to the backend.
backend rpc_backend: Distributes traffic across RPC nodes in a round-robin fashion with health checks.
Step 4: Configure Failover
Both Nginx and HAProxy support failover. If an RPC node goes down, the load balancer will route traffic to the available nodes.
Failover in Nginx
Nginx will detect when an RPC node is down and stop sending traffic to it. You can fine-tune failover behavior using the max_fails and fail_timeout parameters:
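For example (node addresses are placeholders), the following marks a node as unavailable after three failed attempts and retries it after 30 seconds:

```nginx
upstream story_protocol_rpc {
    # After 3 failed attempts, stop sending traffic to the node for 30s.
    server 192.168.1.10:8545 max_fails=3 fail_timeout=30s;
    server 192.168.1.11:8545 max_fails=3 fail_timeout=30s;
}
```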
Failover in HAProxy
HAProxy automatically checks node health using the check directive. You can adjust health check intervals:
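For example (node addresses are placeholders), inter sets the time between checks, fall the number of failed checks before a node is marked down, and rise the number of successful checks before it is marked up again:

```haproxy
backend rpc_backend
    mode http
    balance roundrobin
    # Check every 5s; mark down after 3 failures, up after 2 successes.
    server rpc1 192.168.1.10:8545 check inter 5s fall 3 rise 2
    server rpc2 192.168.1.11:8545 check inter 5s fall 3 rise 2
```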
Step 5: Monitoring and Scaling
1. Monitoring with Prometheus & Grafana
Set up Prometheus to scrape metrics from the load balancer and RPC nodes. Metrics such as CPU, memory, and RPC request rates can be tracked.
Prometheus Configuration for HAProxy Metrics:
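A minimal scrape job, assuming HAProxy 2.0+ built with its native Prometheus exporter, exposed on a stats frontend at port 8404 (the port and endpoint are choices for this example, not defaults you must use):

```yaml
scrape_configs:
  - job_name: 'haproxy'
    static_configs:
      - targets: ['localhost:8404']  # HAProxy Prometheus metrics endpoint
```

To expose this endpoint, add a stats frontend to haproxy.cfg, e.g. `bind *:8404` with `http-request use-service prometheus-exporter if { path /metrics }`.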
Prometheus Configuration for Nginx Metrics:
For Nginx, you’ll need to enable the Nginx VTS module or use the Nginx Exporter to expose metrics.
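Assuming the Nginx Prometheus Exporter is running alongside Nginx on its default port 9113, a minimal scrape job looks like:

```yaml
scrape_configs:
  - job_name: 'nginx'
    static_configs:
      - targets: ['localhost:9113']  # nginx-prometheus-exporter default port
```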
Grafana Dashboards:
Use Grafana to create dashboards for load balancer health, traffic distribution, and node performance.
2. Scaling the Load Balancer
You can scale the setup by adding more RPC nodes to the upstream block (for Nginx) or the backend block (for HAProxy) as traffic grows.
Example Nginx Scale-Out:
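Continuing the placeholder addresses from the earlier examples, scaling out is just a matter of appending server lines and reloading Nginx:

```nginx
upstream story_protocol_rpc {
    server 192.168.1.10:8545;
    server 192.168.1.11:8545;
    server 192.168.1.12:8545;  # newly added node
    server 192.168.1.13:8545;  # newly added node
}
```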
Example HAProxy Scale-Out:
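Likewise for HAProxy (placeholder addresses), add server lines to the backend and restart HAProxy:

```haproxy
backend rpc_backend
    mode http
    balance roundrobin
    server rpc1 192.168.1.10:8545 check
    server rpc2 192.168.1.11:8545 check
    server rpc3 192.168.1.12:8545 check  # newly added node
    server rpc4 192.168.1.13:8545 check  # newly added node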
Conclusion
Setting up a highly available load balancer for Story Protocol's public RPC endpoints ensures that your infrastructure can handle large volumes of traffic while maintaining uptime and performance. Both Nginx and HAProxy offer flexible, scalable, and reliable solutions for load balancing and failover, ensuring optimal service delivery for Story Protocol users.