HAProxy config for CWMP load balancing

How can we configure HAProxy to load balance requests from CPEs across multiple CWMP servers, with dynamic weights so that requests are routed to the backend CWMP servers based on their resource availability? @bajojoba

Wouldn't it be better to ask the HAProxy community? CWMP uses standard HTTP like any other web service, just on port 7547, so the HAProxy config should be the same as for any web server.
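For what it's worth, a minimal HAProxy sketch along those lines (addresses, ports and weights are placeholders, not a tested production config). Static weights are shown here; for the "resource availability" part of the original question, HAProxy's agent-check mechanism lets each backend report a load figure that adjusts its weight dynamically.

defaults
    mode http
    timeout connect 5s
    timeout client  5m      # CWMP sessions can be long-lived
    timeout server  5m

frontend cwmp_in
    bind *:7547
    default_backend cwmp_servers

backend cwmp_servers
    balance leastconn       # prefer the server with the fewest active sessions
    # static weights; adding agent-check on each server line can make these dynamic
    server acs1 10.0.1.10:7547 check weight 100
    server acs2 10.0.1.11:7547 check weight 100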

But how many devices do you have connected to GenieACS? I have thousands of devices and one server is good enough.

I have about 500,000 devices, and I found out that the TR-069 protocol needs persistent connections and so on. That is why I wanted to know if anybody has used HAProxy and whether they can help me set one up.
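On the persistence point: a CWMP session is a sequence of HTTP POSTs from the same CPE, so each CPE generally needs to keep hitting the same backend for the duration of a session. A hedged HAProxy sketch of source-IP stickiness (addresses are placeholders):

backend cwmp_servers
    mode http
    balance leastconn
    stick-table type ip size 200k expire 30m   # remember which backend each CPE IP was sent to
    stick on src                               # pin later requests from that IP to the same backend
    server acs1 10.0.1.10:7547 check
    server acs2 10.0.1.11:7547 check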

With NGINX, I’m sure it would be pretty easy to do, but I’m not familiar with HAProxy.

Another approach could be anycast, i.e. a different GenieACS per group of customers. For example, you could have a GenieACS node per region and have all of those nodes connect to the same database. This way, each CPE connects to the closest GenieACS.
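For what it's worth, pointing several GenieACS nodes at one database is mostly a matter of giving every node the same MongoDB connection string. A sketch of the relevant genieacs.env entries, assuming GenieACS 1.2-style environment variables and a placeholder hostname:

# genieacs.env on every regional node
GENIEACS_MONGODB_CONNECTION_URL=mongodb://mongo.example.net/genieacs
GENIEACS_CWMP_PORT=7547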

Yes, I am thinking of putting up two CWMP servers, using ip_hash to forward requests to those backends, and then using a single database (MongoDB). Could you help me with the configuration? The load is being distributed, but the online count is fluctuating too much. I have also set up request zones, with a config like this: limit_req_zone $server_name zone=req_zone:10m rate=1000r/s;

But I don't think the configuration is working well. Can you give me an example of a production-level configuration, so that some requests can be offloaded and queued in the NGINX load balancer and a consistent rate of requests is forwarded to the backend CWMP servers?
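For reference, a rough sketch of that idea (addresses, rates and burst sizes are placeholders, not tuned values): ip_hash pins each CPE to one backend, keepalive reuses upstream connections, and limit_req with a burst value but without nodelay queues excess requests instead of rejecting them, which is the "kept in waiting" behaviour described above. Keeping $server_name as the zone key, as in the config quoted above, makes the limit effectively global, which suits a steady rate toward the backends.

http {
    limit_req_zone $server_name zone=req_zone:10m rate=1000r/s;

    upstream genieacs {
        ip_hash;                    # keep a given CPE on the same backend
        server 10.0.1.10:7547;
        server 10.0.1.11:7547;
        keepalive 64;               # reuse connections toward the backends
    }

    server {
        listen 7547;
        location / {
            limit_req zone=req_zone burst=500;   # queue bursts instead of rejecting them
            proxy_pass http://genieacs;
            proxy_http_version 1.1;
            proxy_set_header Connection "";      # needed for upstream keepalive
            proxy_read_timeout 300s;             # allow long CWMP sessions
        }
    }
}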

I’m not an NGINX expert, but I’ve done some tests before, mostly for IPTV as a caching proxy. For load balancing with NGINX, you have to use the upstream option. Something like:

http {
    upstream genieacs {
        server 10.0.1.10;
        server 10.0.1.11;
        server 10.1.0.12;
    }
    server {
        listen 7547;
        location / {
            proxy_pass http://genieacs;
        }
    }
}

If you need two GenieACS instances, I would run at least 3, in case one fails for whatever reason, needs to be updated, etc. NGINX detects down servers and automatically stops sending traffic to them.
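The failure detection mentioned there is NGINX's passive health checking, which can be tuned per upstream server; a small sketch with placeholder thresholds:

upstream genieacs {
    # take a server out of rotation after 3 failed attempts, retry it after 30s
    server 10.0.1.10:7547 max_fails=3 fail_timeout=30s;
    server 10.0.1.11:7547 max_fails=3 fail_timeout=30s;
    server 10.1.0.12:7547 backup;   # only used when the others are unavailable
}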


If you’re using Docker, Nginx Proxy Manager comes with a GUI, if that helps.
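A minimal docker-compose sketch for that, assuming the usual jc21/nginx-proxy-manager image (port 81 serves the admin GUI; ports and volumes follow its quick-start defaults and should be adapted):

services:
  npm:
    image: jc21/nginx-proxy-manager:latest
    restart: unless-stopped
    ports:
      - "80:80"        # HTTP
      - "443:443"      # HTTPS
      - "81:81"        # admin GUI
      - "7547:7547"    # CWMP, if you choose to proxy it through NPM
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt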