SIP termination load balancing using a domain or multiple IPs
Posted: Fri Apr 15, 2016 2:44 pm
First-time poster and newbie Vicidial admin here. This issue keeps coming up at my place of work without a known good solution, and I want to put it to rest. Your assistance will be greatly appreciated by our network and by our users.
I manage a cluster of SBCs used in part for wholesale call center SIP termination. We provide a DNS SRV domain with properly weighted and prioritized records indicating how to use our cluster. For the sake of simplicity, let's assume there are four servers total, two on the West Coast and two on the East Coast, with equal weight and priority. Our expectation is that customers give their PBX the domain, and the SRV details should result in traffic being evenly distributed across our servers.
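For reference, the record set we hand out looks roughly like this (the domain and hostnames here are made up for this post; four targets with equal priority and weight, so roughly a quarter of new calls should land on each server):

    _sip._udp.sip.example-carrier.com. 300 IN SRV 10 25 5060 east1.example-carrier.com.
    _sip._udp.sip.example-carrier.com. 300 IN SRV 10 25 5060 east2.example-carrier.com.
    _sip._udp.sip.example-carrier.com. 300 IN SRV 10 25 5060 west1.example-carrier.com.
    _sip._udp.sip.example-carrier.com. 300 IN SRV 10 25 5060 west2.example-carrier.com.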
This is where the trouble begins. More often than not, our customers end up retrying rejected calls across our entire cluster, or worse, forking every call to every server. What we want is for traffic to be divided evenly, and for a rejection to "stop route" on the entire cluster rather than get retried on the other members. One call to one server.
Some customers refuse to use our domain, insisting their PBX won't accept anything but an IP, or they don't want to involve DNS, or their PBX will only use one of the records in the domain, and so on. We don't want to argue the point, so we're happy to give them IP addresses, with the expectation that they load balance evenly and properly stop route on rejection. Here again we see the same trouble as with the domain: once they configure multiple IP addresses, rejected calls get retried across the entire cluster, or they end up forking calls.
What's the proper way to configure Vicidial to round-robin or percentage-route across multiple IP addresses, or to respect an SRV domain with multiple records, weights, and priorities?
How do we ensure that traffic is never forked, that each call is sent to only one server, and that a rejection from any single host stops routing that call for the entire group/domain?
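To make both questions concrete, here is the kind of thing we imagine going into a Vicidial carrier's Dialplan Entry field. It's only a rough Asterisk dialplan sketch modeled on the stock Vicidial carrier example, and the peer names (carrier-east1 and so on) are placeholders. The idea is to pick one of the four servers at random per call, Dial() only that one server, and then Hangup() regardless of the result, so a rejection is never retried on another member of the cluster:

    exten => _91NXXNXXXXXX,1,AGI(agi://127.0.0.1:4577/call_log)
    exten => _91NXXNXXXXXX,n,Set(PICK=${RAND(1,4)})              ; choose one of the four servers per call
    exten => _91NXXNXXXXXX,n,GotoIf($["${PICK}" = "1"]?east1)
    exten => _91NXXNXXXXXX,n,GotoIf($["${PICK}" = "2"]?east2)
    exten => _91NXXNXXXXXX,n,GotoIf($["${PICK}" = "3"]?west1)
    exten => _91NXXNXXXXXX,n,Goto(west2)
    exten => _91NXXNXXXXXX,n(east1),Dial(SIP/${EXTEN:1}@carrier-east1,,tTo)
    exten => _91NXXNXXXXXX,n,Hangup()                            ; rejection stops here, no retry on the rest of the cluster
    exten => _91NXXNXXXXXX,n(east2),Dial(SIP/${EXTEN:1}@carrier-east2,,tTo)
    exten => _91NXXNXXXXXX,n,Hangup()
    exten => _91NXXNXXXXXX,n(west1),Dial(SIP/${EXTEN:1}@carrier-west1,,tTo)
    exten => _91NXXNXXXXXX,n,Hangup()
    exten => _91NXXNXXXXXX,n(west2),Dial(SIP/${EXTEN:1}@carrier-west2,,tTo)
    exten => _91NXXNXXXXXX,n,Hangup()

Is something like this how other people handle it, or is there a cleaner way built into Vicidial?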
Also, some of our customers have us in an LCR or otherwise use secondary carriers. We still want those other trunks to be tried when our service gives a rejection.
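The only failover we do want is at that level, between providers rather than between servers in our cluster. A rough sketch of what we mean, again with made-up peer names (primary-carrier / backup-carrier), would be to check ${DIALSTATUS} after the first Dial() and only then try a second, unrelated trunk:

    exten => _91NXXNXXXXXX,1,AGI(agi://127.0.0.1:4577/call_log)
    exten => _91NXXNXXXXXX,n,Dial(SIP/${EXTEN:1}@primary-carrier,,tTo)
    ; fall through to a different provider only, never to another server in the same cluster
    exten => _91NXXNXXXXXX,n,GotoIf($["${DIALSTATUS}" = "CONGESTION" | "${DIALSTATUS}" = "CHANUNAVAIL"]?backup:done)
    exten => _91NXXNXXXXXX,n(backup),Dial(SIP/${EXTEN:1}@backup-carrier,,tTo)
    exten => _91NXXNXXXXXX,n(done),Hangup()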