High dispo + dead
Posted: Mon Mar 06, 2017 6:00 pm
Hello Everyone!
Hey, I have been struggling with this issue for a few years, but it has just become critical for me.
On some calls I can see in the real-time report that an agent is stuck in DISPO or on a DEAD call, while I can physically see that the agent is on a call. The problem is that the reports then show high dispo time.
We have already disabled the disposition screen; instead, a dispo_url script inserts the call info into a MEMORY table to be dispositioned later. Even so, we are still getting high Disposition Time.
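For context, the dispo_url approach works roughly like the sketch below: parse the query string that the dispo_url call carries and turn it into a parameterized INSERT for a MEMORY table. This is a minimal illustration only; the table name (pending_dispo_mem) and the exact field list are hypothetical, not our actual script.

```python
# Minimal sketch of a dispo_url-style handler: parse the query string and
# build a parameterized INSERT for a MEMORY table. Table and column names
# here are illustrative, not the real ones.
from urllib.parse import parse_qs

def build_insert(query_string):
    """Turn dispo_url parameters into an INSERT statement plus a values tuple."""
    params = parse_qs(query_string)
    # lead_id, dispo, campaign and user are typical dispo_url variables;
    # missing parameters default to an empty string.
    fields = ["lead_id", "dispo", "campaign", "user"]
    values = tuple(params.get(f, [""])[0] for f in fields)
    sql = ("INSERT INTO pending_dispo_mem (lead_id, dispo, campaign, user) "
           "VALUES (%s, %s, %s, %s)")
    return sql, values

sql, values = build_insert("lead_id=5037322&dispo=A&campaign=TEST&user=851")
```

A separate cron job can then drain the MEMORY table and write the real dispositions, so the agent never sits on the dispo screen.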
The same happens with DEAD time. I modified vicidial.php so that the agent is not paused when the DCMX option is enabled, so the agent is ready one second after entering a dead call. Yet in the real-time report I can see agents on DEAD for several seconds while the agent is physically ready, or already on a call with another customer. The reports show dead time as well, not as much as dispo, but they shouldn't show any.
If I run queries against the database (vicidial_agent_log), I can see that when this happens, dispo_sec equals talk_sec.
For example:
MariaDB [asterisk]> select * from vicidial_agent_log where user = "851" AND dispo_sec > 50 AND event_time > '2017-03-06 00:00:00';
+--------------+------+-----------+---------------------+---------+-------------+-------------+-----------+------------+----------+------------+----------+-------------+-----------+--------+------------------+----------+------------+------------+----------+-----------+--------------------+------------+
| agent_log_id | user | server_ip | event_time | lead_id | campaign_id | pause_epoch | pause_sec | wait_epoch | wait_sec | talk_epoch | talk_sec | dispo_epoch | dispo_sec | status | user_group | comments | sub_status | dead_epoch | dead_sec | processed | uniqueid | pause_type |
+--------------+------+-----------+---------------------+---------+-------------+-------------+-----------+------------+----------+------------+----------+-------------+-----------+--------+------------------+----------+------------+------------+----------+-----------+--------------------+------------+
| 8366019 | 851 | 40.8.0.12 | 2017-03-06 11:57:48 | 5037322 | XXXXXX | 1488826668 | 0 | 1488826668 | 7 | 1488826675 | 101 | 1488826776 | 101 | A | XXXXXX XXXXXX | NULL | NULL | NULL | 0 | N | 1488826662.1937421 | AGENT |
| 8345718 | 851 | 40.8.0.12 | 2017-03-06 10:19:41 | 4881302 | XXXXXX | 1488820781 | 0 | 1488820781 | 1 | 1488820782 | 1403 | 1488822185 | 1403 | N | XXXXXX XXXXXX | NULL | NULL | NULL | 0 | N | 1488820773.1884853 | AGENT |
| 8355189 | 851 | 40.8.0.12 | 2017-03-06 11:04:48 | 4888252 | XXXXXX | 1488823488 | 0 | 1488823488 | 11 | 1488823499 | 317 | 1488823816 | 317 | L | XXXXXX XXXXXX | NULL | NULL | NULL | 0 | N | 1488823488.1909576 | AGENT |
| 8363691 | 851 | 40.8.0.12 | 2017-03-06 11:45:59 | 5035962 | XXXXXX | 1488825959 | 0 | 1488825959 | 2 | 1488825961 | 73 | 1488826034 | 73 | N | XXXXXX XXXXXX | NULL | NULL | NULL | 0 | N | 1488825950.1605905 | AGENT |
| 8366463 | 851 | 40.8.0.12 | 2017-03-06 12:00:27 | 5037576 | XXXXXX | 1488826827 | 0 | 1488826827 | 2 | 1488826829 | 107 | 1488826936 | 107 | N | XXXXXX XXXXXX | NULL | NULL | 1488826935 | 1 | N | 1488826807.1581394 | AGENT |
| 8372527 | 851 | 40.8.0.12 | 2017-03-06 12:34:44 | 5041058 | XXXXXX | 1488828884 | 0 | 1488828884 | 4 | 1488828888 | 60 | 1488828948 | 59 | C33 | XXXXXX XXXXXX | NULL | NULL | NULL | 0 | N | 1488828875.1953569 | AGENT |
+--------------+------+-----------+---------------------+---------+-------------+-------------+-----------+------------+----------+------------+----------+-------------+-----------+--------+------------------+----------+------------+------------+----------+-----------+--------------------+------------+
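To make the pattern explicit, here is a quick check over the six rows above (values copied from the query output): in every affected record dispo_sec tracks talk_sec almost exactly, which suggests the dispo timer is being credited with the whole talk segment rather than the actual time on the dispo screen.

```python
# Sanity check on the rows above: dispo_sec is (nearly) equal to talk_sec
# in every affected record. Values copied verbatim from the query output.
rows = [
    # (agent_log_id, talk_sec, dispo_sec)
    (8366019, 101, 101),
    (8345718, 1403, 1403),
    (8355189, 317, 317),
    (8363691, 73, 73),
    (8366463, 107, 107),
    (8372527, 60, 59),
]

# Every row shows the anomaly (allowing a 1-second rounding difference).
for log_id, talk, dispo in rows:
    assert abs(talk - dispo) <= 1, log_id

# Even these six rows alone inflate dispo time noticeably.
excess_hours = sum(dispo for _, _, dispo in rows) / 3600.0
print(f"{excess_hours:.2f} hours of inflated dispo time in just these rows")
# prints "0.57 hours of inflated dispo time in just these rows"
```

Scaled across all agents, this is how the totals climb toward the 70 hours per day I mention below.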
I have the same problem across different setups and installations; this is one example:
ViciDial 2.12-565a, installed with ViciBox 7.0
1 x Database / Web Balancer (Xeon E5 2.4 GHz + 16 GB RAM + 100 GB 15k SAS)
3 x Phone / Web (Xeon E5 2.4 GHz + 16 GB RAM + 120 GB SSD)
What could be causing this issue? Any ideas on how I can fix it? I'm not afraid to write a patch myself, but I have been trying to figure out the cause and still have no clue. This is a major issue: we can accumulate up to 70 hours of dispo and dead time per day, even with dispo disabled in all campaigns.
Server load is usually below 1 with peaks around 2.5, and iostat doesn't show an I/O bottleneck either, yet I still feel this could be hardware related.
What is your experience with this?