I have come across an issue that I'd like to see if I can track down.
I've installed a clustered system, with the following:
Vicibox Standard Install 6.0.3
VERSION: 2.8-403a
BUILD: 130510-1350
Working off SVN revision 1985
These are Dell PowerEdge R620 servers, each with an Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz (x24) and 32 GB of memory.
There were no issues on Servers 1 and 2.
HOWEVER,
Servers 3 & 4 have a particular issue that I'd like to run by someone.
Somewhere in the vicidial init script, the process of loading modules fails. To get the process running at all, I had to revamp the chkconfig sequence to:
chkconfig dahdi on
chkconfig vicidial off
I then placed a script into crontab -e that waits 120 seconds at reboot, then starts the vicidial service, tee'ing the output to a log file for debugging purposes.
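For reference, the delayed-start arrangement looks roughly like the sketch below. The script name, log path, and the 120-second figure reflect my setup; adjust to taste:

```shell
# crontab -e entry (runs once at boot):
#   @reboot /usr/local/bin/delayed-vicidial-start.sh

#!/bin/bash
# delayed-vicidial-start.sh -- give the system time to settle after boot,
# then start vicidial and capture the init output for later debugging.
sleep 120
/etc/init.d/vicidial start 2>&1 | tee /var/log/vicidial-start.log
```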
As the servers are running, and calls are being placed and received normally, I'm not in dire straits to get this fixed - however, I'd like to know what, if anything, should be done differently on the next install. It is safe to say that the servers in question run through full diagnostics on every shutdown and reboot, per the customer's demand. What's interesting is this:
With ALL vicidial scripts disabled in the cron, and nothing else running, I attempted to start vicidial manually through /etc/init.d - and it failed. Again, there is a hang-up somewhere in the process of starting dahdi and/or asterisk that causes a FATAL error during initialization, preventing the startup from finishing. Starting dahdi manually first resolves this.
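In other words, the manual workaround boils down to ordering the two init scripts by hand; the short pause between them is my own addition and may not be strictly necessary:

```shell
# Manual recovery sequence: load DAHDI's kernel modules first,
# then let the vicidial init script run to completion.
/etc/init.d/dahdi start
sleep 5                      # brief pause to let the modules settle (arbitrary)
/etc/init.d/vicidial start   # completes normally once dahdi is up
```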
A zypper refresh and zypper up were run BEFORE the installation of vicidial - could this have overwritten the kernel drivers shipped with the standard install, preventing dahdi from starting in the init script?