For a database, core count is essential. Hyperthreading or not, the number of queries that can run simultaneously is the primary concern. After that, clock speed helps clear those queries quickly, so the cores are rarely all busy at once. In other (non-DB) systems some tasks run for a long time, but in a DB like Vicidial each task is tiny and should clear out almost immediately.
And along with CPU speed, of course, FSB and memory speeds are equally important for getting that data moved where it needs to be ... which brings us to:
Next up: hard drive speed. Obviously faster drives, or RAID10 with LOTS of spindles (6 or 8?), improve throughput. I've also experimented with a fully virtual drive on servers with lots of memory, so there's NO physical disk involved at all and the whole DB lives in RAM. That's dangerous in a power loss, of course, but redundant power supplies and UPSs protect against that, and if it kicks up the throughput, a little risk is worthwhile. The prototype isn't finished yet: multiple starts, but none got all the way to the end, because nobody was paying for it and we always get too busy to play.
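If you want to try the all-in-RAM experiment yourself, the usual approach is a tmpfs mount with MySQL's datadir pointed at it. A minimal sketch, assuming Linux and a dedicated DB box (the sizes and paths here are made up for illustration):

    # RAM-backed filesystem, sized for the whole DB plus headroom
    mount -t tmpfs -o size=64G tmpfs /mnt/ramdb

    # With MySQL STOPPED, copy the datadir in, then point my.cnf at it:
    #   [mysqld]
    #   datadir = /mnt/ramdb/mysql
    cp -a /var/lib/mysql /mnt/ramdb/

    # tmpfs evaporates on reboot or power loss -- keep replication
    # and/or frequent dumps running, exactly as described above.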
The GPU should have zero impact, since it only handles graphics. Overclocking the CPU is always fun, but it can also burn out the CPU and requires hardware that supports it (and excellent cooling, a nice cool server room, that sort of thing).
My dream system? 40 physical cores with hyperthreading at the fastest clock speed available. We have a couple of these, although I don't know whether faster clocks exist than what we have (and the accountant wouldn't let me buy one if they did, so I'm not looking), plus 256G of RAM so the entire DB can be loaded into a RAM drive. Add replication (for disaster recovery and reporting, both) and the system should be bulletproof.
Happy Hunting
Oh: and if you want to "turbo" an existing Vicidial, some modifications to the code would be useful:
* Turn off all logging that isn't absolutely required, log tables and log files both (see the first sketch after this list).
* Modify any required logging routines that edit a table to ONLY append, never edit. If a value isn't known when the log entry is created, build a new log table (look up "normalized") to hold that late-arriving value instead of updating the original row. This requires modifying some reports too, but it dramatically improves throughput on DB writes (schema sketch after the list).
* All "Read" entries that are not mission-critical should be read from the replication server. Any functions that have "read" entries could be blocked if replication falls behind, or automatically shunted to the live server, depending on which is more important (ie: screw the reports, we're dialing, that can wait 'til after we're done dialing).
I also have a client who had us build a Master server capable of pushing leads to multiple Vicidial clusters (the prototype has been online for a few months, and it's working). He hasn't yet had us claw back the resulting reports and/or unused leads, but the system is designed to keep the hoppers full with "just in time" delivery (by estimation, not exact calculation), so each list can be doled out among the various clusters until it's exhausted, instead of dumping 50k leads on each server and hoping they all finish at the same time.
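The doling logic behind that is simple in principle: estimate each cluster's burn rate and top up roughly one interval's worth at a time. A toy sketch (every name and number here is invented; the real system estimates, as noted above):

    # Dole a list out across clusters in proportion to how fast each
    # one is dialing, topping up only what the next interval needs.
    def dole_out(remaining_leads, rates, interval_secs=300):
        """rates: {cluster: estimated calls/sec}. Returns {cluster: batch}."""
        total = sum(rates.values()) or 1
        batches = {}
        for cluster, rate in rates.items():
            want = int(rate * interval_secs)            # one interval's worth
            fair = int(remaining_leads * rate / total)  # its share of what's left
            batches[cluster] = min(want, fair)
        return batches

    # Example: 10k leads left, cluster A dialing twice as fast as B
    print(dole_out(10000, {"A": 2.0, "B": 1.0}))  # -> {'A': 600, 'B': 300}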