Simpler method:
Log in to the server's command line.
Use:
Code:
uptime
... for a simple, one-time output showing the 1, 5, and 15 minute Average Server Load.
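The exact output varies a little between systems, but it generally looks something like this (the numbers here are just made up for illustration):
Code:
14:32:07 up 12 days,  3:45,  2 users,  load average: 2.12, 3.08, 1.09
The part at the end, after "load average:", is the bit we care about here.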
These represent the number of Cores busy (on average) over those time periods. Thus if the numbers are:
Code:
load average: 2.12, 3.08, 1.09
Then you had:
2.12 cores busy average in the last 1 minute,
3.08 cores busy average in the last 5 minutes, and
1.09 cores busy average in the last 15 minutes.
Then the question is: Was that good ... or bad? The answer depends on how many cores your system has. If it has 8 cores, then you were well under 50% at all times and life is good. If your system has 4 cores, then it was fairly well loaded (about 77% at the 5 minute average, so it was sweating a little bit). But if it has only 2 cores, you were overloaded for a while there and in danger of failure.
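If you are not sure how many cores the machine actually has, nproc will tell you, and dividing a load average by that core count gives you a rough percent-of-capacity figure. A quick sketch (assumes standard tools like nproc, cut, and bc are present, which they normally are):
Code:
# How many cores does this machine have?
nproc

# Rough percent-of-capacity for the 1 minute load average
# (first field of /proc/loadavg divided by the core count)
echo "scale=0; 100 * $(cut -d' ' -f1 /proc/loadavg) / $(nproc)" | bc
On the example numbers above, a 1 minute load of 2.12 on a 4 core box works out to roughly 53%.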
What normally happens once you go over 100% (i.e. a load of 4.0 on a 4 core machine) is that it begins to snowball. The load climbs quickly after it hits overload, and can go to 20, 100, and even into the multiple hundreds as the system flips out trying to catch up. Any time your Average Server Load exceeds your Core Count ... you need to begin worrying (and probably should do something to reduce that load). This applies equally to every server in a cluster, and of course to a Standalone server.
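If you want to keep an eye on the load rather than checking it once, a couple of standard tools will do it:
Code:
# Re-run uptime every 5 seconds so you can watch the trend
watch -n 5 uptime

# Or use top, which shows the same load averages in its header
# plus which processes are actually eating the CPU
top
top is also the quickest way to spot whatever is causing the overload, so you know what to go after when you need to reduce the load.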
If you get lucky, whatever caused the overload will complete and everything will go back to normal. But if not ... Cascade Failure and likely a need for a reboot.
Now go back and read the help and see if it makes more sense after that description.