On Wed, Jun 01, 2011 at 09:49:17AM -0600, Theo de Raadt wrote:
> > On 2011-06-01 15.53, Joel Wiramu Pauling wrote:
> > > On 2 June 2011 01:41, Benny Lofgren <bl-lists@...> wrote:
> > > I agree with what you are saying, and I worded this quite badly, the
> > > frame I was trying to setup was "back in the day" when multi-user meant
> > > something (VAX/PDP) - the load average WAS tied to core utilization - as
> > > you would queue a job, and it would go into the queue and there would be
> > > lots of stuff in the queue and the load average would bump, because
> > > there wasn't much core to go around.
> > Not wanting to turn this into a pissing contest, I still have to say that
> > you are fundamentally wrong about this. I'm sorry, but what you are saying
> > simply is not correct.
> > I've worked in-depth on just about every unixlike architecture there is
> > since I started out in this business back in 1983, and on every single
> > one (that employed it at all) the load average concept has worked
> > similarly to how I described it in my previous mail. (Not always EXACTLY
> > alike, but the general principle has always been the same.)
> > The reason I'm so adamant about this is that the interpretation of the
> > load average metric truly is one of the longest-standing misconceptions
> > about the finer points of unix system administration there is, and if
> > this discussion thread can set just one individual straight about it
> > then it is worth the extra mail bandwidth. :-)
> 100% right. The load average calculation has not changed in 25 years.
> Anyone who says otherwise hasn't got a single fact on their side.
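The unchanged calculation Theo is referring to is, in the textbook description, an exponentially damped moving average of the run-queue length, sampled every few seconds. A minimal sketch of that scheme follows; the 5-second sampling interval and the 60/300/900-second windows are the traditional values, and the code is an illustration of the principle, not a transcription of any particular kernel's source.

```python
import math

# Sketch of the classic damped-average load calculation (assumed
# traditional constants: sample every 5 s; 1/5/15-minute windows).
SAMPLE_INTERVAL = 5.0  # seconds between run-queue samples

def decay(window_secs):
    """exp(-interval/window): fraction of the old average retained per tick."""
    return math.exp(-SAMPLE_INTERVAL / window_secs)

def update_load(load, nrun, window_secs):
    """One sampling tick: blend the current run-queue length into the average."""
    d = decay(window_secs)
    return load * d + nrun * (1.0 - d)

# Example: an idle machine suddenly runs 4 jobs for one minute.
load1 = 0.0
for _ in range(12):               # 12 ticks * 5 s = 60 s with 4 runnable jobs
    load1 = update_load(load1, 4, 60.0)
print(round(load1, 2))            # → 2.53: the 1-minute average climbs toward 4
```

Note that after exactly one window's worth of ticks the average has only reached about 63% of the true run-queue length (1 - 1/e), which is why a load spike takes a while to show up and a while to fade.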
The metric was invented in a time when processes were swapped in and out
routinely. This is why load, nice and the scheduler all behave the way
they do (which to our modern minds seems strange and arcane).
> What has changed, however, is that the kernel has more kernel threads
> running (for instance, run ps aguxk and look at the first few entries,
> which have the 'K' flag set in the 'STAT' field).
> Some kernels have decided to not count those threads, others do count
> them. Since these kernel threads make various decisions for when to
> do their next tasks and how to context switch, the statistical
> monitoring of the system which ends up creating load values can get
> skewed.
> That's what this comes down to.
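Whether those runnable kernel threads are counted directly shifts what the damped average converges to. A small sketch under the same moving-average assumption as above; the thread counts here are hypothetical, chosen only to show the effect:

```python
import math

# Same damped-average scheme: 5 s samples, 60 s (1-minute) window.
D = math.exp(-5.0 / 60.0)

def steady_state(nrun):
    """With a constant run-queue length the damped average converges to it."""
    load = 0.0
    for _ in range(10_000):       # run long enough to reach the fixed point
        load = load * D + nrun * (1.0 - D)
    return load

user_procs = 2        # hypothetical: runnable user processes
kernel_threads = 3    # hypothetical: runnable kernel threads

# A kernel that ignores its own threads reports the first number;
# one that counts them reports the second, on identical user workloads.
print(round(steady_state(user_procs), 2))                   # → 2.0
print(round(steady_state(user_procs + kernel_threads), 2))  # → 5.0
```

So two kernels running the very same user workload can legitimately report different load averages, which is the point being made: the formula hasn't changed, but what gets fed into it has.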