(Still not subscribed, and still not the preferred list to discuss this.)
This mail is about defects in GNU Mach.
People have mentioned several issues in GNU Mach (in particular, there
was one good summary by Olaf, but heck if I can find it in the
archive. Do you guys never change the subject lines?) But there was
at least one glaring omission, and one that is in my opinion the most
serious.
Of course it is impossible to give a comprehensive review of all
issues, but at least this one deserves particular mention, because it
is so obvious. If you ever did I/O-intensive work on the Hurd, you
know what I am talking about.
GNU Mach does not have any resource accounting, and it has very poor
resource scheduling. Resources that need to be accounted for are memory
consumption (and therefore kernel objects like ports) as well as CPU
time. Resources that need to be scheduled are CPU time, physical
memory and I/O bandwidth. Mach contains a VMM with an opaque policy
that is extensible via memory objects (look that up in the Mach
documentation if you are not familiar with them).
Unix systems at least allow setting quotas. Given the design of the
Hurd and Mach, we can't even do that, let alone any of the fancier
things. We can not even reserve the 5 percent disk space (or
whatever it is) for the superuser in ext2fs. This is because resource
principals (i.e., users or processes, depending on the level you are
interested in) are not concepts that are in any way integrated in the
kernel and system design.
The resulting problems are stability issues (Mach panics due to
resource exhaustion) and performance issues (Mach can not make smart
decisions about paging and scheduling).
Unix and GNU/Linux have pretty good heuristics, which depend on all
relevant components being integrated in a single kernel, sharing data
structures (and thus information). In a microkernel system, the same
heuristics are not as easy to apply (due to lack of central
authority), so different strategies may be preferable. This is
difficult, because resources are distributed over many components. In
a monolithic kernel, for example, all filesystems can share the same
block cache. In the Hurd, where drivers and filesystems are isolated
components, this is not feasible.
Check out  for more details and possible solutions.
Unfortunately, this is one aspect where there is no precedent: To my
knowledge, there are no user-extensible, multi-server, microkernel
based systems that target a broad range of applications and perform
well under dynamically varying load. So, this is one of the areas
where we have to experiment, and where success can not be guaranteed.
It is common knowledge that Mach IPC is slow. But what is not common
knowledge is that Mach never got to a point where that matters.
Mach's lack of IPC performance is dominated by its enormous working
set, which eats the CPU caches and causes a large number of TLB misses.
This was analyzed carefully, see  and .
Device drivers are just an insane amount of work (unless you use para-
or pre-virtualization), but not a conceptual or design challenge.
 Chen, J. B. and Bershad, B. N. 1993. The impact of operating
system structure on memory system performance. In Proceedings of
the Fourteenth ACM Symposium on Operating Systems Principles
(Asheville, North Carolina, United States, December 05 - 08,
1993). SOSP '93. ACM Press, New York, NY, 120-133. DOI=