Bruce, I agree completely, and thanks for the very interesting recap of the history.
I got onto this Hercules/Linux kick when I was reading about fault tolerance in computing. There were different approaches from different vendors: the HP/Tandem approach, Stratus's approach, and, most intriguingly, IBM's approach. I value accuracy over speed when it comes to storing and manipulating data, so I wanted to find out more, which got me looking at Hercules. (I ruled out buying a used S/390 server on the grounds that it would consume too much electricity for this hobbyist!)
I read that fault tolerance evolved over time, so by S/390 and ESA in the '90s the IBM engineers had a few decades of experience in understanding the failure modes of their computers, and took certain measures to protect both the data from corruption and the integrity of the processing. Sounded good. I got Debian working on Hercules and thought, "How can I tell whether the emulated processor is doing more than the hardware processor to protect my data?" Because it felt like it might not be doing anything much different from the x86 Linux I was running it on.
Of course, I could and will spend some time carefully studying it, but I thought, "surely someone else has done this, so I might just ask on the forum."
I really appreciate your perspective and efforts to enlighten me!
> I know virtually zip about Linux, esp Linux "internals". But I do have a fair amount of experience with DOS(VSE) internals, and a little with VM.
> First, S/3x0 is a "complex" environment, today. But it didn't start out that way, it took 47 years to get here. System/360 was pretty basic. There were only about 90 instructions (not counting floating point). Some things, like Storage Keys, were optional on some models. There were only a handful of "supervisor only" instructions, only one PSW format, there was no virtual storage, no control registers, etc. Almost none of the "complex" features that you see in modern z/Arch.
> But that "limited" environment was enough to give us great and powerful operating systems, like OS/MVT.
> S/360 grew into S/370, then XA, then S/390, then ESA, and finally into z today. Each new generation introduced new features, and more instructions, esp more "supervisor" instructions. Operating systems and compilers adapted, using these new features where possible. Sometimes, new instructions were provided to better support a specific OS or language. The "string" instructions are a perfect example; they exist only because they're a "natural fit" with the C language. Another example is the dedicated IFL ("Linux only") processors that are available today...
> I guess what I'm trying to say is, you can have a great OS, without using each and every "feature" available on a modern z machine today. I don't think any OS does that. And I know z/VSE sure as heck doesn't... :) :) :)
> --- In hercules-390@..., "hec.tor1" <hec.tor1@> wrote:
> > I guess what I'd like to know is whether these kernel modules and tools take advantage of the processor features that have been implemented in Hercules. Will it run the OS and applications fault-tolerantly? Will there be memory checks for data integrity? Other checks for data integrity? Does there need to be some programmer intervention (like inlining assembly in one's C programs)?
> > I've read a little bit around this subject. For example, there is this from 2002 (nearly 10 years ago):
> > "Linux does not use much of the complexity of ESA/390 and z/Architecture.
> > In Linux, there is/are no:
> > ! Storage keys
> > ! Linkage Stack
> > ! ASN Tables
> > ! ASTE
> > ! Linkage Table
> > ! Trace Table
> > Recent releases of the Linux kernel do set up the DUCT address in CR2."
> > And,
> > "Most of the Control Instructions described in Principles of
> > Operation cannot be used in User Mode because they must
> > be executed in supervisor state or because:
> > BAKR, EREG, ESTA, MSTA, PC, PR:
> > No linkage stack
> > EPAR,ESAR, IAC, IPK, IVSK:
> > Extraction Authority bit in CR0 is zero.
> > MVCP, MVCS, MVCDK, MVCK, MVCSK, SPKA:
> > PSW Key Mask is all zero.
> > PT, SASN:
> > ASN Translation Control bit in CR14 is zero.
> > SAC, SACF:
> > Home Space Switch Event in CR13 is set."
> > (http://linuxvm.org/present/SHARE99/S8139db.pdf)
> > Is this still the case? Have there been kernel developments since then?
> > I suppose I will end up reading all the s390-specific parts of the kernel now, but it would be helpful if someone could give me a high level-overview of exactly what parts of the processor linux does/doesn't take advantage of, and what is left to implement.
> > I am getting progressively more interested in writing tools that would realize the full potential of z/Architecture with Linux. But I find it hard to believe that, in the decade since IBM embraced both Linux and z/Architecture, and with 1/5 of their customers' workloads running on zLinux, IBM hasn't beaten me to it.
> > --- In hercules-390@..., Harold Grovesteen <h.grovsteen@> wrote:
> > >
> > > There are a number of s390x specific kernel modules. There is binutils
> > > configured for s390 or s390x. Depending upon your definition, GNU as
> > > could be considered a proper assembler. Externally Linux looks the
> > > same. Internally, it is not.
> > >