IBM Z systems are always fascinating to a computer engineer. I was fortunate at my first job to get access to a brand-new z10, and was tasked with installing, configuring, and trying out several POCs: z/OS, DB2 on z/OS, WebSphere Application Server on z/OS, CICS Transaction Server, the z/VM operating system, running thousands of Linux images on top of z/VM, and making them communicate with each other and with z/OS using HiperSockets. It was pure fun.

The cool moments I distinctly remember were running the z/OS installation binaries from tape drives, swapping DASDs (hard disks) in and out, and, perhaps the coolest, being able to hot-plug CPU blades in and out of the cabinet while the system was live.

It's a pity that thing is so expensive, and out of reach of common folks.

For those not in the know and curious: based on my knowledge from 2011, CPs are the general-purpose CPUs that run z/OS - the main OS companies typically run on System Z. There were special CPUs (maybe microcoded so), called IFLs, that were only allowed to run z/VM and Linux. I don't know how things have evolved in the last 7 years :-)

So I joined IBM in 2005 working on firmware for a project called eCLipz, which eventually produced the POWER6 family of systems. We've just shipped POWER9 which is in the #1 and #2 supercomputers in the world, Summit and Sierra. [1]

The "i/p/z" part of the project name is important, because it represents:

i - the lineage of the AS/400, which was a 'minicomputer' design from the 80's designed for multiple users, which came up with the idea of "wizards" for administration tasks before there was even a name for a "wizard." The "i" means "integrated," since IBM i is meant to require very little administration. [2] [4]

p - the lineage of the RS/6000, which begat AIX UNIX. The "p" indicated "performance." The box I worked on shipped at 5 GHz in 2008 (IIRC).

z - the lineage of the System/360 mainframe which the above article discusses. The "z" indicated "zero downtime."

The control structure and the power and cooling infrastructure for these systems shared common elements across that project. The hardware and firmware underneath the operating systems for i and p converged into Power Systems, which eventually produced a variety of projects in the open source community (OpenCAPI, OpenPOWER itself, OpenBMC), and of course runs bi-endian to support things like little-endian Ubuntu or the classic big-endian AIX or IBM i on the same processor core.
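As a quick illustration of what "bi-endian" means in practice (a generic sketch of byte ordering, not POWER-specific code), the same 32-bit value laid out in memory in the two orders:

```python
import struct

value = 0x0A0B0C0D

# Pack the same 32-bit integer in both byte orders.
little = struct.pack("<I", value)  # least-significant byte first (e.g. little-endian Ubuntu)
big = struct.pack(">I", value)     # most-significant byte first (e.g. classic AIX)

print(little.hex())  # 0d0c0b0a
print(big.hex())     # 0a0b0c0d
```

A bi-endian core can run an OS that expects either of these layouts; the bytes in memory differ, but the logical value is the same.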

Interesting trivia - in IBM systems (mainframe or Power), the cold boot of the processor from an unpowered state is called an "Initial Program Load," or IPL. The IPL lingo dates back to System/360 (see its Wikipedia article), and survives to this day in really interesting places in the wild. Take the OpenBMC project, which aims to create a 100% open source software stack for the sideband management processors found in servers (and increasingly in other devices too): a GitHub issue from a few months ago complains about a bug causing "IPL" problems. [3] ;-)

[1] https://www.theverge.com/circuitbreaker/2018/11/12/18087470/...

[2] https://en.wikipedia.org/wiki/IBM_i

[3] https://github.com/openbmc/openbmc/issues/2831

[4] edit - fixed "70's" to "80's" per https://news.ycombinator.com/item?id=18494866 ;-)

I have friends who still say their program "abended".

Just a minor nitpick - the green screens we see today are also often descendants of the 3270 family, as well as of the 5250s that started life with the System/34 and found their way into the AS/400 and iSeries.

I had to nitpick because the 3270 is where this beautiful (shameless plug) font originated: https://github.com/rbanffy/3270font