Mainframe dark age

Posted on Thursday, August 5th, 02010 by Alexander Rose - Twitter: @zander
Categories: Digital Dark Age

The usual “digital dark age” stories we see are the ones where people lose data because a platform obsolesces. Business Week is running an interesting story about a computer platform that has refused to obsolesce, and it is the people who are leaving it behind – the mainframe. It turns out that there are still over 10,000 mainframe computers out there churning away at major companies – representing a $3.4 billion market segment. Who knew, right?

One part of the story that is poorly addressed is why these companies have not ported the functionality they get out of these mainframes to more modern computer systems. Wikipedia answers that question this way:

Modern mainframe computers have abilities not so much defined by their single task computational speed (usually defined as MIPS — Millions of Instructions Per Second) as by their redundant internal engineering and resulting high reliability and security, extensive input-output facilities, strict backward compatibility with older software, and high utilization rates to support massive throughput. These machines often run for years without interruption, with repairs and hardware upgrades taking place during normal operation.

…[IBM's modern] mainframe processors such as 2008's 4.4 GHz quad-core z10 mainframe microprocessor. IBM is rapidly expanding its software business, including its mainframe software portfolio…

So I guess we still need mainframes, and they have been modernized somewhat, but it seems to me this would be better handled by cloud or cluster computing that would be more hardware- and software-agnostic. My bet is that most of these systems are actually emulating other emulations several layers deep – in some cases all the way back to punch card programming. I assume no one actually wants to unravel that spaghetti out of fear of losing some critical legacy functionality. I welcome comments here from anyone who actually uses mainframes (and if that story is to be believed, your skill set is in high demand – congrats!).

  • http://longgame.org/ Matt Warren

    Back when I worked in Nordstrom's IT department, I wondered why they were still employing PacBase programmers. One of the (btw, expensive) contractors gave me a convoluted explanation that mirrors – in its essentials – those last two sentences. Too much of the buyer-software's core functionality was tied up in systems that nobody dared to potentially break.

    I have no idea if this is still the case; I was only a document maintenance revision-guy (for functional specifications). But it always stuck with me that even during the larger society's mad rush to upgrade-upgrade-upgrade, there were people with old-school programmer chops that were absolutely essential.

    Very cool find – thanks for sharing it.

  • http://awesomethingoftheday.tumblr.com/ Remy Porter

    I work with mainframes. Not on the mainframe itself, but as a client programmer that often needs to scrape data from the mainframe. I would totally love to not have to do that any more.

    And you're right about the emulation. We have code that emulates a 1930s-era Burroughs calculator for certain accounting practices. And you're right about the spaghetti. Even applications that are only a few years old suffer from this: “What does it do?” “I don't know, but it's mission critical.”

    The other thing to keep in mind is this: a business isn't going to upgrade until the cost of upgrading won't be a significant part of the annual IT budget, and the savings from doing it will pay for themselves within five years. Not exactly “long now” thinking, but that's how it works.

  • smitchell

    You're right that cloud architectures should finally replace the mainframe. In fact, if you squint, much of the new cloud technology looks something like the old mainframe architecture.

    I worked for some time on a J2EE, n-tier application development project designed to replace a client-server bank branch automation software package. The J2EE application (itself with much complexity) used an XML/SOA-based library to message the mainframe (a.k.a. “the host”). The XML/SOA library used 3270 screen scraping of the original accounts and teller applications. The teller applications were written in assembler and were particularly old. Rumor had it that the original programmers were not only retired, but mostly deceased.
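
    For the curious, 3270 screen scraping boils down to driving a terminal session programmatically and reading characters off fixed screen positions. Here is a minimal sketch, assuming the py3270 package (a Python wrapper around the s3270 emulator); the hostname, transaction code, and every screen coordinate are invented for illustration:

    ```python
    # Toy 3270 screen-scraping sketch using py3270, which drives a headless
    # s3270 session. Hostname, transaction code, and coordinates are all
    # hypothetical; a real scraper would use the application's actual layout.
    from py3270 import Emulator

    em = Emulator(visible=False)
    em.connect("mainframe.example.com")   # hypothetical host

    em.wait_for_field()
    em.fill_field(1, 1, "ACCTINQ", 8)     # type a transaction code at row 1, col 1
    em.send_enter()

    em.wait_for_field()
    balance = em.string_get(5, 20, 12)    # read 12 characters at row 5, column 20
    print("scraped field:", balance.strip())

    em.terminate()
    ```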

    The spaghetti is not only deep, but also wide. While one could imagine reimplementing the clear and present functionality of the teller application, one didn't know what other applications or batch jobs might run against those files and depended on that data for some critical accounting. Better to follow the “if it ain't broke” principle.

  • http://profiles.yahoo.com/u/UL3UDCP4FMROPO64OLKD33K4F4 Edward J. Renehan Jr.

    In general, “updating” or replacing these systems – extracting all that data and putting it, intact, into a new environment – represents a massive and expensive job, something like disassembling a Turk's head knot. But why invest the time and energy if the Turk's head – a beautiful if antiquated thing in and of itself – is getting the job done? “Obsolescence” is a relative term. The fact that data exists usefully in an old mainframe environment suggests that the data in question demands no more complex manipulation/extraction than the mainframe software offers. Thus the data is right where it belongs, and there is no question of over-capacity.

  • Giedo De Snijder

    I work for a company that specializes in implementing and managing commonly used SCM products in mainframe environments. Take a look at http://www.abitmore-scm.com. Our CEO pointed me to a slide from an AbitMORE presentation given at the SERENA global conference in September 2006:
    - Far from being dinosaurs, mainframe computers have proven to be reliable and essential, and they still handle an estimated 70 percent of business data
    - Experts in the industry and at the W. P. Carey School of Business say a shortage of mainframe-trained technicians looms as the mainframe generation nears retirement after decades in which younger technicians gravitated toward newer areas of enterprise computing.
    - Companies continue to expect the traditional level of service provided by the mainframe. Experts believe the real reason for renewed interest in the mainframe is that customers tried — and failed — to operate using smaller and cheaper machines.
    - About 70 percent of business data resides on mainframes and only 28 percent of IT workers are involved in mainframes
    - Through the IBM Academic Initiative, IBM and companies that use mainframes in the Phoenix area look forward to ASU providing technology students with IBM mainframe knowledge.
    In fact this slide is a summary of what’s written at http://knowledge.wpcarey.asu.edu/article.cfm?articleid=1236 (The Dinosaur Myth: Mainframe Behemoths Aren’t Dead Yet)

  • Davide Bocelli

    I like these posts. I once worked for an important Italian bank software company. When I asked why mainframes were still (at the beginning of the 21st century) the heart of all the banks we served, the answer was “because it works, and until it does, a new technology is just a risk”.
    At the time, all the mainframe data had already been completely mirrored onto SQL and OLAP databases for some years, working perfectly for more modern applications in parallel with very old applications (COBOL, assembler, etc.) that had been crunching bits for decades. So the answer didn't seem sufficient to me (although I got the spirit). Talking with a bank IT manager, I understood that the missing part of the answer was that he had simply never known a bank without the word 'mainframe'.
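
    One common shape for such mirroring, sketched very loosely: a periodic extract comes off the host as fixed-width EBCDIC records, gets decoded, and lands in a relational table that the newer applications query. A toy version in Python follows, with an invented record layout (real copybooks are far more involved); cp037 is Python's built-in EBCDIC (US) codec:

    ```python
    # Toy "mirror the mainframe into SQL" sketch. The extract file name and
    # record layout are invented: acct(10) name(20) balance(10), EBCDIC-encoded.
    import sqlite3

    RECORD_LEN = 40

    def parse_record(raw: bytes):
        text = raw.decode("cp037")       # EBCDIC -> Unicode
        acct = text[0:10].strip()
        name = text[10:30].strip()
        balance = int(text[30:40])       # zoned-decimal handling omitted
        return acct, name, balance

    db = sqlite3.connect("mirror.db")
    db.execute("CREATE TABLE IF NOT EXISTS accounts "
               "(acct TEXT, name TEXT, balance INTEGER)")

    with open("daily_extract.dat", "rb") as f:  # hypothetical nightly extract
        while chunk := f.read(RECORD_LEN):
            db.execute("INSERT INTO accounts VALUES (?, ?, ?)", parse_record(chunk))
    db.commit()
    ```
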
    (In the future we could see ancient mainframes from the eighties virtualized and reincarnated in the universe-in-a-nutshell cloud of a bank, extending their life to the next technological jump… )

  • http://hewhocutsdown.blogspot.com hewhocutsdown

    I work with IBM i mainframes all the time; some folks are switching, but the majority of companies I've worked with have recently invested more into the systems.

    The biggest issue I've come across is that disk is so damn expensive that a number of folks are using cheap Linux/Windows fileservers to store the data being generated by the larger systems.

    The other thing is, some of these systems have been 'modernized' – web interfaces and the like – and the performance/user efficiency drop is considerable. There's a lot to be said for simple, stripped down interfaces. There's a steeper learning curve, but once that hurdle is overcome users can actually work at their own speed, rather than the system's.

  • http://hewhocutsdown.blogspot.com hewhocutsdown

    Truth be told, I've seen a company develop a “green screen” interface for a web application, directly working with the application's API, because their users didn't want to interrupt their workflow by having to use a browser!!!

  • http://twitter.com/espenandersen Espen Andersen

    And the mainframe (surprise!) still isn't dead. But it may eventually feel very lonesome, even ethereal.

  • Matt Nuttall

    Hi folks,
    I've got 21 years and counting of hands-on mainframe experience.

    If you use a credit card or a bank card, then you're a mainframe user.

    You're right: mainframes have peculiarities that date back to punch cards. The job that runs the program that handles and stores your financial transaction must be submitted using a maximum of 72 characters per line, because that's how many holes a punch card had. But after that, the limitations are gone….
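
    To make that limit concrete: anything past column 72 of a job statement isn't treated as content. A minimal checker in Python (the file name is hypothetical):

    ```python
    # Flag job-stream lines whose content runs past column 72.
    with open("payroll.jcl") as f:                 # hypothetical job stream
        for n, line in enumerate(f, start=1):
            stmt = line.rstrip("\n")
            if len(stmt) > 72:
                print(f"line {n}: {len(stmt)} chars - text past column 72 "
                      "would be ignored or read as sequence numbers")
    ```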

    Few organizations run a mainframe in production by itself. They almost always exist as a cluster of many fully interconnected, completely independent mainframe instances, called images. Each image can be functionally capable (if not always in terms of MIPS) of carrying the entire workload. Each image can sense when another image is slow or stopped and redirect workload dynamically and seamlessly to whichever image is best suited. So you can upgrade hardware and software on one image at a time, while the entire cluster remains 100% available. The entire cluster can be represented by a single IPv4 or IPv6 address, and this address can move automatically depending upon requirements. The media access control (MAC) address associated with the IP address is moved dynamically from interface to interface if necessary, so that in addition to routing-layer redundancy, you get ARP redundancy. Meanwhile, bigger shops will have a remote, real-time, complete hot backup at a disaster recovery site, running parallel identical transactions. If the production cluster gets a bomb dropped on it, the disaster cluster takes over without missing a transaction.
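
    To make the MAC/ARP handoff concrete: a takeover amounts to the new owner of the address broadcasting a gratuitous ARP so that neighbors update their caches. Here is a toy sketch of that one step using scapy; the address, MAC, and interface are invented, and real z/OS dynamic VIPA does this internally with far more machinery:

    ```python
    # Toy gratuitous-ARP takeover: announce that the cluster's virtual IP now
    # lives at this machine's MAC. Address, MAC, and interface are invented.
    from scapy.all import ARP, Ether, sendp

    VIP = "192.0.2.10"            # hypothetical cluster-wide address
    MY_MAC = "02:00:00:00:00:01"  # hypothetical local MAC
    IFACE = "eth0"

    def claim_vip():
        """Broadcast a gratuitous ARP so neighbors remap VIP -> MY_MAC."""
        garp = Ether(dst="ff:ff:ff:ff:ff:ff", src=MY_MAC) / ARP(
            op=2, hwsrc=MY_MAC, psrc=VIP, hwdst="ff:ff:ff:ff:ff:ff", pdst=VIP
        )
        sendp(garp, iface=IFACE, verbose=False)

    # A real setup would run this only after a health check decides the
    # current owner of the VIP is down.
    claim_vip()
    ```
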
    Meanwhile, you can run Linux in any of these images, and use an internal direct I/O LAN (HiperSockets) to get from, say, an Apache webserver to a DB2 database, with the traffic carried over an emulated LAN in internal mainframe memory.
    All of this has been up and running pretty much as I've described for many years. Without a hiccup.

    This is just a summary of some of the capabilities.

    So, I think that today's mainframe is actually the forefront of server technology — there's nothing that can encompass the breadth and depth of this technology.

    Here's the Long View problem. This kind of availability and stability has really matured in the last decade. A mainframe image pretty much never goes down, and to lose a cluster is unheard of. To lose an offsite backup cluster as well? Pretty low probability…. Which brings us to the Maytag repairman problem: without any crashes or problems (50 years of debugging must have some impact), there are fewer coders and support folks required than ever, even though more and more transactions are being handled and stored. It will get harder and harder to fix these extremely rare problems because there are fewer and fewer people around with current experience at fixing them….

  • http://twitter.com/finalcontext Jamie Stanton

    I've worked at large organisations before that had systems nested within systems within systems. I always imagined it like the neocortex on top of the mammalian brain on top of the reptile brain.

  • http://twitter.com/ctpoole Clyde Poole

    I work for a company that often has mainframe users as customers. The primary reasons I hear for keeping the systems in place are: (1) systems where uptime is measured in years rather than days are required for our work, (2) we need systems where security is built in from the beginning rather than added on, and (3) we need to have confidence that the applications work correctly, and years of experience with correct behavior is more important than new systems with unknown reliability.

  • moorthy

    I don't know about mainframes – what is a mainframe, and what is the benefit to students? I've finished my BE, but my friends all say this technology is growing now. Where are the institutes that teach it?

  • http://profiles.google.com/rob.kinyon Robert Kinyon

    As someone who's building one of these cloud systems, I can say the theoretical uptime is forever. The actual uptime, right now in its infancy… much, much less. After 20 or 30 years and a full 2 generations of developers, maybe the uptime and security will be sufficient that the mainframe will wander off into the desert. (We have a long view, too.)

    But I guarantee that the code running in the cloud will be those images Matt Nuttall described. Deconstructing an existing program is extremely hard. It is almost always cheaper over the life of the program to keep it running than to undertake the testing burden of reimplementation. Read http://www.joelonsoftware.com/… about Netscape's mistake in this regard.

  • Steeleweed

    Most of those who put them down know nothing about mainframes – they once took a 6-month COBOL course in high school and think they understand. That's like claiming you're an Indy-level racer based on having once ridden a trike.

    @Matt Nuttall:
    IBM cards were 80 columns, but the last 8 bytes were sequence numbers; so by convention there were only 71 bytes of data, 1 byte of ‘continuation punch’, and 8 bytes of sequence. (Univac cards were 96 columns.)
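
    In other words, an 80-column card image splits like this (a tiny Python illustration; the sample card text is made up):

    ```python
    # Columns 1-71: data; column 72: continuation punch; columns 73-80: sequence.
    card = "//STEP1   EXEC PGM=PAYROLL".ljust(72) + "00000010"

    data         = card[0:71]    # columns 1-71
    continuation = card[71:72]   # column 72
    sequence     = card[72:80]   # columns 73-80

    print(repr(data.rstrip()), repr(continuation), repr(sequence))
    ```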

    Mean time to failure on today's zboxes is 25-30 years, and serious users run multiple systems (where do you think the server world got the idea of clusters?).
    VM on servers is a pale imitation of mainframe VM capabilities.

    When sneering at mainframes as ‘dinosaurs’, it is worth remembering that humans have only been around for about 150,000 years, while dinosaurs prospered for over 170 million years. They ain't dead yet, or likely to be.

    I’ve been programming PCs and server apps since they were invented and I’ve been working mainframes for 50 years. Intel is fine for the small stuff but I haven’t seen any serious work that can’t be done better, faster, more reliably on a mainframe.


