Games for IBM AS/400 (jcrcmds.com)
96 points by remoquete on Nov 26, 2021 | 59 comments



Back in the 80's and early 90's, my uncle's company had an IBM System/36, which was a predecessor to the AS/400. They used it for accounting. The thing was a monster, and it looked exactly like the top photo on wikipedia: https://en.wikipedia.org/wiki/IBM_System/36

It had 8 terminals attached, and a couple of huge line printers. We opened up the main system cabinet one day, and the hard drive (about 200 megs?) was over a foot wide. The system also used "disk cartridges" which were several 8 inch floppies in a box. Supposedly the system cost almost $200K (in 80's money.)

In the mid 90's they replaced it with PCs.


As drive packs go, those were pretty compact already. Prior to that, a 200 meg pack + drive would be the size of a top-loading washing machine. Packs were removable; you'd screw them into place with a handle at the top, and after closing the lid they'd spin up. Massive gear: a pack of 11 platters weighed a couple of kilos and would spin up to several thousand rpm, and woe to you if the pack was written on a drive with a different alignment – then you'd have to get the heads re-aligned.

This was one of the reasons why most machine-to-machine data transfer was done using tapes. Slower, but more reliable and much, much cheaper. Packs would usually stay within the same datacenter, and would likely be assigned to the same drive, though theoretically there was no reason why they couldn't be moved to a different one that had been calibrated with the same calibration setup.


Back in the 90's I worked as a field engineer and had a few of these on contract. They're really nice machines to work on and a common task was replacing these hard disks when they failed. The IBM field service manuals were pretty good and you could repair/replace pretty much anything on-site. They look like monsters but they're actually really friendly to field service techs.


Considering the power of mainframes, I wonder why so few games were made for them.


To be nitpicky -- the AS/400 is not a mainframe; it's what IBM calls a "midrange system", positioned between commodity x86 servers and mainframes. Today, the AS/400 OS (OS/400, or as it's known now, IBM i) runs on IBM's POWER servers. For a typical IBM i system you're talking a couple of tens of thousands of dollars; for a typical mainframe you're in the hundreds of thousands.

Also regarding the power of mainframes: in the case of AS/400, it's definitely not what I would call fast. Everything runs on what is essentially a virtualization layer... the AS/400 instruction set (MI, or "machine interface") doesn't exist in any hardware, it's just run on top of a translation layer on top of (these days) POWER. It's a very weird system from top to bottom.

As for actual mainframes, traditionally the power came not from raw CPU power, but from an architecture that efficiently handles parallel I/O. Definitely powerful for batch processing of large amounts of data, but not necessarily optimized for something like games.


> Everything runs on what is essentially a virtualization layer... the AS/400 instruction set (MI, or "machine interface") doesn't exist in any hardware, it's just run on top of a translation layer on top of (these days) POWER

To nitpick your nitpick – MI isn't interpreted, it is compiled to machine code. Not JIT, AOT: when you compile a program, it actually goes through two steps. First it compiles the source code to MI, and saves that in the executable (program object). Second, it compiles the MI to machine code, and saves that in the executable too, in a different section of it. At runtime, the MI is ignored and the machine code is run. There is even a command to delete the MI part of the executable (called deleting the "observability"), and your program will still work after the MI is deleted because it isn't actually used at runtime.

When they transitioned from custom CISC [0] to POWER, if you still had the MI in your executable, the system would automatically recompile it to POWER machine code. If you had deleted the observability, it wouldn't run, and you'd have to recompile it from source. Theoretically, the same would apply if IBM ever decided to move IBM i to some other platform than POWER (such as ARM or x86), although I doubt that is ever going to happen.
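If it helps, here's a toy model in C of that two-section program object idea. Every name here is invented for illustration; this is nothing like the real OS/400 program-object format, just a sketch of the mechanism:

    /* Toy model of the two-section program object described above.
       All names are invented; this is not the real OS/400 format. */
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct {
        unsigned char *mi;        /* MI section: kept only for retranslation */
        size_t mi_len;
        unsigned char *machine;   /* machine-code section: what actually runs */
        size_t machine_len;
    } program_object;

    /* "Deleting observability": drop the MI section. The program still
       runs (execution only ever uses the machine code), but it can no
       longer be retranslated for a new CPU, as in the CISC-to-RISC move. */
    void remove_observability(program_object *pgm) {
        free(pgm->mi);
        pgm->mi = NULL;
        pgm->mi_len = 0;
    }

    int can_retarget(const program_object *pgm) {
        return pgm->mi != NULL;
    }

    int main(void) {
        program_object pgm = {0};  /* imagine the compiler filled both sections */
        remove_observability(&pgm);
        printf("retargetable: %d\n", can_retarget(&pgm));  /* prints 0 */
        return 0;
    }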

Also, there are actually two versions of MI, old MI and new MI. Old MI (OMI) was used with OPM (Original Program Model) executables. OMI was the original AS/400 machine interface, and is in fact descended from the System/38 machine interface. Around the same time they moved to RISC, they introduced a new program model (ILE, Integrated Language Environment) and new MI (NMI) to go with it. OPM programs still run, but now the system translates OMI to NMI, and then NMI to machine code.

Now, the interesting thing about NMI – OMI was indigenous to the 38/400, whereas NMI was actually just the internal intermediate language used by IBM's compilers on other platforms (such as AIX and OS/2), known as "W-code", with some 400-specific extensions. So, in the contemporary ILE world, MI is really just a compiler intermediate language, like LLVM IR is, and so isn't really anything that different from what happens on AIX or Linux or Windows or z/OS – the history and the marketing make it sound a lot different.

Now, I agree that IBM i isn't necessarily that fast. It has a very complex software stack, and while I find that complexity fascinating, no doubt it imposes performance costs. But the "virtualisation" aspect in itself doesn't cost much at all. I think the widespread use of 128-bit pointers is likely a bigger issue – although, the 128-bit pointers don't get used directly, they get converted to 64-bit pointers before actually being used – still that has to have some performance cost which other systems don't have to deal with.

But you can opt out of that by running your code in a "teraspace", in which case you just use 64-bit pointers and the 128-bit pointers are not used. If you are running Java code or PASE code (AIX compatibility subsystem), both use teraspaces, and if you avoid invoking OS services (as in a compute-heavy workload), your performance should be close to that of AIX on the same hardware.
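To make the pointer-size point concrete, here's a rough C sketch of the idea. The layout is invented for illustration (the real MI pointer format is more involved than this); it just shows the two costs: double the footprint, plus an unpacking step before each use:

    /* Rough model of why 16-byte pointers cost something: twice the
       cache/memory footprint of a native pointer, plus a conversion
       before each dereference. Layout invented for illustration. */
    #include <stdint.h>
    #include <stdio.h>

    typedef struct {
        uint64_t attributes;  /* type/authority info, checked by the system */
        uint64_t address;     /* the part the hardware can actually dereference */
    } mi_space_pointer;       /* 16 bytes */

    /* The conversion described above: the 128-bit pointer is reduced
       to a plain 64-bit pointer before actually being used. */
    static void *resolve(mi_space_pointer p) {
        /* a real system would validate the attributes/tag here */
        return (void *)(uintptr_t)p.address;
    }

    int main(void) {
        int x = 42;
        mi_space_pointer p = { 0, (uint64_t)(uintptr_t)&x };
        printf("%d\n", *(int *)resolve(p));          /* prints 42 */
        printf("mi ptr: %zu bytes, native ptr: %zu bytes\n",
               sizeof p, sizeof(void *));            /* 16 vs 8 on LP64 */
        return 0;
    }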

[0] the CISC instruction set, IMPI, was inspired by the 370 instruction set but not compatible with it. For AS/400, I don't believe it has ever been publicly documented, but I'm led to believe the 400 CISC instruction set was just an evolution of the 38 CISC instruction set, which was publicly documented in this manual: http://www.bitsavers.org/pdf/ibm/system38/SC21-9037-3_IBM_Sy...


I've often considered AS/400 a "road not taken" in systems engineering, one that would have been much more influential had it come out 15 years sooner (though if it had, the hardware would not have been performant enough to run it), particularly the abstraction between memory and storage being handled by the OS rather than the application. The machine-independent codebase and other features have allowed it to continue to live on at minimal cost.


One of the big problems I see with 38/400, is while it had some very innovative features, it also had some very old-fashioned ones: EBCDIC, COBOL, RPG, etc. IBM has de-emphasised those old-fashioned aspects over time, but maybe it would have been more successful if they'd moved away from them at the start?

I think it also might have been more successful if instead of porting it to POWER they'd ported it to x86 – they might have needed to add some tagged memory extension to x86 to do that, but I'm sure Intel would have been happy to talk. Indeed, the same idea has a history at Intel, if you look at the iAPX-432 and the i960, so Intel would certainly have understood the request – they might even have been excited about it. (Of course, given IBM's commitment to RISC and POWER, that was never going to happen.)

Like a lot of other IBM systems, it became more closed over time. The System/38 had huge amounts of very detailed technical documentation publicly available. With the AS/400, many of those details moved under NDA, and it has only got worse over time. Now you can't even write a compiler for the (non-PASE) AS/400 without signing an IBM NDA, because the executable format itself is under NDA. That's how closed it is!

I even wonder, what if instead they'd ported 400 to 370/390/z, or tried to make 38/400 and 370 more compatible from the start? Maybe some of these ideas might have cross-pollinated to MVS/VM/etc. A lot of what is most original about the 38 and 400 (single-level store, capability-based addressing) actually descends from the Future Systems project, a cancelled 1970s effort to design a next-generation successor to the 360/370. Maybe those technologies might have been more influential if they had a future on the mainframe? A merger of the AS/400 and 9370 (low-end mainframe) lines could have led to that.


I'm not in the hate-COBOL camp; COBOL is very good at what it does. When I finally got exposure to System z and COBOL, I was astounded at how easy it was to work with record data there. It's something that's several orders of magnitude harder to do in the Unix world with its "bag of bytes" data paradigm. Everything that originated in the Unix world is designed around that data model; it's really just a different way of thinking.


I don't hate COBOL either. And you are right, there are some positive aspects of COBOL, including some of the data modelling ideas in the DATA DIVISION. However, I think if you look at PL/I, you'll find most of the same ideas present, but on the whole it is a nicer language. Some people hate PL/I too, and no doubt it can be overcomplicated (which is part of why PL/I subsets were so popular in the 1970s and early 1980s), but I think PL/I has most of COBOL's strengths with fewer of COBOL's weaknesses.


I think of PL/I as a systems programming language, and COBOL as an application programming language, much in the same way you'd look at C/C++ and Java in the Unix world (that analogy is actually almost directly applicable, though maybe Pascal is a closer fit? I'm not sure).

Different tools designed to do different things.

I'd actually like to learn PL/I (or COBOL beyond a cursory knowledge). I'm someone who has a great deal of knowledge of the thought patterns of programming (as well as how you architect software, systems design, etc.) without knowing any one tool well; I tend to learn a tool for a task, then forget the tool when I no longer require it. My job is... hard to explain, but it's quasi-programming, yet also very much not, aside from helper scripts I write now and again (I use all of the thought patterns of programming, without any of the tools). I enjoy writing software, but only have so many itches to scratch in my personal time.


> I think of PL/I as a systems programming language, and COBOL as an application programming language, much in the same way you'd look at C/C++ and Java in the Unix world, I think

PL/I is a general-purpose applications programming language. Its primary design goal was to unify COBOL, FORTRAN and ALGOL into a single language, and as such it was intended to meet the needs of both business and scientific application programmers. IBM invested heavily in trying to convince business software developers to move from COBOL to PL/I – see for example [0] – but, while there was a decent amount of business software development done using PL/I in the 1970s, it never had the success in that market which IBM had hoped for; PL/I ended up being an also-ran to COBOL instead of what it was meant to be, the successor.

PL/I proper was generally not used as a systems programming language. On the one hand, it had a number of features (decimal arithmetic, formatted and record I/O, and multithreading support built into the language syntax) which, while useful in applications programming, were unavailable, unsupportable or simply not useful in systems programming. On the other hand, a lot of features which are valuable in systems programming – such as inline assembly and the ability to manually control the allocation of variables to machine registers – are not in PL/I proper.

Given this, it was common to create systems programming dialects of PL/I which removed those application-centric features and added systems programming specific features. But those dialects were not properly speaking PL/I and were often called by other names - IBM’s was called BSL, later renamed to PL/S, which spawned a bunch of descendants with similar names (PL/AS, PL/X, PL/DS, PL/MI, PL/MP, PL.8). Honeywell had PL/6 which was used as the systems programming language for its CP-6 operating system (a backward compatible rewrite of Xerox CP-V). Prime Computer had PL/P as the systems programming language for PRIMOS. Gary Kildall developed PL/M in which he wrote the first versions of CP/M (later versions were rewritten in assembly to get better performance, but even in later versions PL/M was still used to implement some of the utilities.)

However, none of these systems programming languages were in the strictest sense PL/I; the original unmodified PL/I (without subsetting or supersetting) was an applications programming language not a systems programming language.

[0] http://bitsavers.org/pdf/ibm/360/pli/SC20-1651-1_A_Guide_to_...


Thank you for taking the time to explain that to me! I found it informative and enjoyable to read.


> I was astounded at how easy it was to work with record data there. It's something that's several orders of magnitude harder to do in the Unix world with its "bag of bytes" data paradigm.

Hear, hear. I started out in the block- and record-oriented I/O world, and quickly found it frustrating that there was no quick way to use the highly advanced I/O system calls for simple tasks. One had to define the record structure, typically in MACRO-11/MACRO-32 (we did not use COBOL anyway), and only then commence the I/O. Record-oriented I/O also comes with its own wealth of quirks: e.g. if an application crashed or the OS restarted, the system administrator had to determine which files had not been closed and manually close each of them off. Applications would refuse to start up if the files they operated on had not been properly closed. Blimey. Direct device I/O was even more complex.

Even block I/O wasn't trivial to use, especially for simple text processing, as the system did not keep a used-bytes counter for the file's last disk block. The app would have to read all the blocks in and then search for a special control character that indicated the end of the text; alternatively, at least one text editor reserved the very last word of the file's last disk block as the total file character counter. Wash, rinse and repeat. All in all, as soon as I could get hold of access to a UNIX box, I ran away screaming.

UNIX shifts the disk I/O complexity into the application layer (user space) and raw devices (user space + VMM bypass in the kernel space). Want to implement record-oriented I/O? Write a user-space application for it. Want to implement high-performance record-oriented I/O? Use a raw device (Oracle absolutely wanted exclusive access to one, and it was very fast for its time). That has yielded a highly successful cottage industry of databases, relational and record-oriented alike (e.g. BerkeleyDB).

Looking back, I have come to appreciate the record-oriented I/O, and I would love to be able to define my application's record structure in the COBOL-like way in C/Rust/whatever and use that for simple and slightly more complex things. I certainly would not want to have to deal with the complex legacy that comes with it, i.e. seeking out unclosed files after the restart to close them and stuff. For simple one-off things and for talking to simple device drivers, I will take my bag of bytes any moment.
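For the simple cases you can actually get pretty far in C already, with a struct of fixed-width character fields (no padding issues, since it's all char arrays) plus fixed-length reads and writes. A minimal sketch of what I mean; the record layout here is invented:

    /* Minimal fixed-length record I/O in C, in the spirit of a COBOL
       record layout. The fields are invented for illustration. */
    #include <stdio.h>
    #include <string.h>

    /* Like a DATA DIVISION entry: every record is exactly the same size,
       so record N lives at byte offset N * sizeof(struct account_rec). */
    struct account_rec {
        char account_no[8];   /* PIC X(8)  */
        char name[30];        /* PIC X(30) */
        char balance[12];     /* PIC 9(10)V99, stored as text here */
    };

    int main(void) {
        struct account_rec rec;
        memset(&rec, ' ', sizeof rec);
        memcpy(rec.account_no, "00000042", 8);
        memcpy(rec.name, "J SMITH", 7);

        FILE *f = fopen("accounts.dat", "w+b");
        if (!f) return 1;
        fwrite(&rec, sizeof rec, 1, f);        /* write record 0 */

        /* random access by record number, like relative files */
        fseek(f, 0L, SEEK_SET);
        struct account_rec back;
        if (fread(&back, sizeof back, 1, f) == 1)
            printf("%.8s %.30s\n", back.account_no, back.name);
        fclose(f);
        return 0;
    }

Of course that gives you none of the OS-level guarantees a real record-oriented file system provides; it's exactly the "write a user-space application for it" approach from upthread.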


Being able to sort records with the system's tools was astounding to me, as someone who gets Unix pretty well.


I learned on the AS/400 and RPG. I still kind of prefer it to SQL for reading/writing from database. I also really liked CL as a scripting language and the object-orientation of the operating system.

In the early 2000's I was involved in porting Subversion to OS/400. I had to build a replacement for make and the build tools but it almost all compiled on the first try. Then spent the next year dealing with EBCDIC issues. By the time we were done, IBM had updated the C compiler so we could make it UTF8-native and that let us remove our entire patch.

There was/is a lot of cool stuff about the AS/400.


P-code is essentially the same idea as AS/400 machine-independent code, and it didn't really work well on early machines because optimizing it to efficient binary/machine code, even AOT, was infeasible back then. It took time for compilers to become good enough.

As for handling "the abstraction between memory and storage", that's essentially what memory-mapped files are. It's done routinely in modern systems.
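For instance, on any POSIX system a few lines of mmap give you a crude, per-file version of "storage is just memory" (a minimal sketch):

    /* Memory-mapped file on a POSIX system: the file's bytes appear
       directly in the address space, so you update "storage" with
       ordinary loads and stores, a rough per-file analogue of what
       the AS/400's single-level store does for everything. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("counter.dat", O_RDWR | O_CREAT, 0644);
        if (fd < 0) return 1;
        if (ftruncate(fd, sizeof(long)) != 0) return 1;

        long *counter = mmap(NULL, sizeof(long), PROT_READ | PROT_WRITE,
                             MAP_SHARED, fd, 0);
        if (counter == MAP_FAILED) return 1;

        (*counter)++;                 /* a plain store; persisted to the file */
        printf("run count: %ld\n", *counter);

        munmap(counter, sizeof(long));
        close(fd);
        return 0;
    }

The difference is that on the AS/400 this isn't an opt-in per file; it's the only storage model the OS has.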


Actually it was quite feasible and a company was built on that in 1979.

https://en.wikipedia.org/wiki/Corvus_Systems


Given that the System/34 and /38 were quite popular in the 1980s and earlier, I'm not sure how much sooner one could have built this platform. The System/360 dates from the 1960s.


Thanks for all those technical details! For anyone else who'd like a broader tour around the AS/400, I can recommend "Inside the AS/400: Second Edition" by Frank Soltis. I bought it used for just a few dollars and really enjoyed learning about the evolution of the platform from one of its original designers, even though I doubt I'll ever put my hands on one directly.


Used? How large is the whole system I wonder?


They were big and heavy but not as bad as you would think. They could fit under your desk and were regularly just run out of a closet somewhere. Of course there were also gigantic ones but that was mostly due to the quantity of disk and tape drives that were attached.

As an ISV, the thing that sucked the most was the variety of tape formats. The drives were expensive and the tapes were expensive. So for someone to try our software we had to put it on a tape and mail it to them. This was not cheap!


I think the comment is talking about buying a used copy of Frank Soltis’ book about the design of the AS/400, not about buying a used AS/400


That's right. I bought the book for a few dollars, not an AS/400 system, though I'd probably find a decorative use for one if I found one that cheap. I look at eBay and Craigslist every so often for cheap used mainframes (or at least the front doors) that I could turn into a neat garage fridge.


I don't think the parent said that it's interpreted, only that it sits on a translation layer, which is true.


Oh yes. But the parent made it sound like having a "translation layer" makes it somehow unique. In contemporary IBM i, the "translation layer" is a modified form of the compiler intermediate language copied from IBM's AIX/etc compilers, and as such isn't really fundamentally different from what happens on other platforms. It also made it sound like the translation layer has a performance cost – in itself, it doesn't, any more than LLVM IR has a performance cost. Some of the entities the bytecode/IR allows you to manipulate – such as 128-bit pointers – likely have some cost, but that is a cost of those entities, not of having a bytecode/IR in itself.

Not saying there is nothing unique about AS/400. The single-level store is a big one, although in more recent decades there has been a big move away from it (teraspaces, PASE, migration from Classic to J9 JVM).

I think the history of 38/400/i could be described like this: it started out with some very innovative, even in some ways alien ideas – capability addressing, a single address space with single-level store, having a bytecode (as opposed to the real hardware's assembly language) be the public interface to the OS, primitive object-orientation baked into the OS, a relational database embedded in the operating system (although how deeply integrated it really is has never been that clear), etc. At the same time, it mixed that up with some far less innovative traditional IBM technologies – COBOL, RPG, EBCDIC, block mode terminals, SNA, etc – so even from the start it was some odd hybrid of cutting-edge and trailing-edge ideas.

But it has gradually been moving away from that odd mix towards becoming a much more mainstream system. While the old MI bytecode was a public interface, the new MI bytecode documentation is only available under NDA, and is basically the same as IBM's proprietary compiler intermediate representation on their other platforms. The single address space with single-level store still exists, but now programmers are encouraged to run their code in a more traditional separate-address-space architecture (teraspace) instead. With the move to POWER, an AIX compatibility subsystem (PASE) was introduced, and more and more code was ported from AIX – and more recently, open source packages from Linux. Java was ported to it, and programmers were encouraged to develop in Java – and while IBM's original AS/400 JVM, the "Classic JVM", integrated Java with the original AS/400 single address space world, at some point it was replaced by a port of IBM J9 from AIX running in a teraspace, and now it really behaves no differently than J9 on any other platform does.

It is a system which started out as a weird hybrid of the innovative and the traditional, but which has gradually been evolving to become more and more mainstream, without dropping the backward compatibility necessary to support that original weird hybrid.


Does that mean part of the compiler is security-critical?


Essentially, yes, it does. Classic AS/400 software runs all programs, belonging to all users, in a single address space. The security boundary between different programs/users is enforced at the bytecode level, not at the machine code level, so the bytecode-to-machine code translator is security-critical. That translator is a core part of the OS, whereas the compiler frontends, while shipped with the OS, are not so core. However, I believe the current version of the translator code was actually extracted out of IBM's standard compiler backend for other platforms.

And while this security is primarily software-based, it is partially hardware-accelerated – POWER has a tagged memory extension which gives you one tag bit for every 16 bytes of memory, and 400/i uses that tag bit to help detect (intentional or accidental) pointer forgery. It is worth noting that this is only a performance optimisation, it provides very little security in itself – the instructions to set the tag bit are (from the hardware viewpoint) unprivileged, so if you have the ability to execute arbitrary machine code, you can forge pointers to your heart's content. But the translator will not generate those tag-setting instructions for you, to do that you need to actually generate POWER machine code yourself – and there is no unprivileged interface which lets you run arbitrary machine code in the single address space.
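If it helps, here's a crude software model of that tag-bit check. The data structures are entirely invented; it just illustrates why any ordinary (untagged) store over a pointer invalidates it:

    /* Crude model of the tag-bit scheme: one tag bit per 16-byte slot.
       The translator's privileged path sets the bit when storing a
       genuine pointer; any ordinary store clears it, so forged
       pointers fail the check on use. All invented for illustration. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define SLOTS 1024
    static unsigned char memory[SLOTS * 16];
    static unsigned char tag[SLOTS];   /* 1 bit per 16 bytes, modeled as a byte */

    void store_pointer(int slot, uint64_t ptr) {   /* privileged path only */
        memcpy(&memory[slot * 16], &ptr, sizeof ptr);
        tag[slot] = 1;
    }

    void store_data(int slot, const void *buf, size_t n) {  /* any write */
        memcpy(&memory[slot * 16], buf, n > 16 ? 16 : n);
        tag[slot] = 0;                             /* tampering clears the tag */
    }

    int load_pointer(int slot, uint64_t *out) {
        if (!tag[slot]) return 0;                  /* forgery detected */
        memcpy(out, &memory[slot * 16], sizeof *out);
        return 1;
    }

    int main(void) {
        uint64_t p, forged = 0xBADC0DE;
        store_pointer(3, 0xDEADBEEF);
        printf("valid: %d\n", load_pointer(3, &p));  /* valid: 1 */
        store_data(3, &forged, sizeof forged);       /* overwrite the bytes */
        printf("valid: %d\n", load_pointer(3, &p));  /* valid: 0 */
        return 0;
    }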

By contrast, newer stuff on 400/i runs in separate address spaces (teraspaces), including recent versions of Java and the Unix compatibility subsystem (PASE). A Python script, for example, usually runs under PASE, so it uses the same address-space-per-process security model that is used on Unix(-like) systems.

(There is actually an old port of Python 2.7, called iSeriesPython, which runs Python inside the classic AS/400 environment (ILE) instead of the Unix compatibility environment (PASE); even that, I believe, still runs it inside a separate address space, aka teraspace, for performance reasons, though it probably would run fine inside the single address space instead. That whole approach to porting Python to IBM i has been abandoned, because it has few advantages over the PASE approach, and the PASE approach makes the porting work a lot easier.)


> The security boundary between different programs/users is enforced at the bytecode level, not at the machine code level, so the bytecode-to-machine code translator is security-critical.

Worth noting that this is exactly how sandboxing across WASM modules works nowadays. It's not exotic technology.


Indeed, WebAssembly tries to sell itself as exotic, as having discovered the magic bytecode, while all these ideas go back to the early 1960s.

Burroughs is another architecture with lots of similar ideas, still being sold nowadays by Unisys.


Thanks! Do you know why IBM did it this way, as opposed to implementing the security part fully in hardware? Or perhaps do you have some pointers or keywords I could use to search for?


I believe because the original AS/400 hardware used a proprietary CPU (Called IPMI ?) and proprietary RAM that had extra space for tag bits. When the AS/400 was moved to POWER processors, IBM added extensions for OS/400 called PowerPC AS[0], and uses some "magic" to store a tag bit in the ECC bits of standard RAM.

Also, "Fortress Rochester" by Frank Soltis (the follow up to Inside the AS/400) gives a detailed look at the differences compared to other systems.

[0] https://www.devever.net/~hl/ppcas


> I believe because the original AS/400 hardware used a proprietary CPU (Called IPMI ?)

Called IMPI (Internal MicroProgrammed Interface), the CPU architecture used by the CISC AS/400s was the same as that used by the earlier System/38. It is documented in this manual: http://bitsavers.org/pdf/ibm/system38/SC21-9037-3_IBM_System...

With the AS/400, IBM stopped publicly documenting these kinds of details, but most of that manual still applies to the CISC AS/400. And I believe IBM included a disassembler in the diagnostic tools shipped with the OS, which I assume would have known about any new instructions, so it would be possible (with sufficient effort) to work out what the changes in the CISC AS/400 were.

The IMPI instruction set has some resemblances to the IBM 370 mainframe instruction set, but is not compatible with it.


The compilers generate MI instructions. The translation to real PowerPC machine code is a separate process which happens in a part of the SLIC (effectively the kernel). The machine code of an executable is not accessible by users (even privileged ones) so it's not possible to patch a binary with your own instructions.


> The machine code of an executable is not accessible by users (even privileged ones) so it's not possible to patch a binary with your own instructions.

I’m not so convinced this is true. If you have access to SST, you can use the Display/Alter/Dump tool to make arbitrary changes to memory. A common use of this is to put your program object into system state so it can use system domain APIs or MI instructions even when the system is at security level 40 or above. Are you sure there is no way using SST to either alter the memory containing POWER machine code, or give a program you wrote authority to do that?


That is probably true. Perhaps I should have said "If you are able to patch the machine code in a program object, you probably have full access to the system anyway".


Definitely you can use SST to modify the translated POWER machine code. Here is an article giving an example of why you might want to do it, and how to do it: https://www.mcpressonline.com/programming/rpg/change-the-las...

Of course unprivileged users can’t do this. But if you have privilege to access SST (which means you basically have the equivalent of root in Unix) you can do absolutely anything if you know how.


Depending on the server generation, i is supported on bigger Power servers (E980).


Is AS/400 Open Source?


No, it is a closed-source IBM product. I've seen copies of its installation media on "warez"/"abandonware" sites before, but there is no emulator for it, so you can't run it unless you have real hardware. IBM has recently got quite keen on porting open source software to its Unix compatibility environment (PASE) – it is a way for IBM to make the platform feel fresher, more modern and more accessible to mainstream developers, for only a modest investment of resources.

It is POWER, so in principle there could be an emulator, but QEMU only supports running Linux and AIX, not IBM i. If someone wanted to, they could probably enhance QEMU to support running IBM i, but it would be a lot of work – IBM i exploits various obscure features, not just of the CPU but also of the firmware, which AIX doesn't, so understandably QEMU and friends don't implement those obscure features. Commercial emulators don't really exist either, because if you start selling a commercial emulator, it is almost guaranteed IBM will sue you.

I think an emulator for the classic CISC hardware would be really cool – it would probably make the most sense to start by emulating the System/38, then try to extend it to the CISC AS/400. Who would ever carry out such a gargantuan and pointless task? Probably some day, eventually, someone will do it. And it will be cool.

There were commercial third party emulators, of sorts, for the System/38, such as California Software Products' Baby/38 package. However, Baby/38 wasn't full system emulation, it was actually just an RPG compiler which supported the System/38 dialect, and had a limited emulation of OS services which RPG programs commonly used. They later followed up with a successor, Baby/400. I'm sure it was quite useful to people back in its heyday of the 1990s, trying to port their RPG business applications to Windows. Practical, but not very interesting from a technology perspective.


Well there is Hercules emulating the big mainframes!


Mainframe hardware and software have been documented to an extent that the AS/400 and later hardware never were. Extensive documentation was available for the S/38 (including source listings), but many details of the AS/400 have deliberately been kept under wraps.


I would imagine there's a lot of overlap between places that use mainframes and places where anything unauthorized, let alone games, will get you fired.


Definitely any company that could afford a (then) AS/400 wouldn't look kindly on using it for gaming.

That being said, some did, otherwise these little games wouldn't exist. At one of my previous jobs, the end of the year was boring: no clients, and projects would be on hold until January. One of those years I wasted a week making a crude reversi game, complete with a CPU player, and it was well received among the other programmers.

We tried to make a checkers game, but turns out a checkers CPU player is a lot harder than reversi for business programmers.


We still have two of them in production, running old heavily regulated applications from decades past.

If we loaded games on them, I doubt anyone would notice for a while.

Honestly though, I've had to take care of these dinosaurs for so long that I can't stand to look at them, much less log in, if I don't have to.


Is it possible to make a gui game?


It has always been predominantly a text-oriented platform but there has been some graphics support over the years.

IBM ported their mainframe graphics package GDDM to the AS/400. I don’t think many people used it but it could be used for drawing charts. It required a special version of the physical 5250 terminal with graphics support. Standard 5250 terminal emulators don’t support the graphics extensions; IBM had a 5250 emulator with graphics support for MS-DOS; apparently they never ported it to Windows or anything else.

GDDM saw more use on IBM mainframes. On mainframes it has 3D graphics support with PHIGS as the API. Several late 70s / early 80s high-end CAD systems (such as CADAM and CATIA) used GDDM, before moving off the mainframe on to Unix (and later, Windows and Linux). Don’t think the 3D part was ever ported to OS/400 though, I think GDDM/400 is 2D only.

At one point (years ago) IBM was pushing a technology called RAWT (Remote AWT) which is basically Java AWT over RMI. I don’t think it is supported any more but older versions of the 400/i JVM included the RAWT JAR.

400/i also ships with a (deprecated) Java/XML based GUI framework called Graphical Toolkit - however, it is actually intended to run on client machines (primarily Windows), not the host.

The AIX compatibility environment (PASE) includes X11 ported from AIX (and more recently IBM has made X.org X11 available as well, which is more modern than AIX X11). I believe the only local X server supported though is Xvnc; the operating system doesn’t support graphics cards.


AFAIK it can only do text games, but I don't know what improvements IBM has made in two decades. The other limitation is that the terminal buffers input until you press Enter, then sends the entire screen, so only turn-based games are practical.

The buffering is a neat feature. Your AS/400 could be extremely busy (sometimes the operator would prioritize a very urgent process), but you could do a screenful of code, then press Enter and go for a coffee.
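In Unix terms, the host side of a 5250 game loop looks roughly like this sketch (modeling a buffered screen submission as a line of input), which is why only turn-based designs work:

    /* Rough Unix-flavoured sketch of why 5250 games are turn-based:
       the host never sees individual keystrokes; it blocks until the
       terminal submits a whole buffered screen (modeled here as a line). */
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        char screen[256];
        printf("your move? ");
        while (fgets(screen, sizeof screen, stdin)) {  /* one whole "screen" */
            if (strncmp(screen, "quit", 4) == 0) break;
            /* compute the game's reply move here, then repaint the display */
            printf("host saw: %s", screen);
            printf("your move? ");
        }
        return 0;
    }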


This is why graybeards understand the difference between the [RETURN] key and the [ENTER] key.

Accountants, too.


I believe the AS/400 calls it Field Exit. One of the things I used to do for new users was remap Field Exit to Enter in the IBM 5250 emulation software. Then the Enter key acted like the user expected (on a real 5250 terminal there is a separate physical Enter key located where the right Ctrl key is on modern keyboards).

Explaining Field Exit vs Enter was something most users didn't get. However, they did understand Enter as like a submit button (like on web pages) and were fine selecting fields with the mouse.


It's been many years, but wasn't that a key that allowed you to move between input fields, just like Tab does with web pages? IIRC that key was managed purely client-side.


It is my understanding there is a third party port of X to the machine. I would guess you would have to do X11 forwarding, but theoretically it could work.


Nowadays IBM supplies X11 as part of PASE - AIX X11 is included and IBM offers X.org for install using yum. But the OS doesn’t support graphics cards so X11 is either using Xvnc or X11 forwarding.


Many moons ago, I was lucky enough to have root (QSECOFR) access and treat the box as my personal plaything – even while running prod stuff.

We could have put games on it, if we trusted the developer – well, maybe just on the dev server.

Windows NT 4 and Windows 3.11 were more interesting at the time, with AOL dial-up access.


You would be surprised. I worked for an AS/400 ISV and while the customers were all huge companies, the part of the company that ran an AS/400 was usually very small and had a lot of autonomy.


Agreed, that's my thought too.


Because they cost a ton of money to own and operate, and because they typically lacked graphical output, either in real time or at all.

But that didn't stop programmers from writing games for them; those just didn't enter distribution, and were typically just run locally.


I remember IBM mainframes being used as a backend for the MMO Taikodom by Hoplon entertainment. https://en.m.wikipedia.org/wiki/Taikodom


I worked for a company that had Tetris and an analog clock app installed on their AS/400.



