This is accurate to what we've seen in the market.
If they were large enough to need compute 30-40+ years ago, they certainly have some mainframes running today. Think Walmart, United Airlines, JPMC, Geico, Coca Cola and so on.
Like Bloop, we’re also focused on modernization, but our approach extends beyond code to include the people behind these systems and to capture the institutional knowledge they hold.
Absolutely true, and the challenge is that a large portion of modernization projects fail (around 70%).
The main reasons are the loss of institutional knowledge, the difficulty of untangling 20–30-year-old code that few people still understand, and, most importantly, proving through testing that the new system is a true 1:1 functional replica of the original.
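To make the 1:1 point concrete, the usual tool here is characterization ("golden master") testing: replay recorded inputs through both systems and diff the outputs. A minimal sketch, with hypothetical run_legacy/run_new adapters standing in for the real systems:

    # Characterization testing in miniature: treat the legacy system as the
    # oracle and diff the rewrite against it on recorded inputs.
    # run_legacy / run_new are hypothetical stand-ins; in practice they'd
    # wrap a batch job or CICS transaction on one side, the new service on
    # the other.

    def run_legacy(txn: dict) -> dict:
        # Stand-in for the 30-year-old behavior, quirks included.
        fee = 25 if txn["amount"] > 10_000 else 10
        return {"fee": fee, "status": "OK"}

    def run_new(txn: dict) -> dict:
        # Stand-in for the rewrite; note the subtly different boundary.
        fee = 25 if txn["amount"] >= 10_000 else 10
        return {"fee": fee, "status": "OK"}

    def find_divergences(transactions):
        out = []
        for txn in transactions:
            old, new = run_legacy(txn), run_new(txn)
            if old != new:
                out.append((txn, old, new))
        return out

    if __name__ == "__main__":
        txns = [{"amount": a} for a in (9_999, 10_000, 10_001)]
        for txn, old, new in find_divergences(txns):
            print(f"divergence on {txn}: legacy={old} rewrite={new}")
        # -> divergence on {'amount': 10000}

The diff itself is trivial; the hard (and expensive) parts are building the adapters and collecting production inputs representative enough to exercise the edge cases.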
Modernization is an incredibly expensive process involving numerous SMEs, moving parts, and massive budgets. Leveraging AI creates an opportunity to make this process far more efficient and successful overall.
COBOL projects have millions of lines of code. Any prompt/reasoning will rapidly fill the context window of any model.
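Back-of-envelope, with both constants assumed rather than measured:

    lines_of_code = 5_000_000      # assumed mid-sized mainframe estate
    tokens_per_line = 8            # rough guess for tokenized COBOL
    context_window = 200_000       # a large present-day model context

    total_tokens = lines_of_code * tokens_per_line   # 40,000,000
    print(total_tokens / context_window)             # -> 200.0 full windows

Even a generous context window is roughly two orders of magnitude short, before you count copybooks, JCL, and runtime configuration.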
And you'd probably have better luck if your tokenizer understood COBOL keywords.
You probably have better luck implementing a data miner that slowly digests all the code and requirements into a proprietary information retrieval solution or ontology that can help answer questions...
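As a toy illustration of that digester idea (the regexes assume conventional uppercase COBOL source and static CALLs; a real estate needs an actual parser to handle dialects, dynamic CALLs, REDEFINES, and so on):

    import re
    from collections import defaultdict
    from pathlib import Path

    # One slow pass over the codebase, extracting CALL and COPY
    # relationships into an index you can query later -- retrieval
    # instead of prompting.
    CALL_RE = re.compile(r"\bCALL\s+'([A-Z0-9-]+)'")
    COPY_RE = re.compile(r"\bCOPY\s+([A-Z0-9-]+)")

    def build_index(source_dir: str) -> dict[str, dict[str, set[str]]]:
        index = defaultdict(lambda: {"calls": set(), "copybooks": set()})
        for path in Path(source_dir).rglob("*.cbl"):
            text = path.read_text(errors="ignore")
            program = path.stem.upper()
            index[program]["calls"].update(CALL_RE.findall(text))
            index[program]["copybooks"].update(COPY_RE.findall(text))
        return index

    def who_calls(index, callee: str) -> list[str]:
        # Answer one kind of question without re-reading any source.
        return [p for p, info in index.items() if callee in info["calls"]]

The point is that a question like "who calls SUBPROG?" gets answered from the index, not by pushing millions of lines of source through a prompt.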
What an engineer tells you can be inaccurate, incomplete, outdated, etc.
There may be other general-purpose tools out there that overlap in some ways, but our focus is on vertically specializing in the mainframe ecosystem and building AI-native tooling specifically for the problems in this space.
I'm not sure what "AI-native" would mean, but GT has LLM integrations, support for working with distributed systems, and a COBOL bridge that has been used in managing transitions of legacy systems.
I'm sure you'll manage to figure out the LLM integrations.
Edit: The Feenk folks also have a structured theory of why and how to do these things, one they've spent a lot of time and experience refining, visualising, and building tooling around.
I think it is a good idea for anyone working with large legacy systems to have such a theoretical foundation for how to communicate, approach problems and evaluate progress. Without it one is highly likely to make expensive decisions based on gut feeling and vague assumptions.
I’d also note that COBOL is only one layer of the stack.
The real complexity lies in also understanding z/OS (the mainframe operating system), CICS, JCL, and the rest of the mainframe runtime; it’s an entirely parallel computing universe compared to the x86 world.
Mechanical Orchard is a major player in this space, though their model is closer to professional services than a true end-to-end AI modernization platform.
I was offered a MUMPS job in the 1980s. I took one look at the code and very quickly concluded that life was too short for that.
Later I got into programming language theory, and took another look at MUMPS from that perspective. As a programming language, it’s truly terrible in ways that languages like COBOL and FORTRAN are not. Just as one example, “local” variables have an indefinite lifetime and are accessible throughout a process, i.e. they’re not scoped to functions. But you can dynamically hide/shadow and delete them. It would be hard to design a less tractable way of managing variables if you tried.
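For anyone who hasn't had the pleasure, here's a rough Python model of those semantics: one process-wide symbol table, with NEW pushing a temporary shadow and KILL deleting the name outright. A sketch of the behavior, not real MUMPS:

    # Rough model of MUMPS "locals": a single process-wide symbol table.
    # NEW stacks the current value until the routine quits; KILL deletes
    # the name for everyone. Nothing is scoped to a function.

    _MISSING = object()

    class Process:
        def __init__(self):
            self.symbols = {}        # the single process-wide table
            self.shadow_stack = []   # values saved by NEW

        def set(self, name, value):
            self.symbols[name] = value

        def new(self, name):
            # NEW: stash the current value (if any) until unwind.
            self.shadow_stack.append((name, self.symbols.pop(name, _MISSING)))

        def restore(self):
            # Runs when the routine that issued NEW quits.
            name, old = self.shadow_stack.pop()
            if old is _MISSING:
                self.symbols.pop(name, None)
            else:
                self.symbols[name] = old

        def kill(self, name):
            # KILL: removes the name for the whole process, whoever set it.
            self.symbols.pop(name, None)

    p = Process()
    p.set("X", 1)                 # routine A sets X
    p.new("X"); p.set("X", 99)    # routine B shadows it...
    p.kill("X")                   # ...and can delete it at will
    p.restore()                   # B quits: A's X silently reappears
    print(p.symbols)              # -> {'X': 1}

So to know which X a given statement is reading, you need the entire dynamic call history of the process, which is exactly what makes it so intractable.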
MUMPS’ value proposition was how it handled persistent data as a built-in part of the language. In that sense it was a precursor to systems like dBASE, which were eventually supplanted by SQL databases. MUMPS was a pretty good persistent data management system coupled with a truly terrible programming language.
Looks like it's already been pointed out. We're not running AI workloads on these systems themselves; IBM is already pursuing those initiatives (https://research.ibm.com/blog/spyre-for-z).
Our focus is different: we’re using AI to understand these 40+ year-old black box systems and capture the knowledge of the SMEs who built and maintain them before they retire. There simply aren’t enough engineers left who can fully understand or maintain these systems, let alone modernize them.
The COBOL talent shortage has been a challenge for decades now, and it’s only becoming more severe.