I've actually had to do this with a couple of different Fortran projects when I was in college; I translated them to C for various reasons.
Maybe it's because it was specifically code written by scientists (i.e. somewhat brute force and very straightforward), but there really weren't many features that I can recall that didn't have a direct C counterpart, other than column-major ordering and arrays starting at 1.
Was I just blissfully unaware?
In theory AI could do a more idiomatic translation, but I think I would still prefer the janky but correct translation over the looks nice but probably subtly buggy AI one.
I tried to find a reference to how they did it; does anyone know?
It sounds like this approach of translating old code could help speed up teams that are looking at rewrites. I also have some old code that's in Kotlin that I'd like to move to something else. I had a bad NullPointerException take down my application after missing a breaking change in a Kotlin update.
do concurrent (i = 1:n)
  y(i) = y(i) + a*x(i)
enddo
and then let a compiler translate it into

std::transform(std::execution::par, x, x+n, y, y,
               [=](float x, float y){ return y + a*x; });

if C++ is required for some reason.

D has a module system, where you import a module and it pulls the global declarations out of it. To speed up this process, there's a compiler switch to output just the global declarations to another file, a .di file, which functions much like a .h file in C.
Then there came along ImportC, where a C lexer/parser was welded onto the D compiler logic.
aaaand it wasn't long before the switch was thrown to generate a .di file, and voila, the C code got translated to D!
This resulted in making it dramatically easier to use existing C headers and source files with D.
Using automatic tools - whether AI-based or transpilers - leaves that opportunity unused, and both approaches are likely to create some additional technical debt (errors in translation; odd, non-idiomatic constructs introduced by the automated process; etc.).
Fortran also hasn’t been faster than C++ for a very long time. This was demonstrable even back when I worked in HPC, and Fortran can be quite a bit worse for some useful types of performance engineering. The only reason we continued to use it was that a lot of legacy numerics code was written in it. Almost all new code was written in C++ because it was easier to maintain. I actually worked in Fortran before I worked in HPC, it was already dying in HPC by the time I got there. Nothing has changed in the interim. If anything, C++ is a much stronger language today than it was back then.
Also, it seems more likely to me that people who enjoy Fortran in HPC would switch to Chapel than to C++.
What makes you say so? See musicale's comment above. I have a hard time seeing C++ as easier to maintain, if we are just talking about the language. The ecosystem is a different story.
Thus, the availability of a compiler is but a small piece of the puzzle. The real problem is the spider web of dependencies on the mainframe environment, as the enterprise's business processes have become intertwined with the mainframe system over decades.
COBOL varies greatly, and the dialect depends on the mainframe. Chatbots will get quite confused about this. AI training data doesn't have much true COBOL; the internet is polluted with GnuCOBOL, which is a mishmash of a bunch of different dialects, minus all the things that make a mainframe a mainframe. So it will assume the COBOL code is more modern than it is. In terms of generating COBOL (e.g. for adding some debugging code to an existing system to analyze its behavior), it won't be able to stay within the 80-column limit due to tokenization; the output will just be riddled with syntax errors.
Data matters, and mainframes have a rather specific way they store and retrieve data. Just operating the mainframe to get the data out of an old system and into a new database in a workable & well-architected format will be its own chore.
Finally, the reason these systems haven't been ported is because requirements for how the system needs to work are tight. The COBOL holdouts are exclusively financial, government, and healthcare -- no one else is stuck on old mainframes for any other reason. The new system to replace it needs to exactly match the behavior of the old system, the developer has to know how to figure out the exact confines of the laws and regulations or they are not qualified to do the task of porting it. All an LLM will do is hallucinate a new set of requirements and ignore the old ones. And aside from just knowing the requirements on paper, you'd need to spend a good chunk of time just checking what the existing system is even doing, because there will be plenty of surprises in such an old system.
There are also modern compilers for IBM mainframes, including Go, C++, Java, PHP, ...
Also, outside the DevOps and CNCF application space, very few people bother with Go, especially not the kind of customers that buy IBM mainframes.
You can run COBOL on x86; there are at least two compilers.
f2c? But yeah, one level of abstraction sucks. We need around 10 to be satisfied.