It's definitely not a dead language – a new Fortran front-end has recently been accepted into the LLVM project, joining Clang. So LLVM's only in-tree supported languages (excluding the various IRs) will be C, C++, Objective-C, and Fortran.
It had been scheduled for merge into the LLVM monorepo 2 days ago, but has been delayed pending some additional architecture review.
Numpy actually explicitly doesn't need fortran. It will use fortran lapack libraries for extra speed if they're present, but it has no hard dependency on fortran at all.
Scipy is a different story.
The need for a fortran compiler was a big part of the original rationale for the divide between numpy and scipy when they replaced Numeric. Numpy was meant to be the lightweight/core version of things, and scipy was the full-fledged environment. (I think numpy was even scipy-min, or scipy-core, or something along those lines for a bit.) A key differentiator for what went into numpy vs scipy was whether or not it needed a fortran compiler. That's still true today -- numpy explicitly doesn't depend on anything fortran related, and any fortran it can use is optional.
(I'm not authoritative on any of this, I'm just someone who's been doing scientific computing in python (and fortran) since the Numeric days.)
On a different note, modern fortran (e.g. F90 and above) is actually a really nice scientific software language. It's _still_ really hard to beat, and modern fortran is pretty maintainable and easy to read in addition to being performant. I've dealt with highly optimized fortran codebases and highly optimized C codebases for the same scientific tasks. The fortran ones were usually a lot easier to read, more maintainable, and were still faster. Fortran is not really a general purpose language, but if you're doing lots of number crunching on large arrays, it's damned nice.
Yep! Seconding what you said: A key aspect is that it’s very easy to write fast matrix math in Fortran. Vectorized operations are downright pythonic!
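For anyone who hasn't seen modern Fortran, whole-array expressions really do read like NumPy; a tiny (untested, but standard F90-style) sketch:

    program whole_array
      implicit none
      real :: a(1000), b(1000), c(1000)

      call random_number(a)
      call random_number(b)

      c = 2.0*a + b                ! whole-array expression, no explicit loop
      where (c > 1.0) c = 1.0      ! masked assignment
      print *, sum(c), maxval(c)   ! built-in reductions
    end program whole_array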
I was amazed how much more complex things were when moving to C++ (as opposed to C or C written in cpp files which you’ll find in academia and some books).
Of course, that ease is a hazard when all the modern (i.e. fun) sci-eng software development is in C++.
Do not grow complacent in school, ye numerical methods researchers! Write your stuff in C++. The job market awaits.
Sure - I am speaking broadly of the job market for computational programmers outside of academia and national labs, and I should add that I am in the USA. So if you did a masters or PhD in CFD, or in finite elements for structures (and I'd wager the same holds for E&M/Maxwell, computational physics, and numerical simulation generally, a catch-all because the boundaries are fuzzy, on the theory that "as it goes in CFD and FEA for structures, so it goes in other numerical domains"), then it's quite possible you came from a school where your professor(s) wrote old-style C++ (or maybe straight-up procedural C), or used only the most basic features of Fortran 90, or worse, F77. The main thing is that while you did something noteworthy in your field to get your degree in engineering simulation or computational physics, you used a language, or language variant, that is no longer common in _actively_ developed commercial projects. So outside of academia and national labs (which are hard to get into if you are coming from, say, a state school with fewer connections, and did not intern somewhere because of research obligations at school), commercial code bases, as far as I have seen, hire "from without" mostly for people with lots of C++ computational/numerical experience.
Not such a big deal if you are a pure-researcher in, say, CFD, and not such a big deal in a national lab, where the big codebase may be FORTRAN anyway (caps to denote old school code bases). But it really limits your mobility not to be "C++ first". Lastly, if you land a job at a big numerical analysis company, you may be looking after legacy products forever if you are "just a Fortran person".
Well, this is obviously anecdotal. Sorry to state such bald opinions. But it's what I've seen, and it really would have been helpful to know as I was going through school. I wrote a lot of Fortran and Python. It's left me playing catch-up, which could have been avoided back then with relative ease compared to my time budget now.
I'll second all of that. It's well worth actually learning C++ (and not C masquerading as C++) if you want to go into the broader scientific computing job market.
I certainly wish I had focused more on "real" C++. I have maintained some very large C++ codebases, but I still don't really know C++, just "C with classes". Thankfully, C++ is not my day-job anymore.
I'd also give a quiet nod to trying to get familiar with modern dev-ops tooling, e.g. docker, kubernetes, and interacting with at least some forms of cloud storage. Get used to the idea that a bucket is not actually a directory, it's a hierarchy of prefixes, and learn how to use that. Think of learning to write a dockerfile like learning to write a makefile: it's worth learning the basics, and it's not hard for simple things. HPC work has traditionally been a lot of fairly custom clusters (Condor, anyone?) or problems that are best solved with a lot of RAM on one machine. However, things are moving towards commodity solutions (i.e. "the cloud") or at the very least common software layers. You don't need to understand the ins and outs of managing a kubernetes cluster, but it's helpful to have used kubectl to spin up and debug a pod or two at some point.
However, I will say that there seem to be more Python-based jobs than C++ or Fortran jobs in the industry-based scientific computing world these days. Perhaps less so for fluid dynamics or finite element methods or other "simulation on a grid" problems, but there are a lot of python codebases out there in industry right now. I think it's very important to be comfortable working in higher level languages as well, and python is kinda becoming the lingua franca of a lot of sub-fields. For better or worse (I'm in the "better" camp), a lot of users are going to want a python library for what you're writing. It's best if you're used to "thinking in python" and can design a pythonic api.
Thanks, and it's nice to hear that about Python! I don't know why I am surprised, as the more forward thinking folks at (even such a place as) my work are having a Python api built for the stuff they look after presently (their bit does structural analysis). That should make design generation/variation/optimization a lot more fun.
I guess my fear was that all Python jobs (that I'll find) are going to be "machine learning" -- but really that would just mean data munging to feed TensorFlow (or a similar lib) and post-processing. Total snooze fest unless "post-processing" means something more like design generation/variation/optimization - and we are back to the api.
I don't work on it directly, but the code base where I work has lots of Fortran. I'm pretty sure they've been trying to migrate away from it since the early 90s, yet it's still there. Fortran has an amazing amount of staying power.
There are some optimizations Fortran compilers can make which C compilers cannot, though we can use the "restrict" keyword to allow C to make the same assumption Fortran makes about aliasing pointers (i.e. that there are none).
However, in order to support all existing C programs, I believe C compilers still can't reach the same level of optimization that Fortran compilers do. It'd be nice to see this gap closed, in fact.
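For context, that no-aliasing assumption applies automatically to Fortran dummy arguments; a minimal sketch of the kind of loop this helps:

    subroutine axpy(n, a, x, y)
      ! The standard lets the compiler assume x and y do not overlap,
      ! which is roughly what 'restrict' has to promise explicitly in C,
      ! so the assignment below can be vectorized without alias checks.
      implicit none
      integer, intent(in)    :: n
      real,    intent(in)    :: a, x(n)
      real,    intent(inout) :: y(n)

      y = y + a*x
    end subroutine axpy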
C++ can actually do better than C with some numeric code, since commonly used libraries such as Eigen perform expression optimization at compile time.
Fortran receives updates to the language spec with new features, and modern fortran is pretty different both syntactically and semantically from the 70s fortran you are probably thinking of. It is also extremely fast, often faster than C due to less pointer aliasing which lets compilers optimize loops more. (or so I'm told)
I work in computational fluid dynamics. Fortran is very much alive and being updated. The bigger issue, as I see it, is the adoption of newer features by current programmers. I know many people who program in modern Fortran (the 2003 and later standards with object-oriented features), but I also know some people who still program in Fortran 77 (either literally or in spirit). The second group tends to work on older "legacy" projects that may be decades old at this point. The maintainers usually decided that the cost of updating to a newer standard wasn't worth it ("if it ain't broke, don't fix it").
That said, I only write in modern Fortran when I use Fortran. The newer features make it much easier to write complex data structures, in particular.
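To give a flavour of that: a derived type with a type-bound procedure (2003-style) is roughly this much code (illustrative sketch, names made up):

    module particle_mod
      implicit none

      type :: particle
         real :: pos(3), vel(3)
      contains
         procedure :: advance          ! type-bound procedure (Fortran 2003)
      end type particle

    contains

      subroutine advance(self, dt)
         class(particle), intent(inout) :: self
         real, intent(in) :: dt
         self%pos = self%pos + dt*self%vel
      end subroutine advance

    end module particle_mod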
But Latin changed quite a bit even after it stopped being spoken as a first language. There are courses in reading Medieval Latin because it isn't that close to Classical Latin.
In regard to Fortran, things are changing too. There's even a Fortran 2018 standard, which is fairly close to Fortran 2008, but would be almost unrecognizable to people who learned Fortran 77 or earlier.
Latin never died or stopped changing or stopped being spoken; it is still spoken by about a billion people today — they just stopped calling it “Latin” at some point.
Speakers of French, Spanish, Italian, Portuguese, Romanian, Catalan, and various other lesser-known languages and dialects.
All of these are directly descended from Latin in exactly the same way that today's English is descended from 1900s English, just over a longer period.
Is 1900-AD English "a dead language" because nobody who spoke it natively is still alive? What about 1800, 1700, or 1000? What year did English die and if it did indeed die, what are we speaking now?
The reasons for considering English circa 1000 and English circa 2020 to be "different stages of the same language" but Latin and French to be "different languages" are an arbitrary cultural distinction not rooted in science. The English of Beowulf is just as different from the modern variety as French is from Latin. And at every point from the introduction of Latin into France thousands of years ago until the present day, everybody could communicate perfectly easily with their own grandparents, and thought they were speaking the same language.
The natural conclusion to this argument is that everyone on earth is speaking the same language, because they all descended from the Mitochondrial Eve’s utterances 150000 years ago.
No, the natural conclusion is that whether two varieties are "the same language" or "different forms of the same language" is a cultural question, not a scientific one that can be answered objectively.
(Tangent -- certainly not all languages are descended from a hypothetical Proto-World, since we have examples of brand new languages forming: any creole languages, or Nicaraguan Sign Language. Whether most of the major language families are descended from a Proto-World is very much an open question, and probably one that will never be answered, since even if say, English and Navajo are genetically related, they diverged so long before we had written records that no trace remains.)
In any case, there's a meaningful difference between a language gradually changing over hundreds of years such that the newer varieties are quite different from the old ones, and a language truly dying, because it has exactly one remaining native speaker, and that person dies.
> No, the natural conclusion is that whether two varieties are "the same language" or "different forms of the same language" is a cultural question, not a scientific one that can be answered objectively.
It's a question that linguists (who are scientists) strive to answer in ways that are useful to studying, reasoning about, and explaining language. It's not much different than the species problem in biology. Everyone knows that speciation happens gradually, but scientists still propose ways of defining and explaining speciation to aid in scientific inquiry. The labels and dividing lines themselves are not empirically observed, of course, but that doesn't mean they're unscientific or outside the purview of scientific inquiry.
As far as I know, there generally aren't papers that contain debates between multiple linguists. Scientific papers usually aren't written as debate transcripts. But there are certainly many papers about dead languages, and about the lines between languages. You can look at pretty much any of the citations on Wikipedia articles about dead languages like Middle English, or creole languages.
> Scientific papers usually aren’t written as debate transcripts.
Sure they are, although a debate wouldn't be contained in one paper written collaboratively by debating authors, as you seem to imagine -- it would play out over a series of papers, each of which cites previous papers and claims they're wrong.
For example, Timm 1989[1] argues that modern Breton is a VSO language, disputing the claims of Varin 1979[2], who claims it is SVO, which in turn disputes the traditional understanding that it is indeed VSO.
Or the famous paper of Haspelmath 2011[3] citing many other authors' proposed approaches to word segmentation and arguing that they're all wrong (i.e., that "word" is not a meaningfully defined concept in linguistics).
Where are the papers that you claim exist about the lines between languages? If this is really something mainstream linguists care about, you should be able to give examples in non-fringe journals.
I just checked the citations on the Wikipedia article for Middle English like you suggested, and found zero papers about whether Middle English should be considered "the same language" as modern English. Can you tell me which ones specifically you mean?
[1]: Timm, L. (1989). "Word Order in 20th Century Breton". Natural Language & Linguistic Theory, Vol. 7, No. 3, Special Celtic Issue, pp. 361-378 (https://sci-hub.tw/10.2307/4047723)
[2]: Varin, A. (1979). "VSO and SVO Order in Breton". Archivum Linguisticum 10, pp. 83-101
[3]: Haspelmath, M. (2011). "The indeterminacy of word segmentation and the nature of morphology and syntax". Folia Linguistica 45(1)
It's very much used in science for naming things, for studying old literature, and also in the Catholic Church. We also had Latin courses in high school. Also, many words in many European languages, including English, have Latin or Greek roots.
I studied (classical, not ecclesiastical) Latin for about 8 years, and I don't think it's accurate to claim that the Romance languages are in any meaningful sense "Latin".
The weaker (and more reasonable) claim is that learning Latin improves your ability to recognize words and (very basic) structures in its descendants.
Roman graffiti from Herculaneum and Pompeii in particular shed some light on early vulgar Latin, suggesting that the regional homogeneity of the vulgar register was already breaking down by the time of the Plinian eruption. Given the large volume of uncovered graffiti, it is fairly easy to discern several trends, notably the loss of written diphthongs (æ->e, oe->e; comparably au->o) and the loss of final unaccented consonants and medial vowels.
http://ancientgraffiti.org/about/ is an excellent resource specifically for Herculaneum and Pompeii, but it also links to broader collections to which the project has contributed.
There are interesting aspects of graffiti throughout the Roman Empire. Children (or exceptionally short adults) practised writing on walls; some taller people's graffiti showed not just literacy but familiarity with Virgil and even decent command of Greek and other second languages. Conversely, numerous graffiti are supporting evidence for partial Latin literacy among speakers of other languages, even among celtic-language informal epigraphers in the west and northwest in the first decades CE. It seems likely that these influences "accented" day-to-day Latin, perhaps comparably to https://en.wikipedia.org/wiki/Singlish .
Latin and the Romance languages are indeed very different, but their similarities are much stronger than just vocabulary. The Romance languages have lost noun cases and gained prepositions, fixed word order, and definite articles, but they retain noun genders (though without neuter), remnants of the case system in pronouns ("je", "me", "moi"), many of the same verb tenses, and the subjunctive/indicative moods, to give a few examples.
Anyway, my point is that it's an error to say that Latin "stopped changing". Other languages have changed similar amounts: modern Americans can't understand Beowulf, modern Greeks can't understand Homer, and modern Chinese people can't understand Confucius, but nobody would claim that English or Greek or Chinese died and stopped changing. The fact that people call Latin, but not English, a "dead language" is purely due to the fact that the different stages of English all happen to be called "English".
In an alternate world where Latin was exactly the same as it was in the past, and Italian is exactly the same as it is now, but Latin had never spread outside of Italy, I suspect that we would today call Latin "Old Italian" (or perhaps we would call Italian "Modern Latin"), and nobody would be having this discussion, despite the scientific/linguistic facts being identical to what they are in our reality.
> The fact that people call Latin, but not English, a "dead language" is purely due to the fact that the different stages of English all happen to be called "English".
I don't think this is the case: the language that Beowulf is written in is normally referred to as Old English. Chaucer is Middle English. Shakespeare is Early Modern English. William Makepeace Thackeray is Victorian English.
IMO, it's reasonable to assert that each of these are "dead" in some meaningful way: even Early Modern and Victorian English, despite their intelligibility, are simply not spoken by any group of current-day English speakers.
I held out hope that maybe there was some sort of special treatment for that parameter. But then I looked at the source and... nope, it's just slapped on the end of the command.
Generating an SQL string with any user input in it is not accepted best practice, even if that were a sufficient way to make it safe (I suspect it would also disallow a parameter string containing a legitimate, escaped ' character).
Generally, it is recommended that you create a prepared statement entirely from static SQL string(s) (no user input) and then bind parameters into it, so that there is no possibility of any user input being parsed as SQL.
This is quite frankly nonsense. Once you have released the software to the wider world, it is your responsibility to make sure it is decent. Just because you have released it for free doesn't suddenly mean you can avoid criticism.
SQL injection is such a basic thing to check for that there is really no excuse.
Fortran actually isn't a static language. It's evolved over time. The last significant revision according to wikipedia is just over a year old. The beauty and horror of it probably depends which era of it you're looking at.
Fortran got its array syntax 29 years ago, and there still aren't many languages in which you can write array-manipulating code the way you can in Fortran. Maybe just Julia and NumPy?
That's what I do in C++ with Eigen or boost::ublas or any other library. Operator overloading is not a feature limited to Fortran.
Fortran gets its performance from its prohibition of aliasing, which is something the C++ standard permits. One could enable the same performance boosts with a non-standard compiler flag, but this breaks a lot of codebases, including the Linux kernel AFAIK.
Aliasing is a pox upon the world. There's a lot of work gone into stuff like UBSan for C++. I think aliasing should be added to stuff that it checks for. Dunno how much overhead that would be though.
Taking rows and columns is still nice in Eigen. For example taking the second column of a matrix is m.col(1) (Eigen, 0-based indexing) and m(:,2) (Fortran, 1-based indexing unless specified something else).
But more generic array slicing with strides seems to get nasty in Eigen compared to Fortran's section notation.
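For comparison, the Fortran side of strided slicing is just section notation; something like this (untested sketch):

    program slices
      implicit none
      real :: m(8,8)

      m = 0.0
      m(1:8:2, 3:6) = 1.0   ! every other row, columns 3 to 6
      print *, m(:, 2)      ! the whole second column
      print *, m(5, ::2)    ! every other element of row 5
    end program slices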
Interesting. Does it result in a compile error if you try arrays of different sizes? (If it's just a run time error, I don't see how it's different to implementing this with operator overloading in any language that supports it).
IIRC, most people who use Fortran use it because there are extremely specialized physics libraries (or astronomy, or chemistry, and so on) that were probably written three to four decades ago by an extremely intelligent PhD student who lacked a proper programming background.
Which suggests that the biggest source of horror would be that most code you'll come across was written by a non-programmer who was intelligent enough to translate an extremely complicated and niche problem domain into very clever and completely un-idiomatic code, and who likely was unable to read their own library mere months after graduating, because most PhDs stop fully understanding their own thesis shortly after finishing it.
"Idiomatic" depends on language and domain and era and purpose. Consider the dgemm method in BLAS (double general matrix multiply, powers all sorts of modern code like numpy). It's such a workhorse, used by everything under the sun. But it does one thing. It multiplies two matrices of doubles. You're going to want to optimize the hell out of it and ensure it's correct, then never revisit it. It's been around (in the exact same form as far as I can tell) nearly as long as I've been alive. The right, idiomatic code for that is going to look different from the right, idiomatic code for something at the business logic level that changes every 6 months. It does it a disservice to shit on it like you are.
> You're going to want to optimize the hell out of it and ensure it's correct, then never revisit it.
Balderdash. Claptrap. Codswallop. Poppycock. What if you want to revisit it to make it run on parallel processors? Optimize it for a new CUDA architecture or caching scheme? A new instruction set that handles sparse matrices better? Code lives forever, at every level.
For this kind of linear algebra, you're going to have a different routine for a sparse matrix (dcoomm) or a symmetric one (dsymm) or a triangular one (dtrmm) or symmetric banded matrix vector product (dsbmv) or whatever. Hopefully those are separate functions from your dense one (dgemm), which is what I'm talking about. You'll also have a different function for CUDA (which exists in cuBLAS) than for multi-core (in ScaLAPACK? PLASMA?) than for single-CPU (in BLAS), which is what I'm talking about. General matrix multiply in this sense isn't "all the different matrix multiplies." It's a function with a very specific purpose, and yeah you don't really need to revisit it every decade.
For sure, there are a few cases like the one you mention (I personally work with this kind of code on a daily basis), but that's not the reason most people use Fortran. There are many libraries with a level of robustness and performance that you won't find anywhere else, there are highly optimized compilers, and it is the standard language in many fields.
It's also quite safe for non-programmers. Those non-programmers write complex algorithms, but the code itself is not "clever and completely un-idiomatic". Most often, it is quite naive, just a series of formulas one after the other. You will find very long subroutines, bad names, some obvious inefficiencies, and maybe some spaghetti, but nothing to be afraid of.
My understanding from supercomputing center folks is that, if you just want to run these half a dozen equations for a bajillion nodes in parallel as fast as possible, it is still the best language for the job. Not speaking from personal experience, though.
> the 'non-programmer' you seem to not think much of
If what I wrote led to that as your main take-away, then I apologize for expressing myself badly, because nothing could be further from the truth! I studied physics myself at one point, and base this on conversations with friends who (unlike me) didn't drop out. Specifically, those who were asked by their professor to update old libraries during their PhD thesis.
I think one of the reasons it's still frequently used in some parts of academia is that a grad student can figure it out in a few days of self-study and move on to doing their actual research from there. Whereas if you want them to work with a large C++ program, getting them familiar enough with the language to be useful can take half a semester.
I’m a professional FORTRAN developer. I would say even FORTRAN 90 has a very rich and capable feature set. The problem is not the language, it’s the other developers :). A lot of codes have been developed by people used to FORTRAN 77, so even if they’re using new features the codes are poorly designed.
How is it not? Someone said something about fighter jets being written in Fortran. Someone corrected them saying they aren't, but they are modeled using Fortran. I tried to add to the conversation by saying what they were written in assuming that would also be of interest to the person who made the original comment. I'm sorry this has perturbed you.
There is likely a lot more railway traffic control software written in Simula ( "The basic simulation kernel, the TTS module based on the SIMON/TTS system, is today implemented in Simula and running on a UNIX platform." https://pdfs.semanticscholar.org/ed7e/d90fbc57b7ee0473d41edb... [2008] -- last listed author works for and the work is funded by Banverket, the Swedish National Rail Administration ) than in Ruby, and yet it's still rubyonrails.org.
Amusingly, a short while ago I HN-commented on the eruption of Vesuvius that destroyed Pompeii and nearby settlements. Vesuvius also destroyed the funicular that the song advertised, in a later eruption.
I honestly am not too sure why there is so much derision regarding Fortran (ageism?). A lot of people don't seem to realize the importance this language still has in scientific computing.
The opposite, in fact. My professor talked about the amazing things he did with Fortran. Then I had to learn it in order to find the bugs. I'd never use it by choice.
I know of 4 major languages created in the 1950's: Lisp, Algol, Fortran, and Cobol. Lisp and Algol were brilliantly designed, years ahead of their time, and have inspired nearly every programming language since then. Fortran and Cobol not only look like evolutionary dead ends in hindsight, but they weren't pleasant to use even when they were popular.
There's a reason we still see Lisp posts on HN almost every day, and very rarely Fortran posts.
In my work life, I have a problem where I have to interface with a stringly typed API. For instance, characters 20-25 in a 400 byte string are a floating point number. No it's not a good API but I don't make decisions. So you need a six character float. "12" is wrong because it's two characters, also "12<four spaces>" is invalid. "1.2e01" is wrong compared to "12.000" because it has (many) more significant digits. ".00001" is wrong compared to "1.2e-5".
I don't know how to solve this problem cleanly. printf doesn't cut it: its width specifier is only a minimum, so it can't guarantee an exact number of characters.
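Funnily enough, fixed-width fields are one thing Fortran's formatted I/O is genuinely good at: an edit descriptor fixes the field width exactly (overflow becomes asterisks rather than a wider field). A rough sketch, assuming F6.3 fits your magnitudes:

    program fixed_width
      implicit none
      character(len=6) :: field
      real :: x

      x = 12.0
      write(field, '(F6.3)') x   ! always exactly 6 characters: "12.000"
      print '(a)', field

      ! Values that don't fit F6.3 come out as "******", so a caller
      ! would pick a different 6-wide descriptor (e.g. an exponent
      ! form) based on the magnitude.
    end program fixed_width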
The latest version of Visual COBOL[1] supports .NET Core. And I believe you can extend C# classes in COBOL. So you might just be able to write an entire ASP.NET Core application in COBOL.
Whether you'd want to is another question entirely. Might make for a fun weekend project. :)
Google has this thing called “Testing on the Toilet” where they post factoids about Google’s codebase and best practices. I read one and it said FORTRAN is a highly secure language (or there haven’t been any security vulnerabilities found in the language in a while).
Devil’s advocate for not writing FORTRAN off completely at first sight.
Apple used to run ad campaigns about how Macs were immune to viruses and therefore highly secure. Turns out enough people just weren't using Mac, and viruses started popping up once it went mainstream.
Have there been Mac viruses that actually spread user-to-user in the wild? The articles I've seen talk about malware, which is different. Also the occasional proof-of-concept.
Apple once advertised "Macs don't get PC viruses", which was true, but misleading. Microsoft often claimed that the reason there weren't Mac viruses was that it wasn't as attractive a target. They kept this trope going for a very long time. They said it for so long, that one would think hackers would want to crack MacOS just as a way to stand out from the crowd. The higher prevalence of PCs was never enough to explain the orders of magnitude difference in exploits. Macs may not have been super secure, but they were way more secure than Windows, and the number of viruses for Macs never "popped" in any way comparable.
Very much could be! Then again, I doubt Google would want to intentionally spin technical content for its own engineers, that might have implications for its own codebase, culture, and other deliverables.
I'm not jumping on the Fortran bandwagon until it runs natively in the browser! No transpiling to Javascript or any other such monkey business either. I want a browser with true Fortran support built right in.
For a while the site returned 502 Bad Gateway. I literally thought that was the whole point and upvoted for appreciative sarcasm... until the link started actually working.
Thanks for a shout-out to Simply Fortran! I'm the primary developer behind it (had to finally create an account to reply...), and we're always trying to improve the development environment.
It seems the home page is down (sub-pages are working). I coincidentally visited this site several months ago and it wasn't working then either.
Wow. This sounds intense and interesting. I will stick to C++, but it is great to see powerful languages up and ready for new generations of hardware, software, and people.
One nice thing about Fortran's evolution is that when you lag behind the bleeding edge, you make fewer mistakes. If you like C++ that's great; you might notice that not everyone else agrees that it's the future.
Funny that you should mention it. To me it looks like web development is mostly driven by fashion rather than practical choices. I also suspect that unless something radically better comes out (and I do not consider Rust, Go, and the like to be it), C and C++ will stay very active, including in new development. I actually do hope that something better appears, but so far there is no real candidate.
I saw the reference to Ubuntu 14 and thought "what the?" Then I noticed the code hasn't been updated in 3 years. One question answered, but the bigger one remains: why do we (meaning: me, I) gauge a project by its recency? I've written plenty of LOC that are structurally sound that I've not needed to touch up or improve upon... And for a FORTRAN project you can be pretty sure updates will be glacial, yet the code is still very viable.
I actually started softly clapping after reading the title without even realizing it. I too wish to be able to make a framework from a language like fortran; one day I might be good enough or motivated enough.
I have a Fortran codebase which takes about a week to finish a simulation on 10 cores. Rewriting this code in Python and running would take about 50 years to finish the same simulation in serial mode, and at best 5 years if run in parallel on 10 cores.
Thanks for posting this! I was about to say "it could be worse: it could be COBOL". Especially since every couple of months someone posts an enthusiastic article about COBOL and people get intrigued by it and I feel it's my God-mandated duty to warn them about its mind-numbing unholiness.
Drifting further from god is a metaphor for moving closer to hell. He is implying that developing for the web using FORTRAN is moving us closer to existing in hell.
It's a meme, not a metaphor. Though you can argue that a meme is at its core a metaphor, in this case you're better off trying to understand this specific meme, it's quite funny.
IIRC, FORTRAN is used heavily in the LINPACK/LAPACK libraries which are further used in higher level libraries such as numpy. So it’s likely true that it still sees use in control software running trains, banks, air traffic control, etc.
It is still heavily used for numerical code in various places, especially aerospace (to avoid reverifying code with new languages or just to ensure new work can be accurately compared with old work)
There's even a CUDA compiler.
As I understand it, lack of pointers makes it possible for fortran compilers to parallelize code quite well.
Multidimensional arrays are first-class in Fortran, so there's no need to use pointers there. There are some rules around pointers and subarrays that I expect make any possible aliasing undefined behaviour.
It has things called pointers, but they are significantly different than pointers in C. They can't return memory addresses and so there's no pointer arithmetic or other difficult to optimize actions.
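Right -- roughly speaking, a Fortran pointer is a typed alias that can only be associated with something declared as a target (or allocated), so there's no address arithmetic for the optimizer to worry about. A small sketch:

    program ptr_demo
      implicit none
      real, target  :: grid(4,4)
      real, pointer :: row(:)

      grid = 0.0
      row => grid(2,:)     ! an alias to an array section, not a raw address
      row = 1.0            ! writing through the alias updates grid itself

      print *, grid(2,:)   ! 1.0 1.0 1.0 1.0
    end program ptr_demo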
And after writing a significant amount of fortran 90 code I didn't even know the feature existed...
Very different use-cases. All the other languages you've mentioned are great for statistical analysis and plotting but are quite slow - for R and Python by virtue of being interpreted rather than compiled. I don't know about the others.
Fortran is often used in academia for intensive number crunching and parallelized code to run on computing clusters with 10s or 100s of cores. I work with some people who run hydrodynamics sims using a code written in FORTRAN. Their workflow is to pull data from these sims into Python scripts for subsequent analysis. It's just a different tool for a different job.
People talk about C++ and FORTRAN being competitors, and there the difference might have something to do with familiarity.
As a computational scientist who has been using C, Fortran, IDL, MATLAB, Mathematica, Pascal, Python, and R for about two decades, I can tell you that the unique advantage of (modern) Fortran (2008/2018) over the other languages mentioned here is its great flexibility and high performance for numerical computing and basically any mathematical problem. It has a coding and syntax flexibility comparable to MATLAB and Python, but the runtime speed of C, and in many cases beyond C. It is the only high-performance programming language that has a native built-in mechanism for one-sided parallel programming on shared and distributed architectures, which can scale your code from your laptop to the largest supercomputers in the world (Coarray Fortran parallelism). Check out the OpenCoarrays software on GitHub and Intel's amazing Coarray-enabled Fortran compiler. Also, Fortran and C are the only two languages for which the MPI standard is written. OpenMP parallelism also has its roots in the Fortran language. Nvidia is also working on a new F18 compiler for Fortran which enables GPU parallel computing via the same Coarray Fortran syntax and rules: one parallel programming API for all architectures and platforms. No other programming language has this capability that I am aware of.
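To make the Coarray point concrete, here's a minimal sketch of the one-sided style it refers to (compiles with gfortran -fcoarray=single for a single image, or with OpenCoarrays/Intel for real parallel runs):

    program caf_sum
      implicit none
      integer :: partial[*]          ! coarray: one instance per image
      integer :: i, total

      partial = this_image()         ! each image contributes its own index
      sync all                       ! make every image's value visible

      if (this_image() == 1) then
         total = 0
         do i = 1, num_images()
            total = total + partial[i]   ! one-sided get from image i
         end do
         print *, 'sum over', num_images(), 'images =', total
      end if
    end program caf_sum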
Typically, simulations that I run in MATLAB, Python or R, take 100 to 500 times longer than the equivalents that I write in Fortran. I am sure you can achieve a similar performance with C as well. But the development cost in C is much higher for numerical computing than in Fortran, given Fortran's native array-based syntax, parallelism features, optimization hints to the compiler, and the new high-level string and memory manipulation tools that it has.
That said, every language has its usage and place in the world of programming. In my case, all of the post-processing analyses of my Fortran/C simulation results are done in either Python or MATLAB, and sometimes in R. They complement each other rather than being rivals to each other.
Add Julia to the list. But if you're thinking of using Fortran, then Excel is likely the wrong tool for the job. Fortran is usually used for heavy duty number crunching and supercomputing simulations. Or in this case, web sites???