It's definitely not a dead language – a new Fortran front-end has recently been accepted into the LLVM project, joining Clang. So LLVM's only in-tree supported languages (excluding the various IRs) will be C, C++, Objective-C, and Fortran.
It had been scheduled for merge into the LLVM monorepo 2 days ago, but has been delayed pending some additional architecture review.
NumPy actually explicitly doesn't need Fortran. It will use Fortran LAPACK libraries for optimization if they're present, but it doesn't depend on Fortran at all.
Scipy is a different story.
The need for a Fortran compiler was a big part of the original rationale for the split between numpy and scipy when they replaced Numeric. Numpy was meant to be the lightweight/core version of things, and scipy was the full-fledged environment. (I think numpy was even called scipy-min, or scipy-core, or something along those lines for a bit.) A key differentiator for what went into numpy vs scipy was whether or not it needed a Fortran compiler. That's still true today -- numpy explicitly doesn't depend on anything Fortran-related, and any use of Fortran libraries is optional.
(I'm not authoritative on any of this, I'm just someone who's been doing scientific computing in Python (and Fortran) since the Numeric days.)
On a different note, modern fortran (e.g. F90 and above) is actually a really nice scientific software language. It's _still_ really hard to beat, and modern fortran is pretty maintainable and easy to read in addition to being performant. I've dealt with highly optimized fortran codebases and highly optimized C codebases for the same scientific tasks. The fortran ones were usually a lot easier to read, more maintainable, and were still faster. Fortran is not really a general purpose language, but if you're doing lots of number crunching on large arrays, it's damned nice.
Yep! Seconding what you said: A key aspect is that it’s very easy to write fast matrix math in Fortran. Vectorized operations are downright pythonic!
I was amazed how much more complex things were when moving to C++ (as opposed to C or C written in cpp files which you’ll find in academia and some books).
Of course, that ease is a hazard when all the modern (i.e. fun) sci-eng software development is in C++.
Do not grow complacent in school, ye numerical methods researchers! Write your stuff in C++. The job market awaits.
Sure - I am speaking broadly of the job market for computational programmers outside of academia and national labs, and I should say that I am in the USA. If you did a masters or PhD in CFD or finite element methods for structures (and I'd wager the same holds for E&M/Maxwell, computational physics, and numerical simulation generally, since the boundaries between these are fuzzy -- "as it goes in CFD and FEA for structures, so it goes in other numerical domains...?"), then it's quite possible you came from a school where your professor(s) wrote old-style C++ (or maybe straight-up procedural C), or used only the most basic features of Fortran 90, or worse, F77. The main thing is that while you did something noteworthy in your field to earn your degree in engineering simulation or computational physics, you used a language, or language variant, that is no longer common in _actively_ developed commercial projects. So outside of academia and national labs -- which are hard to get into if you are coming from, say, a state school with fewer connections, and did not intern somewhere because you had research obligations at school -- commercial hiring "from without", as far as I have seen, generally looks for lots of C++ computational/numerical experience.
Not such a big deal if you are a pure-researcher in, say, CFD, and not such a big deal in a national lab, where the big codebase may be FORTRAN anyway (caps to denote old school code bases). But it really limits your mobility not to be "C++ first". Lastly, if you land a job at a big numerical analysis company, you may be looking after legacy products forever if you are "just a Fortran person".
Well, this is obviously anecdotal. Sorry to state such bald opinions. But it's what I've seen, and it really would have been helpful to know as I was going through school. I wrote a lot of Fortran and Python. That's left me playing catch-up, and it could have been avoided with comparative ease back then, compared to my time budget now.
I'll second all of that. It's well worth actually learning C++ (and not C masquerading as C++) if you want to go into the broader scientific computing job market.
I certainly wish I had focused more on "real" C++. I have maintained some very large C++ codebases, but I still don't really know C++, just "C with classes". Thankfully, C++ is not my day-job anymore.
I'd also give a quiet nod to trying to get familiar with modern dev-ops tooling, e.g. docker, kubernetes, and interacting with at least some forms of cloud storage. Get used to the idea that a bucket is not actually a directory; it's a hierarchy of prefixes, and learn how to use that. Think of learning to write a dockerfile like learning to write a makefile: it's worth learning the basics of, and it's not hard for simple things. HPC work has traditionally been a lot of fairly custom clusters (Condor, anyone?) or problems that are best solved with a lot of RAM on one machine. However, things are moving towards commodity solutions (i.e. "the cloud") or at the very least, common software layers. You don't need to understand the ins and outs of managing a kubernetes cluster, but it's helpful to have used kubectl to spin up and debug a pod or two at some point.
However, I will say that there seem to be more Python-based jobs than C++ or Fortran jobs in the industry-based scientific computing world these days. Perhaps less so for fluid dynamics or finite element methods or other "simulation on a grid" problems, but there are a lot of python codebases out there in industry right now. I think it's very important to be comfortable working in higher level languages as well, and python is kinda becoming the lingua franca of a lot of sub-fields. For better or worse (I'm in the "better" camp), a lot of users are going to want a python library for what you're writing. It's best if you're used to "thinking in python" and can design a pythonic api.
Thanks, and it's nice to hear that about Python! I don't know why I am surprised, as the more forward thinking folks at (even such a place as) my work are having a Python api built for the stuff they look after presently (their bit does structural analysis). That should make design generation/variation/optimization a lot more fun.
I guess my fear was that all the Python jobs (that I'll find) are going to be "machine learning" -- but really that would just mean data munging to feed TensorFlow (or a similar lib) and post-processing. Total snooze fest, unless "post process" means something more like design generation/variation/optimization - and we are back to the api.
I don't work on it directly, but the code base where I work has lots of Fortran. I'm pretty sure they've been trying to migrate away from it since the early 90s, yet it's still there. Fortran has an amazing amount of staying power.
There are some optimizations Fortran compilers can make which C compilers cannot, though we can use the "restrict" keyword to allow C to make the same assumption Fortran makes about aliasing pointers (i.e. that there are none).
However, in order to support all existing C programs, I believe C compilers still can't reach the same level of optimization that Fortran achieves by default. It'd be nice to see this gap closed, in fact.
C++ can actually do better on some numeric code, since commonly used libraries -- Eigen, for example -- perform expression optimization at compile time via expression templates.
Fortran receives updates to the language spec with new features, and modern fortran is pretty different both syntactically and semantically from the 70s fortran you are probably thinking of. It is also extremely fast, often faster than C due to less pointer aliasing which lets compilers optimize loops more. (or so I'm told)
I work in computational fluid dynamics. Fortran is very much alive and being updated. The bigger issue, as I see it, is the adoption of newer features by current programmers. I know many people who program in modern Fortran (the 2003 and later standards with object-oriented features), but I also know some people who still program in Fortran 77 (either literally or in spirit). The second group tends to work on older "legacy" projects that may be decades old at this point. The maintainers usually decided that the cost of updating to a newer standard wasn't worth it ("if it ain't broke, don't fix it").
That said, I only write in modern Fortran when I use Fortran. The newer features make it much easier to write complex data structures, in particular.
But Latin changed quite a bit even after it stopped being spoken as a first language. There are courses in reading Medieval Latin because it isn't that close to Classical Latin.
In regard to Fortran, things are changing too. There's even a Fortran 2018 standard, which is fairly close to Fortran 2008, but would be almost unrecognizable to people who learned Fortran 77 or earlier.
Latin never died or stopped changing or stopped being spoken; it is still spoken by about a billion people today — they just stopped calling it “Latin” at some point.
Speakers of French, Spanish, Italian, Portuguese, Romanian, Catalan, and various other lesser-known languages and dialects.
All of these are directly descended from Latin in exactly the same way that today's English is descended from 1900s English, just over a longer period.
Is 1900-AD English "a dead language" because nobody who spoke it natively is still alive? What about 1800, 1700, or 1000? What year did English die and if it did indeed die, what are we speaking now?
The reasons for considering English circa 1000 and English circa 2020 to be "different stages of the same language" but Latin and French to be "different languages" are an arbitrary cultural distinction not rooted in science. The English of Beowulf is just as different from the modern variety as French is from Latin. And at every point from the introduction of Latin into France thousands of years ago until the present day, everybody could communicate perfectly easily with their own grandparents, and thought they were speaking the same language.
The natural conclusion to this argument is that everyone on earth is speaking the same language, because they all descended from the Mitochondrial Eve’s utterances 150000 years ago.
No, the natural conclusion is that whether two varieties are "the same language" or "different forms of the same language" is a cultural question, not a scientific one that can be answered objectively.
(Tangent -- certainly not all languages are descended from a hypothetical Proto-World, since we have examples of brand new languages forming: any creole languages, or Nicaraguan Sign Language. Whether most of the major language families are descended from a Proto-World is very much an open question, and probably one that will never be answered, since even if say, English and Navajo are genetically related, they diverged so long before we had written records that no trace remains.)
In any case, there's a meaningful difference between a language gradually changing over hundreds of years such that the newer varieties are quite different from the old ones, and a language truly dying, because it has exactly one remaining native speaker, and that person dies.
> No, the natural conclusion is that whether two varieties are "the same language" or "different forms of the same language" is a cultural question, not a scientific one that can be answered objectively.
It's a question that linguists (who are scientists) strive to answer in ways that are useful to studying, reasoning about, and explaining language. It's not much different than the species problem in biology. Everyone knows that speciation happens gradually, but scientists still propose ways of defining and explaining speciation to aid in scientific inquiry. The labels and dividing lines themselves are not empirically observed, of course, but that doesn't mean they're unscientific or outside the purview of scientific inquiry.
As far as I know, there generally aren’t papers that contain debates between multiple linguists. Scientific papers usually aren’t written as debate transcripts. But there are certainly many papers about dead languages, and about the lines between languages. You can look at pretty much any of the citations on Wikipedia articles about dead languages like Middle English, or about creole languages.
> Scientific papers usually aren’t written as debate transcripts.
Sure they are, although a debate wouldn't be contained in one paper written collaboratively by debating authors, as you seem to imagine -- it would play out over a series of papers, each of which cites previous papers and claims they're wrong.
For example, Timm 1989[1] argues that modern Breton is a VSO language, disputing the claims of Varin 1979[2], who claims it is SVO, which in turn disputes the traditional understanding that it is indeed VSO.
Or the famous paper of Haspelmath 2011[3] citing many other authors' proposed approaches to word segmentation and arguing that they're all wrong (i.e., that "word" is not a meaningfully defined concept in linguistics).
Where are the papers that you claim exist about the lines between languages? If this is really something mainstream linguists care about, you should be able to give examples in non-fringe journals.
I just checked the citations on the Wikipedia article for Middle English like you suggested, and found zero papers about whether Middle English should be considered "the same language" as modern English. Can you tell me which ones specifically you mean?
[1]: Timm, L. (1989). "Word Order in 20th Century Breton". Natural Language & Linguistic Theory, Vol. 7, No. 3, Special Celtic Issue, pp. 361-378 (https://sci-hub.tw/10.2307/4047723)
[2]: Varin, A. (1979). "VSO and SVO Order in Breton". Archivum Linguisticum 10, pp. 83-101
It's very much used in science for naming things, for studying old literature, and also in the Catholic Church. We also had Latin courses in high school. Also, many words in many European languages, including English, have Latin or Greek roots.
I studied (classical, not ecclesiastical) Latin for about 8 years, and I don't think it's accurate to claim that the Romance languages are in any meaningful sense "Latin".
The weaker (and more reasonable) claim is that learning Latin improves your ability to recognize words and (very basic) structures in its descendants.
Roman graffiti from Herculaneum and Pompeii in particular shed some light on early Vulgar Latin, suggesting that the regional homogeneity of the vulgar register was already breaking down by the time of the Plinian eruption. Given the large volume of uncovered graffiti, it is fairly easy to discern several trends, notably the loss of written diphthongs (æ->e, oe->e; comparably au->o) and the loss of final unaccented consonants and medial vowels.
http://ancientgraffiti.org/about/ is an excellent resource specifically for Herculaneum and Pompeii, but it also links to broader collections to which the project has contributed.
There are interesting aspects of graffiti throughout the Roman Empire. Children (or exceptionally short adults) practised writing on walls; some taller people's graffiti showed not just literacy but familiarity with Virgil and even decent command of Greek and other second languages. Conversely, numerous graffiti are supporting evidence for partial Latin literacy among speakers of other languages, even among celtic-language informal epigraphers in the west and northwest in the first decades CE. It seems likely that these influences "accented" day-to-day Latin, perhaps comparably to https://en.wikipedia.org/wiki/Singlish .
Latin and the Romance languages are indeed very different, but their similarities are much stronger than just vocabulary. The Romance languages have lost noun cases and gained prepositions, fixed word order, and definite articles, but they retain noun genders (though without neuter), remnants of the case system in pronouns ("je", "me", "moi"), many of the same verb tenses, and the subjunctive/indicative moods, to give a few examples.
Anyway, my point is that it's an error to say that Latin "stopped changing". Other languages have changed similar amounts: modern Americans can't understand Beowulf, modern Greeks can't understand Homer, and modern Chinese people can't understand Confucius, but nobody would claim that English or Greek or Chinese died and stopped changing. The fact that people call Latin, but not English, a "dead language" is purely due to the fact that the different stages of English all happen to be called "English".
In an alternate world where Latin was exactly the same as it was in the past, and Italian is exactly the same as it is now, but Latin had never spread outside of Italy, I suspect that we would today call Latin "Old Italian" (or perhaps we would call Italian "Modern Latin"), and nobody would be having this discussion, despite the scientific/linguistic facts being identical to what they are in our reality.
> The fact that people call Latin, but not English, a "dead language" is purely due to the fact that the different stages of English all happen to be called "English".
I don't think this is the case: the language that Beowulf is written in is normally referred to as Old English. Chaucer is Middle English. Shakespeare is Early Modern English. William Makepeace Thackeray is Victorian English.
IMO, it's reasonable to assert that each of these are "dead" in some meaningful way: even Early Modern and Victorian English, despite their intelligibility, are simply not spoken by any group of current-day English speakers.