Scala has a lot of interesting concepts in it for sure! The problem is that the JVM is a deal-breaker, either because of its complexity or because of the associated licensing risk. Clojure has the same problem, for what it's worth.
I think the underlying problem is intrinsic to that deal-breaker: memory management in a parallel environment is a tough nut to crack, and leaning on the JVM punts on the issue.
There's no reason JavaScript couldn't transparently implement parallel map/filter/etc., other than the fact that it's really hard to do. On the other hand, a lot of what JavaScript engines do today is also really hard, so maybe one day it'll happen.
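For contrast, here's roughly what the opt-in (not transparent) version already looks like at the library level, sketched with C++17's parallel algorithms; this is illustrative, not a claim about how a JS engine would do it:

    #include <algorithm>
    #include <execution>
    #include <vector>

    int main() {
        std::vector<int> v(1'000'000, 1);
        // std::execution::par asks the implementation to run the "map" in
        // parallel; the element-wise lambda must be safe to run concurrently
        std::transform(std::execution::par, v.begin(), v.end(), v.begin(),
                       [](int x) { return x * 2; });
        return 0;
    }

The transparent version would be the same thing without the explicit policy argument, which is where the "really hard" part lives.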
Best I've seen so far (outside of Lisp) is Elixir, but it bogs the whole thing down by requiring a bunch of extra information and metadata, plus an extra layer of indirection, where Common Lisp can infer it all by position.
Thank you. I think you've basically summarized the approach! If you have any specific guidance to that end please send me an email or DM with details :)
If the setter is at compile time it might still be functional. If it's at run time, it isn't. A run-time setter must be a binding (as in lambda args), not an assignment.
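A minimal sketch of that distinction, in C++ for concreteness (names invented):

    #include <iostream>

    int main() {
        // assignment: mutates existing state - not functional
        int x = 1;
        x = 2;

        // binding: "set" the value by passing it as a lambda argument;
        // nothing is mutated, the parameter is a fresh binding on each
        // call (here it shadows the outer x rather than changing it)
        auto with_x = [](int x) { return x + 1; };
        std::cout << with_x(2) << "\n";   // prints 3
        return 0;
    }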
I disagree. You make two claims: familiarity, and beginner friendliness.
For the first, I think it's a mistake to perpetuate the mistakes of the past. I don't think JS developers have had problems adjusting to the lack of return in arrow functions.
Beginners have no preconceived notions of how a programming language should operate, and return makes the language model more complex.
1 + 1 evaluates to 2, but in a language that requires return,

    def foo = 1 + 1
    foo

doesn't, for no good reason. This breaks the simple substitution model of evaluation.
These claims could be addressed empirically, but I guess neither of us is going to do the research. :-)
For a very simple, single-expression lambda function I agree you don't need an explicit return. Even Python skips the "return" for lambdas. But for anything more complex, I find explicit returns, especially early returns, make the code much more readable for people who are used to imperative languages.
For example, which of these is clearer to people who don't know Lisp? I'd argue the second one, because of the early-return guard.
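(The original examples aren't reproduced here; a hypothetical pair in the same spirit, sketched in C++ with invented names:)

    // Version 1: one nested expression, no early return
    int price_with_discount(int price, bool is_member) {
        return is_member ? (price > 100 ? price - 20 : price - 5) : price;
    }

    // Version 2: early-return guard, imperative style
    int price_with_discount2(int price, bool is_member) {
        if (!is_member)
            return price;        // guard: bail out early
        if (price > 100)
            return price - 20;
        return price - 5;
    }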
I love to swing a hammer as much as the next programmer. But if you offer me a nail gun, even with slightly lower precision, I will happily use it the majority of the time and revert to my hand tools when it's most appropriate. This is about developing force multipliers and producing leverage, not avoiding the craft.
I would say that a better analogy is this: you've already got an experienced crew, some swinging hammers, some lugging nail guns, and everyone's really quite good at what they do. But the plans from the architect are just really hard to build. You and the crew can do it, for sure, but the issues are not going to be better or worse if the guys use the table saw to crosscut some of the framing elements or someone gets one of those new German tool widgets. Sure, the tools might make a small difference to the process, but the overall experience will be dominated by the fact that the thing is just hard to build.
Let's keep going with the building analogy. You're running a cabinet making company. You've got all kinds of hand tools and power tools. You build jigs to make certain repetitive tasks faster. Then a ridiculously difficult design comes in for you to build. You and your crew are flummoxed by its complexity. Suddenly, someone offers you a CNC machine that you've never used before. What seemed hard is now easy. The nature of solving problems with a CNC is different. Using a CNC presents other challenges. But you have entered a new realm of what's possible. Analogies are a lot of BS but hopefully that gets across the flavor of what I'm talking about.
It's a fine analogy. The only problem is that I don't recall ever coming across the equivalent of a CNC machine for the class of problems I face in my work.
Thread-safe lock-free sparse integer-to-integer map? No CNC for that.
Translating time between two domains, one of which is linear and monotonic, and the other is non-linear and non-monotonic. No CNC for that.
Generating and caching the right versions of different segments of audio waveforms at different zoom scales, in multiple threads? No CNC for that.
I could go on, but you get the point.
What tends to be more like a CNC machine are libraries. For example, realizing that you need some sort of reference-counting system for lifetime management, preferably combined with pointer-like behavior ... and then discovering boost::shared_ptr (later to be std::shared_ptr) ... now that's like getting a new CNC machine. But it doesn't require a new language (and realistically, it didn't even require the library - the library just made it possible to not implement it locally).
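To make that concrete, a minimal sketch of what the library bought you (the payload type is invented):

    #include <memory>

    struct Peak { float min, max; };   // invented example payload

    int main() {
        // reference-counted ownership with pointer-like behavior
        std::shared_ptr<Peak> a = std::make_shared<Peak>(Peak{-1.0f, 1.0f});
        {
            std::shared_ptr<Peak> b = a;   // refcount goes to 2
        }                                  // b destroyed, refcount back to 1
        // the object is freed automatically when the last owner (a) goes away
        return 0;
    }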
I think what I'm really trying to say is that I rarely come across problems where the kind of help offered by the putative "new CNC machine, aka new language" is anywhere near as substantive as the help an actual CNC machine offers a cabinet-making company. Put differently, the new tool (language) still leaves the problem essentially as hard as it was before.
p.s. a good friend runs a high-end wood shop, and I'm fairly aware of the impact their first CNC machine had on what they could do.
When the web became popular, there arose the problem of writing really fast concurrent servers that could handle 10k connections without the overhead of 10k threads.
This problem is arguably harder than the example problems you gave, and it was solved by language primitives that now exist in basically every popular language. These primitives, when used, change the nature of the language they appear in.
These primitives (async/await) are more than libraries. They intrinsically change the nature of your code. (Though technically they could be made into libraries for languages without async/await; it's just that the syntax would be extremely busy.)
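A sketch of that "extremely busy" library version, in C++ with invented stub primitives (they run synchronously here just so the sketch is self-contained; a real event loop would defer the continuations):

    #include <functional>
    #include <iostream>
    #include <string>

    // invented stand-ins for async I/O primitives
    void read_request(int /*fd*/, std::function<void(std::string)> k) {
        k("GET /");
    }
    void send_response(const std::string& s, std::function<void()> k) {
        std::cout << s << "\n";
        k();
    }

    void handle_connection(int fd) {
        // every would-be "await" becomes another nested callback
        read_request(fd, [](std::string req) {
            send_response("handled " + req, []() {
                // ... continue the protocol, nesting ever deeper
            });
        });
    }

    int main() { handle_connection(3); return 0; }

With async/await syntax, that nesting flattens back into what reads like straight-line code.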
This only occurred because the web was popular, and the specific problem of servers and IO changed from a specific problem into a general one. So when someone wants to create a new language, it's to attack a general problem.
The issues in your examples look to be somewhat domain-specific, so new languages won't really help you in those specific areas.
>Generating and caching the right versions of different segments of audio waveforms at different zoom scales, in multiple threads? No CNC for that.
I would say that for this example there are enough general issues that modern languages CAN help with. For example, do you want to program in a language that can guarantee, with helpful static error messages, that your code will never have a data race, a segfault, a buffer overflow, or a dangling pointer?
Well, there's a language that can help you there. In the same vein, I've seen languages go even further than this and guarantee that the compiler will never let you write code that makes your program crash.
I think we can both agree that these general features that improve safety WILL make the issues you face easier.
Based on my experience (and I was doing web stuff starting in '92), this is a mistelling of the tale you're trying to tell.
The problem was called "the thundering herd": if you had N threads all sleeping/waiting on a condition, and then that condition was raised/signalled, there were no OS primitives available that would wake only a single thread. Instead they all woke up, tried to get whatever work was available, only one succeeded, the rest go back to sleep. Incredible waste of cycles. These days, you can signal a condition in a way that will only wake a single thread that is waiting on it. Problem solved, for every language, without language modifications.
This was NOT fixed at the language level. It was fixed by adding new OS-level primitives that did the right thing.
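In C++ terms (std::condition_variable is a thin wrapper over those OS primitives), the fix looks something like this sketch:

    #include <condition_variable>
    #include <mutex>
    #include <queue>
    #include <thread>

    std::mutex m;
    std::condition_variable cv;
    std::queue<int> jobs;

    void worker() {
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [] { return !jobs.empty(); }); // sleeps until signalled
        jobs.pop();                                // one waiter gets the job
    }

    int main() {
        std::thread t1(worker), t2(worker);
        for (int i = 0; i < 2; ++i) {
            { std::lock_guard<std::mutex> lk(m); jobs.push(i); }
            cv.notify_one();   // wakes a single thread, not the whole herd
        }
        t1.join(); t2.join();
        return 0;
    }

The old behavior is what you'd get by replacing notify_one() with notify_all(): every waiter wakes, one wins, the rest go back to sleep.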
Async/await is another wrinkle in this, but for those of us old enough to remember life before pthreads, that was already effectively taken care of using threads (whatever the API) and existing OS-level sleep/wait primitives.
Doing this without threads is popular among the cool kids these days, but that's even harder than doing it with threads. Consequently, various languages have wrapped this sort of code into builtins. Yes, that makes thread-less async/await easier to code, but it doesn't actually address the design problems of whatever you're using thread-less async/await to accomplish.
The problem with waveform caching is not data races etc. (though of course, those issues are hard enough). It's figuring out what you should cache and when. The best answers vary depending on user behavior, so you need an adaptive approach that isn't particularly linear, and you also need a way to recognize when user behavior means you should clear out the entire cache and start over.
> What tends to be more like a CNC machine are libraries.
If a language or library gives you composable semantics, you may have a programming CNC on your hands. A CNC requires at minimum one degree of disconnect between the artisan and the artifact: a language compiler/runtime (or library) that applies composable semantics to logical & computational abstractions.
But the syntax is precisely the only thing a language can give you that a library cannot.
This might be true on a similar level to all Turing-complete languages being equivalent, but I’m not sure how useful it is beyond that level. For example, the features that a language provides as directly supported building blocks and the guarantees a language makes about how certain entities will or won’t behave and relate to each other profoundly affects the developer’s experience. That remains true even if some of those features could eventually have been recreated with a library modulo syntax and even if a perfect programmer would always use them properly and never rely on the language to prevent a mistake.
Right, but that means it has to be implemented by someone. It could be a language builtin, but almost nobody is going to switch languages for such a feature (if it could even exist as a language feature anyway). Or it could be a library, in which case the question of better languages is again moot.
The idea that the compiler could somehow pick "the right" implementation of a sparse unordered map based on a list of constraints that combine to create potentially dozens of versions strikes me as far-fetched. Even specifying "lock-free" for example is very, very far from providing enough detail about what is actually required. Wait-free? Readers-not-blocked-by-writers? Writers-not-blocked-by-readers? etc. etc.
I don't dream of a future when compilers (or something) can somehow do all this, and I'm not convinced it will arrive. But I've been (very) wrong before.
Fair points. It would need to be a meta-language of sorts, with compilation stages, stuff like that. Maybe a compromise can be reached with customizable compilers. PaulDavisThe1st's meta-compiler may know which sparse lock-free doodad you want. Even a CNC machine can't just spit out anything under the sun. The key thought here is the industrialization of software production. It will (has to) happen (not that I'm pining for it :o). And our little chat here is about whether programming languages will have a role to play in it.
This was fixed at the OS level. But the usage of new system calls involved fundamental shifts in basically every language to account for the new paradigm.
When it gets out of its data-science pedigree and can be used for standard apps. When it gets good tooling. When its stack-traces stop being arcane gobbledygook. When it can be compiled to a single binary at the command line without going through arcane hoops. When it gets interfaces/protocols/traits.
At the moment, Julia is a nice language at v1alpha1 for scripts and data exploration.
I agree it lacks all of these things, but in scientific computing - in its current state - it is already ahead of Fortran, Python, and C++ in terms of convenience. Precompilation doesn't matter as much here, and the packaging and JIT compilation, as well as the relatively simple FFI, make one's life a lot easier.
And NB, Python also doesn't offer many of these features, such as dependency management and simple single-binary builds. Yet it's popular.
"And NB, python also doesn't offer many of these features, such as dependency management and simple single-binary builds"
Python has tools for both of these - including virtualenv in standard Python, and Nuitka if you want native compilation to a single binary. Julia has none.
I did mention simple single-binary builds using the CLI (i.e., it just works with one command), not fiddling around for hours with PackageCompiler.jl. Fiddling with snoop files, then being forced to ask questions on the forums to do what other languages do out of the box, is not the way to go for developer ergonomics.
Uhh, you don't have to do anything with snoop files, and it's just a one-line CLI call:
    julia -e 'using PackageCompiler; create_sysimage(["MyPackage"], sysimage_path="MyPackage.so"; precompile_execution_file="MyScriptOfWhatToCompile.jl")'
and now you have a binary. How are people "forced to ask questions on the forums" if the only thing to do is change file location names? Are you talking about PackageCompiler from 2019 or PackageCompiler from 2023?
It's true, and that is one of the biggest challenges. Producing something that feels familiar (but not error-prone) has been the way I've resolved such conflicts so far.