I have had a somewhat unearned distaste for Python for the last decade or so.
I mostly just don't like some of the design decisions it made. I don't like that lambdas can't span multiple lines, I don't like how slow loops are, I don't like that some functions seem to mutate lists while others don't, and I am sure that there are other things I missed.
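To make that last complaint concrete, the kind of inconsistency I have in mind is roughly the sort() vs sorted() split:

nums = [3, 1, 2]
nums.sort()             # mutates nums in place and returns None
ordered = sorted(nums)  # returns a new sorted list and leaves nums alone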
But it really comes down to the fact that I haven't used it much in my career. I occasionally have had to jump into it for scripting stuff, and I even taught it for a programming 101 course at a university, but I haven't had a lot of exposure otherwise.
For scripting stuff for myself, I usually end up using Clojure with GraalVM (yes, I know about babashka), or nowadays even just a statically linked compiled language like Rust.
I don't really understand why people think that compiled languages can't be used for scripting (or at least task automation), honestly. Yes, you have to add a compilation step. That's maybe one extra step during development, but realistically not even that. With Rust I just do `cargo run` while developing; I don't see how that's harder than typing `python`.
> The most insane python feature is that for loops keep their intermediate variable.
"Insane feature" is a generous way of describing this behavior. I would say it is just a stupid bug that has managed to persist. Probably it is impossible to fix now because of https://xkcd.com/1172/
How typecheckers and linters should deal with this is a tricky question. There is how the language ought to work, and then there is how the language actually does in fact work, and unfortunately they are not the same.
Comments like this basically should say "I want as much handholding as possible"
Lucky for you, LLMs are pretty good at that these days.
>And the types I can never trust. I've used all the tooling and you still get type errors in runtime. It's also ridiculously slow.
IDE-integrated mypy checking does this in the background as you type. As for errors, it all comes down to how much typing you actually use. You can set the IDE to throw warnings for `Any` types or missing type annotations.
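A quick sketch of the kind of thing it catches for you, as long as you actually annotate (the function here is just made up for illustration):

def double(x: int) -> int:
    return x * 2

double("oops")  # mypy flags the argument type as incompatible,
                # but at runtime this just returns "oopsoops"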
Because sometimes I want to have logic that spans multiple lines and I don't want to assign it a name. An easy example might be something with a `map` or a `filter`. For example, in JavaScript:

[1, 2, 3, 4].filter(x => {
  let z = x * 2;
  let y = x * 3;
  let a = x / 2;
  return (z + x * a) % 27 == 2;
});
Obviously I know I could name this function and feed it in, but for one-off logic like this I feel a lambda is descriptive enough and I like that it can be done in-place.
You're free to disagree, but I think there's a reason that most languages do allow multi-line lambdas now, even Java.
> Obviously I know I could name this function and feed it in, but for one-off logic like this I feel a lambda is descriptive enough and I like that it can be done in-place.
FWIW, you'd also have the benefit of being able to unit test your logic.
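Something like this, reusing the example upthread (the name is just made up):

def worth_keeping(x):
    z = x * 2
    a = x / 2
    return (z + x * a) % 27 == 2

# trivially unit-testable once it has a name
assert not worth_keeping(2)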
I mean, maybe, that's why your lambdas shouldn't be too long.
I have done a lot of Haskell and F#, and I'm very familiar with the concept of "lifting", and yeah being able to individually test the components is nice, but even within Haskell it's not too uncommon to use a lambda if the logic doesn't really need to be reused or is only a couple lines.
If you have a huge function, or you think there's any chance of the logic being reused, of course don't use a lambda, use a named function. I'm just saying that sometimes stuff that has 2-4 lines is still not worthy of having a name.
Honestly, I don't really see the appeal of unnamed functions in general. I so rarely use lambdas that I wouldn't really miss them if they were gone. Just occasionally as a sort key, or in a comprehension.
I have seen people do this in JavaScript quite often, but I always assumed there was some kind of underlying performance benefit that I didn't know about.
As I think about it I guess it makes sense if you're passing a function to a function and you just want it to be concise. I could imagine using something like that off the top of my head, but then pulling it apart and giving it a name the moment I had to troubleshoot it. Which is how I currently use nested comprehensions, just blurt them out in the moment but refactor at the first sign of trouble.
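For what it's worth, the back-and-forth I'm describing looks roughly like this (toy example):

matrix = [[1, 2, 3], [4, 5, 6]]

# blurted out in the moment
flat_evens = [x for row in matrix for x in row if x % 2 == 0]

# pulled apart and named at the first sign of trouble
def even_values(matrix):
    result = []
    for row in matrix:
        for x in row:
            if x % 2 == 0:
                result.append(x)
    return result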
I think maybe I just have trouble seeing some of the braces and stuff, and it's easier for me to reason about if it's named. I guess that's why we have 32 flavors.
Thanks for answering me honestly, I really do appreciate it, even if my tone came off as dismissive. Sometimes I don't realize how I sound until after I read it back.
Obviously it's totally fine to have a difference of opinion for something like this.
> I have seen people do this in JavaScript quite often, but I always assumed there was some kind of underlying performance benefit that I didn't know about.
I don't think so, at least I haven't heard of it if there is.
I tend to have a rule of thumb of "if it's more than 6-7 lines, give it a name". That's not a strict rule, but it's something I try to force myself to do.
Like in Python, most lambdas can be done in one line, but that also gets into a separate kind of gross logic, because you end up trying to cram as much as possible into a single expression.
Like, in my example, it could be written like this:
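# the same logic as the JavaScript example above, crammed into a single lambda
list(filter(lambda x: (x * 2 + x * (x / 2)) % 27 == 2, [1, 2, 3, 4]))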
But now I have one giant-ass expression because I put it all into one line. Now where previously I had two extra names for the variables, I have the ad-hoc logic shoved in there because I wanted to squeeze it into a lambda.
And I think that's where the reasoning behind only allowing one-line lambdas came from. I believe I read a thread a long time ago where GvR didn't even want to include lambdas at all; if I have time I might look for it and edit in a link.
At its core, I think it's fair to say Python is about forcing the user into formatting their code in a readable way. It's gotten away from that over the years for practicality reasons, and to increase adoption by people who disagree on which ways are more readable.
Sometimes I wish they would take nested comprehensions away from me, I am too lazy to avoid them in the heat of the moment, and I get a thrill out of making it work, even though I know they're disgusting.
If your style is doing a lot of functional programming, a multi-line lambda is a very natural thing to reach for. Other times you want to use a variable or several variables without actually passing them around as arguments. It makes sense especially if you are already used to it in Java/C++/JavaScript/Go.
Is it "better" than a named function? No, of course not; they work mostly the same. But we are not talking about better or worse. We are talking about syntax for the sake of syntax, because some people prefer to write code in a way you don't necessarily care about.
This makes a lot of sense, I came to a similar conclusion in my other reply.
I always thought the appeal of functional programming was more about test-ability and concurrency, it never occurred to me that people actually preferred the syntax.
Can't speak for anyone else, obviously, but part of the reason that I got into functional programming is specifically because I found the syntax very expressive. I felt like I was able to directly express my intent instead of describing a sequence of steps and hoping that my intent comes to fruition.
Different strokes and whatnot, not everyone likes functional programming and of course there are valid enough criticisms against it, but I've always appreciated how terse and simple it feels compared to imperative stuff.
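A tiny Python-flavored example of the contrast I mean (made-up numbers):

numbers = [1, 2, 3, 4, 5, 6]

# imperative: spell out the steps and hope the intent falls out
total = 0
for x in numbers:
    if x % 2 == 0:
        total += x * x

# declarative: state the intent directly
total = sum(x * x for x in numbers if x % 2 == 0)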
Even with regard to testability, if your function is pure and not mucking with IO or something, then even using a multi-line lambda shouldn't affect that much. You would just test the function calling it.
Keep in mind, Haskell doesn't really have "loops" like you'd get in Python; in Python you might not necessarily need the lambda because you might do your one-off logic inside a for loop or something. In Haskell you have map and filter and reduce and recursion, that's basically it.
weeell, you can still do stuff like this in Haskell:
import qualified Data.Vector.Mutable as MV
import Control.Monad (forM_)
...
let n = MV.length vec
forM_ [0 .. n-1] $ \i -> do
  next <- if i < n-1 then MV.read vec (i+1) else pure 0
  MV.modify vec (+ next) i  -- increase each value by the original next value
it's just that in many cases the functional machinery is so accessible that you don't need to reach for for-loops.