I just converted a relatively simple node.js program to Rust (because the native libusb bindings for node.js crash immediately on Windows 7 64-bit).
Rust made me worry about a lot more than I wanted to. Threads, locking, mutability. What was 80 lines of:
1. open USB device, get handle
2. start a WebSocket server, accept clients
3. proxy incoming USB device traffic to all WebSocket clients
4. proxy incoming WebSocket traffic to USB device
in node.js turned into battles with dyn FnMut traits to pass "callback/event" handlers around, spawning threads so as not to block on libusb reads / WebSocket client reads, borrowing, cloning, mutex locking, and reference counting.
You can't just move variables from one scope into callback/lambda scope with any sort of ease in Rust the way you can in JavaScript.
It sounds a bit like you tried to write Rust like you do JS.
Rust is “move” by default. Passing ownership of a variable to callbacks requires making sure you have the correct types along the way, especially if you want to mutate. It can be easier in many cases to only deal with “owned” data in Rust at first. Clone a lot, and so on.
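To make that concrete, here is a minimal sketch (variable names invented) of the "clone a lot, deal with owned data" approach when handing state to a callback that runs on another thread:

Rust code:

use std::thread;

fn main() {
    let message = String::from("hello");

    // Clone so the callback gets its own owned String; `move` then transfers
    // ownership of that clone into the spawned thread.
    let for_thread = message.clone();
    let handle = thread::spawn(move || {
        println!("worker sees: {}", for_thread);
    });

    // The original is still usable here, because only the clone was moved.
    println!("main still owns: {}", message);
    handle.join().unwrap();
}

It costs an allocation per clone, but it sidesteps most borrowing fights until the copies actually matter for performance.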
I have a fair bit of Rust experience, and I agree that it adds a lot of cognitive overhead. It gets better over time as you learn how to appease the compiler, but the borrow checker can still be a pain in the ass. It's something you're constantly aware of, taking up space in your brain.
I can understand the move from C or C++ to Rust, the safety guarantees can be compelling. And it does feel like a much more modern language. However, if something with GC like C# can get the job done, I'd choose that every time over Rust.
Yeah, part of this is definitely true - Rust is much more verbose about things you often don't really care about (e.g. `String`/`str` in code that only runs once - who cares?).
But a lot of the extra effort in writing Rust code is about moving runtime errors to compile time. It's like static types - more effort writing it but you don't spend ages debugging typos at runtime. I've had way more instances of spending several hours writing complex code and having it work first time in Rust than in any other language. In C++ it's so rare it makes me extremely suspicious when it happens (did I actually recompile?), whereas in Rust I'm almost starting to expect it.
Modern C++ has a strict type system if you don't work around it.
Nevertheless, the borrow checker can help with bugs that otherwise could be hiding, which is the actual advantage over C++. But those are not the kind you see right away after running your program again.
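For a concrete (hypothetical) example of the kind of bug that stays hidden: pushing to a Vec while still holding a reference into it compiles fine in C++ and may only misbehave at runtime, whereas this Rust version is rejected at compile time with a borrow error:

Rust code:

fn main() {
    let mut numbers = vec![1, 2, 3];
    let first = &numbers[0]; // shared borrow into the Vec's buffer
    numbers.push(4);         // error: cannot borrow `numbers` as mutable
                             // while `first` is still in use (push may reallocate)
    println!("{}", first);   // without that error, this could read freed memory
}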
I'm not just talking about the borrow checker though - Rust has a much more strict type system in general. It's really explicit about everything, for example you can't just `println!("{}", some_path)` because `Path` might contain invalid UTF-8 that can't be converted to a string.
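For instance (a small sketch, the path is made up), printing a Path means opting in to a conversion explicitly:

Rust code:

use std::path::Path;

fn main() {
    let some_path = Path::new("/tmp/example");
    // println!("{}", some_path);                // won't compile: Path has no Display impl
    println!("{}", some_path.display());         // explicit display adapter
    println!("{}", some_path.to_string_lossy()); // or replace invalid UTF-8 with U+FFFD
}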
As another JS -> Rust dev I do agree. Over time I have grown to understand why it complains when I do X, Y and Z and I like having deeper visibility into what's going on. That said, I did some experimentation with Nim and it truly feels like a language that does the best of both. But the community isn't so huge.
Speaking of which, coming from Rust (well, when I put it like that it sounds as if Rust were the only language I've ever used, which is of course not true; Rust as a first language would still be highly unusual), I enjoyed being able to say
Rust code:
let foo = if x <= 0 {
    0
} else if something_else > 50 {
    9000
} else {
    42
};
And I miss this in Swift.
The ternary operator is fine on its own but I’m not a huge fan of nested ternary operators.
With a bit of whitespace it's still somewhat readable, but not as comfortable or as easy to read correctly.
Swift code:
let foo = x <= 0 ? 0 :
          (somethingElse > 50 ? 9000 :
           42)
One way of writing something that is more similar to the way I'd write it in Rust would be to make a closure and then run it.
Swift code:
let foo = { () -> Int in
    if x <= 0 {
        return 0
    } else if somethingElse > 50 {
        return 9000
    } else {
        return 42
    }
}()
But it's a bit annoying still, both to read and to write. Also, even in the current version of Swift (Swift 5), the compiler cannot infer the return type of the closure on its own even though all of the branches return an Int, so I'd have to explicitly annotate that as I have done in the code above.
I guess a lot of people would just make foo mutable and write the code as
Swift code:
var foo = 42
if x <= 0 {
    foo = 0
} else if somethingElse > 50 {
    foo = 9000
}
I concede that this is probably the most readable out of all of the three Swift code samples in my comment. But the point is that I didn't want foo to be mutable. I wanted to assign a value to it based on some simple logic and have it be immutable.
I find ternary operators perfectly readable if you split the parts across lines — they map exactly to the conditions and statements in an if-else group.
cond1          // if this
  ? result1    // return this
  : cond2      // else if this
  ? result2    // return this
  : cond3      // else if this
  ? result3    // return this
  : resultElse // else return this
It's funny how people have so many different preferences for ternary formatting. I can't think of any other operator that would have tens of different format prefs.
I dislike the ternary operator in languages like C++, and many languages copy it just because people are used to it. I much prefer the more literal, reversed form in Python (`value_if_true if cond else value_if_false`).
Agreed. Plus if the else clause has a side effect, you can't assign it to the variable, which means you need to instantiate to null or a dummy value. Very messy.
What's funny is that expression oriented languages translate really nicely to stack VMs. My hobby language compiles down to WASM and it was almost trivial to code gen if expressions.
Agreed about missing if and switch expressions in Swift. My preferred way of writing that would be:
let foo: Int
if x <= 0 {
    foo = 0
} else if somethingElse > 50 {
    foo = 9000
} else {
    foo = 42
}
That way you get immutable foo and the compiler will shout at you if you forget to assign the value in one branch. It's not quite as nice as an expression, but it's less punctuation than the closure and the result is just as safe.
As others explained, you do need a default case (either an 'else' in your if statement, or an exhaustive match), otherwise how would the expression typecheck?
For a concrete example, if you use an 'if' only, you can have the following:
let x = if income < 10 { 0 };
At that point, there's no way to statically know x's type. If income is >= 10, x is what?
In order to convince the compiler that x has a statically knowable type, the "if" expression needs an else branch, or should be converted into a match that the compiler can prove is exhaustive.
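A minimal sketch of that fix (the variable names are only for illustration):

Rust code:

fn main() {
    let income = 5;
    // An `if` without `else` is an expression of type `()`, so the compiler
    // rejects `let x = if income < 10 { 0 };`. Adding `else` (or using an
    // exhaustive `match`) gives every branch a value of the same type.
    let x = if income < 10 { 0 } else { 1 };
    println!("{}", x);
}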
It wouldn't work like that, but if you made them else-if and else branches respectively, then yes, it would. Each if/else-if/else block as a whole is a single statement, but two if statements and a literal are three individual statements.
Coming from writing JS all day, I've recently started learning Rust and I find it extremely refreshing.
Yes, it is heavy, low level and forces you to think about how things work under the hood, but it empowers me to do high-performance stuff I just couldn't do in JS before.
Thanks for that, I don't do enough D to know some of these syntactic features but I do love the language. I do plenty of JS so this overview is spot on for me.
I am a C# guy, and every time I see modern Rust, C++ or Go syntax I am really surprised by its deviation from the norm (which is interestingly no longer C but a hybrid between JS/Java/C#). I mean, why does Rust use different syntax for closures and functions, || vs. ()?
On the other hand, the lineage is not Java/JS/C# to Rust but C/C++ to Rust. Considering that, it is an improvement.
Rust has a philosophy of easy machine and human parsing. ECMAScript-style `(…) => …` requires that you get to the => before you know whether the (…) is a parenthesised expression or function arguments. With `|…| …`, you know you’re dealing with a closure immediately because there’s no unary pipe operator.
Many languages require arbitrary lookahead in order to parse code. (Some are even worse, e.g. C++ requires a full compiler to parse it because of the question of whether `(a<b, c>(d))` is calling a generic function a, or doing two comparisons; and Perl is unparseable, you’ve got to run it before you can figure out how certain things should be parsed.)
Rust’s philosophy has been to avoid that, and make it simple to parse, because that helps tooling and people alike.
Most people don’t see why this is a big deal. Here’s why: humans parse code when they’re reading it in a very similar way to how computers do. This holds true of natural language parsing, too, and is much better documented in that industry. If it’s hard or takes longer for a computer to parse it, it’s very likely to be hard for a human to parse it.
If you have fat arrow syntax, you need to skip ahead on the line to find it to confirm that what you’re looking at is a closure. Pipes say from the very start “this is a closure”. This could be said to be why Rust uses the `let` keyword too, which is theoretically unnecessary (though you might need something to replace it in a small fraction of cases). Not only does it make parsing way easier for machines (in a way that makes extending the language grammar later much easier too, but that’s part and parcel of the parsing philosophy), it makes the intent of the line immediately clear to humans.
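A small illustration of what that buys in practice (made-up example):

Rust code:

fn main() {
    let numbers = vec![1, 2, 3];
    // `|n| ...` announces "closure" from its first character, so neither the
    // parser nor a reader has to scan ahead to find out what it is, and `let`
    // announces "this line introduces a binding" up front.
    let doubled: Vec<i32> = numbers.iter().map(|n| n * 2).collect();
    println!("{:?}", doubled);
}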
I agree with avoiding lookahead and so on; however, that could have been solved more consistently by looking at it holistically at the beginning of the language design. Considering Rust was a fresh start (similar to C#), I expected more consistency.
But maybe I should take a step back, considering my limited Rust knowledge :)
There is at least one compelling argument for making new languages easy for machines to parse: it makes it easier to write good supporting tools.
We don’t just use a compiler or interpreter any more. We also use editors and refactoring tools and debuggers and profilers and style checkers and source formatters and static safety analysers and diff tools and… If the developers of these tools don’t have to worry so much about the mechanics of parsing the source, that leaves more resources to spend on making each tool useful.
To some extent you could achieve the same benefit by making a library available to parse the source and return some sort of annotated AST or similar data structure. However, unless you can provide an easy way to call that library from every language someone might want to use to write a tool, avoiding unnecessarily complicated parsing still seems advantageous.
As you alluded to in your 3rd paragraph, that is exactly the problem that "language servers" [1] aim to solve. The API is JSON-RPC based, which is callable from any tool. If every modern language shipped with a canonical language server, machine parseability becomes a non-issue since the parser would be the same as that of the compiler's.
For instance, VS Code already uses language servers to support autocomplete/refactoring for languages like Python and they're pretty zippy (I used to have concerns about speed, but in practice it's a non-issue). Those same language servers are also supported in Vim, emacs, etc. The editor itself doesn't need to know anything about the underlying language. And it looks like a Rust language server exists:
In fact, to take the idea further, the goal of the Roslyn [2] project is to expose APIs to the compiler itself in order to provide services to external dev tooling. Imagine a third-party generic debugger being able to tap into compiler or runtime internals and provide services around them without explicitly knowing anything about the underlying language.
It probably won’t surprise you to learn that I had LSP in mind when writing my previous comment. :-)
I think there is a wider issue here than the (still very useful) scope of LSP, though. If you are building a tool that depends heavily on the semantics of the source language, beyond common operations like “go to definition”, you might need more than a standardised language server can provide, and then you’re back to the position I described before.
I had the sense it was on the tip of your tongue since the way you described it matched to a T, but I decided to flesh it out for folks who hadn't heard of LSPs. :)
You're right, at this moment the LSP is circumscribed in that it doesn't expose the AST (even though it could), which means one is limited to the capabilities it does expose. The claim is that the goal is parity across languages and exposing the ASTs is contrary to this [1]. I'm hoping this philosophy is challenged. In the meantime one might have to rely on languages themselves exposing their ASTs, e.g. Python is able to expose its AST (via the ast module).
So yes, right now one wouldn't be able to write, say, an IntelliJ IDEA type IDE based off a language server alone -- one would have to be able to create the AST oneself.
> What's the reason for including ease of machine parsing? Aside from the fact it might make the compiler slighty easier to write I guess.
Most of the designers have spent a long time writing C++, and C++ is famously extremely hard to parse for machines, and this has caused a lot of problems for compiler/language authors. It's normal for people to overcorrect when they have real trouble with something.
> I'm sure that code that's easy for machines and humans to parse is harder for humans to parse than code where humans are the first class citizens.
I'm not sure at all about that. Making syntax that is easy for machines to parse mostly means that your syntax needs to be wholly context-free and low-lookahead. Both these features help people too, even if they result in more different kinds of characters in your source code.
That works for JavaScript, because functions are just functions.
But in Rust, closures are different from functions, though they all implement the Fn, FnMut or FnOnce traits. Functions don't close over any state (so you can coerce them to a plain 'static function pointer), while closures can. Using the fn keyword for closures would thus muddy the conceptual waters (though you could argue that the names of the Fn* traits have already done that) and potentially hinder future language design.
I feel a stronger argument is that it’s also markedly longer and more visually noisy; `x.map(|x| …)` is seven characters shorter than `x.map(fn x => …)` and eight than `x.map(fn(x) => …)`, and to some extent words are more distracting than symbols.
The pipe syntax isn't perfect, but given the consistent philosophies of the Rust language, I am not aware of any better option. But it's definitely a fairly subjective matter.
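Roughly, the distinction looks like this (a sketch with made-up names):

Rust code:

fn double(x: i32) -> i32 {
    x * 2 // a plain fn item: captures nothing
}

fn main() {
    let factor = 3;
    let triple = move |x: i32| x * factor; // a closure: captures `factor`

    // Both satisfy the Fn traits, so both work with iterator adapters...
    let nums = vec![1, 2, 3];
    let doubled: Vec<i32> = nums.iter().map(|&x| double(x)).collect();
    let tripled: Vec<i32> = nums.iter().map(|&x| triple(x)).collect();

    // ...but only the capture-free function coerces to a plain fn pointer.
    let ptr: fn(i32) -> i32 = double;
    println!("{:?} {:?} {}", doubled, tripled, ptr(5));
}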
> Arrow functions are a popular feature in modern JavaScript - they allow us to write functional code in a concise way.
> Rust has something similar and they are called “Closures”. The name might be a bit confusing (...)
This is in turn confusing to me!
I can't imagine a world where you know JS and especially "functional" JS, but haven't heard of the term "closure".
"Arrow functions" are just sugar. They don't have any special meaning. All functions in JS, regardless of whether they are declared, assigned, returned etc., are closures and capture their environment.
Similarly, normally declared functions in Rust are first-class values as well and can, for example, be passed into higher-order functions.
Giving the benefit of the doubt, I assume that the author knows this and just wanted to introduce the term in a beginner-friendly manner. But I think it would be less confusing to just use the term and link to the official documentation:
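To illustrate both points in Rust terms (a sketch, names invented): a named function is itself a first-class value you can pass around, and a closure is what you reach for when you actually need to capture the environment:

Rust code:

// A named function is a first-class value...
fn shout(s: &str) -> String {
    format!("{}!", s.to_uppercase())
}

// ...and a closure is what captures (closes over) its environment.
fn make_greeter(name: String) -> impl Fn() -> String {
    move || format!("hello, {}", name)
}

fn main() {
    let words = vec!["hi", "there"];
    let loud: Vec<String> = words.into_iter().map(shout).collect(); // fn passed directly
    let greet = make_greeter("world".to_string());
    println!("{:?} {}", loud, greet());
}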
This statement is incorrect. There are important differences between arrow functions and regular functions.
> An arrow function expression is a syntactically compact alternative to a regular function expression, although without its own bindings to the this, arguments, super, or new.target keywords
You can refactor an arrow function into a specific regular function but you usually end up with something more verbose.
Or in other words, you could express all things "arrow" before it existed, which is exactly what we did before arrow functions were adopted. And we still do, in fact, just automated by transpilers like Babel[0].
But more importantly, in the context of the article: all functions in JS are closures, regardless of how you write them.
Well, you couldn't resolve "this" lexically without some closure trick like `that = this`, so sure, you could do everything you do with arrows without them, but not with one function alone.
Doing currency manipulation with i32 as an example for developers of a language that only supports floats. What can go wrong?
Not only is this a bad idea from a programming perspective and a teaching perspective, it is also a bad idea from a taxation perspective. In terms of taxation, what you suggest would result in [at least] civil penalties.
If you are going to create examples of code that manipulates currency, at least take it seriously and think about what would actually happen to people if they decided to use your code. Otherwise, use another concept to illustrate your example.
> Doing currency manipulation with i32 as an example for developers of a language that only supports floats. What can go wrong?
Using i32 for currency calculations is probably a bad idea (although there are situations where it would be OK), but this has absolutely nothing to do with javascript - it's no worse in js than it is in rust, since javascript fully supports all i32 numbers. In fact, it has a number of bitwise operators that operate on i32 numbers, and a bitwise operator that even works on u32 numbers.