
Arguably a programming language is ultimately a user interface, and the more intuitive the interface, the better.

In 2020, not sure we should care that much about how hard compilers have to work to achieve this. Computers and software are here to support us--we're not here to support them.



A parser tends to get confused in the same places where a less experienced human would get confused. Making a language that's easy to parse dovetails with making one that's easy to read.
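A concrete illustration (my own hedged example, not a universal rule): Rust's "turbofish" syntax exists precisely because generics in expression position would otherwise be ambiguous, for the parser and for the reader alike.

```rust
fn main() {
    // `Vec<i32>::with_capacity(4)` in expression position could be parsed
    // as the comparison chain `(Vec < i32) > ::with_capacity(4)`, so Rust
    // requires the extra `::` (the "turbofish") to keep the grammar
    // unambiguous for machine and human both:
    let v = Vec::<i32>::with_capacity(4);
    assert!(v.capacity() >= 4); // capacity is at least what was requested
    assert!(v.is_empty());      // no elements yet
}
```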


That's a reasonable point. As long as it's the "good for humans" that's driving things, this makes total sense. It's "good for computers" but "bad for humans" that needs to go away these days.


> In 2020, not sure we should care that much about how hard compilers have to work to achieve this.

That's not right. Long compile times are a real issue for some programming languages even today (Rust and C++).

> Computers and software are here to support us--we're not here to support them.

A false dichotomy, and not a perspective that offers any insight. Sometimes a low-level language is appropriate, and sometimes it is not.


C++ is stuck in this trap where, because it's slow to compile, the compiler maintainers add ever more optimizations so the generated code at least runs faster. Which of course makes the compiler even slower. Which motivates them to add more optimizations. Which makes the compiler yet slower.

I feel part of the problem with Rust is that it doesn't have a quick-and-dirty mode that's fast, and a production-ready mode that's slow but does all the checks. I do this with my C programs: I use various formal-analysis tools, which are slow, to vet the code before releasing it.
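(Worth noting, as a rough sketch of the documented defaults rather than a full answer: Cargo's profile system does encode a fast/slow split, though the type and borrow checks run in both modes.)

```toml
# Cargo.toml -- sketch of the default dev/release profile split
# (these are the documented default values; all are configurable).
[profile.dev]          # `cargo build`: fast, unoptimized, extra runtime checks
opt-level = 0
debug = true
overflow-checks = true

[profile.release]      # `cargo build --release`: slow, optimized
opt-level = 3
debug = false
overflow-checks = false
```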


Checks in Rust are fast. In fact, you can run 'cargo check' to type-check without doing any code generation, and that will finish in less than half a second.

Most editors did this on save before LSP came along. For logic, this is fast enough, because the type system catches enough mistakes that I don't need to run tests all the time. For UI, though, a faster iteration cycle would be nice...


C++ debug builds (i.e. without optimizations) are not that much faster to compile, in my experience. So it's not the optimizations that are the culprit.


The checks are not the most expensive part of compiling Rust.


What is it then?


The codegen is. This is partially because the IR passed to LLVM is not the best, but it’s also not terrible.


Does Rust allow for fast unoptimised builds, then? Perhaps I've missed that.


Faster, yes. Still lots to do.


You're saying LLVM is slow even when generating code without optimisations?


Yes.

The thing that will massively speed up rustc is the current re-architecting of it. We’re at the point of “few percent here, few percent there” with the current design. These add up over time, of course, but batch compilers are inherently slower than the newer, query-based incremental ones (after an initial compile).
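To make "query-based" concrete, here's a toy sketch (a hypothetical simplification of the idea, not rustc's actual API): compilation is modeled as memoized queries, so a result is only recomputed when its input actually changed.

```rust
use std::collections::HashMap;

// Toy query cache: recompute a "compilation" result only when the
// input changed -- the core idea behind incremental, query-based
// compilers. The key and the stand-in "codegen" are made up here.
struct QueryCache {
    cache: HashMap<String, (String, u64)>, // key -> (last input, result)
}

impl QueryCache {
    fn new() -> Self {
        Self { cache: HashMap::new() }
    }

    // "Compile" `input` for `key`. Returns (result, was_cached).
    fn compile(&mut self, key: &str, input: &str) -> (u64, bool) {
        if let Some((old_input, result)) = self.cache.get(key) {
            if old_input.as_str() == input {
                return (*result, true); // cache hit: skip recompilation
            }
        }
        let result = input.len() as u64; // stand-in for real codegen work
        self.cache.insert(key.to_string(), (input.to_string(), result));
        (result, false)
    }
}

fn main() {
    let mut c = QueryCache::new();
    // Cold compile, then a no-op rebuild served from the cache:
    assert_eq!(c.compile("main.rs", "fn main() {}"), (12, false));
    assert_eq!(c.compile("main.rs", "fn main() {}"), (12, true));
}
```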


To add to Steve’s point, Rust adds more checks in debug (non-release) mode, which can make the IR larger. So it’s not always the case that debug mode is faster to compile (though it generally is).
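One example of those extra checks (just an illustration, not an exhaustive list) is integer-overflow checking, which is on by default in debug profiles: rustc inserts a check on every arithmetic op, so `255u8 + 1` panics in debug builds and wraps in release builds. The checked/wrapping methods make both behaviours explicit regardless of build mode:

```rust
fn main() {
    let x: u8 = 255;
    // Overflow detected and reported as None, in any build mode:
    assert_eq!(x.checked_add(1), None);
    // Explicit two's-complement wraparound, in any build mode:
    assert_eq!(x.wrapping_add(1), 0);
}
```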


I'd consider compile time as part of the UI, so in that sense I think we agree. (If the compile is long because the implementation is poor, that needs to be fixed.)

Not sure what you mean on the dichotomy. If someone says that a language needs to have X because that will make things simpler for the computer, I say that they are wrong. The goal, the only reasonable goal, is to make things better for humans.


> If the compile is long because the implementation is poor, that needs to be fixed

The compile time could be long because the implementation is poor. But it's also possible that the specific requirements do not allow for a significantly faster compile time.

That's why the requirements matter. They determine the space of possible implementations. If the requirements eliminate all "fast" implementations, then the resulting user experience will be poor because of slow compile times.


The best example of this problem is probably SPARK, a variant of Ada which permits formal verification.

Its verifiers are awfully finicky, and tuning the parameters (including selecting the most appropriate verifier) can mean the difference between successful completion in a few seconds, and outright non-termination/timeout-with-failure.

It's true that the answer is to have better verifiers, but that's not just a matter of tweaking the verifier code; it's a serious research challenge. One of the most serious problems with formal methods is scaling them up.

https://en.wikipedia.org/wiki/SPARK_(programming_language)



