
On a similar note, why Rust over Go?

If I look at everything I used to write in C, I'd say 80% is well suited for Go, and for the rest I'd fall back to Rust. For the stuff where having a GC and slightly less control is OK, I don't see why I would want to use Rust. Rust is just much more complex, and I prefer to keep it simple, stupid (KISS).

Basically, Go is good for 90% of what I used to use Java for and 80% of what I used to use C for. I'm trying to understand where it makes sense for Rust to fit in.



I don't use Rust just because I want to avoid GC. I also use it for algebraic data types, compile-time elimination of data races, sophisticated polymorphism, a clear and simple module system, excellent tooling in the form of Cargo, and an unrelenting focus on providing abstractions with as little overhead as possible.
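To make the algebraic data types point concrete, here's a minimal sketch (the `Shape` type is just an invented example):

```
// A sum type: a Shape is exactly one of these variants.
enum Shape {
    Circle { radius: f64 },
    Rectangle { width: f64, height: f64 },
}

fn area(shape: &Shape) -> f64 {
    // The compiler rejects this match if any variant is unhandled.
    match shape {
        Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
        Shape::Rectangle { width, height } => width * height,
    }
}

fn main() {
    println!("{}", area(&Shape::Circle { radius: 1.0 }));
}
```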

(I've used Go and Rust daily for the past few years. I love them both.)


Good point. I agree there are other reasons to use Rust; what I mean is that unless I have a project that gets a lot of bang for the buck out of those, the KISSness of Go wins out over those nice features and their attendant complexity. GC was just at the forefront of that list of features.


There's simple, and there's anti-intellectual. Go would be the latter.


Well, in many respects BASIC, PHP, JavaScript, and Java all had various forms of "anti-intellectualism" baked in. They lost some of it along the way.

Still, the fact that all those languages are in the top 10 of programming languages kind of says that people don't really care.


Go is against the sort of programming that builds up conceptual dream worlds of gratuitous abstraction and needless complexity. When that's the only way you want to think, Go will indeed seem anti-intellectual.


A modern programming language without any support whatsoever for generics is, from my point of view, just extremely bad; for someone else it could very well be proof of anti-intellectualism.


> I don't use Rust just because I want to avoid GC.

Go that is?


Hmm, not sure I understand? Re-reading, perhaps my phrasing wasn't clear. What I meant was that I use Rust, and it's not simply because it lacks GC. There are lots of other good reasons too.

To be even clearer: I don't think Rust's value proposition depends on whether you absolutely must avoid GC or not.


Rust has gc, no?


No, it does not. It does allow you to implement it if you want though: https://github.com/Manishearth/rust-gc


Not in the mandatory runtime overhead, cycle collection, or non-deterministic pausing senses of the word.


It has an optional Rc type.

And it had GC as a core part of the language about six or more years ago, didn't it?


It has reference counting for some constructs, which is a form of garbage collection, and it has runtime overhead. Like shared_ptr in C++.


This is a misleading comment. A Rust program may use reference counting by explicitly wrapping some value in Rc<> or Arc<>; the standard library provides these generic types. There are no "constructs" built into the language or standard library that require the use of reference counting.
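For example, the opt-in looks like this (a minimal sketch):

```
use std::rc::Rc;

fn main() {
    // Reference counting happens only because we explicitly asked for it.
    let shared = Rc::new(vec![1, 2, 3]);
    let handle = Rc::clone(&shared); // bumps the count; no deep copy

    println!("{} owners of {:?}", Rc::strong_count(&shared), handle);
}
```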


I wrote that it's like shared_ptr, which is also a construct in the C++ standard library; what is misleading here? You don't need to use it in C++ either, and I did not write that you need to use it in Rust. You're disagreeing with something I didn't write.

PS. Greetings, fellow countryman.


Not really, no. You could implement one with e.g. reference counting, but one is not provided for you by default.


Depends on your Rust implementation. You can have an implementation without one and use it to make, say, an operating system.


There is only one implementation of Rust, and it does not have tracing GC. The language does not include semantics for one, so it would be an extension of the language.


Right. I had that backwards. (I don't program Rust)


> I don't program Rust

If you don't know anything about rust, you shouldn't respond to a question about rust.


No. He is saying he uses rust to avoid GC, but not just to avoid GC.


For me this is pretty easy. It's about leadership.

The leaders of Go foster an attitude of exclusivity (just like a month ago, they wanted to get rid of the Go subreddit in favor of a solely Google-owned option, Google Groups).

The leaders of Rust are very receptive and helpful to new people. They are on IRC / Reddit and many other channels.

I'd much rather invest my time into a truly open language and to me, that is not Go.


> just like a month ago they wanted to get rid of the Go subreddit

To be fair, this was a proposal by a single person on the Go mailing list, and it was in reaction to Reddit's CEO publicly admitting to editing other users' comments. The person who proposed closing the Go subreddit was also under the mistaken impression that the subreddit was hosted by the Go team, which wasn't the case. In the end, there was a lot of discussion, and nothing was deleted. Tempest in a teapot, as usual.

Go has a serious culture problem, but that's not a good example of it.


What's a good example of it?


I don't want to single out anyone or any specific projects, so this is going to be a generalization, but since you asked:

In my experience, there's a lot of hostility and arrogance. More than in any other community, I've seen Go developers shut down discussion by closing comment threads on Github; reject pull requests and refuse to debate the merits of the change; castigate people for "not following proper procedure"; and act dismissive or contemptuous instead of humble when they fail to understand a problem that is being discussed (on, say, the official Go Slack channel).

My very personal hypothesis is that this is a case of mirroring. The Go development team may be said to have a laconic, authoritarian style, which is, of course, their prerogative, and which befits their position; unfortunately, a lot of Go developers seem to be under the impression that they, too, can behave like demigods. In particular, Go developers, more than other cultures, seem to get an ego boost out of telling people "no".

But this is a generalization. There are lots of friendly Go devs around, to be sure (I hope I'm one of them). I'm particularly happy with the Kubernetes team. At the same time, I do think it's an issue and one that the community needs to be aware of.


> the official Go Slack channel

This is very misleading. There is no official Go Slack channel. The Slack channel you are referring to is just a Slack channel and is by no means official.

> In particular, Go developers, more than other cultures, seem to get an ego boost out of telling people "no".

This is misleading as well. Go was designed from the start to exclude certain features e.g. inheritance and many others. There are usually very good reasons for those decisions that people who've been following Go from its creation know and understand very well.

Now what happens when someone who does not understand those design choices comes to Go? They want their favorite feature, obviously! When Go members attempt to explain to them why something like that does not exist in Go (and probably never will be included), it starts feeling like "no". And people do not take "no" well. They get emotional and fail to see reason. If they had bothered to explore the language, or at least read the official documentation and FAQ, that state of mind might have been avoided.

Including everyone's favorite feature does not make a language better.

Go has created a community around a certain school of programming. Nobody claims that it is better than other schools but it is a fact that it exists. But who is the one that is close-minded in this case? The new person that doesn't bother with the teachings of the school or the school that dismisses the ideas of the new person? Who is really the one that says "no"?

From my experience, the Go developers always carefully consider every new idea that is brought to the table. But it is also a fact that after the 10th time you've seen the same idea, you are not going to sit down and spend time considering it. You are just gonna link a previous discussion and say "Sorry this has been brought up before, please check these". How does that feel to a new person? I bet it feels like "no" again.


>I'm particularly happy with the Kubernetes team.

I find it pretty terrible that Kubernetes develops on Slack/Github/video chats. For an open-source project, it rejects open-source tooling and makes it difficult for people with poor English skills to participate (text-based meetings are much better for inclusiveness). Even if you do listen in on the video meetings, they give you a feeling, as an outsider, that many of the decisions have already been made elsewhere (some obscure GitHub issue conversation, a comment on a year-old Google doc, etc.). It seems the only way to meaningfully participate in the k8s community is to be a known insider... :(


Kubernetes is open source by Google. It's surprisingly open, and still alive, which is better than almost any of Google's other well-known projects. (Chrome and Android come to mind, but Android is write-only, and Chrome isn't that great at listening to users either. Not that Mozilla is better about Firefox stuff, but the Rust team is wonderful.)


I only saw one Go team member speak in favor of deleting /r/golang, in order to dissociate the project from what he viewed as deplorable actions by reddit's CEO. I thought the proposal was rash but I don't believe the purpose was to herd people onto Google properties.


> (just like a month ago, they wanted to get rid of the Go subreddit in favor of a solely Google-owned option, Google Groups)

Wow, really? Do you have a link to that discussion? That's wild.


I believe this post was stickied at the top of the golang subreddit for a bit: https://www.reddit.com/r/golang/comments/5eubdp/the_future_o...

It should be a good summary of the event.


I write libraries.

In Go I can only write libraries for Go programs. In Rust I can write libraries for any program.

i.e. Rust can easily produce static and dynamic libraries that are linkable with C programs and any language with a C FFI. I can write Rust code that works for programmers using C, C++, C#, D, Go, Swift, Python, PHP, Java, etc.
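For instance, exposing a C-ABI function from Rust takes only an attribute and a calling convention (a minimal sketch; the function itself is just an invented example):

```
// Build with crate-type = ["cdylib"] (or ["staticlib"]) in Cargo.toml.
// #[no_mangle] keeps the symbol name stable, and extern "C" uses the
// platform's C calling convention, so any language with a C FFI can call it.
#[no_mangle]
pub extern "C" fn add(a: i32, b: i32) -> i32 {
    a + b
}
```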



> In Go I can only write libraries for Go programs.

That's not true. It's quite simple to create a loadable shared object in Go and call it using anything with a C FFI.


Calling C from Go has significant overhead [1]; doesn't that mean calling Go from C is equally slow?

[1] https://www.cockroachlabs.com/blog/the-cost-and-complexity-o...


It may be even worse calling Go from C, since you bring along the whole Go runtime, GC and all, when you call into Go.


It's not quite so simple, because of the GC Go brings along.

As proof, notice that it barely happens, and only as an oddity, in Go, yet in Rust there are actual uses (e.g. Ruby and Python library optimization).


Like Dlang


>On a similar note, why Rust over Go?

I was going to say that Rust has performance advantages over Go (due to Go's GC), but look at the benchmarks:

http://benchmarksgame.alioth.debian.org/u64q/compare.php?lan...

Go wins some and loses some, but it's all in the same ballpark (except binary trees [1], which it loses even to Java(!)).

It's true that Rust is a new language, but so is Go.

[1]: I assume it's because it's a test of GC, but Go loses to Java (which, like Go, is a GC language)


This is only because the Rust implementations are using particularly slow code paths: either SIMD/AVX optimizations require a nightly compiler, some optimizations would require unsafe code, or other languages are using particularly hacky code that would never fly in real-world software.

For example, many of the Java/C/C++ benchmarks use custom optimizations that should be illegal for benchmarking. Case in point: some feature custom hash maps with hashing algorithms that, while fast, would never be useful in practice, as they provide no protection against collisions. You'll see a hashing algorithm in a C preprocessor macro, for example, that just fakes having an actual algorithm, whereas the Rust examples stick to the tried and tested production-grade algorithms shipping in the standard library.


> Case in point: some feature custom hash maps with hashing algorithms that, while fast, would never be useful in practice, as they provide no protection against collisions.

They are useful in this case, aren't they? A custom hash map implementation can also be useful in other scenarios. It's a feature of the language that allows you to do that, so it's not "illegal" to use it. If Rust doesn't allow you to do something while another language does, that doesn't mean it should be illegal.

I don't see any tricks in Go, and it's faster than Rust and more memory efficient in almost half of the benchmarks. So what's your excuse for those cases?


I believe what the OP, mmstick, might have been saying is that the hashing algorithm used in the C version of the benchmark might be very different from the hashing algorithm in the Rust version, and this difference might be significant.

I don't speak Rust, so I can't quite tell what's going on here:

http://benchmarksgame.alioth.debian.org/u64q/program.php?tes...

But the C hash function is very simple, and probably not at all collision-resistant:

  #define CUSTOM_HASH_FUNCTION(key) (khint32_t)((key) ^ (key)>>7)
Again, I don't speak Rust, so hopefully someone can comment on this, but I don't see that same simple hash being used in the Rust version... I think it might be using a default hash function, which would probably be more collision resistant.

I am sure that the Rust could be made to do the same thing, but I am not sure that, as-is, this is an apples to apples comparison.


llogiq wrote a post about this: https://llogiq.github.io/2016/12/08/hash.html


I have wondered the same for a while now, and I hope someone familiar with Rust can explain it: in many of the benchmarks Go is faster and/or uses less memory despite the GC. Like you, I don't see much trickiness in the Go code.


In many cases a garbage-collected language will be faster when it comes to allocations than naive allocation strategies (e.g. reallocating a hashmap many times as it grows instead of reserving memory up front). This is because the garbage collector tends to have preallocated memory lying around, while RAII code needs to call malloc and free every time a heap object is created or destroyed.
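To illustrate the reservation point (a minimal Rust sketch with made-up sizes):

```
use std::collections::HashMap;

fn main() {
    // Naive: starts tiny and reallocates + rehashes repeatedly as it grows.
    let mut naive: HashMap<u32, u32> = HashMap::new();

    // Reserving up front pays for one allocation instead of many.
    let mut reserved: HashMap<u32, u32> = HashMap::with_capacity(1_000_000);

    for i in 0..1_000_000 {
        naive.insert(i, i);
        reserved.insert(i, i);
    }
}
```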

Another factor could also be that the GC in the Go examples just never runs since it doesn't allocate enough memory. It's hard to say exactly what's happening without tracing the runtime, which would also affect the benchmarks a bit.

It's important to note that while both languages are natively compiled (and I suspect Rust would inch out in that category due to LLVM's backing) most of the overhead would probably be from memory, whether allocations, or cache usage, which makes comparing them in microbenchmarks a little inaccurate.


> This is because the garbage collector tends to have preallocated memory lying around,

This is an allocator feature independent of garbage collection; jemalloc does this for example. GCd languages have the potential to do this better since they can do additional analysis whilst tracing for little additional cost, but non-GCd languages can still benefit from this.


Part of this is also that Go is really good at not allocating from the heap when it can allocate from the stack (escape analysis). Until Go 1.5 the Go garbage collector was pretty weak, but it didn't matter as much as it would have for a language with heavier heap allocation.


Rust can do custom hashes as well, to be clear. There's been some arguments over what hashes are allowed in the game in the past.
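For example, plugging a custom hasher into the standard HashMap is only a few lines (a minimal sketch; this XOR-shift hasher mimics the C macro quoted elsewhere in the thread and is deliberately not collision-resistant):

```
use std::collections::HashMap;
use std::hash::{BuildHasherDefault, Hasher};

#[derive(Default)]
struct XorShiftHasher(u64);

impl Hasher for XorShiftHasher {
    fn write(&mut self, bytes: &[u8]) {
        // Toy mixing step, analogous to `(key) ^ (key) >> 7`.
        for &b in bytes {
            self.0 = (self.0 ^ u64::from(b)) ^ (self.0 >> 7);
        }
    }

    fn finish(&self) -> u64 {
        self.0
    }
}

fn main() {
    let mut map: HashMap<u64, u32, BuildHasherDefault<XorShiftHasher>> =
        HashMap::default();
    map.insert(42, 1);
    println!("{:?}", map.get(&42));
}
```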


To be clear: anyone can contribute a Rust k-nucleotide program that uses a custom hash function, just like those used by other programming languages.


Weren't there some issues around a "custom library" for this? I distinctly remember there being some kind of argument about what is legal and what isn't.

That is, I think I'm thinking of this:

> k-nucleotide will explicitly require built-in / library HashMap.

https://alioth.debian.org/tracker/?func=detail&group_id=1008...

Since C doesn't have a standard library hashmap, you can write an entirely custom one just for the benchmark. But since Rust has a built-in hashmap in the standard library, you cannot. Even though you could write Rust code that's the same as the custom C hashmap.


The rules provided aren't as clear as they could be:

http://benchmarksgame.alioth.debian.org/u64q/knucleotide-des...

But they say 'don't write a custom hash table', not 'don't write a custom hash function'. Maybe the problem is that the data in this benchmark is just not a good way to exercise the hash tables in a given language the way the benchmark intended. That probably means the benchmark should be modified; the complaint that some implementations use 'laughably bad' hash functions that seem to be measurably decent hash functions for the data at hand seems really strange.


To be clear: anyone can contribute a Rust k-nucleotide program that uses a custom hash function, just like those used by other programming languages.

> Since C doesn't have a standard library hashmap, you can write an entirely custom one just for the benchmark.

NOT TRUE!

#315195 states the opposite!

"k-nucleotide will explicitly require built-in / library HashMap"


The "library" distinction you're drawing here is arbitrary. Or at least, my understanding of it is.

Am I allowed to use a custom HashMap for the game, or am I required to use the one in std::collections? My understanding is that it's the latter, and so that penalizes Rust or any other language that includes a map in their standard library.


Just like everyone else Rust advocates can contribute programs that use a custom Hash function with a library Hash map.

In fact, the current Rust k-nucleotide DOES use a custom Hash function with a library Hash map.

No one is allowed to -- in your words -- "write an entirely custom [hashmap] just for the benchmark".

(This was all discussed to death, in early December on https://www.reddit.com/r/rust/ ).


You didn't answer my question. Does "library" here mean that you are able to use a hashmap other than the one in std::collections, or not?


When someone fully re-implements khash in Rust and publishes the library, that will become a question which merits consideration.

Meanwhile, from the task description: Please don't implement your own custom "hash table" - it will not be accepted.


So, the answer is "no, you are not allowed." But you're open to changing that. Got it.


Can't use a library that doesn't exist.

If you want an additional Rust hashmap implementation then you need to persuade the Rust community to provide one in std::collections.

No one is allowed to -- in your words -- "write an entirely custom [hashmap] just for the benchmark".


> If you want an additional Rust hashmap implementation then you need to persuade the Rust community to provide one in std::collections.

Again, this means that for Rust, it _has_ to be in the standard library. But C doesn't have one in the standard library. So C gets to write their own, and Rust doesn't.


If it compiles and runs, it is allowed. There should be no rule to the benchmark other than achieving the same end result with the best performance.

I still can't get over all the moralizing about this.

The best thing here for Rust would be to achieve the best performance on this game and nothing else.

To me this game is important, and winning it matters even more. So, if Rust doesn't get better results because of "morals", that is so wrong it hurts. But each time I read these responses, I ask myself whether there isn't a real problem there.

I like Rust, but performance is decisive. It is really sad to see Rust keep sliding down in this game, and that makes me question myself when trying to invest time in Rust.


It isn't a "moral" issue. It is about the usefulness of the benchmarks. If you're using them to make a recommendation about which programming language to choose, it is important to have a sense for how well the results generalize to the kind of programs you will be writing. Using unrealistic hacks targeted specifically to each benchmark is contra that goal.

If, on the other hand, you're just treating the benchmarks as a fun competition akin to code golf, then by all means, hacks ahoy!


Benchmark GAME :)



> The best thing here for Rust would be to achieve the best performance on this game and nothing else.

That's absurd. The benchmarks game doesn't measure real-world performance in any sense whatsoever. And many of the implementations are written in ways that you'd never ever put into production code (for example, the laughably bad hashing functions that are mentioned elsewhere in this thread). From what I understand, the Rust implementations tend to not get up to those shenanigans, which makes them appear worse than the implementations from other languages that do.

It's not the goal of a programming language to be the top on a silly benchmarks game. The goal is to be an excellent language for real-world use.


The Benchmarks Game has more rules than that. It's not about "morals."


But people are criticizing the C because the way it is implemented is not "realistic", or because the "hash" is a hack that would never work in real life.

I don't know the background of these people, but they couldn't be more wrong. So that looks like "morals", or maybe they've just never seen real C code out there.


You're making an assumption that the C code is allowed to have the same algorithm as the Rust code. This is not actually exactly true, based on the rules of the game.


Where is it forbidden for programs written in C to use FNV?

For example https://github.com/haipome/fnv/blob/master/fnv.c


I'm referring to my reply to you above about not being able to implement a different map, not the hash function itself.


Which ones do you have in mind? Preprocessor sounds a little cheaty but picking a hash function that's a better fit for the data is a pretty basic, practical sort of optimization.


I think he's talking about the C code here, right at the top

http://benchmarksgame.alioth.debian.org/u64q/program.php?tes...


If that's the one it's a custom hash function which is then inlined as a macro. That still seems like a pretty vanilla C optimization that one might write in real C code.


The problem isn't that it's a macro. It's that it's a horrifically bad hash function. It's blazing fast, but completely unsuitable for use in any real code.


If it's a hash function that only works for this exact data set, I can see the argument. If it's a hash function that works for this kind of data (these particularly formatted, particularly constrained strings), it's fair play. Which one is it? 'It's not a good general purpose hash function' alone doesn't seem like a valid criticism, especially along with 'the Rust version uses a general purpose hash function'. Nobody said you had to use a general purpose hash function.

Imagine you got a million strings of variable length such that the last 4 bytes are guaranteed to be a unique value between 0 and 999999. Taking just the last four bytes is a perfectly good hash function for that kind of data.


Sure, but it's not a good benchmark. If you can demonstrate that you have blazing fast performance, but only for the exact input data that the benchmark uses, then you haven't demonstrated anything at all beyond an ability to tailor your code to a very specific input. But in the real world, we don't get to write our code for a very specific input (after all, if we knew what the input was ahead of time, we could just pre-calculate the answers instead of publishing the program).

So yeah, if you can come up with a tailored hash algorithm that works correctly and is DoS-immune for all possible input for the program, go ahead and use that tailored hash algorithm. But if your hash algorithm is only correct for the specific input data the benchmark game uses, and would not be collision-resistant under real world conditions, then you really shouldn't use that hash as it does not demonstrate anything useful.


Well, here's the thing. I don't think it's for the exact input, it's for a type of inputs. Custom hash functions for specific types of data are a basic optimization technique and I find it odd you'd even suggest every hash function should be 'DoS-immune'. There's absolutely nothing 'real world' about this unless you think the world consists entirely of hostile inputs. In the real world, people absolutely optimize.

Your argument seems to be that that's not the intent of the benchmark which may be true but it's not clear from the rules provided at all. To me, it looks like the opposite is true - they talk about using a standard hash table and most of those allow user-specified hash functions.


Rust's default hashmap algorithm is DoS-immune for any kind of input and promotes memory safety, which is a perfectly logical default for a language intended to be used in security-critical areas like operating systems, system libraries, web browsers, etc.


That's really great but I don't see how it's related unless your argument is truly 'custom hash functions are bad', in which case I don't really know what else to tell you beside 'that's completely wrong'.


Yeah, of course, writing simple programs and checking their times makes a good benchmark... oh wait. Real-world performance in bigger programs is mostly different, especially when you deal with big heaps.

Btw, this site is extremely bad for benchmarks, since it also measures the startup time of the runtime in Java/Go/Rust.



Rust doesn't have any significant runtime startup cost, but it certainly is an issue for Java (and presumably Go as well).


It is an issue for Java programs that complete in a few tenths of a second. So these benchmarks do more work than that.


While you're checking out the performance of Rust and Go benchmark implementations, you can also check out some SaferCPlusPlus benchmarks[1] (and kind of compare them with other languages, transitively via the C++ benchmarks). I've already suggested that these days a "memory safe" implementation category for the benchmarks would be of interest. But apparently not to the current maintainer of the benchmarks. Anyone else out there got nothing better to do than maintain a benchmark site for memory safe implementations? :)

[1] https://github.com/duneroadrunner/SaferCPlusPlus-BenchmarksG...


The binary trees benchmark forks off a large number of threads. Those are OS threads in Rust, and that is probably an inefficient way to compute something if you have more threads than CPUs.


This site explicitly mentions that the results of the benchmarks mean almost nothing. Google published results showing that Go is slower than Java in real applications.


> This site explicitly mentions that the results of the benchmarks mean almost nothing.

Quote?


The ownership system isn't strictly about memory management; it can also make it easier to catch yourself making larger architectural errors, and you can have more confidence in a refactor with Rust than with Go. In fact, I'd argue Rust leads to simpler architectures that fit well into the ownership model, as opposed to the "ad-hoc" architectures programs written in other languages seem to invariably turn into.


I've literally just started learning Rust after following it for a few years. I wanted a language that was type-safe and produced binaries to simplify deployment. I chose Rust over Go because I wanted a functional language with generics. Go's repetitiveness regarding error handling just put me off.

I've tried learning C/C++ several times, but I just don't have the inclination to bother with null-terminating strings, etc. in 2016. I don't mind spending a little more time getting something to compile if it prevents silly mistakes.

Having said all that, it's obviously too early for me to say whether I like Rust. I'm picking it up pretty quickly since I know FP thanks to Scala, but I'll see how much time I spend fighting the borrow checker.


4th law of HN: Whenever Rust is brought up, Go inevitably follows, and vice versa.


It's kind of a shame, because I don't really feel like the languages are used for similar things in practice, so the constant comparisons don't do either of them justice.


I was writing software in Go for a year before I switched to Rust. I've not felt a need to touch Go since. Basically, anything you can do in Go, you can also do in Rust, but Rust will let you do it with higher efficiency and with significantly fewer lines of code. In the end, it's just easier to write software with Rust than it is with Go.

Feature-wise, Rust has generics and functional programming via higher-order functions and iterators, which is something Go especially lacks. Go doesn't have nice concepts like `Option`, `Result`, or `Iterator`. That's not something I'd personally want to live without today. The Go method is effectively writing boilerplate code everywhere, which leaves much room for error-prone implementations that require more testing.
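For instance, parsing a list of numbers and summing the valid ones is a short chain with those types (a contrived sketch):

```
fn main() {
    let inputs = ["1", "2", "oops", "4"];

    // parse() returns a Result; .ok() converts it to an Option so that
    // filter_map() can keep the successes and drop the failures --
    // no `if err != nil` block at every step.
    let total: i32 = inputs
        .iter()
        .filter_map(|s| s.parse::<i32>().ok())
        .sum();

    println!("{}", total); // prints 7
}
```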

I haven't felt that Rust was more complex than Go, at least when you're actually writing software in Rust. Rust libraries feature semantic versioning and are automatically downloaded and verified at build time based on the contents of your `Cargo.toml` and `Cargo.lock` files. No direct importing of Git repositories required. Go does not provide an equivalent on that front.

There are a lot of great libraries out there that bring you extreme performance simply, such as the bytecount crate, which is just a library featuring a single function: one that counts the occurrences of a specific byte, 32 bytes at a time with AVX, with additional SSE/SIMD implementations depending on what the processor supports.

All there is to truly know about Rust is the borrowing and ownership mechanism and how to implement a custom `Iterator`. If you have a solid understanding of both then you've pretty much mastered all you need to know about Rust.

The borrowing and ownership mechanism can be simplified down to:

- Passing a variable by value will move ownership, dropping the original variable from memory.
- Passing a variable by mutable reference will keep the original variable, but allow you to modify it.
- You may only borrow a variable mutably once at a time, and you may not immutably borrow while mutably borrowing.
- You may have as many immutable borrows as you want, so long as you aren't modifying that value.
- You may mutably borrow a field in a struct, and then mutably borrow a different field in the same struct simultaneously, so long as you aren't also mutably borrowing the overall struct.
- You can use `Cell` and `RefCell` to allow for mutably modifying an immutable field in a struct.
- You may mutably borrow multiple slices from the same array simultaneously, so long as there is no overlap.
- Safe memory practice means that instead of mutably borrowing the same variable in multiple places, you queue the changes in a separate location and apply them serially, one after another.
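A couple of those rules in action (a minimal sketch):

```
fn main() {
    let mut data = vec![1, 2, 3];

    let a = &data[0];
    let b = &data[1];        // any number of simultaneous immutable borrows is fine
    println!("{} {}", a, b); // last use of the immutable borrows

    let m = &mut data;       // only one mutable borrow at a time,
    m.push(4);               // and it may not overlap with any immutable borrow
}
```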

Then for the `Iterator` trait, you would know that all traits have required methods, whereby as long as you implement the required methods for your type, you will automatically gain all of the other methods associated with the trait. For the `Iterator` type, you only need to implement the `next` method, and that looks something like so:

```
struct DataIterator<'a> {
    data: &'a [u8],
    index: usize,
}

enum Token<'a> {
    One(&'a [u8]),
    Two(&'a [u8]),
}

impl<'a> Iterator for DataIterator<'a> {
    type Item = Token<'a>;

    fn next(&mut self) -> Option<Token<'a>> {
        let start = self.index;
        for element in self.data.iter().skip(self.index) {
            self.index += 1;
            // When the next delimiter is found, yield the span seen so far
            // (splitting on spaces here is just a stand-in condition).
            if *element == b' ' {
                return Some(Token::One(&self.data[start..self.index - 1]));
            }
        }
        None
    }
}
```




http://blog.burntsushi.net/ripgrep/

There are only seven instances of unsafe: https://github.com/BurntSushi/ripgrep/search?q=unsafe&type=C...

Four of them are related to calling libc/kernel32 functions, which need unsafe to be called. One is due to using a memory map, which needs unsafe to be called. Only two are actual unsafe rust functions.

That's a "real-world" example though, you're asking for a more specific implementation. I can't compare because my Go would be very poor; if you posted a Go implementation, I'd be willing to give this a shot and write a Rust one.

(Mostly out of personal interest, I don't think it really proves anything larger about the two languages.)




None of the unsafe code that Steve linked to had anything to do with searching a file line by line. It has to do with other parts of ripgrep, like determining whether a tty is available or communicating with a Windows console to do coloring. (Hell, ripgrep doesn't even require memory maps, but they are faster in some cases.)

Your benchmark proposal is interesting on the surface, but if done correctly, its speed will come down to highly optimized SIMD routines. For example, in Go, the code to search for a single byte on amd64 is written in Assembly (as it is in glibc too). This means it's probably not a good indicator of language performance overall.


The thing is, I don't really care if the Go implementation is highly optimized for amd64, like memchr is in C (also written in assembler and optimized for different platforms). What I care about is that simple code written by me is faster, without going into C/unsafe code myself. So it's correct, fast, and simple, and I don't pay with my time to figure out how to make it as fast in Rust. That is the point I am making. Of course this is only one example, but it's still proof that what the OP wrote is not valid in all cases.


ripgrep is your proof. Go read the blog post Steve linked. If you still don't believe it, read the code. There's not that much of it.

Then compare it with similar tools written in Go, like sift and the platinum searcher.

The problem is, searching a file quickly is not as simple as you want to believe. If you show me a naive line-by-line approach, I'll show you an approach that is much faster but more complex. Top speed requires work, sometimes algorithmic work, regardless of the language you choose.


> If you show me a naive line-by-line approach, I'll show you an approach that is much faster but more complex.

Of course; I've done it myself many times. But this is NOT the point, as I've already written. The point is that I wrote a naive approach in both languages and it's a lot faster in Go, which is a reply to what the OP wrote (don't forget where this discussion started). In this case this is the fact, and I don't see any reason to fight facts if we take bias out of the equation. In other cases? I don't know. What I would expect is that naively searching for one string in a haystack would be faster than Go in a language that is performance/zero-cost-abstractions oriented like Rust, but it's false in this case. That doesn't mean it's false in every case. And to be honest, writing "but they have a faster implementation in assembler" is not an excuse at all; you can also write your own, especially if it works so well for those languages that have custom asm for specific platforms. In the end the average Joe will not care whether it's hand-written assembler; he will care that his naive solution using the standard library without any magic is just faster.


> The point is that I wrote a naive approach in both languages and it's a lot faster in Go.

I tried your challenge, and the first data point I uncovered contradicts this. Here is the source code of both programs: https://gist.github.com/anonymous/f01fc324ba8cccd690551caa43... --- The Rust program doesn't use unsafe, doesn't explicitly use C code, is shorter than the Go program, faster in terms of CPU time and uses less memory. I ran the following:

    $ /usr/bin/time -v ./lossolo-go /tmp/OpenSubtitles2016.raw.sample.en the
    $ /usr/bin/time -v ./target/release/lossolo-rust /tmp/OpenSubtitles2016.raw.sample.en the
Both runs report 6,123,710 matching lines (out of 32,722,372 total lines). The corpus is ~1GB and can be downloaded here (266 MB compressed): http://burntsushi.net/stuff/OpenSubtitles2016.raw.sample.en.... --- My /tmp is a ramdisk, so the file is in cache and I'm therefore not benchmarking disk reads. My CPU is an Intel i7-6900K.

The Go program takes ~6.5 seconds and has a maximum heap usage of 7.7 MB. The Rust program takes ~4.2 seconds and has a maximum heap usage of 6 MB. (As measured by GNU time using `time -v`.)

---

IMO, both programs reflect "naive" solutions. The point of me doing this exercise is to show just how silly this is, because now we're going to optimize these programs, but we'll limit ourselves to smallish perturbations in order to put a reasonable bound on the task.

If I run the Go program through `perf record`, the top hotspot is runtime.mallocgc. Now, I happen to know from experience that Scanner.Text is going to allocate a new string while Scanner.Bytes will not. I also happen to know that the Go standard library `bytes` package recently got a nice optimization that makes bytes.Contains as fast as strings.Contains: https://github.com/golang/go/commit/44f1854c9dc82d8dba415ef1... --- Since reading into a Go `string` doesn't actually do any UTF-8 validation, we don't lose anything by switching to using raw bytes.

Knowing this, we can tweak the Go program to great effect: https://gist.github.com/anonymous/c98dc8f6be6d414ae3e7aa6931... --- Running the same command as above, we now get a time of ~2.3 seconds and a maximum heap usage of 1.6 MB. That's impressive.

Now let's see if we can tweak Rust, which is now twice as slow as the Go program. Running perf, it looks like there's an even split between allocation, searching and UTF-8 validation, with a bit more towards searching. Like the Go program, let's attack allocation. In this case, I happen to know that the `lines` method returns an iterator that yields `String` values, which implies that it's allocating a fresh `String` for every line, just like our Go program was. Can we get rid of that? The BufReader API provides a `read_line` method, which permits the caller to control the `String` allocation. If we use that, our Rust program is tweaked to this: https://gist.github.com/anonymous/a6cf1aa51bf8e26e9dda4c50b0... --- It's not quite as symmetrical as a change as we made to the Go program, but it's pretty straight-forward IMO. Running the same command as above, we now get a time of ~3.3 seconds and a maximum heap usage of 6 MB.
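The shape of that change is roughly the following (a sketch with an assumed file path and query; the gist above has the exact program):

```
use std::fs::File;
use std::io::{BufRead, BufReader};

fn main() -> std::io::Result<()> {
    let mut rdr = BufReader::new(File::open("/tmp/OpenSubtitles2016.raw.sample.en")?);
    let mut line = String::new(); // one buffer, reused for every line
    let mut count = 0u64;

    while rdr.read_line(&mut line)? > 0 {
        if line.contains("the") {
            count += 1;
        }
        line.clear(); // reuse the allocation instead of building a new String
    }
    println!("{}", count);
    Ok(())
}
```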

OK, so we're still slower than the Go program. Looking at the profile again, the time now seems split completely between searching and UTF-8 validation. The allocation doesn't show up at all any more.

Is this where you got stuck? The next step from here isn't straight-forward because getting rid of the UTF-8 validation isn't possible to do safely while still using the String/&str search APIs. Notably, Rust's standard library doesn't provide a way to search an `&[u8]` directly using optimized substring search routines. Even if you knew your input was valid UTF-8 before hand, there's no obvious place to insert an unsafe `from_utf8_unchecked` because the BufReader itself is in control of producing the string contents. (You could do this by switching to using `BufReader.read_until` and then transmuting the result into an &str, but that would require unsafe.)

Let's take a leap. Rust's regex library has a little known feature that it can actually search the contents of an &[u8]. Rust's regex library isn't part of the standard library, but it is maintained as an official crate by the Rust project. If you know all of this, then it's possible to tweak the Rust program just a bit more to regain the speed lost by UTF-8 checking: https://gist.github.com/anonymous/bfa42d4f86e03695f3c880aace... --- Running the same command as above once again, we now get a time of ~2.1 seconds and a maximum heap usage of 6.5 MB.
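The relevant bit is that `regex::bytes::Regex` matches on `&[u8]` directly (a sketch using the regex crate; `count_matching` is just an invented helper, and the gist has the real program):

```
use regex::bytes::Regex; // the bytes API, not regex::Regex

fn count_matching(lines: &[Vec<u8>], pattern: &str) -> usize {
    // Searches raw bytes, so no UTF-8 validation is ever performed.
    let re = Regex::new(pattern).unwrap();
    lines.iter().filter(|line| re.is_match(line)).count()
}
```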

In sum, we've beaten Go in CPU time, but lost the Battle for Memory and the battle for obviousness. Beating Go required noticing the `read_until` API of BufReader and knowing that 1) Rust's regexes are fast and 2) they can search &[u8] directly. It's not entirely unreasonable, but to be fair, I've done this without explicitly using any unsafe or any C code.

None of this process was rocket science. Both the Go and Rust programs were initially significantly sub-optimal because of allocation, but after some light profiling, it was possible to speed up both programs quite a bit.

---

Compared to the naive solution, some of our search tools can be a lot faster. Performing the same query on the same corpus:

    ripgrep    1.13 seconds, 7.7 MB
    ripgrep    1.35 seconds, mmap
    GNU grep   1.73 seconds, 2.3 MB
    ag         1.80 seconds, mmap
    pt         6.41 seconds, mmap
    sift      50.21 seconds, 16.6 MB
The differences between real search tools and our naive solution actually aren't that big here. The reason why is because of your initial requirement that the query match lots of lines. Lots of matches results in a lot of overhead. If we change the query to a more common type of search that produces very few matches (e.g., `Sherlock Holmes`), then our best naive programs drop down to about ~1.4 seconds, but ripgrep drops to about 200 milliseconds.

From here, the next step would be stop parsing lines and start searching the entire buffer directly. (I hope to make even this task very easy by moving some of the searching code inside of ripgrep to an easy to use library.)

---

In sum, your litmus test essentially comes down to these trade offs:

- Rust provides a rich API for its String/&str types, which are guaranteed to be valid UTF-8.

- Rust lacks a rich substring search API in the standard library for Vec<u8>/&[u8] types. Because of this, efficient substring search using only the standard library has an unavoidable UTF-8 validation cost in safe code.

- Go doesn't do any kind of UTF-8 checking and provides mirrored substring search APIs between its `bytes` and `strings` packages.

- The actual performance of searching in both programs probably boils down to optimized SIMD algorithms. Therefore, once you get past the ability to search each line of a file with minimal allocation, you've basically hit a wall that's probably the same in most mainstream languages.

In my opinion, these trade offs strike me as something terribly specific, and it's probably not something that is usefully generalizable. More than that, in the naive case, Rust is doing you a good service by checking that your input is valid UTF-8, which is something that Go doesn't do. I think this could go either way, but I think it's uncontroversial that guaranteeing valid UTF-8 up front like this probably eliminates a few possibly subtle bugs. (I will say that my experience with text encoding in Go has been stellar though.)

Most importantly, both languages at least have a path to writing a very fast program, which is often what most folks end up caring about at the end of the day.


... whoa that's a comprehensive comment.

Do you think you could refactor out bytestring-based string manipulation into its own library? Even better would be something that worked for all encodings (using https://github.com/servo/tendril or something)


> Do you think you could refactor out bytestring-based string manipulation into its own library?

IIRC, someone was working on making the Pattern trait work on &[u8], but I'm guessing that work is stalled.

To factor it out into a separate crate means copying the &str substring routines, since there's no way to safely use them on an &[u8] from the standard library. (bluss did that in the `twoway` crate, so we could just use that.)

It does seem like a plausible thing to do, at least until std gets better &[u8] support.

> Even better would be something that worked for all encodings

I suspect the standard practice here is something like "transcode to UTF-8 and then search the UTF-8." (This is what I hope to do with ripgrep.)

> (using https://github.com/servo/tendril or something)

I don't think I know what problems tendril is solving, so it's not clear to me what it's role is.


Whoa, amazing comment. I'm usually just a silent reader on Hacker News, but this comment urged me to create an account. I think lossolo just wants a flamewar. He already has an opinion which you cannot easily change. So any further discussion after this comment will be pointless.

EDIT: and how do you have time to write this? I usually just close the browser tab when this situation occurs...


Thanks! I like to think I usually close the browser tab, but text search is just kinda my thing. :-)


> I think lossolo just wants a flamewar. He already has an opinion which you cannot easily change. So any further discussion after this comment will be pointless.

My opinion in this case is empirically checked. I am not saying this to start a flame war; I am just sharing my observations on a particular example. I could likewise say that Rust's regex implementation beats Go's regex implementation by orders of magnitude (performance-wise), and that would also be true, and I would not be looking for a flame war there either. I am only sharing my experience, which I've backed up with proof (code + perf results), for this particular use case. This is a factual discussion; I don't agree that it's pointless.


Best example of evidence-based rebuttal I've seen in a language argument on here in a long time. Great write-up!


Amazing! Thank you for your comment.


I get different times for both naive solutions on my machine than you do.

It's 2.6 seconds for Go and 3.5 seconds for Rust; both perf results and code are here: http://pastebin.com/WwhvHH6S


Your Rust program corresponds to my second Rust program.

Your Go program is not what I would expect. A bufio.Scanner is the idiomatic (and naive) way to read lines in Go. But this is immaterial. Your results are consistent with mine. They aren't different. I just included more analysis and more programs to provide proper context to this challenge of yours.


I gave you a naive solution; now let's see an amateurish solution (from someone totally new to both languages).

Rust 4.6s

Go 3.1s

http://pastebin.com/r6K22Dt2

EDIT:

Using grep 0.6s

Then I installed ripgrep and...

Using ripgrep 0.4s

Really nice, burntsushi. I am surprised by those (ripgrep) results compared to grep.


Seems I can't reply to your other comment, so I'll reply here. How can you say that my naive implementation is not naive? What is not naive about it? It's very naive. It's basically the same naive code that you were writing, but in actual idiomatic Rust with the linting issues fixed.

Using a `lines()` approach is naive because that allocates owned heap-allocated strings. An optimal, non-naive solution would not use heap allocation for buffering lines but use a stack-allocated array. That alone would bring significant speedup versus the line approach.

As for ripgrep, it's a pretty comprehensive application that makes use of SIMD/AVX so it's only natural that it's fast.


Your solution is not naive, in my opinion, because you set the size of the buffer and use map/filter, but OK... let's check. Your solution is the slowest of all the solutions.

It took 4.6s, which only confirms what I wrote at the beginning, when we started this discussion. Perf counters for your solution are here:

http://pastebin.com/Anak1ahe


Your Rust example is kind of odd. Did you not see the linting errors in your editor or from the compiler? You can basically boil it down to just two lines of Rust.

https://gist.github.com/mmstick/a8316ba0514f9d9ab33b18fa9b91...

As for timing, I'm doubtful that Go is any faster than Rust. I don't have this www.js file so I can't test it on my laptop, but I'm pretty sure you didn't even attempt to do things like enabling LTO, setting O3, disabling jemalloc, and using the musl target. All these things can make significant differences to the performance of the binary that's produced.


I don't know if you noticed, but we are discussing naive solutions, which I've mentioned a couple of times in previous posts. The code you linked is not a naive solution. The things you proposed are not naive either.


Here's the thing; it is a naive solution. You may not want to accept it because that contradicts your claim of the Go solution being faster, but at that point it becomes a he-said/she-said kind of scenario because I can claim that your Go solution isn't naive as well and have exactly the same "validity" for such a claim as you do.


Anyway, this really doesn't matter, as his solution is the slowest of all, which confirms what I wrote and contradicts what he wrote.

For me it's not a naive solution. Do you have any proof that it is? Can you mathematically prove that it's a naive solution? I know I can't. "Naive solution" means something different to everyone; what I saw in burntsushi's reply and what I wrote myself are the closest to what I think naive solutions are.

> like enabling LTO, setting O3, disabling jemalloc, and using the musl target.

And this is for sure not part of a naive solution either.

> You may not want to accept it because that contradicts your claim of the Go solution being faster

This is not true. You can find the perf numbers for his solution in my second reply to him. Or you can compare those solutions yourself.


> For me it's not a naive solution. Do you have any proof that it is? Can you mathematically prove that it's a naive solution?

You can't prove a negative.


> but it's still proof that what the OP wrote is not valid in all cases.

You are not proving anything until you post your Go code, which you don't seem to have done (please correct me if I'm wrong, I am legitimately curious to see for myself how Go and Rust stack up against each other for this problem). Until then, all you're doing is making vague claims backed up by precisely zero evidence. Why should anyone take you seriously?


I have posted the code in reply to burntsushi. You can check it out yourself.


I've seen several of your comments indicate "requiring naive implementation". This seems strange to me. Why require a naive implementation and then be concerned over some slight differences in performance?


Yes, and I was saying "even in production-grade, world-class grep implementation, there is barely any unsafe." I also acknowledged that that was different than what you were asking about.

When did you ask on IRC? I'll take a look.


> I told you NO unsafe code. Use only standard library, no third party tools.

I'm not sure what such an example would prove, other than maybe "Go is better than a subset of Rust excluding some of the core features of the language and its greater ecosystem".


> no third party tools

This is a tricky requirement. The standard libraries of both languages use a crap ton of unsafe code. You might end up just asking for a comparison of which standard library is bigger.

But at the end of the day, both languages have perfectly standard buffered readers (https://doc.rust-lang.org/std/io/struct.BufReader.html); shouldn't a simple search like this compile to the exact same code?


As someone who writes Go full time, once I'm done with my current big Go project I will be taking a break to investigate alternatives. Both Rust and Swift are at the top of my list.

Go is good, even great, at many things. But it's a language largely defined by its limitations, usually intentionally. It's an engineering language, not made for big abstractions. For me, the largest frustration is that the language gets in the way, and the pain increases with the scale of the problem. Which is to say: I think Go scales to large projects just fine, but there are problems where you'd like to build big building blocks on top of smaller blocks on top of smaller blocks, and Go doesn't lend itself to certain kinds of big, composable, data-oriented abstractions. It's small building blocks all the way.

I've bumped into several very real problems recently where Go's coarse, not-very-data-oriented imperative approach has revealed itself as a liability, and where I found myself fantasizing how I could have done it in just a few elegant lines in Haskell. Sometimes they're about expressing things simply in a composable manner, and sometimes these problems simply manifest themselves in immense blobs of boilerplate/repetition (for example, because you have to implement the same method a few dozen times on different data structures, which in a different language could be solved with a generic implementation), where Go's solution is to either eschew type safety, use slow reflection APIs, or programmatically generate the Go code as part of the build process.
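For contrast, the "generic implementation" version of that dozen-times-repeated method is a single definition in, say, Rust (an invented example):

```
// One implementation that works for any ordered, copyable element type,
// instead of a near-identical copy per concrete data structure.
fn largest<T: PartialOrd + Copy>(items: &[T]) -> Option<T> {
    let mut best = *items.first()?;
    for &item in &items[1..] {
        if item > best {
            best = item;
        }
    }
    Some(best)
}

fn main() {
    println!("{:?}", largest(&[3, 7, 2]));  // Some(7)
    println!("{:?}", largest(&[0.5, 1.5])); // Some(1.5)
}
```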

Go is also frustrating in its selective pragmatism. Where Go has chosen to automate and sugarcoat some complicated things (memory management, goroutines), it's stubbornly unpragmatic about others (error handling, working with polymorphic data, memory safety). Go has been ridiculed for its simplistic error system, but I'm not an extremist here; I'm all for errors being values, and not a fan of exceptions. But if you look at actual Go code, a huge amount of it has to interact with errors. When nearly every function is riddled with "if err != nil", you should know that your language is crying out for just a little syntactic sugar. Or a solid type-system solution, for that matter. Enums (Rust-style) and pattern matching wouldn't go against Go's grain at all, but since Go is "done", we're stuck with how it is.
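For reference, Rust's `?` operator is exactly that kind of sugar (a sketch; the function and file path are invented):

```
use std::error::Error;
use std::fs;

fn read_port(path: &str) -> Result<u16, Box<dyn Error>> {
    // Each `?` propagates the error to the caller -- replacing what would
    // be an `if err != nil { return ..., err }` block per call in Go.
    let text = fs::read_to_string(path)?;
    let port = text.trim().parse::<u16>()?;
    Ok(port)
}

fn main() {
    match read_port("/etc/myapp/port") {
        Ok(port) => println!("port = {}", port),
        Err(e) => eprintln!("error: {}", e),
    }
}
```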

I think Go's focus on simplicity is very important (I'm a big fan of the Wirth school of languages), and my worry about Rust and Swift is that they never learned this lesson. To me, both Rust and Swift looked more promising early in the design process than in their current state; Swift looks increasingly like Scala every time I visit it, whereas Rust often feels lost in a sea of punctuation. That said, my annoyance with Go is acute enough that I'm willing to deal with a few downsides if I can get a language that better matches the kinds of projects that I build.


> my worry about Rust and Swift is that they never learned this lesson

FWIW the Rust team does consider simplicity to be important. However, it's not the only goal, which means that sometimes it has to be sacrificed to be able to make something possible. But most new language features do get discussed in the context of a "complexity budget".

So Rust tries pretty hard to ensure it doesn't get more complex than it has to; but it doesn't put simplicity as the overriding goal and hang all else to get it.

> Enums and pattern matching wouldn't go against Go's grain at all

I've said this many times before -- I don't miss generics from Go; I understand why they're not in the language. I miss ADTs and pattern matching.


We need some syntax-blurring goggles to judge new languages, it seems.


Rust is not more complex; it just takes some more time to get used to.


I'm learning Rust. I considered Go, but feature-wise, compared to Rust, it seems really boring and plain. IMHO, a modern language has to have a functional flavor.


Go's GC isn't a big deal for most use cases. However, the loss of static guarantees regarding thread-safe manipulation of arbitrarily complex data structures is a big deal.



