forgotpwd16's comments

And it was added after they rewrote everything for the new GTK version, when there were functional patches adding thumbnails to previous versions. (Which were rejected/ignored because they didn't feel good.) A situation very parallel to Xorg/Wayland if you consider: https://news.ycombinator.com/item?id=46382940.

One of which even has a 100% vibe-coded implementation.

As read in the quote given by the GP, `autofree` partially uses a GC. And it is WIP. (Although it was supposedly production-ready 5+ years ago.)

Reading "Memory safe; No garbage collector, no manual memory management" on the Rue homepage made me think of V for this very reason. Many think it's trivial to do and that Rust has been wrong for 15 years with its "overcomplicated" borrow checking. It isn't.


C# places quite low though (at ~1s). The C# (SIMD) that you see towards the top is more complicated: https://github.com/niklas-heer/speed-comparison/blob/master/.... By your metric the winner is Nim: https://github.com/niklas-heer/speed-comparison/blob/master/... (followed by Julia and D).

The C# non-SIMD (naive, non-optimized) version is in the same ballpark as other similar GC languages. The Nim version is not some naive version either; it seems rather specially crafted so it can be vectorized, and it still loses to C# SIMD.

Loses? My comparison is regarding the GP's metric perf/lines_of_code. Let m := perf/lines_of_code = 1/(t × lines_of_code) [highest is better], or to make the comparison simpler*, m' := 1/m = t × lines_of_code [lowest is better]. Then**:

   Nim          1672
   Julia        3012
   D            3479
   C# (SIMD)    5853
   C#           8919
>Nim version is not some naive version

It's a direct translation of the formula, using `mod` rather than `x = -x`.

*Rather than comparing numbers << 1. **No blank/comment lines, as cloc and similar tools count.
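The metric above is simple enough to sketch in a few lines of Python. (The language names and (time, LOC) pairs below are made-up placeholders for illustration, not measurements from the speed-comparison repo.)

```python
# m' = t * lines_of_code, lower is better (the inverse of perf/lines_of_code).
def m_prime(t_seconds: float, lines_of_code: int) -> float:
    return t_seconds * lines_of_code

# Placeholder (runtime, non-blank/non-comment LOC) pairs, not real data.
entries = {"LangA": (1.2, 40), "LangB": (0.3, 95)}

# Rank languages best-first by the m' metric.
ranked = sorted(entries, key=lambda name: m_prime(*entries[name]))
# LangB (~28.5) beats LangA (~48.0), so ranked[0] == "LangB"
```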


Nim "cheats" in a similar way C and C++ submissions do: -fno-signed-zeros -fno-trapping-math

Although arguably these flags are more reasonable than allowing the use of -march=native.

Also consider the inherent advantage popular languages have: you don't need to break out to a completely niche language while achieving high performance. That said, this microbenchmark is naive and does not showcase realistic bottlenecks applications would face: how well-optimized the standard library and popular frameworks are, whether the compiler deals well with complexity and abstractions, whether there are issues with multi-threaded scaling, etc. You can tell this from the performance of dynamically typed languages: since all data is defined in the scope of a single function, the compiler needs to do very little work, which can hide the true cost of using something like Lua (LuaJIT).


> Nim "cheats" in a similar way C and C++ submissions do: -fno-signed-zeros -fno-trapping-math

I don't see these flags in the Nim compilation config. The only extra option used is "-march=native"[0].

[0] https://github.com/niklas-heer/speed-comparison/blob/9681e8e...



Per the rules[0]: "Use idiomatic code for the language. Compiler optimizations flags are fine."

Agree with the rest of your comment.

[0]: https://github.com/niklas-heer/speed-comparison#rules


>It's not fair to assume the author didn't know how to implement generics before this project

Yeah... what they ended up implementing is not generics. So it's a good thing the LLM doesn't read the link/comments too, or it would probably have written an actual roast.

>It's also not fair to assume the project won't gain traction

Very fair to assume this. Referencing Rust/Zig while disregarding the thousands of other now-abandoned ones is survivorship bias. Most small hobby projects remain small. But, joking about "built [something] nobody will use" aside, if it's in their free time and they enjoy it, does it matter? Is there a need for every hobby project to have a goal of making it big?

>This just goes a little too far for my tastes.

But the "Please star my repo so I can get a job" is fine?


The thing with LLMs is that they'll tell you what a great idea you have and then output a design and tons of code which, if you lack the necessary knowledge, will look coherent and correct. It's good to throw the design/code back in and explicitly prompt them to review it and tell you what is wrong.

So here it says your error handling maps directly to POSIX exit codes. But then: "On success, the function returns a non-zero value."
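(For reference, the POSIX convention is the opposite of that claim: an exit status of 0 signals success. A minimal Python sketch of the mismatch, not tied to the project's actual code:)

```python
import subprocess
import sys

# POSIX convention: a process that succeeds exits with status 0.
status = subprocess.run([sys.executable, "-c", "pass"]).returncode
assert status == 0  # success is zero, not non-zero
# So an API documented as "returns a non-zero value on success" cannot
# map directly onto POSIX exit codes without inverting the value.
```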

For the sh JIT: the slowness isn't due to the language per se but due to launching multiple processes. If performance is really the goal, then you essentially need to replace every process launch with a built-in command. The benchmark is a hallucination unless it can indeed be run. Hypothetical benchmarks with hypothetical results are nonsense. (Unless you have a mathematical model backing them up.)


Quite surprised at Vivaldi. Considering it's Opera's spiritual successor, including any possible feature, I'd have expected it to be one of the first browsers adding AI.

Summarization is using a chosen cloud-based AI provider.

Are you sure? I see a huge spike in CPU when I long-click on a link to see the preview and summary. This is the newest summarization feature, not the older one with the chatbot on the side.

Ah, didn't know they moved to local models. My comment was about the old chatbot-based feature.

>PLoS [...]

At the low cost of $2k~$3k per publication[0]. Elsevier closed-access journals will charge you $0 to publish your paper.

>Elsevier makes over $3 billion dollars with the closed publication model.

Elsevier is also[1] moving to APCs for their journals because it is better business.

>The Institutions often do not supply access to the general public despite the papers being produced with public money

Journals (usually) forbid you from sharing the published (supposedly edited) version of a paper. You're allowed to share the pre-publication draft (see arXiv). Institutions could (and some indeed do) supply those drafts on their own.

>Paying the cost upfront from the grant increases the availability to the public.

At the expense of making research more expensive and hence more exclusive. It's money rather than quality that matters now. Thus it isn't surprising that Frontiers & MDPI, two well-known open-access proponent publishers, are also well known for publishing garbage. It's ironic that it was once said that any journal asking you for money to publish your paper is predatory, yet nowadays somehow this is considered best practice.

[0]: https://plos.org/fees/ [1]: https://www.elsevier.com/open-access


Better business, or are their customers demanding it? PLoS is a non-profit - feel free to look up how much they make. I believe it is public record.

If researchers cannot pay the APC then PLoS often reduces the fee. Also - half of that grant money is used by the institution as administrative overhead. A part of that overhead is paying Elsevier for journal access. If you want to decrease the cost of research, that may be a better place to start.

I agree that volume often tends to result in garbage, but the review is supposed to lessen that. Again, that garbage did get funded somehow.

I am not pushing PLoS - they are simply a publisher I am familiar with that uses this model.


One last post.

The garbage thing is really interesting. I'm going to propose another reason for garbage: Academia's reliance on publication as the primary means of giving promotions and judging people's work. This leads to all kinds of dysfunction.

Was it Nobel Prize winner Peter Higgs who said his university wanted to fire him because he didn't publish frequently enough?


React is an abstraction over UI state, not the platform (i.e. HTML/CSS). This is by design and not parallel to the C#/CLR case. If you want something akin to that, then Flutter is what you should be looking at.

