
This one is basically my daily driver for similar tasks: https://app.heynote.com/ (full app at https://heynote.com/). And it seems to work almost out of the box for your first two examples, once you switch the buffer from "Plain-text" into "Math" mode.

It also supports switching between different buffers and some kind of local storage.


Umm, yes? Not sure about the rest of Europe, but here in Czechia most merchants offer a direct payment method based on a QR code.

Basically: 1) you select your bank at the checkout; 2) you're redirected to the bank's payment page with a QR code; 3) you scan the code with your banking app and confirm it; 4) the bank redirects back to the merchant's page with the payment status confirmed/declined. No payment processor is involved and the money goes straight from your account to the merchant's account.
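The QR code itself typically encodes the payment request in the Czech SPAYD ("Short Payment Descriptor") format, roughly like this (illustrative values only):

    SPD*1.0*ACC:CZ2806000000000168920115*AM:450.00*CC:CZK*MSG:ORDER 12345

Your banking app parses out the account, amount, and message, so all you do is confirm.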


> the money goes straight from your account to the merchant's account.

For the financial system's version of "straight" (2-3 business days, weekends & holidays excluded) ;)


I would say the problem really is there. Dealing with native dependencies and addons has almost always been a pain, as the article describes (and not just from the developer's perspective), so anything that helps there is really appreciated.

Not sure what you mean by "the right tools" in this context.


First of all, this is for C, and most extensions are written in C++ or Rust nowadays.

Secondly, the right tools are having Python, a C and C++ compiler, node-gyp, and cmake.js installed, and actually understanding how they work.
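For context, the node-gyp route means maintaining a binding.gyp manifest next to the native sources. A minimal sketch (addon.cc is a hypothetical source file):

    {
      "targets": [
        {
          "target_name": "addon",
          "sources": ["addon.cc"]
        }
      ]
    }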

But what do I know; nowadays folks use C and C++ as scripting languages, putting a full library into a single header file to avoid learning how to use the compiler and linker.


I know how to compile and link C. I've never done C<->JS FFI but could probably figure it out. But if I have relatively small C code in single files, why bother? I'll take the easy route unless there's a clear reason not to.
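From what I've seen, the easy route looks roughly like this. A sketch based on Bun's bun:ffi cc API (hello.c and its exported int hello(void) are hypothetical):

    import { cc } from "bun:ffi";

    // Compile hello.c on the fly and bind its exported symbol.
    const {
      symbols: { hello },
    } = cc({
      source: "./hello.c",
      symbols: {
        hello: { returns: "int", args: [] },
      },
    });

    console.log(hello()); // calls straight into the compiled C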

The thing is, I'm not already using the Bun runtime and wouldn't switch just for this.


I definitely want to avoid learning how to set up and use the compiler and linker when I just want to use some package.


Knowledge is empowering.


Code testing is a big one for me. I'm currently using in-memory SQLite for tests and I often run into differences between SQLite and Postgres (default values, JSON handling, etc.). This could allow me to use the real thing without running a full Docker instance.
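A minimal sketch of what I have in mind, assuming an in-process WASM Postgres like PGlite (@electric-sql/pglite) and its documented query/exec API:

    import { PGlite } from "@electric-sql/pglite";

    // Real Postgres semantics (defaults, JSON operators, etc.)
    // without a server or a Docker container.
    const db = new PGlite(); // in-memory by default
    await db.exec(`
      CREATE TABLE t (data jsonb);
      INSERT INTO t VALUES ('{"a": 1}');
    `);
    const res = await db.query("SELECT data->>'a' AS a FROM t");
    console.log(res.rows); // [ { a: '1' } ]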


The innermost rendering loop in the JS seems to create and destroy an array instance for every single pixel iteration (see https://github.com/dmaynard/chaos-screen-saver/blob/master/s...). I guess this could potentially be optimized away by the JIT, but otherwise it will make things slower, or at least less predictable.
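The usual fix is to hoist the allocation out and write into a preallocated buffer. A sketch of the pattern (the map below is a placeholder, not the author's actual code):

    // Reused output buffer instead of returning a fresh array per pixel.
    const out = new Float64Array(2);

    function step(x, y, out) {
      out[0] = Math.sin(1.4 * y) - Math.cos(2.3 * x); // placeholder map
      out[1] = Math.sin(2.1 * x) - Math.cos(0.6 * y);
    }

    let x = 0.1, y = 0.1;
    for (let i = 0; i < 1_000_000; i++) {
      step(x, y, out);
      x = out[0];
      y = out[1];
      // ...plot (x, y)...
    }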


That's kind of an underrated aspect of these comparisons: while you absolutely can work around JavaScript's weird performance cliffs and avoid putting pressure on the garbage collector, you have to fight the language at every turn, because so many idiomatic JS patterns are inherently slow or flood the GC. You may find idiomatic JS easier to work with than something like Rust, but Rust is much easier to work with than the narrow and loosely defined subset of JS that you have to stick to for optimal performance. Taken to its limit, you end up more or less writing asm.js by hand.
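Concretely, that subset means typed arrays and integer coercions everywhere. A small sketch of the allocation-free, asm.js-flavoured style:

    // Pixel buffer as one flat typed array; indices forced to int32 with |0.
    const width = 800, height = 600;
    const pixels = new Uint8Array(width * height);

    function incPixel(px, py) {
      const idx = ((py | 0) * width + (px | 0)) | 0;
      if (pixels[idx] < 255) pixels[idx]++;
    }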


The Rust code is 3x as long and a lot more complex, too.

In theory, static typing would correct the biggest performance issues in JS (use monomorphic functions, don't mutate objects, and limit arrays to a single primitive/object type).
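For example, keeping a call site monomorphic means every object passed to it shares one hidden class; property order alone is enough to break that:

    function length(p) {
      return Math.sqrt(p.x * p.x + p.y * p.y);
    }

    length({ x: 1, y: 2 });       // one shape: fast, monomorphic
    length({ y: 2, x: 1 });       // different property order = new hidden class
    length({ x: 1, y: 2, z: 3 }); // third shape: the call site goes polymorphic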

In practice, TypeScript allows and encourages you to create types that are horrendous for performance.

I'd love to see a comparison using AssemblyScript (basically stricter TS for WASM). I'd bet it's nearly the same speed as Rust while still being a third of the size.


The Rust version is longer mostly due to boilerplate for the WASM<->JS interface, and to awful vertically-exploded formatting (probably caused by rustfmt's dumb heuristics).

But the core loop in Rust is pretty straightforward, and it could have been shortened and optimized further.

Also keep in mind that the larger the project, the harder it gets to keep JS in the performance sweet spot without tipping over some JIT heuristic, generating garbage, or accidentally hitting a perf cliff, while Rust's optimizations stay stable and deterministic and its control over memory management holds at any scale.


https://blog.suborbital.dev/assemblyscript-vs-rust-for-your-...

This is a pretty good rundown of expected comparisons, but I doubt there will be any surprises here.


That's a really good point, and in this sense the comparison is actually between Rust and JS rather than between WASM and JS (as others have complained).


That's an interesting point.

It was my understanding that the V8 GC is, frankly, rarely invoked, and that it generally just lets memory pile up quite a lot before collecting, in the hope that it may never have to run during the application's lifetime.


It depends on the application. A short-lived script may complete all of its work before the GC interrupts it, but something that runs continuously can't afford to generate much (if any) garbage in its main loop, because it will inevitably pile up and eventually cause a huge stall once the GC decides that enough is enough. It's especially critical for animated or interactive applications like games, because there the stall manifests as the application freezing completely until the GC is finished.
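In practice that means hoisting every allocation out of the frame loop. A sketch of the scratch-object pattern for a requestAnimationFrame loop:

    // One scratch object reused every frame; the loop allocates nothing,
    // so it produces no garbage for the GC to chase.
    const scratch = { x: 0, y: 0 };

    function update(v, t) { // mutates in place
      v.x = Math.cos(t / 1000);
      v.y = Math.sin(t / 1000);
    }

    function frame(now) {
      update(scratch, now);
      // ...draw scratch.x / scratch.y...
      requestAnimationFrame(frame);
    }
    requestAnimationFrame(frame);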


Last I checked, destructuring created 4-5x as many bytecode instructions and a potential GC pause. I'd think this could be detected and optimized easily enough, but I guess there are bigger problems for the JIT devs to solve.

A quick profile seemed to indicate that just under 10% of the JS time is spent on the DOM rather than on the calculations at hand. I wonder how much of that could be reclaimed simply by running the calculation in a web worker.

I suspect the bitwise AND on every loop iteration is another big performance issue. Normally the JIT would keep the loop iterator as a 31-bit int, but because it is stored in a shared object, I suspect it must do (f64 -> i31 -> AND -> f64) every time. A local variable that updates the object variable every 64 ms and resets to zero would probably be faster.
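Roughly what I mean (an untested sketch with made-up names):

    const state = { t: 0 };           // the shared object the loop updates now
    let local = 0;                    // stays a 31-bit small int for the JIT
    for (let i = 0; i < 1_000_000; i++) {
      // ...hot per-pixel work using `local` directly...
      local = (local + 1) & 63;       // wraps in-register, no f64 round trip
      if (local === 0) state.t += 64; // sync the shared object only occasionally
    }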

The decPixel function should use a switch statement with cases for 0, 1, 255, and default, so it only needs to branch once. This is probably a decent performance win too, as around 15-20% of all time is spent there.
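Something along these lines (a sketch only, not the author's actual decPixel; the return values are invented):

    function decPixel(pixels, i) {
      switch (pixels[i]) {
        case 0:
          return false;       // already dark, nothing to decrement
        case 1:
          pixels[i] = 0;      // pixel goes dark this step
          return true;
        case 255:
          pixels[i] = 254;    // leaving full brightness, handled specially
          return true;
        default:
          pixels[i]--;        // single branch for the common path
          return true;
      }
    }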

EDIT: I should praise the author for using a ternary instead of Math.max(), as very few people know that the ternary is literally 100x faster. I wonder why the engine never made this optimization, as the pattern seems common enough.


I actually thought Math.max was faster in modern Chrome than a ternary.


You can try it for yourself (this benchmark uses an if..else rather than a ternary, but they compile to the same thing):

https://www.measurethat.net/Benchmarks/Show/6528/0/mathmin-v...


Are these Peter de Jong attractors?
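For reference, the de Jong map I have in mind, in JS:

    // Classic Peter de Jong attractor step with parameters a, b, c, d.
    function deJong(x, y, a, b, c, d) {
      return [
        Math.sin(a * y) - Math.cos(b * x),
        Math.sin(c * x) - Math.cos(d * y),
      ];
    }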

I'm trying to work out how my interpretation of the calculations (in JS)[1] compares with the author's code, but trying to measure performance in CodePen is ... difficult. My approach was to: 1. run the CodePen with the inspector open; 2. start recording performance; 3. right-click on the display panel and select 'Reload Frame'; 4. stop recording performance after the images reappear.

... But when I look at the results nothing is making sense. Clearly my approach was wrong.
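Next I'll probably just time the iteration loop directly with performance.now() instead (render() here is a hypothetical entry point for the pen):

    const t0 = performance.now();
    render(1_000_000); // run N iterations of the attractor loop
    const t1 = performance.now();
    console.log(`${(t1 - t0).toFixed(1)} ms for 1M iterations`);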

[1] - https://codepen.io/kaliedarik/pen/JjRJwNa


Why do you think the JIT would be able to optimize this? And how would it go about it? I only know a few rough details and heuristics, and I wouldn't expect or assume that this would be optimized.

It would probably have to recognize that the _usage_ of this function can be translated into a local mutation without allocating additional arrays. But from just looking at the function locally it isn't clear whether that is a safe assumption.
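A made-up illustration: both call sites below look identical from inside the function, but only the first is safe to scalarize:

    function next(x, y) {
      return [x + 1, y + 1]; // fresh array on every call
    }

    // Safe: the array never escapes this scope, so a JIT could in
    // principle replace it with two scalars (escape analysis).
    const [a, b] = next(1, 2);

    // Not safe: the array escapes into a long-lived structure,
    // so a real allocation is unavoidable.
    const history = [];
    history.push(next(a, b));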


What I meant to say is that this definitely isn't a safe assumption, and the performance of this loop will be less predictable. That said, I wouldn't be surprised if JITs as complex as V8 or JSC could detect this scenario.


The actual code doing the iteration comes from an unpublished module, @davidsmaynard/attractor_iterator. It also returns a new array instance for that part. Most importantly, it uses a strange mix of global variables, class methods, and properties, which I guess the author arrived at by trial-and-error optimization.


It has been reported by a major Czech newspaper (it is originally a Czech project): https://domaci.hn.cz/c1-67066670-fiktivni-eshop-se-zbranemi-...


And all that is just for being able to write `response = await fetch(...)` instead of `fetch(...).then(...)`.
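Side by side, the two spellings (same behaviour; /api/data is a placeholder endpoint):

    // async/await spelling (needs the syntax and the transforms)
    const response = await fetch("/api/data");
    const data = await response.json();

    // equivalent .then() spelling, plain ES2015
    fetch("/api/data")
      .then((response) => response.json())
      .then((data) => console.log(data));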


My fault, I didn't know that `.then()` was already supported in ES2015. Anyway, this was also an exercise for me to learn about Babel (I use CRA all the time, so I have no idea what is going on behind the scenes) and this was a nice intro!



Isn't it already a supported backend in LLVM? https://github.com/llvm/llvm-project/tree/master/llvm/lib/Ta...

I mean, it is still under development, but I think it is usable. Here's a quick intro I found some time ago: https://dassur.ma/things/c-to-webassembly/
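The gist, as I remember it from that intro (flags may have drifted since; add.c and its exported add() are hypothetical):

    // build step (shell):
    //   clang --target=wasm32 -nostdlib -Wl,--no-entry -Wl,--export-all \
    //     -o add.wasm add.c

    // then, from JS:
    const { instance } = await WebAssembly.instantiateStreaming(
      fetch("./add.wasm")
    );
    console.log(instance.exports.add(2, 3)); // 5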


And at the same time they recommend using UA strings to detect clients incompatible with SameSite=None cookies (see https://www.chromium.org/updates/same-site/incompatible-clie...), including certain Chrome versions.

