Hacker News | ptspts's comments

It's possible to run WebAssembly programs from the command line (without any GUI) using WASI (see e.g. https://github.com/WebAssembly/WASI). Thus if the user downloads pdfconverter.wasi, and already has e.g. wasmtime installed, they can run `wasmtime pdfconverter.wasi input.pdf output.pdf` from the command line (see https://github.com/bytecodealliance/wasmtime/blob/main/docs/... for details).

In addition to the web site, the Electron app and the Chrome extension, you may want to distribute a command-line version of your tools as WASI-style .wasm program files. If you do so, I would use them exclusively this way, from the command line.


Your commands to process PDF with Ghostscript are lossy (they lose lots of metadata and in minor ways they also change how the PDF renders), and they produce very large PDF files.


Can you expand on why the produced PDF files are supposed to be larger than the originals? I've not observed that yet.


It does exclude unused code. But glibc has too many inter-object-file dependencies, so too much code gets used.


Neither the article nor the README explains how it works.

How does it work? Which WASM runtime does it use? Does it use a Python interpreter compiled to WASM?


There's a link to the author's work here:

https://github.com/mavdol/capsule

(From the article)

Appears to be CPython running inside of wasmtime


Yep, and to be specific, it leverages the WASM Component Model and uses componentize-py to bundle the user's script.


See the linked project at the end: https://github.com/mavdol/capsule


As a text editor user, I prefer selecting the font and the syntax highlighting independently. This font is not useful for me.


>As a text editor user, I prefer selecting the font and the syntax highlighting independently. This font is not useful for me.

Then it's not for you. This comment does not add anything to the conversation and comments like these are better left unwritten.


I suppose this becomes useful in applications where you can change the font but not add syntax highlighting. Besides being a neat trick, of course.


What is the advantage of this circular implementation?

Is it faster than the simple one? Does it use less memory? Is it easier to write? Is it easier to understand?

I think all of the above is false, but I have a limited understanding of Haskell. Please correct me if I'm wrong.
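For comparison, here is a minimal sketch of the "simple" (two-pass) construction being discussed, written in Python for illustration rather than Haskell; the function and variable names are my own, not from the post:

```python
# Sketch of the conventional two-pass Huffman construction (illustrative,
# not the post's circular implementation).
import heapq
from collections import Counter

def huffman_codes(text):
    # Pass 1: count symbol frequencies.
    freq = Counter(text)
    # Build the tree by repeatedly merging the two lowest-weight nodes.
    # The middle tuple element is a unique tie-breaker so heapq never
    # compares the node payloads themselves.
    heap = [(w, i, sym) for i, (sym, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        w1, _, left = heapq.heappop(heap)
        w2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (w1 + w2, next_id, (left, right)))
        next_id += 1
    # Pass 2: walk the finished tree to assign prefix-free codes.
    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):  # internal node: (left, right)
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:                        # leaf: a symbol
            codes[node] = prefix or "0"  # single-symbol input edge case
    walk(heap[0][2], "")
    return codes

codes = huffman_codes("abracadabra")
# The most frequent symbol ("a") gets the shortest code.
```

The two passes are explicit here: one over the input to count frequencies, then a walk over the finished tree to assign codes; the circular version described in the post instead ties the tree construction and the traversal together via laziness.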


> The algorithm isn’t single-pass in the sense of Adaptive Huffman Coding: it still uses the normal Huffman algorithm, but the input is transformed in the same traversal that builds the tree to transform it.

Limited understanding here too, but it sounds like it's not really single-pass anyway, so it can't process a stream in real time before having all the data either?


There's no (practical) advantage to the circular implementation; it's just a curiosity.

It is useful for understanding laziness and some interesting theoretical tools for traversing data structures, though. For a more in-depth look at the idea of circular programs for traversal, Bird's paper (linked in the post, https://link.springer.com/article/10.1007/BF00264249) is a good start.


It's a weird claim about a single pass, too. It's more of a "let's replace some iteration with building a tree of functions to call", and then it pretends that walking/executing that tree is not another pass.


Why is ELF so much slower and/or more memory hungry than a.out on Linux?


Relocation information, primarily.

ELF supports loading a shared library to some arbitrary memory address and fixing up references to symbols in that library accordingly, including dynamically after load time with dlopen(3).

a.out did not support this. The executable format didn't have relocation entries, so every address in the binary was fixed at link time. Shared libraries were supported by maintaining a table of statically assigned, non-overlapping address spaces, and at link time resolving external references to those fixed addresses.

Loading is faster and simpler when all you do is copy sections into memory then jump to the start address.
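To make the contrast concrete, here is a toy model in Python (not real ELF or a.out parsing; all the names and addresses here are invented for illustration):

```python
# Toy model contrasting a.out-style fixed addresses with ELF-style
# relocation. Invented names/addresses; no real binary formats involved.

# a.out-style: every shared library gets a fixed, globally reserved
# address range, decided once, system-wide.
AOUT_LIBRARY_TABLE = {"libc": 0x60000000, "libm": 0x60200000}

def aout_link(references):
    """Resolve references at link time; the results are baked into the
    binary forever, so the loader has nothing to patch."""
    return {sym: AOUT_LIBRARY_TABLE[lib] + off
            for sym, (lib, off) in references.items()}

# ELF-style: the binary carries relocation entries; the loader picks a
# base address per library at load time (which is what makes dlopen(3)
# and per-run address choices possible) and patches the entries.
def elf_load(relocations, chosen_bases):
    return {sym: chosen_bases[lib] + off
            for sym, (lib, off) in relocations.items()}

refs = {"printf": ("libc", 0x1234)}
fixed = aout_link(refs)                             # decided at link time
patched = elf_load(refs, {"libc": 0x7F0000000000})  # decided at load time
```

The a.out scheme is fast because nothing needs patching at load time, but it requires a global registry of non-overlapping library addresses; the ELF scheme pays for load-time fix-ups with flexibility.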


Shameless plug: Some of my hobby projects written in C (e.g. https://github.com/pts/bakefat) can be built reproducibly for Linux >=1.0 (1994), for FreeBSD (using the same ELF executable program file as for Linux), and for Win32, working on all versions of Windows from Windows NT 3.1 (1993) through Windows 11. The C compiler used for the release build (running on Linux i386 and amd64 hosts) is self-contained and included in the project, along with the libc.

Achieving such backward compatibility is definitely possible for command-line tools. Once set up, it's automatic, but it needs extra testing after major changes.


If a text-mode process monitor is larger than about 200 KiB, then it sounds bloated to me. If it's loaded with tons of features, then my upper limit is 1 MiB.


This video doesn't explain what the project does or how it does it. It also deliberately misleads the viewer; for example, it purposely (and incorrectly) states that C++ is an interpreted language.

Also, the music is way too loud and sudden.


The video is a complement to the GitHub repository; the presenter even shows code and brings up the repo in the video. I guess you didn't watch that part, and unfortunately you didn't get the joke either.


Well, the video is almost entirely a joke, and almost every sentence in it is ironically false; that's the point.

