
> for the real goal of interoperable modules working together

CORBA, Beans etc predate the modern web, but it's telling that no one has really bothered to try to recreate them...what is the demand? server-side wasm just doesn't seem to be anything that is strongly needed and looks a lot like completists just scratching an itch

two languages that only communicate at the wasm level...isn't this just a variation on "DLL Hell"?

as to a Go program accessing a Rust module...on a practical level you would almost always be better off taking the thing in Rust that you want to use in Go and just doing a best-effort source translation, or finding some library out there that provides substitute functionality

I'd rather have source translation tech that just gets functionality into the language I am using



Maybe I'm missing the mark, but isn't server-side wasm basically solving the need for source translation?

The semantics of golang and rust are so different that any translation runs the risk of producing wildly unidiomatic code.

Of course, you lose out on all the runtime benefits that various platforms might provide (I don't see anyone who is invested in the JVM giving it up for wasm), but why not have rust and golang binaries compiled to a common format and let them communicate directly?

Personally, I don't do much polyglot programming these days so I don't really have a dog in the fight, so to speak, but my experience with source-to-source translation is that it tends to be pretty awful short of simple translations that keep the same runtime and underlying data primitives.


> isn't server-side wasm basically solving the need for source translation

The hard part about bridging languages isn't solved by WASM, it's about working with complex types across the call boundary. If you want to pass a Python dict to a C++ function that accepts std::map then something will have to do that translation, handle GC concerns, etc. Likewise for function types, object instances, files, strings etc. And then that interop overhead will hurt performance so you may need to optimize that.
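To make the point concrete, here is a minimal Python sketch of what any binding layer has to do at every boundary crossing (the flat pair-list "wire" format is purely hypothetical, chosen to stand in for lowering a rich type into a GC-independent, linear-memory-friendly shape):

```python
# Illustrative only: a foreign module has no notion of a Python dict,
# so the binding layer must copy it into a flat representation on the
# way in, and rebuild a native value on the way out. Every call pays
# for both copies.

def lower_dict(d: dict[str, int]) -> list[tuple[bytes, int]]:
    """Copy a Python dict into a flat list of UTF-8 key / value pairs."""
    return [(k.encode("utf-8"), v) for k, v in d.items()]

def lift_dict(pairs: list[tuple[bytes, int]]) -> dict[str, int]:
    """Rebuild a Python dict from the flat representation."""
    return {k.decode("utf-8"): v for k, v in pairs}

original = {"retries": 3, "timeout": 30}
roundtripped = lift_dict(lower_dict(original))
assert roundtripped == original
```

The round-trip succeeds, but note that nothing here is free: both directions allocate and copy, which is exactly the interop overhead that then needs optimizing.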

That's why if you look at Truffle, which is SOTA for this problem, it works in a very different way to WASM. There's no attempt to do a unified bytecode or a COM-like component system. It works with whatever the language's natural representation is and does interop by optimizing and JIT compiling merged partially evaluated ASTs or unrolled bytecode interpreters.


If you already have to build a wasm target, I don't see why building one for each arch is that much worse, especially since there are probably performance benefits from building platform-specific binaries.


> but isn't server-side wasm basically solving the need for source translation?

hypothetically...but it would also move part of your integration testing into the realm of wasm

I can't think of why this is better than just translating functionality into the language you are using


> I can't think of why this is better than just translating functionality into the language you are using

Because this can be a huge amount of effort.


COM is used pervasively in Windows. The wasm component model is like COM without DCOM (which was the part that caused all the trouble).


There's a lot of very lovely, very interesting, very enjoyable Function-as-a-Service things & platforms, and server-side wasm is one very promising generic way to potentially simmer down these networks of functions & make them generic & very fast.

Today FaaS (ex: aws lambda) is most frequently run by having a process for each function, but with wasm's nice sandbox/object-capabilities style, everyone can share a runtime & have fewer barrier crossings & much less overhead. There's a ton of near-magic properties here that could help radically change how we write & maintain systems, shifting us from big blocky monoliths to something more like an Erlang universe, where there's lots of small interoperating pieces sharing a common fabric that stitches them together.
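A rough way to picture the shared-runtime, object-capability style being described (a toy sketch in plain Python rather than wasm; the `Runtime` class and all names are hypothetical):

```python
# Toy sketch of object-capability dispatch: many small functions share
# one runtime, and each function can only use the capabilities it was
# explicitly handed at registration -- there is no ambient authority.

class Runtime:
    def __init__(self):
        self.functions = {}

    def register(self, name, fn, capabilities):
        # A function is deployed together with the exact set of
        # capabilities it may touch; nothing else is reachable.
        self.functions[name] = (fn, capabilities)

    def invoke(self, name, request):
        fn, caps = self.functions[name]
        return fn(request, caps)

def greet(request, caps):
    caps["log"](f"greeting {request}")   # only granted capabilities
    return f"hello, {request}"

logged = []
rt = Runtime()
rt.register("greet", greet, {"log": logged.append})
result = rt.invoke("greet", "world")
assert result == "hello, world"
assert logged == ["greeting world"]
```

Here the "barrier crossing" is just a function call inside one process, which is the overhead win being claimed over process-per-function FaaS; a real wasm runtime would add the memory sandbox on top.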

I think the CloudFlare Workers world probably comes the closest to showing off these promises, and it's a pretty slick, elegant, low-fluff take, one that also comes with very big advantages from being architected from the start to have many many deployment points, with the ability for individual workers to come & go on demand at very high speed at any given deployment site.

We can also easily spot incipient demand for wasm by just looking at all the places folks embed scripting languages. Every time you see a Lua scripting engine (neovim, nginx, a million other places), you could potentially have any language you please. And given the lightweight sandbox & how quick it is to spawn instances, we can make our software pipelines much more user-configurable & dynamic.

A good parallel is the kernel, which has become much more capable of programmed behavior via the addition of ebpf to instrument not just networking but all kinds of subsystems. This has led to huge wins, huge gains in all kinds of very fast, very secure networking interlinks with record-low overhead. It's being used to help remap weird input-device behaviors, and dozens of other interesting little things you only get by having some flexibility built in, rather than hard-coded subsystems everywhere. Wasm is lined up to be the ebpf of userland systems: it is the thing that allows apps to bake in flexibility & programmability in a fast, safe, interoperable way.

The discussion here is about how a library of wasm modules can arise & reinforce each other. The cynicism against interoperability, suggesting source translation, is blind to the use cases above. It doesn't see the Erlang view of the world; it's rooted in the boring legacy view that computing is and always will be giant monolithic apps doing three million different things within the confines of a colossus-sized process. The alternative, where we have a sea of smaller units that message each other, doesn't have a clear & obvious triumph yet; it's not inherently obviously better than the megaprocess view of the universe, but it should be possible & we should find out what we can do. And even if the femto-service / nano-function model ends up being bad, we'll still have apps with a well-established means & well-supported common toolchains for adding programmability in. That requires a mature interoperable modular system to get anywhere, and thankfully folks aren't scared off from trying, not dissuaded because someone mentions the CORBA boogeyman under the bed & tries to gloom us away.

This is incredibly promising technology & I recommend everyone come evaluate it in a positive, hopeful light. I hope I've helped share good outlooks that can help get folks excited (as they should be!)


> Every time you see a Lua scripting engine (neovim, nginx, a million other places), you could potentially have any language you please

but people like code reuse, so it's almost always better to communicate, collaborate and share at the source code level than at the level of a compiler target

as far as I can tell, the only real target audience here are pathological language snobs ("I insist on writing neovim extensions in Rust!") who will validate their inflexibility by pointing at the lowest common denominator of wasm


What do you propose nginx does? Should they embed rust, go, c, visual basic, turbo pascal, nim, and 37 other scripting runtimes? Should they pick 3 to support?

How many of these have embedded scripting systems? How different will they be? How many offer lightweight nano-processes? How many are secure sandboxes? How many support arm, RISC-V, or the next architecture?

Having a common platform here makes so much sense to me. Enabling a variety of things, safely, quickly, & with well-defined patterns, seems intuitively advantageous, and frees us from having to tangle with a variety of minor subdivisions that lock software onto narrower paths.


> What do you propose nginx does? Should they embed rust, go, c, visual basic, turbo pascal, nim, and 37 other scripting runtime? Should they pick 3 to support?

no, pick one that is a decent compromise and move on

no one is going to dismiss the entirety of a platform like nginx because they can't script it in their favorite language


Your last argument was,

> but people like code reuse, so its almost always better to communicate, collaborate and share at the source code level than at the level of a compiler target

And now you argue we should all only ever have Lua to collaborate on.

'Cause that's the only big player in embedded scripting atm. JS alas didn't take off, but it's not that bad to embed. Few other languages have much at all. There are some projects to make ebpf embeddable in apps.

You seem to really have it in for what seems like a perfectly good attempt that seemingly makes a lot of sense & has hugely obvious benefits. I don't get the opposition to trying.



