eugeneionesco's comments | Hacker News

You're spreading FUD.

I suggest you educate yourself on the reasons the grsecurity patches are no longer public.


You're welcome to present an argument for your case.


>The webassembly exploit part of the chain bums me out (I was always afraid of stuff like this when I was working on the design for it) but it's pretty uninteresting, really. The simple sort of bug you get when you insist on writing stuff in C++.

I really hope people don't think WebAssembly is at fault for this; this vulnerability is no different from any other memory corruption vulnerability you would find in the JS interpreter or the CSS parser or whatever.


Well, WebAssembly's primary near-term contribution will be introducing the world of C++ exploits to web apps, which are already groaning under the load of XSS, XSRF, path traversal, SSRF, etc. attacks. Adding double frees, use-after-frees, and buffer overflows on top doesn't seem ideal.

As for the rest, well, it'd be nice if there was any sort of plan to make Blink safer. I know about oilpan but what Mozilla is doing with Rust is impressive. The JVM guys are working on rewriting the JVM in Java. What's Blink's plan to make its own code safer? Sandboxing alone?


Microsoft is also taking steps to incrementally rewrite the .NET runtime in C#.

https://www.infoq.com/articles/virtual-panel-dotnet-future

And the D guys have been rewriting dmd, the reference compiler, into D.


What exploits are you specifically worried about with wasm?

The ones you call out in this post don’t have the same impact as native, even when it’s C or C++ compiled to wasm.


http://foo.bar.com/url?q=<base64 encoded stuff>

wasm program parses q

stack smash occurs

ROP chain is used to gain code execution

user cookie is stolen

attacker now controls your account

I don't know enough about wasm to know if it has some special mitigations for this but when I looked at it, wasm seemed closer to a CPU emulation than a high level language VM. Flat memory space, no GC, no pointer maps.


WASM memory is a set of memory specific to a module (and they only allow one memory instance right now). It can be imported/exported to other modules, but there is no sandbox escape (in theory). For the web backend, it's just backed by a Uint8Array IIRC. It's all userland. If anything escapes the WASM interpreter/compiler, it is the fault of the interpreter/compiler (as is the case here) and not the fault of the WASM bytecode itself, which has no escape mechanism. Think of a WASM VM just like a JS VM. Even though it may appear low level just because it can JIT better/cleaner, it operates in the same arena as JIT'd JS (at least for the web target).
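
Rough host-side sketch of that "it's all userland" point, using only the standard WebAssembly JS API. The module name ("demo.wasm") and the "env"/"memory" import names are made up for illustration; real modules declare their own import shape.

    // Hypothetical host-side setup: the linear memory a wasm module sees as its
    // flat address space is just an ArrayBuffer created and owned by the page.
    async function run(): Promise<void> {
      const memory = new WebAssembly.Memory({ initial: 1 }); // one 64 KiB page
      const view = new Uint8Array(memory.buffer);            // same bytes, seen from JS

      // "demo.wasm", "env" and "memory" are assumed names for this sketch.
      const { instance } = await WebAssembly.instantiateStreaming(
        fetch("demo.wasm"),
        { env: { memory } },
      );

      // Whatever the module writes (or corrupts) lands inside this buffer; reading
      // it back from JS is an ordinary, bounds-checked typed-array access.
      console.log(view[0], instance.exports);
    }

    run();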


You don't need to escape a sandbox when the application has access to all the user's data.

The attack surface of a gmail implemented in C++-compiled-to-wasm is almost certainly going to be larger than that of a gmail implemented in JS, because the runtime environment is vulnerable to double frees and heap corruption and other attacks, even if it won't escape the browser sandbox. My gmail tab basically has access to my entire life.


I don't understand. In the gmail example, the attack surface to who, a malicious email sender? As in something being handled by wasm in the browser has a better chance at XSS than if it was handled with JS? Why would untrusted content like that be handled by a client-side language anyway? Whether it is wasm, JS, wasm-interpreted-by-a-JS-interpreter, JS-interpreted-by-a-C++-interpreter, wasm-interpreted-by-a-C++-interpreter or whatever, the risks are similar. If you are talking about untrusted wasm or JS scripts accessing things inside the same sandbox, that's a different vector, and it's less about the size of the surface area and more about the introduction of the vector in the first place.


Simple example (though not something I think the Gmail team would actually ship): I want to load a .png file that's attached to an email. If I decide to use a build of libpng I control (for example, to work around broken color profile support in browsers), a bug in that libpng build could allow privilege escalation within the tab to get access to my gmail contacts or send emails. Bugs in loaders for image file formats are not unheard of, and people treat image files as innocuous.


Ah, that example makes it clear. I'll ignore the obvious issue of libpng written in JS having a similar problem. So the libpng WASM would have its memory instance (probably imported from the caller) and functions would reference the memory. It's not like regular RAM, where overflowing a memory write could overwrite executable instructions. Code is different from memory. There is no eval. There is a "call_indirect" which can call a function by a dynamic function index, but what would be a dangerous function for libpng to import? You can't execute memory or anything.

I can see some site DoSing though, where you use the equivalent of a PNG bomb to blow up CPU in the parser, but that's a risk with any not-meticulously-written client-side parser of untrusted input.

So while you can toy with memory and maybe even affect the function pointer before a subsequent indirect call, it's not nearly as dangerous to the outside-of-wasm world as raw CPU instructions. I can see an issue where the caller that imports libpng and exports its memory to it might have something secret in that memory... hopefully multi-memory and GC-like structs and whatnot can make passing and manipulating large blocks more like params than shared mem (with all of shared mem's faults).
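
To make the "code is different from memory" point concrete, here is a hedged sketch using the standard WebAssembly.Table JS API; the table size and index values are invented. Functions reachable through call_indirect live in a table, entirely outside linear memory.

    // Corrupting linear memory can at most change the *index* a call_indirect
    // uses; the call still goes through the table, and a signature mismatch
    // traps instead of executing attacker-controlled bytes.
    const table = new WebAssembly.Table({ initial: 4, element: "anyfunc" });

    // Pretend a decoder bug let an attacker overwrite the value the module
    // later feeds to call_indirect:
    const corruptedIndex = 3; // attacker-chosen, but still only a table slot

    const target = table.get(corruptedIndex); // an already-registered function, or null
    console.log(target === null
      ? "empty slot: the indirect call would trap"
      : "reaches some other function already in the table");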


It's a bit extreme but the reality is that a lot of production libraries tend to pull in imports that are as dangerous as eval, because the scope of the library is enormous, or it's actively supposed to interact with the DOM or JS. At that point, if someone can more trivially exploit it with a double free or buffer overflow, you've increased your security risk relative to JS (because overflowing a Uint8Array is basically never going to result in arbitrary function invocation).

The way function addresses are sequential in wasm tables (and deterministic) also means it is probably easier to get to the function you want once you get code execution.
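
A sketch of the kind of import surface being described; all the names here ("glue", "run_js", "log") are made up for illustration and not taken from any real library.

    const memory = new WebAssembly.Memory({ initial: 1 });

    const broadImports = {
      glue: {
        // Effectively eval: if a memory-corruption bug lets the module choose the
        // (ptr, len) passed here, the attacker gets arbitrary JS in the page,
        // running with the tab's full cookie/DOM/fetch authority.
        run_js: (ptr: number, len: number): void => {
          const src = new TextDecoder().decode(new Uint8Array(memory.buffer, ptr, len));
          new Function(src)();
        },
      },
    };

    const narrowImports = {
      glue: {
        // A narrowly scoped surface gives a corrupted module far less to reach for.
        log: (code: number): void => console.log("decoder status:", code),
      },
    };

With the first import surface, the predictable function indices mentioned above become very valuable to an attacker; with the second, even full control of linear memory doesn't buy much outside the sandbox.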


WASM resulted in adding a lot of new APIs to JS, like thread-shared buffers and the coming atomics. This requires quite a few new lines of native code in the implementation, significantly increasing the attack surface. Another thing is that WASM makes code faster, so exploiting timing bugs or cache leaks gets easier.


My sibling comment is correct; the only way this can happen is an interpreter bug. Bugs happen, but they can happen in JS too. I think you’re assuming things the spec doesn’t allow.


"Controls your account" is possible without ever exploiting an interpreter bug or escaping the sandbox. Your account credentials are usually available inside the current tab.


I’m not sure what you mean, specifically, here. Or at least, how wasm is somehow worse than JavaScript in this regard, which is the baseline here.

In fact, it should be better, given the static declaration of external calls that can be inspected.


The example I gave in another comment holds here: Let's say I want to load PNGs and I'm fed up with color profile bugs in browsers' image decoders (sigh...) so I decide to compile a known-good build of libpng or stb_image with wasm. Now someone finds a png decoder exploit that works against my build. If I'm not cautious about my imports, they can escalate privilege out of my wasm library and then take control over my gmail.

Ideally wasm libraries will always be narrowly scoped and good about what they import, but there will definitely be broadly scoped libraries that import a ton of dangerous stuff, and there will be some that import a function that is effectively eval because they don't want to declare a thousand imports by hand.

It's certainly possible for JS libraries to have these kinds of vulnerabilities, but it's hard for me to imagine how a JS PNG decoder would end up with the same sort of attack possible on it since it's parsing binary data into pixel buffers. At worst, you'd crash it.
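
For what it's worth, a quick sketch of why the JS failure mode tends to be tamer; this is plain typed-array semantics, nothing library-specific.

    // Out-of-range typed-array accesses never touch neighbouring objects, so a
    // buggy pure-JS decoder tends to crash or emit garbage pixels rather than
    // hand over control flow.
    const pixels = new Uint8Array(16);

    pixels[9999] = 0xff;        // out-of-range write: silently ignored
    console.log(pixels[9999]);  // undefined -- nothing was corrupted

    try {
      // Even constructing an oversized view fails loudly instead of overflowing.
      new Uint8Array(pixels.buffer, 0, 9999);
    } catch (e) {
      console.log((e as RangeError).message);
    }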


> I gave in another comment

Yeah, sorry about the duplication here, I'm extremely interested in this specific topic.

> Now someone finds a png decoder exploit that works against my build.

I think this is the part I don't get. Specifically, how would an exploit work within wasm? That is, the wasm environment is different from native; the memory is bounds-checked, for example. Basically, I 100% agree that some security bugs are logic bugs, but take the above stack smash, for example: that can't happen, in my understanding. Again, modulo interpreter bugs, like any sandboxing technique.

> it's hard for me to imagine how a JS PNG decoder would end up with the same sort of attack possible on it since it's parsing binary data into pixel buffers. At worst, you'd crash it.

It's hard for me to imagine how wasm is any different than JS here.


How does wasm stop stack smashes? I can see that if it's not a von Neumann machine i.e. code is in a different memory space to data, it'd be harder, but that doesn't seem really compatible with C/C++?

Just in general if I have an arbitrary memory write primitive inside the wasm memory space, how much control over the program can I obtain?


wasm does some stack isolation so that a function's local stack variables are not next to things like the return address in memory, and loads to/from stack slots are special instructions that reference slot identifiers. Most stack operations aren't arbitrary memory reads/writes from addresses, so it's not possible to overflow them and corrupt other values.

The caveat is that not everything native apps put on the stack can currently be stored in wasm's safe stack, so applications often put a secondary stack inside their heap. This will also happen if you're - for example - passing large structs around as arguments. You can smash the heap stack if you manage to find an exploit, and if function pointers or other important data are stored there, you can turn that into an attack.

It's absolutely the case that a large subset of stack smashing attacks don't work on wasm, because of the safety properties. Some of them will still work though. The way function pointers work in wasm raises the risk profile a bit if you manage to get control over the value of a function pointer, since function pointer values are extremely easy to predict.


Thanks.

I am curious how JIT compilation policies will affect this. Wasm is a new bytecode form that has no mature JIT compilers. I wonder how many safety properties the compilers will assume. For instance if the wasm VM only really tries to stop wasm code escaping its own sandbox, then I guess compiling all stack ops down to a unified stack, C style, is a perfectly legitimate approach as long as the sandbox properties can be maintained. I don't think wasm is claiming it will make all C/C++ hackproof code.


Yup, and thanks for this; this helps me understand what I was missing, specifically "applications often put a secondary stack inside their heap".


I recently read a paper about security exploits in WebGL, thanks to bugs on shader compilers and drivers.

"Automated Testing of Graphics Shader Compilers"

http://multicore.doc.ic.ac.uk/publications/oopsla-17.html


> I really hope people don't think WebAssembly is at fault for this

Nah, I think it's pretty clear GP meant "when you insist on writing interpreters/compilers in C++" not that C++ was compiled into wasm.


Yeah, sorry for being unclear - that is what I meant. I don't see wasm as at fault here, it's just a bummer that this new attack surface was introduced by writing the wasm implementation in C++ instead of memory-safe languages. It's not something so complex that it really needs to be C++.

Most (all?) browser wasm backends function by just generating the internal IR used by the existing JS runtime, so it's not especially necessary to write the loader/generator in C++. The generated native module(s) are often cached, also, which diminishes the importance of making the generator fast at the cost of safety.

I wrote all the original encoder and decoder prototypes in JS for this reason - you can make it fast enough, and the browser already has a high-performance environment in which you can run that decoder. When the result is already being cached I think writing this in C++ is a dangerous premature optimization.

Similarly it's common to write decoders as a bunch of switch statements and bitpacking, which creates a lot of duplication and space for bugs to hide. You can build these things generally out of a smaller set of robust primitives to limit attack surface, but that wasn't done here either, despite my best efforts.
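
As a rough illustration of the "smaller set of robust primitives" approach (the class and method names below are invented, not from any shipped decoder): route every read through one bounds-checked reader, so malformed offsets fail in a single place instead of hiding in dozens of switch arms.

    class ByteReader {
      private pos = 0;
      constructor(private readonly buf: Uint8Array) {}

      private need(n: number): void {
        if (n < 0 || this.pos + n > this.buf.length) {
          throw new RangeError("truncated or malformed input");
        }
      }

      u8(): number {
        this.need(1);
        return this.buf[this.pos++];
      }

      u32le(): number {
        this.need(4);
        const b = this.buf;
        const p = this.pos;
        this.pos += 4;
        return (b[p] | (b[p + 1] << 8) | (b[p + 2] << 16) | (b[p + 3] << 24)) >>> 0;
      }

      bytes(n: number): Uint8Array {
        this.need(n);
        const out = this.buf.subarray(this.pos, this.pos + n);
        this.pos += n;
        return out;
      }
    }

    // Usage: a section decoder built only from these primitives never does its
    // own offset arithmetic, which shrinks the space where bugs can hide.
    const r = new ByteReader(new Uint8Array([3, 0, 0, 0, 10, 20, 30]));
    const sectionLength = r.u32le();        // 3
    const payload = r.bytes(sectionLength); // Uint8Array [10, 20, 30]
    console.log(sectionLength, payload);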


Beautiful chain of exploits, well done.


If this is the main problem people are complaining about, I am impressed by Code :)

What a great tool from Microsoft!


Well, it's not the main problem. Just a new problem, which therefore gathers new attention. The main problem people have with it is the resource usage, as it's built with Electron.

There's an abundance of text editors available and there's generally only minor differences between them, so people will weigh up those minor differences all the more.


> The main problem people have with it is the resource usage

From what I've read on HN, people seem to generally like VS Code. (Replies to this comment by people who don't notwithstanding.) So is resource usage the main problem of people who use VS Code, or is it the main problem of people who don't like Electron?


I don't see what purpose this differentiation serves.

It's a problem for people who like the text editor enough in other respects to still use it, and it's a problem in that many people will not use it, because for them the resource usage isn't outweighed by its other potential advantages.

And I really would not read too much into the opinion that an open online community seems to have about a bigger company's product.

It's gotta be beyond trivial for Microsoft's PR department to steer the mood in threads about their products by deploying vote bots and a few workers that write enthusiastic comments.


VS Code is my primary editor. I don't really find myself having any issues with resource usage. Of course, I have 16GB of memory on my primary and secondary machines.


...for those privileged enough to be able to decline a job offer


Yes, that would be most programmers in the current state of the IT industry.


Your site is full of juicy content, thanks!


>I've used it on my laptop. Primarily because it has had few vulnerabilities and is very stable.

The OpenBSD propaganda works, I see...

Do you really think the tools you use, like your web browser, mail client, etc., have fewer vulnerabilities on OpenBSD than on any other BSD or Linux distribution? Please...


> Do you really think the tools you use, like your web browser, mail client, etc., have fewer vulnerabilities on OpenBSD...

A reasonable question, but presumptuously and poorly framed, I think. Mitigation efforts like privilege separation[0] (for daemons), ASLR[1], SSP[2], and now KARL[3] are designed to make things systemically better. I'm personally a NetBSD person, and don't see that ending anytime soon, but I do appreciate the work that OpenBSD does and pay attention with interest. I expect some of their work to be ported to my environment directly, and other effects to be felt tangentially. People running different or "weird" environments is a good thing.

[0] https://en.wikipedia.org/wiki/Privilege_separation

[1] https://en.wikipedia.org/wiki/Address_space_layout_randomiza...

[2] http://wiki.osdev.org/Stack_Smashing_Protector

[3] http://undeadly.org/cgi?action=article&sid=20170613041706


OT, but I've had trouble in the past when trying out NetBSD; I wanted to install it on my laptop with full disk encryption, but I clearly was missing something about how to do it properly, and I've never been able to find a good guide for it. Any chance you might know a blog post or something that details how to do this properly for a NetBSD newbie like me?


I've run it in the past, but not recently. I'll see if something appears to me and try to post it here for you.

And good luck with your NetBSD journey, with or without FDE. I've thoroughly enjoyed my years with it as my primary OS.


I'd start here - https://www.netbsd.org/docs/guide/en/chap-cgd.html and point your IRC client to #netbsd on irc.freenode.net.


Thanks! I've tried out most of the other common BSDs (FreeBSD, OpenBSD, DragonflyBSD, and TrueOS) but I've always had more trouble with NetBSD for some reason. Hopefully I'll have better luck with it this time!


All of those were developed on Linux and Linux distributions and were available there before OpenBSD...


[flagged]


Of course it does: seccomp, developed way before pledge :))


And where is seccomp used (by default)?


Chrome and Firefox both use seccomp by default on Linux.


OpenBSD has around 500 programs with pledge support in base.

edit: and this number does not even include the ports available with pledge support, among others Chromium... and the fact that they were capable of adding pledge support this quickly says a lot, imho, about the implementation differences between libseccomp and pledge.


Also, pledge is everywhere. Seccomp, not. And SELinux only in a few distros.

Pledge is intrinsic. You have it right from the beginning with C and Perl. And it's straightforward to use.


Not the same approach.


Yes, browsers are a large attack surface. But I'd take a quick peek at the recent Security improvements section on this release page, and also OpenBSD's innovations page.

https://www.openbsd.org/innovations.html

OpenBSD was the second OS to enable W^X JIT in its Firefox package, W^X has been made mandatory system-wide, and in Theo de Raadt's most recent conference talk he mentions Chromium being pledged. Both browsers are compiled as PIE by default.

http://undeadly.org/cgi?action=article&sid=20151021191401


That's not the point. Of course the software will have the same number of bugs/vulnerabilities on OpenBSD. The question is how much damage an exploit/crash will do overall. OpenBSD has quite a few protection mechanisms in place.


> Do you really think the tools you use, like your web browser, mail client, etc., have fewer vulnerabilities on OpenBSD than on any other BSD or Linux distribution? Please...

Yes. OpenBSD employs several mechanisms that improve the security of every application, e.g. W^X and the stack protector.

See: https://www.openbsd.org/security.html


All of those are available on Linux distributions, enabled by default.

Not only that, they were developed on Linux distributions and available on them way before OpenBSD.


No, they are not.


The stack protector was developed by Immunix, and W^X was developed by Openwall, both for Linux.


Actually "your web browser, mail client etc" do a lot of system calls to do networking et al, so yes, they do have less vulnerabilities than on Linux.


I don't think you know where the vast majority (95%+) of browser vulnerabilities are...


Expensive and hard to get in Europe :(


They could get access to the device dnsmasq is running on: your PC, your router, etc.


Wow... So you're saying a computer on the internet, just by knowing my IP address, could just get access to the LAN at my house?

I'm glad Arch Linux has such strict firewalls by default!


Three of the issues would allow an attacker to take over your router from inside your LAN (not externally). Only one of the issues is an RCE that would usually be exploitable from outside your LAN, and there's some debate as to how hard that one would be to exploit. It might be as simple as getting you to visit a web page, but the published proof of concept would require something like tricking you into manually running a query via the command line or some other administrative tool.


Thanks for explaining. This wasn't obvious to me.


Yes, but how many places have the infrastructure to do HIIT? Come on, let's be serious here...


You can easily do HIIT with minimal infrastructure. Burpees and jumping squats for example will get your pulse up in no time.

It will likely require a shower afterwards though which might pose a problem.


ok so to make this happen in an office setting, we'd perhaps need

1) A 60 degree F room to stave off sweating

2) Enough exercise to be healthy but that stops just short of making you sweat

3) (2) is variable based on body weight of course


My point is: skip the office setting, do 10 minutes of workout at home before you shower, and save 20 minutes.


I'd say do 20 minimum but your point is taken

