Can you come up with some examples of when this would begin to be noticeable to an end-user?


If you're crossing the FFI boundary a lot, any per-call overhead adds up quickly. For example, drawing a bunch of small objects using Skia, performing lots of OpenGL draw calls, allocating LLVM IR nodes, or calling a C memory allocator…
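As a rough illustration (assuming Go's cgo is the FFI in question; the C function add_one is made up for the example), every iteration of this loop pays the full Go-to-C transition cost, so a per-call overhead of even a few hundred nanoseconds becomes visible at scale:

    package main

    /*
    // Hypothetical trivial C function; each call still pays the full
    // Go<->C transition cost (stack switch, scheduler bookkeeping).
    static int add_one(int x) { return x + 1; }
    */
    import "C"

    import (
        "fmt"
        "time"
    )

    func main() {
        const n = 1000000
        start := time.Now()
        sum := 0
        for i := 0; i < n; i++ {
            sum += int(C.add_one(C.int(i))) // one FFI crossing per iteration
        }
        fmt.Printf("sum=%d after %d cgo calls in %v\n", sum, n, time.Since(start))
    }

Batching work on one side of the boundary (e.g. one call that processes a whole buffer) amortizes that transition cost.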


One of the nice things about M:N is that it decouples concurrency from parallelism. That is, your application can spawn as many inexpensive green threads as the task requires, without worrying about optimizing for core count or capping threads to avoid overhead. With Go 1.5, GOMAXPROCS (the number of underlying OS threads executing Go code) defaults to the machine's CPU core count.
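To make that concrete, here's a minimal Go sketch (mine, not from the thread) that spawns far more goroutines (M) than there are OS threads (N); the runtime multiplexes them without any tuning:

    package main

    import (
        "fmt"
        "runtime"
        "sync"
    )

    func main() {
        // Since Go 1.5, GOMAXPROCS defaults to the CPU core count,
        // so the N side needs no manual configuration.
        fmt.Println("OS threads for Go code (N):", runtime.GOMAXPROCS(0))

        var wg sync.WaitGroup
        const m = 100000 // many cheap green threads (M), independent of core count
        for i := 0; i < m; i++ {
            wg.Add(1)
            go func(id int) {
                defer wg.Done()
                _ = id * id // trivial per-goroutine work
            }(i)
        }
        wg.Wait()
    }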


It's noticeable to the end-user only in its negative performance implications in certain situations, making things slower than they would otherwise be on the same hardware. It's a low-level construct, so it isn't directly noticeable to the user either way. The negative performance implications show up largely under heavy load; the post you replied to gave some more specific situations.



